The Process with an AI Coding Assistant
The GoalPath process has four phases: Discovery, Planning, Execution, and Delivery. Normally, a human navigates each handoff. With the GoalPath skills plugin for Claude Code installed, your AI coding assistant can drive those handoffs for you.
This is not "AI writes your code." It is "AI operates the process." Your team still decides scope. You still approve the plan. You still review the PR. What changes is the mechanical work between those decisions: researching ideas, breaking them into items, working through a backlog, and posting progress to the board.
The result, for a well-scoped milestone, is that you can go from rough idea to a PR with 80% of the feature built, overnight.
What Changes
Without AI skills, the handoffs between process phases require human initiative:
- Someone has to write the PRD
- Someone has to run the planning meeting and create the items
- Someone has to pick up items, set statuses, and post updates
- Someone has to run code review before the PR
With the skills plugin, your AI coding assistant takes responsibility for the mechanical parts of each of these. You provide decisions at key review points. The assistant handles the execution between them.
GoalPath is a good substrate for this because its data is structured: items, milestones, statuses, highlights, and comments are all readable and writable via the MCP server. The assistant can set an item to Started, check off subtasks as work progresses, flag a Question highlight when it needs a decision, and post a completion comment, all without leaving the terminal. Stakeholders watching in GoalPath see real progress, not a batch update at the end.
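The read/write loop described above can be sketched in a few lines. This is a minimal illustration, not GoalPath's actual MCP tool surface: the client class and method names (`set_status`, `check_subtask`, `add_highlight`, `add_comment`) are hypothetical stand-ins for whatever tools the MCP server exposes.

```python
from dataclasses import dataclass, field

@dataclass
class FakeItemClient:
    """Stand-in for an MCP-backed GoalPath item. Illustrative only."""
    status: str = "NotStarted"
    subtasks: dict = field(default_factory=dict)
    highlights: list = field(default_factory=list)
    comments: list = field(default_factory=list)

    def set_status(self, status): self.status = status
    def check_subtask(self, name): self.subtasks[name] = True
    def add_highlight(self, kind, text): self.highlights.append((kind, text))
    def add_comment(self, text): self.comments.append(text)

def work_item(client, subtasks):
    """Mirror the flow in the text: start, check subtasks as each
    completes (not batched at the end), post a comment, finish."""
    client.set_status("Started")
    for name in subtasks:
        client.check_subtask(name)  # stakeholders see this incrementally
    client.add_comment("Implemented: " + ", ".join(subtasks))
    client.set_status("Finished")

client = FakeItemClient()
work_item(client, ["schema", "endpoint", "tests"])
```

The point is not the specific calls but that every state change lands in the same structured objects the team sees in the UI.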
The Five Skills
The skills plugin installs five slash commands that map onto GoalPath's process phases.
/goalpath-discover: Idea to PRD
When to run: You have a rough idea, an empty milestone, or a half-formed thought that needs shaping before anyone starts planning.
What it takes as input: A phrase, a GoalPath milestone URL, or plain text describing what you want to build.
What it does: The assistant acts as a product sparring partner, not a yes-machine. Before drafting anything, it probes the problem with focused questions: Who specifically benefits from this? What is the single thing that will be true when it ships? What are you explicitly not building? It also does two parallel research tasks that are mandatory, not optional: web research to understand what competitors do and what the known pitfalls are, and a codebase exploration to understand what already exists and what is architecturally feasible.
After that research, it drafts a PRD in structured markdown covering purpose, primary outcome, success metrics, design principles, core scope, non-goals, and open questions. It then iterates with you, typically one to three rounds, until the PRD is sharp enough that someone could reasonably disagree with it. A PRD that nobody can argue with is too vague to build from.
What you review: The PRD content. You can push back on scope, cut sections, or redirect the approach. The assistant revises until you are satisfied.
What it produces: A GoalPath milestone with the PRD saved to its description and the status set to Planning.
Handoff: After saving, the assistant suggests running /goalpath-plan on the milestone URL.
/goalpath-plan: PRD to Work Items
When to run: A milestone has a PRD or a clear enough description to break into concrete items.
What it takes as input: A GoalPath milestone URL or UUID.
What it does: The assistant fetches the milestone, reads the PRD, and launches up to three parallel codebase explorations to understand what already exists in the relevant areas: data models, API endpoints, UI components, and routes. Based on that, it proposes a set of implementation items with titles, types (Feature, Task, Bug), estimates for Feature items on the Fibonacci scale, subtasks, and dependencies.
Items are ordered for implementation: backend before frontend, dependencies before what depends on them, riskiest work first. Estimates are calibrated for AI-assisted development, so a 3-point Feature means one to two hours of focused work, not a traditional full-day estimate. The planner actively resolves ambiguity during this phase by asking you directly, rather than deferring unclear questions as highlights on items. The goal is that every item created is ready to start immediately.
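The ordering heuristic can be sketched as a dependency-respecting sort with a risk tie-break. This is a simplified illustration under assumed inputs (a name-to-estimate map and a dependency map), not the planner's actual algorithm:

```python
def implementation_order(items, deps):
    """items: {name: estimate}; deps: {name: set of prerequisite names}.
    Dependencies come before dependents; among unblocked items, the
    largest estimate (a proxy for risk) goes first."""
    pending = {n: set(deps.get(n, ())) for n in items}
    ordered = []
    while pending:
        ready = [n for n, prereqs in pending.items() if not prereqs]
        if not ready:
            raise ValueError("dependency cycle")
        ready.sort(key=lambda n: items[n], reverse=True)  # riskiest first
        for n in ready:
            ordered.append(n)
            del pending[n]
        for prereqs in pending.values():
            prereqs.difference_update(ready)
    return ordered

order = implementation_order(
    {"schema": 2, "api": 3, "ui-list": 5, "ui-detail": 3},
    {"api": {"schema"}, "ui-list": {"api"}, "ui-detail": {"api"}},
)
print(order)  # ['schema', 'api', 'ui-list', 'ui-detail']
```

Backend-before-frontend falls out naturally when frontend items declare their backend prerequisites as dependencies.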
What you review: The proposed item list. You can request splits, combinations, reordering, deferral to Icebox, or clarification on any item. The assistant iterates until you approve.
What it produces: Items created in GoalPath with subtasks attached, and the milestone status updated to Planned.
Handoff: After creating items, the assistant suggests running /goalpath-implement on the milestone URL for a full sweep, or /goalpath-work on individual items.
/goalpath-implement: Milestone to PR
When to run: A milestone has planned items and you want to implement all of them on a single branch with one PR. This is the long-running skill, designed to run overnight or unattended.
What it takes as input: A GoalPath milestone URL or UUID.
What it does: The assistant creates a feature branch, reads every item and its subtasks, and works through them in implementation order. For each item, it sets status to Started in GoalPath, delegates the actual coding to specialist subagents (data model, backend, frontend, test writing), checks off subtasks incrementally as each completes, commits with a descriptive message, posts a completion comment on the item, and sets status to Finished.
If it encounters a decision that needs human input, it sets a Question highlight on the item, documents its recommended default assumption, notes what it will do in the meantime, and continues. It does not block waiting for a response. You can answer asynchronously in GoalPath and the next run will pick it up.
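The non-blocking question pattern amounts to: record the question and a documented default, then return the default so work continues. A minimal sketch, with an illustrative item shape rather than GoalPath's real data model:

```python
def ask_and_continue(item, question, default, rationale):
    """Flag a Question highlight with a documented default assumption,
    then proceed with that default instead of blocking."""
    item.setdefault("highlights", []).append({
        "kind": "Question",
        "text": f"{question} Defaulting to: {default}. {rationale}",
    })
    return default  # work continues under the documented assumption

item = {}
choice = ask_and_continue(
    item,
    "Report milestones with no completed items?",
    "show as 'no data'",
    "Skipping them would hide the milestone entirely.",
)
print(choice)  # show as 'no data'
```

A human answering the highlight later either confirms the default (nothing to redo) or overrides it (the next run picks up the answer as direction).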
After all items are complete, the assistant runs the build and test suite, fixes any failures, and then runs two rounds of code review before creating the PR. The first round checks spec compliance: does the implementation match the planned items? The second round checks code quality: naming, structure, duplication, and whether the fixes from round one introduced new issues.
What you review: The PR. Items are updated throughout, so you can also watch progress in GoalPath while the skill runs.
What it produces: One branch, one PR, one database migration if applicable. GoalPath items have completion comments, checked subtasks, and status set to Finished.
Re-run behavior: If you reject items after reviewing the PR, run the skill again on the same milestone. The assistant picks up from the current state: it leaves Accepted items alone and works through Rejected or NotStarted items. For a single-item rework, run /goalpath-work with the item URL directly instead. This is the human-in-the-loop mechanism: approve what is good, reject with a sentence of feedback, re-run.
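The re-run selection rule is simple to state as a filter: Accepted items are untouched, everything Rejected or NotStarted goes back into the queue. This sketch assumes an illustrative item shape, and the Rejected-first ordering is one plausible choice, not a documented guarantee:

```python
def items_for_rerun(items):
    """Accepted items are left alone; Rejected and NotStarted items
    are picked up, rework before new work."""
    rejected = [i for i in items if i["status"] == "Rejected"]
    not_started = [i for i in items if i["status"] == "NotStarted"]
    return rejected + not_started

backlog = [
    {"id": 1, "status": "Accepted"},
    {"id": 2, "status": "Rejected"},
    {"id": 3, "status": "NotStarted"},
]
print([i["id"] for i in items_for_rerun(backlog)])  # [2, 3]
```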
/goalpath-work: Single Item to Code
When to run: You want to implement one specific item, either as part of a milestone sweep or as a standalone piece of work.
What it takes as input: A GoalPath item URL or UUID.
What it does: The assistant fetches the item, its subtasks, and any existing comments. It sets status to Started, then launches an Explore agent to understand the relevant parts of the codebase before writing a line of code. Based on that exploration, it plans the approach and presents it for approval on non-trivial items before proceeding.
Implementation is delegated to the appropriate specialist agent for the domain. Subtasks are checked off as each completes, not batched at the end. After implementation, the assistant runs the build and test suite, fixes any failures, and runs two rounds of code review before marking the item Finished and posting a summary comment.
If the item was previously Rejected, the assistant reads the rejection comments to understand what was wrong, sets status back to Started, and addresses the issues. Rejected items are not failures; they are feedback with context.

What you review: The code changes. The item summary comment in GoalPath tells you what was implemented and what files were affected.
What it produces: Code committed on the current branch, item status set to Finished, subtasks all checked off, and a summary comment on the item.
/goalpath-status: Quick Status Check
When to run: Any time you want a read on where things stand.
What it does: Lists your assigned items grouped by urgency. Items with Blocked, Question, or Discussion highlights appear first. Then items in progress, with subtask completion counts. Then items ready to start, in priority order. Then recently finished items.
What it produces: A summary line showing counts by status and a suggested next action. If something is blocked, it surfaces that first. If nothing is in progress, it points to the highest-priority NotStarted item.
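The grouping order described above can be expressed as a small sort-and-partition. A sketch under assumed semantics: item shape, highlight representation, and "lower number = higher priority" are all illustrative choices, not GoalPath's actual schema:

```python
URGENT = {"Blocked", "Question", "Discussion"}

def status_groups(items):
    """Partition items in the display order /goalpath-status uses:
    flagged first, then in progress, then ready (by priority rank),
    then finished."""
    flagged = [i for i in items if URGENT & set(i.get("highlights", []))]
    started = [i for i in items if i["status"] == "Started" and i not in flagged]
    ready = sorted(
        (i for i in items if i["status"] == "NotStarted"),
        key=lambda i: i.get("priority", 0),  # assume lower rank = more urgent
    )
    finished = [i for i in items if i["status"] == "Finished"]
    return flagged, started, ready, finished

items = [
    {"id": 1, "status": "Started", "highlights": ["Question"]},
    {"id": 2, "status": "Started"},
    {"id": 3, "status": "NotStarted", "priority": 2},
    {"id": 4, "status": "NotStarted", "priority": 1},
    {"id": 5, "status": "Finished"},
]
flagged, started, ready, finished = status_groups(items)
print([i["id"] for g in (flagged, started, ready, finished) for i in g])  # [1, 2, 4, 3, 5]
```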
A Worked Example: Feature from Idea to PR
This is what "80% of a feature overnight" looks like in practice.
Morning: Discovery (20 minutes)
You have a rough idea: milestone progress reports that stakeholders can read without attending a meeting. You run:
/goalpath-skills:goalpath-discover "stakeholder progress reports for milestones"

The assistant asks a few focused questions. Who are the stakeholders, and what do they need to know? How often? What format would they actually read? You spend about 20 minutes in conversation. The assistant also checks your codebase for what report infrastructure already exists, and searches the web for how other tools handle this.
The session ends with a PRD saved to a new GoalPath milestone. It covers the primary outcome (stakeholders see milestone health without asking the team), the report structure, what is out of scope (no custom scheduling, no email templates in v1), and one open question about language preferences.
Midday: Planning (15 minutes)
You run:
/goalpath-skills:goalpath-plan https://goalpath.app/roadmaps/.../milestones/abc-123

The assistant proposes nine items: a data model task for report storage, a background job task for weekly generation, four feature items for the report UI, two feature items for the delivery mechanism, and one task for the email notification scheduling. Total: 18 points. You ask it to defer the language preference feature to a later milestone, which removes one item. You approve. Eight items are created in GoalPath.
Evening: Implement (kicks off, runs overnight)
You run:
/goalpath-skills:goalpath-implement https://goalpath.app/roadmaps/.../milestones/abc-123

The assistant creates a feature branch, works through the items in dependency order, and posts comments to GoalPath as each item completes. Partway through, it hits a question about how to handle milestones with no completed items yet: report them as "no data" or skip them entirely? It sets a Question highlight on the relevant item, documents that it will default to "no data" for now, and continues.
You see the highlight in GoalPath the next morning and answer in a comment. The assistant has already proceeded with a reasonable default.
Next morning: Review
The PR is open. Seven of eight items are done cleanly. One item has a Question highlight from the evening. You answer the Question in a comment, and the assistant's default turns out to be fine, so you accept that item. You review the rest of the PR, accept what looks good, and leave a rejection comment on the one item where the report format does not match what you described in the PRD.
Later: Rework
You run /goalpath-work with the rejected item's URL. The assistant reads your rejection comment as direction, moves the item back to Started, reworks it, and marks it Finished. A new commit is pushed. The PR is updated. If more than one item had been rejected, you could run /goalpath-implement on the milestone instead: the assistant would leave the Accepted items alone and only work through what still needs changes.
The Rejection Loop
The human-in-the-loop mechanism is built around item status and comments. Here is how it works:
- The skill implements items and marks them Finished
- You review the PR (or individual items in GoalPath)
- For items that need rework, you reject them in GoalPath and add a comment explaining what was wrong
- You re-run. Two options:
  - Run /goalpath-implement on the milestone. The assistant picks up from the current state: it leaves Accepted items alone and works through Rejected and NotStarted items.
  - Run /goalpath-work on a single Rejected item. This is useful when only one or two items need changes and you do not want the overhead of a full milestone pass.
- The rejection comments become direction for the rework. The assistant moves the item back to Started, reworks it, and marks it Finished for another review.
This loop can repeat as many times as needed. Most features need one or two rework passes on a small subset of items. The key is that rejection comments become the brief for the next run: short, direct feedback is more useful than vague dissatisfaction.
When Not to Use This
The skills work best when GoalPath has the context the agent needs. There are situations where they are not the right tool.
Greenfield architecture decisions: The discover skill will research and frame the problem well, but it cannot decide fundamental architectural choices for you. Those require human judgment informed by context the agent cannot fully see: team skills, organizational constraints, long-term technical strategy. Use discover to frame the decision clearly, then make it yourself before planning.
Work spanning systems the agent cannot access: If implementation requires making changes in an external system, coordinating with a third-party API the agent cannot call, or merging changes across multiple repositories, the implement skill cannot complete the loop on its own. It will flag blockers, but the resolution is manual.
Milestones without structure: The plan skill needs a PRD or clear description to work from. The implement skill needs planned items in priority order. If a milestone is a blank name with no description and no items, there is nothing for the skills to anchor to. Run discover first, then plan, before attempting implement.
Teams not yet set up in GoalPath: The skills read and write GoalPath data. If your team does not have a roadmap, milestones, or an agreed priority order, the skills have no substrate to work with. Get the process foundation in place first.
What GoalPath Provides to Make This Work
The skills rely on GoalPath being a structured, writable representation of the process. Three things matter:
Structured data the agent can read and write: Items, milestones, statuses, highlights, subtasks, and comments are all first-class objects with a clean API. The MCP server exposes 38 tools covering the full lifecycle. The agent is not screen-scraping or guessing at state: it is reading and writing the same data your team sees in the UI.
Rejection comments as direction: When you reject an item and write a comment, that comment becomes the brief for the next implement run. GoalPath treats rejection comments as first-class inputs, not just an audit trail. This is what makes the feedback loop work.
Highlights for async decisions: The Question and Blocked highlights are designed for situations where the agent needs human input but should not block. The agent flags the question, documents its default assumption, continues working, and the human answers in GoalPath when they have a moment. The next interaction, whether a status check or another implement run, picks up the response. This is how the process stays asynchronous without losing human oversight.
For more on the process phases these skills map onto, see the Process Framework overview. For the planning meeting that /goalpath-plan automates, see Milestone Planning. For the principles behind what GoalPath automates versus what stays with humans, see Automation and Principles. For installation and command reference, see AI Skills.