Git Integration: From Commit to Delivery Insight

GoalPath connects to your git provider to close the loop between code and project management. Tag a commit with #GP-47, and GoalPath updates the item, measures your deployment performance, and feeds the data into your delivery forecasts and progress reports.

This guide covers everything: how to set it up, how linking works, what DORA metrics you get, and how the data flows into the rest of GoalPath.

The Short Version

  • Tag commits and PRs with #GP-47 (the item's short number). GoalPath links the code activity to the item.
  • PRs drive item status: opened = Started, merged = Finished. No manual board updates.
  • DORA metrics appear automatically: deployment frequency, lead time, change failure rate, MTTR.
  • AI fills gaps: forget the tag, and GoalPath suggests matches from commit message similarity.
  • Works with GitHub, GitLab, and Bitbucket. The core engine is provider-neutral.

Setup

Step 1: Connect your git provider

Navigate to Settings > Integrations in your GoalPath project. You will see cards for GitHub, GitLab, and Bitbucket. Click "Connect" on your provider.

GoalPath requests read-only access to your repositories and issues. It never writes to your git provider.

Provider  | OAuth scopes requested
GitHub    | repo, read:org
GitLab    | read_api, read_repository, read_user
Bitbucket | repository, pullrequest, issue

Step 2: Register a repository

After connecting, click "Register Repository" and enter the full repository name (e.g., myorg/myrepo). GoalPath generates a unique webhook secret for HMAC signature verification.

You can register multiple repositories per project. Each repository has independent settings for auto-status and AI matching.

Step 3: Add the webhook

In your git provider's repository settings, create a webhook:

Setting      | Value
URL          | https://goalpath.app/api/webhooks/github (or /gitlab or /bitbucket)
Secret       | The webhook secret shown in GoalPath after registration
Content type | application/json
Events       | Pushes, Pull Requests (or Merge Requests), Releases, Deployments

GoalPath validates every incoming webhook using HMAC-SHA256 (GitHub, Bitbucket) or secret token comparison (GitLab). Invalid payloads are rejected with 401.
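The HMAC check can be sketched as follows, assuming a GitHub-style X-Hub-Signature-256 header carrying "sha256=<hex>" computed over the raw request body. The function name and return convention are illustrative, not GoalPath's actual implementation:

```python
import hashlib
import hmac

def verify_github_signature(secret: str, body: bytes, signature_header: str) -> bool:
    """Check a GitHub-style 'sha256=<hex>' signature against the raw body."""
    expected = "sha256=" + hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
    # compare_digest runs in constant time, avoiding timing side channels
    return hmac.compare_digest(expected, signature_header)
```

The crucial details are hashing the raw bytes (not a re-serialized JSON payload) and using a constant-time comparison; a plain `==` leaks timing information.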

Step 4: Start tagging

Every item in GoalPath has a sequential short number displayed as #GP-N. You will see it on kanban cards, in item details, and in search results.

Include the tag in your workflow:

Commit messages:

git commit -m "Fix auth redirect loop #GP-47"

Branch names:

git checkout -b feature/GP-47-auth-redirect

PR titles:

Fix auth redirect loop #GP-47

The tag is case-insensitive. #gp-47 and #GP-47 both work. Multiple references are supported: "Fix #GP-47 and #GP-48" links to both items.


How Linking Works

When GoalPath receives a webhook event, it scans commit messages, PR titles/bodies, and branch names for #GP-N patterns. Each match creates a GitCommitLink record that associates the commit with the item.
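The tag scan can be sketched as a case-insensitive regex over the message text. This is a simplified illustration limited to #GP-N tags; GoalPath's actual matcher also handles branch-name patterns like feature/GP-47-auth, which omit the leading #:

```python
import re

# Case-insensitive #GP-N tag, e.g. "#GP-47" or "#gp-47"
GP_TAG = re.compile(r"#gp-(\d+)", re.IGNORECASE)

def extract_item_numbers(text: str) -> list[int]:
    """Return every item short number referenced in a message, in order."""
    return [int(n) for n in GP_TAG.findall(text)]
```

For example, `extract_item_numbers("Fix #GP-47 and #gp-48")` returns `[47, 48]`, matching the multi-reference behavior described above.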

Confidence Levels

Not all links are created equal. GoalPath assigns a confidence score to each link based on how it was detected:

Source                    | Confidence | Example                             | Triggers auto-status?
#GP-N in commit message   | 100%       | "Fix login #GP-47"                  | Yes
#GP-N in PR title or body | 100%       | PR title: "Auth fix #GP-47"         | Yes
Branch name pattern       | 95%        | feature/GP-47-auth                  | No
Commit in a linked PR     | 90%        | Commit in PR that references #GP-47 | No
AI vector similarity      | 60-90%     | Semantically similar message        | No
Time + author heuristic   | 40-50%     | Owner's commit during active period | No

Only 100% confidence links trigger automatic status changes. All other links appear as suggestions that you can confirm or dismiss.

The Code Activity Panel

Each item has a "Code Activity" section in its detail view. This shows:

  • Confirmed commits (green): 100% confidence or manually confirmed. Shows SHA, message, author, and timestamp.
  • Suggested commits (amber): Below 100% confidence. Shows confidence badge and source label. One-click confirm or dismiss.
  • PR activity: linked PRs with state badges (open, merged, closed) and line change counts.

Confirming a suggestion promotes it to 100% confidence. Dismissing removes it and improves future AI matching.


Automatic Status Updates

When a PR linked to an item changes state, GoalPath can update the item's status automatically.

PR event                     | Item status change
PR opened (references #GP-N) | NotStarted -> Started
PR merged                    | Any -> Finished
PR closed without merge      | No change

Rules and safeguards

  • Only explicit references trigger status changes. Suggested links (AI, branch, heuristic) never move items. This prevents false positives from affecting your board.
  • Manual overrides always win. If someone manually sets an item to a specific status, GoalPath will not override it.
  • Auto-status is per-repository. Toggle it in Settings > Integrations for each connected repository.
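Taken together, the event table and safeguards reduce to a small mapping. This is a sketch using GoalPath's status names; the function signature is hypothetical:

```python
def next_status(event: str, current: str, manually_set: bool) -> str:
    """Map a PR event on a 100%-confidence link to the item's next status."""
    if manually_set:
        return current  # manual overrides always win
    if event == "opened" and current == "NotStarted":
        return "Started"
    if event == "merged":
        return "Finished"
    return current  # closed without merge, or any other event: no change
```

Note that only the NotStarted -> Started transition is guarded by the current status; a merge moves an item to Finished from any state, unless it was set manually.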

Why "Finished" and not "Delivered"?

GoalPath's item lifecycle is: NotStarted -> Started -> Finished -> Delivered. A merged PR means the code is done, but it may not be deployed or verified yet. "Finished" means ready for stakeholder review. "Delivered" is a separate step that acknowledges the work has been accepted.


DORA Metrics

DORA (DevOps Research and Assessment) defines four metrics that predict software delivery performance. GoalPath computes all four from your deployment data.

Deployment Frequency

How often code reaches production. GoalPath counts production deployments per week and classifies against industry benchmarks:

Level  | Threshold           | What it means
Elite  | 7+ deploys/week     | At least daily. Continuous delivery.
High   | 1-7 deploys/week    | Weekly to daily. Strong deployment pipeline.
Medium | 0.25-1 deploys/week | Monthly to weekly. Typical for teams with manual release processes.
Low    | < 0.25/week         | Less than monthly. Significant deployment friction.
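The classification is a simple threshold ladder. A sketch of the logic (function name hypothetical):

```python
def deploy_frequency_level(deploys_per_week: float) -> str:
    """Classify weekly deployment frequency against the benchmark table."""
    if deploys_per_week >= 7:
        return "Elite"
    if deploys_per_week >= 1:
        return "High"
    if deploys_per_week >= 0.25:
        return "Medium"
    return "Low"
```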

Lead Time for Changes

Time from first commit to production deployment, measured in hours. GoalPath matches the deployed commit SHA against stored commit timestamps. The metric uses the median across all deployments in the measurement period (default: 12 weeks).

A long lead time often indicates:

  • Slow code review processes
  • Manual QA gates
  • Deployment queue bottlenecks
  • Large batch releases (many changes bundled into one deploy)
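Given the definition above, the metric itself is a straightforward median over matched (commit, deploy) timestamp pairs. The input shape here is an assumption for illustration:

```python
from statistics import median

def lead_time_hours(pairs: list[tuple[float, float]]) -> float:
    """Median hours from first commit to production deploy.

    Each pair is (commit_ts, deploy_ts) in Unix seconds, one pair per
    deployment in the measurement period.
    """
    return median((deploy - commit) / 3600 for commit, deploy in pairs)
```

Using the median rather than the mean keeps one stalled release from dominating the metric.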

Change Failure Rate

Percentage of deployments that are followed by another deployment within a configurable window (default: 4 hours). The assumption: a rapid follow-up deployment indicates the first one caused a problem that needed fixing.

This is a heuristic approximation. It works well for teams that deploy a few times per week. For teams deploying many times per day, the heuristic may over-count failures since back-to-back deployments are normal workflow. GoalPath notes this limitation in the metrics display.
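The heuristic can be sketched by walking consecutive deployment timestamps and counting rapid follow-ups (function name and input shape are illustrative):

```python
def change_failure_rate(deploy_times: list[float], window_hours: float = 4.0) -> float:
    """Fraction of deploys followed by another deploy within the window.

    deploy_times: Unix timestamps in seconds, sorted ascending.
    """
    if not deploy_times:
        return 0.0
    window = window_hours * 3600
    failed = sum(
        1
        for earlier, later in zip(deploy_times, deploy_times[1:])
        if later - earlier <= window
    )
    return failed / len(deploy_times)
```

With deploys at t=0, t=1h, and t=28h and the default 4-hour window, only the first deploy counts as failed (the 1-hour follow-up), giving a rate of 1/3.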

Mean Time to Recovery (MTTR)

Median hours between a "failed" deployment (one followed by a quick fix) and the next successful deployment. Despite the "mean" in the conventional name, GoalPath uses the median, which is less sensitive to a single long outage. Lower MTTR means your team recovers from problems faster.

DORA Level

GoalPath assigns an overall DORA level based on the worst metric across all four:

Level  | Criteria
Elite  | Frequency >= 7/week AND lead time < 24h AND failure rate < 15% AND MTTR < 1h
High   | Frequency >= 1/week AND lead time < 168h AND failure rate < 30% AND MTTR < 24h
Medium | Frequency >= 0.25/week AND lead time < 720h AND failure rate < 45% AND MTTR < 168h
Low    | Anything else
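Because every condition in a tier is AND-ed, the overall level is decided by the worst metric: failing any one threshold drops you to the next tier down. A sketch (failure rate expressed as a fraction, e.g. 0.15 for 15%):

```python
def dora_level(freq: float, lead_h: float, cfr: float, mttr_h: float) -> str:
    """Overall DORA level: the best tier whose every threshold is met."""
    if freq >= 7 and lead_h < 24 and cfr < 0.15 and mttr_h < 1:
        return "Elite"
    if freq >= 1 and lead_h < 168 and cfr < 0.30 and mttr_h < 24:
        return "High"
    if freq >= 0.25 and lead_h < 720 and cfr < 0.45 and mttr_h < 168:
        return "Medium"
    return "Low"
```

For example, a team deploying 10 times a week with a 12-hour lead time but a 40% failure rate lands at Medium: the failure rate alone pulls it down two tiers.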

Configuration

In Settings > Integrations > DORA Configuration:

  • Deployment source: Choose between GitHub Releases or the Deployment API. Some teams create releases to mark production deploys. Others use GitHub's deployment environment system. Pick whichever matches your workflow.
  • Production environment: When using the Deployment API, GoalPath filters to this environment name (default: "production"). Staging and preview deployments are excluded.
  • Change failure window: Hours within which a follow-up deployment counts as a failure indicator (default: 4 hours).

Backfill

When you first connect a repository, GoalPath imports the last 90 days of releases from your git provider and computes initial DORA metrics. After that, metrics update automatically as new webhook events arrive.


AI Commit Matching

Not every developer will remember to tag every commit. GoalPath uses two strategies to suggest links for untagged commits.

Vector Similarity

GoalPath already maintains vector embeddings for all item titles and descriptions (used for search and duplicate detection). When an untagged commit arrives:

  1. The commit message is embedded using the same model (OpenAI text-embedding-3-small, 1536 dimensions)
  2. GoalPath queries pgvector for items with similar embeddings, filtered to items that are currently Started or were started within the last 90 days
  3. Matches above the confidence threshold (default: 0.6) become suggested links

The confidence score is the cosine similarity between the commit message embedding and the item embedding, with boosters:

  • +0.1 if the commit author's email matches the item owner's email
  • +0.05 if the commit falls within the item's active period (started within the last 2 weeks)
  • Capped at 0.95 (only explicit #GP-N references reach 1.0)
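The scoring rule above can be sketched directly (function name and boolean inputs are illustrative; the similarity value would come from the pgvector query):

```python
def suggestion_confidence(
    cosine_sim: float,
    same_author: bool,
    in_active_period: bool,
) -> float:
    """Cosine similarity plus boosters, capped below explicit-tag confidence."""
    score = cosine_sim
    if same_author:
        score += 0.10  # commit author email matches item owner's email
    if in_active_period:
        score += 0.05  # item started within the last 2 weeks
    return min(score, 0.95)  # only explicit #GP-N references reach 1.0
```

The cap matters: even a near-perfect semantic match with both boosters stays a suggestion and never triggers auto-status.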

Time + Author Heuristic

A simpler fallback: if a commit's author email matches the owner of a Started item, GoalPath creates a low-confidence suggestion (0.4-0.5). This catches cases where the commit message is too terse for meaningful vector similarity.

Managing Suggestions

Suggested links appear in the item's Code Activity panel with their confidence score and source. You have two options:

  • Confirm: Promotes the link to 100% confidence. The commit is now fully linked to the item.
  • Dismiss: Removes the suggestion. Dismissed links are not recreated by future webhook events.

Both actions improve future matching. Over time, GoalPath learns your team's patterns.

Configuration

  • Toggle AI matching per repository in Settings > Integrations
  • Set confidence threshold (0.3 to 0.9, default 0.6). Higher = fewer suggestions but higher quality. Lower = more suggestions but more noise.

Importing Existing Issues

If your team already tracks issues in GitHub, GitLab, or Bitbucket, you can import them into GoalPath instead of re-entering everything manually.

The import wizard

Available from Settings > Integrations or during onboarding (the "Break It Down" phase):

  1. Connect: Authorize GoalPath with your git provider
  2. Select repository: Choose which repo to import from. See issue counts per repo.
  3. Preview and map: Before importing, configure how issues translate to GoalPath items:
    • Labels -> Item types: Map "bug" to Bug, "enhancement" to Feature, etc. Smart defaults provided.
    • Milestones -> GoalPath milestones: Map each git milestone to an existing GoalPath milestone or create a new one.
    • Assignees -> Owners: GoalPath matches assignees by name/email. Unmatched users are skipped.
    • Options: Include or exclude closed issues.
  4. Import: One click creates all mapped items. Each item stores the original issue URL in metadata for reference.

Idempotent imports

The import is safe to run multiple times. GoalPath tracks which issues have already been imported (by issue number and repository) and skips duplicates. This means you can re-import after adding new issues to catch up without creating duplicates.
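The deduplication amounts to keying each issue by (repository, issue number) and skipping keys already seen. A sketch under assumed input shapes (the tuple format and function name are illustrative):

```python
def import_issues(issues, already_imported: set[tuple[str, int]]) -> list[str]:
    """Create items for issues not yet imported; skip known (repo, number) keys.

    issues: iterable of (repo, number, title) tuples.
    """
    created = []
    for repo, number, title in issues:
        key = (repo, number)
        if key in already_imported:
            continue  # re-running the import never duplicates items
        already_imported.add(key)
        created.append(title)
    return created
```

Running the same import twice creates nothing the second time; only issues added since the last run produce new items.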


How DORA Data Enriches GoalPath

The git integration is not a standalone dashboard. DORA data flows into GoalPath's existing systems:

Delivery Forecasts

GoalPath's Monte Carlo forecasting already uses team velocity to predict delivery dates. DORA data adds deployment reality:

  • Change failure rate above 30%: The pessimistic forecast estimate widens to account for rework from failed deployments. If 40% of your deploys need a follow-up fix, your effective throughput is lower than raw velocity suggests.
  • Deploy lead time: Added as a buffer between "code complete" and "in production." If your median deploy lead time is 48 hours, the forecast adds 2 days to the delivery date.
  • Dropping deployment frequency: If deploys per week drop below 0.5, GoalPath flags velocity data as potentially stale. Teams that stop deploying may not be completing work even if items are marked Finished.
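One way the first two adjustments could combine is sketched below. Only the lead-time buffer (48 hours of lead time adds 2 days) is stated above; the proportional widening factor for high failure rates is an assumption for illustration, not GoalPath's actual formula:

```python
def adjusted_delivery_days(
    forecast_days: float,
    median_lead_time_hours: float,
    change_failure_rate: float,
) -> float:
    """Add the deploy lead-time buffer; widen the estimate for rework.

    change_failure_rate is a fraction, e.g. 0.40 for 40%. The widening
    factor (1 + rate above the 30% threshold trigger) is illustrative.
    """
    days = forecast_days + median_lead_time_hours / 24
    if change_failure_rate > 0.30:
        days *= 1 + change_failure_rate  # pessimistic widening for rework
    return days
```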

Progress Reports

Automated weekly progress reports include code activity when available:

  • Commits linked to items are referenced in milestone progress summaries
  • DORA trend changes (e.g., deployment frequency dropped from High to Medium) are flagged

Delivery Insights

The insights engine gains four DORA-related questions:

  • "How often are we deploying?"
  • "What's our lead time from code to production?"
  • "Are our deployments reliable?"
  • "How quickly do we recover from failures?"

These are answered from real data, not generic advice.


Provider Comparison

Capability           | GitHub        | GitLab         | Bitbucket
OAuth connection     | Yes           | Yes            | Yes
Issue import         | Yes           | Yes            | Yes
Push webhook         | Yes           | Yes            | Yes
PR/MR webhook        | Pull Requests | Merge Requests | Pull Requests
Release webhook      | Yes           | Yes            | N/A (use tags)
Deployment webhook   | Yes           | Yes            | Limited
DORA: Releases       | Yes           | Yes            | Use Deployment API
DORA: Deployment API | Yes           | Yes            | Via commit statuses
Webhook auth         | HMAC-SHA256   | Secret token   | HMAC-SHA256

All three providers feed into the same GoalPath engine. Commits, links, DORA metrics, and forecasts work identically regardless of which provider hosts your code.


Troubleshooting

Items don't link to commits:

  • Verify the webhook is configured in your git provider's repository settings
  • Check that the webhook URL matches your provider (/github, /gitlab, or /bitbucket)
  • Confirm the webhook secret matches what GoalPath generated
  • Make sure the tag format is correct: #GP-47 (not #47 or GP47)

Auto-status not updating:

  • Check that auto-status is enabled for the repository in Settings > Integrations
  • Only #GP-N references (100% confidence) trigger status changes. Branch-name and AI matches do not.
  • Manual status overrides prevent auto-updates. If someone manually changed the status after the PR was opened, GoalPath respects the manual choice.

DORA metrics show 0 or are missing:

  • DORA requires at least one deployment or release event. If your team doesn't use GitHub Releases or the Deployment API, DORA metrics won't populate.
  • Check the deployment source setting (Releases vs Deployment API) in DORA Configuration
  • The initial backfill imports the last 90 days of releases. If your repo has no releases in that window, metrics start from the first new webhook event.

AI suggestions are too noisy or too sparse:

  • Adjust the confidence threshold in Settings > Integrations. Higher (0.8) = fewer, more accurate suggestions. Lower (0.4) = more suggestions, more noise.
  • Make sure items have descriptive titles and descriptions. The AI matches against item embeddings, so vague titles produce vague matches.