ScopeGuard: A Scope-Creep Radar for Client Projects
ScopeGuard ingests change requests from Jira and Notion, prices their impact with an LLM, and surfaces a project's trajectory on one dashboard PMs, clients, and engineers can all read.
Key Takeaways
- ScopeGuard turns scope creep into a tracked object: every Jira ticket, Notion paragraph, or hallway ask becomes a priced, auditable change request.
- The LLM only handles unstructured language and impact estimation — dedup, trajectory math, and approval flow are deterministic code.
- A single trajectory gauge (green / amber / red) tells PMs, clients, and engineers the same story without a meeting.
If you have ever shipped client work, you already know the villain. Scope creep never announces itself. It arrives as "one quick change," "a small tweak," "can we also..." — and suddenly the budget is fiction and the deadline is folklore.
ScopeGuard is built to drag that villain into the light. It ingests change requests from the tools teams already use, scores their impact with an LLM, and surfaces a project's trajectory on a dashboard everyone can read — PM, client, and engineer alike.
This post walks through what ScopeGuard hopes to do, what using it actually looks like end-to-end, and the stack underneath.
What problem does ScopeGuard solve?
Most overruns are not caused by one catastrophic change. They are the sum of small, un-priced ones no one bothered to write down. So the goal is not to prevent scope changes — clients will always ask, and they should. The goal is to make every change:
- Visible — surfaced the moment it shows up in Jira, Notion, or a hallway ask.
- Priced — with an estimated cost and day impact attached before anyone commits.
- Accountable — stored as a durable object with rationale, not a Slack thread that scrolls away.
- Legible to non-engineers — a single trajectory gauge the client can read without a meeting.
In short: turn fuzzy "can we also..." requests into structured, auditable change objects so decisions become explicit.
How does ScopeGuard work in a real project?
Here is what using ScopeGuard looks like in action.
Day 0 — Kickoff
Maya (PM) wins a client engagement: build a mobile app, £40,000 budget, 12-week deadline. She logs into ScopeGuard, clicks New Project, fills in the numbers, and hits Lock Baseline.
The app writes those values into projects.baseline_budget and projects.baseline_deadline, then stamps baseline_locked_at. Every future calculation references this frozen line. If the number moves later, it is because someone explicitly re-baselined — not because Maya forgot what she quoted.
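The lock itself can be tiny and deterministic. A minimal sketch of that guard, with assumed field names mirroring the `projects` table described in this post (`lockBaseline` is an illustrative function, not a confirmed part of the codebase):

```typescript
// Sketch of the baseline lock. Field names follow the projects table
// described in the post; persistence is left out.
interface Project {
  baseline_budget: number | null;
  baseline_deadline: string | null;  // ISO date
  baseline_locked_at: string | null; // stamped once, never overwritten
}

// Freeze the quoted numbers. Re-baselining must be an explicit, separate
// action, so this function refuses to overwrite a locked baseline.
function lockBaseline(
  project: Project,
  budget: number,
  deadline: string,
  now: Date = new Date()
): Project {
  if (project.baseline_locked_at !== null) {
    throw new Error("Baseline already locked; re-baseline explicitly instead");
  }
  return {
    ...project,
    baseline_budget: budget,
    baseline_deadline: deadline,
    baseline_locked_at: now.toISOString(),
  };
}
```

The point of the throw is cultural as much as technical: the quoted number can only move through a deliberate re-baseline, never a quiet update.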
She invites her client contact with the client role and her two engineers as members. Roles now gate what everyone sees.

Dashboard state: gauge reads green — 0% burn. Empty change request list.
Week 2 — The first Jira drift
An engineer, Sam, files a ticket in Jira titled "Add offline mode for mobile app." Standard engineering thinking — useful, not in the original scope.
Every 10 minutes, ScopeGuard's Jira sync runs. The flow:
- Pull new tickets via the Jira Cloud API.
- For each, compute `external_ref = jira:PROJ-142` and check `change_requests` for a dedup hit.
- New ticket — send title and description to the LLM (`lib/llm.ts` → `analyzeScopeCreep`).
- LLM returns structured JSON: `impact_cost: £3,200`, `impact_days: 6`, `risk: medium`, `ai_analysis: "Offline mode requires local persistence layer, sync reconciliation, and conflict handling. Estimate assumes single-user data only."`
- Insert into `change_requests` as `status: draft`, `source: jira`.
- Post to Slack: "New change request detected — Offline mode (+£3,200, +6d)."
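The dedup step is worth underlining: it is a plain lookup that runs before any model call. A hypothetical sketch of that gate, where `externalRef` and `ticketsNeedingAnalysis` are illustrative names rather than the real `lib/parsers.ts` API:

```typescript
// Sketch of the dedup gate in the Jira sync. The expensive LLM call only
// fires for tickets whose external_ref has never been seen.
type Ticket = { key: string; title: string; description: string };

const externalRef = (source: "jira" | "notion", id: string): string =>
  `${source}:${id}`;

// `existingRefs` stands in for a SELECT of change_requests.external_ref.
function ticketsNeedingAnalysis(
  tickets: Ticket[],
  existingRefs: Set<string>
): Ticket[] {
  return tickets.filter((t) => !existingRefs.has(externalRef("jira", t.key)));
}
```

Because the ref is a deterministic string, re-running the sync is idempotent: a ticket already priced once never burns a second LLM call.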
Maya opens the dashboard. The gauge has nudged to 8% burn — still green. She clicks into the item, reads the AI rationale, edits the cost down to £2,800 because she knows Sam is fast on this kind of work, and moves it to pending for the client to review.
Week 4 — The PRD ambush
The client sends a Notion page: "Phase 2 requirements." Three paragraphs describing push notifications, a referral system, and "a light admin panel, nothing fancy."
Maya pastes the text into Parse PRD. The /api/parse/prd route fires the LLM in extraction mode. It returns three candidate change requests:
- Push notifications — £1,800 / 4d
- Referral system — £4,500 / 9d
- Admin panel — £6,200 / 12d, with `ai_analysis` flagging: "'nothing fancy' is underspecified — estimate assumes read-only dashboards without role management. Cost may double if write access is required."
All three land as draft. That last line of analysis is the whole point — the model is not hiding uncertainty, it is surfacing it.
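Because the model's output drives database writes, it pays to validate the JSON before anything becomes a draft row. A hedged sketch of what that validation could look like; the exact schema inside `lib/llm.ts` may differ:

```typescript
// Sketch of validating the LLM's extraction output before insertion.
// Field names follow the change_requests columns described in the post.
interface CandidateChange {
  title: string;
  impact_cost: number;
  impact_days: number;
  risk: "low" | "medium" | "high";
  ai_analysis: string;
}

// Never trust model JSON blindly: check types and bounds, and reject
// malformed items instead of letting them become draft rows.
function parseCandidate(raw: unknown): CandidateChange | null {
  if (typeof raw !== "object" || raw === null) return null;
  const r = raw as Record<string, unknown>;
  const risks = ["low", "medium", "high"];
  if (
    typeof r.title !== "string" ||
    typeof r.impact_cost !== "number" || r.impact_cost < 0 ||
    typeof r.impact_days !== "number" || r.impact_days < 0 ||
    typeof r.risk !== "string" || !risks.includes(r.risk) ||
    typeof r.ai_analysis !== "string"
  ) {
    return null;
  }
  return r as unknown as CandidateChange;
}
```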
Dashboard state: if all three are approved, the gauge jumps to 38% burn (£12,500 approved plus £2,800 pending on the £40,000 baseline) — still green, but visibly moving.
Week 7 — The gauge turns amber
The client approves push notifications and the referral system. Defers the admin panel. Two more Jira tickets drift in during the sprint — a design revision and an analytics integration.
ScopeGuard rolls the math:
- Approved + pending cost impact: £28,400 on a £40,000 baseline — 71% cost burn.
- Day impact: 24 days on an 84-day runway — 28% day burn.
- Worst case wins: 71% → amber.
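That rollup is deterministic arithmetic, not inference. A minimal sketch of it; the 70% amber and 90% red thresholds here are assumptions for illustration, not confirmed product values:

```typescript
// Sketch of the trajectory gauge. worstCase = approved + pending impact,
// per the burn rules described in the post.
type Level = "green" | "amber" | "red";

function trajectory(
  baselineBudget: number,
  baselineDays: number,
  worstCase: { cost: number; days: number },
  amberAt = 0.7, // assumed threshold
  redAt = 0.9    // assumed threshold
): { costBurn: number; dayBurn: number; level: Level } {
  const costBurn = worstCase.cost / baselineBudget;
  const dayBurn = worstCase.days / baselineDays;
  const worst = Math.max(costBurn, dayBurn); // worst case wins
  const level: Level =
    worst >= redAt ? "red" : worst >= amberAt ? "amber" : "green";
  return { costBurn, dayBurn, level };
}
```

Feeding in the Week 7 numbers (£28,400 on £40,000, 24 of 84 days) gives a 71% cost burn and an amber gauge, matching the story above.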
The gauge flips. Slack fires: "Project Aurora: trajectory amber (71% cost burn). 3 pending change requests."
Maya has the conversation with the client before it becomes a budget crisis. She shares the ScopeGuard link — the client sees every line item, the AI rationale, the approval log. The conversation is about tradeoffs, not blame.
Week 10 — Landing it
Client defers two more items to Phase 2. Maya marks the admin panel rejected with a comment. Final trajectory: 67% burn — green.
The project ships at 11 weeks. The decision trail lives in the change_requests table — every approval, every estimate, every AI explanation — ready for the retrospective and the next engagement.
That is the loop. That is what ScopeGuard is trying to do.
The Tech Stack
Frontend
- Next.js (App Router), React 19, TypeScript — serving UI and API route handlers from one deployment.
- shadcn/ui components (`Card`, `Button`, `Badge`, `Dialog`, `Select`) with Lucide icons.
- Tailwind CSS via `@tailwindcss/postcss`, with `clsx` + `tailwind-merge` in `src/lib/utils.ts`.
- `next.config.ts` enables the React compiler (`reactCompiler: true`).
Backend, data, auth
- Supabase — Postgres for data, GoTrue for auth (Google OAuth + session cookies).
- Prisma as the DB access layer, with a `prisma.ts` singleton for connection pooling.
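The singleton exists so Next.js hot reloads don't spawn a fresh client (and connection pool) on every file save. A generic sketch of the pattern; the real `prisma.ts` would presumably call `new PrismaClient()` inside the factory:

```typescript
// Generic sketch of the globalThis singleton pattern used to survive
// Next.js dev-mode hot reloads: the factory runs exactly once per process.
function singleton<T>(key: string, factory: () => T): T {
  const g = globalThis as unknown as Record<string, T>;
  if (!(key in g)) g[key] = factory();
  return g[key];
}

// Hypothetical usage in prisma.ts:
//   export const prisma = singleton("prisma", () => new PrismaClient());
```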
Integrations
- OpenAI for all LLM analysis — the single place the probabilistic work lives.
- Jira Cloud API for ticket ingestion.
- Notion API for document and PRD ingestion.
- Slack webhooks for drift alerts.
The library layer
Route handlers stay thin. The real logic lives in src/lib/*:
- `lib/llm.ts` — OpenAI wrapper, `analyzeScopeCreep`, PRD extraction
- `lib/jira.ts`, `lib/notion.ts`, `lib/slack.ts` — integration clients
- `lib/parsers.ts` — normalization + dedup
- `lib/cache.ts` — sessionStorage stale-while-revalidate, 60s TTL
- `lib/supabase/*` — SSR helpers
- `lib/sync-user.ts` — user upsert on auth callback
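As one example of keeping this layer boring, the stale-while-revalidate idea in `lib/cache.ts` fits in a few lines. A hypothetical sketch with the storage injected so it runs outside the browser; the real module presumably talks to `sessionStorage` directly:

```typescript
// Sketch of stale-while-revalidate with a 60s TTL. A stale entry is still
// returned (serve it immediately), flagged so the caller can revalidate.
interface StorageLike {
  getItem(k: string): string | null;
  setItem(k: string, v: string): void;
}

const TTL_MS = 60_000;

function writeCache<T>(store: StorageLike, key: string, value: T, now = Date.now()): void {
  store.setItem(key, JSON.stringify({ value, at: now }));
}

function readCache<T>(
  store: StorageLike,
  key: string,
  now = Date.now()
): { value: T; stale: boolean } | null {
  const raw = store.getItem(key);
  if (raw === null) return null;
  const { value, at } = JSON.parse(raw) as { value: T; at: number };
  return { value, stale: now - at > TTL_MS }; // stale => serve, then refetch
}
```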
The data model
Five tables carry the whole product:
- workspaces — multi-tenant root
- workspace_members — roles (`admin | member | client`)
- projects — delivery unit with `budget`, `deadline`, and frozen baseline fields
- change_requests — the atom of scope creep: `impact_cost`, `impact_days`, `ai_analysis`, `status` (`draft | pending | approved | rejected | deferred`), `external_ref` for dedup
- integration_configs — per-workspace provider configs with API-layer secret redaction
That last table matters: a governance tool that leaks Jira tokens is worse than no tool.
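Redaction can live in one small, testable function at the API boundary. An illustrative sketch; the secret key names here are assumptions, not the actual column names:

```typescript
// Sketch of API-layer secret redaction for integration_configs: tokens
// never leave the server intact, only a recognizable suffix survives so
// admins can tell which credential is configured.
const SECRET_KEYS = new Set(["api_token", "webhook_url", "client_secret"]); // assumed names

function redactConfig(config: Record<string, unknown>): Record<string, unknown> {
  const out: Record<string, unknown> = {};
  for (const [k, v] of Object.entries(config)) {
    out[k] =
      SECRET_KEYS.has(k) && typeof v === "string" && v.length > 4
        ? "****" + v.slice(-4)
        : v;
  }
  return out;
}
```

Running every config response through this one chokepoint is what makes "the API never leaks a token" a property you can unit-test rather than a habit you hope everyone keeps.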
The change-request status field is the heart of the workflow. Each value has a defined entry rule, who can move it, and how the trajectory gauge counts it:
| Status | Entry trigger | Counts toward burn? | Who can move it | Next valid states |
|---|---|---|---|---|
| `draft` | LLM creates from Jira/Notion ingestion | No | Member, Admin | `pending`, `rejected` |
| `pending` | Member promotes draft for client review | Yes (worst-case) | Member, Admin, Client | `approved`, `rejected`, `deferred` |
| `approved` | Client accepts cost and day impact | Yes (committed) | Client, Admin | — (terminal) |
| `rejected` | Client declines or member kills duplicate | No | Client, Admin | — (terminal) |
| `deferred` | Client wants it but in a future phase | No | Client, Admin | `pending` (when re-baselined) |
The trajectory gauge sums approved cost as committed burn, adds pending cost as worst-case overlay, and ignores draft, rejected, deferred entirely. That asymmetry is what lets the gauge tell the truth: green when committed work is on plan, amber when the worst-case projection breaches a threshold.
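Encoded as code, the table collapses to a transition map plus two predicates. A sketch mirroring the rules above, not the actual implementation:

```typescript
// The approval flow as a deterministic state machine, transcribed from
// the status table in this post.
type Status = "draft" | "pending" | "approved" | "rejected" | "deferred";

const TRANSITIONS: Record<Status, Status[]> = {
  draft: ["pending", "rejected"],
  pending: ["approved", "rejected", "deferred"],
  approved: [],            // terminal
  rejected: [],            // terminal
  deferred: ["pending"],   // re-enters review when re-baselined
};

const canMove = (from: Status, to: Status): boolean =>
  TRANSITIONS[from].includes(to);

// Only approved (committed) and pending (worst-case) feed the gauge.
const countsTowardBurn = (s: Status): boolean =>
  s === "approved" || s === "pending";
```

Keeping this as a lookup table means an invalid move is a rejected request, not a silent data corruption, and the burn rules stay in one auditable place.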
Why use LLMs only for impact analysis?
One rule runs through every component:
Use the LLM for the one thing only the LLM can do — reading unstructured human language and estimating impact. Do everything else in boring, testable, auditable code.
Dedup is a hash lookup, not a prompt. The trajectory gauge is arithmetic, not inference. Approvals are a state machine, not a chat. The LLM is the engine. The scaffolding around it is what makes the output trustworthy.
ScopeGuard will not eliminate scope creep. Nothing will. What it does is make scope creep negotiable — priced, logged, and visible — before the budget quietly becomes fiction.
That is enough.
Related
- Discourse X-Ray: Making Invisible Writing Structure Visible — same principle: LLM for the unstructured semantic work, your own code for everything that needs to be reliable.
- cron-human: why I built yet another cron library — building deterministic tools that AI agents can call without hallucinating.
Frequently asked questions
- What is ScopeGuard?
- ScopeGuard is a tool for client-services teams that ingests change requests from Jira and Notion, scores their cost and day impact with an LLM, and surfaces project trajectory on a single dashboard PMs, clients, and engineers can all read, with Slack alerts when drift appears. It turns informal 'can we also...' asks into structured, auditable change objects.
- How does ScopeGuard estimate the cost of a change request?
- Each change request is sent to OpenAI in extraction mode. The LLM returns structured JSON with impact_cost, impact_days, risk level, and an ai_analysis explanation. Estimates are editable — Maya the PM can override the model when she has domain context, and the override is logged with rationale.
- What integrations does ScopeGuard support?
- Jira Cloud (ticket sync every 10 minutes), Notion (PRD parsing via /api/parse/prd), and Slack webhooks (drift alerts when trajectory turns amber or red). Integration credentials are redacted at the API layer to prevent token leaks.
- How does ScopeGuard avoid duplicate change requests?
- Every incoming item gets an external_ref like jira:PROJ-142 or notion:page_id, and the change_requests table is checked for a dedup hit before the LLM is invoked. Dedup is a hash lookup, not a prompt — deterministic, fast, and free.
- What tech stack does ScopeGuard use?
- Next.js App Router with React 19 and TypeScript on the frontend; Supabase Postgres + GoTrue auth via Prisma on the data side; OpenAI for LLM analysis; shadcn/ui + Tailwind for components. The library layer (lib/llm.ts, lib/jira.ts, etc.) keeps route handlers thin.