# Build-Time Agent Stack

**Status:** Draft
**Purpose:** Patterns for AI-assisted development (agents that help developers build)
## The Problem
Build-time agents (Claude Code, Cursor, Copilot) need:

- **Context** — understanding what exists in the codebase
- **Constraints** — knowing which patterns to use (and which to avoid)
- **Verification** — checking that generated code is correct
- **Guidance** — steering behavior toward project conventions

Without these, AI generates plausible-looking code that doesn't fit the project.
## Reference Implementations
| Project | Pattern | What It Solves |
|---|---|---|
| Signal Forge | Skills framework | Reusable content generation with voice preservation |
| AIX | Agent routing | Domain-specific experts for specialized problems |
| AES | Workflow modes | Fast vs careful verification based on risk |
| Shared Cursorrules | Hierarchical config | Inherited conventions across projects |
## Pattern 1: Skills Framework

> **Source:** Signal Forge (`signal-forge/.claude/skills/`)

Skills are self-contained, reusable units that encode domain knowledge for AI:
```
skills/
├── executive-pov/
│   ├── SKILL.md                            # Instructions + voice spec
│   └── references/
│       └── voice-guide.md                  # Consultant perspective patterns
├── solution-architecture/
│   ├── SKILL.md
│   └── references/
│       ├── solution-architecture-guide.md  # arc42 template
│       └── adr-template.md
├── thought-leadership/
│   ├── SKILL.md
│   └── references/
│       └── signal-dispatch-voice.md        # Empirical voice analysis
└── ...
```
Each skill encodes:
- Content classification (Thought Leadership vs Architecture vs Advisory)
- Voice principles (tone, structure, language patterns)
- Quality checklists (what success looks like)
- Template structures (SCR framework, arc42, etc.)
**Forge Opinion:**

For Go + HTMX + Svelte islands, define skills for:
| Skill | Purpose |
|---|---|
| `handler` | Generate Go HTTP handlers with HTMX responses |
| `templ-component` | Generate Templ templates following the registry |
| `sqlc-query` | Generate SQL queries + sqlc annotations |
| `svelte-island` | Generate Svelte components with the mount pattern |
| `sse-endpoint` | Generate SSE streaming handlers |
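
As a concrete shape for one of these, a hypothetical `handler` SKILL.md might look like the sketch below. The file layout mirrors the Signal Forge structure above, but the instructions and checklist items are illustrative assumptions, not Signal Forge's actual content:

```markdown
# Skill: handler

## Purpose
Generate Go HTTP handlers that return HTMX-compatible HTML partials.

## Instructions
- Route with the net/http stdlib (`mux.HandleFunc("POST /teams", ...)`)
- Render Templ components from the ComponentRegistry; never return JSON
- Set HTMX response headers (`HX-Trigger`, `HX-Retarget`) where needed

## Quality Checklist
- [ ] Errors wrapped with context (`fmt.Errorf("...: %w", err)`)
- [ ] Partial renders correctly in isolation
- [ ] Covered by an httptest case
```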
## Pattern 2: Agent Routing

> **Source:** AIX (`aix/.claude/AGENTS.md`)

Define domain-specific agents that route problems to specialized expertise:
```markdown
## TypeScript Schema Debugger
**Trigger:** SelectQueryError, property existence errors on DB types
**Expertise:** Supabase schema drift, type regeneration
**Action:** Investigate schema vs types before code changes

## Coding Standards Reviewer
**Trigger:** After significant code changes
**Expertise:** Next.js, TypeScript, Tailwind, Supabase patterns
**Action:** Review for naming, ESLint, Tailwind patterns

## UX/UI Auditor
**Trigger:** Component changes, accessibility concerns
**Expertise:** Design system compliance, a11y
**Action:** Audit against design tokens, WCAG
```
**Forge Opinion:**

For Go + HTMX + Svelte islands, define agents for:
| Agent | Trigger | Expertise |
|---|---|---|
| `go-handler-reviewer` | After handler changes | net/http patterns, middleware, error handling |
| `templ-auditor` | After template changes | Component registry, HTMX attributes, accessibility |
| `sqlc-debugger` | Query errors | Schema drift, query correctness, type generation |
| `island-reviewer` | After island changes | Mount patterns, state isolation, bundle size |
| `htmx-patterns` | HTMX attribute questions | hx-* attributes, swap strategies, SSE |
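
Written in the AIX format above, the first of these might look like this (a sketch; the Action wording is an assumption):

```markdown
## go-handler-reviewer
**Trigger:** After handler changes
**Expertise:** net/http patterns, middleware, error handling
**Action:** Check method routing, error wrapping, and that responses are
HTML partials with correct HTMX headers
```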
## Pattern 3: Decision Framework

> **Source:** AIX (`aix/.claude/decision-framework.md`)

Evaluate every technical decision with Current → Better → Best:
```markdown
## Real-time Updates Decision

🟡 CURRENT: SSE with polling (1s latency, database load)
🟢 BETTER:  SSE + Supabase Realtime (100ms latency, zero polling)
🔵 BEST:    Direct client Supabase subscription (50ms latency)

**Recommendation:** BETTER — 10x improvement with minimal migration cost
```
**Forge Opinion:**

Apply this framework to Go + HTMX decisions:
| Decision | Current | Better | Best |
|---|---|---|---|
| Form handling | Full page reload | HTMX hx-post + swap | HTMX + optimistic UI |
| Real-time | Polling | SSE basic | SSE + reconnection + backoff |
| Validation | Server-only | Server + inline errors | Server + client (Alpine.js) |
| State | Hidden inputs | HTMX hx-vals | Server session |
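
To make the "Better" form-handling row concrete, here is a minimal Go sketch of an hx-post handler that returns an HTML partial instead of a full page. The `templates.TaskRow`/`templates.FormError` components, the `h.service` field, and the module path are hypothetical:

```go
package handler

import (
	"net/http"

	"rally-hq/internal/templates" // hypothetical module path
)

// CreateTask handles <form hx-post="/tasks" hx-target="#task-list" hx-swap="beforeend">.
func (h *Handler) CreateTask(w http.ResponseWriter, r *http.Request) {
	if err := r.ParseForm(); err != nil {
		http.Error(w, "bad form", http.StatusBadRequest)
		return
	}
	task, err := h.service.CreateTask(r.Context(), r.FormValue("title"))
	if err != nil {
		// Return an inline error partial; HTMX swaps it into the form.
		w.WriteHeader(http.StatusUnprocessableEntity)
		templates.FormError(err.Error()).Render(r.Context(), w)
		return
	}
	// Return only the new row; HTMX appends it to #task-list.
	templates.TaskRow(task).Render(r.Context(), w)
}
```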
## Pattern 4: Hierarchical Configuration

> **Source:** Shared Cursorrules (`dev/.shared/cursorrules/`)

Layer configurations from general to specific:
```
base.cursorrules (universal)
├── TypeScript strict mode
├── Service layer architecture
├── Design system tokenization
├── Accessibility requirements
└── Error handling patterns

nextjs-supabase.cursorrules (stack-specific)
├── Server Components by default
├── Supabase SSR patterns
├── Multi-tenant filtering
└── Row Level Security

[project]/.cursorrules (project-specific)
└── Domain-specific overrides
```
**Forge Opinion:**

For Go + HTMX + Svelte islands:

```
base.cursorrules
├── Go formatting (gofmt)
├── Error handling (errors.Is/As)
├── Context propagation
└── Structured logging

go-htmx-templ.cursorrules
├── Handler patterns (method routing)
├── Templ component registry
├── HTMX response headers
├── SSE patterns
└── sqlc query patterns

go-htmx-templ-svelte.cursorrules
├── Island mount patterns
├── Shared state boundaries
├── Build configuration
└── Bundle size limits
```
## Pattern 5: Workflow Modes

> **Source:** AES (`agentic-engineering-system/.claude/`)

Match verification effort to risk:
```
/fast (95% of work, <1000 tokens)
├── ESLint compliance
├── TypeScript strict mode
├── Unit tests passing
└── Build verification

/careful (5% of work, <5000 tokens)
├── All fast checks +
├── Integration tests
├── Security scanning
└── Manual review (auth, architecture)
```
**Forge Opinion:**

For Go + HTMX:

```
/fast
├── go build
├── go test ./...
├── sqlc compile
├── templ generate
└── golangci-lint

/careful
├── All fast checks +
├── go test -race
├── Integration tests (testcontainers)
├── Security audit (gosec)
├── Bundle size check (islands)
└── Manual review
```
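
One way to make the two modes invocable is a pair of Makefile targets; a minimal sketch, where the `integration` build tag and the npm `size-check` script are assumptions:

```makefile
.PHONY: fast careful

fast:
	templ generate
	sqlc compile
	go build ./...
	go test ./...
	golangci-lint run

careful: fast
	go test -race ./...
	go test -tags=integration ./...  # testcontainers suites
	gosec ./...
	npm --prefix web run size-check  # island bundle budget (hypothetical script)
```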
## Pattern 6: Component/Template Registry

> **Source:** Inferred from A2UI patterns in SIX

Provide AI with a finite set of components to choose from:
```go
// templates/registry.go
var ComponentRegistry = map[string]templ.Component{
	"tournament-card": TournamentCard,
	"team-roster":     TeamRoster,
	"match-card":      MatchCard,
	"bracket-view":    BracketView, // Svelte island
	"score-input":     ScoreInput,
	"status-badge":    StatusBadge,
}

// AI instruction: "Select components from ComponentRegistry.
// Do not invent new components without explicit approval."
```
**Forge Opinion:**

- Maintain an explicit registry in code
- Document each component's purpose and variants
- AI must select from the registry, not invent
- New components require human approval + a registry update (see the enforcement sketch below)
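
Enforcement can be mechanical. A minimal sketch, assuming registry entries are parameterless `templ.Component` values as shown above; the `RenderComponent` helper is hypothetical:

```go
package templates

import (
	"context"
	"fmt"
	"io"
)

// RenderComponent renders a component by registry key. Unknown names
// fail loudly instead of silently rendering nothing, so AI-generated
// references to invented components are caught in tests or at runtime.
func RenderComponent(ctx context.Context, w io.Writer, name string) error {
	c, ok := ComponentRegistry[name]
	if !ok {
		return fmt.Errorf("component %q not in ComponentRegistry; new components need approval", name)
	}
	return c.Render(ctx, w)
}
```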
## Verification Pipeline

For Forge apps, the verification pipeline should be:
```bash
# Pre-commit (fast, <10s)
templ generate        # Generate Go from Templ
sqlc compile          # Validate SQL queries
go build ./...        # Type check
golangci-lint run     # Style + common errors

# Pre-push (thorough, <60s)
go test ./...         # Unit tests
go test -race ./...   # Race detection

# CI (comprehensive, <5min)
go test -cover ./...  # Coverage
gosec ./...           # Security scan
npm run build         # Build Svelte islands
npm run test          # Island unit tests
playwright test       # E2E tests
```
## MCP Server Opportunity

**Current gap:** No MCP servers exposing project context to AI tools.

**Opportunity:** Build an MCP server for Forge that exposes:
```ts
// Hypothetical Forge MCP Server
tools: [
  'list-components',    // Returns ComponentRegistry
  'list-handlers',      // Returns route definitions
  'list-queries',       // Returns sqlc query names
  'validate-component', // Checks if component exists
  'get-decision-log',   // Returns recent ADRs
  'check-conventions',  // Validates against cursorrules
]
```
This would enable AI to query project structure programmatically rather than relying on file reads.
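
As a stand-in for a full MCP implementation, the shape of the first tool can be sketched as a plain JSON endpoint over the registry. The module path and port are hypothetical, and a real server would speak the MCP wire protocol via an SDK rather than raw HTTP:

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"

	"rally-hq/internal/templates" // hypothetical module path
)

func main() {
	// list-components: return registry keys so tools can query
	// project structure instead of parsing source files.
	http.HandleFunc("GET /tools/list-components", func(w http.ResponseWriter, r *http.Request) {
		names := make([]string, 0, len(templates.ComponentRegistry))
		for name := range templates.ComponentRegistry {
			names = append(names, name)
		}
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(map[string]any{"components": names})
	})
	log.Fatal(http.ListenAndServe("localhost:7777", nil))
}
```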
## Context Files for AI

Every Forge project should include:
| File | Purpose |
|---|---|
| `AGENTS.md` | Project-level coding standards (industry standard) |
| `.claude/context.md` | Project overview, domain model, key decisions |
| `.claude/conventions.md` | Coding patterns specific to this project |
| `.claude/components.md` | Component registry with usage examples |
| `.claude/queries.md` | sqlc query catalog |
| `DECISIONS.md` | Stack choices (already exists) |
| `ARCHITECTURE.md` | Technical architecture (already exists) |
## AGENTS.md: Project-Level Standards

> **Source:** AGENTS.md Specification, GitHub Blog

AGENTS.md is an open standard for guiding AI coding agents, now under the Linux Foundation's Agentic AI Foundation. Over 60,000 projects use it, and it's supported by 20+ AI tools (Claude, GitHub Copilot, Cursor, etc.).

### Why AGENTS.md?
```
FORGE DOCUMENTATION LAYERS:
┌─────────────────────────────────────────────────────────────────┐
│ Framework Layer (Forge docs)                                    │
│ "How to build with AI agents"                                   │
│ └─ 12-Factor, BUILD-TIME, RUNTIME, AI-INTEGRATION-LEVELS        │
│                                                                 │
│ Project Layer (AGENTS.md) ◀── NEW                               │
│ "What standards AI should follow in THIS project"               │
│ └─ Commands, testing, style, git workflow, boundaries           │
│                                                                 │
│ Skill Layer (Skills framework)                                  │
│ "How to generate specific artifacts"                            │
│ └─ handler, templ-component, sqlc-query, svelte-island          │
└─────────────────────────────────────────────────────────────────┘
```
### Forge AGENTS.md Template

Every Forge project should include an AGENTS.md at the root:
```markdown
# AGENTS.md

## Stack
- Go 1.23+ with net/http stdlib router
- Templ for HTML templates
- HTMX 2.x for interactivity
- Svelte 5 for islands (escape hatch only)
- PostgreSQL 16 with sqlc
- Tailwind CSS 4.0

## Commands
- `make dev` — Start development server with hot reload
- `make test` — Run all tests
- `make build` — Production build
- `templ generate` — Regenerate templates after .templ changes
- `sqlc generate` — Regenerate queries after .sql changes

## Code Style

### Go
- Handlers return HTML via Templ, not JSON (except for islands)
- All database access via sqlc-generated code
- No ORMs — write SQL directly
- Use `internal/` for non-exported packages
- Error handling: wrap with context, use errors.Is/As

### Templates (Templ)
- Components in `templates/components/`
- Pages in `templates/pages/`
- Layouts in `templates/layouts/`
- All components must be in ComponentRegistry

### HTMX
- Prefer `hx-get`/`hx-post` over JavaScript
- Use SSE (`hx-ext="sse"`) for real-time updates
- Return HTML partials, not JSON
- Always include `hx-swap` attribute

### Islands (Svelte)
- ONLY use for: drag-drop, rich text, complex charts, gestures
- If HTMX can do it, don't use an island
- Self-contained with props, no global state
- Mount via `data-island` attribute

### SQL (sqlc)
- Queries in `internal/repository/queries.sql`
- Use positional parameters ($1, $2, etc.)
- Include `-- name: QueryName :one/:many/:exec` annotations
- No raw SQL in handlers

## Testing
- Handler tests use `httptest`
- Templates have snapshot tests
- sqlc queries have integration tests
- Run `go test -race ./...` before push

## Git Workflow
- Run `make test` before committing
- Commit messages: `type(scope): description`
- No force pushes to main
- PRs require passing CI

## Boundaries (Do Not Touch)
- Never commit `.env` files
- Never use `any` in TypeScript (islands)
- Never inline SQL in handlers
- Never return JSON from handlers (except `/api/` for islands)
- `migrations/` — Schema changes require manual review
- `internal/auth/` — Security-critical, no AI edits
```
### Nested AGENTS.md

Monorepos can include multiple AGENTS.md files. The one closest to the edited file takes precedence:
```
project/
├── AGENTS.md                 # Root: general standards
├── internal/
│   └── ai/
│       └── AGENTS.md         # AI-specific: prompt patterns, validation
└── web/
    └── islands/
        └── AGENTS.md         # Islands-specific: Svelte patterns
```
### Key Sections
| Section | Purpose |
|---|---|
| Stack | Technologies and versions |
| Commands | Executable commands AI can run |
| Code Style | Language-specific conventions |
| Testing | How to verify changes |
| Git Workflow | Commit and PR standards |
| Boundaries | What AI should never touch |
## Pattern 7: Repo Map

> **Source:** Machine Learning Mastery

> "Agents get generic when they don't understand the topology of your codebase."

A repo map is a machine-readable project snapshot (under 600 lines) that prevents blind refactors.

**Structure:**
````markdown
# Repo Map: rally-hq

## File Structure (code files only)
```
cmd/
└── server/main.go        # Entry point
internal/
├── handler/              # HTTP handlers
│   ├── tournament.go     # Tournament CRUD
│   ├── team.go           # Team management
│   └── match.go          # Match scoring
├── service/              # Business logic
├── repository/           # Database access (sqlc)
├── ai/                   # AI service layer
└── templates/            # Templ components
web/
├── islands/              # Svelte islands
└── static/               # Assets
```

## Entry Points
- `cmd/server/main.go` — HTTP server, routes, middleware
- `internal/handler/*.go` — All HTTP endpoints
- `internal/templates/registry.go` — Component whitelist

## Key Conventions
- Handlers return HTML (Templ), not JSON
- All DB queries in `internal/repository/queries.sql`
- Islands only for: bracket-view, score-input, live-feed
- No new components without updating registry

## Do Not Touch
- `migrations/` — Schema changes require manual review
- `internal/auth/` — Security-critical, no AI edits
- `.env*` — Never commit secrets
````
**Generation script:**
```bash
#!/bin/bash
# scripts/generate-repo-map.sh
echo "# Repo Map: $(basename "$(pwd)")"
echo ""
echo "## File Structure"
echo '```'
find . -type f \( -name "*.go" -o -name "*.svelte" -o -name "*.sql" -o -name "*.templ" \) \
  -not -path "./vendor/*" -not -path "./node_modules/*" |
  head -100 |
  tree --fromfile
echo '```'
echo ""
echo "## Entry Points"
grep -r "func main" cmd/ 2>/dev/null | head -5
echo ""
echo "## Routes"
grep -r "HandleFunc\|Handle\|router\." internal/handler/ 2>/dev/null | head -20
```
**Forge Opinion:**

- Generate the repo map on every significant change (see the hook sketch below)
- Keep it under 600 lines (fits in a context window)
- Include a "Do Not Touch" section for sensitive areas
- Update `.claude/context.md` to reference the map
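
One low-friction way to enforce the first point is a pre-push hook that regenerates the map and blocks the push when it is stale (a sketch; the `.claude/repo-map.md` output path is an assumption):

```bash
#!/bin/bash
# .git/hooks/pre-push — keep the repo map fresh
./scripts/generate-repo-map.sh > .claude/repo-map.md
if ! git diff --quiet .claude/repo-map.md; then
  echo "Repo map is stale; review and commit .claude/repo-map.md"
  exit 1
fi
```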
## Pattern 8: Diff Budget

> **Source:** Machine Learning Mastery

> "Agents derail when they edit like a human with unlimited time."

A diff budget is an explicit limit on lines changed per iteration.

**Implementation:**
```go
// scripts/check-diff-budget.go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"regexp"
	"strconv"
	"strings"
)

const maxLinesChanged = 120

// parseDiffStat sums insertions and deletions from a git --stat summary
// line like "3 files changed, 45 insertions(+), 12 deletions(-)".
func parseDiffStat(summary string) int {
	re := regexp.MustCompile(`(\d+) (?:insertions?|deletions?)`)
	total := 0
	for _, m := range re.FindAllStringSubmatch(summary, -1) {
		n, _ := strconv.Atoi(m[1])
		total += n
	}
	return total
}

func main() {
	out, err := exec.Command("git", "diff", "--stat", "--cached").Output()
	if err != nil || len(strings.TrimSpace(string(out))) == 0 {
		fmt.Println("✅ Diff budget OK: nothing staged")
		return
	}

	// The last line of --stat output is the summary.
	lines := strings.Split(strings.TrimRight(string(out), "\n"), "\n")
	changes := parseDiffStat(lines[len(lines)-1])

	if changes > maxLinesChanged {
		fmt.Printf("❌ Diff budget exceeded: %d lines (max %d)\n", changes, maxLinesChanged)
		fmt.Println("Break this into smaller commits.")
		os.Exit(1)
	}
	fmt.Printf("✅ Diff budget OK: %d lines\n", changes)
}
```
**Pre-commit hook:**

```bash
#!/bin/bash
# .git/hooks/pre-commit
MAX=120
# --numstat prints "added<TAB>deleted<TAB>file" per staged file
CHANGES=$(git diff --cached --numstat | awk '{total += $1 + $2} END {print total+0}')
if [ "$CHANGES" -gt "$MAX" ]; then
  echo "❌ Diff budget exceeded: $CHANGES lines (max $MAX)"
  echo "Break this into smaller commits or use --no-verify to bypass."
  exit 1
fi
```
**Forge Opinion:**

| Context | Budget | Rationale |
|---|---|---|
| Single feature | 120 lines | Reviewable in one pass |
| Bug fix | 50 lines | Should be surgical |
| Refactor | 200 lines | May touch many files |
| New file | 300 lines | Initial scaffolding allowed |

- Enforce mechanically, not as a suggestion
- Agent prompts should include: "Keep changes under 120 lines"
- Exceeding the budget requires human approval
## Pattern 9: Test-First Workflow

> **Source:** Machine Learning Mastery, Anthropic Best Practices

> "Executable tests turn hand-wavy requirements into objective targets."

Write failing tests BEFORE implementation. Tests are enforceable contracts.

**Workflow:**
1. Human: Describe feature in acceptance criteria
2. Agent: Write failing tests (no implementation)
3. Human: Review and approve tests
4. Agent: Implement until tests pass
5. Human: Review implementation
**Example prompt:**
```
Write acceptance tests for rate limiting:

Requirements:
- 10 requests per minute per IP
- Returns 429 after limit exceeded
- Resets after 1 minute window

Do NOT write implementation code. Only tests.
```
**Resulting tests:**
```go
// internal/middleware/ratelimit_test.go
package middleware

import (
	"testing"
	"time"

	"github.com/stretchr/testify/assert"
)

func TestRateLimiter_AllowsUnderLimit(t *testing.T) {
	rl := NewRateLimiter()
	ip := "192.168.1.1"
	for i := 0; i < 10; i++ {
		assert.True(t, rl.Allow(ip, 10, time.Minute))
	}
}

func TestRateLimiter_BlocksOverLimit(t *testing.T) {
	rl := NewRateLimiter()
	ip := "192.168.1.1"
	// Exhaust limit
	for i := 0; i < 10; i++ {
		rl.Allow(ip, 10, time.Minute)
	}
	// 11th request should fail
	assert.False(t, rl.Allow(ip, 10, time.Minute))
}

func TestRateLimiter_ResetsAfterWindow(t *testing.T) {
	rl := NewRateLimiter()
	ip := "192.168.1.1"
	// Exhaust limit
	for i := 0; i < 10; i++ {
		rl.Allow(ip, 10, time.Minute)
	}
	// Advance time past the window (clock is an injectable func for tests)
	rl.clock = func() time.Time { return time.Now().Add(61 * time.Second) }
	// Should allow again
	assert.True(t, rl.Allow(ip, 10, time.Minute))
}
```
**Forge Opinion:**

- Tests first, always
- The agent cannot modify tests without human approval
- Tests define the contract; the implementation serves the tests
- Use table-driven tests for Go (multiple cases, one test function; see the sketch below)
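
For the last point, the same rate-limiter contract in table-driven form might look like this (a sketch reusing the assumed `NewRateLimiter`/`Allow` API from the tests above):

```go
func TestRateLimiter_Table(t *testing.T) {
	cases := []struct {
		name     string
		requests int
		want     bool // result of the final Allow call
	}{
		{"under limit", 9, true},
		{"at limit", 10, true},
		{"over limit", 11, false},
	}
	for _, tc := range cases {
		t.Run(tc.name, func(t *testing.T) {
			rl := NewRateLimiter()
			var got bool
			for i := 0; i < tc.requests; i++ {
				got = rl.Allow("192.168.1.1", 10, time.Minute)
			}
			assert.Equal(t, tc.want, got)
		})
	}
}
```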
## Pattern 10: Assumption Checkpoint

> **Source:** Machine Learning Mastery, Anthropic Best Practices

> "Agents make silent assumptions about data shapes, time zones, error handling, and concurrency."

Force a rubber-duck moment before coding to surface assumptions.

**Checkpoint prompt:**
```
Before writing any code, list your assumptions about:

1. **Data shapes**: What types/structures are you expecting?
2. **Error cases**: What can fail? How should failures be handled?
3. **Concurrency**: Is this called from multiple goroutines?
4. **Time**: Are there time zones, durations, or deadlines involved?
5. **State**: What state exists before/after this code runs?
6. **Dependencies**: What packages/services does this need?

Format as a numbered list. Do not write code until I confirm.
```
**Example output:**
```markdown
## Assumptions for Rate Limiter

1. **Data shapes**: IP addresses are strings, limits are integers
2. **Error cases**:
   - Redis connection failure → fall back to in-memory
   - Invalid IP format → treat as unique key anyway
3. **Concurrency**: Multiple goroutines will call Allow() simultaneously
   - Need mutex or atomic operations
4. **Time**: Using server time, no time zones involved
   - Window is sliding, not fixed
5. **State**:
   - Limiter is initialized once at startup
   - State persists across requests (not per-request)
6. **Dependencies**:
   - sync package for mutex
   - time package for window calculation
   - Optional: redis client for distributed mode
```
**Forge Opinion:**

- Require an assumption checkpoint for any feature > 50 lines
- A human must confirm assumptions before coding proceeds
- If assumptions are wrong, correct them BEFORE code exists
- Add confirmed assumptions as code comments (see the sketch below)
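
For the last point, confirmed assumptions can live at the top of the implementation file so future edits by humans or agents see them (a sketch):

```go
// internal/middleware/ratelimit.go
//
// Confirmed assumptions (from the assumption checkpoint):
//   - Allow() is called concurrently; state is guarded by a mutex.
//   - Window is sliding and computed from server time (no time zones).
//   - On Redis failure the limiter falls back to in-memory state.
package middleware
```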
## Pattern 11: Run Recipes

> **Source:** Machine Learning Mastery

> "Agentic coding fails in teams when nobody can reproduce what the agent did."

Every significant change needs a run recipe: the exact commands to reproduce the result.

**Recipe format:**
````markdown
## Run Recipe: Add Rate Limiting Middleware

### Environment
- Go 1.23+
- Redis 7.x (optional, for distributed mode)

### Commands
```bash
# 1. Generate types
sqlc generate

# 2. Run tests (should pass)
go test ./internal/middleware/... -v

# 3. Start server with rate limiting enabled
RATE_LIMIT_ENABLED=true go run cmd/server/main.go

# 4. Test rate limiting
for i in {1..15}; do curl -s http://localhost:8080/api/health; done
# Should see 429 after 10 requests
```

### Expected Output
- Tests: 3 passed, 0 failed
- Server: starts on :8080
- Rate limit: 429 response after 10 requests

### Files Changed
- `internal/middleware/ratelimit.go` (new)
- `internal/middleware/ratelimit_test.go` (new)
- `cmd/server/main.go` (modified: added middleware)
````
**Forge Opinion:**

- Every PR must include a run recipe
- Store recipes in `.claude/recipes/` for reuse
- CI should execute the recipe to verify it
- Recipes are documentation AND verification
**Recipe template:**

````markdown
## Run Recipe: [Feature Name]

### Environment
- [Required tools and versions]

### Commands
```bash
[Exact commands, copy-pasteable]
```

### Expected Output
- [What success looks like]

### Files Changed
- [List of files with brief description]
````
---
## Pattern 12: Verification Layer (Pre-commit AI Review)
> **Source:** [Gentleman Guardian Angel](https://github.com/Gentleman-Programming/gentleman-guardian-angel)
A **verification layer** uses AI to validate code against AGENTS.md standards before commits, catching violations at the earliest possible point.
### The Problem
```
WITHOUT VERIFICATION:
┌─────────┐     ┌─────────┐     ┌─────────┐     ┌─────────┐
│ Generate│ ──▶ │ Commit  │ ──▶ │ Push    │ ──▶ │ Review  │
│ (AI)    │     │ (git)   │     │ (remote)│     │ (human) │
└─────────┘     └─────────┘     └─────────┘     └─────────┘
                                                     │
                                      Violations found LATE
                                      (expensive to fix)

WITH VERIFICATION:
┌─────────┐     ┌─────────┐     ┌─────────┐     ┌─────────┐
│ Generate│ ──▶ │ Verify  │ ──▶ │ Commit  │ ──▶ │ Push    │
│ (AI)    │     │ (AI)    │     │ (git)   │     │ (remote)│
└─────────┘     └─────────┘     └─────────┘     └─────────┘
                     │
      Violations caught EARLY
      (cheap to fix)
```
### GGA: Pre-commit AI Validation
[Gentleman Guardian Angel (GGA)](https://github.com/Gentleman-Programming/gentleman-guardian-angel) is a provider-agnostic AI-powered code review tool that runs as a Git pre-commit hook.
**Key features:**
- Validates staged files against AGENTS.md rules
- Supports Claude, Gemini, OpenAI, Ollama
- Smart caching (skips unchanged files)
- Pure Bash, zero runtime dependencies
### Installation
```bash
# macOS
brew tap Gentleman-Programming/gga
brew install gga

# Initialize in project
cd your-forge-project
gga init     # Creates .gga config and AGENTS.md template
gga install  # Installs pre-commit hook
```

### Configuration
```bash
# .gga (project config)
PROVIDER=claude       # AI provider
RULES_FILE=AGENTS.md  # Standards file
INCLUDE_PATTERNS="*.go,*.templ,*.svelte,*.sql"
EXCLUDE_PATTERNS="*_test.go,vendor/*,node_modules/*"
STRICT_MODE=false     # Fail on ambiguous responses
```
### How It Works
```
PRE-COMMIT FLOW:
┌─────────────────────────────────────────────────────────────────┐
│ 1. Developer runs `git commit`                                  │
│ 2. GGA pre-commit hook triggers                                 │
│ 3. GGA checks cache for unchanged files                         │
│ 4. For changed files: sends to AI with AGENTS.md rules          │
│ 5. AI returns: PASS, FAIL (with violations), or AMBIGUOUS       │
│ 6. If FAIL: commit blocked, violations displayed                │
│ 7. If PASS: commit proceeds                                     │
│ 8. Results cached (until AGENTS.md changes)                     │
└─────────────────────────────────────────────────────────────────┘
```
### Cache Invalidation
GGA maintains a smart cache to avoid re-reviewing unchanged files:
| Event | Cache Behavior |
|---|---|
| File unchanged | Use cached result |
| File modified | Re-review file |
| AGENTS.md changed | Invalidate ALL cache, re-review everything |
| .gga config changed | Invalidate ALL cache |
```bash
# Cache management
gga cache status     # Show cache stats
gga cache clear      # Clear project cache
gga cache clear-all  # Clear all caches (all projects)

# Bypass cache for one commit
gga run --no-cache
```
### Forge Integration

For Forge projects, use this verification flow:
```makefile
# Makefile
.PHONY: verify
verify:
	@echo "Running verification pipeline..."
	templ generate
	sqlc compile
	go build ./...
	golangci-lint run
	gga run  # AI verification against AGENTS.md

.PHONY: commit
commit: verify
	git add -A
	git commit
```
**Pre-commit hook (alternative to GGA):**
```bash
#!/bin/bash
# .git/hooks/pre-commit

# Fast checks first
templ generate || exit 1
sqlc compile || exit 1
go build ./... || exit 1
golangci-lint run || exit 1

# AI verification (only staged Go/Templ/Svelte files)
STAGED=$(git diff --cached --name-only --diff-filter=ACM | grep -E '\.(go|templ|svelte)$')
if [ -n "$STAGED" ]; then
  gga run || exit 1
fi

echo "✅ All checks passed"
```
### What AI Validates

The AI reviews staged files against AGENTS.md rules:
| Category | Example Violations |
|---|---|
| Code Style | JSON returned from a handler (should be HTML) |
| Patterns | Raw SQL in a handler (should use sqlc) |
| Boundaries | Modifications to `internal/auth/` |
| Testing | Missing tests for new handlers |
| HTMX | Missing `hx-swap` attribute |
| Islands | Island used where HTMX would suffice |
### Output Format
```
$ gga run
Reviewing 3 staged files...

✅ internal/handler/tournament.go
   PASS: Follows handler patterns

❌ internal/handler/team.go
   FAIL: Line 45: Returns JSON instead of HTML
   Rule: "Handlers return HTML via Templ, not JSON"
   Suggestion: Use templates.TeamCard(team).Render(ctx, w)

❌ web/islands/ScoreInput.svelte
   FAIL: Line 12: Uses global state
   Rule: "Self-contained with props, no global state"
   Suggestion: Pass score as prop instead of importing from store

1 passed, 2 failed
Commit blocked. Fix violations and try again.
```
### Forge Opinion
| Scenario | Recommendation |
|---|---|
| Solo developer | Optional — use for learning standards |
| Team (2-5) | Recommended — consistent enforcement |
| Enterprise | Required — governance compliance |
| CI/CD | Run `gga run` in the pipeline for auditing |
**Benefits:**
- Catches violations before commit (not in PR review)
- Enforces AGENTS.md standards automatically
- Reduces cognitive load on human reviewers
- Creates audit trail of compliance
## Anti-Patterns to Avoid
| Anti-Pattern | Why It's Bad | Better Approach |
|---|---|---|
| No component registry | AI invents inconsistent components | Finite registry + approval process |
| No verification | AI code may not compile | Fast feedback loop (pre-commit) |
| Generic prompts | AI doesn't know project conventions | Project-specific context files |
| Manual code review only | Slow, inconsistent | Automated checks + targeted review |
| Single workflow mode | Over-verifies trivial changes | Fast/careful based on risk |
| No repo map | Agent makes blind, broad refactors | Machine-readable project snapshot |
| Unlimited edits | Agent changes too much at once | Diff budget (120 lines default) |
| Implementation first | Vague requirements, wrong code | Test-first workflow |
| Silent assumptions | Wrong architectural decisions | Assumption checkpoint before coding |
| No reproducibility | Team can't verify agent work | Run recipes for every change |
| No AGENTS.md | AI doesn't know project standards | AGENTS.md at project root |
| No verification layer | Violations caught late in PR | Pre-commit AI review (GGA) |
## References

### Internal

- Signal Forge Skills: `signal-forge/.claude/skills/`
- AIX Agents: `aix/.claude/AGENTS.md`
- AES Workflow: `agentic-engineering-system/.claude/`
- Shared Rules: `dev/.shared/cursorrules/`
### AGENTS.md Standard

- AGENTS.md Specification: agents.md
- How to Write a Great AGENTS.md: GitHub Blog
- Improve AI Code Output: Builder.io
- Factory Documentation: docs.factory.ai
### Verification Tools

- Gentleman Guardian Angel: [GitHub](https://github.com/Gentleman-Programming/gentleman-guardian-angel)
### Best Practices

- Agentic Coding Tips: Machine Learning Mastery
- Claude Code Best Practices: Anthropic Engineering
- Agentic Coding Principles: agentic-coding.github.io