Progress

What's done. What's in progress. What's next.

Last updated: December 2025


Current Phase: Foundation

Rebuilding rally-hq (an existing Next.js app) in Go + HTMX + Svelte islands to prove conventions before automating.

Why rebuild? rally-hq already exists as a working Next.js application. Rebuilding it with Go + HTMX provides known functional requirements and enables a direct comparison of AI code generation quality between the React-era approach and the primitive-first approach.


Done

  • Research phase complete (see research/)

    • Red team analysis of AI-native stack options
    • Framework comparison by AI-friendliness
    • Identified 6 core conventions
    • Documented competitive landscape
  • Project documentation initialized

    • README.md - project overview
    • DECISIONS.md - stack choices
    • SCOPE.md - what we're building
    • PROGRESS.md - this file
    • LEARNINGS.md - retrospective log
  • Stack decisions finalized (core wiring sketched below)

    • Core (80%): Go 1.23+, net/http, Templ, HTMX 2.x
    • Islands (20%): Svelte 5, Vite, TypeScript
    • Database: PostgreSQL 16 + sqlc
    • Real-time: SSE (stdlib)
    • Styling: Tailwind 4.0
    • Deploy: Fly.io
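
As a concrete reference for these decisions, here is a minimal sketch of how the core pieces are meant to compose: a net/http handler builds the data and a Templ component writes the HTML, with HTMX attributes carried in the markup. The Tournament type, the route, and the hand-rolled templ.ComponentFunc are placeholders; in the real project the component would come from a generated .templ file.

```go
// Hedged sketch of the core request → handler → HTML loop (Go net/http +
// Templ + HTMX). All names here are illustrative, not the final API.
package main

import (
	"context"
	"fmt"
	"io"
	"log"
	"net/http"

	"github.com/a-h/templ"
)

type Tournament struct {
	ID   string
	Name string
}

// tournamentCard stands in for a generated Templ component; a hand-rolled
// templ.ComponentFunc keeps the sketch compilable without codegen.
func tournamentCard(t Tournament) templ.Component {
	return templ.ComponentFunc(func(ctx context.Context, w io.Writer) error {
		// hx-get lets HTMX re-fetch just this fragment, with no page-level JS.
		_, err := fmt.Fprintf(w,
			`<div id="t-%s" hx-get="/tournaments/%s" hx-trigger="every 30s">%s</div>`,
			t.ID, t.ID, t.Name)
		return err
	})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("GET /tournaments/{id}", func(w http.ResponseWriter, r *http.Request) {
		t := Tournament{ID: r.PathValue("id"), Name: "Spring Open"}
		if err := tournamentCard(t).Render(r.Context(), w); err != nil {
			http.Error(w, "render failed", http.StatusInternalServerError)
		}
	})
	log.Fatal(http.ListenAndServe(":8080", mux))
}
```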

In Progress

  • Initialize rally-hq project with chosen stack

Next Up

After rally-hq initialization:

  1. Define tournament domain model (Go structs + SQL schema; sketched after this list)
  2. Set up PostgreSQL + sqlc code generation
  3. Build first handler (CreateTournament)
  4. Build first Templ template (TournamentCard)
  5. Add HTMX interactions (form submission, live updates)
  6. Establish testing pattern (Go table-driven tests; sketched after this list)
  7. Deploy to Fly.io to prove pipeline
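
For steps 1-3, a hedged sketch of what the domain model and the sqlc inputs might look like. The table, fields, and query names are assumptions to be replaced by the real design once the domain is defined.

```go
// Hedged sketch of a tournament domain type plus the SQL that sqlc would
// read. Schema and query names are placeholders, not decisions.
package domain

import "time"

// Tournament mirrors a proposed tournaments table.
type Tournament struct {
	ID        int64
	Name      string
	StartsAt  time.Time
	CreatedAt time.Time
}

// The matching SQL lives in plain files that sqlc reads, for example:
//
//	-- schema.sql
//	CREATE TABLE tournaments (
//	    id         BIGSERIAL PRIMARY KEY,
//	    name       TEXT NOT NULL,
//	    starts_at  TIMESTAMPTZ NOT NULL,
//	    created_at TIMESTAMPTZ NOT NULL DEFAULT now()
//	);
//
//	-- query.sql
//	-- name: CreateTournament :one
//	INSERT INTO tournaments (name, starts_at)
//	VALUES ($1, $2)
//	RETURNING *;
//
// Running `sqlc generate` would emit a Queries type with a typed
// CreateTournament(ctx, params) method for the handler to call, so no
// hand-written row scanning is needed.
```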
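
For step 6, a hedged sketch of the table-driven test pattern, written as it would appear in a _test.go file. The validateName helper is invented here purely so the test has something to exercise.

```go
// Hedged sketch of a Go table-driven test; the validation rule is a placeholder.
package domain

import (
	"errors"
	"strings"
	"testing"
)

var errEmptyName = errors.New("tournament name is required")

// validateName stands in for whatever domain validation eventually exists.
func validateName(name string) error {
	if strings.TrimSpace(name) == "" {
		return errEmptyName
	}
	return nil
}

func TestValidateName(t *testing.T) {
	tests := []struct {
		name    string
		input   string
		wantErr bool
	}{
		{"accepts a normal name", "Spring Open", false},
		{"rejects empty string", "", true},
		{"rejects whitespace only", "   ", true},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			err := validateName(tt.input)
			if (err != nil) != tt.wantErr {
				t.Fatalf("validateName(%q) error = %v, wantErr %v", tt.input, err, tt.wantErr)
			}
		})
	}
}
```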

Milestones

| Milestone | Status |
| --- | --- |
| Stack decided | Done |
| rally-hq initialized | Not started |
| Empty app deployed | Not started |
| First feature (create tournament) | Not started |
| Tournament registration flow | Not started |
| Bracket generation | Not started |
| Match scoring | Not started |
| rally-hq v1 complete | Not started |
| forge new extracted | Not started |

Blockers

None. Stack decisions are finalized. Ready to initialize rally-hq.


Key Hypotheses to Validate

These hypotheses will be tested during rally-hq development:

| ID | Hypothesis | Measurement | Target |
| --- | --- | --- | --- |
| H1 | Go + HTMX produces more reliable AI code | AI code compile rate on first attempt | >85% |
| H2 | HTML responses are simpler than JSON → JS → DOM | Bug count vs equivalent SvelteKit implementation | Fewer bugs |
| H3 | Islands needed for <20% of features | Feature count requiring Svelte islands | <20% |
| H4 | SSE handles real-time without WebSockets | Latency for live score updates | <100ms |
| H5 | sqlc is more AI-friendly than ORMs | AI generates correct queries | >90% correct |
| H6 | Bundle stays under 50kb for most pages | Bundle size per page | <50kb |
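
For H4, a hedged sketch of what "SSE (stdlib)" means in practice: a plain net/http handler that streams events and flushes after each one. The route, event name, and ticker-driven payload are assumptions; real events would come from match scoring, and HTMX's SSE extension (or a small island) would swap the fragments in.

```go
// Hedged sketch of live score updates over Server-Sent Events using only
// the standard library. Route and payload are placeholders.
package main

import (
	"fmt"
	"log"
	"net/http"
	"time"
)

func liveScores(w http.ResponseWriter, r *http.Request) {
	flusher, ok := w.(http.Flusher)
	if !ok {
		http.Error(w, "streaming unsupported", http.StatusInternalServerError)
		return
	}
	w.Header().Set("Content-Type", "text/event-stream")
	w.Header().Set("Cache-Control", "no-cache")

	ticker := time.NewTicker(time.Second) // stand-in for real score events
	defer ticker.Stop()
	for {
		select {
		case <-r.Context().Done():
			return
		case tick := <-ticker.C:
			// Each event carries an HTML fragment the client can swap in place.
			fmt.Fprintf(w, "event: score\ndata: <span>updated %s</span>\n\n",
				tick.Format(time.TimeOnly))
			flusher.Flush()
		}
	}
}

func main() {
	http.HandleFunc("GET /live/scores", liveScores)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```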

Note on H1: The simpler "request → handler → HTML → browser" pattern should produce more reliable AI code than frameworks built around client-side state.

Document findings in LEARNINGS.md.


Success Criteria

Phase 1 (Foundation) succeeds if:

  • rally-hq MVP is deployed and functional on Fly.io
  • H1-H6 hypotheses are validated or documented as failed
  • Template component registry pattern established (Templ)
  • At least one workflow is modeled as a state machine (see the sketch after this list)
  • Go table-driven tests are running in CI
  • Svelte islands used for <20% of features
  • Bundle size under 50kb for most pages
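
A hedged sketch of what "modeled as a state machine" could look like for the tournament workflow. The states and transitions are assumptions, not a settled domain design; the point is that legal moves live in one inspectable table.

```go
// Hedged sketch of a tournament lifecycle state machine. State names and
// allowed transitions are placeholders.
package domain

import "fmt"

type TournamentState string

const (
	StateDraft        TournamentState = "draft"
	StateRegistration TournamentState = "registration"
	StateBracketSet   TournamentState = "bracket_set"
	StateInPlay       TournamentState = "in_play"
	StateComplete     TournamentState = "complete"
)

// transitions enumerates every legal move; anything absent is rejected.
var transitions = map[TournamentState][]TournamentState{
	StateDraft:        {StateRegistration},
	StateRegistration: {StateBracketSet},
	StateBracketSet:   {StateInPlay},
	StateInPlay:       {StateComplete},
}

// Transition returns the target state if the move is legal, or an error and
// the unchanged state if it is not.
func Transition(from, to TournamentState) (TournamentState, error) {
	for _, allowed := range transitions[from] {
		if allowed == to {
			return to, nil
		}
	}
	return from, fmt.Errorf("illegal transition from %q to %q", from, to)
}
```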

Failure Criteria

Consider alternatives if any of these occur:

  • Go + HTMX pattern blocks 2+ features with no island workaround
  • AI error rate on Go exceeds SvelteKit equivalent
  • Templ templating proves too limiting for UI needs
  • SSE proves insufficient for real-time requirements
  • Island integration with HTMX is too complex

If pivot is required, document in LEARNINGS.md and update DECISIONS.md.


Check-in Schedule

| Checkpoint | Date | Questions to Answer |
| --- | --- | --- |
| Week 1 | +7 days | First handler and Templ template working? PostgreSQL connected? |
| Week 2 | +14 days | Auth working? First deploy to Fly.io successful? |
| Week 4 | +28 days | Core feature complete? SSE live updates working? |
| Phase 1 End | TBD | All success criteria met? Hypotheses validated? Island usage measured? |

AI Interaction Log

Track AI code generation quality during development:

| Date | Task | AI Tool | Compile Success | Errors | Notes |
| --- | --- | --- | --- | --- | --- |
| (template) | description | Claude/GPT | Yes/No | count | what went wrong |

This log provides the primary data for validating H1 and H2.