AI Integration Levels
Status: Draft
Purpose: Decision framework for how deeply to integrate AI into an application
The Four Dimensions
AI integration isn't just one spectrum. There are four orthogonal dimensions:
┌─────────────────────────────────────────────────────────────────┐
│  DIMENSION 1: AI CONSUMER                                       │
│  How your app USES AI                                           │
│  Level 0  →  Level 1  →  Level 2  →  Level 3                    │
│  (none)      (assist)    (generate)  (co-create)                │
└─────────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────┐
│  DIMENSION 2: AI PRODUCER                                       │
│  How your app IS CONSUMED BY external AI                        │
│  Invisible  →  Crawlable  →  Agent-Ready  →  Agent-Enabled      │
└─────────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────┐
│  DIMENSION 3: AGENT ORCHESTRATION                               │
│  How your app COORDINATES with other agents                     │
│  None  →  MCP Tools  →  A2A Protocol  →  Orchestration Hub      │
└─────────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────┐
│  DIMENSION 4: AGENT COMMERCE                                    │
│  How your app TRANSACTS via agents                              │
│  None  →  Read-Only  →  ACP Checkout  →  AP2 Delegated          │
└─────────────────────────────────────────────────────────────────┘
An app can be at any combination. For example:
- SIX: Consumer L3, Producer Agent-Ready, No Orchestration, No Commerce
- Walmart (Sparky): Consumer L2, Producer Agent-Enabled, Orchestration Hub, Full Commerce
Dimension 1: AI Consumer (Original L0-L3)
How your app USES AI at runtime.
Level 0           Level 1           Level 2           Level 3
No Runtime AI     AI-Assisted       AI-Generated      AI Co-Creation
    │                 │                 │                 │
    ▼                 ▼                 ▼                 ▼
Static CRUD       Enhanced          Agent picks       Bidirectional
                  queries           components        state sync
Level 0: No Runtime AI
What it is: Traditional web application. AI may assist development (build-time), but the running app has no AI features.
User experience: Deterministic. Same input → same output.
Examples from workspace:
- Home Dashboard (IoT control)
- Signal Dispatch Blog (static content)
- Signal X Studio Website (portfolio)
Tech requirements:
- Standard Go + HTMX + Templ
- No LLM API calls
- No streaming complexity
- No AI cost management
When to use:
- CRUD applications
- Content sites
- Admin dashboards
- Apps where determinism is critical
Forge stack mapping:
Go + net/http + Templ + HTMX + PostgreSQL
No additional AI infrastructure needed
Level 1: AI-Assisted Queries
What it is: AI enhances existing features but doesn't drive the UI. Search is smarter. Analytics include insights. Suggestions appear.
User experience: Mostly deterministic with AI-enhanced features. User may not realize AI is involved.
Examples from workspace:
- RIX (tournament management with smart scheduling)
- Nino Chavez Gallery (AI-powered photo tagging)
- Nino Chavez Website (SEO insights)
AI Features:
- Search enhancement (semantic search, query expansion)
- Content enrichment (auto-tagging, summarization)
- Recommendations (based on embeddings)
- Analytics insights (trend detection)
Tech requirements:
- AI service layer (internal/ai/)
- Rate limiting (simple, per-IP)
- Error fallback (degrade to non-AI)
- Cost tracking (basic logging)
Forge stack mapping:
Go + HTMX + Templ
+ internal/ai/ package
+ Rate limiting middleware
+ Fallback to non-AI
SSE optional (for streaming summaries)
Svelte islands NOT needed
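A minimal sketch of the fallback requirement above. It assumes a hypothetical Client interface in internal/ai/; if the model call fails or times out, the feature degrades to its non-AI behavior:
// internal/ai/fallback.go (illustrative)
package ai

import (
    "context"
    "time"
)

// Client is a hypothetical wrapper around whichever LLM provider is configured.
type Client interface {
    Summarize(ctx context.Context, text string) (string, error)
}

// SummarizeOrFallback degrades gracefully: if the AI call fails or times out,
// the caller gets the plain non-AI excerpt and the feature still works.
func SummarizeOrFallback(ctx context.Context, c Client, text, fallback string) string {
    ctx, cancel := context.WithTimeout(ctx, 3*time.Second)
    defer cancel()
    summary, err := c.Summarize(ctx, text)
    if err != nil {
        return fallback // Level 1 rule: the feature must work without AI
    }
    return summary
}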
Decision criteria:
| Question | If Yes → Level 1 |
|---|---|
| Can the feature work without AI? | Yes, AI just makes it better |
| Is AI output one of many data sources? | Yes, mixed with DB queries |
| Does user interact directly with AI? | No, AI is behind the scenes |
| Is streaming necessary? | No, batch response is fine |
Level 2: AI-Generated Content
What it is: AI decides what to show. Agent selects components, generates layouts, creates content. But it's one-way: user requests → AI responds.
User experience: AI-driven output. User sees what AI chose. No back-and-forth refinement.
Examples from workspace:
- Signal Forge (generates decks, POVs, papers)
- Commerce Prompt Analyzer (generates visibility analysis)
- BIX (generates analytics dashboards)
AI Features:
- Layout generation (AI picks components from registry)
- Content creation (documents, reports, summaries)
- Multi-model comparison (council pattern)
- Structured output (JSON specs → rendered components)
Tech requirements:
- Everything from Level 1, plus:
- Output validation (schema validation)
- Component registry (whitelist what AI can use)
- SSE streaming (for real-time generation feedback)
- Cost guards (per-request and per-session limits)
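The cost-guard requirement can start as a per-session budget check consulted before each AI call; per-request limits follow the same shape. A sketch, with the type name and dollar-based accounting chosen for illustration:
// internal/ai/costguard.go (illustrative)
package ai

import "sync"

// CostGuard enforces a per-session spending limit before an AI call is made.
type CostGuard struct {
    mu    sync.Mutex
    spent map[string]float64 // session ID -> estimated dollars spent
    limit float64
}

func NewCostGuard(limit float64) *CostGuard {
    return &CostGuard{spent: make(map[string]float64), limit: limit}
}

// Allow records the estimated cost and reports whether the request may proceed.
func (g *CostGuard) Allow(sessionID string, estimated float64) bool {
    g.mu.Lock()
    defer g.mu.Unlock()
    if g.spent[sessionID]+estimated > g.limit {
        return false
    }
    g.spent[sessionID] += estimated
    return true
}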
Forge stack mapping:
Go + HTMX + Templ
+ internal/ai/ with validation
+ Component registry
+ SSE streaming for progress
+ Cost tracking middleware
For complex visualizations:
+ Svelte island (renders AI-selected components)
+ A2UI-like pattern (specs → components)
A2UI-like pattern in Go + Templ:
// AI returns spec
type ComponentSpec struct {
    Type  string         `json:"type"`
    Props map[string]any `json:"props"`
}

// Server renders based on spec
func RenderFromSpec(spec ComponentSpec) templ.Component {
    switch spec.Type {
    case "tournament-card":
        return TournamentCard(spec.Props)
    case "match-grid":
        return MatchGrid(spec.Props)
    case "bracket-view":
        // Svelte island for complex interactivity
        return BracketIsland(spec.Props)
    default:
        return ErrorComponent("Unknown component")
    }
}
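Before a spec reaches RenderFromSpec it should be validated against the component registry. A sketch building on the ComponentSpec type above (assumes encoding/json and fmt are imported); the allowed-components map is the whitelist:
// allowedComponents is the registry: the only component types AI may request.
var allowedComponents = map[string]bool{
    "tournament-card": true,
    "match-grid":      true,
    "bracket-view":    true,
}

// ParseSpec validates raw LLM output before it is rendered.
func ParseSpec(raw []byte) (ComponentSpec, error) {
    var spec ComponentSpec
    if err := json.Unmarshal(raw, &spec); err != nil {
        return ComponentSpec{}, fmt.Errorf("invalid spec JSON: %w", err)
    }
    if !allowedComponents[spec.Type] {
        return ComponentSpec{}, fmt.Errorf("component %q is not in the registry", spec.Type)
    }
    return spec, nil
}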
Decision criteria:
| Question | If Yes → Level 2 |
|---|---|
| Does AI decide what components to show? | Yes |
| Is output structured (not just text)? | Yes, JSON specs or structured data |
| Does user need to see generation progress? | Yes, streaming helps UX |
| Is this one-shot? (request → response, done) | Yes, no refinement loop |
Level 3: AI Co-Creation
What it is: User and AI collaborate in real-time. Bidirectional state sync. User can refine, redirect, iterate with AI.
User experience: Conversational. "Make it cheaper." "Add more variety." AI responds, user adjusts, repeat.
Examples from workspace:
- SIX (real-time layout generation with refinement)
- AIX (multi-LLM analysis with iteration)
- CIX (AI advisor for commerce transformation)
AI Features:
- Bidirectional state sync (AG-UI pattern)
- Conversational refinement ("make it X")
- Real-time updates (live inventory, live data)
- Collaborative editing (user + AI modify same artifact)
Tech requirements:
- Everything from Level 2, plus:
- AG-UI protocol (or equivalent)
- Shared state model (validated, synchronized)
- Refinement history (track conversation)
- Complex streaming (events, not just text)
Forge stack mapping:
Go + HTMX + Templ (for static parts)
+ Svelte island (for co-creation UI)
+ AG-UI streaming (SSE with event types)
+ Shared state (Go ↔ Svelte sync)
+ A2UI widget specs (validated JSON)
This is where the "20% islands" justification comes in.
AG-UI in Go context:
// Event types
type EventType string

const (
    RunStarted    EventType = "RUN_STARTED"
    RunFinished   EventType = "RUN_FINISHED"
    StateSnapshot EventType = "STATE_SNAPSHOT"
    StateDelta    EventType = "STATE_DELTA"
    ToolCallStart EventType = "TOOL_CALL_START"
    ToolCallEnd   EventType = "TOOL_CALL_END"
)

type Event struct {
    Type      EventType      `json:"type"`
    Timestamp int64          `json:"timestamp"`
    Data      map[string]any `json:"data,omitempty"`
}

// Stream events via SSE until the channel is closed
func StreamAGUI(w http.ResponseWriter, events <-chan Event) {
    flusher, ok := w.(http.Flusher)
    if !ok {
        http.Error(w, "streaming unsupported", http.StatusInternalServerError)
        return
    }
    w.Header().Set("Content-Type", "text/event-stream")
    w.Header().Set("Cache-Control", "no-cache")
    for event := range events {
        data, err := json.Marshal(event)
        if err != nil {
            continue // skip events that cannot be serialized
        }
        fmt.Fprintf(w, "data: %s\n\n", data)
        flusher.Flush()
    }
}
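A hypothetical handler wiring the pieces together: the agent run writes events to a channel in a goroutine and StreamAGUI forwards them to the browser (assumes the time package is imported):
func handleCoCreate(w http.ResponseWriter, r *http.Request) {
    events := make(chan Event)
    go func() {
        defer close(events)
        events <- Event{Type: RunStarted, Timestamp: time.Now().UnixMilli()}
        // ... call the model, emit STATE_DELTA events as the shared state evolves ...
        events <- Event{Type: RunFinished, Timestamp: time.Now().UnixMilli()}
    }()
    StreamAGUI(w, events)
}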
Decision criteria:
| Question | If Yes → Level 3 |
|---|---|
| Does user refine AI output conversationally? | Yes, "make it X" |
| Is there real-time state sync? | Yes, both sides update |
| Does the interaction feel like collaboration? | Yes, back-and-forth |
| Is complex client-side logic needed? | Yes, can't do with HTMX alone |
Decision Matrix
Use this to pick the right level:
| Factor | Level 0 | Level 1 | Level 2 | Level 3 |
|---|---|---|---|---|
| AI runtime calls | None | Few | Many | Continuous |
| User awareness of AI | N/A | Low | Medium | High |
| Output structure | N/A | Simple | Structured | Complex |
| Interactivity | None | Async | One-shot | Real-time |
| State model | Server | Server | Server | Shared |
| Cost concern | None | Low | Medium | High |
| Fallback complexity | N/A | Simple | Medium | Complex |
| Svelte islands | No | No | Maybe | Yes |
Forge Default: Level 0-1
The Forge opinion:
Default to Level 0 or 1. Escalate to Level 2 only when AI-driven output is a core feature. Escalate to Level 3 only when collaborative refinement is essential to the UX.
Why:
- Levels 0-1 work entirely with Go + HTMX + Templ
- Levels 2-3 require Svelte islands and additional infrastructure
- Higher levels have higher costs, complexity, and failure modes
- Most features don't need co-creation
Level by Feature (rally-hq Example)
| Feature | Level | Rationale |
|---|---|---|
| Create tournament | 0 | Form submission, no AI |
| Team registration | 0 | CRUD, no AI |
| Browse tournaments | 0-1 | Maybe AI search enhancement |
| Bracket generation | 2 | AI suggests optimal brackets |
| Match scheduling | 2 | AI optimizes schedule |
| Live scoring | 0 | SSE, but no AI (just real-time data) |
| "Rebalance bracket" | 3 | Conversational refinement |
| Tournament insights | 1 | AI-generated analytics |
Result for rally-hq:
- 80% Level 0-1 (Go + HTMX + Templ)
- 15% Level 2 (AI generation, maybe island)
- 5% Level 3 (Svelte island with AG-UI)
This matches the "Svelte islands for 20%" hypothesis.
Cost and Complexity by Level
| Level | Infra Cost | AI Cost | Complexity | Failure Modes |
|---|---|---|---|---|
| 0 | Low | $0 | Low | Standard web |
| 1 | Low | Low ($10-50/mo) | Low-Medium | AI fallback |
| 2 | Medium | Medium ($50-200/mo) | Medium | Validation, streaming |
| 3 | High | High ($200+/mo) | High | State sync, real-time |
Migration Path
Start low, escalate when needed:
Level 0 → Level 1
Add: internal/ai/ package
Add: Rate limiting
Add: Fallback handlers
Keep: All existing HTMX/Templ
Level 1 → Level 2
Add: Output validation
Add: Component registry
Add: SSE streaming
Add: Cost tracking
Maybe: Svelte island for complex renders
Level 2 → Level 3
Add: AG-UI event streaming
Add: Shared state model
Add: Svelte island (required)
Add: Refinement history
Add: Complex error handling
When NOT to Escalate
Stay at Level 0 if:
- Feature works fine without AI
- Determinism is important
- Cost is a concern
- You're building MVP
Stay at Level 1 if:
- AI is a "nice to have," not core
- Simple enhancement is enough
- No need for structured AI output
Stay at Level 2 if:
- One-shot generation is sufficient
- No conversational refinement needed
- Server-side rendering works
Only go to Level 3 if:
- Collaboration is the core UX
- Real-time sync is essential
- You've validated the need with users
Dimension 2: AI Producer
How your app IS CONSUMED BY external AI agents.
Invisible →       Crawlable →       Agent-Ready →     Agent-Enabled
    │                 │                 │                 │
    ▼                 ▼                 ▼                 ▼
No AI             Schema.org        LLMs.txt          MCP/A2A
consideration     markup            + feeds           endpoints
Invisible (Default)
- No consideration for AI consumers
- Standard HTML, no structured data
- AI can scrape but gets poor results
Crawlable
- Schema.org markup for products, organizations, etc.
- OpenGraph/meta tags for social sharing
- Sitemap for discovery
- AI gets better context from structured data
Forge opinion: All apps should be at least Crawlable.
<!-- Schema.org JSON-LD in a Templ-rendered page (placeholder values filled server-side) -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "{{ .Name }}",
  "description": "{{ .Description }}",
  "offers": {
    "@type": "Offer",
    "price": "{{ .Price }}",
    "availability": "https://schema.org/InStock"
  }
}
</script>
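The same payload can also be built server-side with encoding/json instead of hand-written markup; a minimal sketch (package and function names are illustrative):
package seo

import "encoding/json"

// ProductJSONLD returns the Schema.org JSON-LD blob to embed in the page head.
func ProductJSONLD(name, description, price string) (string, error) {
    payload := map[string]any{
        "@context":    "https://schema.org",
        "@type":       "Product",
        "name":        name,
        "description": description,
        "offers": map[string]any{
            "@type":        "Offer",
            "price":        price,
            "availability": "https://schema.org/InStock",
        },
    }
    b, err := json.Marshal(payload)
    return string(b), err
}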
Agent-Ready
Source: Walmart GEO Architecture
- LLMs.txt — Governance file for AI crawlers (like robots.txt for agents)
- Agent-exclusive feeds — Real-time inventory, pricing, availability APIs
- Standardized formats — JSON feeds optimized for LLM consumption
- GEO optimization — Content structured for AI extraction
/.well-known/llms.txt # What agents can/can't access
/api/feeds/products.json # Structured product feed
/api/feeds/inventory.json # Real-time stock
/api/feeds/pricing.json # Current prices
LLMs.txt example:
# LLMs.txt - AI Agent Access Policy
User-agent: *
Allow: /api/feeds/
Disallow: /api/internal/
Rate-limit: 100/minute
Attribution-required: yes
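A sketch of the Go wiring for this surface; the paths mirror the layout above, while the static llms.txt location and the products callback are assumptions:
package feeds

import (
    "encoding/json"
    "net/http"
)

// Register mounts the agent-facing endpoints on an existing mux.
func Register(mux *http.ServeMux, products func() any) {
    mux.HandleFunc("/.well-known/llms.txt", func(w http.ResponseWriter, r *http.Request) {
        w.Header().Set("Content-Type", "text/plain; charset=utf-8")
        http.ServeFile(w, r, "static/llms.txt") // governance file checked into the repo
    })
    mux.HandleFunc("/api/feeds/products.json", func(w http.ResponseWriter, r *http.Request) {
        w.Header().Set("Content-Type", "application/json")
        json.NewEncoder(w).Encode(products()) // structured feed for LLM consumption
    })
}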
Agent-Enabled
Source: Walmart Sparky, Best Buy ACP/AP2
- MCP Server — Expose tools for AI agents to use
- A2A Protocol — Agent-to-agent communication endpoints
- Orchestration hub — Central point coordinating multiple agents
// MCP Server exposing tools
// (illustrative sketch; the exact builder API depends on the MCP server library used)
mcp.NewServer().
    Tool("get_product", getProductHandler).
    Tool("check_inventory", checkInventoryHandler).
    Tool("get_recommendations", getRecommendationsHandler).
    Serve()
Decision criteria:
| Question | If Yes → Level |
|---|---|
| Do you want AI to understand your content? | → Crawlable |
| Do external AI agents need real-time data? | → Agent-Ready |
| Should AI agents be able to take actions? | → Agent-Enabled |
Dimension 3: Agent Orchestration
How your app COORDINATES with other agents.
None →            MCP Tools →       A2A Protocol →    Orchestration Hub
    │                 │                 │                 │
    ▼                 ▼                 ▼                 ▼
Solo              Expose            Talk to           Coordinate
app               tools             other agents      many agents
None (Default)
- App doesn't interact with other AI agents
- All AI is internal (if any)
MCP Tools
Source: Anthropic MCP Protocol
- Expose discrete tools for AI agents to call
- Each tool has schema, description, handler
- Stateless operations
// internal/mcp/server.go
type Tool struct {
    Name        string
    Description string
    InputSchema json.RawMessage
    Handler     func(input json.RawMessage) (json.RawMessage, error)
}

var tools = []Tool{
    {
        Name:        "get_tournament",
        Description: "Get tournament details by ID",
        InputSchema: json.RawMessage(`{"type":"object","properties":{"id":{"type":"string"}}}`),
        Handler:     getTournamentHandler,
    },
    {
        Name:        "list_matches",
        Description: "List matches for a tournament",
        InputSchema: json.RawMessage(`{"type":"object","properties":{"tournament_id":{"type":"string"}}}`),
        Handler:     listMatchesHandler,
    },
}
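A simple dispatcher over the registry above completes the picture; it assumes fmt is imported alongside encoding/json:
// CallTool looks up a tool by name and invokes its handler.
func CallTool(name string, input json.RawMessage) (json.RawMessage, error) {
    for _, t := range tools {
        if t.Name == name {
            return t.Handler(input)
        }
    }
    return nil, fmt.Errorf("unknown tool: %s", name)
}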
A2A Protocol
Source: Google A2A Protocol
- Agent-to-agent communication
- Capability discovery
- Task delegation between agents
// A2A capability advertisement
type AgentCapabilities struct {
    AgentID      string   `json:"agent_id"`
    Capabilities []string `json:"capabilities"`
    Endpoint     string   `json:"endpoint"`
}

// A2A task request
type TaskRequest struct {
    FromAgent string         `json:"from_agent"`
    ToAgent   string         `json:"to_agent"`
    Task      string         `json:"task"`
    Context   map[string]any `json:"context"`
}
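A hypothetical discovery endpoint built on the struct above (assumes net/http and encoding/json are imported); the agent ID, capability names, and route are placeholders, not the A2A wire format:
// handleCapabilities lets other agents discover what this app can do.
func handleCapabilities(w http.ResponseWriter, r *http.Request) {
    caps := AgentCapabilities{
        AgentID:      "rally-hq",
        Capabilities: []string{"get_tournament", "list_matches"},
        Endpoint:     "https://example.com/a2a/tasks",
    }
    w.Header().Set("Content-Type", "application/json")
    json.NewEncoder(w).Encode(caps)
}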
Orchestration Hub
Source: Walmart Sparky
- Central agent that coordinates multiple sub-agents
- Protocol mediation (MCP, A2A, ACP)
- Governance and rate limiting
- Response assembly
┌───────────────────────────────────────────────────────────────┐
│                       ORCHESTRATION HUB                       │
│   ┌─────────────┐    ┌─────────────┐    ┌─────────────┐       │
│   │   Product   │    │  Inventory  │    │   Pricing   │       │
│   │    Agent    │    │    Agent    │    │    Agent    │       │
│   └──────┬──────┘    └──────┬──────┘    └──────┬──────┘       │
│          └──────────────────┼──────────────────┘              │
│                             ▼                                 │
│                      ┌─────────────┐                          │
│                      │   Sparky    │ ← External agent requests│
│                      │    (Hub)    │                          │
│                      └─────────────┘                          │
└───────────────────────────────────────────────────────────────┘
Forge opinion: Most apps need None or MCP Tools. A2A and Hub are for platform-scale applications.
Dimension 4: Agent Commerce
How your app TRANSACTS via agents.
None →            Read-Only →       ACP Checkout →    AP2 Delegated
    │                 │                 │                 │
    ▼                 ▼                 ▼                 ▼
No                Product           Agent can         Autonomous
commerce          info              checkout          purchase
None (Default)
- No commerce via agents
- Deep-link to website for transactions
Read-Only
- Agents can query products, inventory, pricing
- Cannot complete transactions
- Must redirect to website for checkout
ACP Checkout
Source: OpenAI + Stripe Agentic Commerce Protocol
- Agent can complete checkout within agent interface
- Merchant remains Merchant of Record
- Delegated payment tokens (Stripe SPT)
Required endpoints:
POST /checkout_sessions # Create session with cart
POST /checkout_sessions/{id} # Update (shipping, discounts)
POST /checkout_sessions/{id}/complete # Finalize with payment token
POST /checkout_sessions/{id}/cancel # Cancel session
GET /checkout_sessions/{id} # Get current state
Webhook events:
order.created # Order placed
order.updated # Status change (shipped, fulfilled, canceled)
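A sketch of how those endpoints might be mounted with Go 1.22+ method-and-wildcard route patterns; the handler names are placeholders:
mux := http.NewServeMux()
mux.HandleFunc("POST /checkout_sessions", createCheckoutSession)
mux.HandleFunc("POST /checkout_sessions/{id}", updateCheckoutSession)
mux.HandleFunc("POST /checkout_sessions/{id}/complete", completeCheckoutSession)
mux.HandleFunc("POST /checkout_sessions/{id}/cancel", cancelCheckoutSession)
mux.HandleFunc("GET /checkout_sessions/{id}", getCheckoutSession)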
AP2 Delegated
Source: Google + 60 Partners Agent Payments Protocol
- Human-not-present transactions
- Cryptographic mandates for authorization
- Autonomous purchase when conditions met
Mandate types:
Intent Mandate → "Buy when price drops below $X"
Cart Mandate → "I approve this specific cart"
Payment Mandate → Issuer authorization with agent context
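Not the actual AP2 schema, but an illustrative shape for an intent mandate: a machine-checkable condition plus a signature proving the user delegated the purchase:
// IntentMandate (illustrative only): "buy product X when the price drops below $Y".
type IntentMandate struct {
    UserID        string `json:"user_id"`
    ProductID     string `json:"product_id"`
    MaxPriceCents int64  `json:"max_price_cents"` // the "below $X" condition
    ExpiresAt     int64  `json:"expires_at"`      // Unix seconds; mandates should expire
    Signature     string `json:"signature"`       // cryptographic proof of user authorization
}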
Forge opinion:
| App Type | Commerce Level |
|---|---|
| Non-commerce | None |
| Content/SaaS | None or Read-Only |
| E-commerce | ACP Checkout (start here) |
| Enterprise retail | ACP + AP2 (full stack) |
The Full Matrix
| App | Consumer | Producer | Orchestration | Commerce |
|---|---|---|---|---|
| Home Dashboard | L0 | Invisible | None | None |
| Signal Dispatch | L0 | Crawlable | None | None |
| RIX | L1 | Crawlable | None | None |
| Signal Forge | L2 | Invisible | None | None |
| SIX | L3 | Agent-Ready | None | Read-Only |
| AIX | L3 | Crawlable | MCP Tools | None |
| Best Buy | L1 | Agent-Ready | None | ACP + AP2 |
| Walmart | L2 | Agent-Enabled | Hub | ACP + AP2 |
Forge Defaults by Dimension
| Dimension | Default | When to Escalate |
|---|---|---|
| Consumer | L0-L1 | AI is core feature |
| Producer | Crawlable | External AI needs your data |
| Orchestration | None | Platform-scale coordination |
| Commerce | None | E-commerce with agent checkout |
References
- Level 0 examples: Home Dashboard, Signal Dispatch Blog
- Level 1 examples: RIX, Gallery sites
- Level 2 examples: Signal Forge, Commerce Prompt Analyzer, BIX
- Level 3 examples: SIX, AIX, CIX
- AG-UI Protocol: six/src/lib/ag-ui/
- A2UI Specs: six/src/lib/a2ui/
- ACP Protocol: bby/docs/ACP-AP2-INTEGRATION-PLAN.md
- GEO Architecture: wlmt/decks/geo-agentic-enablement.html
- Walmart Sparky: wlmt/decks/architecture-stack.html