AI Agent Surfaces
Objective
Bring conversational AI to every Dataface interface — terminal, web, and IDE — using the shared MCP tool layer. Each surface reuses the canonical tool implementations from dataface/ai/mcp/tools.py. No tool duplication across surfaces.
Architecture
```
┌─────────────────────────────────────────────────────────┐
│ Surfaces (consumers)                                    │
│  ├── dft agent (terminal)       ← terminal-agent-tui    │
│  ├── Cloud chat home (web)      ← chat-first-home       │
│  ├── Cloud editor copilot (web) ← embedded-agent        │
│  ├── Cursor / Claude / Codex    ← mcp-auto-install      │
│  └── VS Code / Copilot          ← mcp-auto-install      │
├─────────────────────────────────────────────────────────┤
│ Shared Cloud chat layer (one implementation)            │
│  chat.js (shared JS) · chat_stream() (shared SSE)       │
│  Parameterized by: scope (org/project/chart),           │
│  onWriteToEditor callback, blankStateEl                 │
├─────────────────────────────────────────────────────────┤
│ MCP Tools (shared, canonical)                           │
│  catalog · execute_query · render_dashboard             │
│  review_dashboard · search_dashboards · list_sources    │
│  save_dashboard (new) ← save-dashboard-tool             │
├─────────────────────────────────────────────────────────┤
│ Cloud-only hooks (post-processing, not tools)           │
│  update_dashboard_cache · GitService.commit             │
│  DashboardSnapshot · ChatSession/ChatMessage            │
│  write_to_editor (UI-layer, not MCP tool)               │
└─────────────────────────────────────────────────────────┘
```
Key Decisions
- MCP tools are the single source of truth. No surface creates its own tool implementations. Cloud's existing bespoke dispatch (`_execute_tool_sync`) must be replaced with the canonical `handle_tool_call()`.
- One chat, many placements. The home page chat, the dashboard editor copilot, and any future chart builder agent are all the same chat component (`chat.js` + `chat_stream()`) with different context. The only things that change per placement are the system prompt context (org vs project vs chart scope) and optional UI-layer callbacks (like `write_to_editor`). The existing `ai-copilot.js` is the v0 — it gets evolved into the shared `chat.js`, not replaced by a parallel implementation.
- htmx-first web, JS-isolated streaming. The Cloud chat uses Django + htmx for all non-streaming UI (history sidebar, modals, save forms, suggestions). The streaming chat message area is a vanilla JS island (~500 lines) that owns the SSE connection. This is the same pattern already blessed by the design philosophy for "AI copilot chat."
- Surfaces are thin. Each surface (terminal, web, IDE) is a thin presentation layer over the shared tool + prompt stack. Skills, system prompts, and schema context come from `dataface/ai/`.
- Multi-LLM support: OpenAI default, Anthropic supported. All AI surfaces use a shared `LLMClient` abstraction (`dataface/ai/llm.py`) that supports both OpenAI (Responses API) and Anthropic (Messages API). OpenAI is the default provider; Anthropic is supported for evaluation and comparison. The tool schemas, system prompts, and agent loop are provider-agnostic — only the API wire format differs. This applies to both the terminal agent and the Cloud chat backend (`AIService`).
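The provider-agnostic layer in the last decision can be sketched as follows. `LLMClient` and `dataface/ai/llm.py` come from this doc; the `ToolSpec` shape and the converter functions are hypothetical, showing only the "one schema, two wire formats" idea (OpenAI's Responses API takes a flattened function tool, Anthropic's Messages API uses `input_schema`).

```python
from dataclasses import dataclass

@dataclass
class ToolSpec:
    """Provider-agnostic tool description (illustrative shape)."""
    name: str
    description: str
    parameters: dict  # JSON Schema for the tool's arguments

def to_openai(spec: ToolSpec) -> dict:
    # OpenAI Responses API tool format (flattened function tool).
    return {
        "type": "function",
        "name": spec.name,
        "description": spec.description,
        "parameters": spec.parameters,
    }

def to_anthropic(spec: ToolSpec) -> dict:
    # Anthropic Messages API tool format.
    return {
        "name": spec.name,
        "description": spec.description,
        "input_schema": spec.parameters,
    }

class LLMClient:
    """Sketch: one abstraction, two wire formats. OpenAI is the default."""

    def __init__(self, provider: str = "openai"):
        self.provider = provider

    def format_tools(self, specs: list[ToolSpec]) -> list[dict]:
        convert = to_openai if self.provider == "openai" else to_anthropic
        return [convert(s) for s in specs]
```

The agent loop, prompts, and `ToolSpec` list never change when the provider flag flips; only the converter does.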
Milestone Spread
This initiative spans M1 through M3. The M1 bar is: "internal analysts can execute at least one weekly real workflow... without bespoke engineering intervention." That requires the editor copilot to work with MCP tools and analysts to be able to save dashboards. It does NOT require a chat home page, a terminal agent, or conversation persistence.
| Milestone | What ships | Why |
|---|---|---|
| M1 | Unified tool dispatch + save tool + editor copilot upgrade | Analysts can use the AI copilot in the editor with the full MCP tool set and save results. Minimum viable AI workflow. |
| M2 | Chat home page + shared chat.js + TUI + embeddable dashboards + auto-install + chat persistence | Multiple teams and design partners get the conversational interface, terminal agent, and broader IDE support. These expand reach but aren't required for the initial pilot. |
| M3 | Desktop app | Depends on web UI maturity. |
Task Dependency Graph
```
M1 ─────────────────────────────────────────────────────
unify-cloud-ai-tool-dispatch (P0, foundation)
    ↓
extract-shared-chat-js-and-chat-stream (P0, shared infra)
    ↓
embedded-agent-in-builder (P0, consumes shared chat.js)

save-dashboard-mcp-tool (P1, parallel)

M2 ─────────────────────────────────────────────────────
chat-first-home-page (P1, consumes shared chat.js, adds org scope + blank state)
  ├──→ embeddable-dashboards-in-chat (P1)
  │       ↑ (also depends on save_dashboard tool from M1)
  └──→ chat-persistence (P2, enhancement)

terminal-agent-tui (P1, independent)
mcp-auto-install (P1, independent)

M3 ─────────────────────────────────────────────────────
desktop-app (P2, depends on mature web UI)
```
Tasks
M1: Foundation + editor copilot (pilot-ready)
- Unify Cloud AI Tool Dispatch — M1, P0 — Replace bespoke dispatch with the canonical `handle_tool_call()`
- Extract shared `chat.js` + `chat_stream()` — M1, P0 — Shared chat component + SSE endpoint, extracted from `ai-copilot.js`
- Save dashboard MCP tool — M1, P1 — Universal save tool for all surfaces
- Embed AI agent in dashboard/chart builder — M1, P0 (cloud-suite workstream) — Upgrade the editor copilot to use the full MCP tool set; consumes shared `chat.js`
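The shared `chat_stream()` endpoint above is the SSE half of the chat component. A framework-free sketch of the frame encoding is below; the frame shape (a JSON `delta` payload plus a `[DONE]` sentinel) is an assumption, and in Cloud this generator would be wrapped in a Django `StreamingHttpResponse` with `content_type="text/event-stream"` so the `chat.js` island can read identical frames on every surface.

```python
import json
from typing import Iterable, Iterator

def sse_frames(deltas: Iterable[str]) -> Iterator[str]:
    """Encode streamed model deltas as Server-Sent Events frames."""
    for chunk in deltas:
        # One SSE event per delta; the JS island appends each to the message area.
        yield f"data: {json.dumps({'delta': chunk})}\n\n"
    # Sentinel so the client can close the connection cleanly.
    yield "data: [DONE]\n\n"
```

Keeping the encoding in one generator means every placement (home page, editor copilot, future chart builder) speaks the same stream protocol, and only the scope passed to the view differs.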
M2: Surfaces + features (adoption)
- Chat-First Home Page — M2, P1 — Adds org scope, blank state, suggestions; consumes shared `chat.js` + `chat_stream()`
- Terminal Agent TUI (`dft agent`) — M2, P1 — `dft agent` terminal surface
- MCP auto-install across all AI clients — M2, P1 — VS Code, Claude Code, Copilot support
- Embeddable Dashboards in Chat — M2, P1 — Inline dashboard preview, modal expand, save from chat
- Chat Conversation Persistence — M2, P2 — Save/resume conversations
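The "save from chat" flow above rides on the universal save tool from M1. A hypothetical schema for it is sketched below; the tool name and its "one save tool for all surfaces" intent come from this doc, while every parameter name is an illustrative assumption.

```python
# Hypothetical JSON Schema declaration for the save_dashboard MCP tool.
# Cloud-only hooks (cache update, Git commit, snapshot) run as post-processing
# around this tool, per the architecture diagram, not inside it.
SAVE_DASHBOARD_SCHEMA = {
    "name": "save_dashboard",
    "description": "Persist a dashboard definition so any surface can reopen it.",
    "parameters": {
        "type": "object",
        "properties": {
            "title": {"type": "string"},
            "definition": {
                "type": "object",
                "description": "Dashboard spec produced by render_dashboard",
            },
        },
        "required": ["title", "definition"],
    },
}
```

Because the schema lives with the canonical tools, the terminal agent, the chat home page, and the editor copilot all expose the same save semantics without surface-specific code.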
M3: Desktop
- Desktop app — M3, P2 — Tauri/Electron wrapper
Already completed
- Wire Playground AI to MCP tools — completed — Established the pattern: Playground uses the canonical `dispatch_tool_call()`