Dataface Tasks

AI Agent Surfaces

Completed · M1 — Internal Pilot Ready · 8 / 8 (100%)

Objective

Bring conversational AI to every Dataface interface — terminal, web, and IDE — using the shared MCP tool layer. Each surface reuses the canonical tool implementations from dataface/ai/mcp/tools.py. No tool duplication across surfaces.

Architecture

┌─────────────────────────────────────────────────────────┐
│  Surfaces (consumers)                                   │
│  ├── dft agent (terminal)         ← terminal-agent-tui  │
│  ├── Cloud chat home (web)        ← chat-first-home     │
│  ├── Cloud editor copilot (web)   ← embedded-agent      │
│  ├── Cursor / Claude / Codex      ← mcp-auto-install    │
│  └── VS Code / Copilot            ← mcp-auto-install    │
├─────────────────────────────────────────────────────────┤
│  Shared Cloud chat layer (one implementation)           │
│  chat.js (shared JS) · chat_stream() (shared SSE)       │
│  Parameterized by: scope (org/project/chart),           │
│  onWriteToEditor callback, blankStateEl                 │
├─────────────────────────────────────────────────────────┤
│  MCP Tools (shared, canonical)                          │
│  catalog · execute_query · render_dashboard             │
│  review_dashboard · search_dashboards · list_sources    │
│  save_dashboard (new)             ← save-dashboard-tool │
├─────────────────────────────────────────────────────────┤
│  Cloud-only hooks (post-processing, not tools)          │
│  update_dashboard_cache · GitService.commit             │
│  DashboardSnapshot · ChatSession/ChatMessage            │
│  write_to_editor (UI-layer, not MCP tool)               │
└─────────────────────────────────────────────────────────┘
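The shared tool layer in the diagram implies a single registry that every surface consumes. A minimal sketch of how a canonical tool such as save_dashboard could be declared once in dataface/ai/mcp/tools.py — the decorator, schema shape, and handler body here are assumptions, not the shipped code:

```python
# Hypothetical sketch of a shared tool registry. The real tools.py may
# be structured differently; this only illustrates "declare once, use
# from every surface".
TOOLS: dict[str, dict] = {}

def tool(name, description, parameters):
    """Register a tool schema + handler pair in the shared registry."""
    def register(fn):
        TOOLS[name] = {
            "description": description,
            "parameters": parameters,
            "handler": fn,
        }
        return fn
    return register

@tool(
    name="save_dashboard",
    description="Persist the current dashboard definition.",
    parameters={
        "type": "object",
        "properties": {
            "slug": {"type": "string"},
            "definition": {"type": "string"},
        },
        "required": ["slug", "definition"],
    },
)
def save_dashboard(slug: str, definition: str) -> dict:
    # A real implementation would write the definition and let the
    # Cloud-only hooks (cache update, Git commit) run as post-processing.
    return {"saved": slug}
```

Surfaces then read TOOLS for both the LLM-facing schema and the executable handler, so adding a tool in one place makes it available everywhere.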

Key Decisions

  1. MCP tools are the single source of truth. No surface creates its own tool implementations. Cloud's existing bespoke dispatch (_execute_tool_sync) must be replaced with the canonical handle_tool_call().
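A sketch of what the canonical dispatcher could look like, assuming a name-to-handler registry; the real handle_tool_call() signature may differ, and the stub handlers here are purely illustrative:

```python
# Minimal sketch of a single dispatch path shared by all surfaces,
# replacing per-surface bespoke dispatch. Handlers below are stubs.
import json

REGISTRY = {
    "execute_query": lambda sql: {"rows": [], "sql": sql},
    "list_sources": lambda: {"sources": ["warehouse"]},
}

def handle_tool_call(name: str, arguments: str) -> str:
    """Dispatch an LLM tool call to the shared implementation.

    `arguments` is the JSON string produced by the model; the result is
    serialized back for the model, so every surface gets identical
    behavior from one code path.
    """
    handler = REGISTRY.get(name)
    if handler is None:
        return json.dumps({"error": f"unknown tool: {name}"})
    try:
        result = handler(**json.loads(arguments or "{}"))
    except Exception as exc:  # report errors to the model, don't crash
        return json.dumps({"error": str(exc)})
    return json.dumps(result)
```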

  2. One chat, many placements. The home page chat, the dashboard editor copilot, and any future chart builder agent are all the same chat component (chat.js + chat_stream()) with different context. The only things that change per placement are the system prompt context (org vs project vs chart scope) and optional UI-layer callbacks (like write_to_editor). The existing ai-copilot.js is the v0 — it gets evolved into the shared chat.js, not replaced by a parallel implementation.
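As an illustration of "same component, different context", the server-side per-placement difference could reduce to a scope object that selects the system prompt. ChatScope and the prompt fragments below are hypothetical, not the shipped prompts:

```python
# Sketch: one chat backend, parameterized only by scope. The actual
# chat_stream() context shape is an assumption here.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ChatScope:
    level: str                    # "org" | "project" | "chart"
    project: Optional[str] = None
    chart: Optional[str] = None

def system_prompt(scope: ChatScope) -> str:
    """The only server-side difference between placements is prompt context."""
    base = "You are the Dataface assistant."
    if scope.level == "org":
        return base + " You can search and open any dashboard."
    if scope.level == "project":
        return f"{base} You are working in project {scope.project}."
    return f"{base} You are editing chart {scope.chart}."
```

UI-layer differences (onWriteToEditor, blankStateEl) stay client-side in the shared chat.js and never reach this code path.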

  3. htmx-first web, JS-isolated streaming. The Cloud chat uses Django+htmx for all non-streaming UI (history sidebar, modals, save forms, suggestions). The streaming chat message area is a vanilla JS island (~500 lines) that owns the SSE connection. This is the same pattern already blessed by the design philosophy for "AI copilot chat."
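Server-side, the JS island's SSE endpoint can be an ordinary Django streaming view. A hedged sketch — the view name, event names, and the run_agent() helper are assumptions:

```python
# Sketch of the streaming half of the pattern: htmx owns all static UI,
# while the JS island opens an EventSource against a view like this.
def sse_format(event: str, data: str) -> str:
    """Encode one Server-Sent Events frame (event + data + blank line)."""
    return f"event: {event}\ndata: {data}\n\n"

def chat_stream_view(request):  # Django view, shown without auth/CSRF
    from django.http import StreamingHttpResponse

    def events():
        # run_agent() is a hypothetical generator yielding model tokens.
        for token in run_agent(request.GET.get("q", "")):
            yield sse_format("token", token)
        yield sse_format("done", "")

    resp = StreamingHttpResponse(events(), content_type="text/event-stream")
    resp["Cache-Control"] = "no-cache"  # keep proxies from buffering the stream
    return resp
```

Keeping the frame encoding in one helper makes the wire format trivially testable independent of Django.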

  4. Surfaces are thin. Each surface (terminal, web, IDE) is a thin presentation layer over the shared tool + prompt stack. Skills, system prompts, and schema context come from dataface/ai/.

  5. Multi-LLM support: OpenAI default, Anthropic supported. All AI surfaces use a shared LLMClient abstraction (dataface/ai/llm.py) that supports both OpenAI (Responses API) and Anthropic (Messages API). OpenAI is the default provider. Anthropic is supported for evaluation and comparison. The tool schemas, system prompts, and agent loop are provider-agnostic — only the API wire format differs. This applies to both the terminal agent and the Cloud chat backend (AIService).
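To make the "only the wire format differs" claim concrete, here is one way a provider-agnostic client could translate the shared tool schemas. LLMClient's actual interface in dataface/ai/llm.py may differ; the schema difference shown (flat function tools for the OpenAI Responses API vs input_schema for the Anthropic Messages API) reflects those APIs:

```python
# Sketch of a provider-agnostic LLM client; only schema translation is
# shown. Method name and class shape are assumptions.
from dataclasses import dataclass

@dataclass
class LLMClient:
    provider: str = "openai"      # default provider per decision 5

    def to_wire_tools(self, tools: list[dict]) -> list[dict]:
        """Translate shared tool schemas to the provider wire format."""
        if self.provider == "openai":
            # Responses API: flat tool objects tagged "type": "function"
            return [{"type": "function", **t} for t in tools]
        # Anthropic Messages API: "input_schema" instead of "parameters"
        return [
            {
                "name": t["name"],
                "description": t["description"],
                "input_schema": t["parameters"],
            }
            for t in tools
        ]
```

The agent loop, prompts, and tool handlers never see these wire shapes, so adding a third provider would touch only this translation layer.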

Milestone Spread

This initiative spans M1 through M3. The M1 bar is: "internal analysts can execute at least one weekly real workflow... without bespoke engineering intervention." That requires the editor copilot to work with MCP tools and analysts to be able to save dashboards. It does NOT require a chat home page, a terminal agent, or conversation persistence.

M1: Unified tool dispatch + save tool + editor copilot upgrade
    Why: Analysts can use the AI copilot in the editor with the full MCP tool set and save results. Minimum viable AI workflow.
M2: Chat home page + shared chat.js + TUI + embeddable dashboards + auto-install + chat persistence
    Why: Multiple teams and design partners get the conversational interface, terminal agent, and broader IDE support. These expand reach but aren't required for the initial pilot.
M3: Desktop app
    Why: Depends on web UI maturity.

Task Dependency Graph

M1 ─────────────────────────────────────────────────────
  unify-cloud-ai-tool-dispatch (P0, foundation)
    ↓
  extract-shared-chat-js-and-chat-stream (P0, shared infra)
    ↓
  embedded-agent-in-builder (P0, consumes shared chat.js)
  save-dashboard-mcp-tool (P1, parallel)

M2 ─────────────────────────────────────────────────────
  chat-first-home-page (P1, consumes shared chat.js, adds org scope + blank state)
    ├──→ embeddable-dashboards-in-chat (P1)
    │      ↑ (also depends on save_dashboard tool from M1)
    └──→ chat-persistence (P2, enhancement)
  terminal-agent-tui (P1, independent)
  mcp-auto-install (P1, independent)

M3 ─────────────────────────────────────────────────────
  desktop-app (P2, depends on mature web UI)

Tasks

M1: Foundation + editor copilot (pilot-ready)

M2: Surfaces + features (adoption)

M3: Desktop

Already completed