Dataface Tasks

Chat-First Home Page - Conversational AI Interface for Dataface Cloud

ID: MCP_ANALYST_AGENT-CHAT_FIRST_HOME_PAGE_CONVERSATIONAL_AI_INTERFACE_FOR_DATAFACE_CLOUD
Status: completed
Priority: p1
Milestone: m2-internal-adoption-design-partners
Owner: data-ai-engineer-architect
Initiative: ai-agent-surfaces
Completed by: dave
Completed: 2026-03-17

Problem

The current Dataface Cloud home page (org_home) is a static dashboard grid — users see project folders with dashboard thumbnails, or an empty state with a "Create Dashboard" button. There is no way to ask a question, explore data, or get AI-generated insights without first navigating into a specific project and dashboard. This creates two gaps:

  1. No conversational entry point. The most natural thing an analyst wants to do is ask "what happened to revenue this week?" — not browse a file tree of dashboards. The home page should be a chat interface where the AI can answer questions, generate charts, and surface relevant existing dashboards.

  2. Wasted blank state. When a user logs in, the home page should proactively show useful context: recent dashboards, trending metrics, suggested questions based on the data catalog. Today it's just a grid or an empty state CTA.

The chat interface should reuse the same MCP tools (catalog, execute_query, render_dashboard, review_dashboard, search_dashboards, list_sources) that power the CLI agent and external MCP clients. No duplicate tool implementations.

Related work — one chat, many placements:

  • Terminal Agent TUI (terminal-agent-tui-dft-agent.md) — same MCP tools, different surface (terminal vs web)
  • Embeddable Dashboards in Chat (embeddable-dashboards-in-chat-*.md) — companion task for dashboard rendering inside chat messages
  • Embedded agent in dashboard builder (cloud-suite: task-m1-suite-embedded-agent-in-dashboard-builder.md) — the existing ai-copilot.js in the editor sidebar is a chat with MCP tools, scoped to one dashboard. This is the same chat — just with narrower context.
  • Wire Playground AI to MCP tools (wire-playground-ai-to-use-mcp-tools-instead-of-bespoke-tool-set.md) — already completed; the Playground was unified to use canonical MCP dispatch. The Cloud app needs the same treatment.

The DRY principle here: The home page chat, the dashboard editor copilot, and any future chart builder agent are all the same chat component with different context injected:

Placement                   Context scope                                  Extra tools
Home page (this task)       Org-wide: all projects, all sources            —
Dashboard editor sidebar    One dashboard: current YAML, project sources   write_to_editor
Chart builder (future)      One chart: current chart YAML, query results   write_to_editor

They should share one chat.js, one SSE endpoint (parameterized by scope), and one AIService backend. The only thing that changes per placement is the system prompt context and the set of UI-layer tools (like write_to_editor). The existing ai-copilot.js (430 lines) is the v0 of this — it gets evolved into the shared chat.js, not replaced by a separate implementation.
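As a sketch of that parameterization, the per-placement differences in the table above reduce to a small config object plus a prompt-assembly step. This is illustrative only — `ChatScope`, `SCOPES`, and `build_system_prompt` are hypothetical names, not identifiers from the codebase:

```python
# Hypothetical sketch: per-placement chat scope config.
# ChatScope / SCOPES / build_system_prompt are illustrative names.
from dataclasses import dataclass

@dataclass(frozen=True)
class ChatScope:
    name: str
    context_description: str   # injected into the shared system prompt
    ui_tools: tuple = ()       # UI-layer tools handled client-side (e.g. write_to_editor)

SCOPES = {
    "org": ChatScope("org", "Org-wide: all projects, all sources"),
    "project": ChatScope("project", "One dashboard: current YAML, project sources",
                         ui_tools=("write_to_editor",)),
    "chart": ChatScope("chart", "One chart: current chart YAML, query results",
                       ui_tools=("write_to_editor",)),
}

def build_system_prompt(base_prompt: str, scope: ChatScope) -> str:
    # Only the context block changes per placement; everything else is shared.
    return f"{base_prompt}\n\nContext scope: {scope.context_description}"
```

The point of the sketch: one chat backend, one prompt builder, and a per-placement record that carries the only two things that vary.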

Context

Current home page flow:

  • apps/cloud/apps/dashboards/views.py::home() — redirects to org_home if user has one org
  • apps/cloud/apps/projects/views.py::org_home() — renders projects/org_home.html with dashboard grid grouped by project
  • Template: apps/cloud/templates/projects/org_home.html — grid/list view of dashboards with thumbnails

Existing AI infrastructure:

  • apps/cloud/apps/ai/views.py — ai_stream() SSE endpoint for chat with tool calling (currently scoped to the project-level copilot in the dashboard editor)
  • apps/cloud/apps/ai/service.py — AIService class using the OpenAI Chat Completions API with function calling
  • dataface/ai/tools.py — get_tools() returns OpenAI function-calling tool definitions; handle_tool_call() dispatches
  • dataface/ai/mcp/tools.py — canonical tool implementations (catalog, execute_query, render_dashboard, etc.)
  • dataface/ai/tool_schemas.py — canonical schema definitions shared by MCP and OpenAI formats
  • dataface/ai/prompts.py + dataface/ai/skills/ — system prompts, design guides, YAML reference
  • dataface/ai/schema_context.py — token-efficient catalog summary for LLM context

Design philosophy (from apps/cloud/DESIGN_PHILOSOPHY.md):

  • Django-first, server-side rendering
  • Minimal JavaScript — htmx for interactions, JS only for: visual editor, code editor, AI copilot chat, Vega-Lite interactivity
  • YAML as source of truth
  • AI copilot chat is an explicitly approved JavaScript use case

Key constraint — MCP tools, not forks: The chat interface must call the same MCP tool implementations that power the terminal agent and IDE clients. It must NOT create a parallel tool set. The existing _execute_tool_sync() in apps/cloud/apps/ai/views.py has a bespoke 4-tool dispatch — this is being replaced by the canonical handle_tool_call() dispatch in a prerequisite task (unify-cloud-ai-tool-dispatch-to-use-canonical-mcp-tools.md). The chat backend builds on top of that unified dispatch.
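A minimal sketch of what the unified dispatch looks like: every surface routes tool calls through a single entry point rather than a bespoke switch. The `handle_tool_call` below is a stub standing in for the real dispatcher in dataface/ai/tools.py, and the registry contents are illustrative, not the actual tool implementations:

```python
# Sketch of canonical tool dispatch; the registry is a stand-in for
# the real MCP tool implementations in dataface/ai/mcp/tools.py.
import json

def handle_tool_call(name: str, arguments: dict) -> dict:
    registry = {
        "catalog": lambda args: {"tables": ["orders", "customers"]},
        "execute_query": lambda args: {"rows": [], "sql": args.get("sql", "")},
    }
    if name not in registry:
        return {"error": f"unknown tool: {name}"}
    return registry[name](arguments)

def run_tool_calls(tool_calls):
    """Dispatch each LLM tool call through the single canonical entry point."""
    results = []
    for call in tool_calls:
        args = call["arguments"]
        if isinstance(args, str):          # LLM APIs return arguments as JSON strings
            args = json.loads(args)
        results.append({"name": call["name"],
                        "result": handle_tool_call(call["name"], args)})
    return results
```

The anti-pattern this replaces is a hand-rolled 4-way `if/elif` inside the view; the loop above has no per-tool knowledge at all.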

Key constraint — multi-LLM backend: The chat backend uses the shared LLMClient abstraction from dataface/ai/llm.py (built as part of the terminal agent task). OpenAI (Responses API) is the default; Anthropic (Messages API) is supported for evaluation. AIService migrates from hard-coded OpenAI Chat Completions to LLMClient, getting both providers and the Responses API upgrade. Provider selection is configurable per-deployment (env var / Django settings).
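A hedged sketch of per-deployment provider selection. The class names and the `DATAFACE_LLM_PROVIDER` env var below are assumptions for illustration, not the actual dataface/ai/llm.py API:

```python
# Illustrative provider selection; class names and the env var are
# hypothetical stand-ins for the real LLMClient abstraction.
import os

class LLMClient:
    provider = "base"

class OpenAIResponsesClient(LLMClient):   # OpenAI Responses API (default)
    provider = "openai"

class AnthropicMessagesClient(LLMClient):  # Anthropic Messages API (evaluation)
    provider = "anthropic"

_PROVIDERS = {
    "openai": OpenAIResponsesClient,
    "anthropic": AnthropicMessagesClient,
}

def get_llm_client(env=None) -> LLMClient:
    """Pick the provider per deployment; unknown or unset values fall back to OpenAI."""
    env = env if env is not None else os.environ
    name = env.get("DATAFACE_LLM_PROVIDER", "openai").lower()
    return _PROVIDERS.get(name, OpenAIResponsesClient)()
```

In the Django app the same lookup would read from settings rather than taking an `env` dict; the dict parameter here just keeps the sketch testable.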

Key constraint — htmx-first, JS-isolated chat stream: The design philosophy says "Django-first, minimal JS, htmx for interactions" — but also explicitly blesses "AI copilot chat" as an acceptable JS use case. The question is: can a streaming chat interface fit within the htmx world?

htmx + SSE analysis:

  • The htmx SSE extension (hx-ext="sse", sse-connect, sse-swap) is designed for server-push of complete HTML fragments — e.g., a notification badge update, a new table row appearing.
  • Chat streaming is different: you get partial text tokens that build up a single message incrementally, interleaved with tool call events, thinking indicators, and dashboard embeds. htmx can't do token-by-token streaming into a single <div> that's still being constructed.
  • The existing ai-copilot.js (430 lines, apps/cloud/static/js/dashboard/ai-copilot.js) already uses exactly the right pattern: vanilla JS reading a fetch() SSE stream, switching on event types, appending DOM nodes as tokens arrive.

The hybrid pattern (what we do):

┌──────────────────────────────────────────────────┐
│  Page shell — Django-rendered, htmx-managed      │
│  ┌─────────────────┐ ┌────────────────────────┐  │
│  │ History sidebar │ │ Blank state panel      │  │
│  │ (htmx partial)  │ │ (htmx: hides on chat)  │  │
│  │ hx-get=history  │ │ Dashboard grid,        │  │
│  │                 │ │ suggestions            │  │
│  └─────────────────┘ └────────────────────────┘  │
│  ┌──────────────────────────────────────────────┐│
│  │ Chat area — JS-managed, htmx-ignored         ││
│  │ (no hx-* attributes, vanilla JS + SSE)       ││
│  │ ┌──────────────────────────────────────────┐ ││
│  │ │ Message stream (token-by-token render)   │ ││
│  │ │ Tool call indicators                     │ ││
│  │ │ Dashboard SVG embeds                     │ ││
│  │ └──────────────────────────────────────────┘ ││
│  │ ┌──────────────────────────────────────────┐ ││
│  │ │ Chat input bar (JS: Enter to send)       │ ││
│  │ └──────────────────────────────────────────┘ ││
│  └──────────────────────────────────────────────┘│
│  ┌──────────────────────────────────────────────┐│
│  │ Modals — htmx-loaded                         ││
│  │ Dashboard expand: hx-get=modal endpoint      ││
│  │ Save form: hx-post=save endpoint             ││
│  └──────────────────────────────────────────────┘│
└──────────────────────────────────────────────────┘

Rule: htmx manages everything except the streaming chat message area. The chat area is a JS island — it owns the SSE connection, token rendering, and message DOM. Everything around it (history sidebar, blank state, modals, save forms, navigation) is standard Django+htmx. This keeps the JS surface minimal (~500 lines for the chat, evolved from the existing ai-copilot.js) while using htmx for all the non-streaming parts.
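The server side of that JS island is just an SSE generator. A minimal Python sketch of the frame format and event sequence follows; `sse_frame` and `stream_chat_events` are illustrative helpers, and in Django the generator would be wrapped in a `StreamingHttpResponse` with `content_type="text/event-stream"`:

```python
# Sketch of the chat stream's wire format: named SSE events carrying
# JSON payloads, with "content" events delivering token deltas.
import json

def sse_frame(event: str, data: dict) -> str:
    """One SSE frame: event name plus a JSON data line, blank-line terminated."""
    return f"event: {event}\ndata: {json.dumps(data)}\n\n"

def stream_chat_events(tokens):
    """Illustrative event sequence for a single assistant turn."""
    yield sse_frame("thinking", {})
    yield sse_frame("tool_call", {"name": "execute_query"})
    for tok in tokens:                     # token-by-token content events
        yield sse_frame("content", {"delta": tok})
    yield sse_frame("done", {})
```

The JS island consumes exactly this shape: it switches on the event name and, for `content` events, appends each delta to the message node being built.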

Possible Solutions

Option A: Extend org_home with embedded chat

Add a chat interface directly to the org home page. The existing dashboard grid/recommendations become the "blank state" content that sits below the chat input. When the user types their first message, the recommendations animate away and the chat conversation takes over the page.

Architecture:

┌──────────────────────────────────────────────────┐
│  Dataface Cloud — org home                       │
│                                                  │
│  ┌──────────────────────────────────────────┐    │
│  │  Chat input bar (always visible)         │    │
│  │  "Ask a question about your data..."     │    │
│  └──────────────────────────────────────────┘    │
│                                                  │
│  ── blank state (visible when chat is empty) ──  │
│  │ 💡 Suggested questions                    │   │
│  │   "What are the top revenue drivers?"     │   │
│  │   "Show me customer churn this quarter"   │   │
│  │                                           │   │
│  │ 📊 Recent dashboards (grid, from current  │   │
│  │    org_home data)                         │   │
│  │                                           │   │
│  │ 📈 Recommended for you                    │   │
│  └───────────────────────────────────────────┘   │
│                                                  │
│  ── chat messages (visible when conversation) ── │
│  │ User: Show me revenue trends              │   │
│  │ AI: [querying catalog...] [running SQL...]│   │
│  │     [embedded dashboard preview]          │   │
│  └───────────────────────────────────────────┘   │
└──────────────────────────────────────────────────┘

Pros: Single page, smooth transition from browsing to chatting. Dashboard grid isn't lost — it's the chat's empty state. Builds on existing ai_stream SSE pattern.

Cons: Needs new template and JS for chat UI. The existing ai_stream endpoint is project-scoped — needs to be generalized to org-level.

Option B: Separate /chat route with redirect

Create a dedicated /org/chat page. The org home stays as-is but adds a prominent "Ask AI" button that navigates to the chat page.

Pros: Clean separation. Easy to iterate on chat without touching the dashboard grid.

Cons: Extra navigation step. Doesn't deliver the "chat IS the home page" vision. Users still land on the old grid first.

Option C: Full SPA chat widget (React/Vue)

Build the chat interface as a standalone SPA component embedded in the Django template.

Pros: Rich interactivity, streaming rendering, client-side state.

Cons: Violates design philosophy (no React/Vue). Adds build tooling, bundle management, testing complexity. Overkill for SSE-based chat.

Plan

Selected approach: Option A — Extend org_home with embedded chat component.

Prerequisites

  • unify-cloud-ai-tool-dispatch-to-use-canonical-mcp-tools (M1 P0) — Must land first. The chat backend uses handle_tool_call() for all MCP tools.
  • extract-shared-chat-js-and-chat-stream-sse-endpoint (M1 P0) — Builds the shared chat.js and chat_stream() endpoint. This task consumes those components, adding org-level scope, blank state, and suggestions on top.
  • save-dashboard-mcp-tool-persist-agent-work-to-project (M1 P1) — Needed for dashboard saving from chat.

Files to Create

  1. apps/cloud/templates/chat/org_chat_home.html — New template replacing projects/org_home.html as the org landing page. Contains:
     • Chat input bar (always visible at bottom — like ChatGPT/Claude)
     • Blank state panel (suggestions + recent dashboards + recommendations) — Django-rendered, htmx-managed
     • Chat message container (hidden initially, shown on first message) — JS-managed island
     • History sidebar (htmx partial, loaded via hx-get)

  2. apps/cloud/templates/chat/_suggestions.html — htmx partial for suggested questions and recent dashboards (blank state content)

  3. apps/cloud/apps/chat/ — New Django app:
     • views.py — Chat home view, SSE streaming endpoint, suggestions endpoint
     • urls.py — URL routing

  4. apps/cloud/static/js/chat/chat.js — Shared chat component (replaces ai-copilot.js). Vanilla JS, ~500 lines. Used by home page, dashboard editor sidebar, and any future chat placement. Parameterized by:
     • streamUrl — the SSE endpoint (changes per placement: org-level vs project-level)
     • containerEl — the DOM element to render into
     • onWriteToEditor — optional callback for write_to_editor tool results (only in editor context)
     • blankStateEl — optional element to hide on first message (only on home page)
     Core functionality:
     • SSE fetch() stream reader (same pattern as ai-copilot.js lines 262-375)
     • Token-by-token message rendering
     • Tool call indicators (full MCP tool set: catalog, execute_query, render_dashboard, etc.)
     • Dashboard SVG embed insertion (from dashboard_embed SSE events)
     • No htmx attributes — this is the JS island

  5. apps/cloud/static/css/chat.css — Chat-specific styles (or extend suite.css)

Files to Modify

  1. apps/cloud/apps/projects/views.py — Modify org_home() to render the new chat home template. Pass dashboard grid data for the blank state.

  2. apps/cloud/apps/ai/views.py — Generalize the existing ai_stream() into a shared chat_stream() SSE endpoint that accepts a scope parameter:
     • scope=org (home page): org-level context, all projects, all sources
     • scope=project (dashboard editor): project-level context, current dashboard YAML
     • scope=chart (future chart builder): single chart context
     • Uses handle_tool_call() from dataface/ai/tools.py (canonical dispatch, NOT bespoke)
     • System prompt: load from dataface/ai/prompts.py + dataface/ai/schema_context.py + skills, augmented with scope-specific context
     • This replaces the current ai_stream() — one endpoint, not two. The existing ai-copilot.js callers in dashboard_view.html get migrated to the new shared chat.js pointing at the same endpoint with scope=project.

  3. apps/cloud/urls.py — Add chat endpoints

  4. apps/cloud/templates/base.html — Add htmx (currently not loaded; needed for sidebar, modals, suggestions)

Implementation Steps

Step 1: Shared chat SSE backend (~2 days)

  • Generalize ai_stream() in apps/cloud/apps/ai/views.py into chat_stream():
    - Accepts {prompt, scope, session_id?, conversation_history?, current_yaml?}
    - scope=org: aggregates all project sources via get_project_adapter_registry(), builds org-level system prompt
    - scope=project: uses project-specific sources + current YAML context (existing behavior)
    - Calls AIService.chat_with_tools() with canonical tool dispatch via handle_tool_call()
    - Streams SSE events: thinking, tool_call, tool_result, content, dashboard_embed, done, error
  • Deprecate the old ai_stream() — chat_stream() is the single SSE endpoint for all chat placements
  • Add an org-level URL route alongside the existing project-level route

Step 2: Shared chat.js + home page template (~3 days)

  • Refactor ai-copilot.js → chat.js (shared chat component):
    - Extract core SSE stream reading, message rendering, and tool call display into a parameterized module
    - Accept {streamUrl, containerEl, inputEl, onWriteToEditor?, blankStateEl?} config
    - Add: dashboard embed handling, full MCP tool set display
    - Keep under 500 lines of vanilla JS
  • Update the dashboard editor to use the new shared chat.js instead of ai-copilot.js (same behavior, shared code)
  • Build org_chat_home.html:
    - Page shell: Django-rendered, extends base.html
    - Blank state panel: include the chat/_suggestions.html partial — shown when no messages
    - Chat message area: bare <div id="chat-messages"> — the JS island
    - Chat input bar: <textarea> + send button at bottom
    - Initializes chat.js with {streamUrl: chatStreamOrgUrl, blankStateEl: ...}
  • Suggested questions: generate server-side from catalog profiling data, render in _suggestions.html

Step 3: Replace org_home (~1 day)

  • Update the org_home() view to render the new chat template
  • Pass dashboard grid data as the blank state content
  • Keep the dashboard grid accessible via a "Browse All" link (existing template becomes a partial or stays at a sub-URL)

Step 4: Suggestions engine (~1 day)

  • Generate suggested questions from catalog profiling data (column names, table descriptions, existing dashboards)
  • Surface recent/popular dashboards
  • Show "recently viewed" for the current user
  • htmx endpoint returns the _suggestions.html partial (refreshable)
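A pure-Python sketch of the suggestions step (no LLM call). The names get_suggestions and DEFAULT_SUGGESTIONS echo the module described later in this task, but the signature and phrasing here are illustrative assumptions:

```python
# Illustrative suggestions engine: blends dynamic suggestions derived
# from catalog metadata with static defaults. Returns {text, category}
# dicts; the exact signature is a sketch, not the real suggestions.py API.
DEFAULT_SUGGESTIONS = [
    {"text": "What tables are available to query?", "category": "explore"},
    {"text": "Create a dashboard from my most interesting table", "category": "create"},
]

def get_suggestions(dashboard_titles, table_names, limit=6):
    """Dynamic suggestions first (most specific), padded with defaults."""
    dynamic = [{"text": f"Summarize the {t} table", "category": "explore"}
               for t in table_names]
    dynamic += [{"text": f'What changed recently in "{d}"?', "category": "dashboards"}
                for d in dashboard_titles]
    return (dynamic + DEFAULT_SUGGESTIONS)[:limit]
```

Because this is deterministic and cheap, the htmx partial can call it on every refresh without caching concerns.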

Separate Tasks (broken out)

  • Conversation persistence and history — chat-conversation-persistence-and-history.md (P2 — can launch without it, add later)
  • Dashboard embedding modal/save flow — embeddable-dashboards-in-chat-*.md (companion task)
  • Unify tool dispatch — unify-cloud-ai-tool-dispatch-to-use-canonical-mcp-tools.md (P0 prerequisite)

Non-Goals (for this task)

  • Multi-user collaboration (shared chat sessions) → future
  • Voice input → future
  • File upload (CSV, etc.) → future

Implementation Progress

Wave 1: Chat-first org home page (core surface)

Files created:

  • apps/cloud/apps/chat/__init__.py — Chat Django app
  • apps/cloud/apps/chat/dashboard_cards.py — Shared helper to build dashboard cards with latest-snapshot metadata for both the initial org-home render and the htmx suggestions refresh
  • apps/cloud/apps/chat/suggestions.py — Suggestions engine: generates suggested questions from catalog data (dashboards, projects). Returns a list of {text, category} dicts. Includes DEFAULT_SUGGESTIONS for data exploration and dashboard creation prompts, plus dynamic suggestions from existing dashboards and projects.
  • apps/cloud/apps/chat/views.py — suggestions() htmx endpoint returning the _suggestions.html partial
  • apps/cloud/apps/chat/urls.py — URL routing (suggestions/ endpoint)
  • apps/cloud/templates/chat/org_chat_home.html — Chat-first org home page. Extends base.html. Contains: chat messages area (JS island), blank state panel with suggestions and recent dashboards, chat input bar at bottom. Initializes shared chat.js with scope: 'org', blankStateEl, and the chat_stream_org SSE URL.
  • apps/cloud/templates/chat/_suggestions.html — Blank-state partial: suggestion cards (clickable to populate the input) + compact recent-dashboard grid with thumbnails
  • tests/cloud/test_org_chat_home.py — 10 focused tests covering suggestion generation, recent-dashboard helper behavior, and executable view-level coverage for org_home() and the htmx suggestions() partial

Files modified:

  • apps/cloud/apps/projects/views.py — org_home() now renders chat/org_chat_home.html instead of projects/org_home.html. Generates suggestions via get_suggestions() and passes them to the template context.
  • apps/cloud/urls.py — Added a chat/ URL include under org routes for the suggestions endpoint
  • apps/cloud/static/css/suite.css — Added .chat-home, .blank-state, .suggestion-card, .chat-input-bar, .chat-messages-area styles with responsive breakpoints

Architecture:

  • Reuses the chat.js shared component from PR #616
  • Reuses the chat_stream_org SSE endpoint from PR #616
  • Blank state hides on the first chat message via the blankStateEl param
  • Suggestions engine is pure Python — no LLM call, no external deps
  • Dashboard grid in the blank state shows recent dashboards across the org using the shared get_recent_dashboards() and build_dashboard_cards() helpers, so the initial render and the htmx suggestions refresh stay in sync
  • The CSRF token passed into the chat island uses escapejs in the template config

Tests:

  • 10 focused tests passed (uv run pytest tests/cloud/test_org_chat_home.py -q); full just ci passed on latest main (2742 passed, 40 skipped, 3 xfailed)
  • Lint/typecheck: clean (ruff, mypy)
  • just review verdict: Caution, after fixing the final CSRF escaping and shared-query review notes

QA Exploration

  • [ ] QA exploration completed (or N/A for non-UI tasks)
  • Note: Requires a running server (just cloud) and navigating to /<org>/ to visually verify. Not yet tested in a browser.

Review Feedback

  • just review raised and resolved:
    1. BLOCK: broad snapshot lookup and missing view-level tests → Fixed via a latest-snapshot subquery in build_dashboard_cards() and executable RequestFactory-based view tests
    2. BLOCK: inconsistent recent-dashboard ordering between the initial render and the htmx refresh → Fixed via the shared get_recent_dashboards() helper
    3. CAUTION: bare csrf_token interpolation in the JS config → Fixed with {% raw %}{{ csrf_token|escapejs }}{% endraw %}
  • [x] Review cleared (Caution verdict)