Dataface Tasks

Save dashboard MCP tool - persist agent work to project

ID: MCP_ANALYST_AGENT-SAVE_DASHBOARD_MCP_TOOL_PERSIST_AGENT_WORK_TO_PROJECT
Status: completed
Priority: p1
Milestone: m1-ft-analytics-analyst-pilot
Owner: data-ai-engineer-architect
Completed by: dave
Completed: 2026-03-25

Problem

The agent workflow has a gap between exploration and persistence. Today's tools are all stateless:

  1. render_dashboard takes YAML content, returns HTML — but the YAML vanishes after the call.
  2. execute_query returns data — but the query isn't saved anywhere.
  3. review_dashboard checks YAML — but has no way to commit the reviewed result.

An agent can build a perfect dashboard through 10 iterations of render/review, but when the conversation ends, the work is gone. The user has to manually copy the YAML from the chat, create a file, and put it in the right directory.

This matters because the natural analyst workflow is explore first, save later. You try different queries, chart types, layouts — most of it throwaway. When something works, you want to say "save this" and have it land in the project as a proper .yml file in faces/, ready to serve.

Without a save tool, the agent can never close the loop from "idea" to "artifact in the repo."

Context

Current tool surface (all read/render, no write):

  • render_dashboard — validate + render YAML → HTML
  • execute_query — run SQL → rows
  • catalog — browse schema
  • search_dashboards — find existing dashboards
  • review_dashboard — check against design heuristics
  • list_sources — discover data sources

Project conventions:

  • Dashboards live in the faces/ directory as .yml files (e.g., faces/sales_dashboard.yml)
  • Partials live in faces/partials/, prefixed with _ (e.g., faces/partials/_header.yml)
  • DatafaceAIContext has dashboards_directory for scoping path resolution and resolve_dashboard_path() with path-traversal protection (dataface/ai/context.py)
  • Tool schemas live in dataface/ai/tool_schemas.py (canonical source of truth)
  • Tool dispatch: dispatch_tool_call() in dataface/ai/tools.py routes to implementations
  • MCP server wiring in dataface/ai/mcp/server.py

Security consideration: The MCP server runs with filesystem access. A save tool that writes arbitrary paths is a security risk. We need path scoping (must be within dashboards directory) and validation (must be valid YAML that compiles).
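A minimal sketch of the kind of scoping check this implies (illustrative only; the real implementation is DatafaceAIContext.resolve_dashboard_path() in dataface/ai/context.py, and resolve_scoped_path here is a made-up name):

```python
from pathlib import Path

def resolve_scoped_path(relative: str, dashboards_dir: Path) -> Path:
    # Illustrative sketch of the scoping contract described above; the real
    # check lives in DatafaceAIContext.resolve_dashboard_path().
    candidate = Path(relative)
    if candidate.is_absolute():
        raise ValueError("absolute paths are rejected")
    resolved = (dashboards_dir / candidate).resolve()
    # Traversal protection: the resolved path must stay inside the scope.
    if not resolved.is_relative_to(dashboards_dir.resolve()):
        raise ValueError("path must be within the dashboards directory")
    return resolved
```

Resolving before comparing is what defeats `../` traversal: the comparison runs on the normalized absolute path, not on the raw string.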

Current state: The implementation work appears to be largely done: tool schema, dispatch wiring, MCP registration, save behavior, and focused tests are all recorded below. This task should not be treated as shipped yet, though. The current blocker is closure: the task was marked completed before review/CI/PR handoff was actually cleared, and the remaining work now is to verify the save contract, resolve the review findings, and land it cleanly.

Contract: The primary contract is validate + scope + write dashboard YAML. Git commit is optional follow-on behavior. A commit failure must never make a successfully written file look unsaved.
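A minimal sketch of that contract (build_save_response is a hypothetical helper; the success/status/errors field names match the response shape recorded elsewhere in this task):

```python
def build_save_response(written: bool, commit_requested: bool,
                        commit_ok: bool) -> dict:
    # Hypothetical helper illustrating the contract: a failed optional
    # commit may change the status string, but it never flips the success
    # flag or adds errors, because the file is already on disk.
    resp = {"success": written,
            "status": "saved" if written else "error",
            "errors": []}
    if written and commit_requested and not commit_ok:
        resp["status"] = "saved_commit_failed"
    return resp
```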

Primary risks to keep explicit:

  • blurring file-save success with optional git-commit success
  • silently writing outside the dashboards directory
  • marking the task done based on focused tests while repo-level closure is still blocked

Possible Solutions

Option A: Single save_dashboard Tool

One tool that takes YAML content plus a path, validates, and writes:

SAVE_DASHBOARD = {
    "name": "save_dashboard",
    "description": (
        "Save a dashboard YAML file to the project. Validates the YAML "
        "first — returns errors if invalid, so fix before re-saving. "
        "Path is relative to the faces/ directory. Use this after "
        "iterating with render_dashboard to persist the final version. "
        "Will not overwrite existing files unless overwrite=true."
    ),
    "input_schema": {
        "type": "object",
        "properties": {
            "yaml_content": {
                "type": "string",
                "description": "Dashboard YAML content to save",
            },
            "path": {
                "type": "string",
                "description": (
                    "File path relative to faces/ directory "
                    "(e.g., 'revenue.yml', 'reports/monthly.yml')"
                ),
            },
            "overwrite": {
                "type": "boolean",
                "description": "Overwrite if file already exists (default false)",
            },
            "commit": {
                "type": "boolean",
                "description": "Git add + commit after saving (default from context config)",
            },
        },
        "required": ["yaml_content", "path"],
    },
}

Agent workflow:

> Build me a revenue dashboard by region

[agent iterates with render_dashboard, tweaks layout, reviews...]

> This looks good, save it

Saving to faces/revenue-by-region.yml...
  ✓ YAML validates
  ✓ Written to faces/revenue-by-region.yml
  View: http://localhost:9876/faces/revenue-by-region/

Trade-offs: Simple and single-purpose. The agent already has the YAML from prior render_dashboard calls.

Option B: save_dashboard + update_dashboard

Separate tools for creating new vs editing existing dashboards.

Trade-offs: More explicit, but adds tool surface. The overwrite flag on a single tool covers this without two tools.

Option C: File-Level Write Tool (Generic)

A generic write_file tool that can write any file, not just dashboards. Like Claude Code's Write tool.

Trade-offs: More flexible (could write dbt models, queries, etc. later). But too generic — loses the ability to validate as a dashboard. Security is harder to scope. Shouldn't need this for M1 since the dft agent use case is dashboard-focused.

Plan

Single save_dashboard tool, validate-before-write, scoped to dashboards directory.

Implementation Steps

Files to modify:

  • dataface/ai/tool_schemas.py — add SAVE_DASHBOARD schema
  • dataface/ai/mcp/tools.py — add save_dashboard() implementation
  • dataface/ai/tools.py — add dispatch case for save_dashboard
  • dataface/ai/mcp/server.py — register tool in handle_list_tools
  • dataface/ai/skills/building-dataface-dashboards/SKILL.md — document the save workflow

Implementation (save_dashboard()):

  1. Resolve path — use DatafaceAIContext.resolve_dashboard_path(), which already handles:
     • relative path resolution (relative to dashboards_directory / faces/)
     • path traversal protection (rejects paths that escape the scoped directory)
     • absolute path rejection

  2. Validate YAML — compile the YAML to catch errors before writing:
     • parse the YAML
     • run it through the compiler
     • if validation fails, return structured errors (same format as render_dashboard)
     • do NOT write invalid YAML

  3. Check for conflicts — if the file exists and overwrite is not true:
     • return an error with the existing file's content (so the agent can diff)
     • suggest overwrite: true if intentional

  4. Write file — create parent directories if needed, write the .yml file

  5. Return confirmation — path written, serve URL, validation summary

Remaining ship work

  1. Re-verify the current implementation against the task contract, especially save-vs-commit behavior.
  2. Clear the two review findings already recorded in this task if they are still present.
  3. Re-run focused tests and then the required review/CI flow from a task worktree.
  4. Only mark the task complete once review is cleared and PR handoff is real.

Return schema:

{
    "status": "saved",
    "path": "faces/revenue-by-region.yml",
    "absolute_path": "/Users/.../faces/revenue-by-region.yml",
    "url": "http://localhost:9876/faces/revenue-by-region/",
    "validation": {"errors": 0, "warnings": 0},
}

Error cases:

# Invalid YAML
{"status": "error", "reason": "validation_failed", "errors": [...]}

# File exists
{"status": "error", "reason": "file_exists", "path": "...",
 "existing_content": "...", "hint": "Use overwrite=true to replace"}

# Path traversal
{"status": "error", "reason": "path_rejected",
 "message": "Path must be within faces/ directory"}

Tests

  1. Save valid YAML → file created, content matches
  2. Save invalid YAML → error returned, no file written
  3. Save to existing path without overwrite → conflict error
  4. Save to existing path with overwrite → file replaced
  5. Path traversal attempt (../../etc/passwd) → rejected
  6. Nested path (reports/monthly/revenue.yml) → directories created
  7. Path without .yml extension → auto-appended or error

Relationship to Cloud Chat Embeddable Dashboards

The Cloud chat task (embeddable-dashboards-in-chat-inline-preview-modal-expand-and-save-to-repo.md) builds a "Save Dashboard" button in the web UI. That task's save flow should call this MCP tool rather than wiring directly to the Cloud-specific Django functions (write_dashboard_yaml, GitService.commit).

Layering:

┌─────────────────────────────────────────────────┐
│ Surfaces (consumers of save_dashboard)          │
│  ├── dft agent (terminal)                       │
│  ├── Cursor / Claude Code / Codex (via MCP)     │
│  ├── Cloud chat UI (via tool dispatch)          │
│  └── Playground (future)                        │
├─────────────────────────────────────────────────┤
│ MCP Tool: save_dashboard                        │
│  → validate YAML                                │
│  → resolve path (scoped)                        │
│  → write .yml file                              │
│  → return confirmation                          │
├─────────────────────────────────────────────────┤
│ Cloud-specific post-save hooks (Cloud only)     │
│  → update_dashboard_cache() (Django model)      │
│  → GitService.commit() (git add + commit)       │
│  → DashboardSnapshot (thumbnail)                │
└─────────────────────────────────────────────────┘

The MCP tool handles the universal part (validate + write file). The Cloud app adds its own post-save hooks (Django cache, git commit, snapshots) on top. Non-Cloud consumers (dft agent, IDE agents) get the file write without the Django overhead.

This task is the foundation — build the tool first, then the Cloud chat task wires its "Save" button to call dispatch_tool_call("save_dashboard", ...) and adds Cloud-specific post-processing.
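The Cloud wiring might look roughly like this. The dispatch call shape and save_from_cloud_ui are assumptions; dispatch_tool_call is stubbed here so the sketch is self-contained, and the Cloud-only hooks from the diagram are shown as comments:

```python
def dispatch_tool_call(name: str, args: dict) -> dict:
    # Stub standing in for dataface.ai.tools.dispatch_tool_call, which
    # routes to the shared MCP tool implementation.
    assert name == "save_dashboard"
    return {"status": "saved", "path": f"faces/{args['path']}"}

def save_from_cloud_ui(yaml_content: str, path: str) -> dict:
    # Sketch of the Cloud "Save" button flow: shared tool first, then
    # Cloud-only post-save hooks (names from the layering diagram above).
    result = dispatch_tool_call("save_dashboard", {
        "yaml_content": yaml_content,
        "path": path,
        "commit": True,  # Cloud default: commit on save
    })
    if result["status"].startswith("saved"):
        # Cloud-specific hooks would run here, outside the shared tool:
        #   update_dashboard_cache(result["path"])   (Django model cache)
        #   GitService.commit(result["path"])        (if not auto-committed)
        #   DashboardSnapshot generation             (thumbnail)
        pass
    return result
```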

Git Commit Behavior

The save_dashboard tool has an optional commit parameter that controls whether a git commit is created after writing the file:

"commit": {
    "type": "boolean",
    "description": "Git add + commit after saving (default from config)",
}

The default is configurable globally via DatafaceAIContext (or dataface.yml / environment):

  • Cloud (Suite): Default commit=True — the Cloud app manages the git repo; saving a dashboard should commit it so it appears in the project history. The Cloud app may also add its own post-save hooks (Django cache update, snapshot generation) on top.
  • IDE / MCP server (Cursor, Claude Code, Codex): Default commit=False — the user manages their own git workflow. The tool writes the file; the user decides when to commit.
  • Terminal (dft agent): Default commit=False — same as IDE. The agent writes files to the project; the user commits when ready.

The per-call commit parameter overrides the default. If not provided, the context default applies.

Implementation: DatafaceAIContext gets an auto_commit_saves: bool = False field. The Cloud app sets this to True when constructing the context. The MCP server and CLI agent leave it as False. save_dashboard() checks the commit param and falls back to context.auto_commit_saves.
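A sketch of that fallback logic (the dataclass is trimmed to the one field named in the text; effective_commit is an illustrative helper name):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DatafaceAIContext:
    # Field name from the text above; all other context fields elided.
    auto_commit_saves: bool = False

def effective_commit(context: DatafaceAIContext,
                     commit: Optional[bool]) -> bool:
    # Per-call commit parameter wins; None means "use the context default".
    return context.auto_commit_saves if commit is None else commit
```

Using Optional[bool] (rather than a bool with a default) is what makes "not provided" distinguishable from an explicit commit=False override.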

Future Considerations (Not M1)

  • delete_dashboard — remove a saved dashboard
  • rename_dashboard — move/rename
  • save_query — persist a tested SQL query as a reusable partial
  • Undo — track saves in session so the agent can revert

Implementation Progress

  • Added save_dashboard to the canonical tool schemas (dataface/ai/tool_schemas.py), OpenAI wrapper surface (dataface/ai/tools.py), MCP server registration (dataface/ai/mcp/server.py), and shared dispatch layer.
  • Added DatafaceAIContext.auto_commit_saves and implemented commit override behavior in the tool (commit param overrides context default).
  • Implemented save_dashboard() in dataface/ai/mcp/tools.py with path scoping via resolve_dashboard_path(), compile-before-write validation, overwrite protection, nested directory creation, and optional single-file git commit.
  • Key contract: file save success (success: True) is independent of optional git commit. Commit failure sets status: "saved_commit_failed" but success stays True and errors stays [].
  • 12 focused tests in tests/core/test_mcp.py::TestSaveDashboard:
     • valid save creates file
     • invalid YAML does not write
     • conflict without overwrite
     • overwrite replaces file
     • path traversal rejected
     • nested directories created
     • missing .yml extension rejected
     • context auto-commit default honored
     • explicit commit: false overrides context default
     • defaults to faces/ when context is unscoped
     • commit failure keeps successful write (success: True)
     • commit refuses dirty index
  • Contract tests in tests/ai/test_tool_contracts.py::TestSaveDashboardContract.
  • Dispatch integration test in tests/core/test_ai_tools.py::TestDispatchToolCall.
  • Updated the dashboard-building skill doc to include the save workflow.

QA Exploration

  • [x] N/A — non-UI task (MCP tool implementation with unit/contract tests)

Review Feedback

  • Previous review findings (both verified as resolved 2026-03-25):
     1. Unscoped default path handling: save_dashboard() with an unscoped DatafaceAIContext() falls back to cwd()/faces/. Verified by test_save_defaults_to_faces_directory_when_context_is_unscoped.
     2. Commit-failure response marking save as failed: On commit failure, success stays True, errors stays [], and only status changes to "saved_commit_failed". Verified by test_save_commit_failure_keeps_successful_write.
  • Focused tests (37/37 passed): uv run pytest tests/core/test_mcp.py::TestSaveDashboard tests/core/test_ai_tools.py tests/ai/test_tool_contracts.py::TestSaveDashboardContract -v
  • Full CI (just ci) passed — 3512 passed, 40 skipped, 3 xfailed (2026-03-25). Lint, format, mypy (Python 3.10 + 3.14), and full test suite all green.
  • [x] Review cleared