Dataface Tasks

Unify Cloud AI Tool Dispatch to Use Canonical MCP Tools

ID: MCP_ANALYST_AGENT-UNIFY_CLOUD_AI_TOOL_DISPATCH_TO_USE_CANONICAL_MCP_TOOLS
Status: completed
Priority: p0
Milestone: m1-ft-analytics-analyst-pilot
Owner: data-ai-engineer-architect
Initiative: ai-agent-surfaces
Completed by: dave
Completed: 2026-03-16

Problem

The Cloud app has a bespoke tool dispatch (_execute_tool_sync() in apps/cloud/apps/ai/views.py) that handles only 4 tools: validate_yaml, test_yaml_execution, execute_query, write_to_editor. Meanwhile, the canonical MCP tool layer (handle_tool_call() in dataface/ai/tools.py, with implementations in dataface/ai/mcp/tools.py) supports the full tool set: catalog, execute_query, render_dashboard, review_dashboard, search_dashboards, list_sources, and soon save_dashboard.

This means:

  1. The Cloud copilot can't use catalog, render_dashboard, search_dashboards, or any tool added to the MCP layer.
  2. Any new tool requires adding it in two places — the MCP layer AND the Cloud bespoke dispatch.
  3. The chat-first home page can't use MCP tools without building yet another dispatch.

This is the P0 foundation task — until this is done, all Cloud AI surfaces are stuck on the limited bespoke tool set.

Context

Two tool dispatchers exist today:

|         | Canonical (MCP) | Cloud Bespoke |
| ------- | --------------- | ------------- |
| File    | dataface/ai/tools.py::handle_tool_call() | apps/cloud/apps/ai/views.py::_execute_tool_sync() |
| Schemas | dataface/ai/tool_schemas.py (6+ tools) | Hardcoded in apps/cloud/apps/ai/service.py::get_tools() (4 tools) |
| Used by | MCP server, terminal agent, IDE clients | Cloud dashboard editor copilot only |
| Tools   | catalog, execute_query, render_dashboard, review_dashboard, search_dashboards, list_sources | validate_yaml, test_yaml_execution, execute_query, write_to_editor |

The only Cloud-specific tool is write_to_editor — which signals the frontend to update the code editor with new YAML. This is a UI-layer concern, not a data tool.

Key files:

  • dataface/ai/tools.py — canonical dispatch + OpenAI-format tool definitions
  • dataface/ai/tool_schemas.py — canonical schema definitions
  • dataface/ai/mcp/tools.py — canonical tool implementations
  • apps/cloud/apps/ai/views.py — bespoke dispatch (_execute_tool_sync())
  • apps/cloud/apps/ai/service.py — AIService.chat_with_tools() uses the bespoke tool list

Philosophy docs consulted:

  • .cursor/rules/anti-slop.mdc
  • .cursor/rules/core.mdc
  • apps/cloud/DESIGN_PHILOSOPHY.md
  • docs/docs/contributing/architecture/index.md
  • docs/docs/contributing/architecture/platform-overview.md

Existing pattern followed:

  • apps/playground/ai_service.py uses canonical MCP schemas (ALL_TOOLS) and routes execution through shared dispatch with a scoped DatafaceAIContext.
  • tasks/workstreams/mcp-analyst-agent/tasks/wire-playground-ai-to-use-mcp-tools-instead-of-bespoke-tool-set.md documents that migration and its tests.

Possible Solutions

Option A: Call the canonical dispatch in-process

Replace _execute_tool_sync() with a call to handle_tool_call() from dataface/ai/tools.py. Keep write_to_editor as a Cloud-specific post-processing hook (it's a UI action, not a data tool).

Trade-offs: Minimal code change. All MCP tools become available in Cloud immediately. write_to_editor stays as a presentation-layer tool handled separately.

Option B: Make Cloud a proper MCP client

Run dft mcp serve as a subprocess and have the Cloud app connect to it via stdio. Full MCP protocol.

Trade-offs: Clean separation. But adds subprocess management, stdio parsing, latency. Overkill for in-process usage.

Plan

Files to modify:

  1. apps/cloud/apps/ai/views.py — Replace _execute_tool_sync():
     • For MCP tools (catalog, execute_query, render_dashboard, etc.): call handle_tool_call(tool_name, args)
     • For Cloud-only tools (write_to_editor): handle inline as a UI action
     • Pass Cloud-specific context (adapter registry from project connections) via DatafaceAIContext

  2. apps/cloud/apps/ai/service.py — Replace get_tools() call:
     • Use dataface.ai.tools.get_tools() (canonical) instead of the local bespoke list
     • Add write_to_editor to the tool list as a Cloud-specific addition

  3. dataface/ai/tools.py — May need a DatafaceAIContext adapter for Cloud:
     • Cloud uses ProjectConnection models for data sources (not filesystem-based profiles)
     • The tool dispatch needs to accept an adapter registry from the Cloud layer

Steps:

  1. Wire handle_tool_call() as the primary dispatch in Cloud AI views.
  2. Keep write_to_editor as a Cloud-only tool handled outside the canonical dispatch.
  3. Ensure the adapter registry from Cloud connections is passed through to tool execution.
  4. Update AIService to use canonical tool schemas.
  5. Test: the existing copilot still works, and now also supports catalog, render_dashboard, search_dashboards.

Estimated effort: ~1 day

Implementation Progress

  • Added a regression test for dataface.ai.tools.handle_tool_call(..., context=...) so Cloud can scope canonical tool execution with project-specific adapters and directories.
  • Added a Cloud regression test that pins AIService._get_tools() to the full canonical MCP tool set plus the UI-only write_to_editor tool.
  • Updated dataface/ai/tools.py so handle_tool_call() accepts optional DatafaceAIContext and forwards it to dispatch_tool_call().
  • Replaced the hardcoded Cloud tool list in apps/cloud/apps/ai/service.py with canonical MCP schemas from ALL_TOOLS, kept write_to_editor local, and updated the tool instructions to reference render_dashboard, catalog, list_sources, search_dashboards, and review_dashboard.
  • Reworked apps/cloud/apps/ai/views.py::_execute_tool_sync() into a thin wrapper over canonical handle_tool_call(), with Cloud-specific DatafaceAIContext scoping and local write_to_editor handling preserved.
  • Focused validation:
  • uv run pytest tests/core/test_ai_tools.py tests/cloud/test_cloud_ai_tool_wiring.py
  • uv run pytest tests/core/test_ai_tools.py tests/core/test_review_dashboard.py apps/playground/tests/test_mcp_tool_wiring.py tests/cloud/test_cloud_ai_tool_wiring.py
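The handle_tool_call(..., context=...) regression described above can be sketched with stand-ins like these (dispatch_tool_call() and the signatures are assumed from this task, not copied from the codebase):

```python
# Stand-ins for the dataface.ai internals under test: dispatch_tool_call() is
# the lower-level executor, and handle_tool_call() must forward the optional
# context to it unchanged.
captured = {}

def dispatch_tool_call(tool_name, args, context=None):
    captured["context"] = context
    return "ok"

def handle_tool_call(tool_name, args, context=None):
    # The regression pins this pass-through behavior.
    return dispatch_tool_call(tool_name, args, context=context)

def test_handle_tool_call_forwards_context():
    sentinel = object()
    assert handle_tool_call("execute_query", {"sql": "select 1"}, context=sentinel) == "ok"
    assert captured["context"] is sentinel
```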

QA Exploration

  • [x] QA exploration completed (N/A: backend-only tool wiring change)

Review Feedback

  • Shared tool and Playground regression suites passed after the dispatch change, covering the new handle_tool_call(..., context=...) contract and confirming Playground still uses the same canonical MCP path.
  • just ci was run twice. Both runs failed in unrelated repo-level tests outside this change:
  • tests/core/test_inspect_server.py::TestServeExampleRouting::test_csv_example_uses_examples_root_for_assets failed once with a transient 500 during the xdist run, then passed in isolation.
  • tests/core/test_render_cli.py::TestRenderFile::test_render_file_terminal_format failed once due to a DuckDB file lock on examples/examples.duckdb, then passed in isolation.
  • tests/faketran/test_application_models.py::test_fake_companies_populate_application_database_models[fake_companies.pied_piper-240-23] timed out once in the xdist run, then passed in isolation.
  • Isolated verification after the flaky CI failures:
  • uv run pytest tests/core/test_render_cli.py::TestRenderFile::test_render_file_terminal_format -vv
  • uv run pytest 'tests/faketran/test_application_models.py::test_fake_companies_populate_application_database_models[fake_companies.pied_piper-240-23]' -vv
  • [x] Review cleared