# Unify Cloud AI Tool Dispatch to Use Canonical MCP Tools

## Problem
The Cloud app has a bespoke tool dispatch (`_execute_tool_sync()` in `apps/cloud/apps/ai/views.py`) that handles only four tools: `validate_yaml`, `test_yaml_execution`, `execute_query`, and `write_to_editor`. Meanwhile, the canonical MCP tool layer (`dataface/ai/tools.py::handle_tool_call()` → `dataface/ai/mcp/tools.py`) supports the full tool set: `catalog`, `execute_query`, `render_dashboard`, `review_dashboard`, `search_dashboards`, `list_sources`, and soon `save_dashboard`.
This means:
1. The Cloud copilot can't use `catalog`, `render_dashboard`, `search_dashboards`, or any tool added to the MCP layer.
2. Any new tool requires adding it in two places — the MCP layer AND the Cloud bespoke dispatch.
3. The chat-first home page can't use MCP tools without building yet another dispatch.

This is the P0 foundation task — until this is done, all Cloud AI surfaces are stuck on the limited bespoke tool set.
## Context
Two tool dispatchers exist today:
| | Canonical (MCP) | Cloud Bespoke |
|---|---|---|
| File | `dataface/ai/tools.py::handle_tool_call()` | `apps/cloud/apps/ai/views.py::_execute_tool_sync()` |
| Schemas | `dataface/ai/tool_schemas.py` (6+ tools) | Hardcoded in `apps/cloud/apps/ai/service.py::get_tools()` (4 tools) |
| Used by | MCP server, terminal agent, IDE clients | Cloud dashboard editor copilot only |
| Tools | `catalog`, `execute_query`, `render_dashboard`, `review_dashboard`, `search_dashboards`, `list_sources` | `validate_yaml`, `test_yaml_execution`, `execute_query`, `write_to_editor` |
The only Cloud-specific tool is `write_to_editor`, which signals the frontend to update the code editor with new YAML. This is a UI-layer concern, not a data tool.
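To make the distinction concrete, here is a minimal sketch of `write_to_editor` as a pure presentation-layer tool: it touches no data source and simply returns a payload the Cloud frontend applies to the code editor. The field names (`type`, `yaml`) are assumptions for illustration, not the actual contract.

```python
# Hypothetical sketch: write_to_editor as a UI action, not a data tool.
# The real payload shape used by the Cloud frontend may differ.
def write_to_editor(args: dict) -> dict:
    return {
        "type": "editor_update",  # marker the frontend listens for
        "yaml": args["yaml"],     # replacement dashboard YAML to display
    }

event = write_to_editor({"yaml": "title: Sales Overview"})
```

Because the tool never reaches a database or adapter, it has no place in the canonical MCP dispatch.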
Key files:
- `dataface/ai/tools.py` — canonical dispatch + OpenAI-format tool definitions
- `dataface/ai/tool_schemas.py` — canonical schema definitions
- `dataface/ai/mcp/tools.py` — canonical tool implementations
- `apps/cloud/apps/ai/views.py` — bespoke dispatch (`_execute_tool_sync()`)
- `apps/cloud/apps/ai/service.py` — `AIService.chat_with_tools()` uses the bespoke tool list
Philosophy docs consulted:
- `.cursor/rules/anti-slop.mdc`
- `.cursor/rules/core.mdc`
- `apps/cloud/DESIGN_PHILOSOPHY.md`
- `docs/docs/contributing/architecture/index.md`
- `docs/docs/contributing/architecture/platform-overview.md`
Existing pattern followed:
- `apps/playground/ai_service.py` uses canonical MCP schemas (`ALL_TOOLS`) and routes execution through the shared dispatch with a scoped `DatafaceAIContext`.
- `tasks/workstreams/mcp-analyst-agent/tasks/wire-playground-ai-to-use-mcp-tools-instead-of-bespoke-tool-set.md` documents that migration and its tests.
## Possible Solutions
### Option A: Replace bespoke dispatch with canonical dispatch + Cloud hooks [Recommended]

Replace `_execute_tool_sync()` with a call to `handle_tool_call()` from `dataface/ai/tools.py`. Keep `write_to_editor` as a Cloud-specific post-processing hook (it's a UI action, not a data tool).

Trade-offs: Minimal code change. All MCP tools become available in Cloud immediately. `write_to_editor` stays as a presentation-layer tool handled separately.
### Option B: Make Cloud a proper MCP client

Run `dft mcp serve` as a subprocess and have the Cloud app connect to it via stdio. Full MCP protocol.

Trade-offs: Clean separation. But it adds subprocess management, stdio parsing, and latency. Overkill for in-process usage.
## Plan
Files to modify:
- `apps/cloud/apps/ai/views.py` — Replace `_execute_tool_sync()`:
  - For MCP tools (`catalog`, `execute_query`, `render_dashboard`, etc.): call `handle_tool_call(tool_name, args)`
  - For Cloud-only tools (`write_to_editor`): handle inline as a UI action
  - Pass Cloud-specific context (adapter registry from project connections) via `DatafaceAIContext`
- `apps/cloud/apps/ai/service.py` — Replace `get_tools()` call:
  - Use `dataface.ai.tools.get_tools()` (canonical) instead of the local bespoke list
  - Add `write_to_editor` to the tool list as a Cloud-specific addition
- `dataface/ai/tools.py` — May need a `DatafaceAIContext` adapter for Cloud:
  - Cloud uses `ProjectConnection` models for data sources (not filesystem-based profiles)
  - The tool dispatch needs to accept an adapter registry from the Cloud layer
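The context hand-off described above could look roughly like this. The real `DatafaceAIContext` lives in `dataface/ai` and may carry different fields; `ProjectConnection` rows are modeled here as plain dicts, so every name in this sketch is an assumption.

```python
from dataclasses import dataclass, field

# Hypothetical shape of the scoped context; the real class may differ.
@dataclass
class DatafaceAIContext:
    adapters: dict = field(default_factory=dict)  # source name -> adapter

def build_cloud_context(connections: list) -> DatafaceAIContext:
    # Map Cloud ProjectConnection rows (modeled as dicts here) to an
    # adapter registry, so the canonical dispatch can resolve sources
    # without filesystem-based profiles.
    return DatafaceAIContext(
        adapters={c["name"]: c["adapter"] for c in connections}
    )

ctx = build_cloud_context([{"name": "warehouse", "adapter": object()}])
```

The point of the adapter registry is that tool execution never needs to know whether a source came from a profile file or a database row.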
Steps:
1. Wire `handle_tool_call()` as the primary dispatch in Cloud AI views
2. Keep `write_to_editor` as a Cloud-only tool handled outside the canonical dispatch
3. Ensure the adapter registry from Cloud connections is passed through to tool execution
4. Update `AIService` to use canonical tool schemas
5. Test: existing copilot still works, and now also supports `catalog`, `render_dashboard`, `search_dashboards`
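Step 4 reduces to concatenating the canonical schemas with the one Cloud-only tool. `ALL_TOOLS` below stands in for the list exported by `dataface/ai/tool_schemas.py`, and the OpenAI function-tool dict shape is an assumption for illustration.

```python
# Sketch of the tool-list merge in the Cloud AIService (names assumed).
ALL_TOOLS = [
    {"type": "function", "function": {"name": "catalog"}},
    {"type": "function", "function": {"name": "execute_query"}},
]

WRITE_TO_EDITOR_TOOL = {
    "type": "function",
    "function": {"name": "write_to_editor"},
}

def _get_tools() -> list:
    # Canonical set first, Cloud-specific UI tool appended last.
    return [*ALL_TOOLS, WRITE_TO_EDITOR_TOOL]

names = [t["function"]["name"] for t in _get_tools()]
```

With this shape, a regression test can pin the advertised tool set to "canonical plus `write_to_editor`" so drift between the two dispatchers cannot reappear silently.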
Estimated effort: ~1 day
## Implementation Progress
- Added a regression test for `dataface.ai.tools.handle_tool_call(..., context=...)` so Cloud can scope canonical tool execution with project-specific adapters and directories.
- Added a Cloud regression test that pins `AIService._get_tools()` to the full canonical MCP tool set plus the UI-only `write_to_editor` tool.
- Updated `dataface/ai/tools.py` so `handle_tool_call()` accepts an optional `DatafaceAIContext` and forwards it to `dispatch_tool_call()`.
- Replaced the hardcoded Cloud tool list in `apps/cloud/apps/ai/service.py` with canonical MCP schemas from `ALL_TOOLS`, kept `write_to_editor` local, and updated the tool instructions to reference `render_dashboard`, `catalog`, `list_sources`, `search_dashboards`, and `review_dashboard`.
- Reworked `apps/cloud/apps/ai/views.py::_execute_tool_sync()` into a thin wrapper over the canonical `handle_tool_call()`, with Cloud-specific `DatafaceAIContext` scoping and local `write_to_editor` handling preserved.
- Focused validation:
  - `uv run pytest tests/core/test_ai_tools.py tests/cloud/test_cloud_ai_tool_wiring.py`
  - `uv run pytest tests/core/test_ai_tools.py tests/core/test_review_dashboard.py apps/playground/tests/test_mcp_tool_wiring.py tests/cloud/test_cloud_ai_tool_wiring.py`
## QA Exploration
- [x] QA exploration completed (N/A: backend-only tool wiring change)
## Review Feedback
- Shared tool and Playground regression suites passed after the dispatch change, covering the new `handle_tool_call(..., context=...)` contract and confirming Playground still uses the same canonical MCP path.
- `just ci` was run twice. Both runs failed in unrelated repo-level tests outside this change:
  - `tests/core/test_inspect_server.py::TestServeExampleRouting::test_csv_example_uses_examples_root_for_assets` failed once with a transient 500 during the xdist run, then passed in isolation.
  - `tests/core/test_render_cli.py::TestRenderFile::test_render_file_terminal_format` failed once due to a DuckDB file lock on `examples/examples.duckdb`, then passed in isolation.
  - `tests/faketran/test_application_models.py::test_fake_companies_populate_application_database_models[fake_companies.pied_piper-240-23]` timed out once in the xdist run, then passed in isolation.
- Isolated verification after the flaky CI failures:
  - `uv run pytest tests/core/test_render_cli.py::TestRenderFile::test_render_file_terminal_format -vv`
  - `uv run pytest 'tests/faketran/test_application_models.py::test_fake_companies_populate_application_database_models[fake_companies.pied_piper-240-23]' -vv`
- [x] Review cleared