Wire Playground AI to use MCP tools instead of bespoke tool set
Problem
The Playground AI agent maintains its own bespoke tool set (validate_yaml, test_yaml_execution, execute_query_research) in apps/playground/ai_service.py, completely disconnected from the MCP server's canonical tools (render_dashboard, execute_query, catalog, list_sources, search_dashboards). This means:
- Two divergent tool surfaces to maintain
- Playground agent lacks catalog/schema context that MCP tools provide
- Playground agent can't use `render_dashboard` for proper validation
- Any MCP tool improvements don't reach the Playground
Context
Playground AI files:
- apps/playground/ai_service.py — bespoke tool schemas + OpenAI chat loop
- apps/playground/ai_service_streaming.py — streaming variant
- apps/playground/yaml_validator.py — wrappers around dataface.validate
MCP tool files:
- dataface/ai/mcp/tools.py — canonical tool implementations
- dataface/ai/tool_schemas.py — canonical tool schemas
- dataface/ai/mcp/server.py — MCP server wiring
Current Playground tools:
| Tool | What it does |
|------|-------------|
| validate_yaml | Compile-only YAML validation |
| test_yaml_execution | Compile + execute all queries |
| execute_query_research | Run arbitrary SQL for schema exploration |
MCP tools that should replace them:
| Tool | Maps to |
|------|---------|
| render_dashboard | Replaces validate_yaml + test_yaml_execution |
| execute_query | Replaces execute_query_research |
| catalog | New — gives agent schema/profile context |
| list_sources | New — lets agent discover available databases |
| search_dashboards | New — lets agent find example dashboards |
Possible Solutions
A. Direct import of MCP tool functions — Recommended
Import the MCP tool implementations directly into the Playground AI service. The MCP tools in dataface/ai/mcp/tools.py are plain async functions — they don't require the MCP protocol. The Playground can call them as regular Python functions and feed their results into the OpenAI tool-call loop.
- Pros: Single source of truth for tools, no protocol overhead, straightforward refactor
- Cons: Playground and MCP server share the same adapter registry setup
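A minimal sketch of the direct-import pattern: the OpenAI tool-call loop resolves tool names against the shared MCP functions and returns their results as JSON strings. The `render_dashboard`/`execute_query` bodies below are stand-ins for the real implementations in `dataface/ai/mcp/tools.py`; `dispatch_tool_call` here is illustrative, not the actual Playground code.

```python
import asyncio
import json

# Stand-ins for the canonical MCP tool functions. In the real refactor these
# would simply be imported, e.g.:
#   from dataface.ai.mcp.tools import render_dashboard, execute_query
async def render_dashboard(yaml_source: str) -> dict:
    return {"ok": True, "widgets": 3}

async def execute_query(sql: str) -> dict:
    return {"columns": ["n"], "rows": [[1]]}

# The Playground's chat loop dispatches tool calls straight to these
# functions — no MCP protocol in between.
TOOL_FUNCTIONS = {
    "render_dashboard": render_dashboard,
    "execute_query": execute_query,
}

async def dispatch_tool_call(name: str, arguments_json: str) -> str:
    """Resolve an OpenAI tool call against the shared MCP tool functions."""
    fn = TOOL_FUNCTIONS[name]
    result = await fn(**json.loads(arguments_json))
    # OpenAI tool results are passed back to the model as strings.
    return json.dumps(result)

result = asyncio.run(dispatch_tool_call("execute_query", '{"sql": "select 1"}'))
```

Because the tool functions are plain coroutines, the same dispatch table can serve both the MCP server and the Playground loop.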
B. Run MCP server as subprocess, communicate via stdio
Spawn `dft mcp serve` and use the MCP protocol to call tools.
- Pros: Full protocol fidelity, tests real MCP path
- Cons: Unnecessary complexity for an in-process app, startup latency, harder error handling
C. HTTP client to MCP embedded server
Connect to the MCP server's optional HTTP endpoint on port 8765.
- Pros: Decoupled process
- Cons: Requires MCP server running separately, network overhead, fragile
Plan
Approach A — direct import.
- Expose MCP tool schemas for OpenAI format — Create a helper in `dataface/ai/tool_schemas.py` that returns tool schemas in OpenAI function-calling format, derived from the canonical MCP schemas.
- Refactor `ai_service.py` — Replace `_PLAYGROUND_TOOL_*` definitions with imports from the canonical schemas. Replace the tool dispatch to call MCP tool functions directly.
- Remove `yaml_validator.py` — Its wrappers become unnecessary once the AI service calls MCP tools.
- Update system prompt — The Playground AI system prompt should reference the MCP tool names and capabilities.
- Adapter registry alignment — Ensure the Playground's adapter registry setup is compatible with what MCP tools expect. May need to pass the registry explicitly.
- Update streaming variant — `ai_service_streaming.py` needs the same tool refactor.
- Test end-to-end — Verify the AI chat loop creates dashboards, validates them, and explores schema via catalog, all using MCP tools.
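The schema-conversion helper from the first plan step could look like the sketch below. The canonical schema shape (`name`/`description`/`input_schema` fields) is an assumption for illustration; the real entries live in `dataface/ai/tool_schemas.py`.

```python
# Hypothetical shape of a canonical MCP tool schema entry; the actual field
# names in dataface/ai/tool_schemas.py may differ.
CANONICAL_TOOLS = [
    {
        "name": "render_dashboard",
        "description": "Compile and execute a dashboard YAML definition.",
        "input_schema": {
            "type": "object",
            "properties": {"yaml_source": {"type": "string"}},
            "required": ["yaml_source"],
        },
    },
]

def to_openai_tool(tools: list[dict]) -> list[dict]:
    """Wrap canonical MCP tool schemas in OpenAI function-calling format."""
    return [
        {
            "type": "function",
            "function": {
                "name": tool["name"],
                "description": tool["description"],
                # OpenAI calls the argument schema "parameters".
                "parameters": tool["input_schema"],
            },
        }
        for tool in tools
    ]

openai_tools = to_openai_tool(CANONICAL_TOOLS)
```

Keeping the conversion in one helper means the Playground never hand-maintains OpenAI schemas — any change to a canonical tool flows through automatically.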
Files to modify:
- apps/playground/ai_service.py
- apps/playground/ai_service_streaming.py
- apps/playground/yaml_validator.py (delete)
- dataface/ai/tool_schemas.py (add OpenAI format helper)
- apps/playground/routes.py (adapter registry alignment)
Implementation Progress
Approach A — direct import (implemented)
Key design decisions:
- Added optional `adapter_registry: AdapterRegistry | None` to all public MCP tool functions (`render_dashboard`, `execute_query`, `catalog`, `list_sources`) and their internal helpers (`_resolve_connection`, `_list_schema`, `_profile_table`). Defaults to `None` → `_get_adapter_registry()`. Fully backward-compatible.
- Centralized dispatch in `dataface/ai/tools.dispatch_tool_call()` — `playground_dispatch_tool()` delegates to it with playground-specific `adapter_registry`, `dashboards_directory`, and `default_base_dir`.
- Tool schemas use `to_openai_tool(ALL_TOOLS)` — all 6 canonical tools (`render_dashboard`, `execute_query`, `catalog`, `list_sources`, `search_dashboards`, `review_dashboard`) exposed to the AI agent.
Changes by file:
| File | Change |
|---|---|
| `dataface/ai/mcp/tools.py` | Added `adapter_registry` parameter to 7 functions |
| `dataface/ai/tools.py` | Added `dispatch_tool_call()` with `adapter_registry`/`dashboards_directory`/`default_base_dir` kwargs; `list_sources` routing |
| `apps/playground/ai_service.py` | Replaced 3 bespoke tool defs with `to_openai_tool(ALL_TOOLS)`; `_dispatch_tool()` delegates to `dispatch_tool_call()` |
| `apps/playground/ai_service_streaming.py` | Rewrote tool dispatch to use inherited `_dispatch_tool()` from `AIService` |
| `apps/playground/yaml_validator.py` | Deleted — wrappers replaced by MCP tool dispatch |
| `apps/playground/tests/test_yaml_validator.py` | Changed import from `yaml_validator` to `dataface.validate` |
| `apps/playground/tests/test_mcp_tool_wiring.py` | New — 12 tests: schema alignment, shared-dispatcher delegation, tool dispatch, `adapter_registry` acceptance |
| `apps/playground/prompts/yaml_generation.md` | All `validate_yaml`/`test_yaml_execution`/`execute_query_research` references → MCP tool names |
| `apps/playground/prompts/dashboard_design.md` | Same tool-name updates |
| `apps/playground/prompts/report_generation.md` | Same tool-name updates |
Fresh-worktree takeover notes:
- Copied the implementation diff from `/Users/dave.fowler/Fivetran/dataface/.worktrees/task-wire-playground-ai-to-use-mcp-tools` into this clean `codex/task-wire-playground-ai-to-mcp-tools` branch.
- Revalidated the focused green suite in this worktree: `uv run pytest tests/core/test_ai_tools.py apps/playground/tests/test_mcp_tool_wiring.py apps/playground/tests/test_yaml_validator.py tests/core/test_mcp.py` → 69 passed.
- Validated task frontmatter with `just task validate tasks/workstreams/mcp-analyst-agent/tasks/wire-playground-ai-to-use-mcp-tools-instead-of-bespoke-tool-set.md`.
Review Feedback
- `uv run --project libs/cbox cbox review` approved the branch with no critical or high issues.
- Review flagged one medium hygiene issue: duplicated MCP tool instructions between `ai_service.py` and `ai_service_streaming.py`.
- Fixed by extracting the shared prompt block into `AIService._tool_instructions()` and reusing it from both code paths.
- Re-ran `uv run pytest tests/core/test_ai_tools.py apps/playground/tests/test_mcp_tool_wiring.py apps/playground/tests/test_yaml_validator.py tests/core/test_mcp.py` after the follow-up change → 69 passed.
- `scripts/pr-validate pre` is blocked by an inherited repo formatting issue on `dataface/cli/commands/inspect.py`; `git diff origin/main -- dataface/cli/commands/inspect.py` is empty and `git show origin/main:dataface/cli/commands/inspect.py | uvx black==25.12.0 --check -` also fails, confirming it is pre-existing on `origin/main`.
- [x] Review cleared