type: task
id: MCP_ANALYST_AGENT-WIRE_PLAYGROUND_AI_TO_USE_MCP_TOOLS_INSTEAD_OF_BESPOKE_TOOL_SET
title: Wire Playground AI to use MCP tools instead of bespoke tool set
description: 'The Playground app currently maintains its own bespoke AI tools - validate_yaml, test_yaml_execution, execute_query_research - in ai_service.py, completely disconnected from the MCP server tool set - render_dashboard, execute_query, catalog, list_sources, search_dashboards. The Playground should be refactored to use the MCP tools as its sole AI tool interface, eliminating the duplicate tool definitions. Done means: the Playground AI agent uses MCP tool schemas and implementations, the bespoke tool definitions in ai_service.py are removed, the Playground agent has access to catalog/render_dashboard/execute_query/list_sources/search_dashboards, and the AI chat loop works end-to-end with MCP tools for dashboard creation, validation, and iteration.'
milestone: m1-ft-analytics-analyst-pilot
owner: data-ai-engineer-architect
status: completed
priority: p0
completed_at: '2026-03-13T20:38:17-07:00'
completed_by: dave
The Playground AI agent maintains its own bespoke tool set (validate_yaml, test_yaml_execution, execute_query_research) in apps/playground/ai_service.py, completely disconnected from the MCP server's canonical tools (render_dashboard, execute_query, catalog, list_sources, search_dashboards). This means, among other problems, that the Playground agent cannot use render_dashboard for proper validation, and the two tool sets drift independently.
Playground AI files:
- apps/playground/ai_service.py — bespoke tool schemas + OpenAI chat loop
- apps/playground/ai_service_streaming.py — streaming variant
- apps/playground/yaml_validator.py — wrappers around dataface.validate
MCP tool files:
- dataface/ai/mcp/tools.py — canonical tool implementations
- dataface/ai/tool_schemas.py — canonical tool schemas
- dataface/ai/mcp/server.py — MCP server wiring
Current Playground tools:
| Tool | What it does |
|------|-------------|
| validate_yaml | Compile-only YAML validation |
| test_yaml_execution | Compile + execute all queries |
| execute_query_research | Run arbitrary SQL for schema exploration |
MCP tools that should replace them:
| Tool | Maps to |
|------|---------|
| render_dashboard | Replaces validate_yaml + test_yaml_execution |
| execute_query | Replaces execute_query_research |
| catalog | New — gives agent schema/profile context |
| list_sources | New — lets agent discover available databases |
| search_dashboards | New — lets agent find example dashboards |
Options considered:
1. Direct import — import the MCP tool implementations directly into the Playground AI service. The MCP tools in dataface/ai/mcp/tools.py are plain async functions; they don't require the MCP protocol. The Playground can call them as regular Python functions and feed their results into the OpenAI tool-call loop.
2. Subprocess — spawn dft mcp serve and use the MCP protocol to call tools.
3. HTTP — connect to the MCP server's optional HTTP endpoint on port 8765.
Decision: Approach A — direct import.
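Under the direct-import approach, tool dispatch reduces to a plain function call. A minimal sketch of the pattern, where `execute_query`, `TOOL_FUNCTIONS`, and the message shapes are illustrative stand-ins rather than the actual dataface API:

```python
import asyncio
import json

# Stand-in for a canonical MCP tool: a plain async function that
# returns a JSON-serializable result (not the real implementation).
async def execute_query(sql: str) -> dict:
    return {"rows": [[1]], "columns": ["n"]}

TOOL_FUNCTIONS = {"execute_query": execute_query}

def dispatch_tool_call(name: str, arguments: str) -> str:
    """Call an MCP tool directly -- no MCP protocol, no subprocess --
    and serialize the result for the OpenAI tool-call loop."""
    func = TOOL_FUNCTIONS[name]
    result = asyncio.run(func(**json.loads(arguments)))
    return json.dumps(result)

# The assistant's tool call arrives as a name plus a JSON argument
# string; the serialized result goes back as a role="tool" message.
content = dispatch_tool_call("execute_query", '{"sql": "SELECT 1"}')
tool_message = {"role": "tool", "tool_call_id": "call_1", "content": content}
```

Because the tools are ordinary coroutines, no server process or transport layer sits between the agent and the tool implementations.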
Implementation plan:
1. Expose MCP tool schemas in OpenAI format — create a helper in dataface/ai/tool_schemas.py that returns tool schemas in OpenAI function-calling format, derived from the canonical MCP schemas.
2. Refactor ai_service.py — replace the _PLAYGROUND_TOOL_* definitions with imports from the canonical schemas, and change tool dispatch to call the MCP tool functions directly.
3. Remove yaml_validator.py — its wrappers become unnecessary once the AI service calls MCP tools.
4. Update the system prompt — the Playground AI system prompt should reference the MCP tool names and capabilities.
5. Align the adapter registry — ensure the Playground's adapter registry setup is compatible with what the MCP tools expect; the registry may need to be passed explicitly.
6. Update the streaming variant — ai_service_streaming.py needs the same tool refactor.
7. Test end-to-end — verify the AI chat loop creates dashboards, validates them, and explores schema via catalog, all using MCP tools.
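The schema helper in the first step is essentially a format shim. A sketch of the conversion, assuming the canonical schemas are dicts with name/description/JSON-Schema input fields (the field names and `CATALOG_SCHEMA` here are illustrative, not the real dataface schema shape):

```python
# Illustrative canonical MCP-style tool schema.
CATALOG_SCHEMA = {
    "name": "catalog",
    "description": "List schemas, tables, and column profiles.",
    "input_schema": {
        "type": "object",
        "properties": {"source": {"type": "string"}},
        "required": [],
    },
}

def to_openai_tool(schema: dict) -> dict:
    """Wrap a canonical tool schema in OpenAI function-calling format.
    The JSON-Schema input block maps directly onto "parameters"."""
    return {
        "type": "function",
        "function": {
            "name": schema["name"],
            "description": schema["description"],
            "parameters": schema["input_schema"],
        },
    }

openai_tools = [to_openai_tool(CATALOG_SCHEMA)]
```

Deriving the OpenAI format from the canonical schemas at call time means there is a single source of truth; the Playground can never drift from the MCP definitions.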
Files to modify:
- apps/playground/ai_service.py
- apps/playground/ai_service_streaming.py
- apps/playground/yaml_validator.py (delete)
- dataface/ai/tool_schemas.py (add OpenAI format helper)
- apps/playground/routes.py (adapter registry alignment)
Key design decisions:
- Added optional adapter_registry: AdapterRegistry | None to all public MCP tool functions (render_dashboard, execute_query, catalog, list_sources) and their internal helpers (_resolve_connection, _list_schema, _profile_table). Defaults to None → _get_adapter_registry(). Fully backward-compatible.
- Centralized dispatch in dataface/ai/tools.dispatch_tool_call() — the playground's _dispatch_tool() delegates to it with playground-specific adapter_registry, dashboards_directory, and default_base_dir.
- Tool schemas use to_openai_tool(ALL_TOOLS) — all 6 canonical tools (render_dashboard, execute_query, catalog, list_sources, search_dashboards, review_dashboard) are exposed to the AI agent.
Changes by file:
| File | Change |
|---|---|
| dataface/ai/mcp/tools.py | Added adapter_registry parameter to 7 functions |
| dataface/ai/tools.py | Added dispatch_tool_call() with adapter_registry/dashboards_directory/default_base_dir kwargs; list_sources routing |
| apps/playground/ai_service.py | Replaced 3 bespoke tool defs with to_openai_tool(ALL_TOOLS); _dispatch_tool() delegates to dispatch_tool_call() |
| apps/playground/ai_service_streaming.py | Rewrote tool dispatch to use inherited _dispatch_tool() from AIService |
| apps/playground/yaml_validator.py | Deleted — wrappers replaced by MCP tool dispatch |
| apps/playground/tests/test_yaml_validator.py | Changed import from yaml_validator to dataface.validate |
| apps/playground/tests/test_mcp_tool_wiring.py | New — 12 tests: schema alignment, shared-dispatcher delegation, tool dispatch, adapter_registry acceptance |
| apps/playground/prompts/yaml_generation.md | All validate_yaml/test_yaml_execution/execute_query_research references → MCP tool names |
| apps/playground/prompts/dashboard_design.md | Same tool-name updates |
| apps/playground/prompts/report_generation.md | Same tool-name updates |
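The schema-alignment checks in test_mcp_tool_wiring.py can be sketched in simplified form; `ALL_TOOLS` and `DISPATCH_TABLE` below are standalone stand-ins, not the real dataface objects:

```python
# Stand-in for the canonical tool list and the dispatcher's routing
# table; the real test imports both from the dataface/ai modules.
ALL_TOOLS = [
    {"name": "render_dashboard"}, {"name": "execute_query"},
    {"name": "catalog"}, {"name": "list_sources"},
    {"name": "search_dashboards"}, {"name": "review_dashboard"},
]
DISPATCH_TABLE = {tool["name"]: (lambda **kwargs: kwargs) for tool in ALL_TOOLS}

def test_every_exposed_tool_is_dispatchable():
    """Every tool schema shown to the AI agent must have a matching
    dispatch entry -- otherwise the agent can call a tool that the
    Playground cannot execute."""
    exposed = {tool["name"] for tool in ALL_TOOLS}
    assert exposed == set(DISPATCH_TABLE)
```

Pinning the two sets against each other is what keeps the Playground from regressing into a second, divergent tool list.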
Fresh-worktree takeover notes:
- Took over the work from /Users/dave.fowler/Fivetran/dataface/.worktrees/task-wire-playground-ai-to-use-mcp-tools into this clean codex/task-wire-playground-ai-to-mcp-tools branch.
- uv run pytest tests/core/test_ai_tools.py apps/playground/tests/test_mcp_tool_wiring.py apps/playground/tests/test_yaml_validator.py tests/core/test_mcp.py → 69 passed.
- just task validate tasks/workstreams/mcp-analyst-agent/tasks/wire-playground-ai-to-use-mcp-tools-instead-of-bespoke-tool-set.md passed.
<!-- Reviewer comments, what was changed in response, and sign-off. -->
- uv run --project libs/cbox cbox review approved the branch with no critical or high issues.
- Follow-up from review: deduplicated the tool instructions shared by ai_service.py and ai_service_streaming.py by extracting AIService._tool_instructions() and reusing it from both code paths.
- Re-ran uv run pytest tests/core/test_ai_tools.py apps/playground/tests/test_mcp_tool_wiring.py apps/playground/tests/test_yaml_validator.py tests/core/test_mcp.py after the follow-up change → 69 passed.
- scripts/pr-validate pre is blocked by an inherited repo formatting issue on dataface/cli/commands/inspect.py: git diff origin/main -- dataface/cli/commands/inspect.py is empty, and git show origin/main:dataface/cli/commands/inspect.py | uvx black==25.12.0 --check - also fails, confirming the issue is pre-existing on origin/main.