Dataface Tasks

Wire Playground AI to use MCP tools instead of bespoke tool set

ID: MCP_ANALYST_AGENT-WIRE_PLAYGROUND_AI_TO_USE_MCP_TOOLS_INSTEAD_OF_BESPOKE_TOOL_SET
Status: completed
Priority: p0
Milestone: m1-ft-analytics-analyst-pilot
Owner: data-ai-engineer-architect
Completed by: dave
Completed: 2026-03-13

Problem

The Playground AI agent maintains its own bespoke tool set (validate_yaml, test_yaml_execution, execute_query_research) in apps/playground/ai_service.py, completely disconnected from the MCP server's canonical tools (render_dashboard, execute_query, catalog, list_sources, search_dashboards). This means:

  • Two divergent tool surfaces to maintain
  • Playground agent lacks catalog/schema context that MCP tools provide
  • Playground agent can't use render_dashboard for proper validation
  • Any MCP tool improvements don't reach the Playground

Context

Playground AI files:

  • apps/playground/ai_service.py — bespoke tool schemas + OpenAI chat loop
  • apps/playground/ai_service_streaming.py — streaming variant
  • apps/playground/yaml_validator.py — wrappers around dataface.validate

MCP tool files:

  • dataface/ai/mcp/tools.py — canonical tool implementations
  • dataface/ai/tool_schemas.py — canonical tool schemas
  • dataface/ai/mcp/server.py — MCP server wiring

Current Playground tools:

| Tool | What it does |
|------|--------------|
| validate_yaml | Compile-only YAML validation |
| test_yaml_execution | Compile + execute all queries |
| execute_query_research | Run arbitrary SQL for schema exploration |

MCP tools that should replace them:

| Tool | Maps to |
|------|---------|
| render_dashboard | Replaces validate_yaml + test_yaml_execution |
| execute_query | Replaces execute_query_research |
| catalog | New — gives agent schema/profile context |
| list_sources | New — lets agent discover available databases |
| search_dashboards | New — lets agent find example dashboards |

Possible Solutions

A. Direct import of MCP tool functions

Import the MCP tool implementations directly into the Playground AI service. The MCP tools in dataface/ai/mcp/tools.py are plain async functions — they don't require the MCP protocol. The Playground can await them like any other Python coroutines and feed their results into the OpenAI tool-call loop.

  • Pros: Single source of truth for tools, no protocol overhead, straightforward refactor
  • Cons: Playground and MCP server share the same adapter registry setup
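The in-process pattern can be sketched as follows. This is a minimal illustration, not the real implementation: render_dashboard here is a stand-in for the canonical async function in dataface/ai/mcp/tools.py, and the dispatch shape is assumed.

```python
import asyncio
import json

# Stand-in for a canonical MCP tool from dataface/ai/mcp/tools.py.
# The real tools are plain async callables, so the Playground can
# await them directly -- no MCP protocol involved.
async def render_dashboard(yaml_source: str) -> dict:
    return {"ok": True, "charts": 2}

# Dispatch table mapping OpenAI tool-call names to the canonical functions.
TOOLS = {"render_dashboard": render_dashboard}

async def dispatch(tool_name: str, arguments_json: str) -> str:
    """Resolve an OpenAI tool call to an MCP tool and return a JSON result."""
    func = TOOLS[tool_name]
    result = await func(**json.loads(arguments_json))
    return json.dumps(result)

# The JSON string would be fed back into the chat loop as a `tool` message.
reply = asyncio.run(dispatch("render_dashboard", '{"yaml_source": "title: demo"}'))
print(reply)
```

Because the tool functions are awaited in-process, errors surface as ordinary Python exceptions rather than protocol failures.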

B. Run MCP server as subprocess, communicate via stdio

Spawn dft mcp serve and use the MCP protocol to call tools.

  • Pros: Full protocol fidelity, tests real MCP path
  • Cons: Unnecessary complexity for an in-process app, startup latency, harder error handling

C. HTTP client to MCP embedded server

Connect to the MCP server's optional HTTP endpoint on port 8765.

  • Pros: Decoupled process
  • Cons: Requires MCP server running separately, network overhead, fragile

Plan

Approach A — direct import.

  1. Expose MCP tool schemas for OpenAI format — Create a helper in dataface/ai/tool_schemas.py that returns tool schemas in OpenAI function-calling format, derived from the canonical MCP schemas.

  2. Refactor ai_service.py — Replace _PLAYGROUND_TOOL_* definitions with imports from the canonical schemas. Replace tool dispatch to call MCP tool functions directly.

  3. Remove yaml_validator.py — Its wrappers become unnecessary once the AI service calls MCP tools.

  4. Update system prompt — The Playground AI system prompt should reference the MCP tool names and capabilities.

  5. Adapter registry alignment — Ensure the Playground's adapter registry setup is compatible with what MCP tools expect. May need to pass the registry explicitly.

  6. Update streaming variant — ai_service_streaming.py needs the same tool refactor.

  7. Test end-to-end — Verify the AI chat loop creates dashboards, validates them, explores schema via catalog, all using MCP tools.
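Step 1 above amounts to a small schema-conversion helper. A hedged sketch, assuming a canonical schema shape with name/description/input_schema fields — the real ALL_TOOLS in dataface/ai/tool_schemas.py may use different field names:

```python
# Hypothetical canonical tool schema; illustrative only.
CATALOG_TOOL = {
    "name": "catalog",
    "description": "List schemas and profile tables for a source.",
    "input_schema": {
        "type": "object",
        "properties": {"source": {"type": "string"}},
        "required": ["source"],
    },
}

def to_openai_tool(tool: dict) -> dict:
    """Wrap a canonical MCP tool schema in OpenAI function-calling format."""
    return {
        "type": "function",
        "function": {
            "name": tool["name"],
            "description": tool["description"],
            "parameters": tool["input_schema"],
        },
    }

openai_tools = [to_openai_tool(CATALOG_TOOL)]
print(openai_tools[0]["function"]["name"])
```

Deriving the OpenAI format from the canonical schemas, rather than hand-writing it, keeps the two tool surfaces from drifting apart again.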

Files to modify: - apps/playground/ai_service.py - apps/playground/ai_service_streaming.py - apps/playground/yaml_validator.py (delete) - dataface/ai/tool_schemas.py (add OpenAI format helper) - apps/playground/routes.py (adapter registry alignment)

Implementation Progress

Approach A — direct import (implemented)

Key design decisions:

  1. Added optional adapter_registry: AdapterRegistry | None to all public MCP tool functions (render_dashboard, execute_query, catalog, list_sources) and their internal helpers (_resolve_connection, _list_schema, _profile_table). Defaults to None, in which case _get_adapter_registry() is used. Fully backward-compatible.

  2. Centralized dispatch in dataface/ai/tools.dispatch_tool_call() — playground _dispatch_tool() delegates to it with playground-specific adapter_registry, dashboards_directory, and default_base_dir.

  3. Tool schemas use to_openai_tool(ALL_TOOLS) — all 6 canonical tools (render_dashboard, execute_query, catalog, list_sources, search_dashboards, review_dashboard) exposed to the AI agent.

Changes by file:

| File | Change |
|------|--------|
| dataface/ai/mcp/tools.py | Added adapter_registry parameter to 7 functions |
| dataface/ai/tools.py | Added dispatch_tool_call() with adapter_registry/dashboards_directory/default_base_dir kwargs; list_sources routing |
| apps/playground/ai_service.py | Replaced 3 bespoke tool defs with to_openai_tool(ALL_TOOLS); _dispatch_tool() delegates to dispatch_tool_call() |
| apps/playground/ai_service_streaming.py | Rewrote tool dispatch to use inherited _dispatch_tool() from AIService |
| apps/playground/yaml_validator.py | Deleted — wrappers replaced by MCP tool dispatch |
| apps/playground/tests/test_yaml_validator.py | Changed import from yaml_validator to dataface.validate |
| apps/playground/tests/test_mcp_tool_wiring.py | New — 12 tests: schema alignment, shared-dispatcher delegation, tool dispatch, adapter_registry acceptance |
| apps/playground/prompts/yaml_generation.md | All validate_yaml/test_yaml_execution/execute_query_research references → MCP tool names |
| apps/playground/prompts/dashboard_design.md | Same tool-name updates |
| apps/playground/prompts/report_generation.md | Same tool-name updates |

Fresh-worktree takeover notes:

  • Copied the implementation diff from /Users/dave.fowler/Fivetran/dataface/.worktrees/task-wire-playground-ai-to-use-mcp-tools into this clean codex/task-wire-playground-ai-to-mcp-tools branch.
  • Revalidated the focused green suite in this worktree: uv run pytest tests/core/test_ai_tools.py apps/playground/tests/test_mcp_tool_wiring.py apps/playground/tests/test_yaml_validator.py tests/core/test_mcp.py → 69 passed.
  • Validated task frontmatter with just task validate tasks/workstreams/mcp-analyst-agent/tasks/wire-playground-ai-to-use-mcp-tools-instead-of-bespoke-tool-set.md.

Review Feedback

  • uv run --project libs/cbox cbox review approved the branch with no critical or high issues.
  • Review flagged one medium hygiene issue: duplicated MCP tool instructions between ai_service.py and ai_service_streaming.py.
  • Fixed by extracting the shared prompt block into AIService._tool_instructions() and reusing it from both code paths.
  • Re-ran uv run pytest tests/core/test_ai_tools.py apps/playground/tests/test_mcp_tool_wiring.py apps/playground/tests/test_yaml_validator.py tests/core/test_mcp.py after the follow-up change → 69 passed.
  • scripts/pr-validate pre is blocked by an inherited repo formatting issue on dataface/cli/commands/inspect.py; git diff origin/main -- dataface/cli/commands/inspect.py is empty and git show origin/main:dataface/cli/commands/inspect.py | uvx black==25.12.0 --check - also fails, confirming it is pre-existing on origin/main.
  • [x] Review cleared