Dataface Tasks

Launch docs and external readiness

ID: M3_PUBLIC_LAUNCH-MCP_ANALYST_AGENT-02
Status: not_started
Priority: p0
Milestone: m3-public-launch
Owner: data-ai-engineer-architect

Problem

External users have no documentation for the Dataface MCP server. The existing resources (YAML reference, design guides) are internal prompts embedded as MCP resources, not standalone docs that a new user can discover, read, and follow. There are no quickstart guides for configuring the MCP server with Claude Desktop, Cursor, or other MCP-compatible clients. There are no end-to-end examples showing how an agent should explore a schema, write a query, and build a dashboard. Without executable, externally published documentation, public launch will produce users who cannot get started and a stream of issues that internal docs already answer.
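For the quickstart guides, the doc set will need a client configuration snippet. The `mcpServers` block below follows the standard Claude Desktop `claude_desktop_config.json` shape; the package name, command, and environment variable are assumptions, since launch packaging for the Dataface server is not yet defined:

```json
{
  "mcpServers": {
    "dataface": {
      "command": "npx",
      "args": ["-y", "@dataface/mcp-server"],
      "env": {
        "DATAFACE_API_KEY": "<your-api-key>"
      }
    }
  }
}
```

Whatever the final install story is, the published snippet should be copy-pasteable and verified against a clean client install before launch.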

Context

  • Even if AI agent tool interfaces, execution workflows, and eval-driven behavior tuning all work, launch will still fail if external users and operators cannot understand setup, limits, and expected behavior from publishable docs.
  • This task is about making the product externally explainable: clear setup guidance, examples, troubleshooting, and boundary-setting around what is and is not supported at launch.
  • Expected touchpoints include dataface/ai/, MCP/tool contracts, cloud chat surfaces, eval runners, prompt artifacts, user-facing docs, operator notes, and any examples or screenshots needed to make the launch story concrete.

Possible Solutions

  • A - Ship with only internal notes and ad hoc examples: faster, but it shifts launch confusion onto support and sales conversations.
  • B - Recommended: produce a focused external-readiness doc set covering setup, primary workflows, limitations, troubleshooting, and operator/admin guidance where needed.
  • C - Delay docs until after launch traffic appears: saves time now, but increases launch-day friction and inconsistent messaging.

Plan

  1. Define the minimum documentation set needed for external users, internal operators, and anyone supporting AI agent tool interfaces, execution workflows, and eval-driven behavior tuning at launch.
  2. Draft or update the core guides, examples, known-limits sections, and troubleshooting notes using the actual launch scope.
  3. Review the docs against the product behavior and remove any claims that are not yet supportable in the code or operations model.
  4. Link the publishable docs to follow-up maintenance owners so launch documentation does not drift immediately after release.
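One concrete deliverable from steps 1 and 2 is the end-to-end example called out in the problem statement: schema exploration, query, dashboard. A minimal sketch of the tool-call sequence such an example might document follows; the tool names (`list_tables`, `describe_table`, `run_query`, `create_dashboard`) and their signatures are assumptions, not the actual Dataface MCP tool contract, and the bodies are stubs standing in for real server calls:

```python
# Hypothetical agent workflow for the quickstart: explore schema,
# write a query, build a dashboard. All tool names are assumed.

def list_tables():
    # Stub: the real tool would return tables from the connected warehouse.
    return ["orders", "customers"]

def describe_table(name):
    # Stub: the real tool would return column metadata for one table.
    schemas = {
        "orders": ["id", "customer_id", "total"],
        "customers": ["id", "name"],
    }
    return schemas[name]

def run_query(sql):
    # Stub: the real tool would execute SQL and return result rows.
    return [{"name": "Acme", "revenue": 1200}]

def create_dashboard(title, panels):
    # Stub: the real tool would persist a dashboard and return its spec.
    return {"title": title, "panels": panels}

# 1. Explore the schema.
tables = list_tables()
order_columns = describe_table("orders")

# 2. Write a query grounded in what exploration found.
rows = run_query(
    "SELECT c.name, SUM(o.total) AS revenue "
    "FROM orders o JOIN customers c ON o.customer_id = c.id "
    "GROUP BY c.name"
)

# 3. Build a dashboard from the query result.
dashboard = create_dashboard("Revenue by customer", panels=[{"rows": rows}])
```

The published version of this example should use the real tool contract and be executed against a live server before release, so the docs never show a sequence the product cannot perform.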

Implementation Progress

Review Feedback

  • [ ] Review cleared