# Add 'describe' or 'text' render output format for AI agents
## Problem
AI agents need a compact, human-readable text representation of dashboards for reasoning, comparison, and summarization. The JSON format (PR #623, merged) provides structured data but is verbose and not designed for direct LLM consumption. A text format that templates over the JSON output would give agents a token-efficient markdown summary without any LLM dependency.
## Context

- Foundation: `format="json"` now exists (PR #623). It walks the layout tree, executes queries, resolves charts, and returns resolved objects + data as JSON. The text format builds on top of this.
- `dataface/core/render/json_format.py` — `render_face_json()` returns the structured data we'll template over
- `dataface/core/render/renderer.py` — format dispatch (add `elif format == "text"`)
- `dataface/core/render/terminal.py` / `terminal_charts.py` — existing terminal text rendering (different purpose but similar pattern for wiring)
- `dataface/ai/mcp/tools.py` — `render_dashboard(format=...)` already supports the format param
- `dataface/ai/tool_schemas.py` — RENDER_DASHBOARD schema enum needs "text" added
- `apps/playground/routes.py` — playground render dispatch needs a "text" case
- `apps/playground/static/index.html` — format-selector dropdown needs a "Text" option
- `dataface/cli/commands/render.py` — CLI already handles the json/terminal stdout path
- `dataface/cli/main.py` — `--format` help text needs "text" added
- No LLM or API key required — purely deterministic template formatting
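The dispatch wiring described above could look roughly like the sketch below. `render_face_json` and `render_face_text` are named in this doc, but the stub bodies and the `render_face` entry point here are placeholders, not the real `renderer.py` signatures.

```python
def render_face_json(face):
    # Stand-in for the existing JSON path from PR #623.
    return {"title": face["title"]}

def render_face_text(face):
    # Stand-in for the new text path; templates over the JSON dict.
    return f"# {render_face_json(face)['title']}\n"

def render_face_svg(face):
    # Stand-in for the default SVG path.
    return f"<svg><!-- {face['title']} --></svg>"

def render_face(face, format="svg"):
    # json and text both return early, before any SVG work happens.
    if format == "json":
        return render_face_json(face)
    elif format == "text":
        return render_face_text(face)
    return render_face_svg(face)
```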
## Possible Solutions

- **Recommended** — deterministic markdown template over JSON output. Call `render_face_json()` internally (or reuse its tree walk) and format the resolved data into compact markdown. For each chart: show type, field mappings, and a data summary (row count, value ranges for numerics, distinct values for categoricals). For KPIs: show metric name and value. No LLM, no API key, fast and free.
- LLM-powered natural-language insights ("Revenue grew 30% QoQ..."). Requires an API key, adds latency and cost. Good as a future `insights` mode but wrong for this task.
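The per-column summary in the recommended option (ranges for numerics, distinct values for categoricals) can be sketched as below. The helper names and the distinct-value cutoff are illustrative assumptions, not the actual dataface helpers.

```python
def summarize_column(name, values, max_distinct=6):
    # Numeric columns get a min–max range; bools are excluded from "numeric".
    if all(isinstance(v, (int, float)) and not isinstance(v, bool) for v in values):
        return f"{name}: {min(values)}–{max(values)}"
    # Categorical columns list distinct values, truncated when there are many.
    distinct = sorted(set(values))
    if len(distinct) > max_distinct:
        return f"{name}: {len(distinct)} distinct values"
    return f"{name}: {', '.join(map(str, distinct))}"

def summarize_rows(rows):
    # One compact line per chart: row count plus one summary per column.
    parts = [f"{len(rows)} rows"]
    for col in rows[0]:
        parts.append(summarize_column(col, [r[col] for r in rows]))
    return " | ".join(parts)
```

Keeping the cutoff small is what makes the output token-efficient: a categorical column with hundreds of values collapses to a single count.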
## Plan

Files to create:

- `dataface/core/render/text_format.py` — `render_face_text()` that takes the JSON dict (from `_face_to_dict`) and produces markdown text
- `tests/core/test_text_format.py` — tests for text output content

Files to modify:

- `dataface/core/render/renderer.py` — add `elif format == "text"` branch
- `dataface/cli/commands/render.py` — text prints to stdout like json/terminal
- `dataface/cli/main.py` — update `--format` help text
- `dataface/ai/tool_schemas.py` — add "text" to format enum
- `apps/playground/routes.py` — add text format case in render dispatch
- `apps/playground/static/index.html` — add Text option to format-selector dropdown
Example output:

```markdown
# Revenue Dashboard

## Revenue Over Time (line)
- x: month, y: revenue
- 12 rows | month: Jan–Dec | revenue: $100K–$1.2M

## Total Revenue (kpi)
- metric: revenue = $4,200,000

## Sales by Region (bar)
- x: region, y: sales, color: category
- 8 rows | 4 regions | sales: $50K–$800K
```
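Output like the example above could come from a small template over the JSON dict. The sketch below assumes a simplified dict shape (`title`, `charts`, `fields`, `summary` keys are illustrative); the real shape produced by `_face_to_dict` may differ.

```python
def render_face_text(face):
    # Face title becomes the top-level markdown heading.
    lines = [f"# {face['title']}"]
    for chart in face["charts"]:
        # Each chart gets its own section: "## <title> (<type>)".
        lines.append(f"## {chart['title']} ({chart['type']})")
        if chart["type"] == "kpi":
            # KPIs are a single metric/value line.
            lines.append(f"- metric: {chart['metric']} = {chart['value']}")
        else:
            # Other charts get field mappings plus a precomputed data summary.
            mappings = ", ".join(f"{k}: {v}" for k, v in chart["fields"].items())
            lines.append(f"- {mappings}")
            lines.append(f"- {chart['summary']}")
    return "\n".join(lines)
```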
Steps:
1. Write failing test: render a face with format="text", assert output contains chart type, field names, data summary
2. Implement render_face_text() — reuse _face_to_dict from json_format, template to markdown
3. Wire into renderer.py, CLI, MCP schema, playground routes + HTML dropdown
4. Run `just ci`
## Implementation Progress

- GitHub issue: https://github.com/fivetran/dataface/issues/369
- Foundation: `format="json"` merged in PR #623
- Created `dataface/core/render/text_format.py` — reuses `_face_to_dict` from `json_format.py`, templates into compact markdown
- Wired `format="text"` into `renderer.py` (early return before SVG, like JSON)
- CLI: text prints to stdout (`dataface/cli/commands/render.py`), help updated in `main.py`
- MCP: added "text" to format enum in `dataface/ai/tool_schemas.py`
- Playground: added text case in `apps/playground/routes.py` (wrapped in `<pre>`) and `<option>` in `static/index.html`
- Tests: 10 tests in `tests/core/test_text_format.py` covering face title, chart types, field mappings, row counts, numeric ranges, categorical values (few/many), KPIs, nested faces, multiple charts
- All 2738 tests pass; formatting and mypy clean on all changed files
## QA Exploration
- [x] QA exploration completed (N/A — rendering/CLI/tooling task validated by tests and playground route wiring)
## Review Feedback
- [x] Review cleared