Dashboard Factory
Purpose
Quickstarts, examples, template patterns, and dashboard production standards. This workstream owns the process and tooling for producing dashboards at scale — the "factory line" that turns connector schemas into polished dashboard packs efficiently. Includes the quickstart/tutorial authoring pipeline, the docs-as-code rendering system, template quality rubrics, and the operational cadence for dashboard production throughput. This is the how of dashboard production — distinct from ft-dash-packs, which is the what (the actual connector-specific dashboard catalog). Adjacent to ft-dash-packs (which is the output), graph-library (which sets the visual quality bar), and mcp-analyst-agent (which may automate parts of the production process).
Owner
- Data Analysis Evangelist & AI Training
Tasks by Milestone
A runnable prototype path exists for a repeatable process for producing, reviewing, and publishing quickstarts/examples, with concrete artifacts that prove the flow works end-to-end in the current codebase. Core assumptions are documented, known constraints are explicit, and the team can explain what is real versus mocked without ambiguity.
- Prototype gaps and follow-on capture Completed — Document top gaps and risks in publication throughput operations that must be addressed next.
- Prototype implementation path Completed — Implement a runnable end-to-end prototype path for template production pipeline.
- Prototype validation and proof Completed — Validate quality rubric + review process with concrete proof artifacts and repeatable steps.
Internal analysts can execute at least one real weekly workflow that depends on the repeatable process for producing, reviewing, and publishing quickstarts/examples in the 5T Analytics environment, without bespoke engineering intervention for every run. Instrumentation and feedback capture are in place so failures, friction points, and adoption gaps are visible and triaged with owners.
- Add dashboard review-and-revise workflow Completed — Define and pilot a second-pass dashboard review workflow that inspects rendered dashboards with real data, captures rev…
- Add dft init for dbt-native repo bootstrap Completed — Implement a first-class dft project bootstrap command for existing repos, especially dbt repos that will store dashboar…
- Add master plans daily activity page Completed — Track completed tasks by day with owners, completers, and linked PRs, including merged PRs not tied to tasks.
- Build tasks DuckDB SQL metrics pipeline for milestone dashboards Completed — Export planning data to Parquet, query via DuckDB, and drive milestone header visualizations from SQL so progress views…
- Create analytics repo Dataface branch and bootstrap workflow Completed — PR #725 at 2026-03-23T00:08:40-07:00 — Set up the internal analytics repo as a first-class Dataface example-customer repo for analyst work. Create and documen…
- Own Vega-Lite schema snapshot and chart defaults Completed — Vendor the Vega-Lite schema as a tracked compile-time artifact, add a dedicated chart defaults YAML for Dataface house…
- Add render command for precomputed dashboard data artifacts Cancelled — superseded by add-resolved-yaml-render-output-format (frozen valid YAML) and completed add-json-render-outp…
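The tasks-to-DuckDB metrics pipeline above can be sketched as follows. The field names (`milestone`, `status`), the Parquet path, and the SQL are illustrative assumptions, not the actual planning-data schema; the pure-Python function mirrors what the DuckDB query would compute over the exported Parquet.

```python
# Hypothetical sketch of the tasks -> Parquet -> DuckDB metrics flow.
# Column names and file paths are assumptions for illustration only.

# The DuckDB query the pipeline would run against the Parquet export:
MILESTONE_PROGRESS_SQL = """
SELECT milestone,
       count(*)                                     AS total,
       count(*) FILTER (WHERE status = 'Completed') AS completed
FROM read_parquet('tasks.parquet')
GROUP BY milestone
ORDER BY milestone
"""

def milestone_progress(tasks):
    """Pure-Python equivalent of the SQL above: per-milestone (total, completed)."""
    progress = {}
    for task in tasks:
        total, completed = progress.get(task["milestone"], (0, 0))
        progress[task["milestone"]] = (
            total + 1,
            completed + (task["status"] == "Completed"),
        )
    return progress

tasks = [
    {"milestone": "prototype", "status": "Completed"},
    {"milestone": "prototype", "status": "Completed"},
    {"milestone": "internal-adoption", "status": "Completed"},
    {"milestone": "internal-adoption", "status": "Cancelled"},
]
print(milestone_progress(tasks))
# -> {'prototype': (2, 2), 'internal-adoption': (2, 1)}
```

The milestone header visualizations would then bind to the query result rather than to hand-maintained counts.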
The repeatable process for producing, reviewing, and publishing quickstarts/examples is hardened enough for regular use by multiple internal teams and initial design partners, with a predictable response loop for issues and requests. Quality expectations are documented, and prioritized improvements from real usage are actively incorporated into delivery.
- Adoption hardening for internal teams — Harden template production pipeline for repeated use across multiple internal teams and first design partners.
- Artifact digest and Slack sharing workflow — Design a durable artifact review workflow that publishes shareable PNG/SVG dashboard snapshots with brief commentary to…
- Define dashboard quality rubric v1 — Create rubric/checklist for quickstarts/examples used for internal and design-partner review.
- Design-partner feedback loop operations — Operationalize rapid feedback-to-fix loop for quality rubric + review process with explicit decision logs.
- Quality standards and guardrails — Define and enforce quality standards for publication throughput operations to keep output consistent as contributors ex…
- Define dashboard reference boundary and canon strategy — Decide which third-party dashboard artifacts stay external, which lessons should be distilled into Dataface guidance, a…
- Looker-to-Dataface migration skill via Looker API or CLI — Deliver a repo skill and/or CLI workflow that uses the Looker API or Looker CLI to export dashboards and map them to Da…
- Study dashboard composition references and extract reusable lessons — Define a repeatable way to study admired dashboard compositions larger than single charts, collect reference artifacts,…
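The Looker-to-Dataface migration task above implies a per-element mapping step. A minimal sketch, assuming an exported Looker dashboard element shape (title, type, and a query with model/view/fields) and a hypothetical Dataface chart dict as the target; both sides are illustrative, and the real mapping would be driven by the Looker API export format and the Dataface dashboard spec.

```python
# Illustrative Looker-element -> Dataface-chart mapping.
# The Looker element shape and the Dataface target keys are assumptions.

def map_looker_element(element):
    """Map one exported Looker dashboard element to a Dataface-style chart dict."""
    query = element.get("query", {})
    return {
        "title": element.get("title", "Untitled"),
        # Dataface source assumed to be addressable as model.view
        "source": f"{query.get('model')}.{query.get('view')}",
        "columns": query.get("fields", []),
        "chart_type": element.get("type", "table"),
    }

looker_element = {
    "title": "Orders by Week",
    "type": "looker_line",
    "query": {
        "model": "ecommerce",
        "view": "orders",
        "fields": ["orders.created_week", "orders.count"],
    },
}
print(map_looker_element(looker_element))
```

Elements that fail to map cleanly (custom visualizations, merged queries) would fall out as the manual-review residue the skill needs to flag rather than silently drop.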
Launch scope for the repeatable process for producing, reviewing, and publishing quickstarts/examples is complete, externally explainable, and supportable: user-facing behavior is stable, documentation is publishable, and operational ownership is explicit. Remaining gaps are non-blocking, risk-assessed, and tracked as post-launch follow-up rather than unresolved launch debt.
- Launch docs and external readiness — Publish external-facing documentation and examples for quality rubric + review process that are executable by new users.
- Launch operations and reliability readiness — Finalize operational readiness for publication throughput operations: telemetry, alerting, support ownership, and incid…
- Public launch scope completion — Complete launch-critical scope for template production pipeline with production-safe behavior and rollback clarity.
- Operationalize quickstart production line — Run weekly production cadence for quickstart/example dashboards with review gates and publish tracking.
Post-launch stabilization is complete for the repeatable process for producing, reviewing, and publishing quickstarts/examples: recurring incidents are reduced, support burden is lower, and quality gates are enforced consistently before release. The team has a repeatable operating model for maintenance, regression prevention, and measured reliability improvements.
- Build GitHub Actions and Pages publisher for Dataface (GitHub OSS activity dashboards) Waiting on build-github-activity-extract-and-snapshot-pipeline — Create an easy-to-copy GitHub Actions workflow and Pages deployment path that refreshes GitHub activity snapshots, runs…
- Build GitHub activity extract and snapshot pipeline (GitHub OSS activity dashboards) Waiting on design-github-oss-dashboard-topology-and-metric-contract — Build the GitHub GraphQL and REST extraction flow that produces versioned CSV or Parquet snapshots for static Dataface…
- Design GitHub OSS dashboard topology and metric contract (GitHub OSS activity dashboards) — Define the board set, personas, metric families, entity grains, and cross-dashboard link matrix for GitHub repository a…
- Implement GitHub OSS dashboard pack v1 (GitHub OSS activity dashboards) Waiting on build-github-actions-and-pages-publisher-for-dataface — Implement the interlinked GitHub activity dashboard pack, including contributor drill-through pages and a real example…
- Regression prevention and quality gates — Add or enforce regression gates around quality rubric + review process so release quality is sustained automatically.
- Sustainable operating model — Document and adopt sustainable operating model for publication throughput operations across support, triage, and releas…
- v1.0 stability and defect burn-down — Run stability program for template production pipeline with recurring defect burn-down and reliability trend tracking.
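The GitHub GraphQL extraction task in the list above can be sketched as payload construction plus versioned snapshot naming. The query fields shown are standard GitHub GraphQL repository fields, but the exact metric set and the snapshot naming scheme are assumptions; the real pipeline would POST the payload to the GitHub GraphQL endpoint and write the result as CSV or Parquet.

```python
# Sketch of the GitHub GraphQL extraction step: build the request payload
# and a date-versioned snapshot path. No network calls are made here.
import datetime as dt
import json

REPO_ACTIVITY_QUERY = """
query($owner: String!, $name: String!) {
  repository(owner: $owner, name: $name) {
    stargazerCount
    forkCount
    issues(states: OPEN) { totalCount }
    pullRequests(states: MERGED) { totalCount }
  }
}
"""

def build_payload(owner, name):
    """JSON body for a POST to the GitHub GraphQL API."""
    return json.dumps(
        {"query": REPO_ACTIVITY_QUERY, "variables": {"owner": owner, "name": name}}
    )

def snapshot_path(repo, day):
    """Date-versioned snapshot path (naming scheme is an assumption)."""
    return f"snapshots/{repo}/{day.isoformat()}.parquet"

payload = build_payload("dataface", "dataface")
print(snapshot_path("dataface", dt.date(2026, 3, 23)))
# -> snapshots/dataface/2026-03-23.parquet
```

Keeping snapshots immutable and date-keyed is what lets the static dashboards and the Actions/Pages publisher re-render history without re-querying the API.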
v1.2 delivers meaningful depth improvements in the repeatable process for producing, reviewing, and publishing quickstarts/examples based on observed usage and retention signals, not just roadmap intent. Enhancements improve real customer outcomes, and release readiness is demonstrated through metrics, regression coverage, and clear migration guidance where relevant.
- Quality and performance improvements — Ship measurable quality/performance improvements in quality rubric + review process tied to user-facing outcomes.
- v1.2 depth expansion — Deliver depth expansion in template production pipeline prioritized by observed usage and retention outcomes.
- v1.2 release and migration readiness — Prepare v1.2 release/migration readiness for publication throughput operations, including communication and upgrade gui…
Long-horizon opportunities for the repeatable process for producing, reviewing, and publishing quickstarts/examples are captured as concrete hypotheses with user impact, prerequisites, and evaluation criteria. Ideas are ranked by strategic value and feasibility so future investment decisions can be made quickly with less rediscovery.
- Experiment design for future bets — Design validation experiments for publication throughput operations so future bets can be tested before major investmen…
- Future opportunity research — Capture long-horizon opportunities for template production pipeline with user impact and strategic fit.
- Prerequisite and dependency mapping — Map enabling prerequisites and dependencies for quality rubric + review process to reduce future startup cost.