ft dash packs
Purpose
Connector-native dashboard packs and quickstarts for top Fivetran sources. This is the content catalog: pre-built dashboard packages tied to specific connectors (Stripe, Salesforce, HubSpot, etc.) that give users instant analytics out of the box when they connect a source. Each pack includes curated dashboards, KPI definitions, and query logic tailored to that connector's schema. This is the output of the dashboard production process — distinct from dashboard-factory, which owns the tooling and process for building packs at scale. Adjacent to graph-library (which provides the chart quality bar) and dft-core (which executes the queries).
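The pack structure described above (curated dashboards plus KPI definitions and query logic per connector) can be sketched as a minimal data model. This is an illustrative assumption, not the actual catalog schema; the names `KPI`, `DashboardPack`, `connector`, and the example query are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class KPI:
    """A single KPI definition within a pack (illustrative shape)."""
    name: str
    query: str  # query logic tailored to the connector's schema

@dataclass
class DashboardPack:
    """One connector-native pack: curated dashboards plus KPI definitions."""
    connector: str  # e.g. "stripe", "salesforce", "hubspot"
    dashboards: list = field(default_factory=list)  # curated dashboard titles
    kpis: list = field(default_factory=list)        # KPI definitions

# Hypothetical example: a Stripe pack with one dashboard and one KPI.
stripe_pack = DashboardPack(
    connector="stripe",
    dashboards=["Revenue Overview"],
    kpis=[KPI(name="MRR", query="select sum(amount) from stripe.subscriptions")],
)
print(stripe_pack.connector, len(stripe_pack.dashboards), len(stripe_pack.kpis))
```

In this sketch the pack is pure data, which matches the split described above: dashboard-factory would own the tooling that produces these objects, and dft-core would execute the `query` strings.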
Owner
- Data Analysis Evangelist & AI Training
Tasks by Milestone
A runnable prototype path exists for connector-specific dashboard packs and KPI narratives for Fivetran sources, with concrete artifacts that prove the flow works end-to-end in the current codebase. Core assumptions are documented, known constraints are explicit, and the team can explain what is real versus mocked without ambiguity.
- Prototype gaps and follow-on capture Completed — Document top gaps and risks in pack publishing workflow that must be addressed next.
- Prototype implementation path Completed — Implement a runnable end-to-end prototype path for connector pack coverage.
- Prototype validation and proof Completed — Validate dashboard narrative quality with concrete proof artifacts and repeatable steps.
Internal analysts can execute at least one weekly real workflow that depends on connector-specific dashboard packs and KPI narratives for Fivetran sources in the 5T Analytics environment, without bespoke engineering intervention for every run. Instrumentation and feedback capture are in place so failures, friction points, and adoption gaps are visible and triaged with owners.
- Populate Faketran application database models for fake companies Completed — Audit vendored Faketran fake-company sources for internal application database coverage, then populate and validate the…
- Transform mockusign_dbt into realistic dbt project with staging/marts — Shape the canonical e-sign demo dbt (historically `mockusign_dbt`; may be `dundersign_dbt` or successor) as a Nimble-al…
- Vendor faketran as a monorepo lib and replace mockusign/gruber datasets Completed — Pull `faketran` into this monorepo as a repo-owned library/module and replace the current mockusign/gruber example data…
Connector-specific dashboard packs and KPI narratives for Fivetran sources are hardened enough for regular use by multiple internal teams and initial design partners, with a predictable response loop for issues and requests. Quality expectations are documented, and prioritized improvements from real usage are actively incorporated into delivery.
- Adoption hardening for internal teams — Harden connector pack coverage for repeated use across multiple internal teams and first design partners.
- Design-partner feedback loop operations — Operationalize rapid feedback-to-fix loop for dashboard narrative quality with explicit decision logs.
- Quality standards and guardrails — Define and enforce quality standards for pack publishing workflow to keep output consistent as contributors expand.
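One way to enforce the quality standards in the last bullet is a pre-publish gate that rejects packs missing required pieces. The checks below are illustrative assumptions about what the guardrails might verify, not the actual publishing workflow:

```python
def validate_pack(pack: dict) -> list:
    """Return guardrail violations for one pack manifest (illustrative checks only)."""
    errors = []
    if not pack.get("connector"):
        errors.append("pack is missing a connector name")
    if not pack.get("dashboards"):
        errors.append("pack has no dashboards")
    for kpi in pack.get("kpis", []):
        if not kpi.get("query"):
            errors.append(f"KPI {kpi.get('name', '?')!r} has no query logic")
    return errors

# Hypothetical gate run: an incomplete pack fails, a complete one passes.
bad = {"connector": "hubspot", "dashboards": [], "kpis": [{"name": "Deals Won"}]}
good = {"connector": "stripe", "dashboards": ["Revenue Overview"],
        "kpis": [{"name": "MRR", "query": "select sum(amount) from subscriptions"}]}
print(len(validate_pack(bad)), len(validate_pack(good)))
```

Running a gate like this in CI before release is one way to keep output consistent as contributors expand, as the milestone describes.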
Launch scope for connector-specific dashboard packs and KPI narratives for Fivetran sources is complete, externally explainable, and supportable: user-facing behavior is stable, documentation is publishable, and operational ownership is explicit. Remaining gaps are non-blocking, risk-assessed, and tracked as post-launch follow-up rather than unresolved launch debt.
- Launch docs and external readiness — Publish external-facing documentation and examples for dashboard narrative quality that are executable by new users.
- Launch operations and reliability readiness — Finalize operational readiness for pack publishing workflow: telemetry, alerting, support ownership, and incident playb…
- Public launch scope completion — Complete launch-critical scope for connector pack coverage with production-safe behavior and rollback clarity.
- Release top 20 connector packs with 5 dashboards each — Publish 100 launch dashboards across top 20 Fivetran connectors with QA signoff.
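The launch target in the last bullet (top 20 connectors × 5 dashboards each = 100 dashboards) can be tracked with a small coverage check. The connector names and counts here are placeholders, not real launch data:

```python
TARGET_CONNECTORS = 20
DASHBOARDS_PER_PACK = 5

def launch_coverage(packs: dict) -> tuple:
    """Count dashboards published and connectors meeting the per-pack target."""
    total = sum(len(dashboards) for dashboards in packs.values())
    complete = sum(1 for dashboards in packs.values()
                   if len(dashboards) >= DASHBOARDS_PER_PACK)
    goal = TARGET_CONNECTORS * DASHBOARDS_PER_PACK  # 100 launch dashboards
    return total, complete, goal

# Hypothetical snapshot: two packs in flight, one at full coverage.
packs = {"stripe": ["d1", "d2", "d3", "d4", "d5"], "salesforce": ["d1", "d2"]}
total, complete, goal = launch_coverage(packs)
print(total, complete, goal)
```

A report like this makes the QA signoff criterion concrete: launch-ready means `complete` reaches 20 and `total` reaches 100.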
Post-launch stabilization is complete for connector-specific dashboard packs and KPI narratives for Fivetran sources: recurring incidents are reduced, support burden is lower, and quality gates are enforced consistently before release. The team has a repeatable operating model for maintenance, regression prevention, and measured reliability improvements.
- Regression prevention and quality gates — Add or enforce regression gates around dashboard narrative quality so release quality is sustained automatically.
- Sustainable operating model — Document and adopt sustainable operating model for pack publishing workflow across support, triage, and release cadence.
- v1.0 stability and defect burn-down — Run stability program for connector pack coverage with recurring defect burn-down and reliability trend tracking.
v1.2 delivers meaningful depth improvements in connector-specific dashboard packs and KPI narratives for Fivetran sources based on observed usage and retention signals, not just roadmap intent. Enhancements improve real customer outcomes, and release readiness is demonstrated through metrics, regression coverage, and clear migration guidance where relevant.
- Quality and performance improvements — Ship measurable quality/performance improvements in dashboard narrative quality tied to user-facing outcomes.
- v1.2 depth expansion — Deliver depth expansion in connector pack coverage prioritized by observed usage and retention outcomes.
- v1.2 release and migration readiness — Prepare v1.2 release/migration readiness for pack publishing workflow, including communication and upgrade guidance.
Long-horizon opportunities for connector-specific dashboard packs and KPI narratives for Fivetran sources are captured as concrete hypotheses with user impact, prerequisites, and evaluation criteria. Ideas are ranked by strategic value and feasibility so future investment decisions can be made quickly with less rediscovery.
- Experiment design for future bets — Design validation experiments for pack publishing workflow so future bets can be tested before major investment.
- Future opportunity research — Capture long-horizon opportunities for connector pack coverage with user impact and strategic fit.
- future: Dashboards on permissions/access — Define future connector dashboard pack focused on permissions/access analytics and governance visibility.
- future: GitHub data — Define future GitHub analytics dashboard pack concept with engineering productivity and delivery health narratives.
- Prerequisite and dependency mapping — Map enabling prerequisites and dependencies for dashboard narrative quality to reduce future startup cost.