Dataface Tasks

Quality standards and guardrails

IDM2_INTERNAL_ADOPTION_DESIGN_PARTNERS-FT_DASH_PACKS-03
Status: not_started
Priority: p1
Milestone: m2-internal-adoption-design-partners
Owner: data-analysis-evangelist-ai-training

Problem

More people are contributing connector dashboard packs (internal teams, design partners, potentially AI-assisted authoring), yet there are no defined quality standards for what a publishable pack looks like. Without explicit criteria for KPI accuracy, dashboard narrative coherence, chart labeling, query correctness, and connector schema coverage, pack quality will drift: some packs will be polished while others ship with broken queries or misleading metrics. Inconsistent quality undermines the brand promise that Fivetran connector packs provide instant, trustworthy analytics out of the box.

Context

  • Teams judge the readiness of connector-specific dashboard packs and KPI narratives for Fivetran sources inconsistently because there is no single quality bar covering correctness, UX clarity, failure handling, and maintenance expectations.
  • Without explicit standards, work gets approved on local intuition and later re-opened when another reviewer finds a gap that was never written down.
  • Expected touchpoints include dashboard pack YAML, dbt/example assets, connector fixtures, quickstart docs, review checklists, and any eval or QA surfaces used to prove a change is safe to ship.

Possible Solutions

  • A - Rely on experienced reviewers to enforce quality informally: flexible, but it does not scale and leaves decisions hard to reproduce.
  • B - Recommended: define a concise quality rubric plus guardrails that specify acceptance criteria, required evidence, and clear anti-goals so reviews are consistent (a rough sketch of the rubric shape follows this list).
  • C - Block all new work until a comprehensive handbook exists: safer in theory, but too heavy for the milestone and likely to stall momentum.
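
As a rough illustration of option B, the rubric could live as structured data rather than prose so that review tooling can consume it. The sketch below is a minimal Python shape under assumed names: QualityCriterion, the example criteria, and the evidence strings are all hypothetical, not an agreed format; the real criteria would come out of step 1 of the plan.

```python
from dataclasses import dataclass, field

@dataclass
class QualityCriterion:
    """One row of the pack quality rubric (hypothetical shape)."""
    name: str                     # e.g. "KPI accuracy"
    acceptance: str               # what a reviewer must see to approve
    required_evidence: list[str]  # artifacts the author attaches to the review
    anti_goals: list[str] = field(default_factory=list)  # explicit guardrails / out-of-scope cases

# Illustrative entries only; real criteria come from the failure-mode inventory.
RUBRIC = [
    QualityCriterion(
        name="KPI accuracy",
        acceptance="Every KPI matches a hand-checked value on the connector fixture data.",
        required_evidence=["fixture query output", "expected-value worksheet"],
        anti_goals=["Do not ship KPIs that depend on columns absent from the connector schema."],
    ),
    QualityCriterion(
        name="Chart labeling",
        acceptance="Every chart has a title, axis labels, and units where applicable.",
        required_evidence=["rendered dashboard preview or screenshot"],
    ),
]
```

Keeping the acceptance wording and required evidence next to each criterion is what makes "review cleared" reproducible: a reviewer checks the evidence list instead of relying on local intuition.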

Plan

  1. List the failure modes and review disagreements that matter most for connector-specific dashboard packs and KPI narratives for Fivetran sources, using recent work as concrete examples.
  2. Turn those into a small set of quality standards, required validation evidence, and explicit guardrails for unsupported or risky cases.
  3. Update the relevant docs, task/checklist expectations, and test or QA hooks so the standards are actually enforced (a possible lint-style hook is sketched after this list).
  4. Use the rubric on a representative set of recent or in-flight items and tighten the wording anywhere it still leaves too much ambiguity.
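
One way to make step 3 concrete is a small lint script run in CI over each pack definition. Everything schema-shaped here is an assumption for illustration: the pack.yml layout and the charts/kpis/title/query/test field names would need to match the actual dashboard pack YAML.

```python
import sys
import yaml  # PyYAML; assumed available in the QA environment

def lint_pack(path: str) -> list[str]:
    """Return human-readable rubric violations for one pack definition."""
    with open(path) as f:
        pack = yaml.safe_load(f) or {}
    problems = []
    # Guardrail: every chart must be labeled so dashboards are self-explanatory.
    for chart in pack.get("charts", []):
        if not chart.get("title"):
            problems.append(f"chart {chart.get('id', '?')} is missing a title")
    # Guardrail: every KPI must name its source query and point at a
    # validation test so accuracy is checkable against connector fixtures.
    for kpi in pack.get("kpis", []):
        if not kpi.get("query"):
            problems.append(f"KPI {kpi.get('name', '?')} has no query")
        if not kpi.get("test"):
            problems.append(f"KPI {kpi.get('name', '?')} has no validation test reference")
    return problems

if __name__ == "__main__":
    violations = [v for p in sys.argv[1:] for v in lint_pack(p)]
    for v in violations:
        print(f"FAIL: {v}")
    sys.exit(1 if violations else 0)
```

Wired into CI (for example, `python lint_pack.py packs/*/pack.yml`), a nonzero exit blocks the merge, which is what turns the rubric from advice into an enforced guardrail.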

Implementation Progress

Review Feedback

  • [ ] Review cleared