Dataface Tasks

Regression prevention and quality gates

ID: M4_V1_0_LAUNCH-FT_DASH_PACKS-02
Status: not_started
Priority: p1
Milestone: m4-v1-0-launch
Owner: data-analysis-evangelist-ai-training

Problem

Dashboard pack updates—new KPIs, query changes, template fixes, connector schema adaptations—can silently break existing packs that were previously working. Without automated regression gates that validate dashboard narrative quality before release (e.g., checking that all queries compile, KPI values are non-null against test data, charts render without errors), each update risks shipping regressions to users. Manual QA doesn't scale across 100+ dashboards and 20+ connectors. Quality gates need to be enforced in the release pipeline so that broken packs are caught before they reach users, not after.
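The gate described above can be sketched as a small fixture-backed check. This is a minimal illustration, not the real validator: the pack schema, KPI fields, and table names here are hypothetical, and it uses an in-memory SQLite database as stand-in fixture data.

```python
import sqlite3

# Hypothetical pack definition; real packs live in YAML with many more fields.
PACK = {
    "name": "fivetran_ads_overview",
    "kpis": [
        {"id": "total_spend", "query": "SELECT SUM(spend) FROM ad_spend"},
        {"id": "row_count", "query": "SELECT COUNT(*) FROM ad_spend"},
    ],
}

def validate_pack(pack, conn):
    """Run each KPI query against fixture data.

    Returns a list of (kpi_id, failure_reason); an empty list means the
    pack passes the gate.
    """
    failures = []
    for kpi in pack["kpis"]:
        try:
            value = conn.execute(kpi["query"]).fetchone()[0]
        except sqlite3.Error as exc:  # query failed to compile or run
            failures.append((kpi["id"], f"query error: {exc}"))
            continue
        if value is None:  # KPI came back null against the test data
            failures.append((kpi["id"], "null KPI value"))
    return failures

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE ad_spend (spend REAL)")
    conn.executemany("INSERT INTO ad_spend VALUES (?)", [(1.5,), (2.5,)])
    print(validate_pack(PACK, conn))
```

A real implementation would load packs from their YAML files and run each connector's fixture dataset, but the pass/fail contract stays the same: any non-empty failure list blocks the release.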

Context

  • Manual review is not enough to protect connector-specific dashboard packs and KPI narratives for Fivetran sources once the change rate increases; regressions will keep shipping unless the highest-value checks become automatic.
  • This task should identify what needs gating in CI or structured review and what evidence is sufficient to block a risky change before it reaches users.
  • Expected touchpoints include dashboard pack YAML, dbt/example assets, connector fixtures, quickstart docs, automated tests, eval/QA checks, and any release or review scripts that can enforce the new gates.

Possible Solutions

  • A - Add only a few narrow tests around current bugs: easy to land, but it rarely protects the broader behavior contract.
  • B - Recommended: define a regression-gate bundle around the core behavior contract: combine focused tests, snapshots/evals, and required review evidence for risky changes.
  • C - Depend on manual smoke testing before each release: better than nothing, but too inconsistent to serve as a durable gate.
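The snapshot/eval piece of option B can be sketched as a golden-file comparison: render the pack's narrative output, diff it against a stored snapshot, and fail the gate on drift. The file layout and function names below are assumptions for illustration, not an existing API.

```python
import json
import pathlib

def check_snapshot(name, rendered, snapshot_dir, update=False):
    """Compare a rendered pack output against its golden snapshot.

    Returns (passed, reason). With update=True, (re)writes the snapshot
    instead of comparing, for intentional behavior changes.
    """
    path = pathlib.Path(snapshot_dir) / f"{name}.json"
    if update:
        path.write_text(json.dumps(rendered, indent=2, sort_keys=True))
        return True, "snapshot updated"
    if not path.exists():
        return False, "missing snapshot (run with update=True to create one)"
    golden = json.loads(path.read_text())
    if golden != rendered:
        return False, "output drifted from golden snapshot"
    return True, "ok"
```

The update path is what makes this a durable gate rather than a nuisance: intentional changes regenerate the snapshot in the same change set, so reviewers see the narrative diff as review evidence.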

Plan

  1. Identify the highest-risk behavior contracts for connector-specific dashboard packs and KPI narratives for Fivetran sources and the types of changes that should be blocked when they regress.
  2. Choose the smallest practical set of automated checks and required review evidence that covers those contracts well enough to matter.
  3. Wire the new gates into the relevant test, review, or release surfaces and document when exceptions are allowed.
  4. Trial the gates on a few representative changes and tighten the signal-to-noise ratio before expanding the coverage further.
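Step 3's wiring can be sketched as a small gate runner that aggregates the individual checks and exits non-zero so the release pipeline blocks the change. The gate names and placeholder check functions here are hypothetical; real gates would call the pack validators and snapshot checks.

```python
import sys

def run_gates(gates):
    """Run each (name, check_fn) gate; return the names of failed gates.

    A gate that raises is treated as a failure rather than crashing the
    runner, so one broken check cannot mask the others.
    """
    failures = []
    for name, check_fn in gates:
        try:
            passed = bool(check_fn())
        except Exception:
            passed = False
        if not passed:
            failures.append(name)
    return failures

if __name__ == "__main__":
    # Placeholder checks; real gates would invoke the pack and snapshot
    # validators against connector fixtures.
    gates = [
        ("queries_compile", lambda: True),
        ("kpis_non_null", lambda: True),
    ]
    failed = run_gates(gates)
    if failed:
        print("release blocked by gates:", ", ".join(failed))
        sys.exit(1)  # non-zero exit is what blocks the pipeline
```

Running all gates before exiting, instead of stopping at the first failure, keeps the signal-to-noise trial in step 4 cheap: one run reports every gate that fired on a representative change.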

Implementation Progress

Review Feedback

  • [ ] Review cleared