Dataface Tasks

Release top 20 connector packs with 5 dashboards each

IDM3-FTPACK-001
Status: not_started
Priority: p0
Milestone: m3-public-launch
Owner: data-analysis-evangelist-ai-training

Problem

The public launch requires a critical mass of connector coverage to be credible—users connecting their top Fivetran sources (Salesforce, Stripe, HubSpot, Google Ads, etc.) need to find ready-made dashboards waiting for them. Currently, only a handful of packs exist from prototyping and pilot phases. Without 100 dashboards across the top 20 connectors (5 per connector), the launch catalog will feel sparse and users of unsupported connectors will see no value. Each of these 100 dashboards needs QA signoff to ensure KPI definitions are accurate, queries execute correctly against real connector schemas, and the narrative is coherent for that connector's domain.

Context

  • The launch story for dash packs depends on breadth as well as quality: users expect ready-made analytics for their most common Fivetran sources, not just a few showcase connectors.
  • Releasing twenty connector packs with five dashboards each is primarily a content and production challenge, but it also depends on realistic source assumptions, KPI narratives, and repeatable review standards.
  • This task should identify the minimum connector list, content patterns, and production sequencing needed to make the release credible without pretending every pack is equally deep.

Possible Solutions

  • A - Produce packs opportunistically until the count is reached: may satisfy the headline number, but quality and connector relevance will vary too much.
  • B - Recommended: choose a deliberate top-connector slate and run a standardized production/review process: align connector priority, dashboard archetypes, and quality checks before scaling output.
  • C - Launch with far fewer packs and promise the rest later: lower immediate scope, but weakens the product promise significantly.

Plan

  1. Define the top-connector list and the target five-dashboard shape for each pack, including the KPI narrative each pack is meant to tell.
  2. Standardize the production inputs, review expectations, and publication criteria so packs can be built repeatedly without drifting in quality.
  3. Sequence the release into batches that balance connector demand, data-model readiness, and review capacity instead of trying to finish all packs at once.
  4. Track coverage, blockers, and quality outcomes pack by pack so the release count reflects genuinely shippable content.
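Step 4 above can be made concrete with a small tracking structure. This is a minimal sketch, not the actual tracker: the `PackStatus` class, the QA state names, and the five-dashboard rule are assumptions drawn from this task's framing (5 dashboards per pack, QA signoff required before a pack counts toward the release).

```python
from dataclasses import dataclass, field

# Hypothetical QA states for a single dashboard within a connector pack.
QA_STATES = {"draft", "in_review", "qa_passed"}

DASHBOARDS_PER_PACK = 5  # per the task: five dashboards per connector


@dataclass
class PackStatus:
    """Tracks one connector pack's dashboards through QA."""
    connector: str
    dashboards: dict = field(default_factory=dict)  # dashboard name -> QA state
    blockers: list = field(default_factory=list)    # open blockers for this pack

    def record(self, dashboard: str, state: str) -> None:
        if state not in QA_STATES:
            raise ValueError(f"unknown QA state: {state}")
        self.dashboards[dashboard] = state

    def shippable(self) -> bool:
        # A pack counts toward the release only when all five dashboards
        # have QA signoff and nothing is blocking it.
        return (
            len(self.dashboards) == DASHBOARDS_PER_PACK
            and all(s == "qa_passed" for s in self.dashboards.values())
            and not self.blockers
        )


def release_count(packs: list) -> int:
    """Number of packs that are genuinely shippable, not just started."""
    return sum(1 for p in packs if p.shippable())
```

The point of the sketch is that the headline number is derived from `shippable()` rather than from packs merely being in flight, which keeps "the release count reflects genuinely shippable content" enforceable. For example:

```python
pack = PackStatus("salesforce")
for name in ["pipeline", "win_rates", "activity", "forecast", "retention"]:
    pack.record(name, "qa_passed")
assert release_count([pack]) == 1
```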

Implementation Progress

  • [ ] Confirm scope and acceptance criteria with the milestone owner.
  • [ ] Update the milestone readiness signal.
  • [ ] Track blockers and assign a mitigation owner for each.

Review Feedback

  • [ ] Review cleared