name: quickstart-product-dashboard-research
description: >-
Research a SaaS connector product (e.g. Salesforce, Zendesk) to design Dataface
dashboard packs for a Fivetran quickstart dbt project. Use when building
quickstart dashboard YAML, mapping vendor-native dashboards to star-schema facts,
or planning entity/detail boards and cross-board navigation. Do NOT use for
generic dashboard design unrelated to quickstart dbt packs.
quickstart-product-dashboard-research
Structured research pass before writing quickstart dashboard YAML: understand what the source product ships natively, what analysts actually ask, and how boards should link so that navigation feels like the vendor UI while staying backed by the quickstart model.
Program-level methodology (how to plan research, build an example library, personas, cadences, borrowed disciplines): tasks/workstreams/mcp-analyst-agent/initiatives/quickstart-dashboards/research.md
When to apply
- Starting a new connector quickstart dashboard pack (after dft init on the upstream dbt repo)
- Validating that a draft board set covers default product analytics (not only pretty charts)
- Designing entity vs detail boards (e.g. account overview → opportunity list → ticket timeline)
- Planning cross-board links (which drill targets exist, what keys are stable in the modeled data)
Inputs
- Which product and which quickstart repo (clone URL, local path).
- dbt project layout after checkout: key marts, fact tables, common dimensions (from schema.yml / model names / README).
- Hints from the repo (if present): metrics READMEs, generated analysis SQL, or docs describing modeled objects vs source APIs.
Research checklist (produce a single notes doc per connector)
Use per-connector artifacts described in tasks/workstreams/mcp-analyst-agent/initiatives/quickstart-dashboards/research.md (e.g. sources.md, examples/index, gaps.md). Templates: copy from ai_notes/quickstart-dashboards/_templates/ (see _templates/README.md). Default path pattern: ai_notes/quickstart-dashboards/<connector-slug>/ (or workspace convention from program setup).
A. Native product analytics
- What standard dashboards, reports, or home-page analytics does the vendor ship out of the box?
- What objects and metrics do those surfaces emphasize (pipeline, CSAT, case age, ARR, etc.)?
- Screenshots or links to official docs / report galleries (for later validation, not for copying proprietary assets).
B. Questions, roles, and actions
- List 10–20 canonical questions by persona (sales ops, support lead, success manager) — phrased the way users ask in the product.
- For each, note whether it is overview, ranking, trend, funnel, or entity detail.
- Note actions people take after seeing the number (open list, drill to record, escalate) — this drives board linking.
C. Data available in the quickstart model
- Map each question cluster to candidate fact tables and dimensions in the dbt project (explicit table/column anchors).
- Flag gaps where the product UI assumes fields the quickstart model does not expose (document rather than inventing).
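One way to record these mappings in the notes doc is a small YAML fragment per question cluster. This is an illustrative sketch only: the cluster name, model name, and column anchors below are assumptions for a Salesforce-style pack, not guaranteed to match the quickstart repo — confirm every table/column against the project's schema.yml before relying on it.

```yaml
# Hypothetical mapping entry (names are illustrative; verify against schema.yml)
pipeline_by_stage:
  questions:
    - "How much open pipeline is in each stage this quarter?"
  persona: sales ops
  shape: trend                       # overview | ranking | trend | funnel | entity detail
  fact: salesforce__opportunity_enhanced
  anchors: [stage_name, amount, close_date]
  gaps: []                           # list fields the product UI assumes but the model lacks
```

Keeping the `gaps` list inline per cluster makes the later gap-list output a simple extraction rather than a second research pass.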
D. Board topology
- Propose a small set of boards (e.g. executive overview, entity summary, operational deep-dive).
- For each board, list entry points and outbound links (e.g. “from account KPI row → account detail board with filter”).
- Respect Dataface constraints: queries own grain; boards link via URLs/parameters consistent with project patterns.
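A board-topology entry can be sketched as YAML alongside the notes. The field names below (title, queries, links, params) are hypothetical, not the real Dataface schema — copy the link/parameter pattern from existing boards in the project instead of this sketch.

```yaml
# Hypothetical board fragment: schema and field names are assumptions
title: Account overview
queries:
  - name: top_accounts            # each query owns its own grain
links:
  - from: top_accounts.account_name
    to: account_detail            # target board YAML file
    params:
      account_id: "{{ row.account_id }}"   # stable key in the modeled data
```

The point of sketching links this early is to check that every drill target has a stable key column in the modeled data before any YAML is written.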
E. Interlock with M2 / agent context
- Note which questions are high-risk for text-to-SQL (ambiguous metric names, many-to-many paths) so catalog/planning work can prioritize them later.
Outputs
- Research notes (markdown) with sections A–E.
- Board outline: ordered list of YAML files to create, one-line purpose each.
- Link matrix: a table with columns source_board | anchor | target_board | key_column.
- Gap list for product vs model (feeds follow-up modeling or scope cuts).
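To fix the semantics of the link-matrix columns, here is a hypothetical two-row example for a Salesforce-style pack; the board names, anchors, and keys are invented for illustration, not prescribed.

| source_board | anchor | target_board | key_column |
| --- | --- | --- | --- |
| exec_overview | top accounts KPI table | account_detail | account_id |
| account_detail | open opportunities list | opportunity_detail | opportunity_id |

Each row should be implementable directly: the key_column must exist in both the source query's result and the target board's filter parameters.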
Exit criteria
- Another contributor (or you, later) can implement the YAML without re-researching the product.
- Pilot tasks can point at this doc as the single source of truth for that connector.