Scope: How we plan and execute product-grounded research for quickstart dashboard packs—not only what to mine from a repo, but how to build a durable library of examples, personas, and questions, and which foreign disciplines we borrow from.
The executable per-connector checklist remains in `.codex/skills/quickstart-product-dashboard-research/SKILL.md`. This doc is the program-level companion: meta-methodology, evidence strategy, and thinking frameworks.
Quickstart dbt projects are shaped by source APIs and typical reporting. Analysts already have mental models from the vendor UI and from how their teams talk about work (pipeline reviews, ticket queues, QBRs). Dataface packs should meet them there while staying honest about what the modeled tables can answer.
We are not competing with the vendor’s pixel-perfect UI—we are encoding question types, navigation habits, and cadence (daily vs quarterly) into YAML backed by the quickstart star schema.
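As a concrete illustration of that encoding, a board entry in a pack YAML might pair a vendor-familiar question with its star-schema anchor. The field names below are assumptions for the sketch, not the real Dataface pack schema:

```yaml
# Illustrative sketch only: field names and table names are assumptions.
boards:
  - name: Pipeline review            # mirrors how the team talks about work
    cadence: weekly                  # encodes the navigation habit (daily vs quarterly)
    tiles:
      - question: "How much open pipeline do we have by stage?"
        source_table: salesforce__opportunity_enhanced   # quickstart star-schema anchor
        metric: sum(amount)
        group_by: stage_name
```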
Research on research: treat each connector as a mini product-discovery cycle with explicit phases so outputs are comparable across Salesforce, Zendesk, HubSpot, etc.
Before deep diving, write:
| Block | Question |
|---|---|
| Product & edition posture | Which SKU/docs baseline? (e.g. Sales Cloud vs full platform.) What do we not claim to cover? |
| Quickstart contract | What does this dbt package promise vs the full warehouse? |
| Success definition | What must a pilot user recognize on day one? (“This looks like our Zendesk home,” not “every Salesforce report.”) |
| Non-goals | Features the product has that the quickstart will not model—explicitly out of scope. |
| Stakeholder voices | Which personas must be plausible for v1? (Often: operator + team lead + exec snapshot.) |
This prevents endless “one more dashboard” creep and ensures that gaps trace back to explicit scope cuts rather than silent failures in the YAML.
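A filled-in charter can be as small as a front-matter block covering the five rows above. This shape is a sketch, not a mandated format, and the values are invented for illustration:

```yaml
# Hypothetical charter sketch for a Zendesk pack (all values illustrative).
connector: zendesk
edition_baseline: "Zendesk Support (Suite-only features out of scope)"
quickstart_contract: "ticket + SLA marts only; no Explore parity"
success: "A support lead recognizes their Zendesk home on day one"
non_goals: [talk_analytics, guide_deflection]
personas_v1: [operator, team_lead, exec_snapshot]
```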
Charter → Inventory sources → Extract examples → Personas & questions → Map to model → Topology & links → Gap log → Retro
| Phase | Output | Time box (guidance) |
|---|---|---|
| Inventory | Source list with URLs, trust tier, capture date | 0.5–1 day |
| Extract | Structured example list + screenshots index | 1–3 days |
| Personas & questions | Question bank with cadence tags | 1–2 days |
| Model map | Table/column anchors + gap list | tied to dbt read |
| Topology | Board outline + link matrix | 0.5–1 day |
Adjust per connector complexity; write down actuals in the connector retro for program improvement.
Every connector folder should include:
- `research.md` — narrative + synthesis (this structure).
- `sources.md` — bibliography: URLs, doc sections, “last verified” date, access notes (public vs trial).
- `examples/` — indexed list: “Example ID, name, source URL, object/metric focus, persona, cadence, screenshot optional.”
- `questions.md` (or a section in `research.md`) — question bank (see Part 4).
- `link-matrix.md` — board-to-board navigation intent (even before dashboard linking ships).
- `gaps.md` — product vs quickstart; feeds modeling or explicit “not in v1.”

Naming: `ai_notes/quickstart-dashboards/<connector-slug>/` or a dedicated workspace repo — pick one convention in program setup and do not fork per author.
Copy-paste templates: ai_notes/quickstart-dashboards/_templates/ (charter, research, sources, examples/index, questions, link-matrix, gaps).
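Even before dashboard linking ships, the link matrix can record navigation intent declaratively. A sketch, with board names and fields assumed for illustration:

```yaml
# Companion sketch for link-matrix.md: boards and fields are hypothetical.
links:
  - from: team-overview
    to: agent-detail
    carry: [agent_id, date_range]    # filters intended to pass through the link
    intent: "drill from team SLA breach count into that agent's queue"
```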
Research is done when:
We are doing hybrid product research + analytics design + competitive intel. Explicitly borrow methods and language from:
An example is anything that constrains pack design:
| Type | Description |
|---|---|
| Native vendor | OOTB dashboards, reports, analytics home, Einstein/predictive surfaces if they define mainstream metrics. |
| Official templates | Vendor gallery, Success templates, AppExchange/report packages documented by vendor. |
| First-party docs | “Understanding X metrics,” report type reference, object relationship diagrams. |
| Third-party BI | Reputable template libraries, reference architecture PDFs—patterns only; note license. |
| dbt / Fivetran | Quickstart README, schema.yml descriptions, analyses/, known limitations. |
Each row in the `examples/` index should carry a source-tier tag (P0 = native product, P1 = official doc, P2 = community/pattern).
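One row of the `examples/` index, expressed as YAML for machine-friendliness. The field set mirrors the index fields listed earlier; the ID, URL, and table focus are placeholders:

```yaml
# Hypothetical index entry: ID and URL are placeholders, not real references.
- id: SF-EX-003
  name: "Sales Manager Home Dashboard"
  source_url: "https://example.com/vendor-doc"   # placeholder
  tier: P0                                       # native product surface
  focus: [Opportunity, win_rate]
  persona: team_lead
  cadence: weekly
  screenshot: examples/sf-ex-003.png             # optional
```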
At program level (not only per slug):
Store pattern cards under e.g. ai_notes/quickstart-dashboards/_patterns/ or initiative-maintained folder—TBD in program setup.
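A pattern card can stay tiny. This sketch assumes a one-file-per-motif YAML card; every name below is invented:

```yaml
# _patterns/funnel-overview.yml: hypothetical pattern card.
pattern: funnel-overview
seen_in: [salesforce, hubspot]     # counts toward the "appeared twice" rule
shape: "top-row KPIs, stage funnel, trend line, leaderboard"
board_naming: "<Object> funnel"
caveats: "stage semantics differ per vendor; verify in vendor docs"
```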
For each connector, maintain an explicit persona table (minimal v1: 3–5 rows):
| Persona | Typical role titles | Primary system | Information appetite | Decision levers |
|---|---|---|---|---|
| Operator | Rep, agent, SDR | Executes in-product | “My queue today” | Speed, accuracy, follow-ups |
| Team lead | Manager, queue owner | Team + reports | “Team health vs SLA” | Staffing, escalation, coaching |
| Exec / cross-functional | Director, VP, RevOps | Dashboards + slides | “Are we on track?” | Budget, hiring, forecast |
| Admin / governance | Salesforce admin, Zendesk admin | Config + audit | “Compliance, entitlements” | Policies, data quality |
Add partner / finance / product only when the quickstart model actually supports those cuts.
Bucket activities (verbs), not org-chart titles:
Map each activity to question clusters (see 4.4).
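The activity-to-cluster mapping can be written down directly: activity verbs on the left, question-cluster IDs on the right. The cluster IDs here are hypothetical, following the connector-scope-number convention used elsewhere in this doc:

```yaml
# Activity buckets mapped to question clusters: all names illustrative.
activities:
  triage:   {clusters: [ZD-OPS-queue, ZD-OPS-sla]}
  coach:    {clusters: [ZD-TAC-agent-compare]}
  forecast: {clusters: [ZD-STR-volume-trend]}
```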
Questions and dashboard granularity change by horizon:
| Cadence | Horizon | Typical cuts | Emotional tone |
|---|---|---|---|
| Operational | Minutes–days | Live queues, today’s list, SLA timers | Urgent, concrete |
| Tactical / weekly–monthly | Week–quarter | Team comparisons, trends, cohorts | Comparative, diagnostic |
| Strategic / quarterly–annual | Quarter+ | Targets vs actuals, segmentation shifts, investments | Slow, narrative-heavy |
A strong pack has at least one plausible board or section per cadence or a documented gap (“QBR-style cohort analysis not in quickstart—use warehouse”).
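Cadence coverage can be asserted in the board outline itself, so an unserved cadence is a documented gap rather than a silent omission. A sketch, with board names assumed:

```yaml
cadence_coverage:
  operational: [agent-queue-today]
  tactical:    [team-sla-weekly]
  strategic:   []      # empty is allowed, but only with a logged gap
  gaps:
    - "QBR-style cohort analysis not in quickstart—use warehouse"
```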
For each connector, maintain 30–80 questions over time (v1 might ship 10–20 satisfied, rest backlog/gap).
Per question row, record at minimum a stable ID (e.g. `ZD-OPS-014`).

Group questions into episodes — narratives that span boards (“Monday morning support lead”: queue depth → agent load → escalation list).
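A full question row might carry the stable ID plus cadence, persona, model anchor, and status. This shape is an assumption; the table name is a plausible quickstart anchor, not a verified one:

```yaml
# Hypothetical question-bank row: field names and anchor table are assumptions.
- id: ZD-OPS-014                  # stable ID, survives reordering
  question: "Which tickets breach SLA in the next 2 hours?"
  persona: operator
  cadence: operational
  anchor: zendesk__sla_policies   # quickstart table, if the question is satisfied
  status: v1                      # v1 | backlog | gap
  episode: monday-morning-support-lead
```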
Already familiar; expand with program discipline:
- `README`, `schema.yml`, exposures, `analyses/`, semantic/metrics layers, test naming (these often reveal grain).
- Search “`<Product>` dashboard template” for layout patterns—verify semantics in vendor docs.
- The `_patterns/` library pays off after three connectors by speeding board naming and YAML structure.

| Step | Action |
|---|---|
| 1 | Lock folder convention + bibliography template (sources.md). |
| 2 | Run Salesforce + Zendesk pilots using this doc + skill; time each phase. |
| 3 | Extract retro deltas into this research doc (what was missing from Part 1–4). |
| 4 | Add pattern cards for motifs that appeared twice. |
| 5 | Decide automation boundaries (screenshot pipeline, URL archival)—separate tasks. |
- `.codex/skills/quickstart-product-dashboard-research/SKILL.md`
- `ai_notes/considerations/INTERACTIVE_VARIABLES_HTML.md`