tasks/workstreams/mcp-analyst-agent/initiatives/quickstart-dashboards/research.md

Research

Scope: How we plan and execute product-grounded research for quickstart dashboard packs—not only what to mine from a repo, but how to build a durable library of examples, personas, and questions, and which foreign disciplines we borrow from.

The executable per-connector checklist lives in .codex/skills/quickstart-product-dashboard-research/SKILL.md. This doc is the program-level companion: meta-methodology, evidence strategy, and thinking frameworks.


Why this initiative exists

Quickstart dbt projects are shaped by source APIs and typical reporting. Analysts already have mental models from the vendor UI and from how their teams talk about work (pipeline reviews, ticket queues, QBRs). Dataface packs should meet them there while staying honest about what the modeled tables can answer.

We are not competing with the vendor’s pixel-perfect UI—we are encoding question types, navigation habits, and cadence (daily vs quarterly) into YAML backed by the quickstart star schema.


Part 1 — Meta: how to plan the research (before you open the vendor app)

Research on research: treat each connector as a mini product-discovery cycle with explicit phases so outputs are comparable across Salesforce, Zendesk, HubSpot, etc.

1.1 Charter (one page per connector)

Before deep diving, write:

| Block | Question |
| --- | --- |
| Product & edition posture | Which SKU/docs baseline? (e.g. Sales Cloud vs full platform.) What do we not claim to cover? |
| Quickstart contract | What does this dbt package promise vs the full warehouse? |
| Success definition | What must a pilot user recognize on day one? (“This looks like our Zendesk home,” not “every Salesforce report.”) |
| Non-goals | Features the product has that the quickstart will not model—explicitly out of scope. |
| Stakeholder voices | Which personas must be plausible for v1? (Often: operator + team lead + exec snapshot.) |

This prevents endless “one more dashboard” creep and ensures gaps surface as explicit scope cuts rather than silent failures in the YAML.
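The five charter blocks can double as a mechanical completeness check before a connector moves past the charter phase. A minimal sketch; the field names are illustrative, not a fixed schema:

```python
# Charter completeness check. Block names mirror the table above;
# the snake_case keys and example values are assumptions for illustration.
REQUIRED_BLOCKS = [
    "product_edition_posture",
    "quickstart_contract",
    "success_definition",
    "non_goals",
    "stakeholder_voices",
]

def missing_blocks(charter: dict) -> list[str]:
    """Return charter blocks that are absent or left empty."""
    return [b for b in REQUIRED_BLOCKS if not str(charter.get(b, "")).strip()]

# Hypothetical Salesforce charter.
salesforce_charter = {
    "product_edition_posture": "Sales Cloud docs baseline; no Service Cloud claims.",
    "quickstart_contract": "Fivetran quickstart star schema only.",
    "success_definition": "Pilot user recognizes a Salesforce-shaped home on day one.",
    "non_goals": "Einstein forecasting, CPQ.",
    "stakeholder_voices": "Operator, team lead, exec snapshot.",
}
assert missing_blocks(salesforce_charter) == []
```

A charter with any block empty fails loudly at review time instead of surfacing later as scope confusion.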

1.2 Phased workflow

Charter → Inventory sources → Extract examples → Personas & questions → Map to model → Topology & links → Gap log → Retro
| Phase | Output | Time box (guidance) |
| --- | --- | --- |
| Inventory | Source list with URLs, trust tier, capture date | 0.5–1 day |
| Extract | Structured example list + screenshots index | 1–3 days |
| Personas & questions | Question bank with cadence tags | 1–2 days |
| Model map | Table/column anchors + gap list | Tied to dbt read |
| Topology | Board outline + link matrix | 0.5–1 day |

Adjust per connector complexity; write down actuals in the connector retro for program improvement.
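Recording actuals against the guidance is easier if the time boxes live in one place. A minimal sketch, assuming phase keys and actuals-in-days as plain dicts (the model-map phase is omitted because its box is tied to the dbt read, not a day count):

```python
# Compare time-boxed guidance against phase actuals for the connector retro.
# Phase names and day ranges come from the table above; actuals are hypothetical.
TIME_BOX = {  # phase -> (min_days, max_days) guidance
    "inventory": (0.5, 1.0),
    "extract": (1.0, 3.0),
    "personas_questions": (1.0, 2.0),
    "topology": (0.5, 1.0),
}

def overruns(actual_days: dict) -> dict:
    """Phases whose actual time exceeded the guidance ceiling, with the overrun in days."""
    out = {}
    for phase, days in actual_days.items():
        if phase in TIME_BOX and days > TIME_BOX[phase][1]:
            out[phase] = round(days - TIME_BOX[phase][1], 2)
    return out

# Hypothetical Zendesk pilot actuals.
zendesk_actuals = {"inventory": 1.0, "extract": 4.5, "personas_questions": 2.0, "topology": 0.5}
assert overruns(zendesk_actuals) == {"extract": 1.5}
```

Repeated overruns in the same phase across connectors are a signal to change the guidance, not just the next estimate.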

1.3 Artefact standards (so the library stays useful)

Every connector folder should include:

- Naming: ai_notes/quickstart-dashboards/<connector-slug>/ or a dedicated workspace repo—pick one convention in program setup and do not fork per author.

- Copy-paste templates: ai_notes/quickstart-dashboards/_templates/ (charter, research, sources, examples/index, questions, link-matrix, gaps).
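Scaffolding a new connector folder from the template set is a one-liner worth scripting so nobody forks the layout. A minimal sketch; the exact stub filenames are assumptions mirroring the template list above:

```python
# Scaffold a connector research folder following the naming convention above.
# Template filenames are assumed to mirror _templates/ (charter, research, sources,
# examples/index, questions, link-matrix, gaps).
from pathlib import Path
import tempfile

TEMPLATES = [
    "charter.md", "research.md", "sources.md", "examples/index.md",
    "questions.md", "link-matrix.md", "gaps.md",
]

def scaffold_connector(root: Path, slug: str) -> list[Path]:
    """Create ai_notes/quickstart-dashboards/<slug>/ with one empty stub per template."""
    base = root / "ai_notes" / "quickstart-dashboards" / slug
    created = []
    for name in TEMPLATES:
        p = base / name
        p.parent.mkdir(parents=True, exist_ok=True)
        p.touch()
        created.append(p)
    return created

# Demo in a throwaway directory.
with tempfile.TemporaryDirectory() as tmp:
    created = scaffold_connector(Path(tmp), "salesforce")
    assert len(created) == 7 and created[0].exists()
```

In practice the stubs would be copied from _templates/ rather than touched empty; the point is that the layout comes from one function, not from memory.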

1.4 Quality bar for “done enough”

Research is done when:

  1. Another builder can implement YAML without re-opening vendor tabs for basics.
  2. Every default-ish dashboard type you claim parity with has either a model anchor or a gap entry.
  3. Personas have at least three cadences represented (see Part 4): operational, rhythm-of-business, planning.
  4. Sources are dated; known stale sections are flagged.
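Items 2 and 3 of the bar can be checked mechanically. A minimal sketch, assuming dashboard claims carry an anchor or gap field and question rows carry a cadence tag (both shapes are illustrative):

```python
# Mechanical check for quality-bar items 2 and 3.
# Row shapes (dicts with "anchor"/"gap" and "cadence" keys) are assumptions.
CADENCES = {"operational", "rhythm-of-business", "planning"}

def done_enough(dashboard_claims: list[dict], questions: list[dict]) -> list[str]:
    """Return a list of failures; an empty list means the bar is met."""
    failures = []
    for claim in dashboard_claims:
        # Item 2: every parity claim needs a model anchor or an explicit gap entry.
        if not (claim.get("anchor") or claim.get("gap")):
            failures.append(f"{claim['name']}: no model anchor and no gap entry")
    # Item 3: all three cadences must appear somewhere in the question bank.
    if not CADENCES <= {q.get("cadence") for q in questions}:
        failures.append("question bank does not cover all three cadences")
    return failures
```

Items 1 and 4 stay human judgments; the script only catches the failures that are cheap to catch.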

Part 2 — Disciplines and “skills” to steal from

We are doing hybrid product research + analytics design + competitive intel. Explicitly borrow methods and language from:

2.1 Jobs-to-be-done (JTBD) and outcome chains

2.2 UX / service design / journey maps

2.3 Sales engineering and “first demo” narratives

2.4 Competitive and ecosystem intelligence

2.5 Technical writing and information architecture

2.6 Analyst ethnography (lightweight)

2.7 Evaluation / rubric thinking (from our own eval initiatives)


Part 3 — Building the example and evidence library

3.1 What counts as an “example”

An example is anything that constrains pack design:

| Type | Description |
| --- | --- |
| Native vendor | OOTB dashboards, reports, analytics home, Einstein/predictive surfaces if they define mainstream metrics. |
| Official templates | Vendor gallery, Success templates, AppExchange/report packages documented by vendor. |
| First-party docs | “Understanding X metrics,” report type reference, object relationship diagrams. |
| Third-party BI | Reputable template libraries, reference architecture PDFs—patterns only; note license. |
| dbt / Fivetran | Quickstart README, schema.yml descriptions, analyses/, known limitations. |
Each row in the examples/ index should tag a source tier (P0 = native product, P1 = official doc, P2 = community/pattern).
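One way to keep index rows consistent across connectors is a small typed record. A minimal sketch; the field names are assumptions, while the tier vocabulary comes straight from the tagging rule above:

```python
# One row of the examples/ index, with the P0/P1/P2 source tiers defined above.
# Field names and the sample rows are illustrative.
from dataclasses import dataclass
from datetime import date

TIERS = {"P0": "native product", "P1": "official doc", "P2": "community/pattern"}

@dataclass
class ExampleRow:
    title: str
    source: str        # URL or screenshot reference
    tier: str          # P0 | P1 | P2
    captured: date     # capture date, so staleness is auditable

    def __post_init__(self):
        if self.tier not in TIERS:
            raise ValueError(f"unknown tier {self.tier!r}")

rows = [
    ExampleRow("Explore overview template", "https://example.com/explore", "P1", date(2024, 5, 1)),
    ExampleRow("Native sales home", "screenshot sales-home.png", "P0", date(2024, 5, 2)),
]
# Native-product evidence sorts first when deciding parity claims.
rows.sort(key=lambda r: r.tier)
assert [r.tier for r in rows] == ["P0", "P1"]
```

The validation in `__post_init__` is the payoff: a typo'd tier fails at capture time, not during consolidation.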

3.2 Capture policy

3.3 Anti-patterns

3.4 Consolidation across connectors

At program level (not only per slug):

Store pattern cards under e.g. ai_notes/quickstart-dashboards/_patterns/ or an initiative-maintained folder—TBD in program setup.


Part 4 — People, activities, and questions (deep framing)

4.1 Who are the people?

For each connector, maintain an explicit persona table (minimal v1: 3–5 rows):

| Persona | Typical role titles | Primary system | Information appetite | Decision levers |
| --- | --- | --- | --- | --- |
| Operator | Rep, agent, SDR | Executes in-product | “My queue today” | Speed, accuracy, follow-ups |
| Team lead | Manager, queue owner | Team + reports | “Team health vs SLA” | Staffing, escalation, coaching |
| Exec / cross-functional | Director, VP, RevOps | Dashboards + slides | “Are we on track?” | Budget, hiring, forecast |
| Admin / governance | Salesforce admin, Zendesk admin | Config + audit | “Compliance, entitlements” | Policies, data quality |

Add partner / finance / product only when the quickstart model actually supports those cuts.

4.2 What do they do with the product? (activities)

Bucket activities (verbs), not org-chart titles:

Map each activity to question clusters (see 4.4).

4.3 Cadence: day-to-day vs rhythm-of-business vs planning

Questions and dashboard granularity change by horizon:

| Cadence | Horizon | Typical cuts | Emotional tone |
| --- | --- | --- | --- |
| Operational | Minutes–days | Live queues, today’s list, SLA timers | Urgent, concrete |
| Tactical / weekly–monthly | Week–quarter | Team comparisons, trends, cohorts | Comparative, diagnostic |
| Strategic / quarterly–annual | Quarter+ | Targets vs actuals, segmentation shifts, investments | Slow, narrative-heavy |

A strong pack has at least one plausible board or section per cadence or a documented gap (“QBR-style cohort analysis not in quickstart—use warehouse”).

4.4 Question bank structure

For each connector, maintain 30–80 questions over time (v1 might ship with 10–20 satisfied; the rest stay as backlog or gap entries).

Per question row:

Group questions into episodes—narratives that span boards (“Monday morning support lead”: queue depth → agent load → escalation list).
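Question rows and episodes can share one lightweight structure so link matrices and retros can cite stable IDs. A minimal sketch; the field names (qid, persona, cadence, status) are assumptions, since the per-row field list is not pinned down above:

```python
# A question-bank row and an episode grouping spanning boards.
# Field names and sample values are illustrative, not a fixed schema.
from dataclasses import dataclass, field

@dataclass
class Question:
    qid: str                  # stable ID so link matrices and gap logs can cite it
    text: str
    persona: str
    cadence: str              # operational | tactical | strategic
    status: str = "backlog"   # satisfied | backlog | gap

@dataclass
class Episode:
    name: str                      # narrative that spans boards
    question_ids: list[str] = field(default_factory=list)

q1 = Question("ZD-001", "How deep is the queue right now?", "team lead", "operational", "satisfied")
q2 = Question("ZD-002", "Which agents are overloaded?", "team lead", "operational")
monday = Episode("Monday morning support lead", [q1.qid, q2.qid])
assert monday.question_ids == ["ZD-001", "ZD-002"]
```

Episodes reference questions by ID rather than embedding them, so one question can appear in several narratives without duplication.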

4.5 From questions to boards (design heuristic)


Part 5 — Signals to mine (per repo) — expanded

Already familiar; expand with program discipline:


Part 6 — Hypotheses to test in pilots

  1. A small board set (3–5 YAML files) with strong linking beats many flat boards for recognition and adoption.
  2. Research notes that include a link matrix and question IDs reduce rework when adding a third connector.
  3. Gaps between vendor questions and quickstart facts cluster by connector family (CRM vs support vs billing)—useful for prioritizing modeling vs scope cuts.
  4. Cadence tagging predicts which boards analysts open first—validate with pilot feedback or lightweight telemetry later.
  5. A shared _patterns/ library pays off after three connectors by speeding board naming and YAML structure.

Part 7 — Program-level research roadmap (meta timeline)

| Step | Action |
| --- | --- |
| 1 | Lock folder convention + bibliography template (sources.md). |
| 2 | Run Salesforce + Zendesk pilots using this doc + skill; time each phase. |
| 3 | Extract retro deltas into this research doc (what was missing from Parts 1–4). |
| 4 | Add pattern cards for motifs that appeared twice. |
| 5 | Decide automation boundaries (screenshot pipeline, URL archival)—separate tasks. |

References


Changelog