Dataface Tasks

Research

Scope: How we plan and execute product-grounded research for quickstart dashboard packs—not only what to mine from a repo, but how to build a durable library of examples, personas, and questions, and which adjacent disciplines we borrow from.

Executable per-connector checklist remains in .codex/skills/quickstart-product-dashboard-research/SKILL.md. This doc is the program-level companion: meta-methodology, evidence strategy, and thinking frameworks.


Why this initiative exists

Quickstart dbt projects are shaped by source APIs and typical reporting. Analysts already have mental models from the vendor UI and from how their teams talk about work (pipeline reviews, ticket queues, QBRs). Dataface packs should meet them there while staying honest about what the modeled tables can answer.

We are not competing with the vendor’s pixel-perfect UI—we are encoding question types, navigation habits, and cadence (daily vs quarterly) into YAML backed by the quickstart star schema.


Part 1 — Meta: how to plan the research (before you open the vendor app)

Research on research: treat each connector as a mini product-discovery cycle with explicit phases so outputs are comparable across Salesforce, Zendesk, HubSpot, etc.

1.1 Charter (one page per connector)

Before deep diving, write:

| Block | Question |
| --- | --- |
| Product & edition posture | Which SKU/docs baseline (e.g., Sales Cloud vs. full platform)? What do we not claim to cover? |
| Quickstart contract | What does this dbt package promise vs. the full warehouse? |
| Success definition | What must a pilot user recognize on day one? (“This looks like our Zendesk home,” not “every Salesforce report.”) |
| Non-goals | Features the product has that the quickstart will not model—explicitly out of scope. |
| Stakeholder voices | Which personas must be plausible for v1? (Often: operator + team lead + exec snapshot.) |

This prevents endless “one more dashboard” creep and turns gaps into explicit scope cuts rather than silent failures in YAML.

1.2 Phased workflow

Charter → Inventory sources → Extract examples → Personas & questions → Map to model → Topology & links → Gap log → Retro
| Phase | Output | Time box (guidance) |
| --- | --- | --- |
| Inventory | Source list with URLs, trust tier, capture date | 0.5–1 day |
| Extract | Structured example list + screenshots index | 1–3 days |
| Personas & questions | Question bank with cadence tags | 1–2 days |
| Model map | Table/column anchors + gap list | tied to dbt read |
| Topology | Board outline + link matrix | 0.5–1 day |

Adjust per connector complexity; write down actuals in the connector retro for program improvement.

1.3 Artefact standards (so the library stays useful)

Every connector folder should include:

  • research.md — narrative + synthesis (this structure).
  • sources.md — bibliography: URLs, doc sections, “last verified” date, access notes (public vs trial).
  • examples/ — indexed list: “Example ID, name, source URL, object/metric focus, persona, cadence, screenshot optional.”
  • questions.md or section in research.md — question bank (see Part 4).
  • link-matrix.md — board-to-board navigation intent (even before dashboard linking ships).
  • gaps.md — product vs quickstart; feeds modeling or explicit “not in v1.”

Naming: ai_notes/quickstart-dashboards/<connector-slug>/ or a dedicated workspace repo—pick one convention in program setup and do not fork per author.

Copy-paste templates: ai_notes/quickstart-dashboards/_templates/ (charter, research, sources, examples/index, questions, link-matrix, gaps).
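The folder convention and artefact list above can be scaffolded in a few lines. This is a minimal sketch, not an existing tool: the paths and stub contents mirror the standards proposed in 1.3, and the function name scaffold_connector is invented for illustration.

```python
from pathlib import Path

# Artefacts every connector folder should contain, per section 1.3.
ARTEFACTS = [
    "research.md",
    "sources.md",
    "examples/index.md",
    "questions.md",
    "link-matrix.md",
    "gaps.md",
]

def scaffold_connector(root: str, slug: str) -> list[Path]:
    """Create ai_notes/quickstart-dashboards/<slug>/ with empty artefact stubs."""
    base = Path(root) / "ai_notes" / "quickstart-dashboards" / slug
    created = []
    for rel in ARTEFACTS:
        path = base / rel
        path.parent.mkdir(parents=True, exist_ok=True)
        if not path.exists():
            # Stub header only; real content comes from the _templates/ copies.
            path.write_text(f"# {rel} for {slug}\n")
        created.append(path)
    return created
```

Running this once per connector keeps the layout identical across authors, which is the point of picking one convention in program setup.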

1.4 Quality bar for “done enough”

Research is done when:

  1. Another builder can implement YAML without re-opening vendor tabs for basics.
  2. Every default-ish dashboard type you claim parity with has either a model anchor or a gap entry.
  3. Personas have at least three cadences represented (see Part 4): operational, rhythm-of-business, planning.
  4. Sources are dated; known stale sections are flagged.

Part 2 — Disciplines and “skills” to steal from

We are doing hybrid product research + analytics design + competitive intel. Explicitly borrow methods and language from:

2.1 Jobs-to-be-done (JTBD) and outcome chains

  • Job story framing: “When I [situation], I want to [motivation], so I can [outcome].”
  • Outcome chain: raw event → roll-up → comparison → decision → action (e.g. ticket reopened → team backlog → SLA breach → escalation).
  • Steal for: turning “cool chart” lists into why this exists—feeds board titles, section order, and link targets.

2.2 UX / service design / journey maps

  • Swimlanes by role (agent vs lead vs exec).
  • Touchpoints: where they live in-product (home, list view, record, admin analytics).
  • Steal for: board topology (overview vs entity detail vs admin), and where deep links must land.

2.3 Sales engineering and “first demo” narratives

  • Vendors and partners structure demo stories (quarterly business review, Monday morning pipeline, support standup).
  • Steal for: canonical question scripts—“If you only had five minutes with a RevOps lead, which five cuts of data?”

2.4 Competitive and ecosystem intelligence

  • Adjacent BI templates: Salesforce dashboards in Tableau/Looker galleries (as pattern references, not asset copies), industry “starter” workbooks, partner solution guides.
  • Embedded analytics: Many SaaS products embed standard explores (Looker, Omni, etc.); screenshots and “metric definitions” pages describe intended semantics.
  • Steal for: naming consistency and metric families (pipeline, backlog, aging, conversion) that recur across tools.

2.5 Technical writing and information architecture

  • Docs hierarchies: “Objects → fields → reports that use them.”
  • Steal for: mapping vendor object model to quickstart facts without inventing joins the model doesn’t support.

2.6 Analyst ethnography (lightweight)

  • Reddit, Stack Exchange, vendor forums, “how do I report on X” threads—phrasing of real questions.
  • Steal for: natural-language question bank and agent-eval prompts later (aligns with M2 catalog/planning work).

2.7 Evaluation / rubric thinking (from our own eval initiatives)

  • Treat question coverage like a rubric: persona × cadence × chart type × entity depth.
  • Steal for: pilot sign-off—“we scored green on operator daily, yellow on exec quarterly until gap G-12 is modeled.”
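The rubric idea can be made concrete as a small scoring pass over the question bank. This is a sketch under assumptions: the dict keys (persona, cadence, anchor) follow the question-row fields defined in Part 4, and the green/yellow/red thresholds are illustrative, not an agreed policy.

```python
def coverage_rubric(questions, personas, cadences):
    """Score each persona x cadence cell of the coverage matrix.

    green  -> every question in the cell has a model anchor
    yellow -> at least one question is blocked on a GAP
    red    -> no question was even written for the cell
    """
    scores = {}
    for p in personas:
        for c in cadences:
            cell = [q for q in questions if q["persona"] == p and q["cadence"] == c]
            if not cell:
                scores[(p, c)] = "red"
            elif all(q["anchor"] != "GAP" for q in cell):
                scores[(p, c)] = "green"
            else:
                scores[(p, c)] = "yellow"
    return scores
```

A pilot sign-off statement like “green on operator daily, yellow on exec quarterly until G-12 is modeled” then falls directly out of the returned dict.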

Part 3 — Building the example and evidence library

3.1 What counts as an “example”

An example is anything that constrains pack design:

| Type | Description |
| --- | --- |
| Native vendor | OOTB dashboards, reports, analytics home, Einstein/predictive surfaces if they define mainstream metrics. |
| Official templates | Vendor gallery, Success templates, AppExchange/report packages documented by vendor. |
| First-party docs | “Understanding X metrics,” report type reference, object relationship diagrams. |
| Third-party BI | Reputable template libraries, reference architecture PDFs—patterns only; note license. |
| dbt / Fivetran | Quickstart README, schema.yml descriptions, analyses/, known limitations. |

Each row in examples/ index should tag: source tier (P0 = native product, P1 = official doc, P2 = community/pattern).

3.2 Capture policy

  • Prefer links + quotes over screenshots; screenshots rot and can blur fair-use boundaries—use for internal research only unless license-clear.
  • Always record: capture date, URL, product area name, what question the example answers (one line).
  • Never treat a third-party workbook as “canonical semantics” without vendor doc cross-check when the metric is legally/commercially sensitive (revenue, pipeline).

3.3 Anti-patterns

  • Screenshot hoarding without question mapping—junk drawer, not a library.
  • Parity theater—listing 40 dashboards when v1 ships 4 boards; use coverage matrix instead: example → planned board or gap.
  • Undated sources—six months later nobody knows if the UI moved.

3.4 Consolidation across connectors

At program level (not only per slug):

  • Pattern cards: reusable motifs—“ticket aging histogram,” “pipeline by stage snapshot,” “account 360 header KPI strip.”
  • Cross-product glossary: same job (“deflection”) may differ per vendor—note terminology drift.

Store pattern cards under e.g. ai_notes/quickstart-dashboards/_patterns/ or initiative-maintained folder—TBD in program setup.


Part 4 — People, activities, and questions (deep framing)

4.1 Who are the people?

For each connector, maintain an explicit persona table (minimal v1: 3–5 rows):

| Persona | Typical role titles | Primary system | Information appetite | Decision levers |
| --- | --- | --- | --- | --- |
| Operator | Rep, agent, SDR | Executes in-product | “My queue today” | Speed, accuracy, follow-ups |
| Team lead | Manager, queue owner | Team + reports | “Team health vs SLA” | Staffing, escalation, coaching |
| Exec / cross-functional | Director, VP, RevOps | Dashboards + slides | “Are we on track?” | Budget, hiring, forecast |
| Admin / governance | Salesforce admin, Zendesk admin | Config + audit | “Compliance, entitlements” | Policies, data quality |

Add partner / finance / product only when the quickstart model actually supports those cuts.

4.2 What do they do with the product? (activities)

Bucket activities (verbs), not org-chart titles:

  • Triage — sort, prioritize, assign.
  • Execute — log work, communicate, resolve.
  • Review — standup, pipeline review, QBR prep.
  • Plan — capacity, targets, territory, roadmap.
  • Govern — access, data quality, audit.

Map each activity to question clusters (see 4.4).

4.3 Cadence: day-to-day vs rhythm-of-business vs planning

Questions and dashboard granularity change by horizon:

| Cadence | Horizon | Typical cuts | Emotional tone |
| --- | --- | --- | --- |
| Operational | Minutes–days | Live queues, today’s list, SLA timers | Urgent, concrete |
| Tactical (weekly–monthly) | Week–quarter | Team comparisons, trends, cohorts | Comparative, diagnostic |
| Strategic (quarterly–annual) | Quarter+ | Targets vs actuals, segmentation shifts, investments | Slow, narrative-heavy |

A strong pack has at least one plausible board or section per cadence or a documented gap (“QBR-style cohort analysis not in quickstart—use warehouse”).

4.4 Question bank structure

For each connector, maintain 30–80 questions over time (v1 might ship 10–20 satisfied, rest backlog/gap).

Per question row:

  • Id (stable: ZD-OPS-014)
  • Natural language (how a human asks)
  • Persona
  • Cadence (operational / tactical / strategic)
  • Activity (triage, review, …)
  • Visualization type (overview, ranking, trend, funnel, entity detail)
  • Primary entity (ticket, account, user, …)
  • Model anchor (table.column or “GAP”)
  • Risk (optional: text-to-SQL ambiguity, many-to-many)

Group questions into episodes—narratives that span boards (“Monday morning support lead”: queue depth → agent load → escalation list).
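The per-question fields above are easy to pin down as a typed record. A minimal sketch: the field names mirror the row definition in 4.4, while the example values (the question text, persona, and episode) are invented for illustration; only the ID format ZD-OPS-014 comes from this doc.

```python
from dataclasses import dataclass

@dataclass
class QuestionRow:
    """One row of the question bank; fields mirror section 4.4."""
    id: str                  # stable, e.g. "ZD-OPS-014"
    natural_language: str    # how a human asks it
    persona: str
    cadence: str             # operational / tactical / strategic
    activity: str            # triage, review, ...
    viz_type: str            # overview, ranking, trend, funnel, entity detail
    primary_entity: str      # ticket, account, user, ...
    model_anchor: str        # "table.column" or "GAP"
    risk: str = ""           # optional: text-to-SQL ambiguity, many-to-many
    episode: str = ""        # optional narrative grouping across boards

q = QuestionRow(
    id="ZD-OPS-014",
    natural_language="How deep is the unassigned queue right now?",
    persona="team lead",
    cadence="operational",
    activity="triage",
    viz_type="overview",
    primary_entity="ticket",
    model_anchor="GAP",
    episode="Monday morning support lead",
)
```

Keeping the rows this structured is what lets the coverage rubric and board heuristics run mechanically instead of by eyeball.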

4.5 From questions to boards (design heuristic)

  • High frequency + shared → overview board.
  • Deep entity semantics → detail board with stable key in URL (once dashboard linking is ready).
  • Rare but political (exec) → small snapshot section or explicit out of scope with reason.

Part 5 — Signals to mine (per repo) — expanded

These signals are already familiar; mine them with program-level discipline:

  • dbt: README, schema.yml, exposures, analyses/, semantic/metrics layers, tests naming (often reveal grain).
  • Vendor docs: report catalogs, analytics modules, object reference, “understanding” guides, release notes (metrics renames!).
  • API shapes (indirect): sometimes explain why quickstart has strange keys—helps agent and author mental models.
  • Community: Stack Overflow patterns, r/salesforce, vendor forums—for phrasing and rank-ordered pain.
  • Third-party BI: search “<Product> dashboard template” for layout patterns—verify semantics in vendor docs.

Part 6 — Hypotheses to test in pilots

  1. A small board set (3–5 YAML files) with strong linking beats many flat boards for recognition and adoption.
  2. Research notes that include a link matrix and question IDs reduce rework when adding a third connector.
  3. Gaps between vendor questions and quickstart facts cluster by connector family (CRM vs support vs billing)—useful for prioritizing modeling vs scope cuts.
  4. Cadence tagging predicts which boards analysts open first—validate with pilot feedback or lightweight telemetry later.
  5. A shared _patterns/ library pays off after three connectors by speeding board naming and YAML structure.

Part 7 — Program-level research roadmap (meta timeline)

| Step | Action |
| --- | --- |
| 1 | Lock folder convention + bibliography template (sources.md). |
| 2 | Run Salesforce + Zendesk pilots using this doc + skill; time each phase. |
| 3 | Extract retro deltas into this research doc (what was missing from Parts 1–4). |
| 4 | Add pattern cards for motifs that appeared twice. |
| 5 | Decide automation boundaries (screenshot pipeline, URL archival)—separate tasks. |

References


Changelog

  • 2025-03-20: Expanded with meta-methodology, disciplines to borrow, evidence library standards, personas/cadences/question bank, program roadmap.