Quickstart dashboard pack — Zendesk dbt project pilot
Problem
Second pilot of the quickstart dashboard process, this time on the Zendesk quickstart dbt repo: same workflow as the Salesforce pilot, run to validate repeatability, refine the research skill and folder conventions, and capture cross-connector patterns for interlinked boards.
Context
- Depends on quickstart-dashboards-program-setup (completed 2026-03-22) and dashboard-linking-v1 (in_progress, review pending); see Quickstart initiative — Depends on.
- Second pilot: goal is repeatability — same steps as Salesforce with fewer one-off hacks; capture patterns that generalize (ticket lifecycle, SLA, CSAT, agent workload).
- Compare against Salesforce pilot: entity types and link patterns differ; note what should move into the skill vs stay connector-specific.
- Zendesk quickstart dbt package (`fivetran/dbt_zendesk`): 7 final models — `zendesk__ticket_enriched`, `zendesk__ticket_metrics`, `zendesk__ticket_summary`, `zendesk__ticket_backlog`, `zendesk__ticket_field_history`, `zendesk__sla_policies`, `zendesk__document`. Models cover ticket lifecycle, agent attribution, organization context, SLA compliance, satisfaction ratings, and daily backlog snapshots.
- Key entities: tickets (core), agents/assignees, organizations, groups, SLA policies, satisfaction ratings. Missing: Talk (voice), Chat (live chat), Guide (KB), and custom ticket fields (require var config).
- Native Zendesk analytics (Explore): Default Support dashboard has 5 tabs — Tickets (volume/status/priority), Efficiency (reply/resolution time), Assignee activity (agent workload), SLAs (breach/achieve), Satisfaction (CSAT). This is the mental model we target.
Possible Solutions
- Research-heavy then thin YAML (recommended): Complete the `.codex/skills/quickstart-product-dashboard-research/SKILL.md` checklist, produce all 7 research artifacts, then implement 4 boards with cross-links. Mirrors the Salesforce pilot for an apples-to-apples process comparison.
- Skip research and copy YAML patterns — rejected; defeats the initiative's purpose of validating a repeatable process.
Plan
- Run the quickstart-product-dashboard-research skill: produce charter, research narrative, sources bibliography, examples index, question bank, link matrix, and gaps analysis in `ai_notes/quickstart-dashboards/zendesk/`.
- Board outline + link matrix emphasizing ticket → agent → SLA → backlog navigation paths; document key columns for cross-board linking.
- Initial YAML pack (4 boards): `overview.yml`, `agent-performance.yml`, `sla-compliance.yml`, `ticket-backlog.yml`.
- Retro: explicitly list skill edits and workspace doc updates; compare process friction to the Salesforce pilot; suggest a third connector task if the process is stable.
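For orientation, the artifact layout the plan targets (filenames match those referenced elsewhere in these notes):

```text
ai_notes/quickstart-dashboards/zendesk/
├── charter.md
├── research.md
├── sources.md
├── examples/index.md
├── questions.md
├── link-matrix.md
├── gaps.md
└── faces/
    ├── overview.yml
    ├── agent-performance.yml
    ├── sla-compliance.yml
    └── ticket-backlog.yml
```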
Deliverables checklist
- [x] Research notes (`ai_notes/quickstart-dashboards/zendesk/research.md`)
- [x] Board outline + link matrix (`ai_notes/quickstart-dashboards/zendesk/link-matrix.md`)
- [x] Initial YAML pack (`ai_notes/quickstart-dashboards/zendesk/faces/`) — 4 boards: overview, agent-performance, sla-compliance, ticket-backlog
- [x] Retro — process comparison with Salesforce pilot, skill/spec observations
Implementation Progress
2026-03-22 — Research phase complete
Research artifacts produced in ai_notes/quickstart-dashboards/zendesk/:
- charter.md — Product scope (Zendesk Suite Support focus), success definition (4 boards mirroring Explore tabs), non-goals (Sell/Talk/Chat/Guide), stakeholder voices (operator, team lead, exec).
- research.md — Narrative synthesis: native Explore dashboard mapping (5 tabs → 4 boards), persona/cadence matrix, board topology, model mapping summary, decisions (fold satisfaction into overview + agent boards; use calendar-hour metrics only in v1).
- sources.md — 8 sources (P0: Explore dashboard docs, metrics reference; P1: Fivetran dbt_zendesk README, dbt Hub docs, SLA/CSAT docs).
- examples/index.md — 7 examples mapped from Explore tabs to planned boards with coverage matrix; 1 gap (channel breakdown → GAP-001).
- questions.md — 24 questions across operator/lead/exec personas with model anchors; 5 multi-board episodes (Monday queue review, weekly exec review, SLA deep-dive, agent coaching, backlog analysis).
- link-matrix.md — 9 link intents across 4 boards; overview as hub with drill-down to agent/SLA/backlog; breadcrumb back-links; agent → backlog cross-link.
- gaps.md — 7 gaps identified; parity checklist: yes for Tickets/Efficiency/Assignee/SLAs tabs, partial for Satisfaction (GAP-007), no for Channel (GAP-001), Talk (GAP-002), Chat (GAP-003), and business hours (GAP-004).
Board design (4 boards planned):
1. overview.yml — Executive KPIs: ticket volume (created/solved/open), CSAT score, SLA breach count, first reply time, one-touch resolution %. Trend chart for ticket volume over time.
2. agent-performance.yml — Agent ranking: tickets per agent, reply count, resolution time, work time, CSAT by agent. Bar + table layout.
3. sla-compliance.yml — SLA health: achievement rate by policy and metric type, breach count, elapsed vs target. Table + bar layout.
4. ticket-backlog.yml — Backlog trends: open ticket count over time by status and priority, daily snapshot from zendesk__ticket_backlog.
Key decisions:
- 4 boards (not 5): CSAT folds into overview + agent boards since satisfaction columns are on zendesk__ticket_enriched; separate satisfaction board deferred to v2.
- Calendar-hour metrics only in v1; business-hour variants noted as v2 enhancement (GAP-004).
- Channel breakdown deferred (GAP-001): requires `zendesk__ticket_passthrough_columns` var config per customer.
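For reference if channel breakdown is revisited in v2 — a sketch of the var config in the consuming project, assuming the standard Fivetran passthrough-columns pattern (the column name below is a placeholder, not verified against any customer schema):

```yaml
# dbt_project.yml (consuming project) — hypothetical example
vars:
  zendesk__ticket_passthrough_columns:
    - via_channel   # placeholder: whichever source column carries the channel
```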
Next step: Clone the Zendesk quickstart repo, run `dft init`, and implement the YAML pack against actual model columns. Research is self-contained — another contributor can implement from these artifacts without re-researching.
2026-03-22 — YAML pack implemented
4 dashboard YAML files created in ai_notes/quickstart-dashboards/zendesk/faces/:
- `overview.yml` — 6 KPIs (total/open/solved tickets, CSAT %, first reply time, SLA breaches), 2 trend lines (created/solved by day), priority bar chart, top-agents workload bar. Queries join `zendesk__ticket_enriched` + `zendesk__ticket_metrics` + `zendesk__sla_policies`.
- `agent-performance.yml` — Full agent summary table (tickets, reply count, solve time, work time, CSAT %, one-touch %), resolution time bar chart, CSAT-by-agent bar chart. Minimum 5 rated tickets for CSAT display.
- `sla-compliance.yml` — 3 KPIs (events, breaches, achievement %), breach-by-policy bar, achievement-by-metric bar, breach detail table (policy × metric × target with avg elapsed). Filters to `is_active_sla` only.
- `ticket-backlog.yml` — 4 KPIs (open/pending/hold/total unsolved from latest snapshot), total backlog trend line, status-over-time colored line, priority-over-time colored line, current backlog detail table. Grain: `(ticket_id, date_day)` from `zendesk__ticket_backlog`.
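The actual board schema is defined by the dft tooling and isn't reproduced here. As an illustration only, a minimal `overview.yml` under an assumed sources → queries → charts → rows schema — every key name below is a placeholder, and the query is trimmed to two measures:

```yaml
# Hypothetical sketch — key names assume a sources/queries/charts/rows layout,
# not the actual dft spec.
sources:
  - name: zendesk          # the dbt models this board reads from
queries:
  - name: ticket_kpis
    sql: |
      select
        count(*)                                as total_tickets,
        count(*) filter (where status = 'open') as open_tickets
      from zendesk__ticket_enriched
charts:
  - name: kpi_total_tickets
    type: kpi
    query: ticket_kpis
    value: total_tickets
rows:
  - charts: [kpi_total_tickets]   # KPI row; trend and detail rows repeat the shape
```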
Pattern notes (vs Salesforce pilot):
- Same YAML structure: sources → queries → charts → rows with KPI row + trend row + detail row layout.
- Zendesk models expose more pre-computed metrics (SLA elapsed, agent work time, one-touch flag) — queries are simpler JOINs rather than CTEs.
- Link matrix authored but links not wired in YAML yet — blocked on dashboard-linking-v1 merge.
Retro — Zendesk pilot process comparison
Time per phase:
- Research: ~30 min (7 artifacts). Faster than a first pilot would be — templates and the skill checklist eliminated blank-page friction.
- YAML implementation: ~15 min (4 boards). Model mapping from research.md made column selection mechanical.
What the skill/templates got right:
- Charter → research → link-matrix flow produced all the information needed for YAML without backtracking.
- Question bank (24 questions, 5 episodes) validated board scope: no "why isn't X on this board?" moments during implementation.
- Gap analysis prevented wasted effort on channel breakdown (GAP-001) and business hours (GAP-004).
What should improve in the skill:
- Template doesn't prompt for SQL dialect notes (e.g. DuckDB vs Postgres differences in `DATE_TRUNC` and `CAST` syntax). Add a "target SQL dialect" field to the charter template.
- No template for the YAML pack itself — consider a `board.template.yml` with placeholder structure (sources/queries/charts/rows) to reduce boilerplate.
- Link-matrix template should include a "wiring status" column (authored / wired / tested) to track linking implementation separate from board design.
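A minimal version of that proposed pack template could be nothing more than the shared skeleton — hypothetical key names, mirroring the sources → queries → charts → rows layout the pack already uses:

```yaml
# board.template.yml — proposed placeholder (key names are assumptions,
# to be aligned with the actual dft board spec)
sources: []   # TODO: dbt models this board reads
queries: []   # TODO: named SQL blocks
charts: []    # TODO: kpi / line / bar / table definitions
rows: []      # TODO: KPI row, trend row, detail row
```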
Cross-connector patterns observed (Zendesk vs Salesforce):
- Both connectors have a natural "overview → entity drill-down" hub topology.
- KPI row + trend row + breakdown row is a universal layout; could become a board template.
- SLA/compliance boards are Zendesk-specific; Salesforce has pipeline/stage boards instead. Entity types differ but the navigation pattern (overview → specialized → detail) is identical.
- Satisfaction/CSAT folds naturally into overview + entity boards when the enriched model carries the column — no separate board needed for v1.
Recommendation: Process is stable enough for a third connector pilot (e.g. Jira, HubSpot). Suggest filing the task after merging this PR.
QA Exploration
N/A — YAML files are reference implementations authored against documented model columns. Smoke-testing via `dft serve` requires a Zendesk quickstart repo checkout with seed data, which is out of scope for this task. QA will happen when the pack is deployed to an actual dbt project.
- [x] QA exploration completed (or N/A for non-UI tasks)
Review Feedback
- [ ] Review cleared