Dataface Tasks

Quality standards and guardrails

IDM2_INTERNAL_ADOPTION_DESIGN_PARTNERS-IDE_EXTENSION-03
Status: not_started
Priority: p1
Milestone: m2-internal-adoption-design-partners
Owner: ui-design-frontend-dev

Problem

As more contributors add extension features — new panel types, diagnostic rules, MCP tool integrations, preview capabilities — there are no enforced quality standards governing how these surfaces behave. One contributor's panel may handle errors gracefully while another silently swallows them. Diagnostic messages vary in tone, specificity, and actionability. Preview rendering for new chart types may ship without loading/error states. Without defined standards and automated guardrails (linting rules, review checklists, integration test requirements), the extension's quality becomes inconsistent and degrades as the contributor base grows, creating a patchwork experience that undermines user trust.
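
To make the gap concrete, a guardrail of this kind can be as small as a shared render boundary that every panel must pass through, so an exception becomes a uniform, visible error state rather than a silent failure. The sketch below is illustrative only; `PanelRenderState` and `renderWithBoundary` are hypothetical names, not existing extension code.

```typescript
// Hypothetical shared guardrail: every panel render goes through one
// boundary so failures surface as a uniform error state instead of
// being silently swallowed by individual contributors' panels.
type PanelRenderState =
  | { kind: "loading" }
  | { kind: "ready"; html: string }
  | { kind: "error"; message: string; retryable: boolean };

async function renderWithBoundary(
  panelId: string,
  render: () => Promise<string>,
): Promise<PanelRenderState> {
  try {
    return { kind: "ready", html: await render() };
  } catch (err) {
    // A standard could require: log with the panel id, never rethrow
    // into the webview, and always produce an actionable message.
    const message = err instanceof Error ? err.message : String(err);
    console.error(`[panel:${panelId}] render failed: ${message}`);
    return { kind: "error", message, retryable: true };
  }
}
```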

Context

  • Teams judge readiness inconsistently for analyst authoring in VS Code/Cursor (preview, diagnostics, and assist) because there is no single quality bar covering correctness, UX clarity, failure handling, and maintenance expectations.
  • Without explicit standards, work gets approved on local intuition and later re-opened when another reviewer finds a gap that was never written down.
  • Expected touchpoints include apps/ide/vscode-extension/, preview/inspector runtime code, extension docs and tests, review checklists, and any eval or QA surfaces used to prove a change is safe to ship.

Possible Solutions

  • A - Rely on experienced reviewers to enforce quality informally: flexible, but it does not scale and leaves decisions hard to reproduce.
  • B - Recommended: define a concise quality rubric plus guardrails, specifying acceptance criteria, required evidence, and clear anti-goals so reviews are consistent (one possible encoding is sketched after this list).
  • C - Block all new work until a comprehensive handbook exists: safer in theory, but too heavy for the milestone and likely to stall momentum.
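
To illustrate option B, the rubric could be encoded as data so the review checklist and any automated checks read from the same source of truth. The shape and example entry below are hypothetical, a sketch of one possible encoding rather than an agreed schema.

```typescript
// Hypothetical rubric entry: one record per quality standard, shared by
// the review checklist and any automated check that can enforce it.
interface RubricEntry {
  id: string;                 // stable identifier, e.g. "panel-error-states"
  standard: string;           // the quality bar, stated as a testable claim
  requiredEvidence: string[]; // what a change must show before approval
  antiGoals: string[];        // explicitly out of scope for this entry
  automatedCheck?: string;    // lint rule or test suite that enforces it
}

const panelErrorStates: RubricEntry = {
  id: "panel-error-states",
  standard: "Every panel type renders explicit loading and error states.",
  requiredEvidence: [
    "Screenshot or test output showing the error state",
    "Integration test covering a failed data fetch",
  ],
  antiGoals: ["Custom per-panel error UI that bypasses the shared boundary"],
  automatedCheck: "panel-states.integration.test.ts", // hypothetical suite
};
```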

Plan

  1. List the failure modes and review disagreements that matter most for analyst authoring in VS Code/Cursor with preview, diagnostics, and assist, using recent work as concrete examples.
  2. Turn those into a small set of quality standards, required validation evidence, and explicit guardrails for unsupported or risky cases.
  3. Update the relevant docs, task/checklist expectations, and test or QA hooks so the standards are actually enforced (see the enforcement sketch after this list).
  4. Use the rubric on a representative set of recent or in-flight items and tighten the wording anywhere it still leaves too much ambiguity.
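
For step 3, one way to make a standard enforceable is a parameterized integration test that holds every registered panel to the same failure-handling bar. This is a minimal sketch assuming a Vitest setup; `panelRegistry` is a hypothetical module, and `renderWithBoundary` is the sketch from the Problem section above.

```typescript
// Hypothetical enforcement hook: a single parameterized test that holds
// every registered panel to the same failure-handling standard.
import { describe, it, expect } from "vitest";
import { panelRegistry } from "./panelRegistry"; // hypothetical registry
import { renderWithBoundary } from "./renderWithBoundary";

describe("panel quality guardrails", () => {
  for (const panel of panelRegistry.all()) {
    it(`${panel.id} surfaces an error state when its data source fails`, async () => {
      const state = await renderWithBoundary(panel.id, async () => {
        throw new Error("simulated data-source failure");
      });
      // The standard: no silent failures; every panel yields an
      // explicit, non-empty, actionable error state.
      expect(state.kind).toBe("error");
      if (state.kind === "error") {
        expect(state.message.length).toBeGreaterThan(0);
      }
    });
  }
});
```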

Implementation Progress

Review Feedback

  • [ ] Review cleared