Design-assertions foundation
Problem
The chart library already has a growing body of design observations and structured notes, but those insights are still too loose to reliably drive downstream systems. If the work stops at observations and notes, it remains valuable but hard to operationalize: evaluation prompts, default decisions, and future automation would all depend on rediscovering or reinterpreting prior design thinking. This work should therefore produce an initial design-assertions corpus: stable, structured chart-design conclusions, derived from chart exploration and written in a form that downstream workflows can consume directly.
Context
- The chart design notes guide already distinguishes observations, recommendations, claim types, reader goals, and implementation surfaces.
- Stable conclusions are already expected to be promoted from notes into a more durable rules layer.
- This task should convert exploration into reusable assertions rather than leaving it as freeform commentary.
Possible Solutions
- Keep design knowledge in notes only. Fastest in the short term, but too loose for downstream use.
- Write a large abstract rules document up front. Durable in theory, but likely premature and detached from concrete chart evidence.
- Recommended: use concrete chart exploration to promote stable findings into an initial design-assertions corpus, with each assertion structured strongly enough to support evaluation, defaults, and later automation.
Plan
- Define the first design-assertions format, building on the existing notes workflow.
- Use chart exploration and evaluation on the chart corpus to identify stable conclusions worth promoting.
- Promote a first batch of assertions from notes into a reusable assertions corpus.
- Tag each assertion clearly enough to distinguish its type: principle, default, exception, renderer contract, acceptance criterion, or another relevant category.
- Mark which assertions appear suitable for downstream evaluation, default-setting, or automation.
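The assertion format itself is still to be defined, but the plan above implies a record with a stable identifier, a typed tag, links back to source notes, and downstream-suitability markers. A minimal sketch of such a record follows; every field name and type value here is a hypothetical illustration, not a decided schema.

```python
from dataclasses import dataclass, field

# Hypothetical assertion types drawn from the plan; the exact vocabulary
# is an assumption for illustration only.
ASSERTION_TYPES = {
    "principle", "default", "exception",
    "renderer-contract", "acceptance-criterion",
}

@dataclass
class DesignAssertion:
    """One promoted chart-design conclusion (illustrative sketch)."""
    id: str                   # stable identifier, e.g. "bar-baseline"
    statement: str            # the design conclusion itself
    assertion_type: str       # one of ASSERTION_TYPES
    # IDs of the source notes this assertion was promoted from.
    evidence: list[str] = field(default_factory=list)
    # Downstream uses the assertion appears suitable for,
    # e.g. "evaluation", "defaults", "automation".
    downstream: list[str] = field(default_factory=list)

    def __post_init__(self) -> None:
        # Reject unknown type tags so the corpus stays machine-checkable.
        if self.assertion_type not in ASSERTION_TYPES:
            raise ValueError(f"unknown assertion type: {self.assertion_type}")
```

Constraining the type tag at construction time is one way to keep the corpus structured strongly enough for evaluation and defaults, rather than drifting back toward freeform commentary.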
Implementation Progress
- This task should start after the first evaluation loop is running.
- The output should be a reusable assertions corpus, not just many variants or informal notes.
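A reusable corpus also implies a machine-readable serialization that downstream workflows can query. As one hedged sketch, assuming a JSONL layout with one assertion object per line (the layout and field names are assumptions, not a decided format), downstream selection might look like:

```python
import json

# Hypothetical JSONL corpus: one assertion object per line.
# IDs, types, and downstream markers are illustrative only.
corpus_jsonl = """\
{"id": "legend-placement", "type": "default", "downstream": ["defaults"]}
{"id": "log-axis-zero", "type": "exception", "downstream": ["evaluation"]}
{"id": "bar-baseline", "type": "principle", "downstream": ["evaluation", "defaults"]}
"""

def assertions_for(corpus: str, use: str) -> list[dict]:
    """Select assertions marked suitable for a given downstream use."""
    records = [json.loads(line) for line in corpus.splitlines() if line.strip()]
    return [r for r in records if use in r.get("downstream", [])]

print([r["id"] for r in assertions_for(corpus_jsonl, "evaluation")])
# → ['log-axis-zero', 'bar-baseline']
```

A filter like this is the kind of consumption that informal notes cannot support, which is the point of promoting stable conclusions into a structured corpus.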
QA Exploration
- [ ] QA exploration completed (or N/A for non-UI tasks)
Review Feedback
- [ ] Review cleared