Quality and performance improvements
Problem
By v1.2, the profiler will have accumulated performance data and user feedback revealing specific quality and speed bottlenecks: semantic detection latency on wide tables (100+ columns) may make profiling impractical, confidence scores on certain data patterns may cluster around unhelpful mid-range values, and profile rendering for large schemas may be slow enough to discourage exploration. These issues have measurable user-facing impact (abandonment during profiling, misclassification-driven misinterpretation) but have not been systematically profiled and optimized. Targeted improvements to semantic inference accuracy, query efficiency, and rendering performance — each tied to a measurable user-facing outcome — are needed to move the profiler from "adequate" to "delightful."
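The mid-range confidence clustering described above can be made measurable before any fix is attempted. The sketch below is a minimal, hypothetical way to quantify it: `midrange_fraction` is not an existing profiler function, and the band boundaries are illustrative assumptions.

```python
def midrange_fraction(scores, low=0.4, high=0.6):
    """Fraction of confidence scores stuck in the unhelpful mid-range.

    A high value suggests the classifier is hedging rather than
    committing, which is the clustering problem described above.
    The [low, high] band is an assumed threshold, not a product spec.
    """
    if not scores:
        return 0.0
    return sum(low <= s <= high for s in scores) / len(scores)

# Example: most scores hover near 0.5, flagging a quality gap.
scores = [0.48, 0.52, 0.55, 0.91, 0.45, 0.50, 0.12, 0.58]
print(round(midrange_fraction(scores), 3))  # 0.75
```

Tracking this single number per data pattern gives a concrete before/after measure for any inference-accuracy work.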
Context
- Once warehouse profiling, semantic inference, and analyst-facing inspect/context artifacts are in regular use, quality and performance work needs to target the actual slow, flaky, or costly paths rather than generic optimization ideas.
- The right scope here is evidence-driven: identify bottlenecks, remove the highest-friction issues, and make sure the fixes are measurable and regression-resistant.
- Expected touchpoints include dataface/core/inspect/, schema-context consumers, inspect docs, core tests, telemetry or QA evidence, and any heavy workflows where users are paying the cost today.
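Finding the actual slow paths usually starts with lightweight per-phase timing rather than a full profiler run. The sketch below assumes a simple in-process registry; the real telemetry hooks in dataface/core/inspect/ may look different, and all names here are hypothetical.

```python
import time
from collections import defaultdict
from contextlib import contextmanager

# Hypothetical in-process timing registry (assumption, not an
# existing dataface API).
TIMINGS = defaultdict(list)

@contextmanager
def timed(phase):
    """Record wall-clock duration for a named profiling phase."""
    start = time.perf_counter()
    try:
        yield
    finally:
        TIMINGS[phase].append(time.perf_counter() - start)

# Usage: wrap suspected hot paths, then rank phases by total cost.
with timed("semantic_inference"):
    sum(range(10_000))  # stand-in for the real work

slowest = max(TIMINGS, key=lambda p: sum(TIMINGS[p]))
print(slowest)
```

Ranking phases by accumulated time keeps the optimization list evidence-driven instead of speculative.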
Possible Solutions
- A - Tune isolated hotspots as they are reported: useful for emergencies, but this rarely produces a coherent quality/performance program.
- B - Recommended: prioritize measurable bottlenecks and quality gaps: couple performance work with correctness and UX validation so improvements are both faster and safer.
- C - Rewrite broad subsystems for theoretical speedups: tempting, but usually too risky and poorly grounded for this milestone.
Plan
- Identify the biggest quality and performance pain points in warehouse profiling, semantic inference, and analyst-facing inspect/context artifacts using real usage data, QA findings, and support feedback.
- Choose a small set of improvements with clear before/after measures and explicit user-facing benefit.
- Implement the fixes together with regression checks, docs, or operator notes wherever the change affects behavior or expectations.
- Review the measured outcome and turn any remaining hotspots into sequenced follow-up tasks instead of leaving them as vague future work.
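The regression checks mentioned in the plan can be as simple as a latency-budget test that fails when a change regresses past the agreed "after" target. This is a hedged sketch: the budget value, the `profile_wide_table` stand-in, and the test name are all illustrative assumptions, not existing code.

```python
import time

# Hypothetical budget (seconds) for profiling a wide table; the real
# threshold would come from the before/after measurements in the plan.
WIDE_TABLE_BUDGET_S = 2.0

def profile_wide_table(n_columns=100):
    """Stand-in for the real profiler call on a 100+ column table."""
    return [f"col_{i}" for i in range(n_columns)]

def test_wide_table_latency_budget():
    start = time.perf_counter()
    result = profile_wide_table(120)
    elapsed = time.perf_counter() - start
    assert len(result) == 120  # correctness check alongside speed
    # Fails loudly if a change regresses past the agreed budget.
    assert elapsed < WIDE_TABLE_BUDGET_S, f"regression: {elapsed:.2f}s"

test_wide_table_latency_budget()
print("ok")
```

Pairing a correctness assertion with the timing assertion matches option B above: improvements stay both faster and safer.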
Implementation Progress
Review Feedback
- [ ] Review cleared