Experiment design for future bets
Problem
Future bets on the connector pack platform (community-contributed packs, AI-generated dashboards, self-serve pack customization) require significant investment but have uncertain payoff. Without designed validation experiments such as lightweight prototypes, user interviews, A/B tests, or instrumented feature flags, the team will either over-invest in ideas that don't pan out or under-invest in high-value ideas because the risk feels too high. Experiments designed up front, with clear success criteria, let the team test hypotheses cheaply before committing to full implementation, reducing the cost of wrong bets on the pack publishing workflow.
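As an illustration of what an instrumented feature flag with clear success criteria can mean in practice, one such experiment could be captured in a small config like the sketch below. The flag name, audience segment, metric, and thresholds are all hypothetical, not an existing platform convention.

```yaml
# Hypothetical feature-flag experiment config -- every name and threshold
# here is illustrative, not a real platform convention.
flag: community_pack_submissions
exposure:
  audience: design_partner_workspaces   # assumed segment name
  rollout_pct: 10
success_metric:
  name: packs_published_per_workspace
  window_days: 30
decision_rule:
  continue_if: ">= 2 submissions from >= 3 distinct workspaces"
  drop_if: "0 submissions after 30 days"
```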
Context
- The larger future bets for connector-specific dashboard packs and KPI narratives for Fivetran sources should be validated with scoped experiments before they absorb major implementation effort or become roadmap commitments.
- This task should design the experiments, not run them: define hypotheses, success signals, cheap prototypes or evaluation methods, and the decision rule for what happens next.
- Expected touchpoints include dashboard pack YAML (a hypothetical skeleton is sketched below), dbt/example assets, connector fixtures, quickstart docs, opportunity/prerequisite notes, eval or QA harnesses where relevant, and any external dependencies required to run the experiments.
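For orientation, the prototype for a pack experiment might be no more than a skeletal pack definition. The YAML below is a sketch assuming a plausible schema; every field name is a guess rather than the real pack format.

```yaml
# Hypothetical dashboard pack skeleton -- field names are assumptions,
# not the actual pack schema.
pack: fivetran_salesforce_pipeline_health
connector: salesforce            # Fivetran source this pack targets
fixtures: fixtures/salesforce/   # sample connector data for the prototype
dashboards:
  - title: Pipeline health overview
    tiles:
      - kpi: open_opportunities
      - kpi: stage_conversion_rate
docs: quickstart/salesforce_pack.md
```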
Possible Solutions
- A - Rely on team intuition to pick which future bet to pursue: fast, but weak when the bets are expensive or high-risk.
- B - Recommended: design lightweight validation experiments for the strongest bets, specifying the hypothesis, method, scope, evidence, and the threshold for continuing or dropping each idea.
- C - Build full prototypes for every future direction immediately: rich signal, but far too expensive for early-stage uncertainty.
Plan
- Confirm that the future bets on connector-specific dashboard packs and KPI narratives for Fivetran sources are both strategically important and uncertain enough to justify explicit experiments.
- Define the hypothesis, cheapest credible validation method, required inputs, and success/failure signals for each experiment (see the experiment-card sketch after this list).
- Document the operational constraints, owners, and follow-up decisions so the experiment outputs can actually change roadmap choices.
- Rank the experiments by cost versus decision value and sequence the first one or two instead of trying to validate everything at once; a toy scoring sheet follows the experiment-card sketch below.
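To make the hypothesis/method/signals plan item concrete, each experiment could be written up as a one-file "card" kept next to the pack assets. The layout below is a minimal sketch assuming a simple YAML convention; the experiment name, method, and thresholds are illustrative only.

```yaml
# Hypothetical experiment card -- structure and values are illustrative.
experiment: kpi_narratives_for_fivetran_sources
hypothesis: >
  Users act on a generated KPI narrative at least as often as on the
  equivalent raw dashboard tile.
method: wizard-of-oz             # hand-written narratives shown behind a flag
inputs:
  - connector fixtures for two Fivetran sources
  - 5 user interviews with existing pack users
signals:
  success: ">= 3 of 5 interviewees prefer the narrative view"
  failure: "interviewees ignore or distrust the narrative"
decision:
  on_success: scope a generation prototype
  on_failure: drop from roadmap, record findings in prerequisite notes
owner: TBD
```

Keeping the decision rule inside the card is the point: the experiment's output maps directly onto a roadmap action instead of being reinterpreted after the fact.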
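For the ranking step, a plain scoring sheet is usually enough at this stage: estimate cost and decision value per experiment and sequence by the ratio. The numbers below are placeholders that show the shape, not real estimates.

```yaml
# Hypothetical cost-vs-decision-value scoring -- numbers are placeholders.
# rank by decision_value / cost; sequence the top one or two.
experiments:
  - name: kpi_narratives_for_fivetran_sources
    cost: 2           # person-weeks for the wizard-of-oz prototype
    decision_value: 5 # how much the result would change the roadmap (1-5)
  - name: community_pack_submissions
    cost: 3
    decision_value: 4
  - name: self_serve_pack_customization
    cost: 4
    decision_value: 3
```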
Implementation Progress
Review Feedback
- [ ] Review cleared