# Feature Placement PR Checklist (League / Lab / Dimensions)
Last updated: 2026-02-28
Use this checklist for every product/UI PR that introduces a new page, panel, chart, model output, or workflow.
Canonical framework: League / Lab / Dimensions Brief
## Review Goal

Each feature must have one primary station:

- **League**: observed state
- **Lab**: experimental analysis
- **Dimensions**: counterfactual/possible-state exploration
If a feature spans multiple stations, split it into separate surfaces with explicit transitions.
## Hard Gates (Pass/Fail)
All gates must pass before merge.
- **Primary station is explicit.** The PR states one owner station (`League`, `Lab`, or `Dimensions`).
  - Fail if the station is omitted or multi-owned by default.
- **Primary user question is explicit.** The PR includes one dominant user question the feature answers.
  - Fail if the question is vague ("insights", "explore data") without a concrete ask.
- **Evidence type matches station.** `League` uses observed/validated data. `Lab` uses controlled experiments with tunable parameters. `Dimensions` uses scenario/counterfactual generation.
  - Fail if the evidence type conflicts with the station.
- **Uncertainty contract is correct.** `League`: no speculative claims presented as fact. `Lab`: conditional outputs are labeled as model-dependent. `Dimensions`: outputs are probabilistic/calibrated, not deterministic claims.
  - Fail on a certainty mismatch.
- **Interaction contract is correct.** `League`: read/monitor/compare interactions. `Lab`: test/fit/adjust interactions. `Dimensions`: simulate/branch/compare-outcomes interactions.
  - Fail if the dominant user action belongs to another station.
- **Loss function alignment is documented.** The PR states how the feature reduces the owner station's loss.
  - Fail if there is no explicit loss framing.
- **Cross-station leakage is controlled.** Any secondary capability links out to the correct station.
  - Fail if one page tries to be all three stations.
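As an illustrative sketch only (the `HardGates` class and its field names are hypothetical, not part of any real review tool), the all-or-nothing nature of the gates can be modeled as a single boolean check:

```python
from dataclasses import dataclass, fields

@dataclass
class HardGates:
    # Each field mirrors one pass/fail gate from the checklist above.
    primary_station_explicit: bool
    user_question_explicit: bool
    evidence_matches_station: bool
    uncertainty_contract_correct: bool
    interaction_contract_correct: bool
    loss_alignment_documented: bool
    leakage_controlled: bool

    def passes(self) -> bool:
        # All gates must pass before merge; a single failure blocks.
        return all(getattr(self, f.name) for f in fields(self))

gates = HardGates(True, True, True, True, True, True, False)
print(gates.passes())  # -> False: one failed gate blocks merge
```

There is deliberately no weighting here: a gate is binary, unlike the rubric scores later in this checklist.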
## Station-Specific Loss Checks
### League Loss

`L_league = inaccuracy + staleness + ambiguity + missing_context + friction`

Must show:

- Source/provenance clarity
- Freshness handling (timestamp, season, or data scope)
- Clear factual language

Hard fail conditions:

- Predictive/simulated values shown as current fact
- Missing time scope on key stats
### Lab Loss

`L_lab = spurious_confidence + non_reproducibility + poor_explainability + friction + overfit_to_view`

Must show:

- Reproducible controls (axes, model choice, params, seed when relevant)
- Model assumptions and caveat language
- Comparability across configurations

Hard fail conditions:

- No control state visibility
- "Best model" claims without diagnostics context
### Dimensions Loss

`L_dimensions = miscalibration + state_incoherence + false_precision + opaque_assumptions + narrow_coverage`

Must show:

- Scenario assumptions surfaced to the user
- Distribution/range outputs, not single-point certainty
- Ability to compare alternate branches

Hard fail conditions:

- A single deterministic future presented as truth
- Hidden assumptions that materially change outcomes
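The three loss decompositions can be held in one structure for review tooling. The sketch below is hypothetical (no such tool is specified in this checklist); the term names come directly from the formulas above, and the unweighted sum is an assumption:

```python
# Loss terms per station, taken verbatim from the formulas above.
STATION_LOSS_TERMS = {
    "League": ["inaccuracy", "staleness", "ambiguity", "missing_context", "friction"],
    "Lab": ["spurious_confidence", "non_reproducibility", "poor_explainability",
            "friction", "overfit_to_view"],
    "Dimensions": ["miscalibration", "state_incoherence", "false_precision",
                   "opaque_assumptions", "narrow_coverage"],
}

def station_loss(station: str, term_scores: dict[str, float]) -> float:
    # Unweighted sum over the station's loss terms; unscored terms count as 0.
    return sum(term_scores.get(t, 0.0) for t in STATION_LOSS_TERMS[station])

print(station_loss("League", {"staleness": 0.4, "friction": 0.1}))  # -> 0.5
```

A real implementation would likely weight terms differently per product area; the point is only that each station owns a distinct, named set of failure modes.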
## PR Description Template (Required)
Paste this into every relevant PR:
## Feature Placement
- Primary station: `League | Lab | Dimensions`
- Feature summary: <one sentence>
- Primary user question: <one sentence>
- Dominant user verb: `see/verify` | `test/compare` | `simulate/project`

## Why This Station
- Why it belongs here:
  - <bullet>
  - <bullet>
- Why it does NOT belong in other stations:
  - <bullet>
  - <bullet>

## Loss Function Impact
- Station loss terms reduced:
  - <term>: <how>
  - <term>: <how>
- New risks introduced:
  - <risk + mitigation>

## Uncertainty Contract
- Claim type: `factual` | `conditional` | `probabilistic`
- How uncertainty is communicated: <copy/UI>
## Scoring Rubric (Secondary Check)
Score each item 0-2:

- 2: clear and correct
- 1: partial/ambiguous
- 0: missing/incorrect

Criteria:

1. Station clarity
2. User-question clarity
3. Evidence-station fit
4. Uncertainty fit
5. Interaction fit
6. Loss-function articulation
7. Leakage control

Threshold:

- 12+ = acceptable (if all hard gates pass)
- <12 = revise
Note: hard gates override score. A hard-gate fail blocks merge even with a high score.
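The rubric and the gate-override rule compose into a single merge decision. A minimal sketch (the function name and return strings are illustrative, not a specified API):

```python
def merge_decision(criterion_scores: list[int], hard_gates_pass: bool) -> str:
    # Seven criteria, each scored 0-2 (max 14); 12+ is acceptable,
    # but a hard-gate failure blocks merge regardless of score.
    assert len(criterion_scores) == 7
    assert all(s in (0, 1, 2) for s in criterion_scores)
    if not hard_gates_pass:
        return "blocked: hard-gate fail overrides score"
    total = sum(criterion_scores)
    return "acceptable" if total >= 12 else f"revise (score {total} < 12)"

print(merge_decision([2, 2, 2, 2, 2, 1, 1], hard_gates_pass=True))   # -> acceptable
print(merge_decision([2, 2, 2, 2, 2, 2, 2], hard_gates_pass=False))  # -> blocked
```

Note the ordering: the gate check runs first, so a perfect rubric score never rescues a gate failure.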
## Imagined Feature Placement Examples
| Feature idea | Primary station | Why |
|---|---|---|
| Live league pulse board (standings + form + signal snapshot) | League | Current-state orientation; factual monitoring |
| Team trend explainer card with verified derived metrics | League | Derived but non-speculative summary |
| User-built scatter with regression/GMM + AI interpretation | Lab | Controlled experiment with explicit model lens |
| "What changed when we switch from VAEP to xT axes?" panel | Lab | Hypothesis testing via parameterized comparison |
| Monte Carlo variance sandbox for shot conversion assumptions | Lab | Experiment method; analytical stress test |
| Cross-era matchup rollout with outcome distribution | Dimensions | Alternate-state generation and probability surface |
| Scenario builder: "if Team A gains +0.2 xT/90 in run-in" | Dimensions | Counterfactual intervention on future trajectory |
| Branch explorer showing top 5 plausible season endings | Dimensions | Multi-path futures with uncertainty envelope |
| "Predicted table next week" shown as a single fixed ranking | Dimensions (incorrectly implemented) | Should be distributional; single-point certainty is a fail |
## Placement Anti-Patterns
- A `League` page contains speculative model output without caveats.
- A `Lab` page hides model parameters and cannot be reproduced.
- A `Dimensions` page offers only one "official" predicted future.
- One page attempts full read + experiment + simulate in a single screen.
## Tie-Break Rule
If a feature is genuinely ambiguous, classify by the dominant user action:

- user mostly reads/verifies -> League
- user mostly tunes/tests -> Lab
- user mostly branches/simulates -> Dimensions

If still ambiguous, split it into:

- a factual baseline block in League
- an experiment block in Lab
- a scenario block in Dimensions
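The tie-break rule reads as a small decision function over the dominant user verb. A sketch under the assumption that the verb labels match the PR template's `Dominant user verb` options (the function itself is hypothetical):

```python
def tie_break(dominant_verb: str) -> str:
    # Map the dominant user action to its owning station; anything
    # unrecognized falls through to the split-across-stations fallback.
    mapping = {
        "see/verify": "League",            # user mostly reads/verifies
        "test/compare": "Lab",             # user mostly tunes/tests
        "simulate/project": "Dimensions",  # user mostly branches/simulates
    }
    fallback = "split: League baseline + Lab experiment + Dimensions scenario"
    return mapping.get(dominant_verb, fallback)

print(tie_break("test/compare"))  # -> Lab
```

Forcing the ambiguous case through an explicit fallback mirrors the rule above: when no single verb dominates, the feature is split rather than guessed.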