League / Lab / Dimensions Brief

Purpose

This document defines the three top-level product stations for NWSL Notebook:

  • League: observed reality
  • Lab: controlled experimentation
  • Dimensions: counterfactual and possible realities

Use this brief as the default decision rubric when placing new UI experiences.

Core Knowledge Gradient

Knowledge moves from the lowest tolerance for error to the highest tolerance for uncertainty:

  1. League (near-zero loss tolerance)
  2. Lab (moderate loss tolerance)
  3. Dimensions (high uncertainty tolerance, but must be calibrated)

In plain terms:

  • League answers: "What is true now / what happened?"
  • Lab answers: "What explains what we see?"
  • Dimensions answers: "What could happen under alternate states?"

1) League

What it is

The canonical surface of current and historical league state (all teams, all seasons), combining primary and stable derived data.

Questions it answers

  • What happened?
  • What is the current state of the league/team/player?
  • How do teams compare on agreed metrics right now?
  • What trends are already evidenced in data?

How and why

  • Deterministic views from primary tables plus validated derived metrics/signals.
  • Minimal assumptions; clear provenance.
  • Fast orientation and trust-building.

What it shows

  • Standings, record, goals, points, form
  • Team/player leaderboards
  • Signal state snapshots and timelines
  • League-wide map/overview surfaces

Why users go there

To ground themselves in reality before analyzing or simulating.

Experience and answers it provides

  • High-confidence answers
  • Immediate situational awareness
  • Shared baseline for discussion and decision-making

Abstract loss function

L_league = w1*inaccuracy + w2*staleness + w3*ambiguity + w4*missing_context + w5*interaction_friction

Interpretation:

  • Optimize for truth, clarity, and speed.
  • Any speculative framing increases loss.
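The weighted-sum form above (and its Lab and Dimensions counterparts) can be sketched as a simple computation. The weight and component values here are illustrative placeholders, not tuned product numbers.

```python
# Sketch of the weighted-sum station loss. Weights and component scores
# are hypothetical placeholders; each component is normalized to [0, 1].
def station_loss(weights: dict[str, float], components: dict[str, float]) -> float:
    """Weighted sum of normalized loss components."""
    return sum(weights[name] * components[name] for name in weights)

league_weights = {
    "inaccuracy": 0.35, "staleness": 0.25, "ambiguity": 0.15,
    "missing_context": 0.15, "interaction_friction": 0.10,
}
league_components = {
    "inaccuracy": 0.02, "staleness": 0.10, "ambiguity": 0.05,
    "missing_context": 0.20, "interaction_friction": 0.08,
}
loss = station_loss(league_weights, league_components)
```

The same helper applies to L_lab and L_dimensions with their own component names; only the weighting differs per station.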

2) Lab

What it is

An interactive experimentation workbench for testing hypotheses with models, comparisons, and visual analytics.

Questions it answers

  • Why might this be happening?
  • Which variables relate most strongly?
  • How sensitive are results to model/parameter choices?
  • Which explanation is more plausible under controlled tests?

How and why

  • User-driven exploration (scatter, overlays, clustering, regressions, diagnostics).
  • Reproducible controls and tunable parameters.
  • Small, inspectable experiments.

What it shows

  • User-chosen scatter plots
  • ML overlays and model fits
  • Team/player comparisons
  • Parameterized exploratory visualizations
  • Monte Carlo experiments (as analytical experiments, not narrative futures)
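One way to read "Monte Carlo as an analytical experiment, not a narrative future": vary a parameter, simulate under both settings, and report the sensitivity. Everything below (the toy goal model, the rates) is a hypothetical illustration, not the product's actual simulator.

```python
import random

# Toy sensitivity experiment: how do expected points-per-match respond to a
# change in scoring rate? Rates are hypothetical; minutes are modeled as
# independent scoring chances for simplicity.
def simulate_points(goal_rate: float, opp_rate: float, n: int, seed: int = 0) -> float:
    rng = random.Random(seed)
    total = 0
    for _ in range(n):
        gf = sum(rng.random() < goal_rate / 90 for _ in range(90))  # goals for
        ga = sum(rng.random() < opp_rate / 90 for _ in range(90))   # goals against
        total += 3 if gf > ga else (1 if gf == ga else 0)
    return total / n

baseline = simulate_points(1.4, 1.2, n=2000)
boosted = simulate_points(1.6, 1.2, n=2000)
delta = boosted - baseline  # sensitivity of points-per-match to scoring rate
```

The output is a comparative quantity (`delta`), which keeps the experiment in Lab territory; the same machinery producing outcome narratives would belong in Dimensions.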

Why users go there

To interrogate data, test assumptions, and build defensible explanations.

Experience and answers it provides

  • Conditional answers ("given these settings")
  • Comparative insight across models/lenses
  • Evidence for or against hypotheses

Abstract loss function

L_lab = w1*spurious_confidence + w2*non_reproducibility + w3*poor_explainability + w4*interaction_friction + w5*overfit_to_view

Interpretation:

  • Optimize for insight per interaction while preserving methodological honesty.
  • Overstated certainty increases loss.

3) Dimensions

What it is

A scenario engine for alternate states and future possibilities, powered by simulation/generative sequence modeling.

Questions it answers

  • What could happen next?
  • What changes under different assumptions/interventions?
  • Which futures are robust vs fragile?
  • What is the distribution of plausible outcomes?

How and why

  • Counterfactual state editing + rollout engines (e.g., transformer-based match simulations).
  • Distribution-first outputs, not single deterministic claims.
  • Emphasis on calibration, uncertainty, and scenario framing.
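"Distribution-first outputs" can be sketched as follows: run many rollouts and summarize the full outcome distribution rather than emitting one deterministic claim. The rollout function here is a stand-in stub with hypothetical probabilities, not the real sequence model.

```python
import random
from collections import Counter

def rollout(state: dict, rng: random.Random) -> str:
    """Placeholder for a learned match simulator; returns an outcome label."""
    r = rng.random()
    if r < state["win_prob"]:
        return "win"
    if r < state["win_prob"] + state["draw_prob"]:
        return "draw"
    return "loss"

def outcome_distribution(state: dict, n: int = 5000, seed: int = 7) -> dict[str, float]:
    """Aggregate n rollouts into an outcome distribution (the product surface)."""
    rng = random.Random(seed)
    counts = Counter(rollout(state, rng) for _ in range(n))
    return {k: counts[k] / n for k in ("win", "draw", "loss")}

dist = outcome_distribution({"win_prob": 0.45, "draw_prob": 0.25})
# dist exposes the whole possibility space, not a single picked outcome
```

Scenario envelopes and sensitivity views are then derived by re-running `outcome_distribution` under edited states and comparing the resulting distributions.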

What it shows

  • Match/world rollouts
  • Outcome distributions and scenario envelopes
  • Alternate-universe comparisons
  • Sensitivity to assumptions and interventions

Why users go there

To plan, stress-test decisions, and reason across plausible futures.

Experience and answers it provides

  • Probabilistic answers
  • Tradeoff-aware foresight
  • Scenario narratives grounded in model behavior

Abstract loss function

L_dimensions = w1*miscalibration + w2*state_incoherence + w3*false_precision + w4*opaque_assumptions + w5*narrow_scenario_coverage

Interpretation:

  • Optimize for calibrated possibility space, not certainty.
  • "Looks precise but is wrong" is the highest loss.
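Miscalibration is measurable: bin predicted probabilities and compare each bin's mean prediction to its realized frequency. The sketch below uses synthetic data; binning scheme and bin count are illustrative choices.

```python
# Minimal reliability check: a well-calibrated model has bin mean prediction
# close to bin realized frequency. Data here is synthetic.
def reliability_bins(preds: list[float], outcomes: list[int], n_bins: int = 5):
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(preds, outcomes):
        idx = min(int(p * n_bins), n_bins - 1)  # clamp p == 1.0 into last bin
        bins[idx].append((p, y))
    report = []
    for b in bins:
        if b:
            mean_p = sum(p for p, _ in b) / len(b)
            freq = sum(y for _, y in b) / len(b)
            report.append((round(mean_p, 3), round(freq, 3), len(b)))
    return report

# Perfectly calibrated toy data: prediction 0.5, half the outcomes positive.
report = reliability_bins([0.5] * 10, [1, 0] * 5)
```

A large gap between `mean_p` and `freq` in any bin is exactly the "looks precise but is wrong" failure mode the loss function penalizes.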

Placement Rubric (Where does a new UI belong?)

Put a feature in:

  • League if it reports or summarizes observed state with minimal assumptions.
  • Lab if the user is actively choosing variables/models to test a hypothesis.
  • Dimensions if the feature creates or compares alternate/future states.

Fast test

If the dominant user verb is:

  • "See / verify / monitor" -> League
  • "Test / compare / fit / probe" -> Lab
  • "Simulate / project / explore what-if" -> Dimensions
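The fast test can be encoded as a lookup; the verb lists below mirror the rubric and are a starting point, not an exhaustive taxonomy.

```python
# Verb -> station lookup for the placement fast test. Verb sets are
# illustrative and should be extended as the product vocabulary grows.
VERB_TO_STATION = {
    **dict.fromkeys(["see", "verify", "monitor"], "League"),
    **dict.fromkeys(["test", "compare", "fit", "probe"], "Lab"),
    **dict.fromkeys(["simulate", "project", "explore"], "Dimensions"),
}

def place_feature(dominant_verb: str) -> str:
    """Map a feature's dominant user verb to a station, or flag for review."""
    return VERB_TO_STATION.get(dominant_verb.lower(), "unresolved: apply full rubric")
```

An unmatched verb deliberately returns a sentinel rather than a guess, so ambiguous features fall through to the full placement rubric above.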

Boundary Rules

  • Monte Carlo used as an experiment method belongs in Lab.
  • Match/world scenario rollouts belong in Dimensions.
  • Derived metrics can appear in all three stations, but their role changes:
      • League: factual summary
      • Lab: analytical variable
      • Dimensions: state-transition driver

Product Promise by Station

  • League: "Trust what you see."
  • Lab: "Interrogate what you think."
  • Dimensions: "Explore what could be."