MEMORANDUM
TO: Sports Science Staff
FROM: Bill Guerin, General Manager, 2026 U.S. Olympic Men's Ice Hockey Team
RE: Shooting Accuracy Study — 2030 French Alps Preparation
DATE: February 2026

After our gold medal run in Milan, we are already looking ahead to the 2030 Winter Games in the French Alps. The coaching staff wants data to inform training decisions for the next cycle, so I am authorizing a controlled shooting accuracy study with Olympic-pool athletes.

We need data on two things:

  1. Does the new dynamic warmup protocol improve shooting accuracy? Our strength and conditioning staff developed a dynamic warmup to replace the standard static stretch. I want to know if it works.
  2. Does stick flex make a measurable difference? Equipment sponsors are pushing high-flex sticks. Before we commit, I want evidence.

Some coaches insist that home-ice advantage matters and want to test arena (home vs. away). Others swear by blade curve adjustments and want to test open vs. closed blade curves. I will leave the study design to you, but my budget is limited.

Budget: $27,500

Each athlete session costs money. Recruiting beyond 5 per cell gets expensive. Plan accordingly.

Design the study. Run it. Report back with what you find.

- Guerin

SCORING

Points are awarded for each statistically significant finding (p < 0.05), but not all findings are worth the same. Significant findings matter most; budget efficiency counts too. The leaderboard ranks by score.

YOUR DESIGN
RECRUITMENT COSTS

Recruiting Olympic-caliber athletes gets harder the more you need per condition.

Reps in cell    Cost per rep
1-5             $500
6-10            $750
11+             $1,000
Blocking: Blocking by position groups players as Forward vs. Defense and includes that grouping as a term in the model. If position explains variability in accuracy, the block absorbs it, reducing residual variance and increasing power. However, recruiting a balanced blocked design costs an extra $2,500 in logistics. Is it worth it?
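To see what a candidate design costs under this tiered pricing, here is a minimal sketch. The function names are my own; the flat $2,500 fee corresponds to the blocking logistics cost described above.

```python
def cost_per_rep(rep_index):
    """Marginal cost of the rep_index-th athlete in a cell (1-based),
    using the tiered pricing table above."""
    if rep_index <= 5:
        return 500
    if rep_index <= 10:
        return 750
    return 1000


def design_cost(n_cells, reps_per_cell, blocked=False):
    """Total recruitment cost for a balanced design; blocked adds the
    flat $2,500 logistics fee for blocking by position."""
    per_cell = sum(cost_per_rep(i) for i in range(1, reps_per_cell + 1))
    return n_cells * per_cell + (2500 if blocked else 0)


# A 2x2 design (4 cells) with 5 reps per cell, blocked by position:
print(design_cost(4, 5, blocked=True))   # 4 * (5 * 500) + 2500 = 12500
```

Note how the tiers bite: a 2x2 with 10 reps per cell costs 4 * (2500 + 3750) = $25,000 before any blocking fee, right at the edge of the $27,500 budget.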
◆ THE BLUEPRINT
Experimental Design Concepts
Replication

Each observation in a cell is a replicate. More replicates means better estimates of within-cell variability and more statistical power to detect effects.

Factorial Design

A 2x2 factorial has 4 cells; a 2x2x2 has 8. At a fixed budget, each added two-level factor doubles the number of cells and halves the reps per cell.
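The cells-versus-reps arithmetic can be sketched in a few lines (the 40-observation capacity and the function name are made up for illustration):

```python
def cells_and_reps(total_obs, n_factors):
    """Number of cells in a 2^k factorial and the balanced reps per cell
    achievable with total_obs observations."""
    n_cells = 2 ** n_factors
    return n_cells, total_obs // n_cells


# Hypothetical recruiting capacity of 40 observations:
for k in (2, 3):
    cells, reps = cells_and_reps(40, k)
    print(f"{k} factors: {cells} cells, {reps} reps per cell")
# 2 factors: 4 cells, 10 reps per cell
# 3 factors: 8 cells, 5 reps per cell
```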

Blocking

A generalized randomized block design groups experimental units by a known source of variability (e.g., player position). The block term absorbs that variability, reducing the residual sum of squares (SSE) and increasing the F-statistics for the treatment effects.

The ANOVA Model

Without blocking:

\(Y_{ijk} = \mu + \alpha_i + \beta_j + (\alpha\beta)_{ij} + \epsilon_{ijk}\)

With blocking by Position:

\(Y_{ijkl} = \mu + \gamma_k + \alpha_i + \beta_j + (\alpha\beta)_{ij} + \epsilon_{ijkl}\)

where \(\gamma_k\) is the block (position) effect and \(l\) indexes replicates within each block-by-treatment cell.
STUDY DATA
◆ THE BLUEPRINT
Reading Your Data
What Am I Looking At?

This table shows every observation from your simulated study. Each row is one player's shooting session (30 shots).

Key Columns
  • Warmup / Flex: The treatment conditions assigned to this player.
  • Hits: Number of targets hit out of 30 shots.
  • Accuracy: Hits / 30. This is the response variable in the ANOVA model.
Degrees of Freedom

Each term in the ANOVA model uses degrees of freedom. Every df spent on a model term is one fewer df in the residual. More residual df means a more precise estimate of error variance, which helps detect real effects.

For two-level factors, the block and each two-way interaction cost 1 df apiece. Adding a factor costs 1 df for its main effect, but also doubles the number of cells. The tradeoff: is the information worth the cost?

ANOVA TABLE
INTERACTION PLOT
◆ THE BLUEPRINT
Reading Your Results
The ANOVA Table

Each row tests whether a factor (or interaction) explains significant variation in accuracy. The p-value is the probability of seeing variation this large by chance alone if the factor truly had no effect.

p < 0.05 is the conventional threshold for statistical significance: the observed variation is unlikely under chance alone, so the factor has a detectable effect on accuracy.

The Interaction Plot

If the lines are roughly parallel, there is little evidence of an interaction between the factors. Non-parallel lines suggest the effect of one factor depends on the level of the other.
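Parallel lines mean the simple effects are equal: the difference between warmup conditions is the same at each flex level. A small sketch with hypothetical cell means (the values below are made up):

```python
import pandas as pd

# Hypothetical cell means (Accuracy) from a 2x2 study.
df = pd.DataFrame({
    "Warmup":   ["dynamic", "dynamic", "static", "static"],
    "Flex":     ["high", "standard", "high", "standard"],
    "Accuracy": [0.62, 0.60, 0.55, 0.53],
})
means = df.pivot(index="Warmup", columns="Flex", values="Accuracy")

# The warmup effect at each flex level; equal values -> parallel lines.
simple_effects = means.loc["dynamic"] - means.loc["static"]
print(simple_effects)
# Both differences equal 0.07, so the plot's lines are parallel:
# no interaction in these (hypothetical) means.
```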

Why Did I Miss an Effect?

If a real effect exists but your p-value is > 0.05, you lack statistical power. Possible reasons:

  • Too few replications per cell
  • Too many factors spreading observations thin
  • Not blocking (high residual variance)
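One rough way to sanity-check power before spending the budget is statsmodels' one-way ANOVA power solver. This treats a single factor as a one-way comparison, ignoring the factorial structure, and the effect size (Cohen's f = 0.4) is an assumed value, not a measured one:

```python
from statsmodels.stats.power import FTestAnovaPower

# Approximate power for one two-level factor, treated as a one-way
# ANOVA. effect_size is Cohen's f; 0.4 is an assumption.
solver = FTestAnovaPower()
power_40 = solver.solve_power(effect_size=0.4, alpha=0.05,
                              k_groups=2, nobs=40)
power_12 = solver.solve_power(effect_size=0.4, alpha=0.05,
                              k_groups=2, nobs=12)
print(f"power at n=40: {power_40:.2f}")
print(f"power at n=12: {power_12:.2f}")
```

Spreading the same effect over fewer total observations drops the power sharply, which is exactly the "too many factors, too few reps" failure mode listed above.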