Introduction to Experimental Design

Foundations of Experimental Thinking

Dr. Samuel B. Fernandes

2026-01-01

Learning Objectives

By the end of this lecture, you should be able to:

  • Explain why experimental design is critical for valid inference in agriculture.
  • Describe the principles of experimentation.
  • Distinguish observational studies from designed experiments.
  • Spot common design mistakes in agricultural research.

When Bad Design Hurts

Caution

Story: A food company tested a new preservative by adding it only to jars processed in the morning shift (cooler temps) and compared shelf life to jars from the hotter afternoon shift. They declared the preservative worked—until it failed in production.

  • Confounding: time-of-day and temperature, not the preservative, drove the shelf-life difference.
  • Cost: recalls, brand damage, wasted production.
  • Lesson: without design, we mistake nuisance variation for treatment effects.

Why Agriculture Needs Good Design

  • Horticulture: Greenhouse bench effects can mask fertilizer differences.
  • Animal Science: Pen location alters airflow/heat; confounds diet trials.
  • Crop Science: Slope, soil moisture, and shade gradients bias yield.
  • Food Science: Batch-to-batch process drift masks formulation effects.

Foundations

What Is an Experiment?

  • Plain English: You intervene (apply treatments) and control how they’re assigned.
  • Formal: Assign treatments to experimental units using a rule (often random), then measure responses.
  • Goal: Isolate treatment effects by managing other sources of variation.

Observational vs. Experimental

  • Observational: Measure what’s already happening (e.g., soil C vs. yield on farmer fields where no treatments were assigned). Limited ability to infer causation.
  • Experimental: You assign fertilizer rates to plots. Randomization lets you claim causation if assumptions hold.
  • Risk: Treating observational patterns as causal → poor recommendations.

Principles of Experimentation

Sir Ronald A. Fisher (1890–1962)

Father of modern DOE

Fisher’s Three Pillars:

  • Replication: Multiple units per treatment to estimate variability and improve precision
  • Randomization: Fair assignment eliminates bias and ensures valid statistical inference
  • Blocking: Group similar units to control known variation before comparing treatments

Fisher also pioneered factorial designs and developed ANOVA for analyzing experimental data.

Consequences of Poor Design

Confounding in Action

Figure 1: Confounding example: temperature trend mistaken for preservative effect

Replication: Biological vs Technical

  • Biological: Different plants/animals/plots → captures real-world variability.
  • Technical: Repeat measurement on the same sample → captures instrument noise.
  • Guideline: Biological replication drives inference; technical replication improves precision but doesn’t replace biological reps.
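As a toy illustration of the distinction (a Python sketch with assumed variance components, not real data), plant-to-plant (biological) variation dwarfs repeat-measurement (technical) noise:

```python
# Toy simulation: the variance components (sigma_bio, sigma_tech) are assumed.
import random
import statistics

rng = random.Random(1)
sigma_bio, sigma_tech = 10.0, 2.0          # assumed biological vs technical SDs

# Three biological reps (plants), each measured twice (technical reps)
plants = [rng.gauss(100.0, sigma_bio) for _ in range(3)]
measurements = [[p + rng.gauss(0.0, sigma_tech) for _ in range(2)]
                for p in plants]

plant_means = [statistics.mean(m) for m in measurements]        # biological spread
within_plant_sds = [statistics.stdev(m) for m in measurements]  # technical noise
print(statistics.stdev(plant_means), within_plant_sds)
```

Note that adding more technical reps only shrinks the within-plant SDs; it cannot tell you how much plants differ from each other, which is what inference about treatments needs.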

Randomization: Practical Moves

  • Shuffle treatment labels before field layout; don’t sort by convenience.
  • Document the random seed and method for reproducibility.
  • Randomization can happen at more than one level of the experiment (e.g., assigning treatments to plots and ordering the measurements).

The way you randomize your treatments defines the design (e.g., CRD, RCBD).
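The shuffle-and-document workflow above can be sketched as a short script (Python; the treatment names, replicate count, and seed are illustrative):

```python
# A minimal CRD sketch: shuffle treatment labels across all plots,
# recording the seed so the layout is reproducible.
import random

def crd_layout(treatments, reps, seed=42):
    """One label per experimental unit, shuffled with a documented seed."""
    labels = list(treatments) * reps   # one label per plot
    rng = random.Random(seed)          # document seed + method for reproducibility
    rng.shuffle(labels)
    return labels

print(crd_layout(["A", "B", "C"], reps=4))   # 12 plots in randomized order
```

Because the seed is fixed, rerunning the script reproduces the same field layout, which is exactly what a reviewer (or future you) needs to audit.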

Blocking: When and Why

  • Use blocks when you can name a major nuisance source (slope, greenhouse bench, barn side).
  • Blocks reduce residual noise → tighter confidence intervals.
  • Don’t over-block: too many small blocks consume degrees of freedom and reduce power for treatment comparisons.
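A toy simulation (Python; the slope effects and noise SD are assumed) shows why naming a nuisance gradient tightens comparisons: centering within blocks strips out the gradient and leaves far less residual noise:

```python
# Toy simulation: one response per plot, no treatment effect,
# just an assumed slope gradient plus random noise.
import random
import statistics

rng = random.Random(7)
block_effects = {"upper": -5.0, "mid": 0.0, "lower": 5.0}   # assumed gradient

obs = {b: [e + rng.gauss(0.0, 1.0) for _ in range(4)]
       for b, e in block_effects.items()}

# Ignoring blocks: the gradient inflates the apparent noise
pooled_sd = statistics.stdev([y for ys in obs.values() for y in ys])

# Comparing within blocks: center each block at its own mean first
within_sd = statistics.stdev([y - statistics.mean(ys)
                              for ys in obs.values() for y in ys])

print(pooled_sd, within_sd)   # within-block SD is smaller
```

The smaller within-block SD is what produces the tighter confidence intervals mentioned above.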

Agricultural Examples (Conceptual)

Field Example: Nitrogen on Wheat

  • Treatments: 0, 60, 120 kg N/ha.
  • Units: 12 plots on a mild slope.
  • Design fix: Block by slope position (upper/mid/lower), randomize N within each block.
  • Without blocking: Higher yields at lower slope falsely inflate N effect.
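The design fix above can be sketched as an RCBD randomization (Python; for simplicity this shows one plot per N rate per slope block, i.e., nine of the twelve plots):

```python
# A minimal RCBD sketch: randomize N rates independently within each
# slope-position block, with a recorded seed.
import random

def rcbd_layout(treatments, blocks, seed=2026):
    """Return a block -> randomized treatment order mapping."""
    rng = random.Random(seed)
    layout = {}
    for block in blocks:
        trts = list(treatments)   # fresh copy so each block gets its own shuffle
        rng.shuffle(trts)         # independent randomization within the block
        layout[block] = trts
    return layout

plan = rcbd_layout([0, 60, 120], ["upper", "mid", "lower"])
for block, trts in plan.items():
    print(block, trts)
```

Every N rate appears once in every slope position, so slope differences cancel out of the treatment comparison instead of inflating the apparent N effect.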

Livestock Example: Diet Trial

  • Treatments: Control vs. protein-supplemented diet.
  • Units: Pens of pigs; ventilation differs by barn side.
  • Design fix: Block by barn side, randomize diets within side; replicate pens per diet.
  • Without blocking: Barn-side effect masquerades as diet effect.

Food Quality Example: Pasteurization Study

  • Treatments: Standard vs. slightly lower temperature.
  • Units: Batches over two days; day-to-day equipment calibration drifts.
  • Design fix: Randomize treatment order within each day; treat day as a block.
  • Without blocking: Day effect confounds temperature effect.

Small-Group Activity (5 minutes)

Identify the Design Flaw

Scenario: A vineyard wants to test a canopy thinning practice. They thin only the sunny (south-facing) rows and compare to unthinned on the shaded (north-facing) rows.

  • In pairs, list two flaws and propose a quick redesign.
  • What would you randomize? What could you block?
  • Share one fix with the class.

Looking Ahead

Summary & Key Takeaways

  • Good design separates treatment effects from nuisance variation.
  • Replication estimates noise; randomization protects against bias; blocking controls known gradients.
  • Observational ≠ experimental; causation needs controlled assignment.
  • Document your design decisions—future you (and reviewers) will thank you.

Next Lecture Preview

  • Wednesday: Intro to AI as a study guide, R + Quarto basics; run your first R chunk.
  • Bring: Laptop with VS Code, R/Quarto installed (or Posit Cloud account).
  • We’ll practice a simple randomization script together.

Motivation

“By the end of this semester, you’ll design better experiments than many published papers.” — Because you’ll plan, randomize, replicate, and block with intent.

Resources

  • Textbook: Oehlert (2010), A First Course in Design and Analysis of Experiments, Chapter 1.
  • Data ideas: agridat package (e.g., barley, wheat, livestock trials) for real agricultural examples.
  • Reading tip: Focus on the why behind replication, randomization, blocking; equations come later.