Resolution of the Jeffreys–Lindley Paradox via Interval Null Hypotheses

This article presents a resolution of the long-debated Jeffreys–Lindley (JL) paradox through the adoption of interval null hypotheses. The JL paradox highlights a divergence between frequentist and Bayesian conclusions when a point null is tested under a continuous model: for the same data, a test statistic held at a fixed significance level calls for rejecting \( H_0 \) no matter how large the sample, while the Bayesian posterior probability of \( H_0 \) can simultaneously approach one as the sample size grows.

The root cause of the paradox is the practice of assigning positive prior mass to a point null hypothesis, an event of measure zero under a continuous model. As the sample size increases, this leads to contradictory outcomes: a frequentist rejects \( H_0 \) on the basis of a small p-value, while a Bayesian favors \( H_0 \) because the prior mass concentrated at \( \theta_0 \) increasingly dominates a diffuse alternative. This article argues that the paradox is a mathematical artifact of modeling \( H_0: \theta = \theta_0 \) as an exact point, not a scientifically meaningful controversy.
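The divergence described above can be made concrete with a standard textbook setup (not taken from the article itself): \( \bar{X} \sim N(\theta, 1/n) \), a point null \( H_0: \theta = 0 \), and an alternative \( H_1: \theta \sim N(0, \tau^2) \). Holding the z-statistic fixed (so the p-value never changes), the Bayes factor in favor of \( H_0 \) grows without bound as \( n \) increases. A minimal sketch, with \( \sigma = 1 \) and \( \tau = 1 \) as illustrative choices:

```python
from math import erfc, exp, sqrt

def bf01_point_null(z, n, tau=1.0):
    """Bayes factor BF01 for H0: theta = 0 vs H1: theta ~ N(0, tau^2),
    given z = xbar * sqrt(n) / sigma, with sigma = 1 assumed."""
    s0 = 1.0 / n               # sampling variance of xbar under H0
    s1 = 1.0 / n + tau ** 2    # marginal variance of xbar under H1
    xbar_sq = z ** 2 / n       # observed xbar^2 implied by the fixed z
    # Ratio of the two normal densities evaluated at xbar:
    return sqrt(s1 / s0) * exp(-0.5 * xbar_sq * (1.0 / s0 - 1.0 / s1))

z = 2.5                         # fixed z-score: the two-sided p-value is the
p_value = erfc(z / sqrt(2.0))   # same (~0.012) for every sample size
for n in (10, 1_000, 100_000):
    # p-value says "reject" throughout; BF01 swings toward H0 as n grows
    print(f"n={n:>7}  p={p_value:.4f}  BF01={bf01_point_null(z, n):.3f}")
```

For small \( n \) the Bayes factor agrees with the p-value (evidence against \( H_0 \)); for large \( n \) it favors \( H_0 \) by an arbitrarily large margin, which is exactly the paradox.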

The proposed solution reformulates the null as an interval: \( H_0: |\theta - \theta_0| \leq \delta \), where \( \delta > 0 \) is contextually meaningful. This eliminates the measure-theoretic inconsistency and allows Bayesian and frequentist inferences to align. As \( n \to \infty \), both approaches converge to the same decision: \( H_0 \) is retained when the true \( \theta \) lies inside the interval and rejected when it lies outside.
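The claimed alignment can be illustrated numerically. Under assumptions chosen here for illustration (normal likelihood with known \( \sigma = 1 \), a flat prior so the posterior is \( N(\bar{x}, 1/n) \), a hypothetical observed mean \( \bar{x} = 0.2 \) lying outside the interval \( |\theta| \leq \delta = 0.1 \), and the frequentist p-value evaluated at the least-favorable boundary \( \theta = \delta \)), both the posterior probability of \( H_0 \) and the p-value shrink toward zero together as \( n \) grows:

```python
from math import erf, sqrt

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def posterior_prob_interval(xbar, n, delta, sigma=1.0):
    """Bayesian P(|theta| <= delta | data) for a N(xbar, sigma^2/n)
    posterior (flat prior on theta)."""
    sd = sigma / sqrt(n)
    return Phi((delta - xbar) / sd) - Phi((-delta - xbar) / sd)

def interval_null_pvalue(xbar, n, delta, sigma=1.0):
    """Frequentist p-value for H0: |theta| <= delta, evaluated at the
    least-favorable boundary theta = delta (assumes xbar > delta)."""
    return 1.0 - Phi((xbar - delta) * sqrt(n) / sigma)

xbar, delta = 0.2, 0.1   # hypothetical observed mean outside the interval null
for n in (25, 400, 10_000):
    print(f"n={n:>6}  P(H0|data)={posterior_prob_interval(xbar, n, delta):.2e}"
          f"  p={interval_null_pvalue(xbar, n, delta):.2e}")
```

Unlike the point-null case, the two measures of evidence now move in the same direction, so no paradoxical reversal occurs.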

The article provides formal proofs under normality assumptions and reviews practical implications. Choosing \( \delta \) is non-trivial and should be based on scientific relevance or contextual effect sizes, as reflected in equivalence testing and ROPE (region of practical equivalence) frameworks.
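As a concrete instance of the ROPE idea mentioned above, one common decision rule compares a 95% credible interval with the region \( [-\delta, +\delta] \): accept \( H_0 \) if the interval falls entirely inside the ROPE, reject if entirely outside, and remain undecided otherwise. A minimal sketch, again assuming a \( N(\bar{x}, \sigma^2/n) \) posterior with illustrative numbers that are not from the article:

```python
from math import sqrt

def rope_decision(xbar, n, delta, sigma=1.0, z=1.96):
    """ROPE-style rule: compare the central 95% credible interval of a
    N(xbar, sigma^2/n) posterior with the region [-delta, +delta]."""
    sd = sigma / sqrt(n)
    lo, hi = xbar - z * sd, xbar + z * sd
    if -delta <= lo and hi <= delta:
        return "accept H0"      # interval entirely inside the ROPE
    if hi < -delta or lo > delta:
        return "reject H0"      # interval entirely outside the ROPE
    return "undecided"          # overlap: collect more data

print(rope_decision(xbar=0.03, n=2_000, delta=0.1))  # tiny effect, large n
print(rope_decision(xbar=0.25, n=2_000, delta=0.1))  # clear effect, large n
print(rope_decision(xbar=0.08, n=50,    delta=0.1))  # small n: inconclusive
```

Note that the rule can genuinely accept \( H_0 \), something a point-null significance test cannot do, which is one practical payoff of choosing a scientifically motivated \( \delta \).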

Ultimately, the interval null approach dissolves the paradox, resolves inconsistencies between inference schools, and enhances the interpretability and reproducibility of hypothesis testing. For full mathematical detail and applied examples, see the complete article in the International Encyclopedia of Statistical Science.