This article introduces the Bayes Factor (BF) as a cornerstone of Bayesian inference for hypothesis testing and model comparison. Emphasizing its role as a ratio of marginal likelihoods under competing hypotheses, the article demonstrates how BFs quantify the evidence favoring one hypothesis over another and highlights how sensitive this inference is to prior assumptions.
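As a minimal sketch of this definition, the snippet below computes a BF for two simple (point) hypotheses about a coin's bias; the data values and the hypothesized probabilities are illustrative assumptions, not taken from the article. For point hypotheses the marginal likelihood reduces to the ordinary likelihood, so no integration over a prior is needed.

```python
import math

def binom_lik(k: int, n: int, p: float) -> float:
    """Likelihood of k successes in n Bernoulli(p) trials."""
    return math.comb(n, k) * p**k * (1 - p) ** (n - k)

# Hypothetical data: 60 heads in 100 coin flips.
k, n = 60, 100

# Two point hypotheses, so each marginal likelihood is just the likelihood.
m0 = binom_lik(k, n, 0.5)   # H0: fair coin
m1 = binom_lik(k, n, 0.6)   # H1: coin biased toward heads at 0.6

bf_10 = m1 / m0             # Bayes Factor in favor of H1 over H0
print(f"BF_10 = {bf_10:.2f}")   # roughly 7.5, modest evidence for H1
```

For composite hypotheses the numerator and denominator would instead be integrals of the likelihood against each hypothesis's prior, which is exactly where the prior sensitivity the article stresses enters.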
A key relationship derived is: \[ \text{Posterior Odds} = \text{Bayes Factor} \times \text{Prior Odds} \] which illustrates how prior beliefs are updated by new data. The article warns against interpreting BF values in isolation: even when the BF indicates strong support for a hypothesis, the prior odds critically shape the posterior conclusions.
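The warning above can be made concrete with a short sketch. The numbers here are assumptions chosen for illustration: a BF of 7.5 in favor of H1, combined with a skeptic's prior odds of 1:100, still leaves H1 improbable.

```python
# Hypothetical inputs: BF_10 = 7.5 for H1, skeptical prior odds of 1:100.
bayes_factor = 7.5
prior_odds = 1 / 100

# The identity: posterior odds = Bayes Factor x prior odds.
posterior_odds = bayes_factor * prior_odds
# Convert odds to a posterior probability for H1.
posterior_prob = posterior_odds / (1 + posterior_odds)

print(f"posterior odds = {posterior_odds:.3f}")   # 0.075
print(f"P(H1 | data)   = {posterior_prob:.3f}")   # about 0.07
```

Despite a BF that Jeffreys-style tables would call "substantial," the posterior probability of H1 is only about 7%, which is the sense in which blind reading of BF values misleads.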
The article also surveys several forms of the Bayes Factor and discusses the limitations of each.
The article critiques over-reliance on BF tables and Jeffreys' interpretation scales, and offers the Rao-Lovric Zero Probability Theorem as a fundamental insight challenging common claims about BFs versus p-values.
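For reference, the kind of lookup table the article cautions against can be sketched as below. The thresholds follow one commonly cited version of Jeffreys' scale; exact cut points and wording vary between sources, and the article's point is precisely that such labels should not substitute for considering the prior odds.

```python
def jeffreys_label(bf: float) -> str:
    """Map a Bayes Factor to one common version of Jeffreys' verbal scale."""
    if bf < 1:
        return "negative (supports the other hypothesis)"
    if bf < 10 ** 0.5:       # ~3.16
        return "barely worth mentioning"
    if bf < 10:
        return "substantial"
    if bf < 10 ** 1.5:       # ~31.6
        return "strong"
    if bf < 100:
        return "very strong"
    return "decisive"

print(jeffreys_label(7.5))   # "substantial"
```

As the posterior-odds identity shows, a "substantial" BF paired with skeptical prior odds can still leave the hypothesis improbable, which is why the article treats such tables as a heuristic rather than a verdict.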
For further depth, see the full encyclopedia entry in the International Encyclopedia of Statistical Science.