Ryan Martin

Inferential models

Abstract

Statistical inference is concerned with quantifying uncertainty about unknowns via data-dependent degree of belief measures. An inferential model (IM) formalizes this as a mapping from the data, the posited statistical model, and any other relevant inputs to a degree of belief measure over the unknowns. Important questions include:

  • what properties should an IM satisfy?
  • what do these properties imply about the mathematical form of the IM output?
  • how can an IM satisfying these properties be constructed?
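
In symbols, the mapping described above might be written as follows (a rough sketch in LaTeX; the notation is illustrative, not taken from any particular paper):

    % An IM maps the observed data x (together with the posited model for X)
    % to a degree of belief measure bel_x defined on subsets of the
    % parameter space Theta:
    x \;\longmapsto\; \mathrm{bel}_x : 2^{\Theta} \to [0,1], \qquad
    \mathrm{bel}_x(A) = \text{degree of belief that } \theta \in A .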

In Part 1 of the course (about 1 lecture), I introduce a validity condition designed to ensure that the belief measure output is reliable in a specific sense, and then I investigate the implications this condition has for the mathematical form of the IM's output. First, I will show that an IM whose output is a probability distribution cannot be valid and, second, I will demonstrate that valid IMs can take the form of consonant belief/plausibility functions. Furthermore, I will give a characterization of frequentist procedures having error rate control guarantees in terms of these same consonant belief/plausibility functions, suggesting that valid IMs can only take this form.
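
For concreteness, one standard formulation of this validity condition, in the spirit of Martin and Liu's definition (notation mine), reads:

    % Validity: for every assertion A \subseteq \Theta and every alpha in (0,1),
    % the belief in A should rarely be large when A is false ...
    \sup_{\theta \notin A} \mathsf{P}_{X \mid \theta}\bigl\{ \mathrm{bel}_X(A) \ge 1 - \alpha \bigr\} \le \alpha ,
    % ... or, equivalently, in terms of the plausibility
    % pl_X(A) = 1 - bel_X(A^c), the plausibility of A should rarely be
    % small when A is true:
    \sup_{\theta \in A} \mathsf{P}_{X \mid \theta}\bigl\{ \mathrm{pl}_X(A) \le \alpha \bigr\} \le \alpha .

The impossibility claim for probabilistic output is closely related to the false confidence theorem of Balch, Martin, and Ferson (2019): roughly, any data-dependent additive probability assigns high belief to certain false assertions with high frequency.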

Having an understanding of what a valid IM looks like, the next question is how to construct one. In Part 2 of the course (about 1.5 lectures), I will focus on the construction presented in the Inferential Models monograph, co-authored with Chuanhai Liu, which is based on the use of random sets. With validity guaranteed by construction, I turn to questions about efficiency. In particular, I will provide details about two fundamental dimension reduction strategies, namely conditioning and marginalization, which lead to significant efficiency gains. Several non-trivial examples will be presented to illustrate the practical utility of this theory.
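
To give a flavor of the random-set construction ahead of the lectures, below is a minimal sketch of the well-known Gaussian-mean example (association X = θ + Z with Z ~ N(0, 1) and the default predictive random set S = {z : |z| ≤ |Z|}); the code and names are illustrative, not taken from the monograph:

    import numpy as np
    from scipy.stats import norm

    def plausibility_contour(x, theta):
        """Plausibility of candidate values theta given one observation x,
        for the association X = theta + Z, Z ~ N(0, 1), using the default
        predictive random set S = {z : |z| <= |Z|}.  For this choice the
        contour has a closed form:
        pl_x(theta) = P(|Z| >= |x - theta|) = 2 * (1 - Phi(|x - theta|))."""
        return 2 * norm.sf(np.abs(x - theta))

    # The 100*(1 - alpha)% plausibility region {theta : pl_x(theta) > alpha}
    # recovers the usual z-interval here, a small instance of the link
    # between valid IMs and frequentist procedures discussed in Part 1.
    x, alpha = 1.3, 0.05
    grid = np.linspace(x - 4.0, x + 4.0, 2001)
    pl = plausibility_contour(x, grid)
    region = grid[pl > alpha]
    print(f"95% plausibility interval: [{region.min():.3f}, {region.max():.3f}]")
    # prints approximately [x - 1.96, x + 1.96]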

Finally, in Part 3 of the course (about 0.5 lectures), I will consider a number of open problems and unanswered questions, including the construction of optimal/most-efficient IMs, the incorporation of partial prior information, the consequences of relaxing the validity condition, and the potential impacts of imprecise probability on the foundations of statistical inference.

Outline

Part 1
1. Setup of the statistical inference problem
2. Probabilistic inference
3. Valid probabilistic inference
4. Can probabilities be valid?
5. If not probabilities, then what can be valid?
6. Characterization of frequentist procedures via plausibility
Part 2
1. IM construction
2. Validity theorem
3. Examples
4. Beyond validity: efficiency
5. Dimension reduction, I: Conditioning
6. Dimension reduction, II: Marginalization
7. Extensions: generalized IMs and prediction
8. Back to the frequentist characterization theorem
Part 3
1. Efficiency and optimality
2. Computations
3. Partial prior information
4. False confidence phenomenon
5. Weakening the validity requirement
6. Fundamental role of imprecise probability
7. Maybe more