Material for the LTTC module `Fundamental Theory of Statistical Inference’.

 

2023-24

 

The course descriptor, with suggested reading, is here. A full set of notes, subject to correction, is available here. The notes contain a set of problems. 

 

The material is arranged in six chapters. Here, in advance, are the slides.

 

Slides for Chapter 1

 

Slides for Chapter 2

 

Slides for Chapter 3

 

Slides for Chapter 4

 

Slides for Chapter 5

 

Slides for Chapter 6

 

Here are the articles by Efron, Berger, and Bayarri and Berger.

 

Here is a version of the problems with solutions. These are certainly subject to correction: please inform me of any errors or other issues.

 

Recommended study/reading will be posted here, week by week.

 

WEEK 1

 

`Decision theory is concerned with an individual decision-maker who tries to make the best decision based on their understanding of the world; game theory is concerned with the interaction between different decision-makers, each of whom is trying to make the best decision based on their beliefs about what the others will choose’.

 

A nice discussion of decision theory/game theory relevant to our focus is given in the book `Statistical Inference’ by Casella & Berger, 1990, Chapter 10. It is possible to think of Nature (the chooser of the parameter value) as an adversary of the statistician (giving a direct game theory interpretation), but Casella and Berger argue this is not reasonable in a statistical problem.

 

It might be worthwhile to study the in-depth treatment of finite decision problems in Young and Smith, or elsewhere, as this gives insight into the different ideas of decision theory, and the importance of the concept of a randomised decision rule.
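The value of a randomised decision rule in a finite decision problem can be seen numerically. The sketch below uses an invented loss structure (the risk values are made up for illustration, not taken from any of the texts): with two states of nature, mixing two non-randomised rules can achieve a strictly smaller maximum risk than either rule alone.

```python
import numpy as np

# A finite decision problem: two states of nature, two non-randomised rules.
# Risk vectors (R(theta1, d), R(theta2, d)) are invented for illustration.
R1 = np.array([0.0, 4.0])   # rule d1: good under theta1, poor under theta2
R2 = np.array([3.0, 1.0])   # rule d2: the reverse

# A randomised rule uses d1 with probability alpha, d2 otherwise;
# its risk vector is the convex combination of the two risk vectors.
def risk(alpha):
    return alpha * R1 + (1 - alpha) * R2

# Minimax criterion: minimise the maximum risk over the two states.
alphas = np.linspace(0, 1, 1001)
max_risks = [max(risk(a)) for a in alphas]
best = alphas[int(np.argmin(max_risks))]

print("minimax randomised rule: alpha =", best)     # close to 1/3
print("its risk vector:", risk(best))               # roughly (2, 2)
print("max risks of d1, d2:", max(R1), max(R2))     # 4.0 and 3.0
```

The minimax randomised rule equalises the risks across the two states, achieving maximum risk about 2, which neither non-randomised rule can match.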

 

Chapter 1 of Cox’s book `Principles of Statistical Inference’ would be worth reading. Chapter 5 of that book is very nice, but it might make sense to read it in a week or two, when we have covered further material.

 

The Efron article is quite accessible, even at this stage. I particularly like the `statistical triangle’. It illustrates how there are no firm boundaries between the different paradigms of inference. Where does your Ph.D. work lie on the triangle? Do you agree with Efron’s positioning of different statistical methods?

 

Slides containing detailed working of some key problems which illustrate various ideas are here.

 

WEEK 2

 

I enjoyed putting together the data-analytic examples in Young and Smith. They flesh out shrinkage and empirical Bayes ideas, and you may find them illuminating.

 

A very nice commentary on statistical inference, with a fairly contemporary perspective, is the paper by Reid and Cox.

 

This might be a good time to tackle Cox’s Chapter 5, and the Bayarri and Berger article, which is very thought-provoking. You might have fun reading Bayes, T. `An essay towards solving a problem in the doctrine of chances’, Phil. Trans. Roy. Soc., 53, 370-418 (1763), which is not necessarily expressed in a way that will be familiar to a modern reader.

 

A nice discussion of the geometrical interpretation of Stein shrinkage is given by Brown and Zhao.
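The shrinkage phenomenon itself is easy to demonstrate by simulation. Here is a minimal sketch (dimension, true mean, and number of replications chosen arbitrarily for illustration) comparing the squared-error risk of the usual estimator X with that of the James-Stein estimator:

```python
import numpy as np

rng = np.random.default_rng(0)
p, reps = 10, 5000
theta = np.zeros(p)            # true mean vector (arbitrary choice)

X = rng.normal(theta, 1.0, size=(reps, p))   # X ~ N_p(theta, I)

# James-Stein estimator: shrink X towards the origin.
norms2 = np.sum(X**2, axis=1, keepdims=True)
js = (1 - (p - 2) / norms2) * X

risk_mle = np.mean(np.sum((X - theta)**2, axis=1))   # about p = 10
risk_js = np.mean(np.sum((js - theta)**2, axis=1))   # about 2 at theta = 0

print("risk of X:", risk_mle)
print("risk of James-Stein:", risk_js)
```

At theta = 0 the risk reduction is most dramatic (from p to roughly 2), but the James-Stein estimator dominates X for every theta whenever p is at least 3.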

 

The slides containing detailed working of key problems are here.

 

 

WEEK 3

 

I will aim next week to discuss these problems. Next week I will also aim to cover initial elements of the final Chapter 6. Chapters 2 to 4 of Cox would make very useful background reading, as they might help to make sense of things we introduced this week!

 

Birnbaum, A. (1962). `On the foundations of statistical inference (with discussion)’, JASA, 57, 269-326, is worth reading. It proves that the Sufficiency Principle [if T is a sufficient statistic and y and z are data samples with T(y)=T(z), then identical inferences about the parameter of interest should be drawn from y and z], together with some form of Conditionality Principle, implies the Strong Likelihood Principle, which is essentially incompatible with non-Bayesian statistics. The result remains much debated: see Cox, Chapter 4.
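The content of the Sufficiency Principle can be seen concretely in the Bernoulli case, where the total number of successes is sufficient: two samples with the same total yield the same likelihood function, hence the same likelihood-based inference. A small sketch (samples invented for illustration):

```python
import numpy as np

theta = np.linspace(0.01, 0.99, 99)   # grid of parameter values

def bernoulli_lik(sample, theta):
    s, n = sum(sample), len(sample)
    return theta**s * (1 - theta)**(n - s)

# Two different samples with the same sufficient statistic T = sum = 3.
y = [1, 1, 1, 0, 0, 0, 0, 0]
z = [0, 0, 1, 0, 1, 0, 1, 0]

ly, lz = bernoulli_lik(y, theta), bernoulli_lik(z, theta)

# The likelihood functions coincide (the order of the observations does
# not affect the Bernoulli likelihood), so the MLE and any other
# likelihood-based inference agree for the two samples.
print("max |ly - lz| =", np.max(np.abs(ly - lz)))
print("MLE from y:", theta[np.argmax(ly)], " MLE from z:", theta[np.argmax(lz)])
```

Both samples give the MLE near 3/8, as they must: any inference violating this agreement would violate the Sufficiency Principle.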

 

 

 

 

WEEK 4

 

Next week I will complete the material on frequentist theory, of which there is quite a bit remaining.

 

We have now covered enough of the frequentist approach to testing for both Berger articles to be fully comprehensible.

 

Problem 7.4 yields a very interesting comparison between Fisherian and frequentist analyses, and I hope to be able to look at it next week.

 

A very famous discussion of ancillary statistics is given by Buehler, R.J. `Some ancillary statistics and their properties’, JASA, 77, 581-89 (1982). It stresses the question of when an ancillary statistic functions as a `precision index’. A very lucid account of conditional inference is given by McCullagh, P. `Conditional inference and Cauchy models’, Biometrika, 79, 247-57 (1992).
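The idea of an ancillary statistic as a precision index can be illustrated by the classic `two measuring instruments’ set-up: a fair coin chooses between an accurate and an inaccurate instrument, the coin’s distribution is free of the parameter, yet the coin tells us how precise our observation actually was. A simulation sketch (means, standard deviations, and replication count invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
reps = 200_000
mu = 0.0   # true mean (arbitrary)

# A fair coin A (the ancillary) picks an accurate instrument (sd = 1)
# or a noisy one (sd = 10); the distribution of A does not involve mu,
# but A indexes the precision of the resulting observation.
A = rng.integers(0, 2, size=reps)
sd = np.where(A == 0, 1.0, 10.0)
X = rng.normal(mu, sd)

# Unconditional variance averages the two precisions; conditioning on A
# reports the precision of the instrument actually used.
print("unconditional var:", X.var())       # about (1 + 100)/2 = 50.5
print("var given A=0:", X[A == 0].var())   # about 1
print("var given A=1:", X[A == 1].var())   # about 100
```

The unconditional variance is relevant to neither realised experiment, which is the usual argument for conditioning on the ancillary.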

 

WEEK 5

 

I hope people might consider reading Chapters 7 and 8 of Young and Smith, which demonstrate how Fisherian ideas lie at the heart of commonly used, likelihood-based, inference procedures, which are probably more important in practice than optimal frequentist methods. Here is the set of problems related to frequentist theory.

 

A very interesting discussion of completeness and related ideas is given by Lehmann, E. `An interpretation of completeness and Basu’s Theorem’, JASA, 76, 315-320 (1981).
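Basu’s Theorem is easy to check by simulation in the normal case: for a N(mu, 1) sample, the sample mean is complete sufficient and the sample variance is ancillary, so the theorem says they are independent. A quick sketch (sample size, mean, and replication count chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(2)
reps, n = 100_000, 5
X = rng.normal(3.0, 1.0, size=(reps, n))   # mu = 3 chosen arbitrarily

xbar = X.mean(axis=1)                      # complete sufficient statistic
s2 = X.var(axis=1, ddof=1)                 # ancillary: distribution free of mu

# Basu's Theorem: xbar and s2 are independent, so in particular
# their sample correlation should be near zero.
print("corr(xbar, s2):", np.corrcoef(xbar, s2)[0, 1])
```

Of course zero correlation is weaker than independence; the simulation is only a sanity check on one consequence of the theorem, not a proof.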

 

All the problems should now be accessible. Feel free to e-mail me with any queries.

 

Previous exams, for practice:

 

The 2017 exam is here, with its solution.

 

The 2018 exam is here, with its solution.

 

The 2019 exam is here, with its solution.

 

The 2020 exam is here, with its solution.

 

The 2021 exam is here, with its solution.

 

The 2022 exam is here, with its solution.

 

The 2023 exam is here, with its solution.

 

The 2024 exam is here, with its solution.