# Emanuel Winterfors

### My PhD topic: Nonlinear Experimental Design

Bayesian experimental design provides a general probability-theoretic framework from which other theories of experimental design can be derived. It uses Bayesian inference to interpret the observations or data acquired during the experiment, which makes it possible to account both for any prior knowledge of the parameters to be determined and for uncertainties in the observations.

In Bayesian experimental design, the aim is to maximize the expected utility of the experiment's outcome. The utility is most commonly defined as a measure of the accuracy of the information provided by the experiment (e.g. the Shannon information or the negative variance), but it may also involve factors such as the financial cost of performing the experiment. Which design is optimal depends on the particular utility criterion chosen.

### Relations to more specialized optimal design theory

### Notation

- $\theta$: parameters to be determined
- $y$: observation or data
- $\xi$: design
- $p(y|\theta,\xi)$: PDF for making observation $y$, given parameter values $\theta$ and design $\xi$
- $p(\theta)$: prior PDF
- $p(y|\xi)$: marginal PDF in observation space
- $p(\theta | y, \xi)$: posterior PDF
- $U(\xi)$: utility of the design $\xi$
- $U(y, \xi)$: utility of the experiment outcome after observation $y$ with design $\xi$

#### Linear theory
If the model is linear, the prior probability density function (PDF) is homogeneous, and the observational errors are normally distributed, the theory simplifies to the classical theory of optimal experimental design.
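In this linear-Gaussian case the posterior covariance does not depend on the observed data, so candidate designs can be ranked before any data are collected. The following sketch illustrates this with a classical D-optimality criterion (log-determinant of the posterior precision); the one-parameter model, the sensitivity matrices `G_a`/`G_b`, and the covariance values are hypothetical choices for illustration, not taken from the text:

```python
import numpy as np

def posterior_covariance(G, C_prior, C_noise):
    """Posterior covariance for a linear-Gaussian model y = G @ theta + e.

    Note it does not depend on the actual observation y, which is why
    linear designs can be compared before the experiment is performed.
    """
    info = G.T @ np.linalg.inv(C_noise) @ G + np.linalg.inv(C_prior)
    return np.linalg.inv(info)

def d_criterion(G, C_prior, C_noise):
    """D-optimality: log-determinant of the posterior precision (larger is better)."""
    return -np.linalg.slogdet(posterior_covariance(G, C_prior, C_noise))[1]

# Hypothetical example: two one-observation designs for a scalar parameter.
C_prior = np.array([[1.0]])   # prior variance of theta (assumed)
C_noise = np.array([[0.1]])   # observational noise variance (assumed)
G_a = np.array([[0.5]])       # design A: weak sensitivity to theta
G_b = np.array([[2.0]])       # design B: strong sensitivity to theta

# The more sensitive design yields the tighter posterior.
assert d_criterion(G_b, C_prior, C_noise) > d_criterion(G_a, C_prior, C_noise)
```

The sign convention makes `d_criterion` something to maximize, matching the expected-utility framing used throughout the text.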

#### Approximate normality
In numerous publications on Bayesian experimental design, it is (often implicitly) assumed that all posterior PDFs will be approximately normal. This allows the expected utility to be calculated using linear theory, averaging over the space of model parameters, an approach reviewed in Chaloner & Verdinelli (1995). Caution must, however, be taken when applying this method, since approximate normality of all possible posteriors is difficult to verify, even in cases of normal observational errors and a uniform prior PDF.

### Mathematical formulation

Given a vector $\theta$ of parameters to determine, a prior PDF $p(\theta)$ over those parameters and a PDF $p(y|\theta,\xi)$ for making observation $y$, given parameter values $\theta$ and design $\xi$, the posterior PDF can be calculated using Bayes' theorem

 $p(\theta | y, \xi) = \frac{p(y | \theta, \xi) p(\theta)}{p(y | \xi)} \,$ , (1)

where $p(y|\xi)$ is the marginal probability density in observation space

 $p(y|\xi) = \int{p(\theta)p(y|\theta,\xi)d\theta}\,$ . (2)
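Equations (1) and (2) can be evaluated numerically by discretizing the parameter space, replacing the integrals with quadrature sums. A minimal sketch, assuming a standard normal prior and a hypothetical linear-Gaussian measurement model (the model, the design value, and the observed datum are all illustrative assumptions):

```python
import numpy as np

# Discretize theta on a grid fine enough for simple trapezoidal quadrature.
theta = np.linspace(-5, 5, 1001)
prior = np.exp(-0.5 * theta**2)
prior /= np.trapz(prior, theta)               # p(theta), normalized on the grid

def likelihood(y, theta, xi):
    # Assumed model: observe xi * theta with unit-variance Gaussian noise.
    return np.exp(-0.5 * (y - xi * theta)**2) / np.sqrt(2 * np.pi)

y_obs, xi = 1.3, 2.0                          # hypothetical observation and design
joint = likelihood(y_obs, theta, xi) * prior  # numerator of Eq. (1)
evidence = np.trapz(joint, theta)             # Eq. (2): p(y|xi)
posterior = joint / evidence                  # Eq. (1): p(theta|y, xi)

# Sanity check: the posterior integrates to one.
assert abs(np.trapz(posterior, theta) - 1.0) < 1e-6
```

For this conjugate Gaussian example the grid result can be checked against the closed-form posterior mean $\xi y/(\xi^2+1) = 0.52$.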

The expected utility of an experiment with design $\xi$ can then be defined as

 $U(\xi)=\int{p(y|\xi)U(y,\xi)dy}\,$ , (3)

where $U(y,\xi)$ is some real-valued functional of the posterior PDF $p(\theta | y, \xi)$ after making observation $y$ using an experiment design $\xi$.
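Definition (3) can be approximated by Monte Carlo: draw $\theta$ from the prior, simulate $y$ from $p(y|\theta,\xi)$, and average the outcome utility over these samples. The sketch below uses the negative posterior variance as $U(y,\xi)$ and a hypothetical nonlinear forward model $\sin(\xi\theta)$ with Gaussian noise (both illustrative assumptions, not from the text); each posterior is evaluated on a grid via Bayes' theorem (1):

```python
import numpy as np

rng = np.random.default_rng(0)
theta_grid = np.linspace(-4, 4, 801)
prior = np.exp(-0.5 * theta_grid**2)          # standard normal prior (assumed)
prior /= np.trapz(prior, theta_grid)
sigma = 0.3                                   # assumed observational noise std

def forward(theta, xi):
    # Hypothetical nonlinear model for illustration.
    return np.sin(xi * theta)

def utility_of_outcome(y, xi):
    """U(y, xi): negative variance of the grid posterior p(theta|y, xi)."""
    like = np.exp(-0.5 * ((y - forward(theta_grid, xi)) / sigma)**2)
    post = like * prior
    post /= np.trapz(post, theta_grid)
    mean = np.trapz(theta_grid * post, theta_grid)
    return -np.trapz((theta_grid - mean)**2 * post, theta_grid)

def expected_utility(xi, n=2000):
    """Eq. (3): average U(y, xi) over y ~ p(y|xi), sampled through the prior."""
    th = rng.standard_normal(n)               # theta ~ p(theta)
    y = forward(th, xi) + sigma * rng.standard_normal(n)
    return np.mean([utility_of_outcome(yi, xi) for yi in y])
```

Maximizing `expected_utility` over `xi` (e.g. on a coarse grid of candidate designs) then gives a brute-force version of the design problem described in the text.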

### Gain in Shannon information as utility

Suppose the utility is defined as the prior-posterior gain in Shannon information (differential entropy):

 $U(y, \xi) = \int{\log(p(\theta | y, \xi))p(\theta | y, \xi)d\theta} - \int{\log(p(\theta))p(\theta)d\theta} \,$ . (4)

Lindley (1956) noted that the expected utility will then be coordinate-independent and can be written in two forms:

 $U(\xi) = \int{\int{\log(p(\theta | y,\xi))p(\theta, y | \xi)d\theta}dy} - \int{\log(p(\theta))p(\theta)d\theta}\,$
 $\phantom{U(\xi)} = \int{\int{\log(p(y | \theta,\xi))p(\theta, y | \xi)dy}d\theta} - \int{\log(p(y| \xi))p(y| \xi)dy}\,$ ,

of which the latter can be evaluated without having to evaluate individual posterior PDFs
$p(\theta | y,\xi)$ for all possible observations $y$. Note that the first term of the second form is a constant as long as the observational uncertainty does not depend on the design $\xi$; maximizing the expected information gain is then equivalent to maximizing the entropy of the marginal distribution $p(y|\xi)$. Several authors have considered numerical techniques for evaluating and optimizing this criterion, e.g. van den Berg, Curtis & Trampert (2003) and Ryan (2003).
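The second form suggests a simple nested Monte Carlo estimator: the first term is an average of log-likelihoods over joint samples $(\theta, y)$, while the marginal $p(y|\xi)$ in the second term is itself estimated by averaging the likelihood over an inner set of prior samples. A sketch under an assumed cubic forward model with Gaussian noise (the model, noise level, and sample sizes are illustrative, not from the text):

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 0.5                                    # assumed observational noise std

def log_lik(y, theta, xi):
    # Hypothetical forward model: y = xi * theta**3 + Gaussian noise.
    return (-0.5 * ((y - xi * theta**3) / sigma)**2
            - np.log(sigma * np.sqrt(2.0 * np.pi)))

def expected_information_gain(xi, n_outer=500, n_inner=500):
    """Nested Monte Carlo estimate of Lindley's second form:
    E[log p(y|theta, xi)] - E[log p(y|xi)], with theta ~ N(0, 1) prior."""
    th = rng.standard_normal(n_outer)          # outer: theta ~ p(theta)
    y = xi * th**3 + sigma * rng.standard_normal(n_outer)
    th_in = rng.standard_normal(n_inner)       # inner samples for p(y|xi)
    gains = []
    for yi, ti in zip(y, th):
        # log p(y|xi) ~= log of the likelihood averaged over inner prior draws
        log_marg = np.log(np.mean(np.exp(log_lik(yi, th_in, xi))) + 1e-300)
        gains.append(log_lik(yi, ti, xi) - log_marg)
    return np.mean(gains)
```

A more sensitive design (larger $|\xi|$ here) should produce a larger expected information gain; note that naive nested estimators of this kind are biased for finite inner sample sizes, which is part of what the numerical literature cited above addresses.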

© 2008 Emanuel Winterfors