Gaussian models
Version of 7 June 2013 at 14:27
The normal distribution
Gaussian models have several advantages, including the capacity of describing with ease both the predicted value of a random variable and its fluctuations around this value. Indeed, if we consider a Gaussian random variable $\psi$ with mean $\mu$ and standard deviation $\omega$, we can work with two entirely equivalent mathematical representations:
\( \begin{eqnarray}
\psi &\sim& {\cal N}(\mu , \omega^2)
\end{eqnarray}\)

(1) 
\( \begin{eqnarray}
\psi &=& \mu + \eta, \quad {\rm where }\ \quad \ \eta \sim {\cal N}(0,\omega^2) .
\end{eqnarray}\)

(2) 
The form (1) provides an explicit description of the distribution of $\psi$ from which we can deduce the pdf and other characteristics such as the median, mode and quantiles. The figure below shows the pdf of a normal distribution with mean $\mu$ and standard deviation $\omega$. Each vertical band contains 10% of the distribution.
The ${\cal N}(\mu,\omega^2)$ distribution

This type of graphical representation is powerful and helps us to better visualize the types of values the random variable can take and those values that are more likely than others.
Examples of normal distributions with various parameters are shown in the next figure.
Normal distributions

Representation (2) lets us separate the random and non-random components of $\psi$. If we define the predicted value as the value obtained in the absence of randomness ($\eta=0$), we get $\hat{\psi}=\mu$. In the particular case of a normal distribution, this predicted value is the mean, median and mode of $\psi$. We can therefore rewrite equations (1) and (2) using $\hat{\psi}$:

\( \begin{eqnarray}
\psi &\sim& {\cal N}(\hat{\psi} , \omega^2)
\end{eqnarray}\)

\( \begin{eqnarray}
\psi &=& \hat{\psi} + \eta, \quad {\rm where } \quad \eta \sim {\cal N}(0,\omega^2) .
\end{eqnarray}\)
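Representation (2) translates directly into simulation code. The minimal sketch below (Python, with hypothetical values for $\hat{\psi}$ and $\omega$ chosen purely for illustration) draws $\psi$ as $\hat{\psi}+\eta$ and checks that the empirical mean and standard deviation match $\hat{\psi}$ and $\omega$:

```python
import random
import statistics

# Hypothetical parameter values, for illustration only
psi_hat = 1.5   # predicted value (here also mean, median and mode)
omega = 0.3     # standard deviation of the random effect eta

random.seed(0)
n = 100_000

# Representation (2): psi = psi_hat + eta, with eta ~ N(0, omega^2)
samples = [psi_hat + random.gauss(0.0, omega) for _ in range(n)]

empirical_mean = statistics.fmean(samples)
empirical_sd = statistics.stdev(samples)
```

Sampling via representation (1), i.e., drawing directly from ${\cal N}(\hat{\psi},\omega^2)$ with `random.gauss(psi_hat, omega)`, gives exactly the same distribution.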
Extensions of the normal distribution
Clearly, not all distributions are Gaussian. To begin with, the normal distribution has support $\Rset$, whereas many parameters take values in restricted ranges: some take only positive values (e.g., concentrations and volumes) and others are restricted to bounded intervals (e.g., bioavailability).
Furthermore, the Gaussian distribution is symmetric, which is not a property shared by all distributions. One way to extend the use of Gaussian distributions is to consider that some transform of the parameters we are interested in is Gaussian, i.e., assume the existence of a monotonic function $h$ such that $h(\psi)$ is normally distributed. Then, there exists some $\mu$ and $\omega$ such that $h(\psi) \sim {\cal N}(\mu , \omega^2)$.
For a given transformation $h$, we can parametrize using $\hat{\psi}$, the predicted value of $\psi$. Indeed, the predicted value of $h(\psi)$ is $\mu=h(\hat{\psi})$, and
\(\begin{eqnarray}
h(\psi) &\sim& {\cal N}(h(\hat{\psi}) , \omega^2)
\end{eqnarray}\)

(3) 
\(\begin{eqnarray}
h(\psi) &=& h(\hat{\psi}) + \eta , \quad {\rm where } \quad \ \eta \sim {\cal N}(0,\omega^2).
\end{eqnarray}\)

(4) 
It is possible to derive the pdf of $\psi$ from (4):
\(
\ppsi(\psi)=\displaystyle{ \frac{h^\prime(\psi)}{\sqrt{2 \pi \omega^2} } } \ \exp\left\{-\displaystyle{ \frac{1}{2 \, \omega^2} } \left(h(\psi) - h(\hpsi)\right)^2 \right\}. \)

(5) 
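Equation (5) can be checked numerically. The sketch below (Python, with hypothetical values of $\psi$, $\hat{\psi}$ and $\omega$) implements the pdf of a transformed Gaussian variable and verifies that the choice $h=\log$, $h^\prime(\psi)=1/\psi$ recovers the closed-form lognormal pdf:

```python
import math

def transformed_normal_pdf(psi, h, h_prime, psi_hat, omega):
    # pdf of psi when h(psi) ~ N(h(psi_hat), omega^2), as in equation (5)
    z = h(psi) - h(psi_hat)
    return (h_prime(psi) / math.sqrt(2.0 * math.pi * omega**2)
            * math.exp(-z**2 / (2.0 * omega**2)))

# Hypothetical values, for illustration only
psi, psi_hat, omega = 2.0, 1.5, 0.4

# h = log: equation (5) should reduce to the lognormal pdf
p_transformed = transformed_normal_pdf(psi, math.log, lambda x: 1.0 / x,
                                       psi_hat, omega)
p_lognormal = (math.exp(-(math.log(psi) - math.log(psi_hat))**2
                        / (2.0 * omega**2))
               / (psi * omega * math.sqrt(2.0 * math.pi)))
```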
Let us now see some examples of transformed normal pdfs:
Lognormal distribution
The lognormal distribution is widely used for describing the distribution of PK/PD parameters. This choice is usually justified by the fact that it ensures non-negative values, and rarely because it has been shown to properly describe the population distribution of the parameter of interest.
Let $\psi$ be a lognormally distributed random variable with parameters $(\mu,\omega)$:

\( \begin{eqnarray}
\log(\psi) &\sim& {\cal N}(\mu , \omega^2) .
\end{eqnarray}\)

This distribution can also be parameterized with $(m,\omega)$, where $m = e^{\mu} = \hat{\psi}$. Then, $\log(\psi) \sim {\cal N}( \log(m), \omega^2)$ and

\( \begin{eqnarray}
\psi &=& m \, e^{\eta}, \quad {\rm where } \quad \eta \sim {\cal N}(0,\omega^2) .
\end{eqnarray}\)
We display below some lognormal pdfs obtained with different parameters $(m,\omega)$.
Lognormal distributions

We see that for a given standard deviation $\omega$, the pdfs obtained for different $m$ are simply rescaled.
On the other hand, for a given $m$ the asymmetry of the distribution increases when the standard deviation $\omega$ increases.
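A quick way to see that $m$ is both the predicted value and the median, while the mean drifts above $m$ as the asymmetry grows, is to simulate. The sketch below uses hypothetical values of $(m,\omega)$:

```python
import math
import random
import statistics

random.seed(1)
m, omega = 1.5, 0.5   # hypothetical (m, omega), for illustration only
n = 100_000

# log(psi) ~ N(log m, omega^2)  <=>  psi = m * exp(eta), eta ~ N(0, omega^2)
samples = [m * math.exp(random.gauss(0.0, omega)) for _ in range(n)]

sample_median = statistics.median(samples)  # the median of the lognormal is m
sample_mean = statistics.fmean(samples)     # the mean, m*exp(omega^2/2), exceeds m
```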
Power-normal (or Box-Cox) distribution
This is the distribution of a random variable $\psi$ for which the Box-Cox transformation of $\psi$,

\( \begin{eqnarray}
h(\psi) &=& \displaystyle{ \frac{\psi^{\lambda}-1}{\lambda} }
\end{eqnarray}\)

(with $\lambda > 0$) follows a normal distribution ${\cal N}( \mu, \omega^2)$ truncated so that $\lambda \, h(\psi)+1>0$, i.e., $\psi>0$. It therefore takes its values in $(0,+\infty)$. The distribution converges to the lognormal distribution when $\lambda \to 0$ and to a truncated normal distribution when $\lambda \to 1$. The main interest of the power-normal distribution is its ability to represent a distribution "between" the lognormal distribution and the normal distribution.
Here, $m = \hat{\psi} = (\lambda \mu + 1)^{1/\lambda}$. We display below several power-normal pdfs obtained with various parameter sets $(\lambda,m,\omega)$.
Power-normal distributions

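The Box-Cox transform and its inverse are straightforward to implement. The sketch below (with hypothetical $\lambda$ and $\mu$) checks the round trip, the relation $m=(\lambda\mu+1)^{1/\lambda}$, and the $\lambda \to 0$ limit towards the log transform:

```python
import math

def boxcox(psi, lam):
    # Box-Cox transform h(psi) = (psi^lam - 1) / lam, for lam > 0
    return (psi**lam - 1.0) / lam

def boxcox_inv(h, lam):
    # inverse transform, defined when lam*h + 1 > 0 (equivalently psi > 0)
    return (lam * h + 1.0)**(1.0 / lam)

lam, mu = 0.5, 1.2            # hypothetical parameters, for illustration only
m = boxcox_inv(mu, lam)       # predicted value (lam*mu + 1)^(1/lam) = 1.6^2 = 2.56
roundtrip = boxcox(boxcox_inv(0.7, lam), lam)

# for small lam, the transform is close to the log
near_log = boxcox(2.0, 1e-8)
```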
Logit-normal and probit-normal distributions
A random variable $\psi$ with a logit-normal distribution takes its values in $(0,1)$. The logit of $\psi$ is normally distributed, i.e.,

\( \begin{eqnarray}
\logit(\psi) &=& \log \left( \displaystyle{ \frac{\psi}{1-\psi} } \right) \sim {\cal N}( \mu, \omega^2) .
\end{eqnarray}\)

This means that $\mu=\logit(m)$.
A random variable $\psi$ with a probit-normal distribution also takes its values in $(0,1)$. Then, the probit of $\psi$ is normally distributed:

\( \begin{eqnarray}
\probit(\psi) &=& \Phi^{-1}(\psi) \sim {\cal N}( \mu, \omega^2) ,
\end{eqnarray}\)

where $\Phi$ is the cumulative distribution function of the standard normal distribution. This means that $\mu=\probit(m)$.
We can see in the figures below that the pdfs of the logit-normal and probit-normal distributions with the same $m$ and well-chosen $\omega$ are very similar. Thus, these two distributions can be used interchangeably for modeling the distribution of a parameter that takes its values in $(0,1)$.
Logit-normal and probit-normal distributions

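Both transformations are available from the Python standard library (`statistics.NormalDist` provides $\Phi$ and $\Phi^{-1}$). The sketch below, with hypothetical $(m,\omega)$, samples from both distributions and checks that the values stay in $(0,1)$ and that the median of each is $m$:

```python
import math
import random
import statistics
from statistics import NormalDist

random.seed(2)
m, omega = 0.3, 0.5   # hypothetical (m, omega), for illustration only
n = 100_000
nd = NormalDist()     # standard normal: nd.cdf = Phi, nd.inv_cdf = probit

def logit(p):
    return math.log(p / (1.0 - p))

def inv_logit(x):
    return 1.0 / (1.0 + math.exp(-x))

# logit-normal: logit(psi) ~ N(logit(m), omega^2)
psi_logit = [inv_logit(logit(m) + random.gauss(0.0, omega)) for _ in range(n)]
# probit-normal: probit(psi) ~ N(probit(m), omega^2)
psi_probit = [nd.cdf(nd.inv_cdf(m) + random.gauss(0.0, omega)) for _ in range(n)]

# both are supported on (0,1), and both have median m
median_logit = statistics.median(psi_logit)
median_probit = statistics.median(psi_probit)
```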
Logit and probit transformations can be generalized to any interval $(a,b)$ by setting

\( \begin{eqnarray}
\psi &=& a + (b-a) \, \tilde{\psi} ,
\end{eqnarray}\)

where $\tilde{\psi}$ is a random variable that takes its values in $(0,1)$ with a logit-normal (or probit-normal) distribution.
Furthermore, it is easy to show that the probit-normal distribution with $m=0.5$ and $\omega=1$ is the uniform distribution on $(0,1)$. Thus, any uniform distribution can easily be derived from the probit-normal distribution.
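Indeed, $m=0.5$ gives $\mu=\probit(0.5)=0$, so $\probit(\psi) \sim {\cal N}(0,1)$ and $\psi=\Phi(Z)$ with $Z$ standard normal, which is uniform on $(0,1)$ by the probability integral transform. A minimal numerical check:

```python
import random
import statistics
from statistics import NormalDist

random.seed(3)
n = 100_000
Phi = NormalDist().cdf   # standard normal cdf

# probit-normal with m = 0.5, omega = 1: probit(psi) ~ N(0, 1),
# so psi = Phi(Z) with Z standard normal, which is uniform on (0, 1)
psi = [Phi(random.gauss(0.0, 1.0)) for _ in range(n)]

mean_psi = statistics.fmean(psi)      # uniform(0,1) has mean 1/2
var_psi = statistics.variance(psi)    # and variance 1/12
```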
Extension to transformed Student's $t$-distributions
These extensions (log-$t$, power-$t$, etc.) can be obtained simply by replacing the normal distribution of the random effects with a Student $t$-distribution. Such extensions can be useful for modeling heavy-tailed distributions. Several Student's $t$-distributions with different degrees of freedom (d.f.) are displayed below. The Student's $t$-distribution converges to the normal distribution as the d.f. increases, whereas heavy tails are obtained for small d.f.
Standardized normal and Student's $t$ probability distribution functions

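These properties can be illustrated numerically. The sketch below draws Student's $t$ variables as $Z/\sqrt{V/\nu}$ with $V \sim \chi^2_\nu$ (a standard construction, using only the standard library), checks the variance $\nu/(\nu-2)$ for small and large d.f., and builds a log-$t$ sample with hypothetical $(m,\omega)$:

```python
import math
import random
import statistics

random.seed(4)

def student_t(nu):
    # one draw from Student's t with nu d.f.: Z / sqrt(V/nu), V ~ chi^2 with nu d.f.
    z = random.gauss(0.0, 1.0)
    v = sum(random.gauss(0.0, 1.0)**2 for _ in range(nu))
    return z / math.sqrt(v / nu)

n = 20_000
t5 = [student_t(5) for _ in range(n)]      # heavy tails: variance nu/(nu-2) = 5/3
t100 = [student_t(100) for _ in range(n)]  # close to N(0, 1) for large d.f.

var_t5 = statistics.variance(t5)
var_t100 = statistics.variance(t100)

# a log-t variable, a heavy-tailed alternative to the lognormal
# (hypothetical m = 1.5, omega = 0.3)
log_t_sample = [1.5 * math.exp(0.3 * student_t(5)) for _ in range(1000)]
```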
$\mlxtran$ for the Gaussian model