


Introduction

A model built for real-world applications can involve various types of variables, such as measurements, individual and population parameters, covariates, design, etc. The model allows us to represent relationships between these variables.

If we consider things from a probabilistic point of view, some of the variables will be random, so the model becomes a probabilistic one, representing the joint distribution of these random variables.

Defining a model therefore means defining a joint distribution. The hierarchical structure of the model then allows it to be decomposed into submodels, i.e., the joint distribution is decomposed into a product of conditional distributions.

Tasks such as estimation, model selection, simulation and optimization can then be expressed as specific ways of using this probability distribution.

We will illustrate this approach starting with a very simple example that we will gradually make more sophisticated. Then we will see in various situations what can be defined as the model and what its inputs are.

An illustrative example

A model for the observations of a single individual

Let $y=(y_j, 1\leq j \leq n)$ be a vector of observations obtained at times $\vt=(t_j, 1\leq j \leq n)$. We consider that the $y_j$ are random variables and we denote $\qy$ the distribution (or pdf) of $y$. If we assume a parametric model, then there exists a vector of parameters $\psi$ that completely defines the distribution of $y$.

We can then explicitly represent this dependency with respect to $\psi$ by writing $\qy( \, \cdot \, ; \psi)$ for the pdf of $y$.

If we wish to be even more precise, we can also make it clear that this distribution is defined for a given design, i.e., a given vector of times $\vt$, and write $\qy(\, \cdot \, ; \psi,\vt)$ instead.

By convention, the variables which are before the symbol ";" are random variables. Those that are after the ";" are non-random parameters or variables. When there is no risk of confusion, the non-random terms can be left out of the notation.

- In this context, the model is the distribution of the observations $\qy(\, \cdot \, ; \psi,\vt)$.
- The inputs of the model are the parameters $\psi$ and the design $\vt$.

Example

500 mg of a drug is given by intravenous bolus to a patient at time 0. We assume that the evolution of the plasma concentration of the drug over time is described by the pharmacokinetic (PK) model

$$f(t;V,k) = \displaystyle{ \frac{500}{V} }e^{-k \, t} ,$$

where $V$ is the volume of distribution and $k$ the elimination rate constant. The concentration is measured at times $(t_j, 1\leq j \leq n)$ with additive residual errors:

$$y_j = f(t_j;V,k) + e_j , \quad 1 \leq j \leq n .$$

Assuming that the residual errors $(e_j)$ are independent and normally distributed with constant variance $a^2$, the observed values $(y_j)$ are also independent random variables and

$$y_j \sim {\cal N} \left( f(t_j ; V,k) , a^2 \right), \quad 1 \leq j \leq n. \quad (1.4)$$

Here, the vector of parameters $\psi$ is $(V,k,a)$: $V$ and $k$ are the PK parameters of the structural PK model, and $a$ is the residual error parameter. As the $y_j$ are independent, the joint distribution of $y$ is the product of their marginal distributions:

$$\py(y ; \psi,\vt) = \prod_{j=1}^n \pyj(y_j ; \psi,t_j) ,$$

where $\qyj$ is the normal distribution defined in (1.4).
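As a sketch, this model can be simulated and its joint pdf evaluated numerically. The parameter values and measurement times below are hypothetical choices made purely for illustration; the joint pdf is computed on the log scale, where the product of marginal densities becomes a sum:

```python
import numpy as np

# Hypothetical inputs for illustration (not taken from the text)
V, k, a = 10.0, 0.2, 0.5                   # volume, elimination rate, error SD
t = np.array([1.0, 2.0, 4.0, 8.0, 12.0])   # measurement times (design)

def f(t, V, k):
    """Structural PK model: concentration after a 500 mg IV bolus."""
    return (500.0 / V) * np.exp(-k * t)

# Simulate observations y_j ~ N(f(t_j; V, k), a^2)  -- equation (1.4)
rng = np.random.default_rng(0)
y = f(t, V, k) + rng.normal(0.0, a, size=t.size)

# Joint pdf of y: product of the marginal normal densities,
# evaluated as a sum of log-densities for numerical stability
residuals = y - f(t, V, k)
log_p = np.sum(-0.5 * np.log(2 * np.pi * a**2) - residuals**2 / (2 * a**2))
```

Simulation and evaluation of the joint pdf are thus two different uses of the same probability distribution, as announced in the introduction.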