Statistical Inference - PhD EcoStatData
Università degli Studi di Milano-Bicocca
This unit covers point estimation: the method of moments, maximum likelihood estimation, M- and Z-estimators, and Bayesian estimators.
The rationale behind point estimation is quite simple:
When sampling is from a population described by a pdf or a pmf f(\cdot ; \theta), knowledge of \theta yields knowledge of the entire population.
Hence, it is natural to seek a method of finding a good estimator of the unknown point \theta.
Point estimator
A point estimator \hat{\theta} is any function of the random sample Y_1,\dots,Y_n, namely \hat{\theta}(\bm{Y}) = \hat{\theta}(Y_1,\dots,Y_n). That is, any statistic is a point estimator.
In principle, the range of the estimator coincides with that of the parameter, i.e. \hat{\theta} : \mathcal{Y} \rightarrow \Theta, but there are exceptions.
An estimator \hat{\theta}(Y_1,\dots,Y_n) is a function of the sample Y_1,\dots,Y_n and is a random variable.
An estimate \hat{\theta}(y_1,\dots,y_n) is a function of the realized values y_1,\dots,y_n and is a number.
We will use the notation \hat{\theta} to denote both estimators and estimates whenever its meaning is clear from the context.
The method of moments is, perhaps, the oldest method of finding point estimators, dating back at least to Karl Pearson in the late 1800s.
Let Y_1,\dots,Y_n be an iid sample from f(\cdot; \theta), with \theta = (\theta_1,\dots,\theta_p). Moreover, let us define m_r = \frac{1}{n}\sum_{i=1}^n Y_i^r, \qquad \mu_r(\theta) = \mu_r(\theta_1,\dots,\theta_p) = \mathbb{E}_\theta(Y^r), \qquad r=1,\dots,p, where \mu_r(\theta_1,\dots,\theta_p) is the r-th population moment and m_r is the corresponding r-th sample moment.
The method of moments estimator \hat{\theta} is obtained by solving the following system of equations for (\theta_1,\dots,\theta_p): \begin{aligned} \mu_1(\theta_1,\dots,\theta_p) &= m_1, \\ \mu_2(\theta_1,\dots,\theta_p) &= m_2, \\ &\vdots \\ \mu_p(\theta_1,\dots,\theta_p) &= m_p. \\ \end{aligned}
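For instance (a simple illustration added here, not taken from the notes), if Y_1,\dots,Y_n are iid exponential with rate \theta, then p = 1 and \mu_1(\theta) = \mathbb{E}_\theta(Y) = 1/\theta, so the single equation 1/\theta = m_1 yields \hat{\theta} = 1/m_1 = 1/\bar{y}.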
In general, neither the existence nor the uniqueness of a solution is guaranteed.
Moments estimators are not necessarily the best estimators, but under reasonable conditions they are consistent, they have convergence rate \sqrt{n}, and they are asymptotically normal.
Suppose the vector (Y,Y^2,\dots,Y^p) has finite covariance matrix \Sigma; then the multivariate central limit theorem implies that, as n\rightarrow \infty, \sqrt{n}\{m - \mu(\theta)\} \overset{d}{\longrightarrow} Z, \qquad Z\sim N_p(0, \Sigma), where m = (m_1,\dots,m_p) and \mu(\theta) = (\mu_1(\theta),\dots,\mu_p(\theta)).
Suppose also that \mu(\theta) is a one-to-one mapping and let g(\mu) be the inverse of \mu(\theta), that is g = \mu^{-1}. We assume that g has differentiable components g_r(\cdot) for r = 1,\dots,p.
The moments estimator can be written as \hat{\theta} = g(m) and \theta = g(\mu(\theta)). Then, as a consequence of the delta method the following general result holds: \sqrt{n}(\hat{\theta} - \theta) \overset{d}{\longrightarrow} Z, \qquad Z\sim N_p\left(0, D \Sigma D^T\right), where D = [d_{rr'}] is a p \times p matrix whose entries are the derivatives d_{rr'} = \partial g_r(\mu)/\partial \mu_{r'}.
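In the scalar case p = 1, writing \sigma^2 = \text{Var}_\theta(Y) and using g'(\mu) = 1/\mu'(\theta), the result above reduces to \sqrt{n}(\hat{\theta} - \theta) \overset{d}{\longrightarrow} N\left(0, \sigma^2/\{\mu'(\theta)\}^2\right). For the exponential illustration given earlier, \sigma^2 = 1/\theta^2 and \mu'(\theta) = -1/\theta^2, so the asymptotic variance is \theta^2.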
Let Y_1,\dots,Y_n be an iid random sample from a beta distribution of parameters \alpha,\beta > 0 with density f(y; \alpha, \beta) = \frac{\Gamma(\alpha + \beta)}{\Gamma(\alpha)\Gamma(\beta)} y^{\alpha-1}(1 - y)^{\beta-1}, \qquad 0 < y < 1.
The moment estimator for (\alpha, \beta) is the (explicitly available) solution of the system of equations m_1 = \frac{\alpha}{\alpha + \beta}, \qquad m_2 = \frac{\alpha (\alpha+1)}{(\alpha + \beta) (\alpha + \beta + 1)}.
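Solving the system gives, after some algebra, \hat{\alpha} = \frac{m_1(m_1 - m_2)}{m_2 - m_1^2}, \qquad \hat{\beta} = \frac{(1 - m_1)(m_1 - m_2)}{m_2 - m_1^2}. A minimal numerical sketch follows; it is not part of the original notes and assumes NumPy is available (the helper beta_mom and the simulated parameters are arbitrary choices for this illustration).

```python
import numpy as np

def beta_mom(y):
    """Illustrative helper: method of moments estimates for a Beta sample,
    matching the first two sample moments to their population counterparts."""
    y = np.asarray(y, dtype=float)
    m1 = y.mean()
    m2 = np.mean(y**2)
    denom = m2 - m1**2            # (biased) sample variance
    alpha_hat = m1 * (m1 - m2) / denom
    beta_hat = (1.0 - m1) * (m1 - m2) / denom
    return alpha_hat, beta_hat

# quick check on simulated data (true parameters 2 and 5 are arbitrary)
rng = np.random.default_rng(1)
y = rng.beta(2.0, 5.0, size=1000)
print(beta_mom(y))                # estimates should be close to (2, 5)
```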
The method of maximum likelihood is, by far, the most popular technique for deriving estimators, developed by Ronald A. Fisher in Fisher (1922; 1925).
Recall that L(\theta) = L(\theta; \bm{y}) is the likelihood function and \ell(\theta) = \log{L(\theta)} is the log-likelihood.
Given a likelihood function L(\theta) of \theta \in \Theta, a maximum likelihood estimate of \theta is an element \hat{\theta} \in \Theta which attains the maximum value of L(\theta) over \Theta, i.e. such that L(\hat{\theta}) \ge L(\theta) for every \theta \in \Theta, or equivalently L(\hat{\theta}) = \max_{\theta \in \Theta} L(\theta).
The maximum likelihood estimator (MLE) of the parameter \theta based on a sample \bm{Y} is \hat{\theta}(\bm{Y}).
Intuitively, the MLE is a reasonable choice: it is the parameter point for which the observed sample is most likely.
Clearly, the MLE is also the maximizer of the log-likelihood: \ell(\hat{\theta}) = \max_{\theta \in \Theta} \ell(\theta).
Remark II: often \hat{\theta} cannot be written explicitly as a function of the sample values, i.e. in general the MLE has no closed-form expression but it must be found using numerical procedures.
Remark III: the likelihood function has to be maximized over the parameter space \Theta specified by the statistical model, not over the set of all mathematically admissible values of \theta.
Theorem (Invariance, Casella and Berger 2002, Theorem 7.2.10)
Let \psi(\cdot) be a one-to-one function (i.e. a reparametrization) from the set \Theta onto the set \Psi. Then the MLE of \psi = \psi(\theta) is \hat{\psi} = \psi(\hat{\theta}) where \hat{\theta} denotes the MLE of \theta.
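As an illustration (added here, not taken from the reference above): if Y_1,\dots,Y_n are iid Bernoulli(p), the MLE of p is \hat{p} = \bar{y}, and for the log-odds \psi(p) = \log\{p/(1-p)\}, a one-to-one map from (0,1) onto \mathbb{R}, the invariance property immediately gives \hat{\psi} = \log\{\bar{y}/(1-\bar{y})\}.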
Let Y_1,\dots,Y_n be an iid random sample from a Poisson distribution of parameter \lambda > 0, with likelihood function L(\lambda) = \prod_{i=1}^n \frac{e^{-\lambda} \lambda^{y_i}}{y_i!}.
Therefore the log-likelihood, up to an additive constant c not depending on \lambda, is \ell(\lambda) = \sum_{i=1}^ny_i\log{\lambda} - n\lambda + c.
The maximum likelihood estimator \hat{\lambda} is found by maximizing \ell(\lambda). In this regular problem, this can be done by studying the first derivative: \ell^*(\lambda) = \frac{1}{\lambda}\sum_{i=1}^ny_i - n.
We solve \ell^*(\lambda) = 0, obtaining \hat{\lambda} = \bar{y}. This is indeed a maximizer of \ell(\lambda) (why?).
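The following sketch is not part of the original notes; it assumes NumPy and SciPy are available and simply checks numerically that the maximizer of \ell(\lambda) coincides with \bar{y} (the simulated rate 3.5 and the helper negloglik are arbitrary choices made here for illustration).

```python
import numpy as np
from scipy.optimize import minimize_scalar

# simulated Poisson sample (the true rate 3.5 is arbitrary)
rng = np.random.default_rng(2)
y = rng.poisson(lam=3.5, size=500)

# negative log-likelihood, up to the additive constant not depending on lambda
def negloglik(lam):
    return -(np.sum(y) * np.log(lam) - len(y) * lam)

res = minimize_scalar(negloglik, bounds=(1e-8, 50.0), method="bounded")
print(res.x, y.mean())   # numerical maximizer vs. closed-form MLE: they agree
```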
An M-estimator is the maximizer over \Theta of a function M(\theta) : \Theta \rightarrow \mathbb{R} of the type: M(\theta) = \frac{1}{n}\sum_{i=1}^n m(\theta; Y_i), where m(\theta; Y_i) are known real-valued functions.
Remark: when m(\theta; y) = \log{f(y; \theta)}, the M-estimator coincides with the MLE of a model with iid observations.
The term 1/n is included here to facilitate the description of the subsequent asymptotic theory, but it clearly does not affect the maximizer.
A Z-estimator is the solution over \Theta of a system of equations Q(\theta) = \bm{0} of the type: Q(\theta) = Q(\theta; \bm{Y}) = \frac{1}{n}\sum_{i=1}^n q(\theta; Y_i) = \bm{0}, where q(\theta; y) are known vector-valued maps. These are called estimating equations.
When \theta = (\theta_1,\dots,\theta_p), Q and q typically have p coordinate functions, namely we consider: Q_r(\theta) = \frac{1}{n}\sum_{i=1}^n q_r(\theta; Y_i)= 0, \qquad r = 1,\dots,p.
In many examples q_r(\theta; y) are the partial derivatives of a function m(\theta; y), that is Q(\theta) = \frac{\partial}{\partial \theta} M(\theta); an example is the score function \ell^*(\theta), made concrete below. However, this is not always the case.
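In the Poisson example above, the score component is q(\lambda; y) = y/\lambda - 1, the derivative with respect to \lambda of \log f(y; \lambda) = y\log\lambda - \lambda - \log{y!}, so the estimating equation Q(\lambda) = \frac{1}{n}\sum_{i=1}^n (y_i/\lambda - 1) = 0 is the score equation and its solution is again the MLE \hat{\lambda} = \bar{y}.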
The location of a random variable Y is a vague notion that can be made precise by defining it as the expectation \mathbb{E}(Y), a quantile, or the center of symmetry, as in the following example.
Let Y_1,\dots,Y_n be an iid sample of real-valued random variables belonging to the family of distributions \mathcal{F} defined as \mathcal{F} = \{f(y - \theta) : \theta \in \Theta \subseteq \mathbb{R} \}, for some unknown density f(y) symmetric around 0. The parameter \theta is the location.
Classical M-estimators for \theta are the mean and the median, maximizing: -\frac{1}{n}\sum_{i=1}^n (Y_i - \theta)^2, \quad (\text{Mean}) \qquad \qquad - \frac{1}{n}\sum_{i=1}^n |Y_i - \theta|, \quad (\text{Median}) or alternatively (Z-estimator forms) solving the equations \frac{1}{n}\sum_{i=1}^n (Y_i - \theta) = 0, \quad (\text{Mean}) \qquad \qquad \frac{1}{n}\sum_{i=1}^n \text{sign}(Y_i - \theta) = 0, \quad (\text{Median}).
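Indeed, setting the derivative of the first criterion to zero gives \frac{1}{n}\sum_{i=1}^n (Y_i - \theta) = 0, i.e. \hat{\theta} = \bar{Y}, while the sign equation is satisfied by any sample median, i.e. any value \hat{\theta} with (roughly) half of the observations on either side.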
Huber estimators can be regarded as a compromise between the mean and the median, maximizing the following function: M(\theta) = \frac{1}{n}\sum_{i=1}^n m(Y_i - \theta), \qquad m(y) = \begin{cases}\frac{1}{2}y^2 \quad &\text{ if } |y| \le k\\ k |y| - \frac{1}{2}k^2 \quad &\text{ if } |y| > k \end{cases} where k > 0 is a tuning parameter. The function m(y) is continuous and differentiable. The choice k \rightarrow 0 leads to the median, whereas for k \rightarrow \infty we get the mean.
Equivalently, we can consider the solution of the following estimating equation: Q(\theta) = \frac{1}{n}\sum_{i=1}^n q(Y_i - \theta)= 0, \qquad q(y) = \begin{cases} -k \quad &\text{ if }\: y \le -k\\ y \quad &\text{ if }\: |y| \le k \\ k \quad &\text{ if }\: y \ge k\end{cases}
Unfortunately, there is no closed-form expression and the equation must be solved numerically.
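A minimal numerical sketch is reported below; it is not part of the original notes and assumes NumPy and SciPy are available. The helper huber_location and the cut-off k = 1.345 (a common default when the scale is treated as known, as in the display above) are choices made here for illustration. Since q is non-decreasing, Q(\theta) is continuous and non-increasing in \theta and changes sign between \min_i Y_i - 1 and \max_i Y_i + 1, so Brent's root-finding method applies.

```python
import numpy as np
from scipy.optimize import brentq

def huber_location(y, k=1.345):
    """Illustrative helper: Huber location estimate, i.e. the root of
    Q(theta) = mean(q(y_i - theta)), with q the identity clipped at -k and k."""
    y = np.asarray(y, dtype=float)

    def Q(theta):
        # estimating function: average of the clipped residuals
        return np.mean(np.clip(y - theta, -k, k))

    # Q is positive at the left endpoint and negative at the right one,
    # so brentq is guaranteed to find the root
    return brentq(Q, y.min() - 1.0, y.max() + 1.0)

# illustration: well-behaved data plus one gross outlier
rng = np.random.default_rng(3)
y = np.append(rng.normal(loc=1.0, size=100), 50.0)
print(huber_location(y), np.mean(y), np.median(y))
```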
Bayesian estimators are obtained following a different inferential paradigm from the one considered here, but they also have appealing frequentist properties.
Let L(\theta; \bm{y}) be the likelihood function and let \pi(\theta) be the prior distribution. Then, Bayesian inference relies on the posterior distribution, obtained as \pi(\theta \mid \bm{y}) = \frac{\pi(\theta) L(\theta; \bm{y})}{\int_\Theta \pi(\theta) L(\theta; \bm{y}) \mathrm{d}\theta}.
For reasons that will be clarified soon, under certain hypotheses the posterior mean is an optimal Bayesian estimator: \hat{\theta}_\text{Bayes} = \mathbb{E}(\theta \mid \bm{y}) = \frac{\int_\Theta \theta \: \pi(\theta) L(\theta; \bm{y}) \mathrm{d}\theta}{\int_\Theta \pi(\theta) L(\theta; \bm{y}) \mathrm{d}\theta}, which is, unfortunately, not always available in closed form.
Other “optimal” Bayesian estimators include e.g. the posterior median.
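As a concrete illustration, consider again the Poisson model with rate \lambda of the earlier example and a conjugate Gamma(a, b) prior \pi(\lambda) \propto \lambda^{a-1}e^{-b\lambda} (the choice of prior is made here for convenience and is not part of the notes). The posterior is then \pi(\lambda \mid \bm{y}) \propto \lambda^{a + \sum_{i=1}^n y_i - 1} e^{-(b + n)\lambda}, i.e. a Gamma(a + \sum_{i=1}^n y_i, b + n) distribution, whose mean \hat{\lambda}_\text{Bayes} = \frac{a + \sum_{i=1}^n y_i}{b + n} is available in closed form and shrinks \bar{y} towards the prior mean a/b.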