# Wald test statistic
## Single parameter
A Wald test statistic has a particular form: the estimator, centered at the parameter's value under the null hypothesis and divided by its standard error.
$
T = \frac{\hat{\theta} - \theta_0}{\text{se}(\hat{\theta})}
$
If the estimator $\hat{\theta}$ satisfies a central limit theorem, then by the standardizing property of the normal distribution, $T$ is asymptotically distributed as a standard normal:
$
T \sim N(0, 1)
$
We can take advantage of this fact to calculate z-statistics and p-values. If the standard error must itself be estimated, the statistic follows a t distribution in small samples, and we use t-statistics instead.
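As a minimal sketch of this calculation, assuming hypothetical data and that `numpy` and `scipy` are available:

```python
import numpy as np
from scipy import stats

# Hypothetical sample; test H0: theta = 0 for the population mean
rng = np.random.default_rng(42)
x = rng.normal(loc=0.3, scale=1.0, size=100)

theta_hat = x.mean()                    # estimator of the mean
se = x.std(ddof=1) / np.sqrt(len(x))    # estimated standard error
T = (theta_hat - 0) / se                # Wald statistic

p_value = 2 * stats.norm.sf(abs(T))     # two-sided p-value from N(0, 1)
# Because se is estimated, a t reference distribution is more accurate
# in small samples:
p_value_t = 2 * stats.t.sf(abs(T), df=len(x) - 1)
```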
Several popular hypothesis tests have the form of a Wald test statistic:
- [[t-tests]]
- [[Proportion test|proportion tests]] (sketched below)
- hypothesis tests for coefficients that use [[Maximum likelihood estimation|maximum likelihood estimation]]
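For instance, a one-sample proportion test in its Wald form evaluates the standard error at the estimate $\hat{p}$ rather than at $p_0$. A sketch with hypothetical counts:

```python
import numpy as np
from scipy import stats

successes, n, p0 = 61, 100, 0.5

p_hat = successes / n
se = np.sqrt(p_hat * (1 - p_hat) / n)  # se at the estimate: the Wald form
z = (p_hat - p0) / se
p_value = 2 * stats.norm.sf(abs(z))
```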
## Linear combinations of parameters
Because the [[Ordinary least square (OLS) estimators|OLS estimators]] are jointly $p$-variate Normal, the properties of Normal distributions imply that *linear combinations* (contrasts) of the coefficients also have Normal distributions.
If $\hat{\beta}$ is a $p$-variate vector, we can make a contrast by multiplying it with another vector $a$ of the same dimension. If $p = 3$ and we want the difference between the third and second coefficients of $\hat{\beta} = (\hat{\beta}_0, \hat{\beta}_1, \hat{\beta}_2)$, we can specify $a = (0, -1, 1)$. This means we can estimate $\beta_2 - \beta_1$ by:
$
\hat{l} = a'\hat{\beta} = \hat{\beta}_2 - \hat{\beta}_1
$
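A sketch with a hypothetical fitted coefficient vector:

```python
import numpy as np

beta_hat = np.array([1.2, 0.5, 0.9])  # hypothetical (beta_0, beta_1, beta_2)
a = np.array([0, -1, 1])              # contrast selecting beta_2 - beta_1

l_hat = a @ beta_hat                  # 0.9 - 0.5 = 0.4
```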
The variance of this linear contrast can be derived by pre- and post-multiplying the covariance matrix of $\hat{\beta}$ by the contrast vector:
$
\begin{align}
\text{Var}(\hat{l}) &= \text{Var}(a'\hat{\beta}) \\
&= a'(\sigma^2(X'X)^{-1})a
\end{align}
$
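Putting the pieces together, here is a sketch of a Wald test for this contrast in an OLS fit, using hypothetical data; in practice $\sigma^2$ is replaced by the residual-based estimate $\hat{\sigma}^2$, which gives a t reference distribution:

```python
import numpy as np
from scipy import stats

# Hypothetical design matrix (intercept + two predictors) and response
rng = np.random.default_rng(0)
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
y = X @ np.array([1.0, 0.5, 0.9]) + rng.normal(size=n)

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y                   # OLS estimates

resid = y - X @ beta_hat
sigma2_hat = resid @ resid / (n - X.shape[1])  # estimate of sigma^2

a = np.array([0, -1, 1])                       # contrast: beta_2 - beta_1
l_hat = a @ beta_hat
var_l = a @ (sigma2_hat * XtX_inv) @ a         # Var(a' beta_hat)

T = l_hat / np.sqrt(var_l)                     # Wald statistic for H0: a'beta = 0
p_value = 2 * stats.t.sf(abs(T), df=n - X.shape[1])
```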
Linear contrasts can be useful if you are trying to compare two non-reference groups in a [[Multiple linear regression|multiple linear regression]]. This is also common in longitudinal analysis.
---
# References
[[Applied Linear Regression#6. Testing and Analysis of Variance]]