This package contains a set of functions and tools for Maximum Likelihood (ML) estimation. The focus of the package is on non-linear optimization from the ML viewpoint, and it provides several convenience wrappers and tools, such as the BHHH algorithm, variance-covariance matrices, standard errors, and summary methods.
- maxLik: the central function that performs the ML estimation. It can be called with a log-likelihood function and start values as simply as maxLik(loglik, start).
- a number of optimization methods with a unified interface, most of which can be called through maxLik.
- support for the BHHH method.
- tools to help debug analytic gradients and Hessians.
- a number of summary methods, including summary, stdEr and tidy, for quick summaries and standard errors.
The maxLik package is a set of convenience tools and wrappers focusing on Maximum Likelihood (ML) analysis, but it also contains tools for other optimization tasks. The package includes a) wrappers for several existing optimizers (implemented by stats::optim); b) original optimizers, including Newton-Raphson and Stochastic Gradient Ascent; and c) several convenience tools for using these optimizers from the ML perspective. Examples are BHHH optimization (maxBHHH) and utilities that extract standard errors from the estimates; see the sketch below. Other highlights include a unified interface for all included optimizers, tools to test user-provided analytic derivatives, and constrained optimization.
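For instance, the BHHH method requires the log-likelihood (or its gradient) by individual observation. A minimal sketch of how this might look (the simulated data and parameter names are purely illustrative):
library(maxLik)
## simulated data: normal sample with mu = 1, sd = 2 (illustrative only)
set.seed(1)
y <- rnorm(100, mean = 1, sd = 2)
## BHHH needs the log-likelihood by observation:
## return a vector with one value per observation, not the sum
loglikVec <- function(param) {
    dnorm(y, mean = param[1], sd = param[2], log = TRUE)
}
fit <- maxLik(loglikVec, start = c(mu = 0, sigma = 1), method = "BHHH")
summary(fit)   # coefficient table with standard errors
stdEr(fit)     # standard errors alone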
A good starting point for learning the usage of maxLik is the included vignettes "Introduction: what is maximum likelihood", "Maximum likelihood estimation with maxLik", and "Stochastic Gradient Ascent in maxLik". Another good source is Henningsen & Toomet (2011), an introductory paper to the package. Use vignette(package="maxLik") to see the available vignettes, and vignette("using-maxlik") to read the usage vignette.
From the user's perspective, the central function in the package is maxLik. In its simplest form it takes two arguments: the log-likelihood function and a vector of initial parameter values (see the example below). It returns an object of class maxLik with convenient methods such as summary, coef, and stdEr. It also supports a plethora of other arguments; for instance, one can supply an analytic gradient and Hessian, select the desired optimizer, and control the optimization in various ways, as sketched below.
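A sketch of these optional arguments (the data, the analytic gradient, and the choice of the BFGS optimizer here are illustrative assumptions, not recommendations):
library(maxLik)
set.seed(2)
z <- rnorm(100, mean = 1, sd = 2)
loglik <- function(param) {
    sum(dnorm(z, mean = param[1], sd = param[2], log = TRUE))
}
## analytic gradient of the normal log-likelihood:
## d logL/d mu and d logL/d sigma
gradlik <- function(param) {
    mu <- param[1]; sigma <- param[2]
    c(sum(z - mu)/sigma^2,
      sum((z - mu)^2)/sigma^3 - length(z)/sigma)
}
fit2 <- maxLik(loglik, grad = gradlik,
               start = c(mu = 0, sigma = 1), method = "BFGS")
summary(fit2)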
A useful utility function in the package is compareDerivatives, which allows one to compare analytic and numeric derivatives for debugging purposes.
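Continuing the sketch above, one might check the analytic gradient against its numeric counterpart at the start values (a hedged illustration; loglik and gradlik are the functions assumed in the previous sketch):
## large discrepancies between the analytic and numeric columns
## point to an error in the analytic derivative
compareDerivatives(loglik, gradlik, t0 = c(mu = 0, sigma = 1))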
Another useful function is condiNumber for analyzing multicollinearity problems in the estimated models.
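As an illustration (re-using the fitted object fit2 from the sketch above), the estimated Hessian can be passed to condiNumber:
## condition number of the Hessian, printed while adding one column
## at a time; a sharp jump hints at which parameter causes the problem
condiNumber(hessian(fit2))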
In the interest of providing a unified user interface, all the optimizers in this package are implemented as maximizers. This includes the optim-based methods, such as maxBFGS, as well as maxSGA, the maximizer counterpart of the popular Stochastic Gradient Descent; a brief sketch follows.
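A sketch of calling a maximizer directly, outside the ML wrapper (re-using the illustrative loglik function from above):
## same unified interface as maxLik(), but the result is a plain
## optimization object rather than a maxLik one
res <- maxBFGS(loglik, start = c(mu = 0, sigma = 1))
summary(res)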
Examples:
### estimate mean and variance of normal random vector
## create random numbers where mu=1, sd=2
set.seed(123)
x <- rnorm(50, 1, 2)
## log likelihood function.
## Note: 'param' is a 2-vector c(mu, sd)
llf <- function(param) {
mu <- param[1]
sd <- param[2]
llValue <- dnorm(x, mean=mu, sd=sd, log=TRUE)
sum(llValue)
}
## Estimate it with mu=0, sd=1 as start values
ml <- maxLik(llf, start = c(mu=0, sigma=1))
print(summary(ml))
## Estimates close to c(1,2) :-)
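The fitted object supports the usual extractor methods; for a quick check one might call, for instance:
coef(ml)     # point estimates
stdEr(ml)    # standard errors
vcov(ml)     # variance-covariance matrix of the estimates
logLik(ml)   # log-likelihood value at the maximum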
- Ott Toomet
- Arne Henningsen
- Spencer Graves
- Yves Croissant
- David Hugh-Jones
- Luca Scrucca
Maintainer: Ott Toomet
Henningsen A, Toomet O (2011). maxLik: A package for maximum likelihood estimation in R. Computational Statistics, 26(3), 443-458. doi: 10.1007/s00180-010-0217-1.
The package (and its name) was inspired by the maxlik library in the GAUSS programming language.
The very first code of maxLik originates from a PhD econometrics course in fall 2000. The course was taught by Lars Muus at Aarhus University, and a problem set asked the students to implement the Gauss-Newton method. Later, OT could not understand the error messages of the nlm function and amended the Gauss-Newton code into Newton-Raphson. That Newton-Raphson method is one of the central optimizers in the current maxLik.