Image Source: Wikipedia
Linear models are one of the oldest and most well-known statistical prediction algorithms, and nowadays they are often categorized as "machine learning algorithms." Generalized Linear Models (GLMs) are a framework for modeling a response variable $y$ that is bounded or discrete. Generalized linear models allow for an arbitrary link function $g$ that relates the mean of the response variable to the predictors, i.e. $E(y) = g^{-1}(\beta'x)$. The link function is often related to the distribution of the response, and in particular it typically has the effect of transforming between $(-\infty, \infty)$, the range of the linear predictor, and the range of the response variable (e.g. $[0,1]$). [1]
Therefore, GLMs allow for response variables that have error distribution models other than a normal distribution. Common examples of GLMs include logistic regression (binomial response with a logit link) and Poisson regression (count response with a log link).
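To make the link-function idea concrete, here is a minimal R sketch (on simulated data, not from any example in this tutorial) of a logistic regression, where the logit link maps the mean in $[0,1]$ to the linear predictor in $(-\infty, \infty)$:
# Logistic regression: binomial family with logit link g(mu) = log(mu / (1 - mu))
set.seed(1)
x <- rnorm(100)
y <- rbinom(100, size = 1, prob = plogis(1 + 2 * x))  # true E(y) = g^{-1}(1 + 2x)
fit <- glm(y ~ x, family = binomial(link = "logit"))
coef(fit)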
In a linear model, given a vector of inputs, $X^T = (X_1, X_2, ..., X_p)$, we predict the output $Y$ via the model:
$$\hat{Y} = \hat{\beta}_0 + \sum_{j=1}^p X_j \hat{\beta}_j$$The term $\hat{\beta}_0$ is the intercept, also known as the bias in machine learning. Often it is convenient to include the constant variable $1$ in $X$, include $\hat{\beta}_0$ in the vector of coefficients $\hat{\beta}$, and then write the linear model in vector form as an inner product,
$$\hat{Y} = X^T\hat{\beta},$$where $X^T$ denotes the transpose of the (column) input vector $X$. We will review the case where $Y$ is a scalar; however, in general $Y$ can have more than one dimension. Viewed as a function over the $p$-dimensional input space, $f(X) = X^T\beta$ is linear, and its gradient, $f'(X) = \beta$, is a vector in input space that points in the steepest uphill direction.
There are many different methods for fitting a linear model, but the simplest and most popular is Ordinary Least Squares (OLS). The OLS method minimizes the residual sum of squares (RSS) and leads to a closed-form expression for the estimated value of the unknown parameter $\beta$.
$$RSS(\beta) = \sum_{i=1}^n (y_i - x_i^T\beta)^2$$$RSS(\beta)$ is a quadratic function of the parameters, and hence its minimum always exists, but may not be unique. The solution is easiest to characterize in matrix notation:
$$RSS(\beta) = (\boldsymbol{y} - \boldsymbol{X}\beta)^T(\boldsymbol{y} - \boldsymbol{X}\beta)$$where $\boldsymbol{X}$ is an $n \times p$ matrix with each row an input vector, and $\boldsymbol{y}$ is a vector of length $n$ representing the response in the training set. Differentiating with respect to $\beta$, we get the normal equations,
$$\boldsymbol{X}^T(\boldsymbol{y} - \boldsymbol{X}\beta) = 0$$If $\boldsymbol{X}^T\boldsymbol{X}$ is nonsingular, then the unique solution is given by:
$$\hat{\beta} = (\boldsymbol{X}^T\boldsymbol{X})^{-1}\boldsymbol{X}^T\boldsymbol{y}$$The fitted value at the $i^{th}$ input, $x_i$, is $\hat{y}_i = \hat{y}(x_i) = x_i^T\hat{\beta}$. Solving this equation for $\beta$ appears to require inverting the matrix $\boldsymbol{X}^T\boldsymbol{X}$, but inverting it directly can be computationally expensive. There are computational shortcuts for solving the normal equations available via QR or Cholesky decomposition. When dealing with large training sets, it is useful to have an understanding of the underlying computational methods in the software that you are using. Some GLM software implementations may not utilize all available computational shortcuts, costing you extra time to train your GLMs, or requiring you to upgrade the memory on your machine.
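As a quick illustration, here is a minimal R sketch (on simulated data, not tied to any package's internals) of three ways to compute $\hat{\beta}$; the QR route is what lm() uses internally:
# Simulate a small regression problem
set.seed(1)
n <- 1000; p <- 5
X <- cbind(1, matrix(rnorm(n * (p - 1)), nrow = n))  # design matrix with intercept column
beta_true <- rnorm(p)
y <- as.vector(X %*% beta_true + rnorm(n))
# 1. Naive: invert X'X explicitly (numerically fragile and expensive)
beta_naive <- solve(t(X) %*% X) %*% t(X) %*% y
# 2. Cholesky: factor X'X = R'R, then solve two triangular systems
R <- chol(crossprod(X))
beta_chol <- backsolve(R, forwardsolve(t(R), crossprod(X, y)))
# 3. QR: factor X itself; more numerically stable, used by lm()
beta_qr <- qr.coef(qr(X), y)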
http://web.stanford.edu/~hastie/Papers/glmpath.pdf
Consider a sample consisting of $n$ cases, each of which consists of $p$ covariates and a single outcome. Let $y_i$ be the outcome for case $i$ and $x_i := (x_{i1}, x_{i2}, \ldots, x_{ip})^T$ be its covariate vector.
Then the objective of Ridge is to solve:
$${\displaystyle \min _{\beta }\left\{{\frac {1}{n}}\sum _{i=1}^{n}\left(y_{i}-\beta_0 - \sum_{j=1}^p x_{ij}\beta_j \right)^{2}\right\}{\text{ subject to }}\sum _{j=1}^{p}\beta _{j}^2 \leq t.}$$Here $t$ is a prespecified free parameter that determines the amount of regularization. Ridge is also called $\ell_2$ regularization.
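In the equivalent penalized (Lagrangian) form, ridge regression has the closed-form solution $\hat{\beta}^{ridge} = (\boldsymbol{X}^T\boldsymbol{X} + \lambda I)^{-1}\boldsymbol{X}^T\boldsymbol{y}$. A minimal R sketch, assuming a numeric matrix X with centered and scaled columns and the (unpenalized) intercept handled separately:
# Ridge closed form: beta_hat = (X'X + lambda * I)^{-1} X'y
ridge_beta <- function(X, y, lambda) {
  solve(crossprod(X) + lambda * diag(ncol(X)), crossprod(X, y))
}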
Lasso (least absolute shrinkage and selection operator; also written Lasso or LASSO) is a regression analysis method that performs both variable selection and regularization in order to enhance the prediction accuracy and interpretability of the statistical model it produces.
Then the objective of Lasso is to solve:
$${\displaystyle \min _{\beta }\left\{{\frac {1}{n}}\sum _{i=1}^{n}\left(y_{i}-\beta_0 - \sum_{j=1}^p x_{ij}\beta_j \right)^{2}\right\}{\text{ subject to }}\sum _{j=1}^{p}|\beta _{j}| \leq t.}$$Here $t$ is a prespecified free parameter that determines the amount of regularization. Lasso is also called $\ell_1$ regularization.
Elastic Net regularization is a simple blend of Lasso and Ridge regularization. In software, this is typically controlled by an alpha parameter between 0 and 1, where:
- alpha = 0.0 is Ridge regression
- alpha = 0.5 is a 50/50 blend of Ridge and Lasso regression
- alpha = 1.0 is Lasso regression
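For reference, the penalized (Lagrangian) form of the Elastic Net objective, in the parameterization used by glmnet (the constant in front of the loss varies by package), is:
$$\min_{\beta_0, \beta}\ \frac{1}{2n}\sum_{i=1}^{n}\left(y_i - \beta_0 - \sum_{j=1}^p x_{ij}\beta_j\right)^2 + \lambda\left[\alpha\sum_{j=1}^{p}|\beta_j| + \frac{1-\alpha}{2}\sum_{j=1}^{p}\beta_j^2\right]$$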
GLMs are trained by finding the set of parameters that maximizes the likelihood of the data. For the Gaussian family, maximizing the likelihood is equivalent to minimizing the mean squared error. This has an analytical solution and can be solved with a standard method of least squares; this also applies when the $\ell_2$ penalty is added to the optimization. For all other families, and whenever the $\ell_1$ penalty is included, the maximum likelihood problem has no analytical solution, so an iterative method such as IRLSM, L-BFGS, Newton's method, or gradient descent must be used.
The IRLS method is used to solve certain optimization problems with objective functions of the form:
$${\underset {{\boldsymbol \beta }}{\operatorname {arg\,min}}}\sum _{{i=1}}^{n}{\big |}y_{i}-f_{i}({\boldsymbol \beta }){\big |}^{p},$$by an iterative method in which each step involves solving a weighted least squares problem of the form:
$${\boldsymbol \beta }^{{(t+1)}}={\underset {{\boldsymbol \beta }}{\operatorname {arg\,min}}}\sum _{{i=1}}^{n}w_{i}({\boldsymbol \beta }^{{(t)}}){\big |}y_{i}-f_{i}({\boldsymbol \beta }){\big |}^{2}.$$IRLS is used to find the maximum likelihood estimates of a generalized linear model, and in robust regression as a way of mitigating the influence of outliers in an otherwise normally distributed data set, for example by minimizing the least absolute errors rather than the least squared errors.
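To make this concrete, here is a minimal R sketch of IRLS for least absolute deviations (the $p = 1$ case, with linear $f_i(\boldsymbol{\beta}) = x_i^T\boldsymbol{\beta}$); it is purely illustrative, not how any particular package implements IRLS:
# IRLS sketch for least absolute deviations: weights w_i = 1 / |r_i|
irls_lad <- function(X, y, iters = 50, eps = 1e-8) {
  beta <- qr.coef(qr(X), y)              # initialize with the OLS solution
  for (t in seq_len(iters)) {
    r <- as.vector(y - X %*% beta)       # current residuals
    w <- 1 / pmax(abs(r), eps)           # reweight; eps guards against division by zero
    sw <- sqrt(w)
    beta <- qr.coef(qr(sw * X), sw * y)  # solve the weighted least squares subproblem
  }
  beta
}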
One of the advantages of IRLS over linear programming and convex programming is that it can be used with Gauss-Newton and Levenberg-Marquardt numerical algorithms.
The IRL1 algorithm solves a sequence of non-smooth weighted $\ell_1$-minimization problems, and hence can be seen as the non-smooth counterpart to the IRLS algorithm.
H2O's GLM uses the IRLS method with an alternating direction method of multipliers (ADMM) inner solver, as described in Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers by Boyd et al., to handle the $\ell_1$ penalty. ADMM is an algorithm that solves convex optimization problems by breaking them into smaller pieces, each of which is then easier to handle. Every iteration of the algorithm consists of the following steps:
1. Generate a weighted least squares problem based on the previous solution.
2. Compute the weighted Gram matrix $\boldsymbol{X}^T W \boldsymbol{X}$ and the vector $\boldsymbol{X}^T z$.
3. Decompose the Gram matrix (Cholesky decomposition) and apply the ADMM solver to solve the resulting $\ell_1$-penalized least squares problem.
In the H2O GLM implementation, steps 1 and 2 are performed distributively, and step 3 is computed in parallel on a single node. The Gram matrix approach is very efficient for tall and narrow datasets when running lambda search with a sparse solution.
The IRLS method can also use cyclical coordinate descent in its inner loop (as opposed to ADMM). The glmnet package uses cyclical coordinate descent, which successively optimizes the objective function over each parameter with the others fixed, and cycles repeatedly until convergence.
Cyclical coordinate descent methods are a natural approach for solving convex problems with $\ell_1$ or $\ell_2$ constraints, or mixtures of the two (elastic net). Each coordinate-descent step is fast, with an explicit formula for each coordinate-wise minimization. The method also exploits the sparsity of the model, spending much of its time evaluating only inner products for variables with non-zero coefficients.
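To illustrate, here is a minimal R sketch of cyclical coordinate descent for the lasso (assuming the columns of X are standardized and the intercept is omitted); glmnet's actual implementation adds warm starts, active-set iteration, and other optimizations on top of this idea:
# Soft-thresholding operator: the explicit coordinate-wise lasso solution
soft <- function(z, g) sign(z) * pmax(abs(z) - g, 0)

lasso_cd <- function(X, y, lambda, iters = 100) {
  n <- nrow(X)
  beta <- rep(0, ncol(X))
  r <- y                                    # full residual for beta = 0
  for (it in seq_len(iters)) {
    for (j in seq_along(beta)) {
      r <- r + X[, j] * beta[j]             # partial residual: remove j's contribution
      zj <- drop(crossprod(X[, j], r)) / n  # coordinate-wise least squares coefficient
      beta[j] <- soft(zj, lambda)           # valid because mean(X[, j]^2) = 1 (standardized)
      r <- r - X[, j] * beta[j]             # restore j's (updated) contribution
    }
  }
  beta
}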
Limited-memory BFGS (L-BFGS) is an optimization algorithm in the family of quasi-Newton methods that approximates the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm using a limited amount of computer memory. Due to its resulting linear memory requirement, the L-BFGS method is particularly well suited for optimization problems with a large number of variables. The method is popular among "big data" GLM implementations such as h2o::h2o.glm() (one of two available solvers) and SparkR::glm(). The L-BFGS-B algorithm is an extension of the L-BFGS algorithm that handles simple box constraints on the variables.
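For example, H2O lets you choose the solver through the solver argument of h2o.glm(). A sketch, reusing the Sacramento training frame set up later in this tutorial ("L_BFGS" is one of the documented solver values):
# Request the L-BFGS solver instead of the default (AUTO) solver selection
fit <- h2o.glm(x = xcols, y = "price", training_frame = train,
               family = "gaussian", lambda = 0, solver = "L_BFGS")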
In order for the coefficients to be easily interpretable, the features must be centered and scaled (aka "normalized"). Many software packages will allow the direct input of categorical/factor columns in the training frame; however, internally any categorical columns will be expanded into binary indicator variables. The caret package offers a handy utility function, caret::dummyVars(), for dummy/indicator expansion if you need to do this manually, as sketched below.
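A minimal sketch using the Sacramento data that appears later in this tutorial:
# Expand factor columns (e.g. city, zip) into binary indicator columns
library(caret)
data("Sacramento")
dv <- caret::dummyVars(price ~ ., data = Sacramento)
X <- predict(dv, newdata = Sacramento)  # numeric model matrix
# Center and scale the expanded features
pp <- caret::preProcess(as.data.frame(X), method = c("center", "scale"))
X_scaled <- predict(pp, newdata = as.data.frame(X))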
Missing data will need to be imputed; otherwise, many GLM packages will simply omit those rows from the training set at train time. For example, the stats::glm() function has an na.action argument that allows the user to choose one of three options: na.omit (drop incomplete rows), na.fail (stop with an error), or na.exclude (drop incomplete rows, but pad predictions and residuals back to full length). Other GLM implementations, such as h2o::h2o.glm(), will impute the mean automatically (in both training and test data), unless otherwise specified by the user.
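A minimal sketch of manual mean imputation (impute_means is a hypothetical helper, not part of any package); note that the training-set means are reused on the test set to avoid leakage:
# Impute numeric NAs with training-set column means
impute_means <- function(train, test) {
  num_cols <- names(train)[sapply(train, is.numeric)]
  for (col in num_cols) {
    m <- mean(train[[col]], na.rm = TRUE)
    train[[col]][is.na(train[[col]])] <- m
    test[[col]][is.na(test[[col]])] <- m  # training mean applied to test data
  }
  list(train = train, test = test)
}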
There is an implementation of the standard GLM (no regularization) in the built-in "stats" package in R called glm.
Authors: The original R implementation of glm was written by Simon Davies working for Ross Ihaka at the University of Auckland, but has since been extensively re-written by members of the R Core team. The design was inspired by the S function of the same name described in Hastie & Pregibon (1992).
Backend: Fortran
In [1]:
#install.packages("caret")
library(caret)
data("Sacramento")
# Split the data into 75/25% train/test sets
set.seed(1)
idxs <- caret::createDataPartition(y = Sacramento$price, p = 0.75)[[1]]
train <- Sacramento[idxs,]
test <- Sacramento[-idxs,]
In [2]:
# Fit the GLM
fit <- glm(price ~ .,
data = train,
family = gaussian())
summary(fit)
In [3]:
# Predict on the test set
pred <- predict(fit, newdata = test)
Above we have a slight issue. The city column has new factor levels in the test set that were not present in the training set. Even though the train and test data frames originated from a single data frame, Sacramento, and therefore have identical factor levels, we still run into this problem. Let's take a closer look at the factor levels to see what's going on:
In [4]:
str(train)
In [5]:
str(test)
Although train and test have identical structure, not all the levels are represented in the training data. To validate this, let's take a look at the actual unique levels that were used in the model:
In [6]:
# Check the number of levels in the model features
sapply(fit$xlevels, length)
We can manually fix this by updating the xlevels element of the model. We have the same issue with zip, so we should go ahead and manually update that as well.
In [7]:
# Update factor levels so that prediction works
fit$xlevels[["city"]] <- union(fit$xlevels[["city"]], levels(test$city))
fit$xlevels[["zip"]] <- union(fit$xlevels[["zip"]], levels(test$zip))
In [8]:
# Predict on the test set
pred <- predict(fit, newdata = test)
summary(fit)
In [9]:
# Compute model performance on the test set
caret::R2(pred = pred, obs = test$price)
caret::RMSE(pred = pred, obs = test$price)
In [10]:
# Train a caret glm model
fit <- caret::train(form = price ~ .,
data = train,
trControl = trainControl(method = "none"),
method = "glm",
family = gaussian())
summary(fit$finalModel)
In [11]:
# Predict on the test set
pred <- predict(fit, newdata = test)
In [12]:
# Compute model performance on the test set
caret::R2(pred = pred, obs = test$price)
caret::RMSE(pred = pred, obs = test$price)
Ok, this looks much better. And we didn't have to deal with the missing factor levels! :-)
Authors: Tomas Nykodym, H2O.ai contributors
Backend: Java
The h2o package offers a data-distributed implementation of Generalized Linear Models. A "data-distributed" version uses distributed data frames, so that the whole design matrix does not need to fit into memory at once. The h2o package fits both regularized and non-regularized GLMs. The implementation details are documented here.
In [13]:
#h2o.shutdown(prompt = FALSE)
In [14]:
# h2o.glm example
#install.packages("h2o")
library(h2o)
h2o.init(nthreads = -1) #Start a local H2O cluster using nthreads = num available cores
Typically one would load a dataset in parallel from disk using the h2o.importFile() function; however, for the purposes of this tutorial, we are going to use a tiny built-in R dataset, so we can send that data to the H2O cluster (from R memory) using the as.h2o() function. We would also normally use the h2o.splitFrame() function to split the data instead of caret::createDataPartition(), but for an apples-to-apples comparison with the methods above, it's good to use the exact same train and test split, generated the same way as above.
In [15]:
# Load Sacramento dataset
library(caret)
data("Sacramento")
# Convert the data into an H2OFrame
sac <- as.h2o(Sacramento)
# Split the data into 75/25% train/test sets
set.seed(1)
idxs <- caret::createDataPartition(y = Sacramento$price, p = 0.75)[[1]]
train <- sac[idxs,]
test <- sac[-idxs,]
# Dimensions
dim(train)
dim(test)
# Columns
names(train)
In [16]:
# Identify the predictor columns
xcols <- setdiff(names(train), "price")
# Train a default GLM model with no regularization
system.time(fit <- h2o.glm(x = xcols,
y = "price",
training_frame = train,
family = "gaussian",
lambda = 0)) #lambda = 0 means no regularization
In [17]:
summary(fit)
In [18]:
# H2O computes many model performance metrics automatically, accessible by utility functions
perf <- h2o.performance(model = fit, newdata = test)
h2o.r2(perf)
sqrt(h2o.mse(perf))
Ok, so let's assume that we have wide, sparse, collinear, or big data. If your training set falls into any of those categories, it might be a good idea to use a regularized GLM.
Authors: Jerome Friedman, Trevor Hastie, Noah Simon, Rob Tibshirani
Backend: Mortran (extension of Fortran used for scientific computation)
glmnet is a package that fits a generalized linear model via penalized maximum likelihood. The regularization path is computed for the lasso or elastic-net penalty at a grid of values for the regularization parameter lambda. The algorithm is extremely fast, and can exploit sparsity in the input matrix $\boldsymbol{X}$.
Features: the supported response families are "gaussian", "binomial", "poisson", "multinomial", "cox", and "mgaussian".
The glmnet package is a fast implementation, but it requires some extra processing up-front if your data is not already represented as a numeric matrix. For example, if you have categorical data or missing data, you need to deal with that yourself.
In [19]:
#install.packages("glmnet")
#install.packages("Cairo") #for plotting lasso coefficients in Jupyter notebook
library(glmnet)
In [20]:
data("QuickStartExample") #loads 'x' and 'y'
str(x)
class(x)
In [21]:
fit <- glmnet(x, y)
We can visualize the coefficients by executing the plot() function. Each curve corresponds to a variable, and shows the path of its coefficient against the $\ell_1$-norm of the whole coefficient vector as $\lambda$ varies. The axis above indicates the number of nonzero coefficients at the current $\lambda$, which is the effective degrees of freedom for the lasso.
In [22]:
plot(fit)
In [28]:
# Simulate a binary response dataset
library(caret)
set.seed(1)
df <- caret::twoClassSim(n = 100000,
linearVars = 10,
noiseVars = 50,
corrVars = 50)
dim(df)
In [29]:
# Identify the response & predictor columns
ycol <- "Class"
xcols <- setdiff(names(df), ycol)
df[ , ycol] <- ifelse(df[ , ycol] == "Class1", 0, 1)
# Split the data into 75/25% train/test sets
set.seed(1)
idxs <- caret::createDataPartition(y = df[,ycol], p = 0.75)[[1]]
train <- df[idxs, ]
test <- df[-idxs, ]
train_y <- df[idxs, ycol]
test_y <- df[-idxs, ycol]
train_x <- model.matrix(~ -1 + ., train[, xcols])
test_x <- model.matrix(~ -1 + ., test[, xcols])
# Dimensions
dim(train_x)
length(train_y)
dim(test_x)
length(test_y)
In [30]:
head(test_y)
In [31]:
# Train a Lasso GLM
system.time(cvfit <- cv.glmnet(x = train_x,
y = train_y,
family = "binomial",
alpha = 1.0)) # alpha = 1 means lasso (this is also glmnet's default)
In [32]:
preds <- predict(cvfit$glmnet.fit,
newx = test_x,
s = cvfit$lambda.min,
type = "response")
head(preds)
In [33]:
#install.packages("cvAUC")
library(cvAUC)
cvAUC::AUC(predictions = preds, labels = test_y)
In [34]:
plot(cvfit)
In [35]:
coef(cvfit, s = "lambda.min")
Introduced in the previous section, the h2o package can perform unregularized or regularized regression. By default, h2o.glm will perform an Elastic Net regression. Similar to the glmnet function, you can adjust the Elastic Net penalty through the alpha parameter (alpha = 1.0 is Lasso and alpha = 0.0 is Ridge).
In [36]:
# Simulate a binary response dataset
library(caret)
set.seed(1)
df <- caret::twoClassSim(n = 100000,
linearVars = 10,
noiseVars = 50,
corrVars = 50)
dim(df)
In [37]:
# Convert the data into an H2OFrame
library(h2o)
h2o.init(nthreads = -1)
hf <- as.h2o(df)
In [38]:
# Identify the response & predictor columns
ycol <- "Class"
xcols <- setdiff(names(hf), ycol)
# Convert the 0/1 binary response to a factor
hf[ , ycol] <- as.factor(hf[ , ycol])
In [39]:
# Split the data into 75/25% train/test sets
set.seed(1)
idxs <- caret::createDataPartition(y = df[,ycol], p = 0.75)[[1]]
train <- hf[idxs,]
test <- hf[-idxs,]
# Dimensions
dim(train)
dim(test)
In [ ]:
# Train a Lasso GLM
system.time(fit <- h2o.glm(x = xcols,
y = ycol,
training_frame = train,
family = "binomial",
lambda_search = TRUE, # compute lasso path
alpha = 1)) # alpha = 1 means lasso, same as glmnet above
In [ ]:
# Compute AUC on test dataset
# H2O computes many model performance metrics automatically, including AUC
perf <- h2o.performance(model = fit,
newdata = test)
h2o.auc(perf)