Model Building for Large-Scale Machine Learning

In this post in my series on “Optimization Methods for Large-Scale Machine Learning” by Bottou, Curtis, and Nocedal, I want to focus on model building in machine learning.

Section 2 of the paper describes several case studies, with the purpose of showing how “the process of machine learning leads to the selection of a prediction function through solving an optimization problem.” A prediction function is a mathematical function that links the model inputs to the quantity we wish to predict. From the practitioner’s point of view, a prediction function is implicitly specified by the technique the data scientist has chosen (for example, regression or neural networks) and the trained model parameters (what is actually learned when the technique is applied to data).

For example, the structure of a neural network amounts to a description of a family of related functions. In the diagram below I have given two simple neural networks with corresponding prediction functions. The first simply adds the two inputs together. The second specifies a linear function involving a vector of inputs and trainable parameters W and b.

[Figure: two simple neural networks and their corresponding prediction functions]
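To make the diagram concrete, here is a minimal Python sketch of the two prediction functions; the variable names and the example numbers are my own illustration, not taken from the paper.

```python
import numpy as np

def h_sum(x1, x2):
    """Prediction function of the first network: simply add the two inputs."""
    return x1 + x2

def h_linear(x, W, b):
    """Prediction function of the second network: a linear map of the input vector.

    x: input vector of shape (n,)
    W: weight matrix of shape (m, n), chosen during training
    b: bias vector of shape (m,), chosen during training
    """
    return W @ x + b

# One particular member of the linear family, picked by hand for illustration
W = np.array([[0.5, -1.0]])
b = np.array([0.25])
print(h_sum(2.0, 3.0))                        # 5.0
print(h_linear(np.array([2.0, 3.0]), W, b))   # [-1.75]
```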

Training the neural network amounts to choosing a particular function from the family corresponding to the nodes. Neural networks are interesting because they yield “large-scale, highly nonlinear, and nonconvex optimization problems”. For optimization practitioners, the “nonconvex” part of this statement is important because nonconvex optimization problems are particularly challenging. Here is a snippet from Stephen Boyd’s Convex Optimization I class that makes the point well.

With this in mind we may be tempted to avoid neural networks, and deep learning, altogether. However, as section 2.2 points out, certain classification tasks, like those involving speech and images, are “not well performed in an automated manner using computer programs based on sets of prescribed rules.” Deep neural networks (DNNs) involve many internal layers of manipulations and transformations, which lead to very flexible, highly parameterized models. Therefore, while the corresponding optimization problems for DNNs are genuinely hard, the potential payoff is worth it.
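To illustrate (this is my own sketch, not code from the paper), a feed-forward DNN’s prediction function is just a composition of linear maps with nonlinear activations in between; stacking layers is what makes the family so flexible, and it is also what makes the training problem nonconvex.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def dnn_predict(x, layers):
    """Prediction function of a simple feed-forward DNN.

    layers: list of (W, b) pairs, one per layer; these are the parameters
    that training must choose. Each layer applies a linear map followed by
    a ReLU nonlinearity, except the last layer, which stays linear.
    """
    a = x
    for i, (W, b) in enumerate(layers):
        z = W @ a + b
        a = relu(z) if i < len(layers) - 1 else z
    return a

# Two hidden layers of width 3 with random (untrained) parameters
rng = np.random.default_rng(0)
layers = [(rng.normal(size=(3, 2)), np.zeros(3)),
          (rng.normal(size=(3, 3)), np.zeros(3)),
          (rng.normal(size=(1, 3)), np.zeros(1))]
print(dnn_predict(np.array([1.0, -2.0]), layers))
```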

When a machine learning application is trying to classify data, for example in handwriting recognition, it is typical to minimize a function that relates to the misclassification rate. There are various choices for the specific function, as noted here and here (for empirical risk minimization). While we want to minimize a loss function relating to the misclassification rate, we also want classifiers that are general. In other words, if a classifier works well on the data we have at the time it is learned, but poorly on data that comes in after that, it is not very useful. To guard against this, we often divide our data into training, validation, and testing sets. Read here for more.
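For instance, here is one common way to carve out those three sets. This is my own sketch using scikit-learn’s train_test_split, with synthetic stand-in data and an illustrative 60/20/20 split rather than anything prescribed by the paper.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Synthetic stand-in data; in practice X and y come from your application.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Hold out 20% as the test set, then split the remainder 75/25,
# giving roughly a 60/20/20 train/validation/test split.
X_rest, X_test, y_rest, y_test = train_test_split(X, y, test_size=0.20, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(X_rest, y_rest, test_size=0.25, random_state=42)

print(len(X_train), len(X_val), len(X_test))  # 600 200 200
```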

Section 2.3 considers the determination of a prediction function that accurately predicts outputs given inputs. We want this function to work well over the set of inputs that we will see in the real world, not just the training set. Therefore “one should choose the prediction function h by attempting to minimize a risk measure over an adequately selected family of prediction functions”. A family of functions can be described in many ways, for example as a particular functional form with free parameters, such as m x + b with parameters (m, b). Adequately selected means:

  • Able to achieve low empirical risk by choosing a rich family of functions or by using knowledge about the problem domain.
  • The gap between expected and empirical risk should be small, that is, the family should not be so rich that it overfits the input data.
  • Chosen so the resulting optimization problem can be solved efficiently.

These considerations are at odds with one another: some point toward broader, more complicated families of functions and others toward simpler ones. With regard to the first consideration, increasing the number of training samples is helpful. So is choosing a function family with a high “capacity”, which can be loosely described as the family’s “complexity, expressive power, richness, or flexibility.”
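To make this concrete, here is a small sketch of empirical risk minimization over the simple family m x + b mentioned above. The data is made up for illustration; with squared loss this family has a closed-form minimizer (ordinary least squares), though any optimizer could be handed the same objective.

```python
import numpy as np

# Made-up training data: inputs x and noisy outputs y
rng = np.random.default_rng(1)
x = rng.uniform(0, 10, size=50)
y = 2.0 * x + 1.0 + rng.normal(scale=1.0, size=50)

def empirical_risk(m, b, x, y):
    """Average squared loss of the prediction function h(x) = m*x + b."""
    return np.mean((m * x + b - y) ** 2)

# For squared loss over this linear family, the empirical risk minimizer
# is the ordinary least squares solution.
A = np.column_stack([x, np.ones_like(x)])
m_hat, b_hat = np.linalg.lstsq(A, y, rcond=None)[0]

print(m_hat, b_hat, empirical_risk(m_hat, b_hat, x, y))
```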

Having considered what makes a good prediction function, the authors next consider procedures for finding them. The approach considered in Section 2.3 is called structural risk minimization – here is a good overview. A nice visual representation is given in Figure 2.5, but the point is to avoid both underfitting and overfitting. Underfitting happens when the observed empirical risk (the frequency of observed misclassification) is high. This occurs when the prediction function is insufficiently expressive to link inputs to outputs, which can happen when the network structure doesn’t make sense or is too simplistic. Overfitting happens when increasing the number or complexity of the model parameters begins to increase the misclassification rate on real-world data, even as the misclassification rate on our training data decreases. In other words, the model no longer generalizes to the real world – it is too highly tuned to the data at hand. All of this implies that picking a functional family solely because it gives low empirical risk may be counterproductive. The remedy is to split the input data into training, validation, and testing sets, as alluded to in the first post in this series.
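A quick way to see this trade-off for yourself (my own illustration, not Figure 2.5 from the paper) is to fit polynomial families of increasing degree and watch the training error keep falling while the error on held-out data eventually turns back up:

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0, 1, 40)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=x.size)

# Hold out every other point to stand in for "real-world" inputs.
train, val = x[::2], x[1::2]
y_train, y_val = y[::2], y[1::2]

for degree in [1, 3, 9]:
    coeffs = np.polyfit(train, y_train, degree)               # fit on training data only
    err_train = np.mean((np.polyval(coeffs, train) - y_train) ** 2)
    err_val = np.mean((np.polyval(coeffs, val) - y_val) ** 2)
    print(f"degree {degree}: train MSE {err_train:.3f}, validation MSE {err_val:.3f}")
```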

In the next post in this series, we’ll cover Section 3, which describes the optimization methods used to train these models.

Author: natebrix

Follow me on Twitter at @natebrix.
