Noise Reduction Methods for Large-Scale Machine Learning

I have two posts remaining in my series on “Optimization Methods for Large-Scale Machine Learning” by Bottou, Curtis, and Nocedal. You can find the entire series here. These last two posts will discuss improvements on the base stochastic gradient method. Below I have reproduced Figure 3.3, which suggests two general approaches. I will cover noise reduction in this post, and second-order methods in the next.

[Figure 3.3: a schematic of the space of methods, with noise reduction along the left-to-right axis and second-order information along the up-and-down axis]

The left-to-right direction on the diagram signifies noise reduction techniques. We say that the SG search direction is “noisy” because it includes information from only one (randomly chosen) sample per iteration. We use a noisy direction, of course, because it’s too expensive to compute the full gradient over all samples on every iteration. But we can consider using a small batch of samples per iteration (a “minibatch”), or using information from previous iterations. The idea here is to find a happy medium between the far left of the diagram, which represents one sample per iteration, and the right, which represents using the full gradient.

Section 5 describes several noise reduction techniques. Dynamic sample size methods vary the number of samples in the mini-batch from iteration to iteration, for example by increasing the batch size geometrically with the iteration count. Gradient aggregation, as the name suggests, involves reusing gradient information from past iterations. The SVRG method starts each cycle with a full batch gradient at a “snapshot” iterate, then for subsequent iterations corrects cheap single-sample gradients using that snapshot. The SAGA method involves “taking the average of stochastic gradients evaluated at previous iterates”. Finally, iterate averaging methods keep a running average of the iterates from previous steps and report that average as the solution.
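To make one of these concrete, here is a minimal sketch of the SVRG idea in Python/NumPy. Everything here (the function names, the stepsize, the tiny least-squares problem at the end) is a hypothetical illustration rather than the paper’s pseudocode; the point is just the structure of an occasional full gradient at a snapshot, with cheap corrected single-sample gradients in between.

import numpy as np

def svrg(grad_i, w0, n, step, n_outer=10, n_inner=100, seed=0):
    # Sketch of SVRG: grad_i(w, i) returns the gradient of the loss on sample i.
    rng = np.random.default_rng(seed)
    w = w0.copy()
    for _ in range(n_outer):
        w_snap = w.copy()
        # Occasional expensive step: the full gradient at the snapshot iterate.
        full_grad = np.mean([grad_i(w_snap, i) for i in range(n)], axis=0)
        for _ in range(n_inner):
            i = rng.integers(n)
            # Cheap step: a single-sample gradient, corrected by snapshot information.
            g = grad_i(w, i) - grad_i(w_snap, i) + full_grad
            w = w - step * g
    return w

# Hypothetical use on a tiny least-squares problem.
A = np.random.default_rng(1).normal(size=(50, 3))
b = A @ np.array([1.0, -2.0, 0.5])
w_hat = svrg(lambda w, i: (A[i] @ w - b[i]) * A[i], np.zeros(3), n=50, step=0.05)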

The motivations behind these various noise reduction methods are more or less the same: make more progress in a single step without paying too much of a computational cost. The primary tradeoff, beyond the increased computational cost per iteration, is the extra storage needed to hold the state used to compute the search direction. Section 5 of the paper discusses these tradeoffs in light of convergence criteria.

[Updated 8/24/2016] Going back to our diagram, the up-and-down dimension of the diagram represents so-called “second-order methods”. Gradient-based methods, including SG, are first-order methods because they use a first-order (linear) approximation to the objective function we want to optimize. Second-order methods look at the curvature of the objective function to obtain better search directions. Once again, there is a tradeoff: using curvature information is more work per iteration, but we hope that by computing better search directions we’ll need far fewer iterations to get a good solution. I had originally intended to cover Section 6 of the paper, which describes several such methods in detail, but I will leave the interested reader to dig through that section themselves!


Analyses of Stochastic Gradient Methods

I am continuing my series on “Optimization Methods for Large-Scale Machine Learning” by Bottou, Curtis, and Nocedal. You can find the entire series here.

Last time we discussed stochastic and batch methods for optimization in machine learning. In both cases we’re trying to optimize a loss function that will give us good learning parameters for a deep learning model. We do this iteratively. A pure stochastic gradient (SG) approach picks one sample per iteration to define a search direction, whereas a batch method will pick multiple samples.

In this post I want to cover most of Section 4, which concerns the convergence and complexity of SG methods. For completeness I have reproduced the authors’ summary of a general purpose SG algorithm below:

Choose an iterate w_1
for k = 1, 2, … do
   generate a random number s_k
   compute a stochastic vector g(w_k, s_k)
   choose a stepsize a_k > 0.
   w_{k+1} ← w_k − a_k g(w_k, s_k)

In this algorithm, g is the search direction. In this series, we’ve already discussed three different choices for g (a small code sketch follows the list below):

  • g is the gradient. This is conventional gradient descent. In this case the random number is ignored.
  • g is the gradient for a single sample. This is conventional stochastic gradient.
  • g is the gradient over a subset of the samples. This is “mini-batch” SG.
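Here is a minimal NumPy sketch of the loop above. The loss, the stepsize schedule, and the parameter names are hypothetical placeholders rather than anything prescribed by the paper; setting batch_size to 1, to a small number, or to n reproduces the three choices of g just listed.

import numpy as np

def sg(grad_batch, w1, n, batch_size=1, n_iters=1000, a0=0.1, seed=0):
    # grad_batch(w, idx) should return the average gradient over the samples
    # indexed by idx; batch_size=1 is pure SG, batch_size=n is full-gradient descent.
    rng = np.random.default_rng(seed)
    w = w1.copy()
    for k in range(1, n_iters + 1):
        idx = rng.choice(n, size=batch_size, replace=False)  # the random draw s_k
        g = grad_batch(w, idx)   # stochastic vector g(w_k, s_k)
        a_k = a0 / k             # one possible diminishing stepsize
        w = w - a_k * g
    return w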

As the authors show, it’s not easy to make definitive statements when comparing mini-batch and pure SG methods. The gist of what they show is that SG has better convergence properties in theory, but mini-batch methods can provide certain practical advantages. That is: “one can, however, realize benefits of mini-batching in practice since it offers important opportunities for software optimization and parallelization; e.g., using sizeable mini-batches is often the only way to fully leverage a GPU processor.” The rest of Section 4 spells this out in more detail through convergence results and complexity analysis.

As a practitioner, I honestly don’t place huge weight on the convergence theorems presented in Section 4. The key result for me is Theorem 4.9, which states that for a nonconvex objective (which we have in most deep learning scenarios) the SG method converges when the stepsizes diminish according to the classical schedule: the stepsizes sum to infinity, but their squares have a finite sum.

More interesting (for me) is Section 4.4, which discusses the overall work complexity of applying SG to deep learning scenarios. In many real-world big data scenarios, we train a model under a fixed computational time budget. This is different from many traditional optimization scenarios, where we let the code run until we achieve a particular solution accuracy. In our situation, the total expected error in the model is the sum of three components: the expected risk using optimal parameters, the gap between expected and empirical risk (the estimation error), and the optimization accuracy. Minimizing this error involves tradeoffs, for example “if one decides to make the optimization more accurate … one might need to make up for the additional computing time by: (i) reducing the sample size n, potentially increasing the estimation error; or (ii) simplifying the function family H, potentially increasing the approximation error.” These are familiar techniques for machine learning practitioners; the benefit provided here is a more formal characterization of how these techniques impact the overall solution error.

Returning to the choice of stochastic versus batch approaches, the complexity discussion in Section 4.4 shows that “Even though a batch approach possesses a better dependency on epsilon, this advantage does not make up for its dependence on n. […] In conclusion, we have found that a stochastic optimization algorithm performs better in terms of expected error, and, hence, makes a better learning algorithm in the sense considered here.” Again, even this statement could be tempered somewhat by the computational benefits associated with batching on a particular computational infrastructure, that is, the benefits of GPUs and parallelism.

Section 4.5 is a commentary on some of the remaining challenges and questions that must be confronted when using SG for large-scale machine learning. Since the issues raised in Section 4.5 pair nicely with the discussion in Section 5, I’ll save both for next time.

Stochastic and Batch Methods for Machine Learning

I am continuing my series on “Optimization Methods for Large-Scale Machine Learning” by Bottou, Curtis, and Nocedal. You can find the entire series here.

In previous posts we discussed the use of deep neural networks (DNNs) in machine learning, and the pivotal role of the optimization of carefully selected prediction functions in training such models. These topics roughly correspond to the first two sections of the paper.

Section 3 provides an overview of optimization methods appropriate for DNNs. We begin where we left off in Section 2 by assuming a family of prediction functions parameterized by w. In other words, w represents the learning parameters. We also assume a loss function that depends on predicted and actual values, for example the misclassification rate. Typically we’re minimizing the loss function f over a set of samples – the empirical risk. We call this R_n(w), which in turn is the average of the loss over the samples: R_n(w) = (1/n) Σ_i f_i(w). Given all of this, there are two fundamental paths for minimizing risk, sketched in code after the list below. In each case we iteratively improve the learning parameters step by step, where we write the parameters at step k as w_k.

  • stochastic: w_{k+1} ← w_k − α_k ∇f_{i_k}(w_k), where the index i_k is chosen randomly.
  • batch: w_{k+1} ← w_k − (α_k / n) Σ_i ∇f_i(w_k)
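As a concrete (and entirely hypothetical) illustration, here is what one step of each path looks like in NumPy for a least-squares loss f_i(w) = ½(a_i·w − b_i)², whose gradient is (a_i·w − b_i) a_i. The data and the stepsize are placeholders.

import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(1000, 5))     # one row a_i per sample
b = A @ rng.normal(size=5)         # targets
w, alpha = np.zeros(5), 0.01

def grad_f_i(w, i):
    return (A[i] @ w - b[i]) * A[i]

# Stochastic step: the gradient of a single randomly chosen sample.
i_k = rng.integers(len(b))
w_stochastic = w - alpha * grad_f_i(w, i_k)

# Batch step: the average gradient over all n samples (n times the work).
w_batch = w - (alpha / len(b)) * sum(grad_f_i(w, i) for i in range(len(b)))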

The difference between the two paths is how many samples we consider on each step. The tradeoff is between per-iteration cost (where stochastic wins) and per-iteration improvement (where batch wins). Stochastic algorithms are a good choice for DNNs because they employ information more efficiently. If training samples are the same, or similar, then the added per-step improvement from a batch method is not going to be worth it, because some (or most) of the added value turns out to be redundant. “If one believes that working with only, say, half of the data in the training set is sufficient to make good predictions on unseen data, then one may argue against working with the entire training set in every optimization iteration. Repeating this argument, working with only a quarter of the training set may be useful at the start, or even with only an eighth of the data, and so on. In this manner, we arrive at motivation for the idea that working with small samples, at least initially, can be quite appealing.”

The authors summarize: “there are intuitive, practical, and theoretical arguments in favor of stochastic over batch approaches in optimization methods for large-scale machine learning”. I will omit a summary of the theoretical motivations, but they are found in Section 3.3.

Next the authors consider improving on SG. One path involves trying to realize the best of both worlds between batch and stochastic methods, namely preserving the low per-iteration cost of stochastic methods while making more progress per iteration. Another path is to “attempt to overcome the adverse effects of high nonlinearity and ill-conditioning”. This involves trying to employ information beyond just the gradient. We’ll examine these alternatives in future posts in this series, when we get to later sections in the paper.

Model Building for Large-Scale Machine Learning

In this post on my series on “Optimization Methods for Large-Scale Machine Learning” by Bottou, Curtis, and Nocedal, I want to focus on model building in machine learning.

Section 2 of the paper describes several case studies, with the purpose of showing how “the process of machine learning leads to the selection of a prediction function through solving an optimization problem.” A prediction function is a mathematical function that links the model inputs to the quantity we wish to predict. From the practitioner’s point of view, a prediction function is implicitly specified by the technique the data scientist has chosen (for example, regression or neural networks) and trained model parameters (what is actually learned when the technique is applied to data).

For example, the structure of a neural network amounts to a description of a family of related functions. In the diagram below I have given two simple neural networks with corresponding prediction functions. The first simply adds the two inputs together. The second specifies a linear function involving a vector of inputs and training parameters W and b.

[Diagram: two simple neural networks and their prediction functions – one that simply adds its two inputs, and one that computes a linear function of an input vector with parameters W and b]

Training the neural network amounts to choosing a particular function from the family corresponding to the nodes. Neural networks are interesting because they yield “large-scale, highly nonlinear, and nonconvex optimization problems”. For optimization practitioners, the “nonconvex” part of this statement is important because nonconvex optimization problems are particularly challenging. Here is a snippet from Stephen Boyd’s Convex Optimization I class that makes the point well.
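To connect the diagram to code, here is a hypothetical sketch of the second family: linear prediction functions parameterized by W and b. Training amounts to searching over values of (W, b); the particular numbers below are arbitrary placeholders, and the first choice reproduces the “add the two inputs” network from the diagram.

import numpy as np

def predict(x, W, b):
    # One family of prediction functions: h(x; W, b) = W x + b.
    # Each choice of (W, b) selects a single member of the family.
    return W @ x + b

x = np.array([1.0, 2.0])
h_add = predict(x, W=np.array([[1.0, 1.0]]), b=np.array([0.0]))     # adds the inputs
h_other = predict(x, W=np.array([[0.5, -0.3]]), b=np.array([2.0]))  # another member of the family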

With this in mind we may be tempted to avoid neural networks, and deep learning, altogether. However, as Section 2.2 points out, certain classification tasks, like those involving speech and images, are “not well performed in an automated manner using computer programs based on sets of prescribed rules.” Deep neural networks (DNNs) involve many internal layers of manipulations and transformations, which lead to very flexible, highly parameterized models. Therefore, while the corresponding optimization models for DNNs are really damn hard, the potential payoff is worth it.

When a machine learning application is trying to classify data, for example in handwriting recognition, it is typical to minimize a function that relates to the misclassification rate. There are various choices for the specific function, as noted here and here (for empirical risk minimization). While we want to minimize a loss function relating to the misclassification rate, we also want classifiers that are general. In other words, if they work great on the data that we have at the time the classifier is learned, but poorly on data that comes in after that, our classifier is not very useful. For this we often divide our data into training, validation, and testing sets. Read here for more.

Section 2.3 considers the determination of a prediction function that accurately predicts outputs given inputs. We want this function to work well over the set of inputs that we will see in the real world, not just the training set. Therefore “one should choose the prediction function h by attempting to minimize a risk measure over an adequately selected family of prediction functions”. A family of functions can be described in many ways, for example as a particular functional form with parameters in it as in m x + b for parameters (m, b). Adequately selected means:

  • Able to achieve low empirical risk by choosing a rich family of functions or by using knowledge about the problem domain.
  • The gap between expected and empirical risk should be small; that is, the chosen function should not be overly tuned to (overfit) the training data.
  • Chosen so the resulting optimization problem can be solved efficiently.

These considerations are at odds with one another, as some point towards broader, more complicated families of functions and others towards simpler ones. With regard to the first consideration, increasing the number of training samples is helpful. So is choosing a function family with a high “capacity”, which can be loosely described as the family’s “complexity, expressive power, richness, or flexibility.”

Having considered what makes a good prediction function, the authors next consider procedures for finding one. The approach considered in Section 2.3 is called structural risk minimization – here is a good overview. A nice visual representation is given in Figure 2.5, but the point is to avoid both underfitting and overfitting. Underfitting happens when the observed empirical risk (the frequency of observed misclassification) is high. This happens when the prediction function is insufficiently expressive to link inputs to outputs, which can happen when the network structure doesn’t make sense or is too simplistic. Overfitting happens when increasing the number or complexity of the model parameters begins to increase the misclassification rate on real-world data. This can happen even as the misclassification rate on our training data decreases. In other words, the model no longer generalizes effectively to the real world – it is too highly tuned to the data at hand. All of this implies that picking a functional family solely to minimize empirical risk may be counterproductive. The remedy to this situation is to split input data into training, validation, and testing sets, as alluded to in the first post in this series.

In the next post in this series, we’ll cover Section 3, which describes the optimization methods used to train these models.

Optimization Methods for Large-Scale Machine Learning

Hey, so I mostly read a 93 page paper. The topic is a worthy one: optimization methods for large-scale machine learning. Deep learning powers best in class speech, image, and text intelligence on the web today, and deep learning is in turn powered by optimization. I will summarize “Optimization Methods for Large-Scale Machine Learning” by Bottou, Curtis, and Nocedal over the next few posts because it provides a useful operations research-centered evaluation of an important area in machine learning. In general, machine learning practitioners don’t know shit about operations research, and vice versa. This paper, along with work of Stephen Wright at Wisconsin (check out this talk), will certainly help to remedy this situation. I also predict that this paper will spur new advances in deep learning.

Here goes, and remember that I’m trying to summarize 93 pages!

The title of the paper is quite broad, but the focus is primarily on the use of the stochastic gradient descent (SGD) method (and variants) in deep learning applications. If you don’t have any previous experience with these topics, this series may not be for you, but I will try to summarize anyway. The term “deep learning” describes a range of machine learning algorithms that are used to classify or predict. Deep learning is primarily distinguished by:

  1. The use of much more input data than is typical for machine learning,
  2. Models that have many internal layers of data manipulation and transformation,
  3. A reliance on parallel and GPU processing.

Training a deep learning algorithm involves finding model parameters that produce effective predictors or classifiers. Finding the values of variables that produce the best results for a particular objective (or “goal”) is the job of optimization. The stochastic gradient descent method is so-named because it repeatedly takes steps in the direction of steepest descent, which is given by the negative of the gradient of the objective we want to optimize. If we think of the objective function as a hilly field, then the negative gradient always points in the steepest downhill direction from where we stand. The “stochastic” part of SGD applies because rather than looking at all of the samples over which the objective function is defined, we only look at one (or a few) randomly chosen samples. As compared to using the full gradient, this approach takes less time to take a single step, but the step is possibly less effective in improving the value of our objective function. In both theory and practice we can establish that the tradeoff is often worth it. Characterizing these tradeoffs more concretely is one of the objectives of the paper. As a supplement to the paper and this post, check out this great post by Sebastian Ruder for an overview of gradient descent algorithms for machine learning.

In my next post in this series, I will cover Section 2 which describes the selection of a prediction function that is useful for modeling but practical for model training at scale.

Updated 8/2/2016 to correctly summarize SGD. Thanks J-F!

Finding Optimal State Capitol Tours on the Cloud with NEOS

My last article showed you how to find an optimal tour of all 48 continental US state capitols using operations research. I used the Python API of the popular Gurobi solver to create and solve a traveling salesman problem (TSP) model in a few seconds.

In this post I want to show you how to use Concorde, the world’s best TSP solver, for free on the cloud using the NEOS optimization service. In less than 100 lines of Python code, you can find the best tour. Here it is:

[Map: the optimal 48-state-capitol tour]

Using NEOS is pretty easy. You need to do three things to solve an optimization problem:

  1. Create a NEOS account.
  2. Create an input file for the problem you want to solve.
  3. Give the input file to NEOS, either through their web interface, or by calling an API.

Let’s walk through those steps for the state capitol problem. If you just want to skip to the punchline, here is my code.

Concorde requires a problem specification in the TSPLIB format. This is a text-based format in which we specify the distances between all pairs of cities. Recall that Randy Olson found the distances between all state capitols using the Google Maps API in this post. Here is a file with this information. Using the distances, I created a TSPLIB input file with the distance matrix – here it is.
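If you want to generate such a file yourself, here is a small sketch of one way to do it from a distance matrix. Concorde works with integer edge weights, so the distances are rounded; the file and problem names are arbitrary placeholders.

def write_tsplib(dist, path="capitols.tsp", name="capitols"):
    # Write a symmetric distance matrix (a list of lists) as a TSPLIB file
    # with an explicit full distance matrix.
    n = len(dist)
    lines = [
        "NAME: " + name,
        "TYPE: TSP",
        "DIMENSION: " + str(n),
        "EDGE_WEIGHT_TYPE: EXPLICIT",
        "EDGE_WEIGHT_FORMAT: FULL_MATRIX",
        "EDGE_WEIGHT_SECTION",
    ]
    for row in dist:
        lines.append(" ".join(str(int(round(d))) for d in row))
    lines.append("EOF")
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")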

The next step is to submit the file to NEOS. Using the xmlrpc Python module, I wrote a simple wrapper to submit TSPLIB files to NEOS. The NEOS submission is an XML file that wraps the contents of the TSPLIB data, and also tells NEOS that we want to use the Concorde solver. The XML file is given to NEOS via an XML-RPC call. NEOS returns the results as a string – the end of the string contains the optimal tour. Here is the body of the primary Python function that carries out these steps:

def solve_tsp_neos_concorde(dist):
    # Build the NEOS job XML around the TSPLIB data, submit it,
    # and parse the optimal tour out of the returned text.
    xml = make_neos_concorde(dist)
    neos = NeosClient()
    result = neos.run(xml)
    return tour_from_neos_concorde_result(result)
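
For reference, here is a hedged sketch of what the NeosClient.run call can look like, using the submitJob, getJobStatus, and getFinalResults methods from the NEOS XML-RPC documentation and the usual NEOS endpoint. Details such as the polling interval and error handling are illustrative and may differ from my actual wrapper.

import time
import xmlrpc.client

class NeosClient:
    # Thin wrapper around the NEOS XML-RPC interface.
    def __init__(self, url="https://neos-server.org:3333"):
        self.server = xmlrpc.client.ServerProxy(url)

    def run(self, xml):
        # Submit the job; NEOS returns a job number and a password.
        job_number, password = self.server.submitJob(xml)
        if job_number == 0:
            raise RuntimeError("NEOS submission failed: " + password)
        # Poll until the job finishes.
        while self.server.getJobStatus(job_number, password) != "Done":
            time.sleep(5)
        # The final results come back base64-encoded.
        return self.server.getFinalResults(job_number, password).data.decode()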

When I run this code, I obtain the same tour as in my initial post. Hooray! You can also extend my code (which is based on NEOS documentation) to solve many other kinds of optimization models.

Computing Optimal Road Trips Using Operations Research

Randy Olson recently wrote about an approach for finding a road trip that visits all 48 continental US state capitols. Randy’s approach involves genetic algorithms and is accompanied by some very effective visualizations. Further, he examines how the length of these road trips varies as the number of states visited increases. While the trips shown in Randy’s post are very good, they aren’t quite optimal. In this post I’d like to show how you can find the shortest possible road trips in Python using the magic science of operations research. I suggest you read Randy’s post first to get up to speed!

An “optimal road trip” is an ordering of the 48 state capitols that results in the smallest possible driving distance as determined by Google Maps. This is an example of what is known as the Traveling Salesman Problem (TSP). In his work, Randy has made a couple of simplifying assumptions that I will also follow:

  • The driving distance from capitol A to capitol B is assumed to be the same as from B to A. We know this isn’t 100% true because of the way roads work. But close enough.
  • We’re optimizing driving distance, not driving time. We could easily optimize “average” driving time using data provided by Google. Optimizing expected driving time given a specified road trip start date and time is actually pretty complicated, given that we don’t know what the future will bring: traffic jams, road closures, storms, and so on.

These aren’t “bugs”, just simplifying assumptions. Randy used the Google Maps API to get driving distances between state capitols – here’s the data file. Note that Google Maps returns distances in kilometers so you’ll need to convert to miles if that’s your preference.

Randy’s approach to solve this problem was to use a genetic algorithm. Roughly speaking, a genetic algorithm starts with a whole bunch of randomly generated tours, computes their total distances, and repeatedly combines and modifies them to find better solutions. Following the analogy to genetics, tours with smaller total distances are more likely to be matched up with other fit tours to make brand new baby tours. As Randy showed in his post, within 20 minutes his genetic algorithm is able to produce a 48 state tour with a total length of 13,310 miles.

It turns out that we can do better. An inaccuracy in Randy’s otherwise great article is the claim that it’s impossible to find optimal tours for problems like these. You don’t have to look at all possible 48-city road trips to find the best one – read this post by Michael Trick. What we can do instead is rely on the insanely effective field of operations research and its body of 50+ years of work. In an operations research approach, we build a model for our problem based on the decisions we want to make, the overall objective we have in mind, and restrictions and constraints on what constitutes a solution. This model is then fed to operations research software (optimization solvers) that use highly tuned algorithms to find provably optimal solutions. The algorithms implemented in these solvers rule out vast swaths of possible tours in a brutally efficient manner, making the seemingly impossible routine.

The best-in-class TSP solver is Concorde, which is based on an operations research approach. You don’t need Concorde to solve this TSP – a 48-city road trip is puny by operations research standards. I have chosen to use the Gurobi solver because it is very powerful, includes an easy-to-use Python API, and has a cloud version. Gurobi even includes an example that covers this very problem! The key to their model is to define a yes-no decision variable for each pair of state capitols; a rough sketch follows the list below. A “yes” value for a decision variable indicates that pair of cities is on the optimal tour. The model also needs to specify the rules for what it means to be an optimal tour:

  • The shorter the total distance of the tour (which is determined by the distances between all of the “yes” pairs of cities), the better. This is the objective (or goal) that we seek to optimize.
  • The traveller will arrive at each capitol from another capitol, and will leave for another capitol. In other words, exactly two decision variables involving a capitol will be “yes”.
  • The tour must be one single loop: solutions made up of several smaller disjoint loops (“subtours”) are not allowed.
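Here is the rough sketch: the core of such a model in gurobipy, with a binary variable per pair of capitols and the degree-two constraints. This is modeled on Gurobi’s example but is not their exact code (nor necessarily mine); note that the subtour elimination constraints from the third rule still have to be added, which Gurobi’s example does lazily in a callback.

import gurobipy as gp
from gurobipy import GRB

def build_tsp_model(dist):
    # dist[i, j] = driving distance between capitols i and j, for i < j.
    n = max(j for _, j in dist) + 1
    m = gp.Model("capitol_tsp")

    # One yes/no variable per pair of capitols; its objective coefficient
    # is the driving distance between them (minimized by default).
    x = m.addVars(dist.keys(), obj=dist, vtype=GRB.BINARY, name="x")
    for i, j in list(x.keys()):
        x[j, i] = x[i, j]  # refer to the same edge from either direction

    # Each capitol touches exactly two selected edges: arrive once, leave once.
    m.addConstrs(x.sum(i, "*") == 2 for i in range(n))

    # Subtour elimination constraints (rule three) must still be enforced,
    # e.g., lazily in a callback as in Gurobi's TSP example.
    return m, x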

(Sidebar: If you are not used to building optimization models then the code probably won’t make much sense and you may have no idea what the hell the Gurobi tutorial is talking about. No offense, Gurobi, the tutorial is very well written! The challenge of writing optimization models, which involves writing down precise mathematical relationships among the decision variables you want solved for, is what prevents many computer scientists and data scientists from using the fruits of operations research more often. This is especially the case when the models for classical problems such as the TSP require things like “lazy constraints” that even someone experienced with operations research may not be familiar with. I wrote about this in more detail here. On the other hand, there are a lot of great resources and tutorials out there, and it’s simply good practice to rely on proven approaches that provide efficient, accurate results. This is what good engineers do. Anyway, the point is that you can easily steal Gurobi’s sample for this problem and replace their “points” variable with the distances from the data file above. If I had wanted to do this with an open source package, or with Concorde itself, I could have done it that way too.)

My code, based on Gurobi’s example, is able to find a tour with a total length of 12,930 miles, about 380 miles shorter than the original tour. What’s more, it takes seconds to find the answer. Here is my Python code. Here is the tour – click here to explore it interactively.

[Map: the optimal 48-state-capitol tour found with Gurobi]

A text file with the tour is here and a GPX file of the tour is here courtesy of gpsvisualizer.com. This optimal tour is very close to the one the genetic algorithm came up with. Here is a screenshot for reference:

[Map: the genetic algorithm’s tour from Randy’s post, for comparison]

An interesting twist is that Randy extends the problem to consider both the driving distance and the number of states visited. If we are willing to do a tour of, say, 10 states, then clearly the total distance for the tour will be much shorter than a 48 state tour. Randy has a nice animation showing tours of differing numbers of states, as well as a chart that plots the number of states visited against the estimated driving time. This curve is called the efficient frontier – you sometimes see similar curves in financial models.

The modified problem of finding the shortest tour involving K of the 48 state capitols can also be solved by Gurobi. I extended the optimization model slightly (a sketch follows the list):

  • Introduce new yes-no decision variables for each capitol: “yes” if the capitol is one of the lucky K to be visited.
  • Exactly K of the new decision variables should be “yes”.
  • Fix up the original model so it doesn’t force the tour through the other N−K capitols that are not on our mini tour.
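Here is a hedged sketch of how those changes can look, building the whole model from scratch; the variable names are mine and the exact formulation in my code may differ slightly. The key change is that a capitol’s “degree” in the tour is 2 if it is visited and 0 if it is not.

import gurobipy as gp
from gurobipy import GRB

def build_k_subset_tsp(dist, n, K):
    # Shortest loop through exactly K of the n capitols (sketch).
    m = gp.Model("capitol_tsp_k")
    x = m.addVars(dist.keys(), obj=dist, vtype=GRB.BINARY, name="x")
    for i, j in list(x.keys()):
        x[j, i] = x[i, j]

    # y[i] = 1 if capitol i is one of the lucky K to be visited.
    y = m.addVars(n, vtype=GRB.BINARY, name="y")
    m.addConstr(y.sum() == K)

    # A visited capitol touches exactly two tour edges; an unvisited one, none.
    m.addConstrs(x.sum(i, "*") == 2 * y[i] for i in range(n))

    # Subtour elimination among the visited capitols still has to be
    # handled separately (see the note below about lazy constraints).
    return m, x, y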

(I also had to modify the model because I am running on the cloud and the “lazy constraints” mentioned in Gurobi’s TSP writeup don’t work in the cloud version of Gurobi.)

With this new code in place I can call it for K=3…47 and get this optimal efficient frontier curve:

[Chart: the optimal efficient frontier – number of state capitols visited versus tour length, for K = 3…47]

The distances and tours for all of these mini tours are given here.

What have we learned here? In about 200 lines of Python code we were able to efficiently find provably optimal solutions for the original road trip problem, as well as the “pareto optimization” extension. If you’re a data scientist, get familiar with operations research principles because it will certainly pay off!