The Amazingly Wonderful Effectiveness of Operations Research

Hey, have you been reading Sanjay Saigal’s posts on Jim Fallows’s blog this week? I have been enjoying them all, particularly this one about "the Unreasonable Effectiveness of Operations Research". This post is about that post, so go read it now if you haven’t already. I’ll wait…
 
In a recent NYTimes blog post, Steve Lohr cites an improvement factor of 43 million in the performance of a particular software program over a span of 15 years (see page 71 of the original White House report). While a factor of 1,000 was due to faster processors (Moore’s Law), the remaining factor of 43,000 was due to improvements in the underlying software algorithms. Simply put, human beings are getting smarter about the code they write. Sanjay Saigal writes that this example overstates the case for "software outstripping Moore’s Law" in two ways:

  1. the software program being measured (in this case, optimization software for "mixed-integer programs") is only one part of a larger system, so the true impact on the user of such a system is less than what is claimed,
  2. mixed integer programming is not representative of optimization solvers, let alone all software.
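
To put those two factors in perspective: they compound multiplicatively, which is exactly how the report arrives at its headline number (this is just the report’s own arithmetic restated):

$$
\underbrace{1{,}000}_{\text{faster hardware}} \times \underbrace{43{,}000}_{\text{better algorithms}} = 43{,}000{,}000
$$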

He goes on to make three observations (which I’ll get into later). I love the post, but I actually take issue with both conclusions and two of the three observations 😉

But let’s hit pause first. For those who haven’t heard about operations research, or "mixed-integer programs", I can imagine a lot of this sounds pretty esoteric. I imagine eyes glazing over in the same way that mine do when I read certain aviation-related posts on Jim Fallows’s blog – I get a bit lost in the details. I don’t know what the terms mean, and I am too lazy to find out. This is not Fallows’s or Saigal’s fault! When we graze blogs, twitter feeds, TV, and books the way we do – superficially and often without pause or reflection – the big picture is sometimes lost. The big picture, as I see it, is this: software improvements, especially in the form of algorithms, have had a much, much, much bigger impact on human progress than hardware improvements over the last 40 years. (Admittedly one cannot exist without the other.) So this discussion is ultimately about understanding the source of these phenomenal improvements. These improvements have had a profound impact on the lives of pretty much everyone in the developed world. (And make no mistake, it’s shameful that there has not been a similar impact on the developing world – and an interesting question for another day is: how the hell do we fix that?)

Let’s turn to Saigal’s first point, about the end-user impact of these speed-ups. In the words of Gordon Gekko (as near as I can remember): speed is good. Speed is good not just because it allows you to get an answer faster, but because it allows you to get answers to questions that you weren’t able to answer at all before. Companies such as Boeing, Amazon, UPS, and yes, Microsoft all solve immense, difficult optimization problems daily (even hourly). They do so because the software built by people like Bob Bixby (and me ;)) spits out numbers that represent decisions – decisions that make the difference between profit or loss, smiles or eye-rolling, click or no click. So as innovation happens, the result is not only faster software, but broader applicability. Optimization software will touch the lives of more and more people as it gets faster (and easier to use). This broader reach must also be taken into consideration when assessing the impact of these improvements, confined as they are to one chapter of a larger story.

Regarding the second point, it’s worth noting that even though "mixed-integer programming" sounds esoteric, it is not. It is perhaps the most widely employed tool for solving optimization problems – I would bet (and win…) that most Fortune 500 companies solve mixed-integer programs on a regular basis. Mixed-integer programming is ubiquitous, and so the astonishing improvements in this area are significant enough to carry quite a bit of weight – though admittedly not quite as much as Lohr claims.
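
For readers wondering what a mixed-integer program actually looks like, here is a minimal sketch in Python using the open-source PuLP library – my choice purely for illustration; the products and numbers are made up, and real industrial models run to millions of variables:

```python
# A toy mixed-integer program: pick whole-number production quantities
# for two products to maximize profit within a machine-hours budget.
from pulp import LpProblem, LpVariable, LpMaximize, value

prob = LpProblem("production_mix", LpMaximize)

# "Integer" variables are what put the "integer" in mixed-integer
# programming; continuous variables can be mixed in alongside them.
widgets = LpVariable("widgets", lowBound=0, cat="Integer")
gadgets = LpVariable("gadgets", lowBound=0, cat="Integer")

prob += 40 * widgets + 30 * gadgets       # objective: total profit
prob += 2 * widgets + 1 * gadgets <= 100  # constraint: machine hours

prob.solve()  # uses PuLP's bundled CBC solver by default
print(value(widgets), value(gadgets), value(prob.objective))
```

The output of the solve – the values of the variables – is exactly the kind of "numbers that represent decisions" I mentioned above.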

Let’s move on to Saigal’s observations:

1. Without taking anything away from computational savants like Bob, this success rests on a foundation of publicly funded research going back to the fifties and sixties. Basic research remains the key to progress in technology and business.

One tiny little word needs to be changed: "basic research remains a key to progress in technology and business." The other key is the implementation of this research in commercial-grade software, applied by customers to real-world business and consumer applications. Operations research is indeed an applied science, and nothing replaces the cold hard truth of having to solve somebody’s problems for a living. Not for fun (or tenure) – for profit. The progress of mixed-integer programming (MIP) solvers is due to a sometimes implicit, other times explicit partnership between research institutions and industry. In many cases the key conceptual pieces (such as new cutting plane techniques or improvements in the simplex method) have been devised and published by researchers and then fine-tuned in industry. In other cases the key conceptual pieces themselves have been invented "out in the wild", rather than simply combed from the research literature. Some innovations can only come from the field, because practitioners have access to information that researchers often do not. Researchers and "savants" like Bob Bixby alike will tell you that models provide insight. "Models" here essentially means case studies – experiences of customers trying to use software to solve *their* problem. By scrutinizing, in painstaking detail, the hitches and glitches that occur when trying to solve a particular customer’s problem, every so often a new breakthrough occurs that leads to a huge improvement in software. (Not coincidentally, the models that an operations research software company uses to test its software are often considered as valuable as the source code itself. Go and try asking Bob Bixby for his models!) What’s notable here is that these breakthroughs may be totally uninteresting from a pure research perspective: they do not rely on cutting-edge mathematics or new computer science methodologies. At best they result from good engineering, and at worst they are simply hacks – black magic. But they work, and they contribute to the incredible improvement factors cited in the NY Times blog. So the contributions made by commercial-grade software developers must be considered to be on equal ground with those of the research community, both of which are significant and worthy of praise. (By my wording – commercial grade – I hope I have made it clear that I include the open source community as well! I see that COIN has joined twitter…)

2. At least at the level of a sub-field, innovation is difficult to plan for or predict. In 1991, linear programming was thought to be a mature field. From 1991 through 1998, linear programming performance improved dramatically. But mixed-integer programming (its variant) remained difficult. Following a tectonic performance jump in 1998, mixed-integer programming also began to be considered practically tractable. No external pivotal event occurred in 1998; success came "overnight" due to sustained long-term effort.

No problems from me here, except to note again that this sustained long-term effort came from both researchers and software developers.

3. Branding often provides limited stickiness in niche software markets. For optimization software in particular, switching costs can be low. It’s easy to benchmark performance, so businesses quickly switch to whatever gives them the edge. Tall marketing claims are rarely left unchallenged. Vendor-promised benchmarks are routinely tested and validated (or thrown out). The scientific temper of operations research creates a near-ideal competitive landscape. The progress should be attributed to a combination of openness and scientific skepticism.

Speaking from the "inside", I can tell you that markets for software like this can actually be quite sticky! The issue is simple: trust. Optimization solvers produce decisions, and decisions determine the fate of a business. People are justifiably wary of changing the software that makes their decisions for them. Moreover, optimization software has traditionally been built in ways that make it somewhat difficult to switch from one package to another. This is not a plug – seriously – but part of the motivation behind Solver Foundation is to allow people to specify their decision problems in a form that lets them choose the underlying computational workhorse that gives them the best results.
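
As a rough illustration of that idea – again using the open-source PuLP library rather than Solver Foundation itself, and with illustrative solver backends (GLPK has to be installed separately) – the model is specified once, and the computational workhorse is swapped at solve time:

```python
# Sketch: one model specification, two interchangeable solvers.
from pulp import (LpProblem, LpVariable, LpMinimize, LpStatus,
                  PULP_CBC_CMD, GLPK_CMD)

prob = LpProblem("same_model_two_solvers", LpMinimize)
x = LpVariable("x", lowBound=0)
y = LpVariable("y", lowBound=0)
prob += 3 * x + 5 * y    # objective: total cost
prob += x + 2 * y >= 10  # constraint: meet demand

# Benchmark the same problem on each backend and keep whichever wins.
for solver in (PULP_CBC_CMD(msg=False), GLPK_CMD(msg=False)):
    status = prob.solve(solver)
    print(type(solver).__name__, LpStatus[status], x.value(), y.value())
```

The easier this kind of swap becomes, the closer the market gets to the low-switching-cost ideal Saigal describes.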

I dropped what I was doing to write this because what Sanjay is writing about is so important. The incredible record of achievement in the field of operations research is due to a strong partnership between researchers, software developers, and practitioners. Lohr and Saigal have both done us all a service by shining a light on this success story, because it’s one that could and should be replicated.

Author: natebrix

Follow me on twitter at @natebrix.
