Plenary [Stephen Boyd]: The plenary speaker was Stephen Boyd, who talked about real-time embedded convex optimization. He considered cases where problems need to be solved in milliseconds or less: control, signal processing, resource allocation, and even finance (think flash trading). The approach is basically to extend “disciplined convex programming”. DCP is a modeling system where problems are convex “by construction” because they combine operators with well-known properties; the system is then able to rewrite the problem in the standard form required by a convex programming solver (such as an IPM QP/SOCP solver). The new step is that the system can now generate highly optimized C code for a custom solver for the particular problem family described by the model. (The algorithm itself is standard IPM.) You can do all sorts of optimizations in this case: the problem structure is known so the symbolic ordering can be done in advance, you can ensure better memory locality, etc. He gave many examples where small QP instances were solved in tens of microseconds. Fun stuff.
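To make the “convex by construction” idea concrete, here is a toy sketch of how a DCP system tracks curvature compositionally. This is my own illustration, not the actual machinery Boyd described (real systems like CVX handle sign, monotonicity, and many more atoms); the function names are made up for this example.

```python
# Toy sketch of DCP-style curvature tracking (assumption: real DCP
# implementations are far richer). Each expression carries a curvature
# label, and composition rules propagate it, so the final expression is
# known to be convex "by construction" without any analysis of the
# function itself.

CONVEX, CONCAVE, AFFINE = "convex", "concave", "affine"

def add(a, b):
    """Addition preserves curvature; affine terms are neutral."""
    if AFFINE in (a, b):
        return b if a == AFFINE else a
    return a if a == b else None  # convex + concave: curvature unknown

def scale(c, a):
    """Nonnegative scaling preserves curvature; negation flips it."""
    if c >= 0 or a == AFFINE:
        return a
    return CONCAVE if a == CONVEX else CONVEX

# norm(x) is a convex atom, x is affine, so norm(x) + 3*x is convex:
print(add(CONVEX, scale(3, AFFINE)))   # -> convex
# but norm(x) + (-1)*norm(x) has no certified curvature:
print(add(CONVEX, scale(-1, CONVEX)))  # -> None
```

Because every rewrite step is rule-driven like this, the system can also emit the problem in solver standard form — and, per the talk, go one step further and emit specialized C for the fixed sparsity pattern of that problem family.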
Morning Sessions: I attended a morning session concerning approaches for nonconvex optimization problems. Jon Lee talked about solving a particular class of parametric nonlinear problems, and Kurt Anstreicher talked about *nonconvex* QP and associated bounds. I then went to a straight-up theory session on convergence rates & asymptotic costs for IPM – there were a couple of new results and an interesting “corrector-predictor” (instead of the other way around) approach that attains better asymptotic convergence. There is an embarrassment of riches at ISMP: I missed a talk by John Hooker about integrating MIP, constraint & global optimization, a great MIP session including Bob Bixby from Gurobi, and all sorts of other stuff. There are a staggering number of good talks to attend.
Afternoon Sessions [Modeling / Stochastic]: I went to a modeling languages track in the afternoon. The first speaker was from LINDO Systems and talked about their new LINGO offering. The primary new feature is support for stochastic modeling. There were some screenshots from What’s Best, their spreadsheet solver, and a few nice samples as well.
Gautam Mitra from OptiRisk was up next – he talked about SAMPL, stochastic extensions for AMPL. They have extended SAMPL to support:
- Chance constraints
- Integrated chance constraints
- Robust optimization
Chance constraints are interesting – they occur in a range of financial problems including VaR and CVaR calculation. Gautam riffed on the difference between stochastic and robust optimization by invoking Rumsfeld: there are known knowns [deterministic optimization], known unknowns [stochastic], and unknown unknowns [robust optimization]. In robust optimization you may have problem coefficients that lie in some range with an unknown distribution, and you want the constraints to hold under all (or most) possible realizations. Robust optimization also admits SOCP reformulations, and I am a big believer in it because I think RO can be cleanly expressed in modeling languages. The last talk in the session was about YALMIP support for robust optimization – YALMIP has had widespread adoption by control theory specialists and has a long history of improvements. This talk was very code-oriented and fun to watch.
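As a worked example of the robust idea (my own illustration, not from any of the talks): for a robust linear constraint a^T x <= b where each coefficient a_i can deviate from a nominal value by at most delta_i (box uncertainty), the worst case over all realizations has a closed form, so the robust counterpart stays linear. Ellipsoidal uncertainty sets are what lead to the SOCP reformulations.

```python
# Hypothetical sketch: worst-case left-hand side of a^T x <= b under
# box uncertainty a_i in [a_hat_i - delta_i, a_hat_i + delta_i].
# Each a_i independently takes its worst value, which contributes
# a_hat_i * x_i + delta_i * |x_i|. The robust constraint is therefore
# a_hat^T x + delta^T |x| <= b -- still linear after splitting |x_i|.

def robust_lhs_box(a_hat, delta, x):
    """Worst-case value of a^T x over the box uncertainty set."""
    return sum(ah * xi + d * abs(xi) for ah, d, xi in zip(a_hat, delta, x))

# Nominal a = [1, 2], each coefficient can move by +/- 0.5:
a_hat, delta = [1.0, 2.0], [0.5, 0.5]
x = [3.0, -1.0]
# 1*3 + 2*(-1) + 0.5*|3| + 0.5*|-1| = 3.0
print(robust_lhs_box(a_hat, delta, x))  # -> 3.0
```

Checking `robust_lhs_box(a_hat, delta, x) <= b` then guarantees the original constraint for every coefficient vector in the box, which is exactly the “hold in all possible circumstances” requirement.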
A few overall impressions: I think it is fair to say that SOCP is entering the mainstream with LP/QP/MIP – it came up a lot today. On the modeling side, it was interesting to see that stochastic was featured in all three talks in the session I attended. A larger trend is that MINLP (mixed integer nonlinear programming) is hot. There are tons of tracks on it, and it is [unsurprisingly] being taken on by both the MIP and NLP communities. The problem description is very general so of course there are applications, but the dust has not settled on the solver side.