Meszaros: Unfortunately I missed the plenary talk by Friedrich Eisenbrand due to microbrews. I attended an IPM session after that – first up was Csaba Meszaros – interesting to hear from him because BPMPD has been such an impressive code for so many years. His talk was about recent improvements in BPMPD, which has been in continuous development for 15 years or so. His recent work appears to focus on (a) QCQP and (b) exploiting “hypersparsity”. He is using a primal-dual log barrier method – not a primal-dual HSD method, as we at Solver Foundation have chosen for our SOCP solver – so his handling of starting points and infeasibility detection is different. He went into the tradeoffs between using the augmented and normal-equation systems for computing the search direction, which affect both performance and numerical stability. He has implemented both and has logic on top to determine which to use. He also talked about QCQP presolve, which often whittles down the size of “real” models substantially.
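To make the augmented-vs-normal-equations tradeoff concrete, here is a toy sketch of my own (not BPMPD's code) of the two linear systems an IPM can solve for the search direction, showing that they give the same answer; the normal equations are smaller and positive definite, while the augmented system is larger but often numerically friendlier.

```python
import numpy as np

# Illustrative only: A is the constraint matrix, theta the diagonal
# scaling from the current IPM iterate, (r1, r2) the residuals.
rng = np.random.default_rng(0)
m, n = 3, 5
A = rng.standard_normal((m, n))
theta = rng.uniform(0.5, 2.0, n)
r1 = rng.standard_normal(n)   # dual-side residual
r2 = rng.standard_normal(m)   # primal-side residual

# Augmented (indefinite) system:
#   [ -diag(1/theta)  A^T ] [dx]   [r1]
#   [      A           0  ] [dy] = [r2]
K = np.block([[-np.diag(1.0 / theta), A.T],
              [A, np.zeros((m, m))]])
sol = np.linalg.solve(K, np.concatenate([r1, r2]))
dx_aug, dy_aug = sol[:n], sol[n:]

# Normal equations: eliminate dx to get the smaller SPD system
#   (A * diag(theta) * A^T) dy = r2 + A * diag(theta) * r1
M = A @ np.diag(theta) @ A.T
dy = np.linalg.solve(M, r2 + A @ (theta * r1))
dx = theta * (A.T @ dy - r1)

# Both routes yield the same search direction.
assert np.allclose(dx, dx_aug) and np.allclose(dy, dy_aug)
```

A production solver makes the choice based on sparsity (dense columns in A wreck the normal equations) and conditioning, which is exactly the kind of logic Csaba described layering on top.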
An interesting bit of trivia: he showed a table that compared lines of code in various modules, “then and now”:
Module      1981   2009
Ordering     155   5700
Cholesky     100   2825
Backsolve     45    820
I think the difference is indicative of both Csaba’s commitment to improving BPMPD as well as the number of practical issues you need to think about when building an industrial grade solver.
Bliek (IBM): Next up – IPM session chair Christian Bliek from IBM. His talk was about barrier improvements in CPLEX. For the single-threaded case, CPLEX 12 is about 5% faster than 11 [note that these numbers are totally unofficial and any errors are my own]. With 4 threads it is about 20% better. There was no magic bullet here, more tuning at the solver and algebra layers. With a sophisticated solver such as CPLEX the improvements come through hard work, no tricks. There was also some data on simplex vs. barrier. For their complete set of benchmark LPs, on problems that take 100 seconds or more, barrier (without crossover) beats dual simplex on average. If you allow crossover, then performance vs. dual simplex is about even for small problems (~1s solution time), and beyond that barrier with crossover pulls ahead. For problems that take 1000s and up, the geometric mean of the performance ratio against dual simplex is 0.55. So almost twice as fast.
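For readers unfamiliar with how such benchmark numbers are summarized, here is a tiny sketch (with made-up ratios, not CPLEX's actual data) of the geometric-mean performance ratio:

```python
import math

# ratio = barrier_time / dual_simplex_time per model; the geometric mean
# across models is the standard way to summarize solver benchmarks.
ratios = [0.4, 0.7, 0.5, 0.6]   # illustrative numbers only
geomean = math.exp(sum(math.log(r) for r in ratios) / len(ratios))
# A geometric mean of ~0.55 means barrier needs ~55% of dual simplex's
# time on a typical model, i.e. nearly a 2x speedup (1/0.55 is ~1.8).
print(round(geomean, 3))
```

The geometric mean is preferred over the arithmetic mean here because it treats a 2x speedup and a 2x slowdown symmetrically.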
For QP, barrier is way better than their active set approach. Of course this does not take into account scenarios where warmstart could be used, where simplex rocks. More on that later. They also have a “concurrent” setting, meaning to race the simplex & IPM solvers. This setting gives the best results. Solver Foundation Services has also supported this type of “transparent parallelism” since version 1.0. The last bit of the talk was about some experiments using PARDISO instead of the solver’s own algebra stack. Their conclusion was that using a “black box” linear algebra module shows potential.
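The "concurrent" idea is simple to sketch. Here is a minimal illustration of my own (not CPLEX's or Solver Foundation's implementation) of racing two solver routines and taking whichever finishes first; the two stand-in functions and their sleep times are invented for the demo.

```python
import concurrent.futures
import time

def simplex_like(problem):
    time.sleep(0.05)          # stand-in for dual simplex work
    return ("simplex", problem * 2)

def barrier_like(problem):
    time.sleep(0.01)          # stand-in for barrier work
    return ("barrier", problem * 2)

def solve_concurrent(problem):
    """Race both routines; return the first result that comes back."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=2) as pool:
        futures = [pool.submit(simplex_like, problem),
                   pool.submit(barrier_like, problem)]
        done, not_done = concurrent.futures.wait(
            futures, return_when=concurrent.futures.FIRST_COMPLETED)
        for f in not_done:
            f.cancel()        # a real solver would interrupt the loser
        return next(iter(done)).result()

winner, answer = solve_concurrent(21)
print(winner, answer)
```

Since either algorithm can dominate depending on the model, racing them trades some CPU for the best wall-clock time on every instance.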
Warmstart: The last talk in the session was by Anthony Vannelli, who talked about IPM warmstart. He takes a very pragmatic approach and laid out a very sensible procedure for attacking the difficult problem of using the IPM solution of one problem to seed the solution of a related problem. The net was that with a relatively simple warmstart procedure they were able to get (roughly) a factor of 3 speedup on their test set. There were also some other IPM warmstart researchers in attendance – in particular Alper Yildirim and Hande Benson. Alper gave a great survey talk of existing IPM warmstart methods, and Hande is one of the top researchers in the area. I am sure there have been interesting discussions this week!
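To give a flavor of why IPM warmstart is tricky, here is a minimal sketch of one common idea (my own illustration, not the specific procedure from any of these talks): the old solution sits on the boundary of the feasible region, but an interior point method needs strictly positive iterates, so the point is pushed back into the interior before reuse.

```python
import numpy as np

def warmstart_point(x_old, s_old, shift=1e-2):
    """Pull tiny components of the old primal x and dual slacks s away
    from the boundary x >= 0, s >= 0 so the IPM can restart from them."""
    x0 = np.maximum(x_old, shift)
    s0 = np.maximum(s_old, shift)
    return x0, s0

x_old = np.array([0.0, 3.2, 1e-9])   # old primal solution, at the boundary
s_old = np.array([5.0, 0.0, 2.1])    # old dual slacks
x0, s0 = warmstart_point(x_old, s_old)
print(x0, s0)   # every component is now at least 0.01
```

The hard part in practice, which the research addresses, is choosing the shift so the seeded point is central enough to help rather than hurt convergence on the perturbed problem.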
Erling Andersen (Mosek): The afternoon IPM session had talks by Erling Andersen from Mosek, Jacek Gondzio, and myself. Erling talked about SOCP improvements in MOSEK 6 (which will be out later this year). He’s had SOCP support forever, but for version 6 he has refactored and reworked much of the implementation. He has run into problems getting high accuracy solutions (I feel his pain), and he has had some problems with infeasible or nearly infeasible problems. He had some interesting thoughts on infeasibility criteria, rooted in his experience with real problems. He’s going to get about a 15% speedup, which is quite impressive since he is already starting with a very fast solver. Erling has built a great set of solvers and I wish him well with Mosek 6.
Jacek Gondzio talked about the use of indirect methods (such as preconditioned conjugate gradient) in IPM. This hasn’t worked out well in the past because such methods are not able to solve the systems to the accuracy required by IPM search directions. That problem is still not fully solved, but he did show some impressive results with his new approach on dense problems.
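For context, here is textbook preconditioned conjugate gradient (not Gondzio's method) applied to a small SPD system with a Jacobi (diagonal) preconditioner, the kind of indirect solve being substituted for a direct Cholesky factorization:

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, max_iter=200):
    """Standard preconditioned CG; M_inv applies the preconditioner."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# SPD test system with a Jacobi preconditioner (illustrative data).
rng = np.random.default_rng(1)
B = rng.standard_normal((20, 20))
A = B @ B.T + 20 * np.eye(20)
b = rng.standard_normal(20)
d = np.diag(A)
x = pcg(A, b, lambda r: r / d)
print(np.linalg.norm(A @ x - b))   # residual near machine precision
```

The catch in the IPM setting, as noted above, is that the systems become terribly ill-conditioned near the solution, so an iterative method may stall before reaching the accuracy the search direction needs.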
The title of my talk was Interior Point Methods in Solver Foundation. I gave an overview of where IPM sits in the Solver Foundation stack, discussed our user and solver models. (And why different user and solver models are necessary – the “user model” serves Solver Foundation Services as well as programmers, whereas the solver model is structured so as to attain the most numerically stable, high performance solver possible. Separation of concerns!) Then I gave a high-level overview of our solution approach, and talked about a couple of implementation issues including handling of free variables, equality constraints, etc. Lastly I talked about some numerical stability issues. I got nice feedback from people afterwards and there was a lot of curiosity about who is using our stuff and what is next.