No more standalone releases of Microsoft Solver Foundation

Microsoft has announced that there will not be further standalone releases of Solver Foundation. If you are looking for other optimization or analytics frameworks, consider Azure ML or Frontline Systems’ Solver SDK.

Here is the announcement from the Solver Foundation MSDN forum:

As users have pointed out, Microsoft has not been active on the Solver Foundation forums since Nate left.   We have been quiet while we have gone through restructuring and planning.  Some would say we have been too quiet.  We know we have very loyal and enthusiastic users who want to know the future of Solver Foundation.  So, here is a long overdue statement about our plans for Solver Foundation.

The current 3.1 release of MSF will be the last release as a standalone install.  We are working hard on integrating Microsoft Solver Foundation into a larger analytics framework that will help users build both prescriptive and predictive analytics.  We look forward to releasing this new product for your use as soon as we are able to do so.  This new product will provide a migration path for current Solver Foundation users and partners.

We would like to continue to keep the current forum open to the community to discuss MSF until the release of the new product. However, Microsoft will be providing limited support of MSF in terms of monitoring the forums and providing bug fixes during this transition time.

We have been responding to email and will continue to do so.  If you have feedback on issues/bugs/improvements, we welcome it via email.  Please check back on the forum for future announcements with regards to the new analytics product.

Thanks for your support.

The Solver Foundation Team


O.R. and social networking: A Solver Foundation MIP model to suggest Twitter followers

Please note that Solver Foundation is no longer supported.

For this month’s INFORMS blog challenge I thought it would be fun to write a simple Solver Foundation – Twitter mashup. I’m not as happy with it as I could be, but maybe someone will find it interesting.

My goal is to write a web application that will suggest interesting twitter users to follow. Here’s what the finished product looks like:


As you can see, the application asks for:

  • a keyword,
  • a maximum number of twitter users to suggest,
  • minimum and maximum number of expected tweets per day.

and produces a list of Twitter users to follow. I will describe some of the key steps in building this application, but I won’t provide all of the source, simply because it would be a lot of work for me to clean up the code! I will try to provide enough so that the motivated reader can finish it off. If you want to follow along, get Solver Foundation, open Visual Studio 2010, and create a new web site (say MsfTwitter). Drag the controls shown in the screenshot (several Labels, one CheckBox, three TextBoxes and a Button) onto your main form, then double click on the button (which I named "Suggest"). You’ll then be sitting in the codebehind file for the button click handler:

  protected void Suggest_Click(object sender, EventArgs e) {

The end goal here is to fill this with some code that will find suggestions. In order to do this, we’re going to have to know something about twitter users. For now, let’s assume that we’re able to obtain the number of tweets per day and an "interest score" that indicates how interesting a twitter user is for a particular topic. Then we can define a User data structure which is the basis for our model. Add a new C# file into the App_Code directory and add this:

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using System.Web;

[DebuggerDisplay("{Name}, score = {InterestScore}, tweets = {Tweets}")]
public class User {
  public string Name { get; set; }
  public double InterestScore { get; set; }
  public double TweetsPerDay { get; set; }
}

Our application logic is really divided into two parts:

  1. Finding interesting twitter users for a given topic. This gives us a list of Twitter users, including their interest score and tweets per day.

  2. Selecting a "best" subset of these users given the preferences specified on the web page.

These are actually both great analytics problems. I am going to rely on third party services to solve the first problem, and I will use Solver Foundation for the second (since that is an optimization model). So for now, let’s assume that we’ve got a list of twitter users and we want to solve problem 2. (If you’re following along in Visual Studio, now is a good time to add a reference to Solver Foundation using “Add Reference”.)

Following my standard pattern of identifying inputs, outputs, goals, and constraints, our model is as follows:

  • The input is a bunch of users. Each one has a score and expected number of tweets per day.
  • The output is a boolean value for each user: to follow or not to follow?
  • The goal is to maximize the total interest score over all selected users.
  • One constraint is that the expected number of tweets is between user specified values min_tweets and max_tweets.
  • Another constraint is that the number of users that we select is at most K.

The next step is to write a class that we can use from the web page. I will call the class SuggestModel and give it one method. The signature is pretty straightforward based on my description above.

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using Microsoft.SolverFoundation.Common;
using Microsoft.SolverFoundation.Services;

public class SuggestModel {
  public static IEnumerable<User> Solve(IEnumerable<User> data,
    int minSuggestions, int maxSuggestions, int maxTweets) { /* see below */ }
}

Creating the C# code for Solve is also straightforward given the description of the model. The Parameters for the model are the interest scores and tweets per day. Both are indexed over the set of users. We use the SetBinding method to associate the appropriate fields of the User objects with the parameters.

public static IEnumerable<User> Solve(IEnumerable<User> data, int minSuggestions, 
  int maxSuggestions, int maxTweets) {
  SolverContext context = SolverContext.GetContext();
  Model model = context.CreateModel();
  Set users = new Set(Domain.Any, "users");
  Parameter score = new Parameter(Domain.Real, "score", users);
  score.SetBinding(data, "InterestScore", "Name"); // bind to the InterestScore field.
  Parameter tweets = new Parameter(Domain.Real, "tweets", users);
  tweets.SetBinding(data, "TweetsPerDay", "Name"); // bind to the TweetsPerDay field.
  model.AddParameters(score, tweets);
  Decision choose = new Decision(Domain.IntegerRange(0, 1), "choose", users);
  model.AddDecision(choose);
  model.AddGoal("goal", GoalKind.Maximize, Model.Sum(
    Model.ForEach(users, user => choose[user] * score[user])));
  model.AddConstraint("suggestionCount",
    minSuggestions <= Model.Sum(Model.ForEach(users, user => choose[user])) <= maxSuggestions);
  model.AddConstraint("tweetLimit",
    Model.Sum(Model.ForEach(users, user => choose[user] * tweets[user])) <= maxTweets);
  context.Solve();
  return data.Where(user => choose.GetDouble(user.Name) > 0.01);
}

Notice the last line: using a LINQ expression we return a list of users where the value of the choose decision is nonzero. That’s what we wanted. The decision values are integer valued and all the constraints and goals are linear, so this is a mixed integer programming model. It’s essentially a knapsack problem – no big deal.
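Since the model is essentially a small knapsack problem, the selection logic is easy to sanity-check without any solver at all. Here is a quick brute-force sketch in Python (not part of the web application; the user names, scores, and tweet counts are made up):

```python
from itertools import combinations

# Made-up stand-ins for the User data: (name, interest_score, tweets_per_day).
users = [("alice", 9.0, 20), ("bob", 7.5, 5), ("carol", 6.0, 40), ("dave", 8.0, 15)]

def best_subset(users, max_users, min_tweets, max_tweets):
    """Enumerate all subsets of at most max_users users; keep the one with the
    highest total interest score whose expected tweet volume is in range."""
    best, best_score = (), float("-inf")
    for k in range(1, max_users + 1):
        for subset in combinations(users, k):
            tweets = sum(u[2] for u in subset)
            if min_tweets <= tweets <= max_tweets:
                score = sum(u[1] for u in subset)
                if score > best_score:
                    best, best_score = subset, score
    return [u[0] for u in best]

print(best_subset(users, max_users=2, min_tweets=10, max_tweets=30))
```

For a handful of candidate users this enumeration is instant; the MIP formulation earns its keep as the candidate list grows.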

Now we can return to "problem 1": finding interesting twitter users for the topic entered into the text box. To solve this problem I will rely on the technique (first suggested by Polya?) of being lazy – I will use other people’s code.

Twitter provides a simple REST API that is the equivalent of the "Who to Follow" button on the Twitter site. You can learn more about it (and try it out) here.

To actually call this “who to follow” twitter API from our program, we need three more things:

  1. A twitter REST API key (so we can make calls to twitter). So I registered my app here. You can too – it’s free.
  2. Code to make authenticated REST calls. The authentication system is called OAuth. Shannon Whitley wrote a nice OAuth package for .Net and posted it on his blog here. Go grab that and put it in your App_Code directory. I changed the namespace to “OAuth”.
  3. Code to convert the API results (returned in a format called JSON). There is a Json.Net package written by James Newton-King that makes this easy. Follow this link for the details. In the code below where I work with JObject/JToken/JArray, add a "using Newtonsoft.Json.Linq;" statement to the top of the source file.

Since the twitter API is authenticated, there is a dance that we have to do to obtain an access token given our API key. This involves redirecting to twitter, allowing our registered application to have access to twitter data, then redirecting back to our page. I am omitting that code because it’s not really the point of this post.

Once you go through all of that mess, the code is short and sweet. Put this in the same file as your click event handler (probably Default.aspx.cs):

  private oAuthTwitter _oAuth;
  private JArray GetTwitterUsers() {
    // searchUrlFormat is a placeholder for the user search endpoint format
    // string, with {0} standing in for the query (the exact URL is omitted here).
    string url = String.Format(searchUrlFormat, HttpUtility.UrlEncode(Keyword.Text));
    string json = _oAuth.oAuthWebRequest(oAuthTwitter.Method.GET, url, String.Empty);
    return JArray.Parse(json);
  }

This gives me a bunch of interesting users, but I don’t have a score or expected number of tweets. We can estimate the number of tweets per day by looking at two fields in the GetTwitterUsers result: statuses_count (the number of tweets for the user account) and created_at (when the account was created). Easy.
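The estimate itself is just a ratio. Here is a tiny sketch of it (in Python, with made-up numbers, since the arithmetic is the point rather than the C# plumbing):

```python
from datetime import datetime, timezone

def tweets_per_day(statuses_count, created_at, now):
    """Estimate tweets/day as lifetime tweet count over account age in days."""
    days = (now - created_at).total_seconds() / 86400.0
    return statuses_count / days

# Made-up example: 7300 tweets over roughly two years is about 10 tweets/day.
now = datetime(2011, 1, 1, tzinfo=timezone.utc)
created = datetime(2009, 1, 1, tzinfo=timezone.utc)
print(round(tweets_per_day(7300, created, now), 1))
```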

The last ingredient is the interest scores. A colleague referred me to Klout, which fits the bill nicely. The documentation for the API is here, but it is very simple: pass a comma-delimited list of user names in the query string and you get back the "klout scores". I will use those as our interest scores. Using the same raw materials as for the twitter REST API, the code looks like this:

  private JToken GetKloutScores(IEnumerable<string> users) {
    string uri = "{0}&users={1}";
    string usernames = HttpUtility.UrlEncode(String.Join(",", users));
    string json = _oAuth.WebRequest(oAuthTwitter.Method.GET,
      String.Format(uri, _kloutKey, usernames), String.Empty);
    var result = JObject.Parse(json);
    return result["users"];
  }

Now we have two data sets: one from Twitter and one from Klout. We need to join this information together to get a list of users to pass into SuggestModel. We can do that in a single LINQ Join statement. Again, skipping some OAuth details, here is the code:

  protected void Suggest_Click(object sender, EventArgs e) {
    if (_authorized) {
      if (SetAccessToken()) {
        // todo: read values for these from the TextBox controls.
        int minSuggest = 1, maxSuggest = 10, tweets = 100;
        var twitter = GetTwitterUsers();
        var klout = GetKloutScores(twitter.Select(token => (string)token["screen_name"]));
        IEnumerable<User> data = klout.Join(twitter,
          k => (string)k["twitter_screen_name"],
          t => (string)t["screen_name"],
          (k, t) => new User {
            Name = (string)t["screen_name"],
            InterestScore = (double)k["kscore"],
            TweetsPerDay = (int)t["statuses_count"] /
              DaysBetween(t.DateTime("created_at"), DateTime.UtcNow)
          });
        var suggestedUsers = SuggestModel.Solve(data, minSuggest, maxSuggest, tweets);
        // todo: do something with the output.
      }
    }
  }

  private double DaysBetween(DateTime start, DateTime end) {
    return (end - start).TotalDays;
  }

A braindead way of displaying the output is to drag a Literal control onto the form and write the output as HTML. We can replace that last “todo” with this:

StringBuilder build = new StringBuilder();
build.AppendFormat("<h2>{0}</h2>", "Suggestions");
foreach (var user in suggestedUsers) {
  build.AppendFormat("<div>{0}, tweets/day = {1}</div>",
    HttpUtility.HtmlEncode(user.Name), user.TweetsPerDay);
}
apiResponse.Text = build.ToString();

I have made major, major shortcuts all the way along: the error checking is poor, I could consider caching information rather than hitting twitter & Klout each time, the interest scores are not necessarily perfect, the UI and logic are all mixed together, existing follows are not considered, the app should itself be exposed as a REST service, and so on. Even with all the simplifications I have made, a lot of glue code is necessary to tie all of this together. This obscures the relative simplicity of the model, which is a shame.

Getting solution values using Solver Foundation Services

Please note that Solver Foundation is no longer supported.

There was a question on the Solver Foundation forum about iterating through the results in a Decision. Here’s a code sample of how to do it in two simple cases.

Decision objects are either indexed or scalar. A decision with one index set is essentially a vector; with two, a matrix. You supply the Sets in the Decision constructor. In the code below, x is an indexed decision. If you don’t supply any sets then you have a non-indexed, scalar Decision – that’s what we’re doing with y.

Once you solve the model you can use two different methods to pull out the results. The GetValues method returns a sequence of arrays: one for each value in an indexed decision. In this case we have one index set with 10 values, so GetValues will return 10 items. Each item is an array whose first element is the decision’s value for a specific index; the remaining entries are the index values themselves. By running the program below you will get the idea.

Getting the values for non-indexed decisions is easy: just call the GetDouble method.

Here’s the code:

using System;
using System.Linq;
using Microsoft.SolverFoundation.Services;

namespace DecisionTest {
  class Program {
    static void Main(string[] args) {
      SolverContext context = SolverContext.GetContext();
      Model model = context.CreateModel();
      // Let's create an indexed parameter.
      Set s = new Set(Domain.Real, "s");
      Parameter p = new Parameter(Domain.Real, "p", s);
      // Here's some dummy data.
      var input = Enumerable.Range(1, 10).Select(item => new { Id = item, Value = item });
      p.SetBinding(input, "Value", "Id");
      model.AddParameters(p);

      // Let's create an indexed decision.
      Decision x = new Decision(Domain.Real, "x", s);

      // Here's a scalar decision.
      Decision y = new Decision(Domain.Real, "y");
      model.AddDecisions(x, y);

      // Here are some dummy constraints to tie everything together. Just for fun.
      model.AddConstraint("c1", Model.ForEach(s, i => p[i] == x[i]));
      model.AddConstraint("c2", Model.Sum(Model.ForEach(s, i => x[i])) == y);

      context.Solve();

      // Extract data from the indexed decision.
      foreach (object[] value in x.GetValues()) {
        Console.WriteLine("x[{0}] = {1}", value[1], value[0]);
      }

      // Extract data from the scalar decision.
      Console.WriteLine("y = {0}", y.GetDouble());
    }
  }
}

A fun math problem

Here is a question from the Mathematical Association of America’s American Mathematics Competitions blog (which I found through this constraint programming blog):

Let x and y be two-digit integers such that y is obtained by reversing the digits of x. The integers x and y satisfy x^2 – y^2 = m^2 for some positive integer m. What is x + y + m?

This is easily solved using Solver Foundation’s constraint programming solver. The OML model is simple:

  Model[
    Decisions[Integers[0, 9], x1, x2, y1, y2],
    Decisions[Integers[10, 100], x, y],
    Decisions[Integers[1, Infinity], m, theAnswer],
    Constraints[
      x == 10 * x1 + x2,
      y == 10 * y1 + y2,
      y == 10 * x2 + x1,
      x * x - y * y == m * m,
      theAnswer == x + y + m
    ]
  ]

Update 10/4/2011: reader Bruno Repetto noted the following:

  • In the declaration  
    Decisions[Integers[0, 9], x1, x2, y1, y2],
    the range should be Integers[1, 9].  If 0 is included, this declaration alone could allow for one of the numbers x or y to be single-digit.
  • In the declaration  
    Decisions[Integers[10, 100], x, y],
    the range should be Integers[10, 99], as x and y are restricted to be two-digit integers.

Of course, given the constraints of the problem, the first and second declarations work together to ensure that x and y are two-digit integers, but it doesn’t hurt to make the declarations tighter.
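For what it’s worth, a brute-force search – written here in Python, independent of the OML model, since it needs nothing beyond two loops – confirms that the puzzle has exactly one solution:

```python
from math import isqrt

solutions = []
for x in range(10, 100):
    y = 10 * (x % 10) + (x // 10)   # reverse the digits of x
    if y < 10:                      # y must also be a two-digit integer
        continue
    d = x * x - y * y
    if d > 0 and isqrt(d) ** 2 == d:  # x^2 - y^2 must be a positive perfect square
        m = isqrt(d)
        solutions.append((x, y, m, x + y + m))

print(solutions)
```

The search finds only x = 65, y = 56, m = 33, so the answer is 154.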


The rumored Big Ten realignment is extremely fair…and I can prove it.

Andy Katz reports that the Big Ten divisions will be:

Division A Division B
Iowa Illinois
Michigan Indiana
Michigan State Ohio State
Minnesota Penn State
Nebraska Purdue
Northwestern Wisconsin

Yesterday I posted several Solver Foundation models that attempted to find a realignment that is “as fair as possible”. If you take a characterization of a program’s historical strength to be its Sagarin rating over the past twelve years, and you are looking to build two evenly matched divisions then this is an extremely fair proposal. The average Sagarin rating is almost identical:

Division A   Division B  
Iowa 77.46 Illinois 69.56
Michigan 82.98 Indiana 65.55
Michigan State 75.82 Ohio State 87.67
Minnesota 73.96 Penn State 82.03
Nebraska 83.65 Purdue 77.25
Northwestern 69.61 Wisconsin 81.59
Average A 77.25 Average B 77.27

In fact, this is the fairest possible realignment that follows these rules:

  • Six teams per division.
  • Preserve the Michigan, Ohio, and Indiana in-state rivalries. (But not Illinois.)
  • “Fairness” is measured by Sagarin rating.

No artificial rules about splitting Michigan and Ohio State are required – that happens naturally as a result of trying to find a fair split. Division A has 427 total conference wins since ‘93, Division B has 412. Division A has 724 total wins versus Division B’s 708. Note however that Nebraska is in Division A and that it had a run of near perfection in the early 90’s. The average attendance for Division A schools is 69,128 versus 74,035 in Division B; much of the difference is due to Northwestern.

Download this spreadsheet to see more details, or to create your own realignments. Follow my instructions from yesterday if you wish to use Solver Foundation to experiment with the models.

Optimal Big Ten realignment using operations research

Using Solver Foundation I have created several mixed integer programming models which figure out how to realign Big Ten teams as fairly as possible. Using the information I have on hand, this seems to be a good realignment if you want to have competitive divisions with roughly equal attendance:

Division A: Indiana, Iowa, Michigan, Michigan State, Nebraska, Purdue
Division B: Illinois, Minnesota, Northwestern, Ohio State, Penn State, Wisconsin


The Big Ten is an intercollegiate athletic organization composed of eleven (yep) schools from the Midwest. Big Ten schools compete in a number of sports, but the one that receives the most fan interest is football. Big Ten schools compete in the highest division of college football, the Football Bowl Subdivision. Recently the Big Ten extended an invitation to the University of Nebraska to join the conference as the twelfth member institution starting in 2011. As a result the Big Ten has decided to realign itself into two six-team divisions. Schools in the same division will be guaranteed to play each other each year, and the champions of each division will play each other in a Big Ten championship game.

There’s been a lot of talk about how the divisions will be selected. There are a number of considerations, including geography, rivalry, revenue, and the overall strength of each team. Realignment will have long term financial implications for Big Ten schools, and more importantly an emotional impact on football crazed fans such as myself. So what’s the right way to make such a decision? A recent article piqued my interest:

Kerry Whisnant has spent a chunk of the past few days crunching the numbers, and the results may have some of the Hawkeye faithful scratching their heads. A physics professor at Iowa State by day, Whisnant ran a program last week to try and figure out which teams, based on their overall and conference records since 1993 (when the Big Ten went to 11 schools), should be paired up in order to create two truly, mathematically fair divisions.

Assuming that you a.) preserve in-state rivalries; b.) split Ohio State and Michigan and Iowa and Wisconsin up; and c.) make sure the Buckeyes, Wolverines, Lions and Nebraska Cornhuskers are divided evenly across two divisions, Whisnant’s computer came up with just eight possible scenarios. And in terms of league victories, these were the three most “balanced” alignments:

Option 1 — Division A: Iowa, Minnesota, Nebraska, Illinois, Northwestern, Ohio State (422 conference wins and one tie since 1993); Division B: Michigan, Michigan State, Indiana, Penn State, Purdue, Wisconsin (421 wins and one tie).

Option 2 — Division A: Iowa, Nebraska, Indiana, Purdue, Michigan, Michigan State (422 and one tie); Division B: Ohio State, Penn State, Wisconsin, Minnesota, Northwestern, Illinois (421 and one tie)

Option 3 — Division A: Iowa, Illinois, Michigan, Michigan State, Northwestern, Penn State (419 and one tie); Division B: Ohio State, Wisconsin, Purdue, Nebraska, Indiana, Minnesota (424 and one tie).

(It did not escape my notice that an Iowa State professor is looking into Big Ten realignment! Perhaps it is a welcome distraction from thinking about Iowa State football…)

Operations research techniques have obvious application to this problem. Operations research aims to solve real world problems by modeling them mathematically, providing optimal solutions by means of sophisticated algorithms. My team builds Microsoft Solver Foundation, a software product for modeling and solving operations research problems. Using Solver Foundation I have created several mixed integer programming models which compute optimal realignments given different sets of rules. See the “Experimenting with the Models” section to download an Excel spreadsheet with the models and data.

Models and Results

I have created four different models. All four models share the following rules:

  • Rule 1: Each division has six teams. (This may seem obvious, but a mathematical model has to include everything, even the obvious stuff…)
  • Rule 2: Preserve in-state rivalries. This means that Northwestern-Illinois, Michigan-Michigan State, Indiana-Purdue must be assigned to the same division.
  • Rule 3: Place Ohio State-Michigan and Iowa-Wisconsin in different divisions.
  • Rule 4: Each division must contain two of: Ohio State, Michigan, Penn State, Nebraska.

The models differ according to the criteria that describe which realignment is “best”:

  • Model ConfWins: assign the teams to divisions so that the total number of conference wins since 1993 is as even as possible.
  • Model AllWins: assign the teams so that the total number of wins (including nonconference games) since 1993 is as even as possible.
  • Model Sagarin: assign the teams so that the average Sagarin rating since 1998 is as even as possible.
  • Model Attendance: assign the teams so that the 2009 average attendance is as even as possible. (I have included this criterion as a lame attempt to model financial considerations. Obviously more sophisticated criteria could be used.)

(For Nebraska in the “ConfWins” model we use the number of Big 12 victories.)

One comment before showing the results: I have claimed that this method obtains “optimal” realignments. Football fans may roll their eyes a bit saying, “here comes another Moneyball-reading, pencil necked geek thinking he knows better than anyone else.” Most of this may well be true, but all I am saying is that if you use these rules, and these specific criteria, here are the best realignments. The beauty of operations research (and mathematical modeling in general) is that if you don’t like the results, then you can change your assumptions, or add more. The model is only as good as the assumptions behind it: for example if we wanted to incorporate other factors such as the series history between each pair of teams, or TV ratings, we could do that. Fans (and sportswriters) often complain about the use of computer models to determine the BCS standings. These complaints are legitimate in the sense that the models sometimes lack transparency, are guided by rules that don’t seem to make sense, or are too simplistic. Those complaints may apply here too, but hey, you get what you pay for.

Here are the optimal realignments for each model:

Conference Wins
Division A Division B
Indiana 33 Illinois 45
Iowa 71 Minnesota 44
Michigan 94 Northwestern 59
Michigan State 63 Ohio State 106
Nebraska 96 Penn State 86
Purdue 63 Wisconsin 79
420 419
All Wins
Division A Division B
Illinois 75 Indiana 68
Iowa 119 Michigan 146
Minnesota 92 Michigan State 106
Nebraska 165 Penn State 147
Northwestern 96 Purdue 104
Ohio State 170 Wisconsin 144
717 715
Sagarin
Division A Division B
Indiana 65.55 Illinois 69.56
Iowa 77.46 Michigan 82.98
Minnesota 73.96 Michigan State 75.82
Ohio State 87.67 Nebraska 83.65
Penn State 82.03 Northwestern 69.61
Purdue 77.25 Wisconsin 81.59
77.32 77.20
Attendance
Division A Division B
Indiana 41833 Illinois 59545
Iowa 70214 Minnesota 50805
Michigan 108933 Northwestern 24190
Michigan State 74741 Ohio State 105261
Nebraska 85888 Penn State 107008
Purdue 50457 Wisconsin 80109
72011 71153

Let’s make some general observations. First of all, none of the proposed realignments seem crazy – that’s a good sign. Second, note that the solutions to the “ConfWins” and “Attendance” models are identical, even though they use very different criteria. That seems to be a happy accident. Third, in all cases the teams are divided very evenly according to the model criteria: within 1% in all cases. The teams are so evenly divided we might ask ourselves if we are being too simplistic. After all, it doesn’t much matter whether the difference between conference wins is 1 or 3, or whether the difference between Sagarin ratings is 0.1 or 0.4. Might we gain something if we weren’t so focused on a single goal? Lastly, note that the rules of our model greatly restrict the range of possible realignments. Rules 3 and 4 separate certain teams with the aim of making the realignment more balanced. But this seems arbitrary – why not just let the model do the work by specifying better goals?

Let’s try two more models in an attempt to do a little better. First, let’s re-run the Sagarin model, dropping rules 3 and 4, which prevent, for example, Iowa and Wisconsin from being in the same division:

Sagarin no 3,4
Division A Division B
Indiana 65.55 Illinois 69.56
Iowa 77.46 Michigan 82.98
Minnesota 73.96 Michigan State 75.82
Ohio State 87.67 Nebraska 83.65
Purdue 77.25 Northwestern 69.61
Wisconsin 81.59 Penn State 82.03
77.25 77.28

Wisconsin and Penn State flip, which makes a little more sense geographically. But we’re still “overtuning”: who really cares whether the difference in Sagarin ratings is 0.03 or 0.12? Let’s continue to ignore rules 3 and 4, but instead of using a single goal, combine all four goals: all wins, conference wins, Sagarin rating, and attendance. We combine them by “normalizing” each component: for example dividing the attendance difference by the average attendance. This way we will get a result that is satisfactory across all four criteria. Here goes:

All goals no 3,4
Division A Division B
Indiana 65.55 Illinois 69.56
Iowa 77.46 Minnesota 73.96
Michigan 82.98 Northwestern 69.61
Michigan State 75.82 Ohio State 87.67
Nebraska 83.65 Penn State 82.03
Purdue 77.25 Wisconsin 81.59

For this model, conference wins are 420 vs 419, all wins are 708 vs 724, Sagarin difference is 0.28, and average attendance is within 1000 fans. For these criteria, this alignment seems like a good choice.
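The “normalize and combine” step is simple to state precisely. Here is a Python sketch of the combined objective (hypothetical team names and numbers; in the real model this is expressed in OML over the InDivisionA decisions):

```python
def combined_imbalance(div_a, div_b, metrics):
    """Sum, over each metric, the normalized difference between the divisions.

    metrics maps a metric name to a dict of team -> value; dividing each
    difference by the metric's average puts wins, ratings, and attendance
    on a comparable scale."""
    total = 0.0
    for values in metrics.values():
        a = sum(values[t] for t in div_a)
        b = sum(values[t] for t in div_b)
        total += abs(a - b) / ((a + b) / 2.0)
    return total

# Made-up numbers for four hypothetical teams and two metrics.
metrics = {
    "wins":       {"t1": 10, "t2": 6, "t3": 8, "t4": 8},
    "attendance": {"t1": 70, "t2": 30, "t3": 60, "t4": 40},
}
print(combined_imbalance(["t1", "t2"], ["t3", "t4"], metrics))
```

Minimizing this single number trades off all four criteria at once instead of overtuning any one of them.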

Experimenting with the Models

You can download an Excel spreadsheet with all the models described above here. In order to run the models you will need to install Microsoft Solver Foundation. The free Express version can be downloaded here; you will need Excel 2007 or Excel 2010 and the .NET Framework (which you probably already have on your machine) to install it. Here’s how to try out the model:

  • Install Solver Foundation. Click on either the “Solver Foundation v2.1 – 32-bit” or “Solver Foundation v2.1 – 64-bit” icon. If you aren’t sure which one to select, pick 32-bit.
  • Open the spreadsheet (you will need to actually open it in Excel, not just view it on the web).
  • Notice that Sheet1 has two tables: the top one has information about each Big Ten conference team. The bottom one has a proposed realignment along with information about the total number of wins, Sagarin rating, and attendance. This sheet represents the data used by the Solver Foundation model.
  • You should notice a new “Solver Foundation” tab in the Excel ribbon. Click on it.
  • You should see a “Modeling Pane” on the right hand side of the screen. If not, click on the “Model” button in the Solver Foundation ribbon tab. The modeling pane is the place where you define the model.
  • Click on the “Goals” tab in the Modeling Pane. Notice that there are 5 goals, and that the “balanceConf” goal is checked. This means that we will find a realignment that balances conference wins.
  • Click on the Solve button in the Solver Foundation tab. The results are updated in the bottom table in Sheet 1.
  • If you check other goals and click Solve then you can see how the results change. Make sure only one goal is checked.
  • If you are feeling brave, click on the Constraints tab. You can ignore constraints by unchecking them. If you are feeling really brave you can add new constraints. For example, what if I wanted to make sure that Penn State and Michigan State were assigned to the same division?

Once you get comfortable with the modeling language you can do all kinds of experimentation.

Building the Models

Here’s where it gets technical: let’s review how I built the models in Solver Foundation. Those familiar with operations research will recognize this as a simple mixed integer programming model. Regular readers of my blog know that I like to break down models by identifying the sets of objects involved, the output decision variables we want to compute, the input parameters representing the “data”, and finally the goals and constraints. Once we have these we can easily write down the model in Solver Foundation’s OML modeling language and solve it.

The sets for this model are teams (there are 12) and divisions (there are 2). What I want to figure out is which division each team should belong to. Since there are only two divisions, all I really want to know is whether each team belongs to what I have called “Division A”: either in or out. This can be modeled as a binary (0-1 integer) decision, one for each team: let’s call this “InDivisionA”; for example InDivisionA[“Iowa”] = 1 if Iowa is in Division A. The parameters are the number of conference wins, number of overall wins, the average Sagarin ratings, and attendance for each team. Let’s think about the constraints:

  • Rule 1: Each division has six teams. Since InDivisionA[t] is 1 if a team t is in Division A, just make sure that the sum of InDivisionA[t] over all teams is 6.
  • Rule 2: Preserve in-state rivalries. Make sure that InDivisionA has the same value for teams in the same state. For example, InDivisionA[“Michigan”] == InDivisionA[“Michigan State”].
  • Rule 3: Place Ohio State-Michigan and Iowa-Wisconsin in different divisions.  The sum of InDivisionA for these pairs of teams should be 1 – so that one is in each division.
  • Rule 4: Each division must contain two of: Ohio State, Michigan, Penn State, Nebraska. The sum of InDivisionA must be 2.

Now let’s turn to the goals. A simple goal is to minimize the difference in conference wins between divisions. If I want to find the number of conference wins for division A, I simply take each team “t” and multiply InDivisionA[t] by the number of wins. For division B, I multiply by (1 – InDivisionA[t]) instead. Then using the trick described in this post, I can minimize the absolute value of the difference between the two. Here is the OML model:

  Model[
    Parameters[Sets[Any], Teams],
    Parameters[Reals, ConfWins[Teams]],
    Decisions[Integers[0, 1], InDivisionA[Teams]],
    Decisions[Integers, ConfWinsDiff],
    Goals[Minimize[balanceConf -> ConfWinsDiff]],
    Constraints[
      c1 -> Sum[{t, Teams}, ConfWins[t] * InDivisionA[t]]
               - Sum[{t, Teams}, ConfWins[t] * (1 - InDivisionA[t])] <= ConfWinsDiff,
      c2 -> Sum[{t, Teams}, ConfWins[t] * (1 - InDivisionA[t])]
               - Sum[{t, Teams}, ConfWins[t] * InDivisionA[t]] <= ConfWinsDiff,
      samecount -> Sum[{t, Teams}, InDivisionA[t]] == Sum[{t, Teams}, (1 - InDivisionA[t])],
      in_state_mi -> InDivisionA["Michigan"] == InDivisionA["Michigan State"],
      in_state_in -> InDivisionA["Indiana"] == InDivisionA["Purdue"],
      in_state_il -> InDivisionA["Illinois"] == InDivisionA["Northwestern"],
      tosu_mich -> InDivisionA["Michigan"] + InDivisionA["Ohio State"] == 1,
      iowa_wisc -> InDivisionA["Iowa"] + InDivisionA["Wisconsin"] == 1,
      tosu_mich_neb_psu -> InDivisionA["Ohio State"] + InDivisionA["Michigan"]
               + InDivisionA["Nebraska"] + InDivisionA["Penn State"] == 2
    ]
  ]

The goals involving total wins, Sagarin rating, and attendance can all be modeled exactly the same way: I just add parameters that represent (for example) Sagarin rating and add similar constraints and goals. Check out the Excel spreadsheet for the full model.
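With only 12 teams there are just C(12, 6) = 924 ways to pick Division A, so a model like this can be sanity-checked by brute force. Here is a Python sketch that enumerates the candidates, applies Rules 1-4, and minimizes the conference-win difference. The win counts below are placeholder values for illustration, not the actual data from the spreadsheet.

```python
from itertools import combinations

# Placeholder conference win counts -- illustrative only.
conf_wins = {
    "Ohio State": 7, "Michigan State": 7, "Wisconsin": 7, "Iowa": 4,
    "Penn State": 4, "Michigan": 3, "Illinois": 4, "Northwestern": 3,
    "Nebraska": 6, "Purdue": 2, "Indiana": 1, "Minnesota": 2,
}
teams = list(conf_wins)
all_teams = set(teams)
same_state = [("Michigan", "Michigan State"), ("Indiana", "Purdue"),
              ("Illinois", "Northwestern")]
split_up = [("Ohio State", "Michigan"), ("Iowa", "Wisconsin")]
marquee = {"Ohio State", "Michigan", "Penn State", "Nebraska"}

def feasible(a):
    # Rule 2: in-state rivals stay together.
    if any((x in a) != (y in a) for x, y in same_state):
        return False
    # Rule 3: these pairs are split across divisions.
    if any((x in a) == (y in a) for x, y in split_up):
        return False
    # Rule 4: exactly two marquee programs per division.
    return len(a & marquee) == 2

def win_gap(a):
    wins_a = sum(conf_wins[t] for t in a)
    wins_b = sum(conf_wins[t] for t in all_teams - a)
    return abs(wins_a - wins_b)

# Rule 1 (six teams per division) is built into the enumeration.
candidates = [frozenset(c) for c in combinations(teams, 6)]
best = min((a for a in candidates if feasible(a)), key=win_gap)
```

The same pattern extends to the other goals (overall wins, Sagarin rating, attendance) by swapping in a different dictionary of values.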

Planning office moves with Solver Foundation: Part V

It’s time to finish our mixed integer programming modeling sample for office space allocation problems. In a previous post in this series I presented a Solver Foundation model that enforces constraints for office space problems, optimizing for space allocation.  I initially assumed that we wanted to strictly enforce all of the constraints in the model, but that’s not always what we want to do. For example, I may have constraints that say “Banksy and Christo need to be located near each other” or “Christo and Fabio need to be located away from each other”. With a number of these constraints we may quickly get into a situation where we cannot find an assignment that satisfies all of them: my model may be infeasible. Moreover, sometimes these constraints actually represent preferences rather than conditions that must hold at all costs. We can instead model these conditions as “soft constraints”, meaning that we allow the constraint to be violated but add a penalty to the goal to compensate.

If you have a model that uses hard constraints, it is usually not difficult to change it into a model that uses soft constraints. Usually the trick is to introduce auxiliary boolean (0-1) decision variables and add terms to the hard constraints. When we do this we want to make sure that:

  • The constraint will always hold, but…
  • If the condition represented in the original hard constraint does not hold then the auxiliary decision gets the value 1.

Then we can stick the auxiliary decision in the goal and multiply it by a constant: this represents the penalty for violating the soft constraint. Our model represents six different kinds of constraints. Rather than going through them all, I will go through the “allocate to” and “same room” constraints to give you an idea for how it works.
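To make the mechanics concrete, here is a tiny Python sketch (a toy with made-up numbers, not the office-space model itself) of two conflicting “allocate-to” conditions on one decision. As hard constraints the model would be infeasible; with indicator variables and a penalty weight, every assignment stays feasible and simply pays for whatever it violates.

```python
PENALTY = 20  # made-up penalty weight for violating a soft constraint

def cost(x):
    # Two conflicting conditions on the same 0-1 decision:
    # "x == 1" and "x == 0" cannot both hold, so hard constraints
    # would make the model infeasible.
    d1 = 1 - x   # indicator: 1 iff condition "x == 1" is violated
    d2 = x       # indicator: 1 iff condition "x == 0" is violated
    return PENALTY * (d1 + d2)

# Enumerate the 0-1 decision: either value violates exactly one
# condition, so the soft model is feasible with cost 20.
best = min((cost(x), x) for x in (0, 1))
```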

Allocate To: if we want to make sure that entity e is assigned to room r, the hard constraint is simply allocate[e, r] == 1. To form the soft constraint we add a new decision “d” and add the constraint d == 1 – allocate[e, r].  The decision d goes into the goal as a penalty term: if the hard constraint is violated then allocate[e,r] == 0 and d == 1.  We can modify the OfficeSpaceModel.AllocateTo (from this post) as follows:

    public void AllocateTo(bool hard, string e, string r) {
      int id = ++constraintCount[ConstraintType.AllocateTo];
      if (hard) {
        model.AddConstraint("alloc_" + id, allocate[e, r] == 1);
      } else {
        Decision d = new Decision(ZeroOne, "d_alloc_" + id);
        model.AddDecision(d);
        model.AddConstraint("i_alloc_" + id, d == 1 - allocate[e, r]);
        // d is then added to the goal, weighted by the AllocateTo penalty.
      }
    }

Same Room: the “hard” version of the constraint ensures that the rows of the allocate decision corresponding to two entities e1, e2 are identical. In the “soft” version of the constraint we introduce one auxiliary decision d_r for each room. We want the value of such a decision to be 1 if the values of allocate are different, i.e. the absolute value of the difference is nonzero. We cannot directly model an absolute value so we use a modeling trick instead: we add lower and upper bounds to the constraint involving d_r. These bounds force d_r = 1 if and only if the entries of allocate are different.

    public void SameRoom(bool hard, string e1, string e2) {
      CheckEntities(e1, e2);
      int id = ++constraintCount[ConstraintType.SameRoom];
      if (hard) {
        model.AddConstraint("same_" + id,
          Model.ForEach(roomsSet, r => allocate[e1, r] - allocate[e2, r] == 0));
      } else {
        // See equations 10 and 11.
        Decision d = new Decision(ZeroOne, "d_same_" + id); // indicator
        Decision d_r = new Decision(ZeroOne, "d_same_r_" + id, roomsSet); // per-room indicator
        model.AddDecisions(d, d_r);
        model.AddConstraint("same_" + id,
          Model.ForEach(roomsSet, r => 2 * d_r[r] - 1 <= allocate[e1, r] - allocate[e2, r] <= 1 - eps + eps * d_r[r]));
        model.AddConstraint("i_same_" + id, d == Model.Sum(Model.ForEach(roomsSet, r => d_r[r])));
        // d is then added to the goal, weighted by the SameRoom penalty.
      }
    }
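You can convince yourself of the bound trick by enumeration. This Python sketch checks which values of the 0-1 indicator d are consistent with each possible per-room difference (eps here is 0.5, but any constant strictly between 0 and 1 behaves the same way):

```python
eps = 0.5  # any value with 0 < eps < 1 works

def allowed(diff):
    # diff = allocate[e1, r] - allocate[e2, r], which is -1, 0, or 1.
    # Return the indicator values d consistent with the two bounds.
    return [d for d in (0, 1) if 2 * d - 1 <= diff <= 1 - eps + eps * d]
```

Enumerating shows diff = 0 forces d = 0, diff = +1 forces d = 1, and diff = -1 forces d = 0. That is enough: when e1 and e2 sit in different rooms, exactly one room has diff = +1, so summing the per-room indicators yields 1 — which is what the final `d == Sum(d_r)` constraint exploits.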

The soft constraint versions of the other constraint types are similar so I will not review them in detail here. Download the complete solution (including the code from all posts in this series) here.

We’ve replicated everything in the Landa-Silva/Ülker paper that was our original inspiration. Let’s see how the code performs. First, let’s re-solve our sample problem with soft constraints. Here is the last part of the output:

Alexander 201
Banksy 202
Christo 202
David 101
Einstein 203
Fabio 104
Galileo 106
Holmes 105
Icarus 103
Jian 102

d_grp_21 = 1
d_grp_31 = 1

101 = 2.0000    Room = 8        Entity = 9 (count = 1)
103 = 2.0000    Room = 6        Entity = 4 (count = 1)
105 = 2.0000    Room = 6        Entity = 4 (count = 1)
201 = 18.0000   Room = 6        Entity = 15 (count = 1)
202 = 6.0000    Room = 22       Entity = 25 (count = 2)
203 = 2.0000    Room = 14       Entity = 15 (count = 1)
TOTAL USAGE = 31.9999999999983

OBJECTIVE = 54.3600000000084
COST      = 54.36

If you compare this to the output from the hard constraint solution, you will notice that the cost is 54.36 compared to 62. This happens because here the “group by” constraints that request the Form and Function teams to be on the same floor were violated (Icarus and Fabio are on Floor 1).

The code seems to do quite well for larger problems too. In particular, the code improves on the best known solution for the largest sample problem (“notta”). The previously best-known objective value was 379.88, but in 6 minutes we obtain a solution with objective value 317.06:

    Nodes    |    Current Node    |     Objective Bounds      |     Work
 Expl Unexpl |  Obj  Depth IntInf | Incumbent    BestBd   Gap | It/Node Time

     0     0   178.0506    0  349  4437.5000   178.0506  96.0%     -  148s
H    0     0                        935.1500   178.0506  81.0%     -  152s
H    0     0                        610.2700   178.0506  70.8%     -  159s
H    0     0                        341.6600   178.0506  47.9%     -  181s
     0     0   217.5068    0  334   341.6600   217.5068  36.3%     -  202s
H    0     0                        320.6600   217.5068  32.2%     -  220s
     0     0   221.5996    0  394   320.6600   221.5996  30.9%     -  234s
     0     0   225.7156    0  389   320.6600   225.7156  29.6%     -  263s
H    0     0                        317.0600   225.7156  28.8%     -  277s
     0     0   229.1693    0  390   317.0600   229.1693  27.7%     -  292s
     0     0   229.8062    0  395   317.0600   229.8062  27.5%     -  306s
     0     0   230.4912    0  406   317.0600   230.4912  27.3%     -  322s
     0     0   231.8208    0  396   317.0600   231.8208  26.9%     -  337s
     0     0   232.7833    0  395   317.0600   232.7833  26.6%     -  345s

Since this is a very large model, you will need the Enterprise edition of Solver Foundation to run it. If you are a professor, student, or researcher and your academic institution has a Developer AA subscription, then you will be able to download the unthrottled Enterprise edition of Solver Foundation (and tons of other stuff) at no cost through the MSDNAA program.

Using Solver Foundation to get a job at Facebook

Facebook has opened an office in Seattle; check it out. Unfortunately (or fortunately, depending on one’s perspective) you need to solve two puzzles to get an invite to the opening party. Here is the first puzzle:

You are given a list of relationships between the letters in a single word, all of which are in the form: “The first occurrence of A comes before N occurrences of B.” You can safely assume that you have all such relationships except for any in which N would be 0. Determine the original word, then go to [insert-word-here] to find the second part of the puzzle.

The first occurrence of ‘e’ comes before 1 occurrence of ‘s’.
The first occurrence of ‘i’ comes before 1 occurrence of ‘n’.
The first occurrence of ‘i’ comes before 1 occurrence of ‘i’.
The first occurrence of ‘n’ comes before 2 occurrences of ‘e’.
The first occurrence of ‘e’ comes before 1 occurrence of ‘e’.
The first occurrence of ‘i’ comes before 1 occurrence of ‘v’.
The first occurrence of ‘n’ comes before 1 occurrence of ‘i’.
The first occurrence of ‘n’ comes before 1 occurrence of ‘v’.
The first occurrence of ‘i’ comes before 1 occurrence of ‘s’.
The first occurrence of ‘t’ comes before 1 occurrence of ‘s’.
The first occurrence of ‘v’ comes before 1 occurrence of ‘s’.
The first occurrence of ‘v’ comes before 2 occurrences of ‘e’.
The first occurrence of ‘t’ comes before 2 occurrences of ‘e’.
The first occurrence of ‘i’ comes before 2 occurrences of ‘e’.
The first occurrence of ‘v’ comes before 1 occurrence of ‘t’.
The first occurrence of ‘n’ comes before 1 occurrence of ‘t’.
The first occurrence of ‘v’ comes before 1 occurrence of ‘i’.
The first occurrence of ‘i’ comes before 1 occurrence of ‘t’.
The first occurrence of ‘n’ comes before 1 occurrence of ‘s’.

Here’s an OML model that solves the puzzle.


    Model[
      Decisions[Integers[0, 8], e, e_, s, n, v, i, i_, t],
      Constraints[
        e < s,
        i < n,
        i < i_,
        n < e,
        e < e_,
        i < v,
        n < i_,
        n < v,
        i < s,
        t < s,
        v < s,
        v < e,
        t < e,
        i < e,
        v < t,
        n < t,
        v < i_,
        i < t,
        n < s
      ]
    ]
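If you prefer to check the constraint set without a solver, a brute-force Python sketch (positions forced distinct, which the OML domain does not require) enumerates all orderings consistent with the same precedence pairs. I will keep the resulting word out of the comments:

```python
from itertools import permutations

# One variable per letter occurrence, mirroring the OML decisions;
# the trailing underscore marks a second occurrence.
letters = ["e", "e_", "s", "n", "v", "i", "i_", "t"]

# The "<" constraints from the OML model as (earlier, later) pairs.
before = [("e", "s"), ("i", "n"), ("i", "i_"), ("n", "e"), ("e", "e_"),
          ("i", "v"), ("n", "i_"), ("n", "v"), ("i", "s"), ("t", "s"),
          ("v", "s"), ("v", "e"), ("t", "e"), ("i", "e"), ("v", "t"),
          ("n", "t"), ("v", "i_"), ("i", "t"), ("n", "s")]

# Keep every permutation that satisfies all precedence constraints.
orders = [p for p in permutations(letters)
          if all(p.index(a) < p.index(b) for a, b in before)]
# Only 'i' has no predecessor among the constraints, so every
# consistent ordering starts with 'i'.
```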

You’re welcome. (I’m banking on the fact that my readership is small enough to avoid screwing up their event.) It turns out Solver Foundation can be used to solve the second puzzle too, but I will refrain from posting a solution here. Thanks to my buddy John for the tip.

Seminar scheduling with OML

In response to my series on office space allocation models, I received an email from a reader who is trying to adapt the code to solve a related problem. Here is a reworded description of his problem:

  1. I need to assign people to tables for each week of a three-week seminar.
  2. I have 8 people.
  3. I have 4 tables / PCs.
  4. For the second week of the seminar, I don’t want the groups to be the same, and each person must not be at the same table.
  5. For the third and last week these conditions are the same, taking into account the previous two weeks.

The reader was almost able to figure it out but got stuck on the “different groups” and “different tables” constraints. Let’s work through the problem and build an OML model. I like to start by defining the sets for my problem. In this case, they are:

  • People. To be consistent with my previous posts I will call them “Entities”. There are 8 of them.
  • Tables. There are 4 of them.
  • Time periods. There are 3 of them (each one week long).

What do I want Solver Foundation to compute? The assignment of entities to tables for each period. So let’s add a 0-1 integer decision called “assign[e, t, p]” that is 1 if and only if Entity e is assigned to Table t for Period p. If I have this decision then it is easy to write two constraints which are not explicitly stated in the problem description, but which must hold:

  • Assign all: Each entity is assigned to exactly one table for each time period.
  • Two per table: there are two entities per table.

Now I’ve got to handle the “different tables” and “different groups” constraints. It seems natural to use the Unequal operator – this operator forces all of the arguments inside to be different. For “same table” what I want to say is “for each entity, the entity is assigned to a different table over all time periods.” But the “assign” decision is not very helpful; instead I want a decision “table[e, p]” which evaluates to the table that entity e is assigned to for period p. But using a modeling trick I can actually get values for this “table” decision by summing over assign[e, t, p] for a fixed entity and time period and multiplying each by “t”.

The last constraint is the “same group” constraint. This is a bit trickier. Suppose Adam and Bonnie are assigned to table 4 during week 1. I need to make sure that Adam and Bonnie are not assigned to the same table (it doesn’t matter which one) for all other weeks. To model this, I can compute what amounts to a hash (called “group”) for each entity and period. Let’s define group[e, p] to be the sum of all the entity IDs at the same table as Entity e for Period p. If I have this hash then I can simply say that group[e, p] must be different for each period p.
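A quick Python check of why the hash works with two people per table: for a fixed entity, the table sum is its own ID plus its partner’s, so the sum changes exactly when the partner changes. (The IDs below are illustrative.)

```python
def group_hash(e, partner):
    # Sum of entity IDs at e's table; with two per table this is
    # just e's own ID plus the partner's ID.
    return e + partner

e = 3
partners = [5, 6, 7]  # a different partner each week
hashes = [group_hash(e, p) for p in partners]
# Distinct partners give distinct hashes for a fixed entity.
```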

The OML model is below. If you want to try it out, open Excel, flip to the Solver Foundation tab, click on the “Model” tab, change to “Manual” mode, paste the OML, and click Solve.

Two more notes:

  • I need to specify that there are 8 people, 4 tables, and 3 time periods. I do this by specifying ranges in the “assignAll” constraint. This implicitly establishes the values for the Entities, Tables, and Periods sets.
  • The Unequal operator can only be used in Constraint Programming models. CP models cannot contain any real-valued parameters or decisions. It is also possible (I think) to write a mixed integer programming model for this problem but it’s messier.

  Model[
    Parameters[Sets[Integers], Entities, Tables, Periods],
    Decisions[Integers[0, 1],
      assign[Entities, Tables, Periods]
    ],
    Decisions[Integers[0, Infinity],
      table[Entities, Periods], group[Entities, Periods]
    ],
    Constraints[
      assignAll -> Foreach[{e, 0, 8}, {p, 0, 3},
        Sum[{t, 0, 4}, assign[e, t, p]] == 1
      ],
      twoPerTable -> Foreach[{p, Periods},
        Foreach[{t, Tables},
          Sum[{e, Entities}, assign[e, t, p]] == 2
        ]
      ],
      whichTable -> Foreach[{e, Entities}, {p, Periods},
        table[e, p] == Sum[{t, Tables}, t * assign[e, t, p]]
      ],
      differentTables -> Foreach[{e, Entities},
        Unequal[Foreach[{p, Periods}, table[e, p]]]
      ],
      whichGroup ->
        Foreach[{e1, Entities}, {p, Periods},
          group[e1, p] == Sum[{t, Tables},
            assign[e1, t, p] * Sum[{e2, Entities}, e2 * assign[e2, t, p]]]
        ],
      differentGroups -> Foreach[{e, Entities},
        Unequal[Foreach[{p, Periods}, group[e, p]]]
      ]
    ]
  ]
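The problem is also small enough to solve with a plain backtracking search, which makes a nice cross-check for the OML model. This Python sketch (my own formulation, not Solver Foundation output) builds each week from the 105 pairings times 24 table orders and enforces the “different tables” and “different groups” rules directly:

```python
import itertools

PEOPLE = list(range(8))
TABLES = 4
WEEKS = 3

def pairings(people):
    # All ways to split `people` into unordered pairs.
    if not people:
        yield []
        return
    head, rest = people[0], people[1:]
    for k, partner in enumerate(rest):
        remaining = rest[:k] + rest[k + 1:]
        for tail in pairings(remaining):
            yield [(head, partner)] + tail

def weeks():
    # A "week" maps each person to (table index, partner).
    for pairing in pairings(PEOPLE):
        for order in itertools.permutations(range(TABLES)):
            week = {}
            for (a, b), t in zip(pairing, order):
                week[a] = (t, b)
                week[b] = (t, a)
            yield week

ALL_WEEKS = list(weeks())  # 105 pairings x 24 table orders = 2520

def compatible(done, week):
    # No person may repeat a table or a partner from an earlier week.
    for p in PEOPLE:
        for prev in done:
            if prev[p][0] == week[p][0] or prev[p][1] == week[p][1]:
                return False
    return True

def solve(done=()):
    # Depth-first search: pick a compatible assignment for each week.
    if len(done) == WEEKS:
        return list(done)
    for week in ALL_WEEKS:
        if compatible(done, week):
            found = solve(done + (week,))
            if found:
                return found
    return None

schedule = solve()
```

The search returns a schedule almost immediately, which also confirms that the constraint set is satisfiable, i.e. the CP model is feasible.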

Planning office moves with Solver Foundation: Part IV

The previous three posts in this series introduced all the components necessary to describe and model office space allocation problems. Here’s the payoff: a C# program that uses the components to solve problems. The important classes are:

  • OfficeSpaceData. This class stores information about the entities, rooms, and constraints (e.g. “two entities should share a room”) that make up a problem instance.
  • OfficeSpaceModelSimple. This class encapsulates a Solver Foundation model for the office space allocation problem. Callers invoke methods to describe constraints, and then an Allocate() method to solve the model.
  • Solution. This class represents a solution for a problem instance, in particular the room assignment for each entity in the model. A Solution instance is returned by the OfficeSpaceModelSimple.Allocate() method.

Let’s modify the test program from this post to use all three components. The complete test code is below. Note the following:

  • The Main method passes the XML filename, schema filename, and time limit to Run(). (Here is a link to some sample problems.)
  • Run() does the following:
    • Creates a new OfficeSpaceData which reads the data,
    • Establishes the penalties for constraint violation,
    • Creates a new OfficeSpaceModelSimple, passing in the penalty weights,
    • Calls the Fill() method (see below),
    • Solves the model,
    • Reports the results (see below).
  • Fill() scans through the constraints in the OfficeSpaceData object and calls the right methods on OfficeSpaceModelSimple. Remember that constraints have an “IsHard” property that specifies whether a constraint is strictly enforced, but that OfficeSpaceModelSimple currently ignores this setting and treats all constraints as “hard”.
  • The Report() method prints out the results in an easy-to-understand format. The assignment is read from the Solution object. We also manually calculate the wastage and overuse penalties from the assignment. Finally, we call an Evaluate() method that loops over all the constraints and uses the SolutionEvaluator class to calculate the cost of the solution. This calculation is redundant: it is supposed to be exactly the same as the goal in our model! I have included this redundant calculation on purpose to check that the model is indeed solving the right problem.
using System;  
using System.Collections.Generic;
using System.IO;
using System.Linq;  
using System.Text;  

namespace Samples.OfficeSpace {
  public class Program {

    private bool printAssignment = true;
    private bool includeDefaultConstraints = true;

    static void Main(string[] args) {
      if (args.Length == 0 || !args[0].EndsWith(".xml", StringComparison.InvariantCultureIgnoreCase)) {
        Console.WriteLine("Please specify the path to an XML file.");
        return;
      }

      string fileName = args[0];
      if (!File.Exists(fileName)) {
        Console.WriteLine("Could not find input file '{0}'.", fileName);
        return;
      }

      string schemaFileName = fileName.ToLowerInvariant().Replace(".xml", ".schema.xml");

      int seconds = args.Length > 1 ? Int32.Parse(args[1]) : 10;
      Program p = new Program();
      p.Run(fileName, schemaFileName, TimeSpan.FromSeconds(seconds));
    }

    private void Run(string fileName, string schemaFilename, TimeSpan limit) {
      Console.WriteLine("FILE: {0}", fileName);

      OfficeSpaceData data = new OfficeSpaceData();
      data.ReadFromFile(fileName, schemaFilename);

      Dictionary<ConstraintType, double> penalties = GetPenaltyWeights();
      OfficeSpaceModelSimple model = new OfficeSpaceModelSimple(penalties);
      model.SolveTimeLimit = limit;
      string modelName = fileName.Substring(0, fileName.IndexOf('.'));
      Fill(modelName, data, model);
      Solution solution = model.Allocate();
      Report(data, penalties, solution);
    }
    private static Dictionary<ConstraintType, double> GetPenaltyWeights() {
      // page 5
      Dictionary<ConstraintType, double> penalties = new Dictionary<ConstraintType, double>();
      penalties[ConstraintType.Adjacent] = 10;
      penalties[ConstraintType.AllocateTo] = 20;
      penalties[ConstraintType.AwayFrom] = 10;
      penalties[ConstraintType.NoSharing] = 50;
      penalties[ConstraintType.SameRoom] = 10;
      penalties[ConstraintType.GroupBy] = 11.18;
      penalties[ConstraintType.Wastage] = 1;
      penalties[ConstraintType.Overuse] = 2;
      return penalties;
    }

    private void Fill(string modelName, OfficeSpaceData data, OfficeSpaceModelSimple model) {
      model.InitializeModel(modelName);
      model.SetSizeBinding(data.Rooms, "Size", "ID");
      model.SetSpaceBinding(data.Entities, "Space", "Name");
      if (includeDefaultConstraints) {
        model.Wastage();
        model.Overuse();
      }
      Console.WriteLine("PROCESSING {0} CONSTRAINTS", data.Constraints.Count());
      string e1, e2, room;
      foreach (var constraint in data.Constraints) {
        switch (constraint.Type) {
          case ConstraintType.NoSharing:
            if (TryGetSubjectEntity(data, constraint, out e1)) {
              model.NoSharing(constraint.IsHard, e1, data.EntityCount);
            }
            break;
          case ConstraintType.AllocateTo:
            if (TryGetSubjectEntity(data, constraint, out e1) &&
                TryGetTargetRoom(data, constraint, out room)) {
              model.AllocateTo(constraint.IsHard, e1, room);
            }
            break;
          case ConstraintType.GroupBy:
            if (TryGetSubjectEntity(data, constraint, out e1) &&
                TryGetTargetEntity(data, constraint, out e2)) {
              model.GroupBy(constraint.IsHard, data.Rooms, e1, e2);
            }
            break;
          case ConstraintType.SameRoom:
            if (TryGetSubjectEntity(data, constraint, out e1) &&
                TryGetTargetEntity(data, constraint, out e2)) {
              model.SameRoom(constraint.IsHard, e1, e2);
            }
            break;
          case ConstraintType.Adjacent:
            if (TryGetSubjectEntity(data, constraint, out e1) &&
                TryGetTargetEntity(data, constraint, out e2)) {
              model.Adjacent(constraint.IsHard, data.Rooms, e1, e2);
            }
            break;
          case ConstraintType.AwayFrom:
            if (TryGetSubjectEntity(data, constraint, out e1) &&
                TryGetTargetEntity(data, constraint, out e2)) {
              model.AwayFrom(constraint.IsHard, data.Rooms, e1, e2);
            }
            break;
          default:
            throw new InvalidOperationException("Invalid constraint.");
        }
      }
    }

    private void Report(OfficeSpaceData data, Dictionary<ConstraintType, double> penalties, Solution solution) {
      ReportSection("WEIGHTS:");
      foreach (var pair in penalties) {
        Console.WriteLine("{0}\t{1}", pair.Key, pair.Value);
      }
      if (solution.Quality.StartsWith("infeasible", StringComparison.OrdinalIgnoreCase)) {
        Console.WriteLine("*** MODEL IS INFEASIBLE ***");
        return;
      }
      if (printAssignment) {
        ReportSection("ASSIGNMENT:");
        Console.WriteLine("ENTITY ROOM");
        foreach (var assignment in solution.EntityToRoom.OrderBy(p => p.Key)) {
          Console.WriteLine("{0} {1}", assignment.Key, assignment.Value);
        }
      }
      ReportSection("USAGE TERMS:");
      foreach (var entry in solution.Usage) {
        string r = entry.Item1;
        Console.WriteLine("{0} = {1:f4}\tRoom = {2}\tEntity = {3} (count = {4})",
          r, entry.Item2, data.RoomsByID[r].Size,
          solution.EntityToRoom.Where(t => t.Value == r).Sum(t => data.EntitiesByName[t.Key].Space),
          solution.EntityCount(r));
      }
      Console.WriteLine("TOTAL USAGE = " + solution.Usage.Sum(u => u.Item2));
      ReportSection("SUMMARY:");
      Console.WriteLine("OBJECTIVE = " + solution.Objective);
      Console.WriteLine("COST      = " + Evaluate(data, penalties, solution.EntityToRoom));
    }

    private static void ReportSection(string label) {
      Console.WriteLine(Environment.NewLine + label);
      Console.WriteLine(new string('-', label.Length));
    }

    private static double Evaluate(OfficeSpaceData data, Dictionary<ConstraintType, double> penalties, IDictionary<string, string> entityToRoom) {
      SolutionEvaluator e = new SolutionEvaluator(data, entityToRoom, penalties);
      double cost = 0;
      string e1, e2, room;
      foreach (var constraint in data.Constraints) {
        switch (constraint.Type) {
          case ConstraintType.NoSharing:
            if (TryGetSubjectEntity(data, constraint, out e1)) {
              cost += e.NoSharingCost(e1);
            }
            break;
          case ConstraintType.AllocateTo:
            if (TryGetSubjectEntity(data, constraint, out e1) &&
                TryGetTargetRoom(data, constraint, out room)) {
              cost += e.AllocateToCost(e1, room);
            }
            break;
          case ConstraintType.GroupBy:
            if (TryGetSubjectEntity(data, constraint, out e1) &&
                TryGetTargetEntity(data, constraint, out e2)) {
              cost += e.GroupByCost(e1, e2);
            }
            break;
          case ConstraintType.SameRoom:
            if (TryGetSubjectEntity(data, constraint, out e1) &&
                TryGetTargetEntity(data, constraint, out e2)) {
              cost += e.SameRoomCost(e1, e2);
            }
            break;
          case ConstraintType.Adjacent:
            if (TryGetSubjectEntity(data, constraint, out e1) &&
                TryGetTargetEntity(data, constraint, out e2)) {
              cost += e.AdjacentCost(e1, e2);
            }
            break;
          case ConstraintType.AwayFrom:
            if (TryGetSubjectEntity(data, constraint, out e1) &&
                TryGetTargetEntity(data, constraint, out e2)) {
              cost += e.AwayFromCost(e1, e2);
            }
            break;
          default:
            throw new InvalidOperationException("Invalid constraint.");
        }
      }
      cost += data.Rooms.Sum(r => e.RoomUsageCost(r.ID));
      return cost;
    }

    private static bool TryGetSubjectEntity(OfficeSpaceData data, Constraint constraint, out string entity) {
      entity = constraint.Subject;
      if (!data.EntitiesByName.ContainsKey(constraint.Subject)) {
        Console.WriteLine(String.Format("*** Invalid subject entity [{0}] in row {1}", entity, constraint));
        return false;
      }
      return true;
    }

    private static bool TryGetTargetEntity(OfficeSpaceData data, Constraint constraint, out string entity) {
      entity = constraint.Target;
      if (!data.EntitiesByName.ContainsKey(entity)) {
        Console.WriteLine(String.Format("*** Invalid target entity [{0}] in row {1}", entity, constraint));
        return false;
      }
      return true;
    }

    private static bool TryGetTargetRoom(OfficeSpaceData data, Constraint constraint, out string room) {
      room = constraint.Target;
      if (!data.RoomsByID.ContainsKey(room)) {
        Console.WriteLine(String.Format("*** Invalid target room [{0}] in row {1}", room, constraint));
        return false;
      }
      return true;
    }
  }
}

Here is the output of the program for my sample problem (you should be able to confirm this by downloading the Express edition of Solver Foundation). As you can see, the optimal cost is 62. If you look at the assignment, the “Form” team is allocated to the first floor and the “Function” team is allocated to the second floor. Also notice that some of the rooms on the first floor are unused! This is because the sample model has constraints that specify that teammates should be “nearby” each other, which means allocated to the same floor. Since this constraint is strictly enforced this effectively means that each team must be on a separate floor. If we did not have such a strict interpretation of the “nearby” constraints then perhaps we could find an allocation that had better space usage. In order to support soft constraints we will need to revisit the model class, which we will do in the next post.

c:\temp\OfficeSpace\bin\Release>OfficeSpace.exe \temp\data\nott_data\sample.xml
FILE: \temp\data\nott_data\sample.xml
Optimize a model with 227 Rows, 326 Columns and 1853 NonZeros
Presolve removed 109 rows and 227 columns
Presolve time: 0.03s
Presolved: 118 Rows, 99 Columns, 845 Nonzeros
Objective GCD is 1
Found heuristic solution: objective 104.0000

Root relaxation: objective 2.401047e+01, 133 iterations, 0.00 seconds

    Nodes    |    Current Node    |     Objective Bounds      |     Work
 Expl Unexpl |  Obj  Depth IntInf | Incumbent    BestBd   Gap | It/Node Time

     0     0    24.0105    0   41   104.0000    24.0105  76.9%     -    0s
     0     0    41.7082    0   33   104.0000    41.7082  59.9%     -    0s
     0     0    41.8302    0   40   104.0000    41.8302  59.8%     -    0s
     0     0    41.8302    0   37   104.0000    41.8302  59.8%     -    0s
H    0     0                         62.0000    41.8302  32.5%     -    0s
     0     0    47.5859    0   34    62.0000    47.5859  23.2%     -    0s
     0     0    47.5859    0   37    62.0000    47.5859  23.2%     -    0s
     0     0    47.5859    0   36    62.0000    47.5859  23.2%     -    0s
     0     0    47.5859    0   36    62.0000    47.5859  23.2%     -    0s
     0     2    47.5859    0   34    62.0000    47.5859  23.2%     -    0s

Cutting planes:
  Implied bound: 3
  MIR: 7

Explored 16 nodes (595 simplex iterations) in 0.11 seconds
Thread count was 2 (of 2 available processors)

Optimal solution found (tolerance 1.00e-04)
Best objective 6.2000000000e+01, best bound 6.2000000000e+01, gap 0.0%

Adjacent        10
AllocateTo      20
AwayFrom        10
NoSharing       50
SameRoom        10
GroupBy 11.18
Wastage 1
Overuse 2

Alexander 102
Banksy 101
Christo 203
David 202
Einstein 201
Fabio 105
Galileo 202
Holmes 202
Icarus 103
Jian 202

101 = 4.0000    Room = 8        Entity = 10 (count = 1)
102 = 18.0000   Room = 6        Entity = 15 (count = 1)
103 = 2.0000    Room = 6        Entity = 4 (count = 1)
104 = 6.0000    Room = 6        Entity = 0 (count = 0)
106 = 6.0000    Room = 6        Entity = 0 (count = 0)
201 = 18.0000   Room = 6        Entity = 15 (count = 1)
202 = 6.0000    Room = 22       Entity = 25 (count = 4)
203 = 2.0000    Room = 14       Entity = 15 (count = 1)

COST      = 62