# Stochastic optimization, kidneys, and ethics

I have been reading about online stochastic optimization problems in response to some internal interest in applying Solver Foundation to solve them. Pascal Van Hentenryck’s book is a great introduction. Anyway, I came across a very interesting article by two researchers at Carnegie Mellon: here’s the link. The authors have applied online stochastic optimization techniques to facilitate kidney transplants. When a patient needs a kidney transplant, a family member or loved one will often step forward and offer to donate. Unfortunately, in many cases the prospective donor and patient are not compatible. By sharing information about donors and patients, it is possible to set up a “kidney exchange network” whereby willing donors are matched with compatible patients. By making good matches and performing swaps, more patients get kidneys and lives are saved.

More formally, we have a directed graph, and we are seeking the largest collection of vertex-disjoint cycles where each cycle’s length is under some limit. A cycle represents a swap between patients and donors. We want a limit because big cycles (swaps involving many parties, whose surgeries must all be scheduled together) are not very practical. We want a so-called “online” algorithm because the problem is continually changing. New donors appear, and sadly some patients pass away because no match was found in time. These events are unpredictable, so we need a strategy for when and how to perform swaps in the face of an uncertain future.
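To make the offline version of the problem concrete, here is a toy sketch in Python. It brute-forces the cycle packing on a tiny hypothetical compatibility graph; the graph, the pair names, and the brute-force approach are mine for illustration, not the authors’ algorithm (which must scale to realistic instance sizes).

```python
from itertools import permutations

def short_cycles(adj, max_len):
    """Enumerate node sets of simple directed cycles with length <= max_len.

    adj maps each incompatible patient-donor pair to the set of pairs whose
    patient its donor could give to. A cycle of pairs is a feasible swap.
    """
    cycles = set()
    nodes = list(adj)
    for k in range(2, max_len + 1):
        for combo in permutations(nodes, k):
            if combo[0] != min(combo):  # canonical start avoids duplicates
                continue
            if all(combo[(i + 1) % k] in adj[combo[i]] for i in range(k)):
                cycles.add(frozenset(combo))
    return cycles

def best_packing(cycles):
    """Choose vertex-disjoint cycles covering the most pairs (brute force)."""
    best = []

    def recurse(remaining, chosen, used):
        nonlocal best
        if sum(map(len, chosen)) > sum(map(len, best)):
            best = list(chosen)
        for i, c in enumerate(remaining):
            if used.isdisjoint(c):
                recurse(remaining[i + 1:], chosen + [c], used | c)

    recurse(sorted(cycles, key=sorted), [], frozenset())
    return best

# Four incompatible pairs; an edge X -> Y means X's donor suits Y's patient.
compat = {"A": {"B"}, "B": {"A", "C"}, "C": {"B", "D"}, "D": {"C"}}
packing = best_packing(short_cycles(compat, max_len=2))
```

Note that greedily performing the B–C swap would serve only 2 patients, while the optimal packing (A–B plus C–D) serves all 4; this is exactly why the problem is an optimization rather than a matter of taking swaps as they appear.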

The technical side of this is very interesting, and it is an inspiring example of the power of mathematical programming to effect positive change in the world. It’s clear that the authors, while remaining completely professional and scientific in their approach, are aware of this aspect of their work. I am sure that it occurred to the authors – as it did to me – that ethical questions quickly arise when we consider how to model and solve such problems. This leads us to Section 5.3.

Section 5.3 considers the impact of so-called “dummy actions”. In this context, a dummy action means choosing NOT to make any swaps at a time when there are swaps that could be made. In Section 5.3 the authors show that allowing dummy actions improves the expected solution quality by around 2%. And to be really blunt – 2% in this case means 2% more transplants. But consider the implications: this means that somewhere in the country there may be a patient who needs a kidney; somewhere else there is a compatible donor; we are aware of this information; and the model is telling us that we should NOT perform an exchange. The logic is that by delaying a swap now, we will have more options later, and we’ll be in a position to help more people. But imagine that you run the kidney exchange: how much confidence must you have in your model to justify such a decision? Perhaps you know that a match exists, but the donor and patient do not. Do you feel good about saying nothing? Should we consider disallowing “dummy actions” in certain cases, even though we know that on average allowing them leads to a better overall outcome?
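The tradeoff behind a dummy action can be seen in a two-stage expected-value calculation. The probabilities and payoffs below are entirely made up for illustration; they are not from the paper and real exchanges face far messier distributions.

```python
def expected_transplants(act_now, p_better, p_lost):
    """Toy two-stage model of a dummy action (all numbers hypothetical).

    Acting now performs one 2-way swap: 2 transplants for certain.
    Waiting (the dummy action) gambles: with probability p_better, new
    arrivals enable two disjoint swaps (4 transplants); with probability
    p_lost, the current match is lost (0 transplants); otherwise the
    original swap is still available later (2 transplants).
    """
    if act_now:
        return 2.0
    p_same = 1.0 - p_better - p_lost
    return p_better * 4.0 + p_same * 2.0  # the "lost" branch contributes 0

# With a 50% chance of a better future and a 10% chance of losing the
# current match, waiting wins on average even though it risks everything.
e_wait = expected_transplants(False, p_better=0.5, p_lost=0.1)
e_now = expected_transplants(True, p_better=0.5, p_lost=0.1)
```

Here `e_wait` works out to 2.8 against 2.0 for acting now, so the model prefers to wait – yet in the 10% branch a patient who could have been matched today is not. That gap between the average case and the individual case is precisely the ethical tension.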

The beauty of mathematical programming is that we put our cards on the table. We build models and list our assumptions explicitly. It also affords us the ability to make different assumptions and understand (however imperfectly) their impact. Our models seldom capture all the complexities of nature and ethics, but they are often so much better than simply guessing. Awasthi and Sandholm are doing us all a favor when they raise these tough questions because they cause us to think more deeply about what the hell is going on. It is my hope that when we apply deep math to solve real problems we are just as transparent about policies and tradeoffs.