Optimizing 19th Century Typewriters

The long title for this post is: “Optimizing 19th Century Typewriters using 20th Century Code in the 21st Century”.

Patrick Honner recently shared Hardmath123’s wonderful article “Tuning a Typewriter”. In it, Hardmath123 explores finding the best way to order the letters A-Z on an old and peculiar typewriter. Rather than having a key for each letter as on a modern keyboard, the letters are laid out on a horizontal strip. You shift the strip left or right to find the letter you want, then press a key to enter it:

[Image: the typewriter’s letters sit on a horizontal strip that shifts left and right]

What’s the best way to arrange the letters on the strip? You probably want to do it in such a way that you have to shift left and right as little as possible. If consecutive letters in the words you’re typing are close together on the strip, you will minimize shifting and type faster.

The author’s approach is to:

  • Come up with an initial ordering at random,
  • Compute the cost of the arrangement by counting how many shifts it takes to type out three well-known books,
  • Try to find two letters that, when swapped, result in a lower cost,
  • Swap them, and repeat until no improving swap can be found.

This is a strong approach that leads to the same locally optimal arrangements, even when you start from very different initial orderings. It turns out that this is an instance of a more general optimization problem with an interesting history: quadratic assignment problems. I will explain what those are in a moment.

Each time I want to type a letter, I have to know how far to shift the letter strip. That depends on two factors:

  1. The letter that I want to type in next, e.g. if I am trying to type THE and I am on “T”, “H” comes next.
  2. The location of that next letter relative to the current one. For example, if H is immediately to the left of T, then it is one shift away.

If I type in a bunch of letters, the total number of shifts can be computed from two matrices:

  • A frequency matrix F. The entry in row R and column C is a count of how often letter R precedes letter C. If I encounter the word “THE” in my test set, then I will add 1 to F(“T”, “H”) and 1 to F(“H”, “E”).
  • A distance matrix D. The entry in row X and column Y is the number of shifts between positions X and Y on the letter strip. For example, D(X, X+1) = 1 since position X is next to position X+1.

Since my problem is to assign letters to positions, if I permute the rows and columns of D accordingly, multiply elementwise with F, and add everything up, I get the total number of shifts required. We can easily compute F and D for the typewriter problem:

  • To obtain F, we can just count how often one letter follows another and record entries in the 26 x 26 matrix. Here is a heatmap for the matrix using the full Project Gutenberg files for the three test books:

[Heatmap of the 26 x 26 letter-pair frequency matrix F]

  • The distance matrix D is simple: if position 0 is the extreme left of the strip and 25 the extreme right, then d_ij = abs(i - j).

The total number of shifts is obtained by summing f_ij * d_p(i),p(j) for all i and j, where letter i is assigned to location p(i).
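To make this concrete, here is a minimal Python sketch (my own, not the code in the repository linked below; the names are invented) that builds F from raw text, builds D, and evaluates the total number of shifts for a given assignment:

```python
import string

LETTERS = string.ascii_uppercase  # the 26 letters A-Z

def frequency_matrix(text):
    """F[r][c] counts how often letter r is immediately followed by c.
    Pairs are counted only within runs of letters; punctuation and
    spaces reset the run (an assumption on my part)."""
    F = [[0] * 26 for _ in range(26)]
    prev = None
    for ch in text.upper():
        if ch in LETTERS:
            if prev is not None:
                F[ord(prev) - ord("A")][ord(ch) - ord("A")] += 1
            prev = ch
        else:
            prev = None
    return F

# D[x][y] is the number of shifts between strip positions x and y.
D = [[abs(x - y) for y in range(26)] for x in range(26)]

def total_shifts(p, F, D):
    """Sum of F[i][j] * D[p[i]][p[j]], where p[i] is the strip position
    assigned to letter i."""
    return sum(F[i][j] * D[p[i]][p[j]]
               for i in range(26) for j in range(26))
```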

Our problem boils down to finding a permutation that minimizes this sum. Since the cost depends on products of entries from two matrices, it is referred to as a Quadratic Assignment Problem (QAP). In fact, problems very similar to this one are part of the standard test suite for QAP researchers, called “QAPLIB”. The so-called “bur” problems have similar flow matrices but different distance matrices.

We can use any QAP solution approach we like to try to solve the typewriter problem. Which one should we use? There are two types of approaches:

  • Those that lead to provably globally optimal solutions,
  • Heuristic techniques that often provide good results, but no guarantees on “best”.

QAP is NP-hard, so finding provably optimal solutions is challenging. One approach for finding optimal solutions, called “branch and bound”, boils down to dividing and conquering by making partial assignments, solving less challenging versions of these problems, and pruning away assignments that cannot possibly lead to better solutions. I have written about this topic before. If you like allegories, try this post. If you prefer more details, try my PhD thesis.
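If you want the flavor of branch and bound without the machinery, here is a toy sketch (nothing like a production solver, which relies on far stronger lower bounds). It assigns letters to strip positions one at a time, and prunes a branch as soon as the cost already locked in by the partial assignment matches or exceeds the best complete solution found so far:

```python
def branch_and_bound(F, D, n):
    """Exhaustive search with pruning, workable only for tiny n.
    Pruning is valid because the partial cost never decreases as
    more letters are assigned."""
    best_cost = float("inf")
    best_perm = None

    def partial_cost(p):
        k = len(p)
        return sum(F[i][j] * D[p[i]][p[j]]
                   for i in range(k) for j in range(k))

    def extend(p, used):
        nonlocal best_cost, best_perm
        if len(p) == n:
            best_cost, best_perm = partial_cost(p), p[:]
            return
        for pos in range(n):
            if pos in used:
                continue
            p.append(pos)
            used.add(pos)
            if partial_cost(p) < best_cost:  # otherwise prune
                extend(p, used)
            used.remove(pos)
            p.pop()

    extend([], set())
    return best_perm, best_cost
```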

The typewriter problem is size 26, which counts as “big” in the world of QAP. Around 20 years ago I wrote a very capable QAP solver, so I recompiled it and ran it on this problem – but didn’t let it finish. I am pretty sure it would take at least a day of CPU time to solve, and perhaps more. It would be interesting to see if someone could find a provably optimal solution!

In the meantime, this still leaves us with heuristic approaches. Here are a few possibilities:

  • Local optimization (Hardmath123’s approach finds a locally optimal “2-swap”; a sketch follows this list)
  • Simulated annealing
  • Evolutionary algorithms
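Here is what the 2-swap local search might look like, reusing total_shifts from the sketch above:

```python
import random

def two_swap(F, D, seed=0):
    """Start from a random arrangement and keep applying improving
    swaps until no swap of two letters lowers the shift count."""
    rng = random.Random(seed)
    p = list(range(26))
    rng.shuffle(p)
    best = total_shifts(p, F, D)
    improved = True
    while improved:
        improved = False
        for a in range(26):
            for b in range(a + 1, 26):
                p[a], p[b] = p[b], p[a]       # try swapping two letters
                c = total_shifts(p, F, D)
                if c < best:
                    best, improved = c, True  # keep the improvement
                else:
                    p[a], p[b] = p[b], p[a]   # undo the swap
    return p, best
```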

I ran a heuristic written by Éric Taillard called FANT (Fast ant system). I re-ran his 1998 code on my laptop, and within seconds it produced the same permutation as Hardmath123. By the way, the zero-based permutation is [9, 21, 5, 6, 12, 19, 3, 10, 8, 24, 1, 16, 18, 7, 15, 22, 25, 14, 13, 11, 17, 2, 4, 23, 20, 0] (updated 12/7/2018 – a previous version of this post gave the wrong permutation. Thanks Paul Rubin for spotting the error!)

You can get the data for this problem, as well as a bit of Python code to experiment with, in this git repository.

It’s easy to think up variants to this problem. For example, what about mobile phones? Other languages? Adding punctuation? Gesture-based entry? With QAPs, anything is possible, even if optimality is not practical.


The Origin of CC and BCC

Those born into the computer age use metaphors without awareness of their origins. I will explain one such metaphor painfully, and at length: CC.

Before the advent of word processors and personal computers, the typewriter was the dominant tool for producing professional documents. Here is a picture of Robert Caro’s typewriter, the Smith-Corona Electra 210:

[Photo: Robert Caro’s Smith-Corona Electra 210]

An advantage of typewriters is that they can produce legible, consistently formatted documents. A disadvantage is that they do not scale: 1000 copies require 1000 times the work, unless other accommodations are made. The mass production of a single document was the primary job of the printing press. Later, the mimeograph and the photocopier began to be used in certain schools and organizations. But printing presses, mimeographs, and photocopiers were all expensive.

So along with these more sophisticated tools, there was a simpler, more primal method for document duplication – the carbon copy. Carbon paper, a sheet with bound dry ink on one side, was placed in between two conventional sheets of paper, and the triplet fed into the typewriter. When the keys of the typewriter were struck, the type slug pressed against the ribbon, marking the top page with a character. The slug also pressed against the carbon paper, pressing the dried ink on the back side of the carbon paper onto the second plain sheet of paper, making the same mark. In this way, one key press marked two pages at once. Magic!

Style guidelines for typewritten letters and documents directed authors to indicate when multiple copies of the same document were being distributed to multiple recipients. The notation for this notification was to list the recipients after “cc”, for “carbon copy”.

In other words, an abbreviation for the means of duplication became a notification of duplication.

A variant is the “blind carbon copy”, or BCC. This originally meant carrying out the physical act of duplication – using carbon paper – but omitting the notification. Hence the “blind”: if you are looking at the document, you cannot determine the list of recipients. This carried over to email too.

If you are my age or older, you already knew this. If you are younger, you very likely did not. I was interested in computers in middle school but computer classes were not available. I did the next best thing and took a typing class. That’s how I learned.

Chaining Machine Learning and Optimization Models

Rahul Swamy recently wrote about mixed integer programming and machine learning. I encourage you to go and read his article.

Though Swamy’s article focuses on mixed integer programming (MIP), a specific category of optimization problems for which there is robust, efficient software, his article applies to optimization generally. Optimization is goal seeking: searching for the values of variables that lead to the best outcomes. Optimizers solve for the best variable values.

Swamy describes two relationships between optimization and machine learning:

  1. Optimization as a means for doing machine learning,
  2. Machine learning as a means for doing optimization.

I want to put forward a third, but we’ll get to that in a moment.

Relationship 1: you can always describe predicting in terms of solving. A typical flow for prediction in ML is:

  1. Get historical data for:
    1. The thing you want to predict (the outcome).
    2. Things that you believe may influence the predicted variable (“features” or “predictors”).
  2. Train a model using the past data.
  3. Use the trained model to predict future values of the outcome.

Training a model often means “find model parameters that minimize prediction error on the training data”. Training is solving. Here is a visual representation:

[Diagram: solving as a subroutine of predicting]
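As a tiny illustration (my own toy example, not one from Swamy’s article), here is line fitting posed explicitly as a solving problem:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 50)
y = 3.0 * x + 2.0 + rng.normal(0, 1.0, 50)  # noisy line, slope 3

def prediction_error(params):
    slope, intercept = params
    return np.sum((y - (slope * x + intercept)) ** 2)

# "Training" the model is literally solving a minimization problem.
fit = minimize(prediction_error, x0=[0.0, 0.0])
print(fit.x)  # roughly [3.0, 2.0]
```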

Relationship 2: you can also use ML to optimize. Swamy gives several examples of steps in optimization algorithms that can be described using the verbs “predict” or “classify”, so I won’t belabor the point. If the steps in our optimization algorithm are numbered 1, 2, 3, the relationship is like this:

[Diagram: predicting as a subroutine of solving]

In these two relationships, one verb is used as a subroutine for the other: solving as part of predicting, or predicting as part of solving.

There is a third way in which optimization and ML relate: using the results of machine learning as input data for an optimization model. In other words, ML and optimization are independent operations but chained together sequentially, like this:

[Diagram: an ML model feeding its output into an optimization model]

My favorite example involves sales forecasting. Sales forecasting is a machine learning problem: predict sales given a set of features (weather, price, coupons, competition, etc.). Typically businesses want to go further than this. They want to take actions that will increase future sales. This leads to the following chain of reasoning:

  • If I can reliably predict future sales…
  • and I can characterize the relationship between changes in feature values and changes in sales (‘elasticities’)…
  • then I can find the set of feature values that will increase sales as much as possible.

The last step is an optimization problem.
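Here is a toy version of the whole chain, with made-up numbers. (I maximize revenue, price times predicted sales, rather than raw sales, so that the optimum is not just the cheapest allowed price.)

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Step 1 (ML): fit a simple model of sales as a function of price.
prices = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 3.5])
sales = np.array([90.0, 85.0, 70.0, 52.0, 33.0, 15.0])
model = np.poly1d(np.polyfit(prices, sales, deg=2))

# Step 2 (optimization): choose the price that maximizes predicted
# revenue. The optimizer never sees the raw data, only the fitted
# model; that separation is the chaining.
result = minimize_scalar(lambda p: -p * model(p),
                         bounds=(1.0, 3.5), method="bounded")
print(result.x)  # revenue-maximizing price under the fitted model
```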

But why are we breaking this apart? Why not just stick the machine learning (prediction) step inside the optimization? Why separate them? A few reasons:

  • If the ML and optimization steps are separate, I can improve or change one without disturbing the other.
  • I do not have to do the ML at the same time as I do the optimization.
  • I can simplify or approximate the results of the ML model to produce a simpler optimization model, so it can run faster and/or at scale. Put a different way, I want the structure of the ML and optimization models to differ for practical reasons.

In the machine learning world it is common to refer to data pipelines. But ML pipelines can involve models feeding models, too! Chaining ML and optimization like this is often useful, so keep it in mind.

Other Minds: Consciousness and Evolution

I highly recommend Other Minds, by Peter Godfrey-Smith. It’s a fascinating exploration of the minds of cephalopods, who independently from vertebrates developed sophisticated nervous systems and what any reasonable person would call intelligence. In so doing, Godfrey-Smith explores the tree of life, the origins and components of complex thought and consciousness, and the ways of formless, curious creatures deep below. Read this book!

[Image: cover of Other Minds]

Godfrey-Smith describes two important revolutionary periods in Earth’s evolutionary history. In each case, a means of communication between organisms became a means of communication within them.

The first is the Sense-Signaling revolution. Roughly 700 million years ago, the first organisms that we could reasonably call animals – sensing and acting organisms – evolved. Just a bit later, around 542 million years ago according to Godfrey-Smith, certain organisms began to develop not only sensing mechanisms, but signaling mechanisms too. Both sensing and signaling, directed outward, provide evolutionary benefits: they help animals navigate and influence their environments. There is another advantage: these same sensing and signaling mechanisms can be used inside the space of the organism to better coordinate its sense-action loop. Input is processed through the senses, and then signaled in a targeted fashion to another part of the organism (be it tentacle, flipper, paw, or hand) to generate a specific response. This kind of targeted, internal signaling happens only in higher-order organisms like animals, and its appearance marks the beginnings of the development of the nervous system. Hundreds of millions of years later, the sensing and signaling mechanisms found in animals are often incredibly complex.

The second is Language. Less than half a million years ago, human language emerged from simpler forms of communication. Language is nothing but an elaborate, auditory form of sensing and signaling. Its more rudimentary forms are used by our primate cousins to warn, coax, plead, and threaten. In humans, these signals became more universal in expressive power, and more nuanced (despite recent examples to the contrary).

Godfrey-Smith, building on Hume, Vygotsky and others, notes that speech is not only for others, but for ourselves. Each of us has an inner dialogue that runs through our heads from waking to sleep, and even in our dreams. Our inner speech is inseparable from our conscious selves. In a beautiful passage, Godfrey-Smith writes, “inner speech is a way your brain creates a loop, intertwining the construction of thoughts and the reception of them.” This loop not only helps us direct our action, but it can clarify, integrate, and reinforce our conceptions. For Godfrey-Smith and others such as Baars and Dehaene (whom I’ll cover in a future post, since Consciousness and the Brain is amazing), our inner speech is a necessary ingredient in our integrated subjective experience as human beings. It helps us to direct our thoughts in a deliberate, planful way – the “System 2” thinking that Kahneman writes about in Thinking, Fast and Slow.

The Sense-Signaling and Language revolutions were both forerunners of radical planetary change. In the first case, the Cambrian explosion, in the second the rise to primacy of homo sapiens.

Speaking of us: it is interesting to compare the revolutions described by Godfrey-Smith to those described by Yuval Harari in Sapiens: A Brief History of Humankind. Harari’s revolutions, as summarized by Wikipedia, are the following:

  • “The Cognitive Revolution (c. 70,000 BCE, when Sapiens evolved imagination).
  • The Agricultural Revolution (c. 10,000 BCE, the development of farming).
  • The unification of humankind (the gradual consolidation of human political organisations towards one global empire).
  • The Scientific Revolution (c. 1500 CE, the emergence of objective science).”

Harari’s Cognitive Revolution, in my view, maps reasonably well to Godfrey-Smith’s Language revolution. The remaining items in Harari’s list, when considered in Godfrey-Smith’s context, seem like nearly inevitable consequences of the first. Perhaps I am giving us too little credit, or perhaps too much.

Four Things I Learned from Jack Dongarra

Opening the Washington Post today brought me a Proustian moment: encountering the name of Jack Dongarra. His op-ed on supercomputing involuntarily recalled to mind the dusty smell of the third floor MacLean Hall computer lab, xterm windows, clicking keys, and graphite smudges on spare printouts. Jack doesn’t know it, but he was a big part of my life for a few years in the 90s. I’d like to share some things I learned from him.

I am indebted to Jack. Odds are you are too. Nearly every data scientist on Earth uses Jack’s work every day, and most don’t even know it. Jack is one of the prime movers behind the BLAS and LAPACK numerical libraries, and many more. BLAS (Basic Linear Algebra Subprograms) and LAPACK (Linear Algebra Package) are programming libraries that provide foundational routines for manipulating vectors and matrices. These routines range from the rocks and sticks of addition, subtraction, and scalar multiplication up to finely tuned engines for solving systems of linear equations, factorizing matrices, determining eigenvalues, and so on.

Much of modern data science is built upon these foundations. They are hidden by layers of abstractions, wheels, pips and tarballs, but when you hit bottom, this is what you reach. Much of ancient data science is built upon them too, including the solvers I wrote as a graduate student when I was first exposed to his work. As important as LAPACK and BLAS are, that’s not the reason I feel compelled to write about Jack. It’s more about how he and his colleagues went about the whole thing. Here are four lessons:

Layering. If you dig into BLAS and LAPACK, you quickly find that the routines are carefully organized. Level 1 routines are the simplest “base” routines, for example adding two vectors. They have no dependencies. Level 2 routines are more complex because they depend on Level 1 routines – for example multiplying a matrix and a vector (because this can be implemented as repeatedly taking the dot product of vectors, a Level 1 operation). Level 3 routines use Level 2 routines, and so on. Of course all of this is obvious. But we dipshits rarely do what is obvious, even these days. BLAS and LAPACK not only followed this pattern, they told you they were following this pattern.
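The idea in miniature (a toy illustration of mine, not actual BLAS code):

```python
def dot(x, y):
    """Level 1: a vector-vector operation with no dependencies."""
    return sum(xi * yi for xi, yi in zip(x, y))

def matvec(A, x):
    """Level 2: matrix-vector multiply, built on the Level 1 dot."""
    return [dot(row, x) for row in A]

print(matvec([[1.0, 2.0], [3.0, 4.0]], [1.0, 1.0]))  # [3.0, 7.0]
```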

I guess I have written enough code to have acquired the habit of thinking this way too. I recall having to rewrite a hilariously complex beast of project scheduling routines when I worked for Microsoft Project, and I tried to structure my routines exactly in this way. I will spare you the details, but there is no damn way it would have worked had I not strictly planned and mapped out my routines just like Jack did. It worked, we shipped, and I got promoted.

Naming. Fortran seems insane to modern coders, but it is of course awesome. It launched scientific computing as we know it. In the old days there were tight restrictions on Fortran variable names: 1-6 characters, letters and digits only, starting with a letter. With a large number of routines, how does one choose names that are best for programmer productivity? Jack and team zigged where others might have zagged and chose names with very little connection to English naming.

“All driver and computational routines have names of the form XYYZZZ”

where X represents data type, YY represents type of matrix, and ZZZ is a passing gesture at the operation that is being performed. So SGEMV means “single precision general matrix-vector multiplication”.

This scheme is not “intuitive” in the sense that it is not named GeneralMatrixVectorMultiply or general_matrix_vector_multiply, but it is predictable. There are no surprises and the naming scheme itself is explicitly documented. Developers of new routines have very clear guidance on how to extend the library. In my career I have learned that all surprises are bad, so sensible naming counts for a lot. I have noticed that engineers whom I respect also think hard about naming schemes.
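Those names are still in service today. If you have SciPy installed, you can call the underlying BLAS routines directly and see the scheme at work:

```python
import numpy as np
from scipy.linalg.blas import dgemv  # Double precision, GEneral, Matrix-Vector

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
x = np.array([1.0, 1.0])

# Computes alpha * A @ x; the name alone tells you precision and operation.
y = dgemv(alpha=1.0, a=A, x=x)
print(y)  # [3. 7.]
```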

Documentation. BLAS and LAPACK have always had comprehensive documentation. Every parameter of every routine is documented, the semantics of the routine are made clear, and “things you should know” are called out. This has set a standard that high quality libraries (such as the tidyverse and Keras – mostly) have carried forward, extending this proud and helpful tradition.

Pride in workmanship. I can’t point to a single website or routine as proof, but the pride in workmanship in the Netlib has always shone through. It was in some sense a labor of love. This pride makes me happy, because I appreciate good work, and I aspire to good work. As a wise man once said:

Once a job is first begun,
Never leave it ’till it’s done.
Be the job great or small,
Do it right or not at all.

Jack Dongarra has done it right. That’s worth emulating. Read more about him here [pdf] and here.

[Photo: Jack Dongarra]

2018 NCAA Tournament Picks

Every year since 2010 I have used data science to predict the results of the NCAA Men’s Basketball Tournament. In this post I will describe the methodology that I used to create my picks (full bracket here). The model has Virginia, Michigan, Villanova, and Michigan State in the Final Four with Virginia defeating Villanova in the championship game:

[Image: the full predicted bracket]

Here are my ground rules:

  • The picks should not be embarrassingly bad.
  • I shall spend no more than one work day on this activity (and 30 minutes for this post). This year I spent two hours cleaning up and running my code from last year.
  • I will share my code and raw data. (The data is available on Kaggle. The code is not cleaned up but here it is anyway.)

I used a combination of game-by-game results and team metrics from 2003-2017 to build the features in my model.

I also performed some post-processing:

  • I transformed team ranks to continuous variables using a heuristic created by Jeff Sonos.
  • Standard normalization.
  • One hot encoding of categorical features.
  • Upset generation. I found the results to be not interesting enough, so I added a post-processing function that looks for games where the win probability for the underdog (a significantly lower seed) is quite close to 0.5. In those cases the model picks the underdog instead.

The model predicts the probability one team defeats another, for all pairs of teams in the tournament. The model is implemented in Python and uses logistic regression. Here is a skeletal sketch of the idea, with synthetic data and invented names (the real features and code are linked above), including a guess at how the upset rule might work:
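```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for historical games: each row holds feature
# differences (team A minus team B) for one matchup; y = 1 if A won.
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 6))
w = rng.normal(size=6)
y = (X @ w + rng.normal(0, 0.5, 500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

def pick_winner(feature_diff, a_is_underdog, eps=0.05):
    """Predict a matchup, flipping near coin-flips to the underdog in
    the spirit of the upset-generation step described above."""
    p_a = model.predict_proba([feature_diff])[0, 1]  # P(team A wins)
    if a_is_underdog and 0.5 - eps < p_a < 0.5:
        return "A"  # manufactured upset
    return "A" if p_a >= 0.5 else "B"

print(pick_winner(X[0], a_is_underdog=True))
```

The model usually performs well. Let’s see how it does this year!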

Advice for Underqualified Data Scientists

A talented individual seeking entry-level data science roles recently asked me for advice. “How can you show a potential employer that you’d be an asset when on paper your resume doesn’t show what other candidates have?”

I’ll stick to data science, but much of what I share applies to other roles, too.

Let’s think about the question first. Why do coursework and skills matter for employers? It depends. Different employers have different philosophies about how they evaluate candidates. Most job listings specify required skills and qualifications for applicants, for example “must have 3-5 years of experience programming in R or Python.” Usually there is more to the story. Sometimes employers don’t expect candidates to meet all the criteria. Other times, the criteria are impossible to meet.

In most situations, employers are looking for additional attributes not provided in the job listing. Some employers will tell you their philosophy by listing the attributes they value: “ability to deal with ambiguous situations”, “being a team player”, “putting the customer first”, “seeks big challenges”, and so on. Others don’t. Even if they tell you, you don’t typically know which attributes are most important. What really matters? If I am a so-so programmer but a brilliant statistician, do I have a shot?

Individuals who make hiring decisions have a mental image of how a successful candidate will perform on the job. This mental image includes possessing and using a certain set of skills. Qualifications such as a degree, a certificate, or code on github provide part (but only part) of the evidence necessary to assure hiring managers that they are making a sound decision.

Let’s be simplistic and say that employers consider both “explicit skills” and “implicit skills”. Examples of explicit skills are demonstrated knowledge or capability with programming language X, technology Y, or methodology Z. Examples of implicit skills might be the ability to break down a complicated problem into its constituent parts, dealing with ambiguity, working collaboratively, and so on. Certainly some employers are very focused on finding candidates with explicit skills, sometimes to the exclusion of implicit skills.

A reframing of the question is then: “If I sense that a potential employer is looking for certain explicit skills and I don’t think I have them, what do I do?” Here are some ideas:

Provide evidence you are good at acquiring explicit skills. Give an example of learning an explicit skill. (“No, I don’t know R, but I know Python. In my blah blah class I had to learn Python so I could apply it to XYZ problem, and it was no big deal. I did ABC and now my code is up on github. Learning R is really not a big deal; I’m confident I could hit the ground running. What would you have in mind for my first project?”)

Emphasize your implicit skills. Game-plan the questions you’ll be asked and think about how you’d highlight what you believe to be your differentiating skills. (Without sounding like a politician.) By the way, now that I think about it, I followed my own advice when I interviewed at Market6 (now 84.51). I talked about the fact that I had worked in both software engineering and data science roles, and that this made me uniquely qualified to work at a company trying to deliver data science at scale through SaaS offerings.

Do your own screening. Focus your search on employers who seem to value implicit skills. Rule out others. Do your research prior to applying. Ask friends or contacts. Early in your conversations with employers you can ask the recruiter about their philosophy. Not every job is right for you, so try and figure out which ones are.

That’s all I’ve got. I will close by telling two quick stories.

First story: My first job after finishing my PhD was as an entry level software engineer at Microsoft. When I interviewed, I was fortunate because Microsoft weighted implicit skills highly in their evaluation process. One of my favorite bosses at Microsoft was a classics major (as in Euripides, not the Stones). Another engineering manager started his career localizing dialog box messages into French. Oui, c’est vrai. Both had, and continue to have, a very strong set of implicit skills. They, in turn, looked for implicit skills. Talent comes in many different packages.

Second story: I believe that for early-career positions it’s important to weight implicit skills more highly than explicit ones. Sometimes it’s a relief if certain explicit skills aren’t there! Several years ago, I had an entry-level scientific researcher on my team who did not know how to code, in a position where lots of coding was required. This individual had very deep knowledge of optimization and statistics, was a hard worker, and was incredibly motivated. I was thrilled that they didn’t know how to code, because then I could teach them! No bad habits!