Happy New Year! I look forward to a fun year of blogging – I have a number of hopefully interesting posts brewing involving: text analytics, college athletic conference comparisons, Big Data, the Open Source movement, former employers, and of course, chessboxing. The caveat is that I only post when it doesn’t cut into my regular work, and I really can’t spare much time these days. So we’ll see how much of that I get to!
The top response includes a breakdown of the various engineering roles at Facebook, including “Analyst”:
Analyst: when we’d get too carried away in debates in meetings, one of the eng managers would often remark: "warning: we are entering a data-free zone." The meaning was that without grounding our arguments in data, we’re just talking about opinions. The analysts at FB are crucial for keeping everyone grounded in actual numbers. How well/badly are we doing? What should be our measure of success? How do we tell if something is broken? Analysts play a huge role at Facebook, which will continue to be true as the company grows larger.
This strikes me as a pretty good idea.
The obvious counter is, “Do you really need a separate job description for this? Shouldn’t everyone on the team be an Analyst? Shouldn’t everyone use data to inform decisions?” Well, yes, certainly. But I like the idea of a defined role that attaches this responsibility to a particular person. After all, everyone on an engineering team should be concerned about quality, yet most agree that it is a good idea to have a Test/QA job function. Just as an effective QA team builds a culture that values quality, an effective analyst has the potential to build a culture of data-driven decision making. Additionally, an Analyst role allows for specialization in techniques (regression, data mining, optimization, data collection) and tools – just as many engineering teams have a “performance guru” who can profile anything, anywhere, any time.
On the other hand, I’m speculating. I have never worked on a team with such a role. Have you?
(Thanks to Michael Trick for the tip!)
Okay, so what is “better” anyway? I get the sense that for many operations research insiders, “better” is another word for “faster”, but that is wrong, wrong, wrong. “Better” means different things to different people. For example:
- More accurate.
- Less prone to failure.
- Easier to use by a broader set of people.
- Faster to develop a solution.
- Easier to integrate with other systems.
- Better supported.
- Easier to customize and modify.
- Easier to share.
- Uses less memory.
- Unencumbered by intellectual property concerns.
as well as, of course, faster.
“Better” is a multiobjective problem: most of us actually want many of the things on the list. How we weight the various factors depends in part on what the software is being used for:
- For academic research,
- For rapid prototyping,
- To create a model for a consulting engagement,
- For a production system.
Some of these factors can be measured (and are, thanks to the tireless efforts of Hans Mittelmann and others) while others are more subjective. Even if we are focusing exclusively on “faster”, the picture remains complicated. In a production system what matters is how quickly users get the results they want. So we care about not only the time that the solver takes, but also:
- How long it takes to retrieve the data and assemble the model to be solved,
- The predictability of response time over different user requests,
- How the solver performs in the face of many simultaneous requests.
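To make the point concrete, here is a minimal Python sketch of instrumenting each stage of a request separately rather than timing only the solve. The stage names and the dummy work are entirely made up for illustration:

```python
# Illustrative only: the stages and their "work" are stand-ins. The point is
# that the user experiences the end-to-end time, not just the solver time.
import time

def timed(label, fn):
    """Run fn, print how long it took, and return (result, elapsed seconds)."""
    start = time.perf_counter()
    result = fn()
    elapsed = time.perf_counter() - start
    print(f"{label}: {elapsed:.3f}s")
    return result, elapsed

stages = {
    "retrieve data":  lambda: sum(range(100_000)),
    "assemble model": lambda: [i * i for i in range(100_000)],
    "solve":          lambda: sorted(range(100_000), reverse=True),
}

total = 0.0
for label, fn in stages.items():
    _, elapsed = timed(label, fn)
    total += elapsed
print(f"end-to-end: {total:.3f}s")  # what the user actually waits for
```

Instrumentation like this often reveals that data retrieval or model assembly dominates the solve itself, which changes where the optimization effort should go.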
Solver runtime differences of 5-10% don’t matter that much, generally speaking. I like to categorize how long an operation takes in real-world terms (I stole this idea from somebody else, but I don’t remember who):
- instantaneous (subsecond)
- the time it takes to check espn.com and/or twitter (5-10 seconds)
- get coffee (a few minutes)
- have lunch (30 minutes or so)
- a weekend
It’s usually not worth making an engineering decision based solely on performance unless the change moves you to a different bucket. If it doesn’t, you probably have better things to do.
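The bucket idea can be expressed as a tiny helper function. This is just a sketch; the thresholds are my own rough choices, not anything canonical:

```python
# A sketch of the "real-world buckets" above. Thresholds are illustrative.
def time_bucket(seconds):
    """Map a runtime in seconds to a human-scale category."""
    if seconds < 1:
        return "instantaneous"
    elif seconds <= 10:
        return "check espn.com / twitter"
    elif seconds <= 5 * 60:
        return "get coffee"
    elif seconds <= 60 * 60:
        return "have lunch"
    else:
        return "a weekend"

# A 5-10% runtime improvement rarely changes the bucket:
print(time_bucket(120))        # "get coffee"
print(time_bucket(120 * 0.9))  # still "get coffee"
```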
Hiring is the most important activity for any organization. That subject has been covered many times in many ways, so I won’t. Performance evaluation is in the top five, especially because it is linked to compensation. Yes, here’s my blog post about assigning numbers to people.
The back story for this post is that I recently spent all day in performance reviews for our organization. While describing the nature of the process and the details of who said what during today’s session would be great for page views, I’ll steer clear. I want to write about performance reviews because people are almost universally freaked out by them. It’s healthy to do regular, formal, qualitative evaluations of performance…when the evaluations are done the right way.
What are reviews like? Not every reader has been through a formal performance review, at least not in an industry setting. I don’t claim to have had a representative experience either as an employee or manager; I only know what I know. If you want to know more about "how things work" in other places, Google (or Bing) away. I can tell you that my own experiences have been pretty consistent:
- Performance is reviewed formally once or twice a year.
- Employees fill out a form where they talk about what they’ve done.
- Managers rate their employees’ performance and discuss with their peers.
- The ratings are sent up the management chain, where a series of calibrations take place to make sure everyone’s grading according to the same curve.
- The final ratings come back down the management chain.
- The review is used to determine compensation: pay raises, bonuses, stock, promotions.
- Managers and employees have a discussion about the results of the review.
And the circle of life begins anew. Is all of this necessary? Technology people – geeks – really hate this stuff. At the lunch table you will be told that performance reviews are the tool of "pointy hairs" used to suppress and control the free-spirited hacker who knows what is right but is not allowed to do it. I am sure there are a thousand smug Dilbert cartoons on the subject. (I despise Dilbert.) Reviews are sometimes used to control, suppress, and annoy, but this is a symptom of organizational (and sometimes personal) issues. There are legitimate reasons for reviews. It may sound crass, but:
- money is a huge motivator,
- there are limited resources, and
- there needs to be a process whereby the cash is fairly distributed.
A formal review system at least affords the opportunity to make the process somewhat transparent. If you buy that premise, and you’re in a relatively big organization, then most of the steps above kind of make sense. If you’re in a four person startup, maybe not.
For most of us privileged enough to be in a field like ours, there’s more to it than just money. Different factors motivate us and provide meaning to our work. Since many organizations are publicly owned and therefore laser focused on profit, there is often a tension between employees – people – desiring to find greater meaning at work and the organization’s need to meet its goals. In many cases the things that provide meaning have nothing to do with money. A manager’s job should be to thread the needle and do right by both the employee and the organization. Those with subscriptions to The Baffler may have snorted chocolate milk out of their noses at this point because this view may sound naive or even exploitative. I kind of get the skepticism, but all I can say is that if I felt I were in a place where doing both wasn’t possible, I’d leave. If the people that comprise an organization really want to be about something, then there is no better way to make a statement than through performance reviews. The statement might be "seemingly thankless work like maintaining a build system is valued", or "time spent mentoring new employees matters", or "showboating is lame and counterproductive", or simply "you’re doing an awesome job and we want you to stay." If management is a tightrope act balancing individual development and organizational goals, then performance reviews should be seen as the pole. Stabilizing weight hangs down on either side; a helpful burden.
So what is required to do it right? Above all else, everyone who participates in the process needs to be respectful of everyone else and truthful in their dealings. Most of the horror stories that people have about reviews boil down to a failure in one of these two areas. My own personal horror stories certainly were. (My horror stories: plural, redacted from this post, and unrelated to my current employer.) Face it: any system that involves people will fail without respect and truth. The next most important thing is shared understanding of organizational values. Sometimes the review process itself can help a team understand and articulate its own values more clearly by virtue of needing to introspect. A common trap is assessing the value of heroism: the coder who threw down a few 80 hour weeks to design and implement a brand new system to meet a deadline. New organizations risk overvaluing heroes; stagnant organizations risk undervaluing them. Organizational values make it possible to evaluate contributions. Obviously in order to carry out the evaluation you need to have a clear understanding of what people are doing and the associated value for the organization. Sometimes people spend a lot of time doing an outstanding job on tasks that are not particularly important. Who gets the blame for that depends on the situation, and yeah, realistically sometimes part of the review process is assigning blame – or "responsibility" if you want to be more PC about it.
The review process and its results should not take anyone by surprise. The #1 unwelcome surprise is when a manager tells a direct report that they’re getting a bad review when the employee isn’t expecting it. That sucks for everyone, and the blame lies entirely with the manager. It’s incumbent upon the manager to treat their employees with respect and stay up-to-date with how things are going. A manager may be tempted to blame "the system" in such cases: "I thought you’ve done a fine job, but you know how it is with the curve and all…I did what I could for you but I just couldn’t make the case well enough for you." It’s a reality that differences of opinion exist because no two pairs of heads or hearts are the same, but that’s no excuse for weaseling out of being straight with your team. Hearts and minds should be joined long before the day of judgment arrives. We owe it to each other as a team. The last important ingredient is peer feedback at multiple levels. When collecting information for a review, the most important resource is an employee’s coworkers. Asking them directly what their peer is doing well and what needs improvement is a smart thing to do. As evaluations are reviewed by upper management, repeating this process is important. We all know that some of us are easy graders and others are tougher, so the goal is to be fair by accounting for these differences. Peer feedback needs to be shared at the end of the process (withholding names unless permission has been received) so that employees understand that the evaluation is based on the team’s input.
It’s easy, right? Be respectful, be truthful, understand your values, know what people are up to, and communicate. No, it’s not easy. It takes practice, but remember that reviews done right have still other benefits. They can inform the hiring process. If you know how to evaluate the employees that you have, you know how to look for (and get) the employees that you want. Reviews really can be a positive learning experience for everyone. The downside is that if important ingredients are missing, or applied in the wrong proportion, reviews can be a nightmarish burden. Don’t be one of those teams, and don’t shrink from the challenge!
This is too long for a tweet, so I will make it an extremely brief blog post.
It’s amazing how often engineering managers will spend all night fixing a bug or working on a PowerPoint deck, but will not spend an hour thinking about how to build a team that works together effectively.
(And for the record, my current manager does not suffer from this problem. On the contrary: he thinks about this stuff constantly, and it shows.)
(Another in a series.)
When you work at a big software company, the design choices that you make today will shape your destiny for years to come. Much more so than at a smaller, more agile unit that may throw out code or rewrite it with abandon. The reason is simple: the tradeoffs are different when a bunch of people are trying to make improvements to the same thing at the same time.
There is always pressure to put the squeeze on design time. I don’t know how many times (on certain teams) I was asked to complete design in one week out of an eight-week sprint. Even then, I sometimes had to spend half of that time cleaning up messes from the previous sprint. So much for measure twice, cut once! It’s true that everyone says that they need more time for each phase of a project, whether it’s requirements, design, implementation, or testing. But I really mean what I say – too little time is spent on design. If given the choice I would gladly trade implementation or testing time for design time, because when I design I still control my own destiny – even over requirements. Once coding begins, the team is carried along by the current and making changes is painful. It’s ironic that all of these things are true, yet the pressure to squeeze design time is entirely justified. There is a huge tendency, even among senior developers, to tinker, re-tinker, and re-re-tinker with designs, making them bigger, smaller, more intricate, more abstract, and so on.
You can strike a balance by ensuring that:
- Dedicated time is allocated for design (and only design). Reducing this time means reducing project scope.
- There is a clearly defined deliverable (such as a design document and/or a presentation).
- Design starts on an individual or feature team basis and includes more people only as more confidence is developed.
- Design reviews (informal or formal) always happen before the designer is comfortable with what they have. This ensures that feedback comes after there is sufficient “meat” in the design, but before the designer has totally fallen in love with their ideas.
It’s common to underdesign important things and overdesign unimportant things. The tips above help to avoid both.
It’s also appropriate to change your design approach based on the type of component you’re dealing with. I wouldn’t design an airplane the same way I would design a toothbrush. UI frameworks are rewritten over and over. Calculation engines are not. Business rules are often changed or extended. Databases are often joined or merged but rarely discarded. Glue code is glue code.
At a big company, platform considerations are important but rarely under your control. Someone else has already made those decisions, and you are likely stuck. So you’ll have to learn to design under those constraints, but think of it as a challenge and not a burden (even though it is both).
Figuring out what the heck it is you are going to do is the most important part of any project, isn’t it? It’s the case whether you are fixing a toilet (which I outsource) or building the next version of a software package. As a team lead I always wanted everyone on the team to be able to describe in a couple of sentences what it is that we are trying to achieve, and their role in it.
Requirements are invented things. This everyone knows, but at small companies people tend to forget to write them down, and at big companies people tend to forget that they can be changed. Either way, it’s important to keep in mind that requirements must be clearly articulated; otherwise they will surely not be met. In other words, we need to write down requirements so that we will be able to determine whether we have met them. Once they’re written down, requirements documents can start to seem like stone tablets. Requirements do not exist for their own sake, but to realize a larger vision. If we can’t explain why a requirement is a requirement, then something has gone wrong. It’s common to question requirements later on in a project – perhaps they are too hard to implement. That’s okay to some extent, but in many cases people are just a bit too sloppy about thinking about (and writing down) requirements at the start of a project.
Where do requirements come from? We already said they are invented, but how do we invent them? Big companies draw from many sources:
- User surveys
- Senior management
- Customer advisory boards
- Partner feedback
The “Building Windows 8” blog is a wonderful public example of such artifacts. The B8 blog gives a true sense of how requirements are developed for a “big league” project. Basically the idea is to be like a five-year-old: keep asking “why” questions until you get to axioms.
Also notable is the Russian doll-like nesting of requirements inside of requirements. Even a much smaller project like Solver Foundation cannot simply have one requirements document. Requirements are typically written down in increasingly narrow form: from a vision document to scenarios to themes to features to specifications. A specification, or “spec”, is the fundamental unit of requirements at a larger software shop. It describes a unit of work specified by a single PM, implemented by a single developer, and tested by a single tester during a single product release (or milestone). A good spec should:
- provide justification for the feature
- state goals
- state non-goals
- define user scenarios
- imply a self-contained unit of work
- specify integration points
- describe performance goals
- be written for the engineering team
- be self-contained
They should not:
- describe how a feature is built
- be a list of APIs
- be written in “business speak”
- be written for management
Sad to say, it is often the case that a spec ends up beginning with an amateurish mishmash of MBA gobbledygook and ending with a hastily cut-and-pasted set of API signatures, with comments from ten different people in the margins. Few things are less useful and more depressing than a spec of this kind. Be clear, don’t try to impress, and justify your reasoning. Write for someone who is smart but is not intimately familiar with your product and team history. After all, those are the kinds of people who will be using the thing you’re trying to build.
The software lifecycle is the same wherever you go. It’s one of those things that you are taught in school that really is the way it’s described. The steps are basically to figure out what you need to do (requirements), how it will work (design), do it (implementation), and verify you’ve done it right (testing). Then you’ve got to get the finished product to the people you promised to get it to (deployment). When you lay it out sequentially it feels very much like a waterfall development model, but of course all the same steps happen in agile as well. There’s been a big move towards agile over the past ten years and big companies are no exception. You do encounter a lot of “faux agile” as well (fauxgile?) – a team I was once a part of had one-hour “scrum meetings” with 15 or 20 managers with laptops sitting down in a room. I digress.
At a big operation, each stage is documented according to organizational or team standards. This is a good idea, and it’s vital if you want to be able to share institutional knowledge among past and future team members, seed the localization and user documentation teams with good information, and form the genesis of patent applications.
Accountability for different stages in the software life cycle is divided among the team. Teams are large enough to permit specialization. At Microsoft (and other places besides), the three primary job descriptions are “program manager”, “developer”, and “tester” (or QA). Program managers are accountable for requirements, developers for implementation, and QA for testing. All three disciplines are involved in all stages, but each discipline takes its turn in the spotlight as concepts move from vague notions to concrete implementation. Different outfits divide these responsibilities in different ways. The concept of a “program manager” (as opposed to project manager) was essentially invented at Microsoft and is not universal. Some teams combine dev and QA responsibilities. Other teams include operations (accountable for deployment) in the core engineering team.
This separation of powers feels like the division that exists between the legislative (PM), executive (dev), and judicial (QA) branches in government. As in government, tension sometimes exists between the three branches. Some amount of this is natural and healthy because after all, software engineering is an activity that is undertaken with limited resources under changing conditions. Tradeoffs are necessary, and figuring out how and when to make them naturally leads to differences of opinion. One difference between engineering teams and governments is that in an engineering team there is a fourth party sitting above all the others: management. Management, if it is to be useful, should step in when necessary to remind all three disciplines of their common mission and purpose, and to make the judgment calls that are necessary to keep them on track. Managers are in a good position to do that when the mission is clearly defined, when they can articulate it, and when they can relate it to the day-to-day work that their team is being asked to do. (Knuth: “the psychological profiling of a programmer is mostly the ability to shift levels of abstraction, from low level to high level. To see something in the small and to see something in the large.”)
I don’t know about you, but I’d rather be a president than a legislator or a judge. I always liked being a dev. It’s common for the devs to feel like they are special – I remember a conversation early on in my Microsoft career where a more senior dev told me that devs were special because they were the only ones that could perform the other two job functions. I have found that it is not really true – a great PM could be a dev or a tester, and a great tester could be a PM or a dev. This makes sense because in order to do your job well, you need to understand how the work you do fits into the larger story. It is common for Microsoft employees to change from one discipline to another in the course of their careers.
A “triad” of PM, dev, and tester form a basic unit that can take a portion of a product (a feature) from start to finish. It’s become more common in certain divisions at Microsoft to make this partnership more formal by calling this triad a “feature crew”. They meet regularly from the inception of the project to the very end, reviewing each other’s work and tracking its progress together. Opinions vary on whether formal feature crews are a good idea or simply bureaucracy, but I liked them. Camaraderie develops between the triad, which is enjoyable and effective.
Next time (whenever that is) I will talk a bit about the first stage of the process: requirements.
This afternoon I gave a talk at the University of Iowa ACM conference, where I spoke about software engineering in large organizations. It’s a topic I enjoy speaking and writing about, and I was particularly enthusiastic because the audience was mostly undergrads and grads in the CS department. I tried to resist the temptation to simply tell “war stories” for 75 minutes and I almost succeeded.
The premise behind the talk is that in a healthy organization, team success and professional development go hand-in-hand, but in practice the reality often differs from the ideal. A key to success and professional fulfillment is to build hard and soft skills that allow you to achieve team goals as well as individual growth in the face of these realities.
Team success and individual professional development are clearly beneficial to everyone involved, and not at all impossible to achieve simultaneously. An organization that makes long term investments in talented people working on clearly defined goals, extending the opportunity to accept and conquer big challenges is likely to succeed on both counts. (Not incidentally, it’s impossible to pull this off without creating a positive, encouraging, lively work environment.) Unhappily, it’s often the case that organizations collectively apply tactical thinking to ill-defined or changing goals, leading to poorly managed projects with periods of tedium followed by “death marches”, caffeine and cold pizza.
Employees of large organizations are not unique in facing these challenges, but the impact may be more acute because simple math says that they are likely to have less control over their professional environment. The serenity prayer comes to mind. Nevertheless, large organizations provide tremendous advantages. Big companies draw outstanding talent and are able to provide them all the tools they need to do their job. There’s a lot going on – the diversity of interesting, relevant projects at Microsoft continued to inspire and amaze me year after year. The same is true at Nielsen, and other companies. Big companies tend to have formal processes in place for employee evaluation and development. They can be great places to learn new skills, be they technical or interpersonal.
In order to understand why software development at a big company is different from a startup, you need to think about what the job requires. Software development is a creative activity requiring engineering discipline. There are huge differences in the skill level of coders, even out in the professional world. Ubercoders do exist. But that is not the key factor determining success. I like to think of software engineers in terms of two axes: creativity and engineering discipline.
No matter where you work, you want to be a pro – in the upper right hand corner. Engineering discipline and creativity can absolutely coexist – and the traces of both are plainly evident in technology that truly inspires, be it the iPad, the Kinect, or whatever. Engineering discipline is simply more important (in a relative sense) than in smaller organizations, and for sound reasons. Large companies have different considerations. Big companies have big teams working together towards common goals. They all must march and work together. The cost of failure is often higher. Typically the team needs to support a large past body of work, such as a previous version. Mistakes can have consequences that last years. The list goes on.
A potentially uncomfortable but eminently fair example is with Windows Vista and Windows 8. I need to be careful here: I have never been a part of the Windows team and I don’t know anything that you don’t, and even if I did I would not reveal anything about a company that I no longer work for but still root for. Furthermore, Windows 8 has not shipped, and opinions may differ as to whether it will be enough of a success to stave off Apple, etc. But – I am sure that when it ships, Windows 8 will be a precise embodiment of a vision that was laid out years ago; that the choices that were made can be justified with data; that it will ship on time and with high quality; that it will be something that the team who built it will be exceedingly proud of. It is a matter of public record that this was not the case with Vista. Why the difference? I will tell you what is not the case. It is not the case that Microsoft fired all the Vista people and hired new programmers (although there is new leadership). It is not the case that the existing team became significantly better programmers, testers, and designers. It is not the case that there was a lack of creative, interesting ideas during Vista development. (Au contraire, mon frère.) What happened was that the team came to a collective understanding of the importance of engineering discipline in all phases of the software cycle. Change is hard and not without cost, as is described in the recent Business Insider article on Microsoft (check it out). But I am sure that the costs will be repaid many times over by the work that has resulted from these changes. I am sure that many of those who stuck with the changes are better software engineers too.
This was the message behind the first portion of the talk. In the remainder of the talk I walked through the software lifecycle, giving my best account of how things work in a big organization and doing my best to explain why, with an aim towards identifying skills and techniques that improve one’s game. As time permits, in future posts I will share some of my thoughts from the remainder of the talk.
I ran into a situation recently where I was asked to debug a legacy C# program that was crashing due to multiple threads trying to write to the same file at the same time. I was asked because I was the last guy to modify it, so I guess I had no room to complain. I focused on the changes that I had made, trying to figure out how the heck I could have introduced the bug – my changes weren’t anywhere close to the source of the crash!
Then it hit me – my changes were a bunch of refactoring to make the code faster. The bug was always there, we were just more likely to hit it after my changes since each thread executes in less time. I should have probably guessed right away – there was a shared resource that was not being handled properly – but I was blinded by my assumption that my changes had to have introduced the bug. I guess that’s one moral of the story. (And now that I think about it, in the past *I* have been the guy that introduced a parallelism related bug that someone had to fix later.)
Another lesson is perhaps that it’s unwise to screw around with multicore parallelism unless you know what you are doing. Say you’ve got 4 cores, and let’s say that you get a 3x speedup out of them (which is often pretty generous). Many times I would rather be 3x slower but completely reliable and avoid random crashes. Microsoft’s Task Parallel Library is kind of cool, but kind of dangerous. I’m not sure how often it’s really helpful.
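The fix for the bug described above is the standard one: serialize access to the shared resource. The original program was C#, but here is a rough Python analogue of the pattern; all of the names are hypothetical:

```python
# A Python analogue of the bug described above: several threads appending to
# one shared file. The lock is the kind of fix the original C# code was
# missing; everything here is illustrative, not the actual program.
import os
import tempfile
import threading

log_lock = threading.Lock()
path = os.path.join(tempfile.mkdtemp(), "results.txt")

def write_result(line):
    # Serialize access to the shared file. Without the lock, concurrent
    # writers can interleave or collide on the handle (the crash above).
    with log_lock:
        with open(path, "a") as f:
            f.write(line + "\n")

threads = [
    threading.Thread(target=write_result, args=(f"result {i}",))
    for i in range(8)
]
for t in threads:
    t.start()
for t in threads:
    t.join()

with open(path) as f:
    print(len(f.read().splitlines()))  # 8 lines, none lost or garbled
```

Note that making each thread faster, as my refactoring did, only makes the unsynchronized version more likely to crash; the race was there all along.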
Some of the numerical libraries groups at my workplace recently had an offsite to discuss future plans. I didn’t get the chance to say anything, but if I did, here’s what I would have said (more or less). It’s an analogy about boats, which is tenuous since I grew up in Iowa.
Building solvers is like shipbuilding. Shipbuilding is an ancient discipline which in more recent times has grown into a sophisticated engineering task. Building ships takes time to do right, and it takes a lot of practice to learn – it helps if you’re an apprentice first. It’s not an art, but it’s not quite a science either. For all of the technology, for all of the engineering, as a shipbuilder there are certain things you just don’t do, because that’s just the way that you were taught. People long before you have tried it a different way and it just didn’t work. You don’t need to be young to be a shipbuilder – in fact in some ways it might be a little bit better if you are a bit older and wiser. Some people think the very idea of building a boat is a terrible bore (or just too damn hard), but for others it’s captivating. That’s pretty much all they want to do.
Using a solver – modeling – is like sailing. Sailors care very deeply about boats, of course, but that’s not all they care about. If you have ever taken a sailing lesson, one of the first things you will hear about is the weather. Your instructor might tell you to pay attention to the way the wind plays off the water, the trees near the shore, flags on buildings, and so on. A sailor needs to understand how their boat interacts with the wind and the water. When the forces of nature and the sailboat are in harmony, the experience is almost magical. If you’re sailing anywhere interesting then you will also need to know about any hazards along the way, like rocks, mermaids or whatever. Even experienced sailors can’t figure this stuff out without some help – navigational aids and the advice of locals come in handy. Some people sail for fun, and others sail because it’s their job, but either way there’s a goal in mind. What matters is not the general characteristics of boats, or wind, or water, but the specific characteristics that come into play on the particular voyage the sailor is on. Shipbuilders and sailors evaluate boats differently. Shipbuilders have no idea how their boat is going to be used, so they have to think about the entire range of conditions it may face. They may think of the worst storm possible and design for that. Sailors only care about the voyages that they themselves are on. But since people sail for a reason, the destination and the conditions are often out of the ordinary, and may stress the boat in ways that shipbuilders, or even Hans Mittelmann, may not have anticipated.
Just because you can sail doesn’t mean that you can build a boat. Sailors want boats, not shipbuilders, but you can’t have a boat without a shipbuilder. Good shipbuilders are hard to find. That all seems obvious. The funny thing is that thinking about the reverse can really give people problems. Just because you can build a boat doesn’t mean that you know how to sail. Some shipbuilders know how to sail really well, it’s true…but you can’t bank on it. I certainly wouldn’t try to turn a bunch of shipbuilders into sailors without some sort of training or evaluation. Maybe it’s because we’re so in awe of the few who know how to build ships that we assume they simply must know how to sail. I think things are starting to change, but it seems to me that most operations research graduate students are trained to be shipbuilders. This is not a bad thing. The trouble is that once they’re trained to be shipbuilders, they are often hired to be sailors. For my part, I was trained to be a shipbuilder as a CS grad at the University of Iowa. When I joined Solver Foundation three years ago, I was (thankfully) hired to be a shipbuilder, writing our interior point solvers. As I became more familiar with Solver Foundation and its customers, and especially after I took over leadership of the team, I began to realize that many of our customers were asking us to teach them to sail, or just to sail the damn boat for them. My head was filled with the alphas, mus, and sigmas of the shipyard, but they are not always all that useful out on the water. I needed to learn to sail. Now I find that I like sailing more than shipbuilding. Go figure. Management is not always familiar with nautical terminology, so these distinctions are sometimes lost on them. It helps if you phrase everything in terms of cars.
I’ve neglected to mention the most important group. People who ride on boats – passengers (or voyagers, if you are more romantic) – are like people who use models. Some people take trips for fun, like a trip around Puget Sound, or a cruise to Alaska. Other people take trips to get from one place to another, like from Seattle to Victoria. Some voyages don’t involve passengers at all – we’re moving freight. The important difference is that in each of these cases, the fact that sailors and shipbuilders are involved at all is incidental – a passenger is paying for an experience, or for a service. While you and I are boat enthusiasts – can’t get enough! – most passengers couldn’t give a crap about the type of fabric used to make the sail, or the horsepower of the engines, or how narrow the strait is. The experience is what matters. Sometimes there is only a dim awareness that a boat is involved at all – all they know is that when they go down the ramp, they are in a new place. Solvers help determine how an Amazon package gets to your door. That’s amazing.
There are only so many shipbuilders in the world. There are many more sailors, but even sailors are overwhelmingly outnumbered by passengers. It’s not even close. But I don’t think that means any one of these groups is more important, or more noble, or more intelligent. Everyone is coming at this from their own perspective, and you have to respect that. Shipbuilders have mastered a craft, and that requires a lot of dedication. Sailors are able to adjust to conditions and get people where they need to go. Passengers have their own lives, and often a particular trip is only a line or two in the larger narrative of their lives. The sailor shouldn’t mock the guy playing shuffleboard – he’s probably earned the right to kick back a little. The passenger shouldn’t look down on the sailor, either. The sailor is the one keeping you from drowning.
I wonder – which would you rather do? Build boats, sail, or go on a voyage? Where is the most money? Does it matter? Will we ever eliminate the need for shipbuilders, sailors, or voyages? Beats me.