On Introducing Change

One of the most important aspects of leadership is changing the status quo. Things are never perfect and thus can always improve, and the only way to do that is to make changes. But doing so is tricky, because there are only a few ways it could go:

                   Implemented optimally   Implemented poorly
The right change   Great!                  Probably bad
The wrong change   Bad                     Eek!

There are a couple of other important dimensions to making a change — such as its urgency and impact — but they wouldn’t change the point: there are a lot of ways to screw up the introduction of a change, and a pretty narrow set of ways to do it well. This is absolutely one of the cases in which the road to hell is paved with good intentions. Below are my general principles for implementing beneficial changes in optimal ways, but first, a few caveats:

  1. Not uncommonly, changes come from outside our span of control: a new law or regulation, a change in strategy from the CEO, rippling effects from the latest financial models, or your best friend Dave who manages another team letting you know your team will have to spend 2 weeks refactoring their code because they need to swap out a library. Because of that, an equally important skill — alongside willingly introducing changes under your control — is managing change imposed by others, both in terms of material effects on the roadmap and in terms of morale. But that’s another article.
  2. No one gets it right anywhere near 100% of the time. Not only because it’s hard to even please most of the people most of the time, but also because organizations are what Cedric Chin calls complex adaptive systems, which are incredibly difficult to model because we as leaders can never hope to have enough visibility into the motivations, biases, and understanding — or many times even the response — of any given person. (And if you’re interested in an in-depth look at introducing organizational changes — org design, in Chin’s words — definitely read his article The Skill of Org Design.)
  3. Each situation is unique, and there’s no generalizing change management. There are impactful changes and changes no one cares about. There are short-term and long-term changes. Ones that affect a lot of people and ones that don’t. Simple and complex. Clear and unclear effects. And on and on. The principles below, I’ve found to be widely applicable — though certainly not universal. However, I always consider them before occasionally sidestepping them.
  4. Your Values May Vary. I like building consensus when possible, I like engagement and participation, I like transparency, and I like debate. I think those things are fundamental to a great culture, a great working environment, and positive morale, all of which are important because happy employees are more productive. But some issues are divisive and there’s no consensus to be had, some people just don’t care about some things, sometimes we can’t show our cards, and endless debate is a drain on productivity. And while I agree that there’s a time for debate and a time for action, if you tend to index more on the latter set, you might not agree with the below.

And now, the list of 6 Simple Things to Make All Changes a Breeze. Kidding. There are only 5.

Do as much homework & due diligence as practical

I believe in big design up front (though not waterfall) because efficiency is important to me. My time in the opera only cemented that:

[The operatic development process] should all be very familiar, but how does that differ from the process in most software shops? From my experience, it’s the beginning and the end: the design and the testing. Most shops focus on the middle: the development itself, with testing being a necessary evil, and design often being a quick drawing with some boxes and arrows, maybe just in someone’s head.

Software & Opera, Part 3: Design & Test

In programming, CI/CD, automated testing, and auto-updates make design defects less painful. But with organizational changes, it’s usually a lot harder to fix a defective change, or roll it back. So thinking about it thoroughly before deploying it — i.e., designing it well — is even more important.

A good design depends on a great understanding of the area in question, so it’s important to find out as much as time allows: to talk to the people with opinions, read what others have done, learn the best practices, and understand what happened in similar situations, and why. Just because something worked for Netflix doesn’t mean it’ll work for my team. The more I can refine my mental model of possible changes and how each option might affect this particular group of people, the more likely I’ll be to pick the optimal option, and the more successful the rollout is going to be. Ergo, this first step is worth spending as much time on as I need to feel comfortable about that mental model.

Shop and iterate the concept

Once I have something I think is good, I write it down. This is a draft of the written change and it’ll at least be referenced later by the “official” announcement, if not be included with it. Writing it down allows me to read it from the viewpoint of specific other people and better gauge what I think their reactions will be: to specific phrasings, to the tone of the whole, and to the intent of the document.

I then distill that into an elevator pitch, and casually drop it in conversation with some people I know will have thoughts. Maybe during a 1:1, or while waiting for a meeting to start, or whatever. “Hey Jane, I’ve been thinking about that problem and am curious what you’d think if we …”. Gauging that initial reaction is important. Is it positive? If not, is it an issue with how I said it, or a problem with the actual change? Why?

I go back and refine. Not everyone will like the change, but it’s important that I think this is still the best change, in spite of specific criticism, and that the right message is getting across — especially to the people that disagree with it. And if most people disagree with the change but I’m still convinced it’s the right one, I take a look at the messaging. What piece are they missing, or misunderstanding, or seeing differently? Sometimes it just comes down to differing values, in which case there’s little to be done but agreeing to disagree.

I then start showing people the written doc — again, live. The interested and affected parties first, then the remaining powers that be. Are they parsing the writing as intended? What helpful suggestions do they have? Is all the relevant context available to them?


Once I’ve gotten input from as many interested, affected, or responsible people that I can — ideally all of them, but at least a significant sampling for larger changes — I share it out more widely, though informally, as a request-for-comments, probably in a relevant Slack channel: “We’re thinking about making a change to the thing to improve this stuff. Please share your thoughts!”.

By now, I should have no surprises to the reactions. Hopefully my doc addresses the criticisms I’ve heard and thought of, though some people will still bring them to me — either because they think they can change my mind through novel arguments, or by adding to what they are sure is an enormous chorus of opposition, or maybe just because they didn’t read the doc carefully. WET comms are important in helping to avoid that last one, especially with larger groups: different people need the same thing communicated to them in different ways.

In this step, I try to engage with the bigger discussion as much as I can, but I’m more on the lookout for what I’ve missed earlier. The bee no one’s seen flying around. And if I see one, I evaluate the impact: how much does this change? If I’ve done my homework in step 1, it shouldn’t be fundamental. Hopefully I won’t have to go back to the core group I started with. But, better to do that than push The Wrong Change forward. Humility is important here. It’s easy to get attached to a bad idea because “no way it can be that wrong after all the thought and work I’ve put into it so far!”.

Be the glass you want to see through in the world

I value transparency, so I try to be as transparent as possible. This is not always possible, of course. But it’s possible more often than not. So in my document, I try to explain context, rationale, alternatives not chosen, and anything else that might be useful — not only to the curious bystander now, but to the next person who has my role. Or to my next boss. Or to the awesome engineer that starts next year. Or me in 3 years, having forgotten all the details of how this all went down.

Why was this decision taken? This particular one. Is it still valid under the current conditions, or should we do something else? Having everything written in the record can help dramatically both now and later.

Rush only when necessary

Sometimes change has to happen quickly. The monolith is on fire every other day. My best engineers are thinking about leaving. A re-org is coming next week. But most changes are not that. The new set of Priority fields in JIRA can be rolled out when they’re good and ready. Yes, they should improve reporting, but it’s more important to get it right, because we’re not changing them again this year. Probably not next year either.

It is important to keep making progress and not let proposed and socialized changes languish because “I’ve been so swamped this quarter”, but as long as useful activity is happening, it shouldn’t be rushed unless it needs to be. And sometimes change is urgent, and skipping some of the above principles, or abbreviating them, is needed.

But when it’s not, and news of the change spreads through the grapevine and people are clamoring for it, take that as good news! It’s useful to update them on the status, but the great part is that they want the change, which makes the overcommunication aspect way easier. And of course it adds some time pressure with the masses waiting with bated breath, but it’s still important to not get too eager, because again: it’s often hard to roll organizational changes back. Even if it’s just a matter of sending an email and no one has to even remember to change their workflows again, it’s still trust in your insight as a leader that gets lost. Political capital.

As leaders, the more of a track record we have of making thoughtful, positive changes, the easier it is to get consensus and make subsequent changes. It’s a feedback loop that’s driven — like a lot of things — by care and diligence, paying attention to details and, more than anything, valuing people.

Sweat Some of the Small Stuff

Of the three hard problems in computer science, the one I probably spend the most time on is “naming things”. (Off-by-one errors are too often the bane of my programming existence though.) And sometimes, I get push-back or eye rolls on how it’s not worth spending any time on the name. “Let’s just pick one at random and move on”.

This is usually the argument of someone who doesn’t believe in, or care about, The Thing’s future. Think about the times when everyone obsesses over choosing a name: when it’s for their kid, or their pet, or their startup. To a lesser degree, their project or their social media handle. Stuff that has a real future for them, that they know will likely — and hopefully — be used for years and come to be shorthand for something near and dear to them.

xkcd #910, “Permanence”

So when someone rushes through the name of a Python module or a wiki page or a feature, it often means that they don’t think it’ll matter: be it because it won’t last, or because no one will come across it again, or even just that they won’t. In their mind, there’s no future in which This Page will need to come up in Confluence search results, or in which Dave the Python dev will need to quickly understand what a variable called “idx” does.
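The cost of a careless name is easy to see in code. A hypothetical sketch (the invoice example is invented for illustration, not from any real codebase):

```python
from dataclasses import dataclass


@dataclass
class Invoice:
    amount: float
    days_late: int


def total_overdue(invoices):
    # "idx" forces the next reader to trace the loop just to learn
    # that it holds an invoice, not an index.
    total = 0.0
    for idx in invoices:
        if idx.days_late > 0:
            total += idx.amount
    return total


def total_overdue_clear(invoices):
    # Same logic; the descriptive name makes the intent readable at a glance.
    total = 0.0
    for invoice in invoices:
        if invoice.days_late > 0:
            total += invoice.amount
    return total
```

Both functions do the same thing; only one of them tells you so.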

But this is not an article about naming things — though, I’d be shocked if I don’t write that one someday. It’s about the more general problem of failing to recognize the importance of some of the “small” stuff, like the names of things which may be typed and spoken and searched by untold masses for years to come. Of course, there are a lot of small things that are unimportant; things that can be safely overlooked or bypassed or pushed off ad infinitum, because they don’t really have downsides. Things like learning Italian, diagramming your codebase in UML, or showing up to your 1:1s. Okay calm down — yes, that last one was a trick.

But some very important things are deceptively small. Like honey bees and gut bacteria. Or out of sight, like the type of insulation in your house, which will make a difference in your HVAC bill in the tens of thousands of dollars over its lifetime. Or the lack of automated testing / CI/CD / code formatting / linting which will make that same difference in dramatically less time. Or skimping on offsites for remote teams. Under-funding the development of developer tools. Not prioritizing documentation. Not doing interview training. Not writing a Slack culture guide. Skipping 1:1s.

There are a lot of these small things that have outsized effects: the individual effort is small, but the cumulative results are big. Just like compounding interest, a tiny seed, or a chain of dominoes. Conversely, not doing them can lead to death by a thousand paper cuts. Or if not death, at least an increasing leak of money. One example from the wild I often think about is how iOS updates are (now) fairly seamless and how much money that investment probably saves: because it encourages people to upgrade quickly, Apple can avoid wasting thousands and thousands of hours per year on supporting a large fleet of outdated devices.
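The compounding arithmetic is worth making concrete. A toy illustration, where the 1%-per-week figure is an arbitrary assumption, not a measured number:

```python
# Illustrative only: a team whose small investments make it 1% more
# effective each week, versus one leaking 1% to paper cuts each week.
weekly_gain = 1.01
weekly_leak = 0.99
weeks = 52

improving = weekly_gain ** weeks  # compounds to roughly 1.68x in a year
leaking = weekly_leak ** weeks    # decays to roughly 0.59x in a year

print(f"After a year: {improving:.2f}x vs {leaking:.2f}x")
```

The same 1% effort, pointed in opposite directions, ends up as nearly a 3x gap between the two teams.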

And while it’s tempting to keep going with a list of these items like above, the reality is that the list depends a lot on the circumstances: the age of the company, the size, the culture, the tech stack, etc. There’s no one list to rule them all, as far as I can tell. The closest I can come is noting that a lot of this mighty small stuff falls under “enablement” — but that’s a broad term for a lot of things. So it becomes yet another one of those aspects of leadership that’s ambiguous and requires judgement calls based on experience of what’s important in concrete terms, right here and right now and with this group of people. And some of that important stuff will be the seeds that make the organization 10x more efficient.

Unfortunately, the bigger problem is what comes next: you’ve recognized what needs gardening, but now you need to convince everyone else of the value. If not done well, this attempt at persuasion can cause problems ranging from politely being chuckled off the virtual podium, to getting a stern talking to about wasting your time on trifles. “Can’t you see the backend is crashing twice a week? And you’re talking to us about automated tests? We already have QA.” This is why most people don’t even bother taking up the mantle and arguing for the small stuff.

It’s hard to persuade others to spend time gardening when the roof is leaking — and there’s potentially a lot to lose in trying. But it’ll be a worse situation when the roof is inevitably fixed, and the garden is devoid of food. Depending on what time period this poor house is in, a forgotten garden might mean starvation, or it might mean living on pizza and Mountain Dew for years until the heart disease kicks in, or it might mean leaking money on buying produce. None of them great.

So spend time on the argument. Learn to become persuasive. Learn what kinds of arguments work on different stakeholders. Become great at crafting a powerful narrative that can change minds. And leverage that for good. In a way, this is smallest and mightiest of all the things.

Thaw with her gentle persuasion is more powerful than Thor with his hammer. The one melts, the other breaks into pieces.

– Henry David Thoreau

The Case for the Diagnostics Team

I recently watched a lecture by Kevin Hale, who co-founded a startup named WuFoo back in 2006, grew it over five years to millions of customers, and sold it for $35M to SurveyMonkey. He subsequently became a partner at Y Combinator for several years. The lecture was about making products people love, and one of the points he made was around WuFoo’s obsession with the customer:

  1. Each team member had a turn in the customer support rotation
  2. Their response time to customer support issues was a few minutes during the day, and a little longer at night
  3. They hand-wrote personalized thank you cards to random customers weekly
  4. Even though their business (form creation) was dry, the website was designed to be fun and warm, not business-y

It’s a great 45-minute video, and absolutely worth watching — it’s embedded down at the end. But what really drew my attention was that first point above, about everyone doing a customer support rotation. And that’s because at Voalte, which also had a customer obsession, we took a similar approach that we called The Diagnostics Team.

Voalte mobile app
Voalte is a communication platform for hospital staff

The team was like the cast of House: expert detectives in their domain that could tackle the hairiest problems, sometimes getting that “eureka!” moment from the unlikeliest of events. I/O throughput was our lupus.

The mission was a take on the support rotation, but with some twists:

  1. The team handled “Tier 4” support issues: the kind of stuff where a developer with source code knowledge was needed because the previous three tiers couldn’t figure out the issue.
  2. It was cross-functional, so that each codebase (Erlang backend, iOS, Android, JavaScript) was represented on the team
  3. The rotation was 6 months
  4. The team priorities were:
    1. Any urgent issues
    2. Code reviews, with a support and maintainability point of view
    3. Any customer-reported bugs
    4. Proactive log analysis, to find bugs before they’re noticed in the field
    5. Trivial, but noticeable bugs that would never get prioritized by the product teams
  5. Team members nominally did at least one customer visit during that 6 months
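To make the proactive log analysis idea (priority 4 above) concrete, here is a minimal sketch of the concept. This is not Voalte’s actual tooling; the log format, level names, and messages are all invented for illustration:

```python
import re
from collections import Counter

# Hypothetical log format: a timestamp, a level, then a free-text message.
ERROR_RE = re.compile(r"\b(ERROR|CRIT)\b\s+(?P<msg>.+)")


def summarize_errors(lines, top_n=5):
    """Count repeated error messages so spikes stand out
    before a customer ever reports them."""
    counts = Counter()
    for line in lines:
        match = ERROR_RE.search(line)
        if match:
            counts[match.group("msg")] += 1
    return counts.most_common(top_n)


logs = [
    "2024-01-02 ERROR db connection reset",
    "2024-01-02 INFO heartbeat ok",
    "2024-01-02 ERROR db connection reset",
    "2024-01-03 CRIT queue overflow",
]
print(summarize_errors(logs))
# [('db connection reset', 2), ('queue overflow', 1)]
```

Even something this simple, run daily against production logs, surfaces the recurring problems worth a Diagnostician’s attention.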

The model worked really well, and I think the team is still around, two acquisitions later, at Baxter. It wasn’t perfect (we never got that good at proactive log analysis while I was there, and customer visits ebbed and flowed depending on priorities and budgets) but overall, we hit the goals. “And what were those goals?”, you say. I’m glad you asked!

Cast of House, season 1

Remove uncertainty from the product roadmap

This was the main reason I pitched the idea of a Diagnostics team. After our initial release of Voalte Platform, we were constantly getting team members pulled off of product roadmap work in order to take a look at some urgent issue that a high profile customer was complaining about. And you could never tell how long they’d be gone: a day? a week? 3 weeks? How long does it take to find your keys? And if we had a couple of these going on at the same time, it would derail an entire release train.

The thinking was that a dedicated team to handle those issues, while costly, would probably cost less than the revenue lost from release delays, while also saving us money in the long run by preventing urgent issues.

And it worked: our releases became a lot more predictable. Not perfect of course, but a big improvement.

Keep a focus on customer needs and pain-points

Our customers were hospitals and we wanted to make sure things worked well in our app, because lives were literally on the line. Having a team that was plugged in to the voice of the customer meant that fewer complaints fell through the cracks of prioritization exercises. And while the Diagnostics team generally didn’t build features, once in a while they did: if the feature fixed a big pain-point.

Being Tier-4 support, though, is one major way in which this differed from WuFoo’s model: the team wasn’t as exposed to Tier-1 issues that were known to the frontline customer support people. When developers hear about a frustrating bug for the 4th time, they tend to just go ahead and fix it. But if they’re only exposed to that bug via a monthly report, it won’t frustrate them as much.

Our ideal here, though, was to crush the big rocks, improve the operational excellence so that no more big rocks form, and then have the team focus on the pebbles. We had varying success on this, depending on the codebase.

The other prong was customer visits. Each developer would pick a hospital and arrange a ~2 day visit. The hospital would generally assign them a buddy, and they would get the ground truth both from that buddy and by walking around to as many nurses’ stations as possible and asking them about the app.

Most of the time, they wouldn’t have anything to say. When they did, most of the time it was some known problem. But like 10% of the time, it would be revelatory: some weird issue because they tapped a combination of buttons we’d never thought of, or used a feature in a completely different way than we intended. And we’d write debriefs of the visit after the fact to share with the team.

No matter what was learned on the trip though, the engineers came back with a renewed sense of purpose and empathy for the customer, not to mention a much better understanding of how hospital staff work and use the product.

The House version of customer visits was rotations in the free clinic.
Great supercut on how not to act on your customer visits.

Improve the quality of the codebase over time

One of the things we were worried about in creating this team was that it would disconnect the developers on the product teams from the consequences of their actions. They’d release all kinds of bugs into the field and never be responsible for fixing them and so never improve. This was part of the reason we wanted Diagnostics to be a rotation. (Though, it ended up mostly not being a rotation, but more on that later.)

Our main tactic to prevent this problem was to make the Diagnostics team a specific and prominent part of the code review process. Part of the team’s remit was to review every PR for the codebase they worked in, and look for any potential pitfalls around quality and maintainability. Yes, those are already supposed to be facets of every code review, but:

  1. The Diagnostician would have a better sense of what doesn’t work, and
  2. They had more of a stake in preventing problematic code from seeing the light of day

Build expertise around quality and maintainability

To our great surprise, at the end of the team’s very first 6 month rotation, half of the members wanted to stay on indefinitely. They found the detective work not only interesting, but also varied in its breadth and depth, and fulfilling in a way that feature work just isn’t.

We debated on whether to allow long-term membership on the team, because we did want to expose all of the team members to this kind of work. But ultimately, we decided that the experience these veterans would build would be more valuable to the effort — especially when combined with them sharing that experience through code reviews and other avenues.

Over the years, they got exposed to more and more issues reported by customers — which are the ones that matter most — and they developed an intuition about what bothers them most and what kind of mistakes cause those kinds of issues. They also developed a sense of what programming patterns cause the Diagnosticians themselves problems, both in terms of monitoring and observability (so they can easily diagnose issues) and in terms of refactoring code to fix problems, as well as what characteristics problematic components have in common.

That’s the kind of insight from which arises the most valuable part of the return on investment: preventing painful tech debt and convoluted bugs from ever getting shipped. It more than makes up for the cost of the team.

The Empathetic Metamorph

There’s an old episode of Star Trek: The Next Generation about Picard falling in tragic love with an alien woman. Okay, there’s a bunch of those, but there’s one in particular in which the alien has the ability to change her personality to perfectly match her mate, and become the yin to his yang, being precisely what he needs in a mate. I never liked that episode, or others of its ilk, because it was slow and mushy, and I watch sci-fi more for the “sci” than the “fi”. Plus, I don’t have the attention span to deal with an episode full of dull romantic scenes.

Kamala, the empathic metamorph, with Data

I don’t think I’ve seen that episode since I was a teenager, but the concept of the empathic metamorph, which is what Kamala was, really stuck with me. Having empathy is naturally important to the success of most, if not all, relationships, but this concept goes a step farther because it’s not just understanding the other person’s point of view, but also changing how you interact with them, in an intentional way. And not superficially, saying the right things in a difficult conversation or knowing which buttons to push to make them happy, but understanding them deeply enough to know what they need in the short term and the long term, and understanding one’s self deeply enough to know how, with the skills available, to help them achieve their goals.

It’s clear that this would be an incredibly complex process, and it takes a lot of time and investment, but I think that the most motivational leaders — the ones that elicit trust, loyalty, and high performance — are essentially empathetic metamorphs. Not in the Star Trek sense (and not empathic, like Betazoids are), but in the sense that they can change their behavior to bring out the best in a given situation, with the given audience. Sociopaths probably can too, though of course they would do it for their own gain, rather than the other person’s.

The Dimensional Space

The word “metamorph” literally means “shapechanger”: in Greek, “meta” is “change” and “morph” is “shape”. I don’t think an official antonym exists, but let’s call it a “statimorph” — those who can’t change shape. If we look at leaders along the two binary dimensions of having empathy and being able to change themselves in response to it, we get a simple K-map:

              Statimorph     Metamorph
Apathetic     Bad leaders    Sociopaths
Empathetic    Good leaders   Great leaders

We can try to understand the empathetic metamorph better by looking at the other types of leader.

The apathetic statimorph is the manager that doesn’t care. They’re not a people leader: they’re a true manager. It’s probably not that they don’t want to be better, but for whatever reason, they just don’t have the ability to understand people. They probably rely a lot on process and rules and have a “no exceptions” policy, because rules are rules and you have to be fair. And those rules are probably conceived, written, and implemented by the apathetic statimorph without much input from anyone else.

The empathetic statimorph is, I think, most leaders. They care about their employees and want to see them succeed. They listen to suggestions and complaints and try to improve the environment and processes and mentor their employees so that they grow in their careers. But they themselves cannot change. They are the same boss to the entry-level engineer as to the principal engineer, to the introvert and the extrovert, to the Builder and the Hacker. They understand how each employee feels and what motivates them, but they aren’t able to be warmer and more encouraging to the entry-level engineer, to afford the principal more intellectual leeway, to be an introvert with the introvert and an extrovert with the extrovert, and to talk about vision with the Hacker, but goals with the Builder.

The apathetic metamorph, I don’t think I’ve seen in real life. They would be like The Talented Mr. Ripley though. Or Anna Delvey. Able to show whatever version of themselves other people need to buy in order for the metamorph to get ahead. I think people like this are rare, both because most people are good at heart, and because apathetic metamorphs can probably fake being empathetic until they actually become empathetic, and simply go down this path because they realize selfish people tend not to be that successful.

So finally, the empathetic metamorph is what all the others aren’t. They’re the boss that learns chess in order to better bond with a chess-obsessed employee. The one that, before finalizing a team decision, makes sure to ask the new engineer about it in private, realizing that they might not have been comfortable enough yet to have spoken up during the team meeting. The one that uses metaphors from The Office even though they don’t really like The Office, because they know most of the team loves The Office. The one that knows which employee wants to hear the bad news coming a mile away, and which one would get paralyzed with anxiety about it — and so waits until the last minute to tell them.

I’m not sure exactly how to become one, though I do think it can be learned — probably with lots and lots and lots of practice. Years of deliberate practice in expanding one’s knowledge and skills in order to be able to meet everyone around them on their own terms. Whether that’s worth it probably depends, like most things, on the cost-benefit analysis: how hard it is for the leader to do, vs how important it is for them to be that good.

The Art of Interviewing

The job interview process is a high-stakes dance that’s notoriously difficult and full of missteps — on both sides. (At least, in software engineering it is. If you care about another industry, <Jedi hand wave/> this is not the article you’re looking for.) Avoiding the missteps is an art, just like it is in dancing; we get better with practice, just like dancing; it’s a lot of fun when it goes well, just like dancing; and leading with dexterity is paramount. This article focuses on that last bit: not on what questions to ask or how to structure the time, but rather on artfully leading an interview. First, however — because it’s a crucial part of the interview context — we should start with how the job candidate views the process. Just like dancing.

Because while for Larry the Interviewer, it’s just a small fraction of his day — maybe an annoying one that pulls him from the dynamic programming problem he was definitely not working on — for Jane the candidate, it’s one of the most important events of her life. The median job tenure is about 4 years, so people will only have somewhere around 10 jobs in their whole professional life. And each of those jobs is going to significantly affect Jane’s life both during, and after, it. The CV builds on the shoulders of the previous job, yes, but work is also where we spend the bulk of our weekday, make friends, and develop a large part of our identity.

So what does Jane go into this high stakes dance equipped with? If the company is large, maybe she can get a sense of the general culture from Glassdoor, or news articles, or social media. Otherwise, maybe a sense of how it wants to be seen, from its marketing. But even then: what will her daily life be like there? Would she like her new boss? Her teammates? The bureaucracy? The tasks she’ll be working on over the next year? Four years? Unless she knows someone on the inside, these questions mostly won’t get answered until after the start date. Sure, she’ll get to peek in now and then through cracks in the process and through the five minutes of each interview in which she can ask stuff, but: largely unanswered. Which means it all adds up to a big gamble.

The gamble is better for more mercurial and adaptable personalities than those more averse to change, but your resume can only tolerate a couple of quick stints before it starts getting tossed aside. So it’s a big gamble for everyone. Or rather, I should say “for every job searcher”, because well… it’s a small gamble for Larry the Interviewer. Worst case for him? It isn’t even giving his “thumbs up” to a terrible employee — because someone truly terrible will get fired before too long. No, the worst case is that he gives his thumbs up to a really annoying Taylor that ends up on his team and really annoys him for the next 4 years. And also produces mediocre work that Larry then has to constantly deal with. But Taylor isn’t annoying enough and the work just isn’t bad enough to get fired or even be put on a PIP. Taylor kind of coasts just baaaarely on the good side of the policy. And in doing so, makes Larry’s work life feel like a chirping smoke alarm that he can never find. That is the worst case for Larry.

The best case? He ends up with a really awesome coworker. Which, depending on the existing coworkers, may or may not matter a lot. But in any case except for the narrow and unlikely one of Taylor, the stakes for Larry are much, much, much lower than for Jane. And yet, this is who decides Jane’s fate.

The irony is that all of us will — at different times — be the Jane, and most of us will also be the Larry at other times. And when we are the Larry, interviewing a job candidate, do we act as the interviewer we’d like to have interview us, when we’re searching for a job? When I grudgingly leave the house, I usually drive, and sometimes I walk, and sometimes I bike. And what I find fascinating is that when I’m a driver, I get mad at other drivers, when I’m a cyclist I get mad at drivers, and when I’m a pedestrian, I get mad at everyone. Even though I know exactly what challenges they’re all going through.

Through this lens, how should Larry — all of us, when we are the Larry — conduct his interviews? How can we be the Larry we want to see in the world?

Larry does have a concrete output from the interview; he needs to answer one simple question: is Jane likely to be successful in this role? “But wait,” you ask, “can Jane’s ideal Larry even answer that question? Can he both be empathetic and figure out if she should be hired?” To which I say: “not only can he, but that’s the best way to go about it!” Let me illustrate by looking at some questions Larry should not be trying to answer:

  1. Did Jane answer all my questions correctly?
  2. Was Jane quick on her feet and graceful under pressure?
  3. Did Jane pick up on my algorithmic hints?
  4. Was her solution complete?
  5. Do I like Jane, as a human being? Could we be friends?

Those are certainly signals one can get in an interview, but are they relevant to her success in the role? Does her answering all of Larry’s questions correctly mean there’s a good chance she’ll get a high performance rating next year? It depends on the questions, right? Well then, what about the inverse: does getting answers wrong correlate with bad performance reviews? Or does it correlate with nervousness? Or miscommunication? Or ignorance of a concept that Jane could learn in the first week on the job?

The hard truth is that while most interviews test something, that something is more likely to be “ability to pass our interview” than “ability to do the job well”. If the abstract interview is indeed a conscious choice (and more often than not, it isn’t), the argument for it goes like this:

We want people that will do what they need to in order to succeed; if they study hard for our arbitrary and irrelevant interview process and pass it, it means they (a) really want to work here, and (b) can do the same for a real-world project

Even for well-known companies that can attract enough people to run their gauntlet, this is of dubious value, because they’re filtering for a specific criterion: studiousness; and simultaneously filtering out many who are at odds with it, starting with people who don’t have the luxury of time to study for their interview.

Instead, I think interview questions should be relevant for the job and tailored to illuminate whether Jane will be successful in it. Does it matter if Jane came up with a coding solution in 10 minutes versus 20? How much time would she have in the course of her job for that problem? Does the job require her to be well-versed in algorithms? Even if Stack Overflow didn’t exist. Is it relevant that she didn’t finish the last part, even though she explained how it would work? Are the people we like more productive than the ones we don’t? Do these rhetorical questions make my point?

Which is that the interview is not a test and it’s certainly not an interrogation. It is above anything else a conversation, ideally between equals, in which both parties are trying to figure out if an employment arrangement would be a win/win scenario. And as the interviewer, by leading it with empathy, you accomplish two things:

  1. Have a much better chance at arriving at the truth
  2. Leave the candidate with a great impression

So how should Larry approach the interview? I like to think about role models, because we humans are great at mimicking, and in this case I think the right archetype is a podcaster. They have a guest on their show and they generally try to make the experience pleasant, to keep the content interesting, and to really get at what makes their guest tick in their particular way. If they have an actor on, they try to figure out what makes them a great actor; if it’s a business tycoon, what makes them great at business; if it’s a scientist, what makes them great at science. And similarly, Larry’s job is to figure out what makes his guest great at programming.

In order to successfully do that, the guest has to first and foremost be comfortable. People won’t let you in unless they’re comfortable. And if they won’t let you in, it is a giant barrier to understanding them. So Larry should spend some time in the beginning of the interview breaking that ice. He should make small talk — genuine small talk, not awkward conversation about the weather. Tell Jane a little about himself: what he does at the company, what he’s passionate about in his work, and what his background is. Things that will give Jane an understanding of him and help them find common ground. It’s not only worth the investment, but it’s what makes the rest of the interview worthwhile.

Once a good rapport has been established — or Larry’s given up on such happening — only then should he go into the topics he needs to cover. And he should always remember to treat Jane as he would a valued guest in his home. To be kind, to be forgiving, and to give her the benefit of the doubt. If she says something that sounds incorrect, he should make sure it’s not due to a simple mistake or misunderstanding. He should ask polite questions that help him understand how much Jane knows and understands about graph traversal or whatever — not merely that she does (or doesn’t) know that those words are the answer to his question. Because in the end, Larry’s job isn’t that of a proctor, to simply grade Jane on her performance as if this were an audition or a midterm exam. Larry’s job is actually much more difficult: it’s to create, in 45 to 60 minutes, a sorely incomplete mental model of Jane from a certain angle — be it programming ability or cultural fit or leadership style or what have you — and to then decide if that mental model is a good match to fill the open role.

Again: this is hard and it takes a lot of practice to do well, just like dancing. But taking shortcuts will just lead to lots of false positives or negatives. Tech companies love to industrialize the process and create complex questions with “objective” answers and rubrics that tally up those answers into a simple pass/fail exam and then also point to that process when talking about fairness. But in truth, there’s nothing fair about standardized tests — which is why higher education is finally moving away from the SATs.

They add a veneer of objectivity on top of an industrialized process that answers not so much what a person understands, but what they’ve been exposed to and what they can quickly recall in a stressful situation. It’s like being tested on “ticking time bomb defusal” for a job making watches. And the opportunities for subjectivity still abound: from how comfortable or nervous the candidate is, to how many hints they’re given, to how much sleep they’ve gotten the night before, to whether they’ve seen a similar question recently, and to the leniency of the proctor.

What sets Oxford and Cambridge apart from most other universities is that they use the Tutorial System, in which students learn the subject matter in whatever way makes sense, meet in very small groups with a tutor and have a discussion — in which it will become readily apparent how well they understand the subject.

It has been argued that the tutorial system has great value as a pedagogic model because it creates learning and assessment opportunities which are highly authentic and difficult to fake.

from Tutorial System at Wikipedia

Software interviews of a similar ilk have the same value.

But the one thing to remember is that regardless of the questions we ask when we’re being the Larry — no matter how good or fair or comprehensive they are — the questions are just a means to an end, and not the end. They are a conversation starter, and it’s up to us to guide that conversation in such a way that will allow us to understand our guest to such a degree that we can answer the only question that matters: “is this person likely to be successful in this role?”

The 8 Engineers You Might Know

A few years ago, I was asked to give a talk to a class of early engineering college students about the characteristics of a “good” engineer, so that they may begin to emulate those traits — or, presumably, drop out of the program if the very thought of such a thing made them ill. But as I was thinking about what these characteristics might be, I realized that there’s no such thing as a model engineer. In thinking back through my career I could identify, at least, several pretty distinct kinds of engineers, each with their own special sauce that made them great at different things. But there was no kind of Renaissance engineer, at least in my experience, that could simply excel at everything. So I started the presentation with this quote:

Everyone is a genius. But if you judge a fish by its ability to climb a tree, it will live its whole life believing that it is stupid.

of unknown origin, but not Einstein

I know brilliant developers who cannot work well in a team, but can debug a field problem like no one else. And amazing architects that come up with the most elegant designs, but who can’t stick to one thing too long. Or the opposite: deep tinkering thinkers who would give you the death stare if you tried to pull them off their passion project.

For the college kids, I came up with some examples, like the above, of great engineers and the situations that they excel in, but I couldn’t let go of the notion that there probably exists something like the Myers-Briggs Types or the classical Four Temperaments, but for engineering. Pseudoscience, yes, inasmuch as people don’t fit into nice and tidy boxes like that, but still helpful in thinking about strengths and tendencies.

Googling didn’t turn up anything like this existing in the tubes, and since the idea wouldn’t leave me alone, dimensions that seemed useful to me eventually took shape:

  1. Teamwork: collaboration vs vision. Do they value teamwork more for its own sake, or for seeing their vision be realized?
  2. Focus: design vs goal-oriented. The journey, or the destination? The means or the end? The architecture or the building?
  3. Scope: broad vs specific. Do they like working on lots of different things, either at once or in fairly quick succession, or to focus on one thing for as long as it makes sense?

Three dimensions with two values each means 2³ = 8 broad kinds of engineers, which also felt like a good number for this sort of thing. We’ll take a look at those eight kinds next, and I’ll leave the more dry discussion on methodology until the end, but before we go any further, a caveat:

If you use these concepts for anything, it should be either for fun or as a thinking exercise, but because this is in no way scientific, it should NOT be used for anything serious -- just like the MBTI should not be. Please don't try to create questionnaires out of this, or use it to justify leadership decisions, or anything of that sort. 
What I'm hoping for is that this framework provides some insight for engineers to maybe think about themselves and their career path, a shorthand for talking about certain behaviors, or probably more than anything: just as a fun lunch conversation. 

All models are wrong, but some are useful

attributed to the statistician George Box

Hopefully this is somewhat the latter. Okay: with all that out of the way, here is the creamy center.

The Types

The Generalist

This persona is collaborative, design-oriented, and has broad interests. They love to hear and incorporate people’s ideas and feedback, they love to produce a beautiful cathedral of a codebase, and they don’t really care what they work on as long as it’s interesting. Design and requirements meetings are something they enjoy, and many times organize, and they take no particular pride in the cathedral because they see it as the group effort it really is, with their role as a facilitator more than anything.

I think of this persona as the typical systems engineer or spec writer, or sometimes architect. Oftentimes, because of their people skills, they end up in leadership positions.

The Specialist

Same as the Generalist, but with specific interests. Not necessarily narrow, but they enjoy being an expert in some small number of things, and then working with teams that need that expertise. They love deepening their knowledge on the subjects they master, and they love putting that knowledge to good use to efficiently and elegantly solve challenging problems that, without their expert advice, might otherwise take the team twice as long to create something half as good. They’re often the special guest in the meeting, because they’re an expert in authentication or databases or whatever, and this team needs some authentication or database know-how imparted upon them.

The Builder

Collaborative, goal-oriented, with broad interests. They just love to build things. They have preferences of course, but by and large are open to working on a wide variety of projects. They don’t really care what it is or what the tools involved are or what the platform or the frameworks are — they’ll learn it all and make it work, and work well. If there’s a design in place, they’ll follow it, but if there isn’t, they’re happy to make one. They work very hard, will do everything possible to hit a deadline, and will deliver as good a product as you can expect.

Because of their broad interests, they gain broad experience, and because they’re goal-oriented and get a reputation for meeting those goals, they also tend to end up in leadership positions.

The Conductor

Same as the Builder but with specific interests, the Conductor loves to get a particular thing done. But she’s collaborative, and the combination of traits here means this is almost always someone who quickly gravitates toward leadership. Project management, if that position exists, or whatever other role fulfills that function: tech lead, manager. The important thing is to be able to work in a team and motivate that team to do the thing well. Before she was in leadership, she got frustrated time and time again when the goal wasn’t met, and vowed that she could do better.

She doesn’t get involved in all the details of how every component works, because what she cares about is her specific role in it: to orchestrate all the moving parts so that the thing will ship on time. But she’ll get involved in whatever is required, do whatever it takes, and set up as many meetings as is needed to make sure someone will fix the situation so that the thing will ship on time.

The Architect

Now we’re in the Visionary half, where other people are, at best, a necessary evil to accomplish the vision, and at worst, something to be avoided as much as possible. The Architect, like the Generalist, loves building cathedrals; the difference is that they have a specific cathedral in mind. They don’t necessarily want your input, but the good ones will fight that urge and still consider it, if for no other reason than to improve their future visions.

With broad interests, they’ll work on pretty much anything they can, as long as the work is interesting, and they can put their spin on a beautiful design that will be implemented to the letter. Even if they have to implement it all themselves. Even if it takes 3x the allotted time. Even if the technology doesn’t exist, and they have to invent it themselves. Perhaps especially then. This is someone you want to take the thing to the next level. Depending on many, many things you might end up with the Wardenclyffe Tower or the Taj Mahal.

The Artisan

I would bet a small sum of money that the guy who maintains ntpd is an Artisan; probably the two that maintain OpenSSL, too. Unsurprisingly, they have a specific interest: working in some domain or in some technology or theme or whatever else is the singular thing that drives them. They love improving it and polishing it and crafting it into a beautiful creation that is their life’s work. They are watchmakers. They have a vision, often very specific, and will work tirelessly to see it come to life and possibly be successful — the latter is less important. What’s important is the act of creation.

Artisans are the developers that you talk about going into a cave and emerging with The Work some months later. The great ones do it mostly to spec, but creative license is something you generally have to deal with here, because that mode of operation is how Artisans produce the best work. Put them on a sprint team working on random tickets off the queue and they’ll wither and disengage. Give them a challenging problem with a corresponding amount of freedom, and they’ll make sure to wow the whole team.

The Hacker

A clarifying point for those not of the software industry, who may be reading this:

A computer hacker is a computer expert who uses their technical knowledge to achieve a goal or overcome an obstacle, within a computerized system by non-standard means. Though the term hacker has become associated in popular culture with a security hacker – someone who utilizes their technical know-how of bugs or exploits to break into computer systems and access data which would otherwise be unavailable to them – hacking can also be utilized by legitimate figures in legal situations.

from Hacker at Wikipedia
But that’s a great definition for our purposes here too: a goal-oriented, singular visionary with broad interests. Could also be called a “fixer”. They have the confidence to learn what they need and figure out the situation, through whatever means necessary and regardless of pressure, to get the thing done. They don’t care so much if the thing is duct-taped together, as long as it works for now. It can always be done properly later; what’s important is that the goal was met, the crisis averted, and the mountain climbed swiftly.

This is the kind of person you want on a diagnostics/field-support team. Or on a critical release that can’t be late. Or on a proof-of-concept that might create a lot of value, if anyone could get it to actually work somehow. Just don’t saddle them with process and red tape, and let them hack the planet.

The Marshal

The counterpart of The Conductor, The Marshal differs in having a vision for how the goal will be achieved. Much like The Architect, they don’t necessarily want input, but the good ones know they’ll have a better chance of achieving the goal by getting the counsel of knowledgeable people they admire. Unlike The Hacker, they have no interest in working on different things: they have one goal, and it’s usually a sizeable one. Like freeing Europe from Hitler’s grip. Though the term “General” is too… wait for it: generic.

Marshals are great at leading focused efforts of outsized value. Because they are laser-focused on delivering, they don’t want to deal with too many personalities or process, and so need the right kind of team around them, in the right kind of environment. And under those circumstances, they lead with passion that energizes the team and they swat away all distractions, jump in and pull the weight of three people, and lead the crew to defeat Khan against all odds.

The Disengaged

There’s one more type of engineer, and this one’s not described by the model. If you’ve tried to figure out what motivates someone who is smart and capable, but their performance is consistently at best mediocre, and nothing really works out well… it might just be that they’re not interested in the work.

Maybe they don’t like the environment (the team, the project, the company, etc) or maybe they’re distracted by bigger problems in the real world or maybe they don’t like engineering and ended up doing it because someone told them it’s a good job. Whatever the reason, some people are just there to work for eight hours a day because they can’t get much enjoyment out of the work.

Passion Pushers - Why Doing What You Love Is Bad Advice

And that’s okay. Most people need a job, and if they bring value to the team, there’s a place for them. There’s always too much work for someone that has a clear niche, there’s always enough on the backlog that no one wants to do but that still needs to be done, and there will always be emergent situations that someone needs to attend to. The Disengaged can be great for essentially doing whatever the project requires at that time, without having to worry about what motivates them — because nothing might, except more time off, or more money so they can retire earlier.

Cheat Sheet

Type          Teamwork         Focus    Scope
Generalist    Collaborative    Design   Broad
Specialist    Collaborative    Design   Specific
Builder       Collaborative    Goal     Broad
Conductor     Collaborative    Goal     Specific
Architect     Visionary        Design   Broad
Artisan       Visionary        Design   Specific
Hacker        Visionary        Goal     Broad
Marshal       Visionary        Goal     Specific

8 Engineering Types


The most accepted model for personality traits is the Big Five, which has five dimensions, each a spectrum between two poles:

  1. Extraversion (outgoing/energetic vs. solitary/reserved)
  2. Agreeableness (friendly/compassionate vs. critical/rational)
  3. Openness to experience (inventive/curious vs. consistent/cautious)
  4. Conscientiousness (efficient/organized vs. extravagant/careless)
  5. Neuroticism (sensitive/nervous vs. resilient/confident)

To me, these seem like great dimensions on which to differentiate personalities in general, but not as useful in terms of engineering. In our industry, being extraverted and agreeable only matter insofar as they’re important for working with and leading others, so I collapsed those two into “teamwork”. Similarly, “conscientiousness” and “neuroticism” aren’t as meaningful on their own, but when looking at them through the lens of what drives people, these two made more sense as a single “focus” dimension, where we differentiate between the journey and the destination. Finally, “openness” seemed an important trait on its own merit, but through my engineering lens, it became “scope” — “broad” being “curious” and “specific” being “consistent”.

But besides the Big Five, I also looked at the Four Temperaments, which is the classical view of personalities, and which is not too far off base — and is probably why it survived the centuries. It defines four personality types:

  1. Sanguine: extraverted, social, charming, risk-taking
  2. Choleric: extraverted, decisive, ambitious
  3. Phlegmatic: introverted, agreeable, philosophical
  4. Melancholic: introverted, detail-oriented, perfectionistic

If you look not-all-that-closely, two dimensions of the Big 5 are mostly at play there as well: extraversion and conscientiousness. In terms of this engineering model, you could say:

  1. Sanguine: collaborative and design-focused, the Generalist and the Specialist
  2. Choleric: collaborative and goal-focused, the Builder and the Conductor
  3. Phlegmatic: visionary and design-focused, the Architect and the Artisan
  4. Melancholic: visionary and goal-focused, the Hacker and the Marshal

The Four Temperaments were also used to seed the Keirsey Temperament Sorter, which expands them and maps them onto the Myers-Briggs Type Indicator. Like the MBTI, it defines four dimensions:

  1. Concrete/observant vs abstract/introspective
  2. Temperament: cooperative vs pragmatic
  3. Role: informative vs directive
  4. Role variant: expressive vs attentive

They map into 16 personalities, but I had trouble mapping these to anything useful in the engineering world. On its face, going backwards from the 16 personalities, the dimension that seemed not important (aside from management and QA) was cooperative vs pragmatic, but of course that trait is very important in other roles too, so it just doesn’t seem to be a good model for our domain.

The three dimensions I ended up with, to me, highlight the most important differences in engineers: those who love to work with others vs those who love to go into the cave; those who love to release code vs those who love to create beauty; and those who love to work on anything as long as it’s challenging vs those who have a particular passion.

Again, this is all methodology so soft it would put whipped cream to shame, and it shouldn’t be used for anything serious: not only because of the many shortfalls of a model like this, but also because people change all the time and don’t fit neatly into one or two or even eight boxes.

But I do think that, especially as a people leader or as an introspective individual contributor, being aware of these sorts of inclinations can help with knowing what kind of work makes a person happy, which is very important because of the old proverb: “do what you love, and you’ll never work another day in your life.” A happy employee is the most productive they can be.

Let me remind you of General Yamashita’s motto: be happy in your work

Colonel Saito in The Bridge on the River Kwai (1957)

On Engineering Consensus

The main reason science works is, of course, the scientific method. It forces rigor into the process and it’s what began to fork hard science from philosophy. Engineering, while not a science per se, is a sibling discipline. It too benefits from the scientific method (though more so in the realm of testing) and the engineering method is similar:

The scientific method:

  1. Ask a question: “Does fire destroy matter?”
  2. Form a hypothesis: “Burning stuff in a sealed container should tell us”
  3. Make a prediction: “If the container weighs the same, the matter just turned to gas”
  4. Run a test: burn some stuff in a sealed container
  5. Analyze the results: see if it weighs the same or is lighter

The engineering method:

  1. Consider a problem: “How can I seal a container?”
  2. Design a solution: “Maybe putting silicone on a jar lid will do it”
  3. Implement the solution: put the silicone on the lid
  4. Run a test: burn some stuff in the sealed jar
  5. Analyze the results: see if any smoke got out, and if there were any bad side effects, like melting

We can generalize them both to something like:

  1. Have an idea
  2. Figure out what to do about it
  3. Do that thing
  4. See if it worked

In any case, they’re related. And one overlooked aspect of science is that it’s not all that cut and dried. Rare is the experiment that produces unequivocal results that are obvious to any layman. Lavoisier’s experiments on the conservation of matter seem straightforward to us now, but the test could’ve gone wrong in a lot of ways: the sealed glass vessel could’ve had a microscopic leak, the scales might not have been sensitive enough to detect that some matter was destroyed, and Lavoisier himself could’ve had his finger on the scale!


This is why science relies on not just the method, but equally so, also peer review. Other people had to read about Lavoisier’s experiment or maybe observe it in person. People that were experts enough in the field that they understood all of the details about creating a sealed vessel and about the accuracy of scales and other aspects of the experiment. And eventually, other people recreated his experiment and got the same result, and only then was the scientific consensus attained that no — matter cannot be destroyed.

Good science works because regardless of the prestige of the scientist or the seeming quality of the experiment, the finding is independently verified by other experts in the field: peers who know what to look for and which notions to accept as scientific fact.

Good engineering works the same way. Instead of peer review of research papers we do peer review of design documents and pull requests, and if that step of achieving engineering consensus is missing, the quality of the work suffers.

“But,” you say, “the testing will prove the quality of the work!” Except that there’s a fine distinction there: tests will prove that the solution works as intended; they say nothing about how well it’s built. It could be held together by duct tape, it could be an overly complicated Rube Goldberg device that’s impossible to maintain, or it could be a pile of spaghetti that’s impossible to refactor. In engineering as in life, the ends rarely justify the means. And so we need consensus on whether those means are good.

Consensus is a tricky thing though. I dread submitting my code for review, even when I’m very happy and confident with it, because I know there are things I might have missed, and as much as I want to embrace learning from my mistakes, I really don’t like to make mistakes. However, when certain smart, experienced people are on vacation, I don’t mind it so much, because I know I haven’t made any mistakes junior developers are likely to catch, and I can make a good argument if there are questions on my approaches. But those arguments might not fly past more senior developers, who might have insight that I’m lacking and the experience to know what works and doesn’t to back their stance with.

So it’s not enough to just get the consensus of any two people: for quality consensus, it has to be two (or more) of your peers. Developers operating on your level or higher, who not only have the general experience and skills to recognize whether your work is good, but also have the specific experience with the surrounding landscape — be it the type of thing you’re designing, if it’s a design document, or the codebase that you’re changing, if it’s code.

And it should be people who aren’t afraid to speak up. Some talented engineers that would otherwise be good peer reviewers might be intimidated by a Bob that’s less talented. Maybe this Bob is higher up the totem pole, or maybe he bullies, badgers, or simply exhausts all opposition.

Your choice of peer reviewers should be true peers. People who are:

  1. Technical equals
  2. Organizational equals
  3. Up for a debate
  4. On good rapport

(That last one is to avoid a situation where Chelsea always nitpicks Bob’s code because she thinks he’s the worst.)

This is a kind of ideal engineering consensus to strive for, and for better or worse, in practice there are two reasons why it won’t happen all the time:

  1. Most teams are small and there’s no equal to the tech lead, either technically or organizationally.
  2. Most things aren’t important enough to spend a lot of time reaching consensus on.

Which exposes an important point in technical leadership: one of the reasons having good leaders matters is for the times when consensus matters. Good tech leaders have good rapport with the team, they mentor and build expertise in others, they set high standards for excellence, and they encourage healthy debates among the team members. Over time, good leadership results in exactly the kind of savvy, comfortable team that generates worthwhile consensi. Consensuses? Consensi. Please excuse me while I look up the consensus on this matter.

DRY Code, WET Comms

The principle of DRY code is probably one of the most important bedrocks of professional programming. It’s a hallmark of what separates an amateur coder from a legitimate engineer: thinking ahead; designing your codebase; making it flexible, modular, and maintainable.

If you haven’t come across the acronym before, it stands for Don’t Repeat Yourself, and it means that pretty much any time you find yourself duplicating blocks of code or logic, you should think about pulling them out into their own function or class or whatever, which is then called from both of those places.

xkcd #2347

Why? Because when you later find a bug in it, you can just fix it in one place, instead of fixing it in one place and forgetting to fix it in the other. Or worse, not knowing there even is a second place to fix it in, because you didn’t write the code in the first place, or you did, but it’s been more than three days and you forgot.

Being DRY is the opposite of copying and pasting the same for loop all over the place, because you’re just learning how to program and you’re not sure how to import modules, or really even what a function signature is. That style of programming is WET, because you Write Everything Twice. It’s amateurish and it’s something that a lot of good engineers learn to avoid on their own, after having to fix everything twice, or refactor everything twice, or three times, or six times.
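As a minimal sketch (the pricing functions here are hypothetical, made up purely for illustration), here’s the same logic written WET and then DRY:

```python
# WET: the same subtotal-and-discount logic, copy-pasted.
# Fix a bug in one copy and it's easy to forget the other.
def invoice_total(items):
    total = sum(price * qty for price, qty in items)
    if total > 100:
        total *= 0.9  # 10% bulk discount
    return round(total, 2)

def quote_total(items):
    total = sum(price * qty for price, qty in items)
    if total > 100:
        total *= 0.9  # 10% bulk discount (a second copy to keep in sync)
    return round(total, 2)

# DRY: one shared function, called from both places.
def discounted_total(items):
    total = sum(price * qty for price, qty in items)
    if total > 100:
        total *= 0.9  # 10% bulk discount, fixed in exactly one place
    return round(total, 2)

def invoice_total_dry(items):
    return discounted_total(items)

def quote_total_dry(items):
    return discounted_total(items)
```

When the discount rule inevitably changes, the DRY version gets edited once; the WET version gets edited once and silently drifts in the other copy.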

However, people are not CPUs. In fact, we built CPUs, because we’re so bad at being CPUs. For us, seeing the same thing more than once is boring. It’s something we filter out, because boring is not dangerous. The motionless green grass won’t kill us, and so we hardly even notice it. But if it moves in a funny way, suddenly we think it’s because of a snake, and it’s the only thing we can see. We are neural networks. We are built to recognize patterns, to flag any dangerous ones, and to recognize them with high urgency. And so, we are not having fun adding a long list of numbers. Or cross-referencing scientific texts. At least not most of us.

But there’s a tendency, especially in the technical manager, to still think DRYly. “From now on, we’re going to use RabbitMQ for all asynchronous messaging, unless there’s a good reason not to”, you write in the #dev-general Slack channel, adding that it’s already part of the deployment anyway, that it’s very performant, that it usually doesn’t make sense to reinvent the wheel, and that it’ll be easy to adapt existing code to use it. When you’re done typing this pithy paragraph, you tag it with @channel, giddily dismissing warnings about how many people in what timezones will be getting dinged. “Good! Ding them!”, you cackle. You handle the handful of questions and complaints that arise, explaining that they can still use JSON, that you don’t have to rewrite the project in Erlang, and that yes it’s weird that Dell used to own RabbitMQ. And VMWare. And SpringSource. And that VMWare still owns it. Yes, it’s literally a list of companies you wouldn’t expect… can we move on?

Satisfied with how you navigated that, two weeks go by and you’re in a design meeting and someone pipes up asking why RabbitMQ is now part of the design diagram. Your ears prick up and you jump in: “Hey Frank, did you not see my Slack announcement a couple of weeks ago?” As soon as you say it, you realize that no, he did not — for he was on vacation. You quickly acknowledge it and say you’ll send it to him, and you search for it because you forgot to pin it in the channel, but that’s fixed now.

Two more weeks go by and you come across a defect ticket being worked, on the old, homegrown asynchronous message service. “How did this even make it into the backlog??” You look at the names on the ticket and DM them “hey guys, there’s some kind of disconnect, because I see this ticket, but we were supposed to stop using the old messaging service.” It’s news to both of them. They remember seeing the Slack announcement, but didn’t realize they should start using Rabbit right away. They thought it was just for new development. “Hmmm… I guess the announcement could’ve been clearer”, you realize.

Two more weeks go by and you’re looking over a giant pull request from Stan when you notice … no … it can’t be. He’s added a whole mechanism for pushing messages over WebSockets! You look at it again, and it’s true. It’s exactly what that cowboy is doing. He just completely ignored the announcement and spent weeks working on this giant component that replicates exactly what Rabbit is supposed to do for us, and fragmented the architecture in the process.

"What Slack announcement is this?"
<you send him the link>
"Oh. Sorry, I don't remember seeing this. I don't normally pay attention to Slack unless someone DMs me."
"Yeah, can't concentrate with it dinging me all the time, so I turned notifications off."

After you’re done slamming your head against the desk, he tells you that he actually considered RabbitMQ on his own, before dismissing it because he already needed the WebSocket connection for bidirectional communication and it would’ve made the whole message path more brittle to go over two connections. All of which are good arguments that fit into the “good reason not to” clause.

The story has a happy ending because I’m not GRRM, but it easily might not have. And a fourth incident might not. The point here is that there are all kinds of opportunities and good reasons for people to not internalize things. Here are some:

  1. They didn’t get the memo, for various legitimate reasons ranging from being out that day to technical glitches
  2. They didn’t understand all or part of it
  3. They skimmed it, didn’t think it affected them, and promptly dismissed it
  4. They don’t comprehend things well in that medium: some people need to see pictures and read things, some people need to hear them, some people don’t read Slack but do read email, some are the opposite
  5. They heard it, understood it, and forgot it anyway

As a leader, I feel like it is absolutely part of the job to be the positive version of Marty McFly’s 2015 boss, who tells him he’s fired via TV and three different faxes, for some reason. Send that Slack announcement, but also send it in an email, and say it out loud in three different meetings with slightly different people, and iMessage that one guy who doesn’t pay attention to anything but iMessage, and paste it in a document and file it in a good place in your document hierarchy and make sure it uses good, searchable words, and add a joke or a pun or something to make it memorable. And repeat it every time the subject gets anywhere close to it, until people audibly start groaning in your direction. That’s when you’ve succeeded.

In other words, for the important stuff, do whatever is necessary to make sure it gets through to the right people. Go to them. Meet them on their own terms, in whatever way they process information. Write everything five times. People are different, both from each other and from machines. Embrace those differences, because it’s what makes a team a powerhouse of creativity. So you have to repeat yourself — it’s well worth it.

The Best Testers Are Scientists

It doesn’t take long to appreciate a great software tester. And it doesn’t matter if she’s a manual tester or writes automated tests, because what really matters are the types of tests being run: curious tests. Tests that don’t just discover a bug and quickly document it away in a ticket, along with the state of the whole world at the time of discovery. But instead, tests that try to find the exact circumstances in which the bug occurs.

The more precisely those circumstances are defined, the more helpful the ticket is to the developer; ideally, it takes their mind straight to the exact function responsible for the bug. In those cases, you can almost see the light bulb go off:

Tester: I’ve only seen the bug on the audio configuration screen, and it usually crashes the app after single-clicking the “source” input, but I’ve seen it a couple of times from the “save” button too. And it seems to only happen after a fresh install on Android 10.

Dev: ohhhh! That’s because the way we handle configuration in Android 10 changed and the file the audio source is saved in doesn’t exist anymore!

This is exactly the kind of dev reaction you want to a bug report. It’s an immediate diagnosis of the problem, which was only made possible by a very well-researched and described bug. But notice how that description could, with changes only to the jargon, have been written by an entomologist:

Entomologist: I’ve only seen the bug on a tiny island off the coast of Madagascar, and it’s usually blue with green spots, but I’ve seen a couple of them with yellow spots too. And it seems to only come out right after sunset in the wet season.

Which is kind of obvious when you think about it, because what do scientists do? They test the software that is our reality. Galileo’s gravity experiment is one of the more famous in history (and likely never happened), but what is it, in software terms? He wanted to know if the rules of our universe took weight into account when pulling things toward the Earth. A previous power user, Aristotle, figured that the heavier a thing was, the faster it would fall. But that user failed to actually do any testing. So thank God that talented testers, like John Philoponus and Simon Stevin, came along and figured out that things mostly fall at the same rate through air, and then bothered to update the documentation.

What Aristotle did was assume the software worked in a certain way. Granted that he didn’t have the requirements to reference, but he probably noticed that you have to kick a heavy ball harder to go the same distance as a lighter ball, and he figured that the Earth kicks all things equally hard. That’s the equivalent of our tester above seeing the “source” input work on Android 9 and not bothering to test it on 10. Or seeing that it worked on the video configuration screen and not bothering to test it on the audio one too.

And that’s okay, because Aristotle was not a tester. He was more like a fanboy blogger. But what testers should be is bona fide scientists, like Simon Stevin, who follow the scientific method:

  1. Ask a question
  2. Form a hypothesis
  3. Make a prediction, based on your hypothesis
  4. Run a test
  5. Analyze the results

In our example with the “source” input, after the tester saw it the first time, she probably did something like this:

  1. “why did it crash?”
  2. “maybe it was because I pressed the ‘source’ input”
  3. “if so, that’ll make it crash again”
  4. Relaunched the app, tried it, it crashed again.
  5. “okay, that was definitely the reason”

Aristotle might stop there and file the bug: “app crashes when using the ‘source’ input”. And the developer would try replicating it on their Android 9 phone and kick the ticket back with “couldn’t replicate”, and that whole cycle would be a waste of time. But our tester asked another question:

  1. “does it crash on this other phone?”
  2. “if it doesn’t, it’s a more nuanced bug”
  3. “I think it’ll crash though”
  4. Tried it on the other phone: it didn’t crash
  5. “what’s different about this phone?”

And she continued the scientific process like that, asking more and more pertinent questions, until the environment that our bug exists in was fully described. Which is exactly what you want in a bug report, because anything less will, in aggregate, be a productivity weevil, wasting both developer and tester time with double replication efforts and conjectures about the tester’s environment and back and forths. A clear, complete bug report does wonders for productivity.
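The tester’s narrowing loop translates naturally into code. Here’s a toy sketch (the `tap_source_input` simulation and its crash condition are entirely made up, mirroring the story above) that sweeps an environment matrix to isolate exactly when a bug occurs:

```python
from itertools import product

# Simulated app behavior: tapping the "source" input crashes only on a
# fresh install of Android 10, mirroring the bug in the story above.
def tap_source_input(android_version, fresh_install):
    if android_version == 10 and fresh_install:
        return "crash"  # config file missing after a fresh install
    return "ok"

# The scientific loop, mechanized: test every combination of conditions
# and record exactly which ones reproduce the crash.
def isolate_bug(versions=(9, 10, 11)):
    crashing = []
    for version, fresh in product(versions, (True, False)):
        if tap_source_input(version, fresh) == "crash":
            crashing.append((version, fresh))
    return crashing
```

Run it and only `(10, True)` comes back: the fully described environment the bug lives in, ready to paste into the ticket.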

So then, why not just teach all your testers the scientific method? Because it doesn’t work in the real world. We all learn the scientific method, but few of us become scientists. And I imagine that, just like in any profession, a not-insignificant number of scientists aren’t good scientists. Knowing things like the scientific method is necessary, but not sufficient to make a good scientist. You also need creativity, in order to ask the interesting questions, and more importantly, curiosity to keep the process going until its natural conclusion — to uncover the whole plot.

Tangentially, curiosity is a hugely important trait in great developers, too. But for testers, even more so.

Be The Lorax

I am the Lorax. I speak for the trees.

“The Lorax”, 1971

This was Dr. Seuss’ favorite of his books. If you haven’t come across it, it’s a great fable about a woodland creature who keeps warning an industrialist to stop cutting down all the Truffula trees; but the guy doesn’t listen and proceeds to destroy the ecosystem.

The Lorax book cover

From time to time I feel like the Lorax, but instead of the trees, I speak for the developers, who are every software company’s most precious resource1. And I think this is a crucial, but often overlooked part of the software manager’s job: litigation.

At times the plaintiff, other times the defendant, but usually in opposition to someone named Taylor from another department, like finance or HR, who doesn’t understand engineers. Sometimes Taylor is someone very high up, who used to be an engineer a lifetime ago, but they’ve forgotten what it’s like, now that their days are filled with meetings about sales projections and synergy. And here you come, Sr. Cat Herder, with a request to change the new dress code policy.

“You might not have heard, but we had a bit of a problem in Marketing, so we had to institute the dress code. We didn’t want to, but you know Legal.”

You see, Taylor’s not a villain. Almost nobody is. They’re just trying to do their best, and one thing people outside of Engineering are pretty bad at doing is putting themselves in an engineer’s shoes. It would be easier for most people to pretend they were a parakeet. But this is where you come in — you who have knowledge of the way of life in the dark cave of Engineering.

So you use your nerdy charm and wit to show that having a policy is fine, but that it just needs to be tweaked, because while the developers have no intention of causing problems, roughly half of them will quit before dressing “professionally” at work. And a third of those have not worn long pants or closed-toed shoes in years — and won’t start now. Not when they can get a good job by just whispering the words “I’m looking for a change” into the ether that’s continuously monitored by eager recruiters. Because as it turns out, those developers are a lot of the best ones we have, and they’re past putting up with well-intentioned policies. And if even one of them leaves, it’ll cost us a ton in recruiting, on-boarding, shipping delay, and general business risk, which is surely worth tweaking the policy to allow cargo shorts and sandals.

“But it’s 45°F outside!”, Taylor exclaims.
“It doesn’t matter, because they’re almost never outside,” you reply, and then continue with a whisper, “and they’re really stubborn.”

If you do this well, the policy will get tweaked before the developers are even told what was in the policy email they deleted, and the builds go on without a blip.

Sometimes you have to be the change you want to see in the company, which is a bit harder. Convincing the powers that be to allow flexible hours, or to get Engineering more expensive laptops, or that the savings were not worth switching to Microsoft Teams — these are the kinds of things that might need a PowerPoint. A beautiful, well-researched, entertaining PowerPoint, which will take a lot of your precious little coding time.

But these are the things you have to do as a good manager and the sacrifices you have to make, because you are the Lorax, and you speak for the devs. Though… hopefully better than the Lorax, in that you actually succeed in preventing the collapse of the ecosystem.


  1. Just to be clear: I was using “resource” metaphorically. As much as nature abhors a vacuum, I abhor calling people resources. Trees and laptops and StackOverflow articles are resources — people are not.