On Introducing Change

One of the most important aspects of leadership is changing the status quo. Things are never perfect and thus can always improve, and the only way to do that is to make changes. But doing so is tricky, because there are only a few ways it could go:

|                  | Implemented optimally | Implemented poorly |
|------------------|-----------------------|--------------------|
| The right change | Great!                | Probably bad       |
| The wrong change | Bad                   | Eek!               |

There are a couple of other important dimensions to making a change — such as its urgency and impact — but they wouldn’t change the point: there are a lot of ways to screw up the introduction of a change, and a pretty narrow set of ways to do it well. This is absolutely one of the cases in which the road to hell is paved with good intentions. Further below are my general principles for implementing beneficial changes in optimal ways, but first, a few caveats:

  1. Not uncommonly, changes come from outside our span of control: a new law or regulation, a change in strategy from the CEO, rippling effects from the latest financial models, or your best friend Dave — who manages another team — letting you know your team will have to spend 2 weeks refactoring their code because they need to swap out a library. Because of that, an equally important skill — alongside willingly introducing changes under your control — is managing change imposed by others, both in terms of material effects on the roadmap and in terms of morale. But that’s another article.
  2. No one gets it right anywhere near 100% of the time. Not only because it’s hard to even please most of the people most of the time, but also because organizations are what Cedric Chin calls complex adaptive systems, which are incredibly difficult to model because we as leaders can never hope to have enough visibility into the motivations, biases, understanding — or many times even the responses — of any given person. (And if you’re interested in an in-depth look at introducing organizational changes — org design, in Chin’s words — definitely read his article The Skill of Org Design.)
  3. Each situation is unique, and there’s no generalizing change management. There are impactful changes and changes no one cares about. There are short-term and long-term changes. Ones that affect a lot of people and ones that don’t. Simple and complex. Clear and unclear effects. And on and on. The principles below, I’ve found to be widely applicable — though certainly not universal. However, I always consider them before occasionally sidestepping them.
  4. Your Values May Vary. I like building consensus when possible, I like engagement and participation, I like transparency, and I like debate. I think those things are fundamental to a great culture, a great working environment, and positive morale, all of which are important because happy employees are more productive. But some issues are divisive and there’s no consensus to be had, some people just don’t care about some things, sometimes we can’t show our cards, and endless debate is a drain on productivity. And while I agree that there’s a time for debate and a time for action, if you tend to index more on the latter set, you might not agree with the below.

And now, the list of 6 Simple Things to Make All Changes a Breeze. Kidding. There’s only 5.

Do as much homework & due diligence as practical

I believe in big design up front (though not waterfall) because efficiency is important to me. My time in the opera only cemented that:

[The operatic development process] should all be very familiar, but how does that differ from the process in most software shops? From my experience, it’s the beginning and the end: the design and the testing. Most shops focus on the middle: the development itself, with testing being a necessary evil, and design often being a quick drawing with some boxes and arrows, maybe just in someone’s head.

Software & Opera, Part 3: Design & Test

In programming, CI/CD, automated testing, and auto-updates make design defects less painful. But with organizational changes, it’s usually a lot harder to fix a defective change, or roll it back. So thinking about it thoroughly before deploying it — i.e., designing it well — is even more important.

A good design depends on a great understanding of the area in question, so it’s important to find out as much as time allows: talk to the people with opinions, read what others have done and what the best practices are, and understand what happened in similar situations, and why. Just because something worked for Netflix doesn’t mean it’ll work for my team. The more I can refine my mental model of possible changes and how each option might affect this particular group of people, the more likely I am to pick the optimal option, and the more successful the rollout is going to be. Ergo, this first step is worth spending as much time on as I need to feel comfortable about that mental model.

Shop and iterate the concept

Once I have something I think is good, I write it down. This is a draft of the written change, and it’ll at least be referenced later by the “official” announcement, if not be included with it. Writing it down allows me to read it from the viewpoint of specific other people and better gauge what I think their reactions will be: to specific phrasings, to the tone of the whole, and to the intent of the document.

I then distill that into an elevator pitch, and casually drop it in conversation with some people I know will have thoughts. Maybe during a 1:1, or while waiting for a meeting to start, or whatever. “Hey Jane, I’ve been thinking about that problem and am curious what you’d think if we …”. Gauging that initial reaction is important. Is it positive? If not, is it an issue with how I said it, or a problem with the actual change? Why?

I go back and refine. Not everyone will like the change, but it’s important that I think this is still the best change, in spite of specific criticism, and that the right message is getting across — especially to the people that disagree with it. And if most people disagree with the change but I’m still convinced it’s the right one, I take a look at the messaging. What piece are they missing, or misunderstanding, or seeing differently? Sometimes it just comes down to differing values, in which case there’s little to be done but agreeing to disagree.

I then start showing people the written doc — again, live. The interested and affected parties first, then the remaining powers that be. Are they parsing the writing as intended? What helpful suggestions do they have? Is all the relevant context available to them?


Once I’ve gotten input from as many interested, affected, or responsible people as I can — ideally all of them, but at least a significant sampling for larger changes — I share it out more widely, though informally, as a request-for-comments, probably in a relevant Slack channel: “We’re thinking about making a change to the thing to improve this stuff. Please share your thoughts!”.

By now, there should be no surprises in the reactions. Hopefully my doc addresses the criticisms I’ve heard and thought of, though some people will still bring them to me — either because they think they can change my mind through novel arguments, or by adding to what they’re sure is an enormous chorus of opposition, or maybe just because they didn’t read the doc carefully. WET comms are important in helping to avoid that last one, especially with larger groups: different people need the same thing communicated to them in different ways.

In this step, I try to engage with the bigger discussion as much as I can, but I’m more on the lookout for what I’ve missed earlier. The bee no one’s seen flying around. And if I see one, I evaluate the impact: how much does this change? If I’ve done my homework in step 1, it shouldn’t be fundamental. Hopefully I won’t have to go back to the core group I started with. But better to do that than push The Wrong Change forward. Humility is important here. It’s easy to get attached to a bad idea because “no way it can be that wrong after all the thought and work I’ve put into it so far!”.

Be the glass you want to see through in the world

I value transparency, so I try to be as transparent as possible. This is not always possible, of course. But it’s possible more often than not. So in my document, I try to explain context, rationale, alternatives not chosen, and anything else that might be useful — not only to the curious bystander now, but to the next person who has my role. Or to my next boss. Or to the awesome engineer that starts next year. Or me in 3 years, having forgotten all the details of how this all went down.

Why was this decision taken? This particular one. Is it still valid under the current conditions, or should we do something else? Having everything written in the record can help dramatically both now and later.

Rush only when necessary

Sometimes change has to happen quickly. The monolith is on fire every other day. My best engineers are thinking about leaving. A re-org is coming next week. But most changes are not that. The new set of Priority fields in JIRA can be rolled out when they’re good and ready. Yes, they should improve reporting, but it’s more important to get it right, because we’re not changing them again this year. Probably not next year either.

It’s important to keep making progress and not let proposed and socialized changes languish because “I’ve been so swamped this quarter” — but as long as useful activity is happening, a change shouldn’t be rushed unless it needs to be. And sometimes change is urgent, and skipping some of the above principles, or abbreviating them, is needed.

But when it’s not urgent, and news of the change spreads through the grapevine and people are clamoring for it, take that as good news! It’s useful to update them on the status, but the great part is that they want the change, which makes the overcommunication aspect way easier. Of course, the masses waiting with bated breath add some time pressure, but it’s still important not to get too eager, because again: it’s often hard to roll organizational changes back. Even if it’s just a matter of sending an email and no one even has to remember to change their workflows again, it’s still trust in your insight as a leader that gets lost. Political capital.

As leaders, the more of a track record we have of making thoughtful, positive changes, the easier it is to get consensus and make subsequent changes. It’s a feedback loop that’s driven — like a lot of things — by care and diligence, paying attention to details and, more than anything, valuing people.

Sweat Some of the Small Stuff

Of the three hard problems in computer science, the one I probably spend the most time on is “naming things”. (Off-by-one errors are too often the bane of my programming existence though.) And sometimes, I get push-back or eye rolls on how it’s not worth spending any time on the name. “Let’s just pick one at random and move on”.

This is usually the argument of someone who doesn’t believe in, or care about, The Thing’s future. Think about the times when everyone obsesses over choosing a name: when it’s for their kid, or their pet, or their startup. To a lesser degree, their project or their social media handle. Stuff that has a real future for them, that they know will likely and hopefully be used for years and come to be shorthand for something near and dear to them.

xkcd #910, “Permanence”

So when someone rushes through the name of a Python module or a wiki page or a feature, it often means that they don’t think it’ll matter: be it because it won’t last, or because no one will come across it again, or even just that they won’t. In their mind, there’s no future in which This Page will need to come up in Confluence search results, or in which Dave the Python dev will need to quickly understand what a variable called “idx” does.
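To make the “idx” point concrete, here’s a hypothetical sketch (the invoice scenario is made up) of the same lookup with a throwaway name versus a searchable, descriptive one:

```python
def first_overdue(invoices):
    # "idx" forces the next reader to reverse-engineer what is being indexed.
    idx = next((i for i, inv in enumerate(invoices) if inv["overdue"]), None)
    return idx

def first_overdue_invoice_index(invoices):
    # A descriptive name shows up in searches and explains itself at call sites.
    first_index = next(
        (i for i, invoice in enumerate(invoices) if invoice["overdue"]), None
    )
    return first_index
```

Both do the same thing; only one of them will still make sense to Dave a year from now.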

But this is not an article about naming things — though, I’d be shocked if I don’t write that one someday. It’s about the more general problem of failing to recognize the importance of some of the “small” stuff, like the names of things which may be typed and spoken and searched by untold masses for years to come. Of course, there are a lot of small things that are unimportant; things that can be safely overlooked or bypassed or pushed off ad infinitum, because they don’t really have downsides. Things like learning Italian, diagramming your codebase in UML, or showing up to your 1:1s. Okay calm down — yes, that last one was a trick.

But some very important things are deceptively small. Like honey bees and gut bacteria. Or out of sight, like the type of insulation in your house, which will make a difference in your HVAC bill in the tens of thousands of dollars over its lifetime. Or the lack of automated testing / CI/CD / code formatting / linting, which will make that same difference in dramatically less time. Or skimping on offsites for remote teams. Under-funding the development of developer tools. Not prioritizing documentation. Not doing interview training. Not writing a Slack culture guide. Skipping 1:1s.

There are a lot of these small things that have outsized effects: the individual effort is small, but the cumulative results are big. Just like compounding interest, a tiny seed, or that domino trick above. Conversely, not doing them can lead to death by a thousand paper cuts. Or if not death, at least an increasing leak of money. One example from the wild I often think about is how iOS updates are (now) fairly seamless and how much money that investment probably saves: because it encourages people to upgrade quickly, Apple can avoid wasting thousands and thousands of hours per year on supporting a large fleet of outdated devices.
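The compounding comparison can be made literal. A toy calculation — the 1% figure and the workday count are arbitrary, purely for illustration — of how a small recurring improvement stacks up over a working year:

```python
# Toy numbers: a 1% efficiency gain, compounded over ~250 workdays.
daily_gain = 1.01
workdays = 250
cumulative = daily_gain ** workdays
print(f"{cumulative:.1f}x")  # roughly a 12x cumulative effect
```

The individual step is barely noticeable; the sum is not.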

And while it’s tempting to keep going with a list of items like the above, the reality is that the list depends a lot on the circumstances: the age of the company, the size, the culture, the tech stack, etc. There’s no one list to rule them all, as far as I can tell. The closest I can come is that a lot of this mighty small stuff falls under “enablement” — but that’s a broad term for a lot of things. So it becomes yet another one of those aspects of leadership that’s ambiguous and requires judgement calls, based on experience, about what’s important in concrete terms, right here and right now and with this group of people. And some of that important stuff will be the seeds that make the organization 10x more efficient.

Unfortunately, the bigger problem is what comes next: you’ve recognized what needs gardening, but now you need to convince everyone else of the value. If not done well, this attempt at persuasion can cause problems ranging from politely being chuckled off the virtual podium, to getting a stern talking-to about wasting your time on trifles. “Can’t you see the backend is crashing twice a week? And you’re talking to us about automated tests? We already have QA.” This is why most people don’t even bother taking up the mantle and arguing for the small stuff.

It’s hard to persuade others to spend time gardening when the roof is leaking — and there’s potentially a lot to lose in trying. But it’ll be a worse situation when the roof is inevitably fixed and the garden is devoid of food. Depending on what time period this poor house is in, a forgotten garden might mean starvation, or it might mean living on pizza and Mountain Dew for years until the heart disease kicks in, or it might mean leaking money on buying produce. None of them great.

So spend time on the argument. Learn to become persuasive. Learn what kinds of arguments work on different stakeholders. Become great at crafting a powerful narrative that can change minds. And leverage that for good. In a way, this is the smallest and mightiest of all the things.

Thaw with her gentle persuasion is more powerful than Thor with his hammer. The one melts, the other breaks into pieces.

– Henry David Thoreau

The Case for the Diagnostics Team

I recently watched a lecture by Kevin Hale, who co-founded a startup named WuFoo back in 2006, grew it over five years to millions of customers, and sold it to SurveyMonkey for $35M. He subsequently became a partner at Y Combinator for several years. The lecture was about making products people love, and one of the points he made was around WuFoo’s obsession with the customer:

  1. Each team member had a turn in the customer support rotation
  2. Their response time to customer support issues was a few minutes during the day, and a little longer at night
  3. They hand-wrote personalized thank you cards to random customers weekly
  4. Even though their business (form creation) was dry, the website was designed to be fun and warm, not business-y

It’s a great 45 minute video, and absolutely worth watching — it’s embedded down at the end. But what really drew my attention was that first point above, about everyone doing a customer support rotation. And that’s because at Voalte, which also had a customer obsession, we took a similar approach that we called The Diagnostics Team.

Voalte mobile app
Voalte is a communication platform for hospital staff

The team was like the cast of House: expert detectives in their domain that could tackle the hairiest problems, sometimes getting that “eureka!” moment from the unlikeliest of events. I/O throughput was our lupus.

The mission was a take on the support rotation, but with some twists:

  1. The team handled “Tier 4” support issues: the kind of stuff where a developer with source code knowledge was needed because the previous three tiers couldn’t figure out the issue.
  2. It was cross-functional, so that each codebase (Erlang backend, iOS, Android, JavaScript) was represented on the team
  3. The rotation was 6 months
  4. The team priorities were:
    1. Any urgent issues
    2. Code reviews, with a support and maintainability point of view
    3. Any customer-reported bugs
    4. Proactive log analysis, to find bugs before they’re noticed in the field
    5. Trivial, but noticeable bugs that would never get prioritized by the product teams
  5. Team members nominally did at least one customer visit during that 6 months

The model worked really well, and I think the team is still around, two acquisitions later, at Baxter. It wasn’t perfect (we never got that good at proactive log analysis while I was there, and customer visits ebbed and flowed depending on priorities and budgets) but overall, we hit the goals. “And what were those goals?”, you say. I’m glad you asked!

Cast of House, season 1

Remove uncertainty from the product roadmap

This was the main reason I pitched the idea of a Diagnostics team. After our initial release of Voalte Platform, we were constantly getting team members pulled off of product roadmap work in order to take a look at some urgent issue that a high profile customer was complaining about. And you could never tell how long they’d be gone: a day? a week? 3 weeks? How long does it take to find your keys? And if we had a couple of these going on at the same time, it would derail an entire release train.

The thinking was that having a dedicated team to handle those issues, while costly, was probably less costly than the revenue lost from release delays, while also saving us money in the long run by preventing urgent issues.

And it worked: our releases became a lot more predictable. Not perfect of course, but a big improvement.

Keep a focus on customer needs and pain-points

Our customers were hospitals, and we wanted to make sure things worked well in our app, because lives were literally on the line. Having a team that was plugged in to the voice of the customer meant that fewer complaints fell through the cracks of prioritization exercises. And while the Diagnostics team generally didn’t build features, once in a while they did: if the feature fixed a big pain-point.

Being Tier-4 support, though, is one major way in which this differed from WuFoo’s model: the team wasn’t as exposed to the Tier-1 issues that were known to the frontline customer support people. When developers hear about a frustrating bug for the 4th time, they tend to just go ahead and fix it. But if they’re only exposed to that bug via a monthly report, it won’t frustrate them as much.

Our ideal here though, was to crush the big rocks, improve the operational excellence so that no more big rocks form, and then the team would be able to focus on the pebbles. We had varying success on this, depending on the codebase.

The other prong was customer visits. Each developer would pick a hospital and arrange a ~2 day visit. The hospital would generally assign them a buddy, and they would get the ground truth both from that buddy and by walking around to as many nurses’ stations as possible and asking them about the app.

Most of the time, they wouldn’t have anything to say. When they did, most of the time it was some known problem. But maybe 10% of the time, it would be revelatory: some weird issue because they tapped a combination of buttons we’d never thought of, or used a feature in a completely different way than we intended. And we’d write debriefs of the visit after the fact to share with the team.

No matter what was learned on the trip though, the engineers came back with a renewed sense of purpose and empathy for the customer, not to mention a much better understanding of how hospital staff work and use the product.

The House version of customer visits were rotations in the free clinic.
Great supercut on how not to act on your customer visits.

Improve the quality of the codebase over time

One of the things we were worried about in creating this team is that it would disconnect the developers on the product teams from the consequences of their actions. They’d release all kinds of bugs into the field and never be responsible for fixing them and so never improve. This was part of the reason we wanted Diagnostics to be a rotation. (Though, it ended up mostly not being a rotation, but more on that later.)

Our main tactic to prevent this problem was to make the Diagnostics team a specific and prominent part of the code review process. Part of the team’s remit was to review every PR for the codebase they worked in, and to look for any potential pitfalls around quality and maintainability. Yes, those are already supposed to be facets of every code review, but:

  1. The Diagnostician would have a better sense of what doesn’t work, and
  2. They would have more of a stake in preventing problematic code from seeing the light of day

Build expertise around quality and maintainability

To our great surprise, at the end of the team’s very first 6-month rotation, half of the members wanted to stay on indefinitely. They found the detective work not only interesting, but also varied in its breadth and depth, and fulfilling in a way that feature work just isn’t.

We debated whether to allow long-term membership on the team, because we did want to expose all of the team members to this kind of work. But ultimately, we decided that the experience these veterans would build would be more valuable to the effort — especially when combined with them sharing that experience through code reviews and other avenues.

Over the years, they got exposed to more and more issues reported by customers — which are the ones that matter most — and they developed an intuition about what bothers customers most and what kinds of mistakes cause those kinds of issues. They also developed a sense of which programming patterns cause the Diagnosticians themselves problems — both in terms of monitoring and observability, so they can easily diagnose issues, and in terms of refactoring code to fix problems — and of what characteristics problematic components have in common.

That’s the kind of insight from which arises the most valuable part of the return on investment: preventing painful tech debt and convoluted bugs from ever getting shipped. It more than makes up for the cost of the team.

A Tale of Two Interview Loops

It was mainly the worst of times, when a comedy of errors left me on a prolonged job search, alternating between the spring of hope and the winter of despair. But there is lemonade to be made from the almost three dozen lemons — interview loops — I went through, and this article is that lemonade. Because, while astronomical amounts of money go into recruiting, and even though we live in the age of wisdom, most of the interviewing process is sorely broken at the majority of companies; and so we also live in the age of foolishness.

From my experience, most Big Tech companies more or less copy each other, and because they are giant ships that are impossible to turn quickly, change comes slowly. Many startups and smaller companies then look to the giants for inspiration, out of some combination of familiarity (because they used to work in Big Tech) or imitation (“we’re also Big Tech!”) or just assuming that anything Big Tech does must be good, because one hundred billion dollars can’t be wrong.

And so we end up with the typical tech interview loop:

  1. Sourcing: usually from a referral, but could be a recruiter reaching out after a LinkedIn search or, rarely, actually responding to an unsolicited application
  2. Recruiter screen: to do a sanity check and vet logistics (salary range and other expectations, like remote work)
  3. A general aptitude screen: coding or system design or leadership chat or some combination, depending on the role
  4. A panel, or “virtual on-site”: somewhere around 5 interviews scheduled all at once — but maybe not happening on the same day — with a mix of roles, from engineers to product managers to engineering managers to designers to QA
  5. If that went well, a final interview, with a senior leader, and then an offer

Some variant of this describes almost all of the ones I’ve come across, but: the devil is absolutely in the details, and those details are the difference between a great process and a terrible one. So let’s look at two hypothetical loops at those extremes.

The Bad Loop

If you’re hiring, and elements of the bad loop sound like your company, please think about improving them. If you’re looking, and you run into one a little too close to this: I’m sorry.


The bad interview loop usually starts off badly. It’s obvious they don’t invest in candidate experience up front, and it rarely gets better further down the process.

Sourcing

There are a few ways to screw up sourcing, and the most common is just never getting back to candidates. No rejection on merit, no timeout rejection, no insulting GIF — just silence. This happens a lot.

Still better than nothing

Marginally better are companies that do reply, but after way too long. The worst of these for me was a startup that got back to my application — which was a referral — after 47 days. They set up a recruiter screen for the next day, which went well, and then an interview with the VPE for the following week. Unfortunately, that got rescheduled for the next week… and again, for the next week, and then again for the next week, and when it got to the 4th time, I decided to bow out of the process. After 84 days total.

The third common way to turn people away at this step is to send clearly automated emails or LinkedIn messages that are supposed to sound tailored to you. Like the recruiter is channeling a horoscope writer.

Hi Gabriel,

Your profile looks great. It looks like you are thriving at Hillrom, so I know it is a long shot that you’ll be interested, but hope springs eternal!

[Recruiting company] has been retained to recruit an Embedded Systems Manager at [hiring company]. This person will lead the embedded systems team comprised of electrical and software engineers while also serving as a software lead on connected care and digital health projects.

I’d welcome the opportunity to talk with you. Please let me know if you are interested to learn more. Thanks in advance and I look forward to your response.

An actual email I got. I’ve never worked with embedded systems.

In short, the sourcing is bad if they:

  1. Don’t value candidates, and
  2. Don’t value the company’s image

Recruiter Screen

This step is actually pretty hard to do badly, but it generally involves the recruiter being out of their depth and/or not caring about their craft. I’ve spoken to both (a) recruiters who had no idea there was a difference between Java and Javascript, and (b) ones who did, but treated our conversation like the most annoying part of their day — like their Zoom background might as well have a giant “I’d rather be doing literally anything else” banner. And of course, the worst is (a) and (b) combined:

Recruiter: yeah hey, I’m in a rush, but this won’t take long

Me: okay…

Them: so I saw your resume, I don’t have it in front of me right now, but can you just answer a few questions?

Me: sure

Them: ok so this is a management role, you have 5+ years of experience managing, recruiting, and mentoring a team of diverse engineers?

Me, realizing they have, in fact, not seen my resume: yeah

Them: do you have 8 years of Java?

Me: uh… [trying hard to make it past the syntax errors] … yeah

Them: do you thrive in a fast-paced environment?

Me: sure

Them: can you speak good English?

Me: I think you said the quiet part out loud.

General Aptitude Screen

After the boilerplate kind of checks, comes the first true test of the candidate’s mettle: do they actually have a notion of how to do this job, or are they a complete fraud? For engineers, this is almost always a somewhat basic (sadly, often leetcode-y) technical interview. For managers, it’s one of three things:

  1. A purely technical interview, because the thinking is managers should also know how to leetcode
  2. A leadership interview
  3. A bit of both

The bad versions of this are the ones that don’t test the aptitude required for the job. We’re hiring a staff engineer for their background in designing APIs and scalable architectures, but first: let’s make sure they can figure out the trick to finding the longest palindromic substring. Because that’s the thing they’ll constantly trip over in their day-to-day job: figuring out solutions to neat little puzzles that can be done in 20 minutes.
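For the record, that puzzle looks something like this with the standard expand-around-center trick — a neat O(n²) solution, and exactly the kind of thing that never comes up when you’re designing APIs:

```python
def longest_palindromic_substring(s):
    """Expand around every center; track the longest palindrome seen."""
    if not s:
        return ""
    best = s[0]
    for center in range(len(s)):
        # Odd-length palindromes center on a character, even-length on a gap.
        for lo, hi in ((center, center), (center, center + 1)):
            while lo >= 0 and hi < len(s) and s[lo] == s[hi]:
                lo -= 1
                hi += 1
            candidate = s[lo + 1:hi]
            if len(candidate) > len(best):
                best = candidate
    return best
```

Twenty minutes if you’ve seen the trick before; a sweaty hour if you haven’t since college.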

And that’s really the crux of the problem, because this kind of exercise filters out people with a few traits that, in my experience, a lot of good engineers have:

  1. Not performing well while being watched
  2. Not having done algorithmic puzzles in years
  3. Not particularly caring to spend dozens of hours preparing for interviews
  4. Not having slept well the night before the interview

The arguments for leetcode interviews don’t hold water either:

  1. It’s fair: everyone from the new bootcamp grad to the Sr. Fellow gets the same questions. Is that fair though? It’s fair for the new grad, because they have no work experience and it’s all they can be tested on. And early in their career, they should still remember this stuff. But it’s not fair to someone with a decade of experience that (correctly) hasn’t thought about sorting anything beyond typing my_list.sort() in almost exactly as long.
  2. It shows motivation: the thinking is that if you’re willing to put in the hours to dust off long-forgotten skills, that means you’re a go-getter. Either that, or you have a ton of free time. Or are willing to work hard for 3 months so you can rest and vest.
  3. It’s a great weeding-out tool: “here at Hooli, we get far too many yahoos applying and we need a simple and consistent way to halve the middle of the funnel.” This I can’t argue with. If you have too many applicants and just need an efficient way to politely send scores of them away, while making it look like meritocracy, then this loop design does in fact meet those requirements.


The Panel

The next, and sometimes final, step of the process tends to be an interview panel. It’s kind of like a death panel, except it’s very much real, and only decides your professional and financial future. It’s a holdover from the days of in-person interviews, which you can tell because it’s often called a “virtual on-site”.

Back in the before times, companies used to fly people over to their headquarters, even across the country, for a gauntlet of interviews held over a full day. You’d talk to 3-7 people across ~4 interviews — because sometimes they’d double up, or you might even have several people in one interview. Like much in-office culture, most companies have lifted and shifted this process onto the Internet, and now offer the same experience, but worse because it’s over Zoom. Also better though, because you don’t have to fly to every interview.

The idea of having several people interview a candidate is obviously fine. The problems here are more around how they’re structured, and how much freedom the candidate has to tailor this step so they can make a good impression. Because to a lot of people, if not most, this step is very stressful. Some would rather do it in one day and get it over with (even if they’d perform better otherwise) and some would rather split it into two or three days. Companies that don’t care also don’t care about candidates’ preferences here.

The more general problem though, that many companies are guilty of, is not adapting this step to the modern world. It’s not great for the company to sign up for more than ten people-hours of time (adding in the prep, write-up, and debrief time) just based on the one aptitude screen. Especially if it’s a low-signal one, like leetcode. Data is king here, but I would bet the panel pass rate at most quality companies is not that high, and so that adds up to a lot of waste all around: effort and stress for the candidate, plus time and energy for the interviewers.

The last thing to mention here is the companies not investing in quality tools, especially for the technical interviews: having the candidate write code in something like Google Docs instead of a purpose-built tool, like CoderPad or HackerRank. Same goes for whiteboarding in system design exercises. And for using Microsoft Teams.

Final Interview

This step, mirroring the recruiter screen, tends to be more of a formality or sales tool. By now, the candidate has passed the panel gauntlet, and this final hurdle gives good candidates the opportunity to meet with a senior leader in the organization, while also giving that leader the opportunity to meet with promising hires before the offer goes out.

Where this is done poorly, it’s either not done at all, or it’s a very rigorous interview. Not doing one at all can be fine, but — especially in a more talent-driven job market — having this step can sway a candidate with multiple options. It’s also great, from an organizational health perspective, for the most senior leader to either be part of the interview loop, or schedule a 1:1 shortly after the candidate starts; doing both is even better.

This interview should be real, of course, with questions that give the interviewer insight into the candidate; but if it’s much more than a formality, it can both put off the candidate (who has already been through the gauntlet) and cause an internal problem if the candidate fails a hard interview after having already passed the panel. Can the more senior leader just override the panel? Because that doesn’t sound like a healthy org.

Surprise Step!

Sometimes, panels are split. Lots of soft thumbs up and down. And the job req has been open for months, so they don’t want to pass on a potentially good candidate. And so they decide to tack on another interview — or maybe repurpose the Final Interview above — in order to break the tie. This is a smell of a poor interview process. One that is not designed to produce strong yeses and nos. Or one that is, and has failed. Most well-run processes will explicitly forbid this, and for good reason: an 8th person won’t add that much value to the chorus.

The Good Loop

Now, for the fun one! Precious few of the interview loops I’ve experienced have been good, but the ones that were provided so much signal that it would’ve been worth taking a lower offer. A lot of the anxiety of committing to a role comes from the company’s culture, and a great culture usually comes out through the interview loop.

Sourcing

Great sourcing is genuine. Everything I said earlier about valuing the candidate and the company’s image still applies, but great places to work do it with care. The messages are personal, not form- or AI-generated. They show that the sourcer has actually read and understood your resume, as well as the job description. And they make a good case for why they think you’d be a good fit and should choose to go through their particular interview gauntlet.

But good sourcing is hard. To do this well, it obviously takes a lot more time, and therefore people, and therefore money. So it’s not a bad signal if a company is just okay at this. BUT, if the sourcing really is great, that’s great signal: it shows the company has committed actual dollars to candidate experience, because they care about attracting the best employees into a great environment.

Recruiter Screen

A great recruiter screen gives the candidate a positive glimpse into what the rest of the process will be like. The recruiter is prepared, understands what the candidate’s resume or LinkedIn says, and has good follow-up questions. The chat is friendly but organized and the run-down is provided up-front: “first I’ll give you an overview of the company, talk a little more about the role and give you a chance to ask questions, and then talk about your experience and how it might be a good fit.”

It’s mostly pro forma, but value comes out of it: the recruiter gains more clarity than the resume provides, makes sure the logistics are lined up, and the candidate asks questions about the team/org, responsibilities, tech, etc that aren’t in the job description. Even if it turns out that something doesn’t line up, like salary expectations, everyone leaves feeling like it was a productive call.

General Aptitude Screen

An effective aptitude screen tests just that: how well this person is suited for this role. This means that the defining characteristics of the role have been thought through and distilled into discoverable skills; and then questions have been created which are good at discovering those skills. But an important piece of the puzzle is where the confidence comes from, that those questions illuminate the correct things. In most places, it’s just someone’s gut feeling.

We make social networks here at Hooli, so we need to make sure people understand Dijkstra’s algorithm. We’ll ask a question where the obvious solution is the algorithm, and pass people that either use it in the solution; or they mention it, have a good reason for not using it, and still come up with a good solution.

Hypothetical, but I’m sure someone somewhere has written interview questions like this.

Some problems with the above:

  1. Do successful employees really need to understand Dijkstra? Is there a correlation between that and job performance? Is it important for this particular role and experience level?
  2. To most candidates that are familiar with Dijkstra, is it obvious that it’s the right solution here? Does the wording and the problem statement lead them there? Or is it just obvious to the people that came up with the question?
  3. Can the question actually be favorably solved in another way? Are we too narrowly targeting a particular way of thinking?
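For reference — since the hypothetical hinges on it — Dijkstra’s algorithm is just a priority-queue walk over a weighted graph. A minimal sketch, with an adjacency-list shape of my own choosing:

```python
import heapq

def dijkstra(graph: dict, start) -> dict:
    """Shortest distances from start. graph maps node -> [(neighbor, weight), ...]."""
    dist = {start: 0}
    heap = [(0, start)]  # (distance-so-far, node)
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry; a shorter path was already found
        for neighbor, weight in graph.get(node, []):
            nd = d + weight
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return dist
```

Twenty-odd lines — which is exactly why it makes a neat puzzle, and why whether someone reaches for it under pressure says little about how they’d do the actual job.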

Good questions have been purposely designed all the way through. They have been validated by data as good predictors of employee performance. The questions themselves have been vetted by trying them out on actual people, getting their feedback, and looking at false positives and false negatives. Ideally, those people aren’t just employees — though this, of course, is difficult.

The above applies to questions asked as part of the panel below. The essence here is that a great aptitude test is effective at predicting job performance.


I’m not sure what percentage of rejections in the interview process are false negatives — meaning a perfectly good employee has been turned away — but I would bet a small amount of money that it’s significant. As in, more than a quarter. And I’d also bet that a lot of that has to do with the candidate being nervous. And I’m not alone.

Chris Parnin of NCSU has a great paper in ACM on the matter, pointing out two things we all already know, intuitively:

  1. An estimated 40% of adults suffer from performance anxiety, and
  2. The typical software interview “has an uncanny resemblance to the Trier social stress test”

For those of you who haven’t clicked on that link, that stress test was created to induce changes in physiologically measurable stress markers, like heart rate and cortisol levels. The test makes those markers increase, in most people, anywhere from 30 to 700%. And how does it do that? It’s a simple, three-part process:

  1. You have 5 minutes to prepare a 5 minute presentation for a job interview
  2. Ok, it’s time for the presentation, but: surprise! Your presentation materials have been taken away.
    • Also, the judges will be silent and maintain only neutral expressions the whole time
  3. Count down from 1022, by 13s. If you make a mistake, start again from 1022.

“Uncanny resemblance” indeed, right? It’s basically the same thing as a software interview, which gives the candidate a complex problem — much more so than counting down by 13s — except they don’t even have time to prepare! And the typical interviewer is cold and silent. (As an aside, please don’t be that interviewer. Read my Art of Interviewing, if you are.)

What’s more, he says that “scientific evidence finds that women experience disproportionately more negative effects from test and performance”, and while I’m not saying that this is the only reason tech might be so heavily male-dominated, I wouldn’t be at all surprised if it turns out to be a big factor.

So what’s a company to do? The above paper has one suggestion — and I think it’s a special case of a more general solution — which is the “private interview”. That is, rather than give the candidate a problem and watch them solve it, give them the problem and go away, letting them solve it in private. When they ran this experiment, the results were undeniable:

In the public setting 61.5% of the participants failed the task compared with 36.3% in the private setting. […]

Interestingly, a post-hoc analysis revealed that in the public setting, no women (𝑛 = 5) successfully solved their task; however, in the private setting, all women (𝑛 = 4) successfully solved their task—even providing the most optimal solution in two cases. This may suggest that asking an interview candidate to publicly solve a problem and think-aloud can degrade problem-solving ability.

from “Does Stress Impact Technical Interview Performance?”, in the Association for Computing Machinery

The private interview is a great idea, and an improvement on the “take home” challenge, which suffers from the fact that some people will spend an hour on it, and others will spend ten, with two friends helping. But, there are problems with it. One that the paper mentions is that some people were frustrated by not having access to the proctor, to ask clarifying questions and be kept from drifting. So it’s not a one-size-fits-all solution.

But what’s the philosophy behind the private interview? That most people get too stressed out by an interview that doubles as a stress test, and doing it in private helps. Most people, though: not all people. And that, I think, is the key insight: what the paper calls “provide accessible alternatives”.

Great companies will tailor their process to help the candidate succeed. They will allow the panel to be split up over as many days as there are interviews. They will allow the candidate to pick the programming language. They will have a “live” option and a “take home” option — I have yet to see the “private interview” option, though it’s by far my favorite. They will train their interviewers to be kind, to be expressive, and to focus on the person, not the question.

In short, as with all good science, they will make sure they have adjusted the process to eliminate as many confounding factors as they can, in order to minimize false negatives and false positives, and give everyone a fair chance.

Final Interview

There’s not much left to say about the final interview other than that a good leader will leverage it to motivate a promising candidate. So instead, I’ll summarize the two loops:

  • bad: disinterested people going through the motions of a process that will usually produce some result
  • good: thoughtful, kind, well-trained people assessing how well this candidate might fit into this role

The Empathetic Metamorph

There’s an old episode of Star Trek: The Next Generation about Picard falling in tragic love with an alien woman. Okay, there’s a bunch of those, but there’s one in particular in which the alien has the ability to change her personality to perfectly match her mate, and become the yin to his yang, being precisely what he needs in a mate. I never liked that episode, or others of its ilk, because it was slow and mushy, and I watch sci-fi more for the “sci” than the “fi”. Plus, I don’t have the attention span to deal with an episode full of dull romantic scenes.

Kamala, the empathic metamorph, with Data

I don’t think I’ve seen that episode since I was a teenager, but the concept of the empathic metamorph, which is what Kamala was, really stuck with me. Having empathy is naturally important to the success of most, if not all relationships, but this concept goes a step farther because it’s not just understanding the other person’s point of view, but also changing how you interact with them, in an intentional way. And not superficially, saying the right things in a difficult conversation or knowing which buttons to push to make them happy, but understanding them deeply enough to know what they need in the short term and the long term, and understanding one’s self deeply enough to know how, with the skills available, to help them achieve their goals.

It’s clear that this would be an incredibly complex process, and it takes a lot of time and investment, but I think that the most motivational leaders — the ones that elicit trust, loyalty, and high performance — are essentially empathetic metamorphs. Not in the Star Trek sense (and not empathic, like Betazoids are), but in the sense that they can change their behavior to bring out the best in a given situation, with the given audience. Sociopaths probably can too, though of course they would do it for their own gain, rather than the other person’s.

The Dimensional Space

The word “metamorph” literally means “shapechanger”: in Greek, “meta” is “change” and “morph” is “shape”. I don’t think an official antonym exists, but let’s call it a “statimorph” — those who can’t change shape. If we look at leaders along the two binary dimensions of having empathy and being able to change themselves in response to it, we get a simple K-map:

              Statimorph      Metamorph
Apathetic     Bad leaders     Sociopaths
Empathetic    Good leaders    Great leaders

We can try to understand the empathetic metamorph better by looking at the other types of leader.

The apathetic statimorph is the manager that doesn’t care. They’re not a people leader: they’re a true manager. It’s probably not that they don’t want to be better, but for whatever reason, they just don’t have the ability to understand people. They probably rely a lot on process and rules and have a “no exceptions” policy, because rules are rules and you have to be fair. And those rules are probably conceived, written, and implemented by the apathetic statimorph without much input from anyone else.

The empathetic statimorph is, I think, most leaders. They care about their employees and want to see them succeed. They listen to suggestions and complaints and try to improve the environment and processes and mentor their employees so that they grow in their careers. But they themselves cannot change. They are the same boss to the entry-level engineer as to the principal engineer, to the introvert and the extrovert, to the Builder and the Hacker. They understand how each employee feels and what motivates them, but they aren’t able to be warmer and more encouraging to the entry-level engineer, to afford the principal more intellectual leeway, to be an introvert with the introvert and an extrovert with the extrovert, and to talk about vision with the hacker, but goals with the builder.

The apathetic metamorph, I don’t think I’ve seen in real life. They would be like The Talented Mr. Ripley though. Or Anna Delvey. Able to show whatever version of themselves other people need to buy in order for the metamorph to get ahead. I think people like this are rare, both because most people are good at heart, but also because apathetic metamorphs can probably fake being empathetic until they actually become empathetic, and simply go down this path because they realize selfish people tend to not be that successful.

So finally, the empathetic metamorph is what all the others aren’t. They’re the boss that learns chess in order to better bond with a chess-obsessed employee. The one that, before finalizing a team decision, makes sure to ask the new engineer about it in private, realizing that they might not have been comfortable enough yet to have spoken up during the team meeting. The one that uses metaphors from The Office even though they don’t really like The Office, because they know most of the team loves The Office. The one that knows which employee wants to hear the bad news coming a mile away, and which one would get paralyzed with anxiety about it — and so waits until the last minute to tell them.

I’m not sure exactly how to become one, though I do think it can be learned — probably with lots and lots and lots of practice. Years of deliberate practice in expanding one’s knowledge and skills in order to be able to meet everyone around them on their own terms. Whether that’s worth it probably depends, like most things, on the cost-benefit analysis: how hard it is for the leader to do, vs how important it is for them to be that good.

My Slack Tips, 2023 Edition

(This is a direct copy of the 2021 Edition, with the salient changes highlighted in green.)

You know those recipe websites filled mostly with the backstory of how they discovered this amazing cookie recipe that changed their lives, and what joy it brings their three free-spirited, yet precocious children every time they make the cookies? This is kind of like that, so scroll down to “The How” if you don’t care about “The Why”.

The Why

I love Cal Newport’s ideas about deep work. He writes a lot about how to protect your time so that you can focus for stretches measured in hours instead of minutes, with no distraction. For Joel Spolsky, the same concept was “getting in the zone”:

We all know that knowledge workers work best by getting into “flow”, also known as being “in the zone”, where they are fully concentrated on their work and fully tuned out of their environment. They lose track of time and produce great stuff through absolute concentration. This is when they get all of their productive work done. Writers, programmers, scientists, and even basketball players will tell you about being in the zone.

from “Where do These People Get Their (Unoriginal) Ideas?”

Joel followed that up by explaining how task switching is bad, Jeff Atwood echoed him, Rands showed us how he kept distractions at bay and, years later, after Slack arrived on the scene making this problem worse, how he used Slack. It being such an important topic — especially with the boom of remote work making Slack even more common — this is my version of that Rands article, with a few more years of Slacking under the belt.

For programmers, standard Slack policy should be pretty simple, flowing out of the principle of guarding “the zone” fiercely:

  1. Turn off all notifications, except for when you’re being specifically @ mentioned
  2. Specific mentions aside, only read Slack during your downtime — between meetings or while your VM is rebooting or whatever.
  3. Don’t feel like you have to read everything in all the channels. Start with your favorite channels, and go as far as your downtime will let you.

For engineering leaders though, especially manager- and architect-types, it’s not so simple. The broader your scope is in the organization, the more pertinent info you will learn via Slack discussions, and the more value there is in being in a lot of different channels, so you can glean stuff like team Aardvark being in trouble because they’ve been complaining about their infrastructure daily for a week; or you can quickly unblock Bob who can’t remember where the functional spec is for the login screen, or focus a search API design discussion between Carol and Dave, and then build rapport with the other foodies in #random-food.

At the bottom of it all, underneath all of the turtles, programming is about communication between people. The source code isn’t for the processor, but for other programmers. And every successful team is a well-oiled communication machine. The best teams are the ones that have shorthand and can finish each other’s sentences. You just know that a team like that — and full of smart people — is going to be off the charts with productivity.

So as an engineering leader, possibly the single most important job we have is to do everything in our power to facilitate effective communication. Which these days, includes being a Slack Jedi. But, we also need to get some actual tasks done, so we can’t let it take over the whole day (at least not every day), which means there’s a balance to be found. Below, is how I found that balance.

The How

I’ll caveat these 11 weird tricks with the fact that this is what works for me, in my situations so far, and while your mileage will probably vary, at least some of this might be helpful. So:

  1. Tweak the in-app Slack notifications. The default notification settings inside Slack are mostly fine: it notifies you for DMs, mentions, and keywords. But I do make a few small changes:
    • I usually add a few keywords, like my name, team names, or project codenames.
    • I go into the notification settings of any “system down” or other emergency-type channels and set them to notify me of all messages in those channels.
    • I go into any channel in which I don’t want to miss conversations buried in threads, attached to messages I’ve already scrolled by, and check the “Get notified about all replies and show them in your Threads view” setting
      • This is kind of buried in the channel details, in the notification settings, in the “more notification options”. For channels you really care about, it’s the only way to deal with threads
  2. Turn off all notification types from the OS, except for the badge icon. Not inside the app, but in the Notification settings of macOS/iOS/Windows/whatever, I have the Slack app set to not be allowed to make a sound, to never pop anything up, to never bounce anything — to never do anything at all, for any reason
    • Except, on Apple OSes, to make the badge counter appear over the app icon
    • In the macOS desktop Slack app specifically, in the “Notifications” section:
      • I uncheck the “Bounce Slack’s icon…” checkbox
      • If I’m in multiple workspaces, I also uncheck the “Show a badge…to indicate activity” checkbox, which keeps the app icon from getting a red badge every time someone says anything in any other workspace.
    • That way, I can get in the zone when I need to, and won’t get distracted by Slack, but can still glance at it, once in a while, at opportune moments to make sure it doesn’t have 57 urgent messages for me.
  3. Low barrier to enter channels. I’ll enter most channels as I come across them, because in the bad case, I don’t read them and in the worst case, I leave.
    • Slackbot helpfully and periodically reminds you about channels you might wanna leave, also.
  4. Low barrier to create channels. Whenever a topic starts getting enough airtime, or when there’s any other reason for it to have a dedicated channel, I create that channel. The more the merrier.
    • You can go overboard, but as long as the reason for the channel is clear, go ahead and create it. You can always archive it later if it was a mistake.
    • “But I’m in too many channels!”
      • Creating more channels is unlikely to result in more messages. The only difference will be that the same number of messages will be more organized! So more likely, you’re in too few channels now, and conversations are a hodge-podge, instead of narrowly topical
      • Channels are free and an organized Slack is a happy Slack, because it is way easier to find that important conversation about search API design if you know it happened in #dev-search.
    • If you actively want to foster conversation in a channel, consider making it private. People feel safer talking in them, and the Slack stats I’ve seen bear out the fact that the large majority of messages happen in private channels or DMs.
  5. Organize the channels in groups and make them disappear. You can create channel groups in the sidebar, order the groups however you want and for each group, show all the channels or just the unread ones. You can also order the channels within the group in different ways.
    • I order them by priority within each group and show just the unread ones.
    • My groups are about related topics, ordered by how important they are to my role: my team’s channels and DMs in the first group, various dev channels in the second, then management channels, field support, water cooler, etc
  6. Embrace speed reading. The human brain is great at patterns, and you can quickly pick up the importance of a conversation from some key signals like what channel it’s in, who the participants are, how long the conversation goes for, key words being repeated, emojis being used, etc.
    • In some channels, like ones about nascent projects or downtimes, I read every word carefully — but that’s very much the minority.
    • For the majority, I scroll through at a good speed, slowing down only when my spidey sense tells me I’m speeding through something worthwhile. It works well — false negatives are rare.
  7. Embrace emojis. Not only do they make conversations fun, but they also convey tone (which is super important) and, as reactions to messages, they serve as pithy replies that add value to the conversation without also adding volume to it.
    • e.g., using a check mark to indicate that you read something without indicating judgement, or a dart that their message was right on the money, or an up arrow as an upvote — they’re small, very helpful gestures that significantly improve the conversation.
  8. Use reminders. When I do go into Slack, I like to at least clear all my mentions that turn channels red. But sometimes I can’t actually service a message. Maybe that’s because it’s asking a question I have to research, and I’m in the middle of something. Or I want to make sure that I follow up on a conversation after my current meeting. Or someone wrote a 7 paragraph message and I’m between meetings and don’t have time to grok it. I use reminders for all of that.
    • Most of my reminders are set on existing messages that I need to do something about
    • Some of them, I create with the /remind command
    • All of them, I snooze extensively until the time is right to work on them
    • It’s important to note that this can go overboard and you can end up with 37 reminders you snooze every day because it’s turned into your to-do list. And Slack reminders are a terrible to-do list.
      • Only use reminders for things you need to do in the app
      • For general to-do items, port them to your to-do list app
  9. Don’t use threads. It’s quite possibly the worst feature in Slack. Some people love them, but don’t be those people. Terrible UX aside, all threads are is a way to hide conversations for no good reason. If you want to read every word in a channel, and you’ve read them all, but then Ethan decides to start a thread off of someone’s message 50 messages up, guess what? You’ll never even know. Unless you have that setting from item 1.
    • Some people use threads because they came too late upon a conversation that’s scrolled away, and want to continue it; there is a better way: share the message into the same channel, and that posts it at the bottom, with your comment attached.
    • “But I don’t want to bother everyone in the channel with my thread”. The conversation either belongs in the channel, or it doesn’t. If it doesn’t, create another channel or group DM. But there’s no reason to hide it.
    • If you just can’t get away from threads because of <reasons>, every so often, please do a courtesy check of the “also send to #<channel>” box, so that people realize the conversation is still happening.
  10. Nudge people toward the proper channels. Most people are trying to get their job done and aren’t so worried about keeping Slack organized, so when they start a conversation about lunch in #dev-breakfast, it’s probably because they didn’t even know #dev-lunch existed, or because they had been talking about brunch and it morphed into lunch.
    • The point being that, as is generally the case, people don’t intend to break the rules and usually just make mistakes.
    • I, as Slack Jedi, have to minimize the chance of mistakes by establishing good channel-naming conventions and keeping topics narrow, but when mistakes do happen, it’s not the end of the world, and an innocent mention that the lunch channel exists is normally the only thing that needs to be done.
    • Be super nice and soft about it, because creating anxiety about the proper channels is kind of antithetical, since it then stifles communication.
  11. Try to mark everything read by the end of the day. I don’t always clear my email inbox every day (or even every week), but I almost always clear my channels, to give me peace of mind.
    • On days filled with meetings, it’s harder, but I keep up with the @ mentions and high priority channels as I can, and then quickly scroll through the less important ones, just to make sure I’m not missing anything important.
    • Yes, the Slack FOMO is strong with me, but the clearing doesn’t take long. I’m in dozens if not hundreds of channels, in a Slack with hundreds of people and — active conversations aside — I can clear a day’s worth of chatter in about a half hour, knowing then that I’m on top of things and won’t be surprised the next morning by Fiona with a “so what are we gonna do about that nasty database locking issue?”

So that’s how I stopped worrying and learned to love the Slack: by putting it in its corner and giving it attention on my terms — no more and no less attention than is beneficial.

The Art of Interviewing

The job interview process is a high-stakes dance that’s notoriously difficult and full of missteps — on both sides. (At least, in software engineering it is. If you care about another industry, <Jedi hand wave/> this is not the article you’re looking for.) Avoiding the missteps is an art, just like it is in dancing; we get better with practice, just like dancing; it’s a lot of fun when it goes well, just like dancing; and leading with dexterity is paramount. This article focuses on that last bit: not on what questions to ask or how to structure the time, but rather on artfully leading an interview. First, however — because it’s a crucial part of the interview context — we should start with how the job candidate views the process. Just like dancing.

Because while for Larry the Interviewer, it’s just a small fraction of his day — maybe an annoying one that pulls him from the dynamic programming problem he was definitely not working on — for Jane the candidate, it’s one of the most important events of her life. The median job tenure is about 4 years, so people will only have somewhere around 10 jobs their whole professional life. And each of those jobs is going to significantly affect Jane’s life both during, and after, it. The CV builds on the shoulders of the previous job, yes, but work is also where we spend the bulk of our weekday, make friends, and develop a large part of our identity.

So what does Jane go into this high-stakes dance equipped with? If the company is large, maybe she can get a sense of the general culture from Glassdoor, or news articles, or social media. Otherwise, maybe a sense of how it wants to be seen, from its marketing. But even then: what will her daily life be like there? Would she like her new boss? Her teammates? The bureaucracy? The tasks she’ll be working on over the next year? Four years? Unless she knows someone on the inside, these questions mostly won’t get answered until after the start date. Sure, she’ll get to peek in now and then through cracks in the process and through the five minutes of each interview in which she can ask stuff, but: largely unanswered. Which means it all adds up to a big gamble.

The gamble is better for more mercurial and adaptable personalities than those more averse to change, but your resume can only tolerate a couple of quick stints before it starts getting tossed aside. So it’s a big gamble for everyone. Or rather, I should say “for every job searcher”, because well… it’s a small gamble for Larry the Interviewer. Worst case for him? It isn’t even giving his “thumbs up” to a terrible employee — because someone truly terrible will get fired before too long. No, the worst case is that he gives his thumbs up to a really annoying Taylor that ends up on his team and really annoys him for the next 4 years. And also produces mediocre work that Larry then has to constantly deal with. But Taylor isn’t annoying enough and the work just isn’t bad enough to get fired or even be put on a PIP. Taylor kind of coasts just baaaarely on the good side of the policy. And in doing so, makes Larry’s work life feel like a chirping smoke alarm that he can never find. That is the worst case for Larry.

The best case? He ends up with a really awesome coworker. Which, depending on the existing coworkers, may or may not matter a lot. But in any case except for the narrow and unlikely one of Taylor, the stakes for Larry are much, much, much lower than for Jane. And yet, this is who decides Jane’s fate.

The irony is that all of us will — at different times — be the Jane, and most of us will also be the Larry at other times. And when we are the Larry, interviewing a job candidate, do we act as the interviewer we’d like to have interview us, when we’re searching for a job? When I grudgingly leave the house, I usually drive, and sometimes I walk, and sometimes I bike. And what I find fascinating is that when I’m a driver, I get mad at other drivers, when I’m a cyclist I get mad at drivers, and when I’m a pedestrian, I get mad at everyone. Even though I know exactly what challenges they’re all going through.

Through this lens, how should Larry — all of us, when we are the Larry — conduct his interviews? How can we be the Larry we want to see in the world?

Larry does have a concrete output from the interview; he needs to answer one simple question: is Jane likely to be successful in this role? “But wait,” you ask, “can Jane’s ideal Larry even answer that question? Can he both be empathetic and figure out if she should be hired?” To which I say: “not only can he, but that’s the best way to go about it!” Let me illustrate by looking at some questions Larry should not be trying to answer:

  1. Did Jane answer all my questions correctly?
  2. Was Jane quick on her feet and graceful under pressure?
  3. Did Jane pick up on my algorithmic hints?
  4. Was her solution complete?
  5. Do I like Jane, as a human being? Could we be friends?

Those are certainly signals one can get in an interview, but are they relevant to her success in the role? Does her answering all of Larry’s questions correctly mean there’s a good chance she’ll get a high performance rating next year? It depends on the questions, right? Well then, what about the inverse: does getting answers wrong correlate with bad performance reviews? Or does it correlate with nervousness? Or miscommunication? Or ignorance of a concept that Jane could learn in the first week on the job?

The hard truth is that while most interviews test something, that something is more likely to be “ability to pass our interview” than “ability to do the job well”. Which — if the abstract interview is indeed a conscious choice, and more often it is not — means the argument for it goes like this:

We want people that will do what they need to in order to succeed; if they study hard for our arbitrary and irrelevant interview process and pass it, it means they (a) really want to work here, and (b) can do the same for a real-world project.

Even for well-known companies that can attract enough people to run their gauntlet, this is of dubious value, because they’re filtering for a specific criterion: studiousness; and simultaneously filtering out many who are at odds with it, starting with people who don’t have the luxury of time to study for their interview.

Instead, I think interview questions should be relevant for the job and tailored to illuminate whether Jane will be successful in it. Does it matter if Jane came up with a coding solution in 10 minutes versus 20? How much time would she have in the course of her job for that problem? Does the job require her to be well-versed in algorithms? Even if Stack Overflow didn’t exist. Is it relevant that she didn’t finish the last part, even though she explained how it would work? Are the people we like more productive than the ones we don’t? Do these rhetorical questions make my point?

Which is that the interview is not a test, and it’s certainly not an interrogation. It is, above all else, a conversation, ideally between equals, in which both parties are trying to figure out if an employment arrangement would be a win/win scenario. And as the interviewer, by leading it with empathy, you accomplish two things:

  1. Have a much better chance at arriving at the truth
  2. Leave the candidate with a great impression

So how should Larry approach the interview? I like to think about role models, because we humans are great at mimicking, and in this case I think the right archetype is a podcaster. They have a guest on their show and they generally try to make the experience pleasant, to keep the content interesting, and to really get at what makes their guest tick in their particular way. If they have an actor on, they try to figure out what makes them a great actor; if it’s a business tycoon, what makes them great at business; if it’s a scientist, what makes them great at science. And similarly, Larry’s job is to figure out what makes his guest great at programming.

In order to successfully do that, the guest has to first and foremost be comfortable. People won’t let you in unless they’re comfortable. And if they won’t let you in, it is a giant barrier to understanding them. So Larry should spend some time in the beginning of the interview breaking that ice. He should make small talk — genuine small talk, not awkward conversation about the weather. Tell Jane a little about himself: what he does at the company, what he’s passionate about in his work, and what his background is. Things that will give Jane an understanding of him and help them find common ground. It’s not only worth the investment, but it’s what makes the rest of the interview worthwhile.

Once a good rapport has been established — or Larry’s given up on such happening — only then should he go into the topics he needs to cover. And he should always remember to treat Jane as he would a valued guest in his home. To be kind, to be forgiving, and to give her the benefit of the doubt. If she says something that sounds incorrect, he should make sure it’s not due to a simple mistake or misunderstanding. He should ask polite questions that help him understand how much Jane knows and understands about graph traversal or whatever — not merely that she does (or doesn’t) know that those words are the answer to his question. Because in the end, Larry’s job isn’t that of a proctor, to simply grade Jane on her performance as if this were an audition or a midterm exam. Larry’s job is actually much more difficult: it’s to create, in 45 to 60 minutes, a sorely incomplete mental model of Jane from a certain angle — be it programming ability or cultural fit or leadership style or what have you — and to then decide if that mental model is a good match to fill the open role.

Again: this is hard and it takes a lot of practice to do well, just like dancing. But taking shortcuts will just lead to lots of false positives or negatives. Tech companies love to industrialize the process and create complex questions with “objective” answers and rubrics that tally up those answers into a simple pass/fail exam and then also point to that process when talking about fairness. But in truth, there’s nothing fair about standardized tests — which is why higher education is finally moving away from the SATs.

They add a veneer of objectivity on top of an industrialized process that answers not so much what a person understands, but what they’ve been exposed to and what they can quickly recall in a stressful situation. It’s like being tested on “ticking time bomb defusal” for a job making watches. And the opportunities for subjectivity still abound: from how comfortable or nervous the candidate is, to how many hints they’re given, to how much sleep they’ve gotten the night before, to whether they’ve seen a similar question recently, to the leniency of the proctor.

What sets Oxford and Cambridge apart from most other universities is that they use the Tutorial System, in which students learn the subject matter in whatever way makes sense, meet in very small groups with a tutor and have a discussion — in which it will become readily apparent how well they understand the subject.

It has been argued that the tutorial system has great value as a pedagogic model because it creates learning and assessment opportunities which are highly authentic and difficult to fake.

from Tutorial System at Wikipedia

Software interviews of a similar ilk have the same value.

But the one thing to remember is that regardless of the questions we ask when we’re being the Larry — no matter how good or fair or comprehensive they are — the questions are just a means to an end, and not the end. They are a conversation starter, and it’s up to us to guide that conversation in such a way that will allow us to understand our guest to such a degree that we can answer the only question that matters: “is this person likely to be successful in this role?”

Lessons Learned on the Job Search

I’m starting what I’m sure will be a great new job at an awesome company soon, but before I got that offer (and four other great ones), I was on the interview circuit for a few months. When I started looking, I figured that in this boiling job market (of mid-to-late 2021) it would take around a month. For better or worse though, I hit a string of bad luck on what turned out to be an already overly-optimistic timeline: two hiring freezes (both at the offer stage of the interview loops) and a conflict of interest (after an offer).

I say “for better”, because this bad luck did come with a silver lining: it prolonged the process enough that I ended up in a lot of interviews: 79, over 34 interview loops. Which, on the one hand, was exhausting; but on the other hand, it gave me huge exposure to the current landscape of interview processes for engineering leadership at tech companies. I’ll write another article on how I think these processes can be improved, but this one is about how to deal with them — as a job seeker. Specifically, as an engineering leader: a manager, or a staff-level (and higher) IC.

This experience was so radically eye-opening for me, because aside from a random interview now and then, I’d never been exposed to it on this level. And in the pre-pandemic world, this gauntlet would’ve been impossible, due to the travel alone. My previous jobs came to me — which is a completely different dynamic — so I’m writing this to share what I’ve learned, mainly for others that are looking for a change, and are as unfamiliar with the landscape as I was.

The main takeaway: interviewing is a skill unto itself. It’s largely unrelated to the day-to-day of the job, it matters a great deal, and it’s absolutely something that can be learned. It’s like dating that way: the first few dates are crucial, and making a good impression is paramount, but the impressions you get on those early dates contain not only a lot of false signals — like being nervous, or seeming interested in everything your date loves — but also superfluous signals that won’t matter a year into the relationship — like your favorite band.

And unfortunately, both dating and interviewing are optimized for charming (and good looking) people, who will always win out if everything else is equal. Fortunately, being a polished interviewee is a skill that I think most people can master with practice. Not unlike the premise of the movie Hitch. Because you could be the most amazing engineering leader in the world but, if you don’t interview well, no one will ever know that — aside from the people who’ve worked with you.

Before we get into the details, some more context: I interviewed with a wide range of companies, from Big Tech like Meta/Facebook and Amazon, to smaller ones like Etsy and Reddit, to startups at various stages. I applied for only remote roles, roughly evenly split between Engineering Manager (EM), Sr. EM, and Director — with a few senior IC roles mixed in — and all my interviews were remote, via Zoom or the like.

And I kept statistics.

But don’t worry, there’s a TL;DR section at the end that you can always skip to, if it gets too dry.

Anatomy of a Loop

Most companies have a 4±1 step process:

  1. Screening call with an internal recruiter
    • 30% of my interview loops skipped this step
  2. A first round, which is generally a role-fit discussion with the hiring manager (HM)
    • I consider this to be the first round of the loop
    • For me, 76% of the 25 of these I had were just a casual discussion with the HM
    • Of the remaining six:
      • four were a panel (two of which had technical components)
      • one was a coding challenge
      • one was a design challenge
  3. A second round, consisting of one or more people, sometimes as a panel (meaning multiple interviewers are in the video call), sometimes as a string of video calls with individual interviewers. But all pre-scheduled together.
    • Eight of the 13 I went through had a technical bent:
      • four included a technical chat
      • three design challenges
      • one coding challenge
    • Six of them were also panels:
      • one of the design challenges, and three technical chats
  4. A third round, which is usually the opposite of the second: if they did a panel first, then this is not a panel; and vice-versa. If they have a technical screen and the second round was technical, then this won’t be; and vice-versa.
    • 29% of my interview loops skipped this step
    • Three of my five were panels, and one of those was technical
    • The other two were informal chats with a CTO or VP
  5. Hopefully, an offer
    • Most of my interview loops skipped this step 🙂
    • All five of the offers I got had a fairly consistent base, plus 10-20% bonus, but big variation on equity — not just in terms of value, but also type: from no equity to private options to public RSUs. Equity ended up being the deciding factor for me.

For me, the average time from resume submission to some kind of decision was 32 days, but the max was 93 days, and 18% took over 45 days. Most of this was just waiting a long time for the initial response, and then for the next steps to be scheduled, each of which generally took a week or two.

The average time from resume submission to some kind of response was 17 days, but the max was 70.

What Worked at Each Stage


Resume Submission

As you can see above, this was the biggest rejection stage, by a Grand Canyon-sized margin. A full quarter of my resume submissions were rejected, and even more (39%) were ghosted — I never heard anything about them. I put a lot of work into my resume — and it always helps for it to be short (one page, if possible, because no one has time to screen them), to include the buzzwords you’re skilled with and interested in (to make it past recruiters), and to be visually attractive (you can find lots of great templates for Google Docs on Etsy, for under $5) — but that’s not what made the difference, as I found:

It basically all hinged on referrals, which increased my chances of getting to a recruiter or hiring manager from 5% to a whopping 63%! That’s a 12x improvement, which is impossible to get any other way. I’m sure it helped that my resume looked sharp, but I’m also sure that even with my previous, plain and long-ish resume, the referral success rate would still be multiples of the cold application.

So if you’re looking for a job: tell everyone you know in the industry, because hopefully they know of an opening or know someone who does. And even if they can’t vouch for you personally, if all they do is pass your resume along with a “this person might be good for this role”, that in itself does wonders. Also, if you find a job posting you really like, try to find a recruiter or someone in your network from that company. If you don’t know anyone directly, look at 2nd-degree connections on LinkedIn, and then ask your mutual connection to introduce you.

Ironically though, in spite of all of this, I ended up at one of the 3 companies in the bar graph above where I had no network connection. And for all three of those, I wrote a thoughtful message in the free-text field of the application. Now, I also did that for probably at least a dozen others, so not a great success rate there, but: if you’re applying cold, it definitely helps to write the recruiter a note about why they should choose your resume out of the pile. None of my other cold applications made it out alive.

The other big category is getting recruited. Most of the recruiters who reached out to me were from either Big Tech companies or startups I’d never heard of, and most of the messages were clearly automated. I entertained almost all of them, if for nothing other than the interview experience. And here, LinkedIn is key: use the appropriate phrasing and buzzwords, and highlight the experience relevant to the kind of job you want. Recruiters use various tools to search through the hundreds of thousands of potential candidates, and a little SEO goes a long way.

Recruiter Screen

At this stage I did pretty well: 3 rejections out of 18 conversations, for a pass rate of 83%.

What the recruiter wants is to make sure you’re not going to embarrass them. They’ve seen your resume and decided that, based on your history and so forth, you’d probably be a good candidate. So all you really have to do is sound competent, back up your resume with your voice, and fit within the logistical parameters for compensation, location, time zone, etc.

Hiring Manager Screen

Of the 19 HM screens I was in, my pass rate was 68%, and I have to say that most of the rejections at this stage were surprising to me. I thought almost all of them went well, and in fact I had at least two of the HMs tell me they’d like to move me forward, only to get a Dear John email a few days later from the recruiter. I imagine they talked to someone they liked more the next day, but one of the more frustrating parts of the process is that rejection feedback is exceedingly rare. The unfortunate truth is that the overwhelming majority of companies do not provide feedback, largely for legal reasons.

But like the recruiter, the HM largely wants to make sure the person behind the resume is a solid candidate, and that they seem personable and would fit on the team; at least part of this conversation is them selling you on the job. They ask more pointed questions than the recruiter, obviously; they have more details around the position and what they’re looking for, and a better sense of how you’d succeed in it. But largely, this stage shouldn’t be hard, and if you don’t pass, you probably wouldn’t have fit in on the team anyway — not that you’re not qualified, but team dynamic is a real thing.

Behavioral Interview

The vast majority of leadership interviews I was in were what has become the norm in the tech industry: storytelling. I can’t tell you how much PTSD I have over the phrase “tell me about a time when…”. The idea is that an experienced manager can (a) demonstrate that experience by recollecting tales from a storied career, and (b) knows to emphasize the parts that the interviewer wants them to emphasize. In the beginning of my job search, I would fail miserably at the second part, the mind-reading, because I didn’t realize there was a hidden agenda to the question.

So they would ask me to tell them about a time when I promoted someone, and I’d literally tell them about such a thing and move on. When in fact, they also wanted me to tell them about my philosophy (a.k.a. framework) around promotions, and maybe even around career development in general, and emphasize how that’s an important part of a manager’s job, etc, etc. And because of that, my pass rate in the second round (where behavioral interviews would often take place) fell to 54%, my lowest. And that’s after I cracked the code. It helped when interviewers would nudge me in a certain direction, but that didn’t happen often, and certainly not with Big Tech, where the rubric is king.

If you’re interested in seeing examples of this tactic, watch some videos on ExponentTV (and paying for Exponent is probably a great return on investment as well) to get a better sense of the kind of answers interviewers are looking for. And make a list of stories you can cycle through quickly, on the spot. The answers are supposed to be given in the STAR format (Situation, Task, Action, Result). There are various lists of questions out there, but here are some of the ones I actually got:

Tell me about a time when…

  1. you had to manage someone out
  2. you had to deal with a difficult employee
  3. you had a disagreement with a peer
  4. you promoted someone
  5. your project was late
  6. you failed
  7. you dealt with a DEI issue
  8. you motivated your team
  9. you affected change without using authority
  10. you were part of a large undertaking

One prevalent theme was essentially #6 above, and that was also something I wasn’t quite sure how to answer. It felt like the old joke about the interviewer who asks what the candidate’s greatest weakness is, and the candidate replies that he works too hard.

But the reason people ask this is twofold: first, because experience implies failure, since no one has a perfect batting average. And so the idea is that if you’re able to talk about a time you’ve failed, you’re more likely to be an experienced leader. Second, being able to talk about failure in a way that shows humility demonstrates some good leadership qualities, such as empathy, the ability to learn from mistakes, and the ability to turn things around.

And herein is why getting good at interviewing and storytelling is so important, because it’s not enough to merely have those qualities: you have to realize when you’re being subtly prompted to reveal them and to do so in a tactful way. But again: it’s definitely a skill that can be learned.

Technical Interview

Only 20% of my post-recruiter-screen interviews were rigorous technical ones, and I think about half of the loops had such a stage. So it’s definitely possible to get a job without running into one, but they’re very common in Big Tech.

Of my nine technical interviews, five were design questions — and one of those was a take-home exercise. Another two were presentations of a project I’d done in the past, and the last two were coding challenges.

One of the coding challenges consisted of two leetcode-type algorithmic/data structure questions. There’s nothing to be done about this except grinding on leetcode until the part of your brain that has lain dormant since your CS college courses springs back awake. Yes, it has nothing to do with anything in a real job, but such is life.

For the presentations, I gave essentially the same one both times: passed one, failed the other.

The design questions generally ask you to design some kind of system you’ve likely dealt with before: a search engine, a messaging service, a URL shortener. Here, it’s important to start at a high level — the main components and connections — make sure you’ve covered enough there (and take hints in the form of questions from the interviewer) and then go a level lower and talk about technologies, protocols, tradeoffs, and so on. Anything to make the interviewer feel comfortable that you have the technical chops to design things. Again, Exponent has some great videos to give you an idea of what a good one looks like.


TL;DR

Interviewing processes are very broken at most companies, and as a job seeker, there’s not much you can do about it except refuse to take part in broken processes — such as live coding challenges — if you have the luxury of doing so. For many people though, especially in the new remote world of SF/NY salaries, this would mean walking away from a significant pay raise.

Otherwise, the things I wish I had known at the onset:

  1. It will likely be a multi-month process, due to the speed at which most companies operate; I would prepare for roughly 3 months
  2. The best thing to get your resume noticed is to make use of your network
  3. When you do have to cold apply, writing a paragraph to the recruiter is worth the investment
  4. A polished, short, beautiful resume is worth the investment
  5. Optimizing your LinkedIn profile for recruiter searches is worth the investment
  6. Feedback, about why you were rejected, is an endangered bald eagle: rare to come across and wondrous to behold
  7. Practicing storytelling for the behavioral interview, and looking for the question behind the question, are both crucial
  8. Practicing leetcode is needed for many interviews at Big Tech companies
  9. Good PowerPoint skillz might come in handy
  10. There are lots of mock interviews available to watch — both behavioral and design — and they help a lot

All of this can be boiled down to, as Patrick McKenzie put it, “becoming aware of how power operates (versus how virtue is generally described) and choosing to join it”. Like with many concepts in the tech world (microservices, cloud computing, Agile project management) this fairly narrow interview process has evolved and largely caught on, and being familiar with it is a shibboleth that’s as important as anything else in a job search, in the early-2020s tech world anyway.

The 8 Engineers You Might Know

A few years ago, I was asked to give a talk to a class of early engineering college students about the characteristics of a “good” engineer, so that they may begin to emulate those traits — or, presumably, drop out of the program if the very thought of such a thing made them ill. But as I was thinking about what these characteristics might be, I realized that there’s no such thing as a model engineer. In thinking back through my career I could identify, at least, several pretty distinct kinds of engineers, each with their own special sauce that made them great at different things. But there was no kind of Renaissance engineer, at least in my experience, that could simply excel at everything. So I started the presentation with this quote:

Everyone is a genius. But if you judge a fish by its ability to climb a tree, it will live its whole life believing that it is stupid.

of unknown origin, but not Einstein

I know brilliant developers who cannot work well in a team, but can debug a field problem like no one else. And amazing architects who come up with the most elegant designs, but can’t stick to one thing for too long. Or the opposite: deep-tinkering thinkers who would give you the death stare if you tried to pull them off their passion project.

For the college kids, I came up with some examples, like the above, of great engineers and the situations they excel in, but I couldn’t let go of the notion that there probably exists something like the Myers-Briggs Types or the classical Four Temperaments, but for engineering. Pseudoscience, yes, inasmuch as people don’t fit into nice and tidy boxes like that, but still helpful for thinking about strengths and tendencies.

Googling didn’t turn up anything like this existing in the tubes, and since the idea wouldn’t leave me alone, dimensions that seemed useful to me eventually took shape:

  1. Teamwork: collaboration vs vision. Do they value teamwork more for its own sake, or for seeing their vision be realized?
  2. Focus: design vs goal-oriented. The journey, or the destination? The means or the end? The architecture or the building?
  3. Scope: broad vs specific. Do they like working on lots of different things, either at once or in fairly quick succession, or to focus on one thing for as long as it makes sense?
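Since each dimension is binary, the eight combinations fall out mechanically. A toy Python sketch (the value labels below are my own shorthand for the two poles of each dimension, not official type names):

```python
from itertools import product

# The three binary dimensions from the list above, each with its two poles.
dimensions = {
    "Teamwork": ("collaboration", "vision"),
    "Focus": ("design", "goal"),
    "Scope": ("broad", "specific"),
}

# One value per dimension, every combination: 2 * 2 * 2 = 8 profiles.
profiles = [
    dict(zip(dimensions, combo))
    for combo in product(*dimensions.values())
]

for p in profiles:
    print(p)
print(len(profiles))  # 8
```

The first profile printed — collaboration, design, broad — corresponds to the Generalist described below; flipping one pole at a time walks through the rest.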

Three dimensions with two values each means 2³ = 8 broad kinds of engineers, which also felt like a good number for this sort of thing. We’ll take a look at those eight kinds next, and I’ll leave the drier discussion on methodology until the end, but before we go any further, a caveat:

If you use these concepts for anything, it should be either for fun or as a thinking exercise, but because this is in no way scientific, it should NOT be used for anything serious -- just like the MBTI should not be. Please don't try to create questionnaires out of this, or use it to justify leadership decisions, or anything of that sort. 
What I'm hoping for is that this framework provides some insight for engineers to maybe think about themselves and their career path, a shorthand for talking about certain behaviors, or probably more than anything: just as a fun lunch conversation. 

All models are wrong, but some are useful

commonly attributed to the statistician George Box

Hopefully this is somewhat the latter. Okay: with all that out of the way, here is the creamy center.

The Types

The Generalist

This persona is collaborative, design-oriented, and has broad interests. They love to hear and incorporate people’s ideas and feedback, they love to produce a beautiful cathedral of a codebase, and they don’t really care what they work on as long as it’s interesting. Design and requirements meetings are something they enjoy, and often organize, and they take no particular pride in the cathedral because they see it as the group effort it really is, with their role being that of a facilitator more than anything.

I think of this persona as the typical systems engineer or spec writer, or sometimes architect. Oftentimes, because of their people skills, they end up in leadership positions.

The Specialist

Same as the Generalist, but with specific interests. Not necessarily narrow, but they enjoy being an expert in some small number of things, and then working with teams that need that expertise. They love deepening their knowledge on the subjects they master, and they love putting that knowledge to good use to efficiently and elegantly solve challenging problems that, without their expert advice, might otherwise take the team twice as long to create something half as good. They’re often the special guest in the meeting, because they’re an expert in authentication or databases or whatever, and this team needs some authentication or database know-how imparted upon them.

The Builder

Collaborative, goal-oriented, with broad interests. They just love to build things. They have preferences of course, but by and large are open to working on a wide variety of projects. They don’t really care what it is or what the tools involved are or what the platform or the frameworks are — they’ll learn it all and make it work, and work well. If there’s a design in place, they’ll follow it, but if there isn’t, they’re happy to make one. They work very hard, will do everything possible to hit a deadline, and will deliver as good a product as you can expect.

Because of their broad interests, they gain broad experience, and because they’re goal oriented and get a reputation for meeting those goals, they tend to end up in leadership positions also.

The Conductor

Same as the Builder but with specific interests, the Conductor loves to get a particular thing done. But she’s collaborative, and the combination of traits here means this is almost always someone who quickly gravitates toward leadership. Project management, if that position exists, or whatever other role fulfills that function: tech lead, manager. The important thing is to be able to work in a team and motivate that team to do the thing well. Before she was in leadership, she got frustrated time and time again when the goal wasn’t met, and vowed that she could do better.

She doesn’t get involved in all the details of how every component works, because what she cares about is her specific role in it: to orchestrate all the moving parts so that the thing will ship on time. But she’ll get involved in whatever is required, do whatever it takes, and set up as many meetings as is needed to make sure someone will fix the situation so that the thing will ship on time.

The Architect

Now we’re in the Visionary half, where other people are, at best, a necessary evil to accomplish the vision, and at worst, something to be avoided as much as possible. The Architect, like the Generalist, loves building cathedrals; the difference is that they have a specific cathedral in mind. They don’t necessarily want your input, but the good ones will fight that urge and still consider it, if for no other reason than to improve their future visions.

With broad interests, they’ll work on pretty much anything they can, as long as the work is interesting, and they can put their spin on a beautiful design that will be implemented to the letter. Even if they have to implement it all themselves. Even if it takes 3x the allotted time. Even if the technology doesn’t exist, and they have to invent it themselves. Perhaps especially then. This is someone you want to take the thing to the next level. Depending on many, many things you might end up with the Wardenclyffe Tower or the Taj Mahal.

The Artisan

I would bet a small sum of money that the guy who maintains ntpd is an Artisan; probably the two that maintain OpenSSL, too. Unsurprisingly, they have a specific interest: working in some domain or in some technology or theme or whatever else is the singular thing that drives them. They love improving it and polishing it and crafting it into a beautiful creation that is their life’s work. They are watchmakers. They have a vision, often very specific, and will work tirelessly to see it come to life and possibly be successful — the latter is less important. What’s important is the act of creation.

Artisans are the developers that you talk about going into a cave and emerging with The Work some months later. The great ones do it mostly to spec, but creative license is something you generally have to deal with here, because that mode of operation is how Artisans produce the best work. Put them on a sprint team working on random tickets off the queue and they’ll wither and disengage. Give them a challenging problem with a corresponding amount of freedom, and they’ll make sure to wow the whole team.

The Hacker

A clarifying point for those not of the software industry, who may be reading this:

A computer hacker is a computer expert who uses their technical knowledge to achieve a goal or overcome an obstacle, within a computerized system by non-standard means. Though the term hacker has become associated in popular culture with a security hacker – someone who utilizes their technical know-how of bugs or exploits to break into computer systems and access data which would otherwise be unavailable to them – hacking can also be utilized by legitimate figures in legal situations.


But that’s a great definition for our purposes here too: a goal-oriented, singular visionary with broad interests. They could also be called a “fixer”. They have the confidence to learn whatever they need and figure out the situation, through whatever means necessary and regardless of pressure, to get the thing done. They don’t much care if the thing is duct-taped together, as long as it works for now. It can always be done properly later; what’s important is that the goal was met, the crisis averted, and the mountain climbed swiftly.

This is the kind of person you want on a diagnostics/field-support team. Or on a critical release that can’t be late. Or on a proof-of-concept that might create a lot of value, if anyone could get it to actually work somehow. Just don’t saddle them with process and red tape, and let them hack the planet.

The Marshal

The counterpart of The Conductor, The Marshal differs in having a vision for how the goal will be achieved. Much like The Architect, they don’t necessarily want input, but the good ones know they’ll have a better chance of achieving the goal by seeking the counsel of knowledgeable people they admire. Unlike The Hacker, they have no interest in working on different things: they have one goal, and it’s usually a sizeable one. Like freeing Europe from Hitler’s grip. Though the term “General” is too… wait for it: generic.

Marshals are great at leading focused efforts of outsized value. Because they are laser-focused on delivering, they don’t want to deal with too many personalities or process, and so need the right kind of team around them, in the right kind of environment. And under those circumstances, they lead with passion that energizes the team and they swat away all distractions, jump in and pull the weight of three people, and lead the crew to defeat Khan against all odds.

The Disengaged

There’s one more type of engineer, and this one’s not described by the model. If you’ve tried to figure out what motivates someone who is smart and capable, but their performance is consistently at best mediocre, and nothing really works out well… it might just be that they’re not interested in the work.

Maybe they don’t like the environment (the team, the project, the company, etc) or maybe they’re distracted by bigger problems in the real world or maybe they don’t like engineering and ended up doing it because someone told them it’s a good job. Whatever the reason, some people are just there to work for eight hours a day because they can’t get much enjoyment out of the work.


And that’s okay. Most people need a job, and if they bring value to the team, there’s a place for them. There’s always too much work for someone that has a clear niche, there’s always enough on the backlog that no one wants to do but that still needs to be done, and there will always be emergent situations that someone needs to attend. The Disengaged can be great for essentially doing whatever the project requires at that time, without having to worry about what motivates them — because nothing might, except more time off, or more money so they can retire earlier.

Cheat Sheet

Type        Teamwork       Focus   Scope
Generalist  Collaborative  Design  Broad
Specialist  Collaborative  Design  Specific
Builder     Collaborative  Goal    Broad
Conductor   Collaborative  Goal    Specific
Architect   Visionary      Design  Broad
Artisan     Visionary      Design  Specific
Hacker      Visionary      Goal    Broad
Marshal     Visionary      Goal    Specific

8 Engineering Types
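Since the model is just three binary dimensions, the eight types are simply their cross product (2 × 2 × 2 = 8). A small illustrative sketch; the dimension names (teamwork, focus, scope) are the ones discussed below, and the enumeration order is chosen to match the cheat sheet:

```python
from itertools import product

# The model's three binary dimensions: 2 x 2 x 2 = 8 types.
TEAMWORK = ("Collaborative", "Visionary")
FOCUS = ("Design", "Goal")
SCOPE = ("Broad", "Specific")

# Type names in the same order the combinations enumerate.
NAMES = ["Generalist", "Specialist", "Builder", "Conductor",
         "Architect", "Artisan", "Hacker", "Marshal"]

types = {
    name: dims
    for name, dims in zip(NAMES, product(TEAMWORK, FOCUS, SCOPE))
}

print(types["Hacker"])  # ('Visionary', 'Goal', 'Broad')
```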


The most accepted model for personality traits is the Big Five, which has five dimensions, each a spectrum between two poles:

  1. Extraversion (outgoing/energetic vs. solitary/reserved)
  2. Agreeableness (friendly/compassionate vs. critical/rational)
  3. Openness to experience (inventive/curious vs. consistent/cautious)
  4. Conscientiousness (efficient/organized vs. extravagant/careless)
  5. Neuroticism (sensitive/nervous vs. resilient/confident)

To me, these seem like great dimensions on which to differentiate personalities in general, but not as useful in terms of engineering. In our industry, being extraverted and agreeable only matter insofar as they’re important for working with and leading others, so I collapsed those two into “teamwork”. Similarly, “conscientiousness” and “neuroticism” aren’t as meaningful on their own, but when looking at them through the lens of what drives people, these two made more sense as a single “focus” dimension, where we differentiate between the journey and the destination. Finally, “openness” seemed an important trait on its own merit, but through my engineering lens, it became “scope” — “broad” being “curious” and “specific” being “consistent”.

But besides the Big Five, I also looked at the Four Temperaments, which is the classical view of personalities, and which is not too far off base — and is probably why it survived the centuries. It defines four personality types:

  1. Sanguine: extraverted, social, charming, risk-taking
  2. Choleric: extraverted, decisive, ambitious
  3. Phlegmatic: introverted, agreeable, philosophical
  4. Melancholic: introverted, detail-oriented, perfectionistic

If you look not-all-that-closely, two dimensions of the Big Five are mostly at play there as well: extraversion and conscientiousness. In terms of this engineering model, you could say:

  1. Sanguine: collaborative and design-focused, the Generalist and the Specialist
  2. Choleric: collaborative and goal-focused, the Builder and the Conductor
  3. Phlegmatic: visionary and design-focused, the Architect and the Artisan
  4. Melancholic: visionary and goal-focused, the Hacker and the Marshal

The Four Temperaments were also used to seed the Keirsey Temperament Sorter, which expands them and maps them onto the Myers-Briggs Type Indicator. Like the MBTI, it defines four dimensions:

  1. Concrete/observant vs abstract/introspective
  2. Temperament: cooperative vs pragmatic
  3. Role: informative vs directive
  4. Role variant: expressive vs attentive

They map onto 16 personalities, but I had trouble mapping these to anything useful in the engineering world. Going backwards from the 16 personalities, the dimension that seemed least important (aside from management and QA) was cooperative vs pragmatic, but of course that trait is very important in other roles too, so it just doesn’t seem to be a good model for our domain.

The three dimensions I ended up with, to me, highlight the most important differences in engineers: those who love to work with others vs those who love to go into the cave; those who love to release code vs those who love to create beauty; and those who love to work on anything as long as it’s challenging vs those who have a particular passion.

Again, this is all such super-soft methodology that it would put whipped cream to shame, and it shouldn’t be used for anything serious: not only because of the many shortfalls of a model like this, but also because people change all the time, and they don’t fit neatly into one or two or even eight boxes.

But I do think that, especially as a people leader or as an introspective individual contributor, being aware of these sorts of inclinations can help with knowing what kind of work makes a person happy, which is very important because of the old proverb: “do what you love, and you’ll never work another day in your life.” A happy employee is the most productive they can be.

Let me remind you of General Yamashita’s motto: be happy in your work

Colonel Saito in The Bridge on the River Kwai (1957)

On Engineering Consensus

The main reason science works is, of course, the scientific method. It forces rigor into the process and it’s what began to fork hard science from philosophy. Engineering, while not a science per se, is a sibling discipline. It too benefits from the scientific method (though more so in the realm of testing) and the engineering method is similar:

  1. Scientific: Ask a question (“Does fire destroy matter?”). Engineering: Consider a problem (“How can I seal a container?”)
  2. Scientific: Form a hypothesis (“Burning stuff in a sealed container should tell us”). Engineering: Design a solution (“Maybe putting silicone on a jar lid will do it”)
  3. Scientific: Make a prediction (“If the container weighs the same, the matter just turned to gas”). Engineering: Implement the solution (put the silicone on the lid)
  4. Scientific: Run a test (burn some stuff in a sealed container). Engineering: Run a test (burn some stuff in the sealed jar)
  5. Scientific: Analyze the results (see if it weighs the same or is lighter). Engineering: Analyze the results (see if any smoke got out, and if there were any bad side effects, like melting)

We can generalize them both to something like:

  1. Have an idea
  2. Figure out what to do about it
  3. Do that thing
  4. See if it worked
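The generalized loop above can be sketched in code. This is purely illustrative; the step functions (`plan`, `execute`, `evaluate`) and the jar example are my own placeholders, not anything prescribed by either method:

```python
def iterate(idea, plan, execute, evaluate):
    """One pass of the generalized method: idea -> plan -> action -> check."""
    proposal = plan(idea)        # figure out what to do about it
    result = execute(proposal)   # do that thing
    return evaluate(result)      # see if it worked

# Hypothetical example: the "sealing a jar" problem from above.
outcome = iterate(
    idea="seal a container",
    plan=lambda i: f"put silicone on the jar lid to {i}",
    execute=lambda p: {"smoke_escaped": False, "lid_melted": False},
    evaluate=lambda r: not r["smoke_escaped"] and not r["lid_melted"],
)
print(outcome)  # True: the seal held
```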

In any case, they’re related. And one overlooked aspect of science is that it’s not all that cut and dried. Rare is the experiment that produces unequivocal results that are obvious to any layman. Lavoisier’s experiments on the conservation of matter seem straightforward to us now, but the test could’ve been wrong in a lot of ways: the sealed glass vessel could’ve had a microscopic leak, the scales might not have been sensitive enough to detect that some matter was destroyed, and Lavoisier himself could’ve had his finger on the scale!

Antoine Lavoisier

This is why science relies not just on the method but, equally so, on peer review. Other people had to read about Lavoisier’s experiment or maybe observe it in person. People who were expert enough in the field that they understood all of the details about creating a sealed vessel, about the accuracy of scales, and about other aspects of the experiment. And eventually, other people recreated his experiment and got the same result, and only then was the scientific consensus attained that no — matter cannot be destroyed.

Good science works because regardless of the prestige of the scientist or the seeming quality of the experiment, the certification of the finding is independently verified by other experts in the field, peers who know what to look for and what notions to accept as scientific fact.

Good engineering works the same way. Instead of peer review of research papers we do peer review of design documents and pull requests, and if that step of achieving engineering consensus is missing, the quality of the work suffers.

“But,” you say, “the testing will prove the quality of the work!” Except that there’s a fine distinction there: tests will prove that the solution works as intended; they say nothing about how well it’s built. It could be held together by duct tape, it could be an overly complicated Rube Goldberg device that’s impossible to maintain, or it could be a pile of spaghetti that’s impossible to refactor. In engineering as in life, the ends rarely justify the means. And so we need consensus on whether those means are good.

Consensus is a tricky thing though. I dread submitting my code for review, even when I’m very happy and confident with it, because I know there are things I might have missed, and as much as I want to embrace learning from my mistakes, I really don’t like to make mistakes. However, when certain smart, experienced people are on vacation, I don’t mind it so much, because I know I haven’t made any mistakes a junior developer is likely to catch, and I can make a good argument if there are questions about my approach. But those arguments might not fly past more senior developers, who might have insight that I’m lacking, and the experience to know what works and what doesn’t to back their stance.

So it’s not enough to just get the consensus of any two people: for quality consensus, it has to be two (or more) of your peers. Developers operating on your level or higher, who not only have the general experience and skills to recognize whether your work is good, but also have the specific experience with the surrounding landscape — be it the type of thing you’re designing, if it’s a design document, or the codebase that you’re changing, if it’s code.

And it should be people who aren’t afraid to speak up. Some talented engineers that would otherwise be good peer reviewers might be intimidated by a Bob that’s less talented. Maybe this Bob is higher up the totem pole, or maybe he bullies, badgers, or simply exhausts all opposition.

Peer reviewers should be true peers. People who are:

  1. Technical equals
  2. Organizational equals
  3. Up for a debate
  4. On good rapport

(That last one is to avoid a situation where Chelsea always nitpicks Bob’s code because she thinks he’s the worst.)
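To make the four criteria concrete, here’s a hypothetical sketch of reviewer selection as a filter. The `Reviewer` fields and numeric levels are invented for illustration; real peer-ness is obviously fuzzier than booleans and integers:

```python
from dataclasses import dataclass

@dataclass
class Reviewer:
    name: str
    level: int       # technical level (hypothetical seniority band)
    org_level: int   # organizational level
    speaks_up: bool  # willing to debate, won't be steamrolled by a Bob
    rapport: bool    # no axe to grind with the author

def true_peers(author: Reviewer, candidates: list[Reviewer]) -> list[Reviewer]:
    """Keep only candidates who are peers on all four criteria."""
    return [
        c for c in candidates
        if c.level >= author.level            # 1. technical equals (or better)
        and c.org_level == author.org_level   # 2. organizational equals
        and c.speaks_up                       # 3. up for a debate
        and c.rapport                         # 4. on good rapport
    ]

# Hypothetical team: only Bob qualifies on all four criteria.
author = Reviewer("Ada", level=3, org_level=2, speaks_up=True, rapport=True)
candidates = [
    Reviewer("Bob", level=3, org_level=2, speaks_up=True, rapport=True),
    Reviewer("Eve", level=2, org_level=2, speaks_up=True, rapport=True),   # junior
    Reviewer("Mal", level=4, org_level=2, speaks_up=False, rapport=True),  # won't push back
]
print([p.name for p in true_peers(author, candidates)])  # ['Bob']
```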

This is the ideal engineering consensus to strive for, and for better or worse, in practice there are two reasons why it won’t happen all the time:

  1. Most teams are small and there’s no equal to the tech lead, either technically or organizationally.
  2. Most things aren’t important enough to spend a lot of time reaching consensus.

Which exposes an important point in technical leadership: one of the reasons having good leaders matters is for the times when consensus matters. Good tech leaders have good rapport with the team, they mentor and build expertise in others, they set high standards for excellence, and they encourage healthy debate among team members. Over time, good leadership results in exactly the kind of savvy, comfortable team that generates worthwhile consensi. Consensuses? Consensi. Please excuse me while I look up the consensus on this matter.