DRY Code, WET Comms

DRY is probably one of the most important bedrock principles of professional programming. It’s a hallmark of what separates an amateur coder from a legitimate engineer: thinking ahead; designing your codebase; making it flexible, modular, and maintainable.

If you haven’t come across the acronym before, it stands for Don’t Repeat Yourself, and it means that pretty much any time you find yourself duplicating blocks of code or logic, you should think about pulling it out into its own function or class or whatever, that would then be called from both of those places.

xkcd #2347

Why? Because when you later find a bug in it, you can just fix it in one place, instead of fixing it in one place and forgetting to fix it in the other. Or worse, not knowing there even is a second place to fix it in, because you didn’t write the code in the first place, or you did, but it’s been more than three days and you forgot. It also helps you separate your concerns and (HEY WAKE UP) construct a codebase with purpose, one that’s organized logically so others can learn and navigate it. Sorry this paragraph was a bit dry — pun intended.

This is the opposite of copying and pasting the same for loop all over the place, because you’re just learning how to program and you’re not sure how to import modules, or really even what a function signature is. That style of programming is WET, because you Write Everything Twice. It’s amateurish, and it’s something that a lot of good engineers learn to avoid on their own, after having to fix everything twice, or refactor everything twice, or three times, or six times.
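To make the contrast concrete, here’s a minimal, hypothetical sketch — every name and the validation rule itself are invented for illustration. The WET version pastes the same check into two functions; the DRY refactor pulls it out into one function that both call.

```python
# WET: the same validation logic is pasted into two places.
def create_user(name: str, email: str) -> dict:
    if "@" not in email or not name.strip():
        raise ValueError("invalid user data")
    return {"name": name.strip(), "email": email.lower()}

def update_user(user: dict, name: str, email: str) -> dict:
    if "@" not in email or not name.strip():   # duplicated: a fix made above...
        raise ValueError("invalid user data")  # ...has to be remembered here too
    user.update(name=name.strip(), email=email.lower())
    return user

# DRY: the shared logic lives in one place, called from both.
def validate_user_data(name: str, email: str) -> None:
    if "@" not in email or not name.strip():
        raise ValueError("invalid user data")

def create_user(name: str, email: str) -> dict:
    validate_user_data(name, email)
    return {"name": name.strip(), "email": email.lower()}

def update_user(user: dict, name: str, email: str) -> dict:
    validate_user_data(name, email)
    user.update(name=name.strip(), email=email.lower())
    return user
```

The bug-fix argument applies directly: when the validation rule turns out to be wrong, the fix lands in `validate_user_data` once, and both call sites get it for free.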

However, people are not CPUs. In fact, we built CPUs, because we’re so bad at being CPUs. For us, seeing the same thing more than once is boring. It’s something we filter out, because boring is not dangerous. The motionless green grass won’t kill us, and so we hardly even notice it. But if it moves in a funny way, suddenly we think it’s because of a snake, and it’s the only thing we can see. We are neural networks. We are built to recognize patterns, to flag any dangerous ones, and to recognize them with high urgency. And so, we are not having fun adding a long list of numbers. Or cross-referencing scientific texts. At least not most of us.

But there’s a tendency, especially in the technical manager, to still think DRYly. “From now on, we’re going to use RabbitMQ for all asynchronous messaging, unless there’s a good reason not to”, you write in the #dev-general Slack channel, adding that it’s already part of the deployment anyway, that it’s very performant, that it usually doesn’t make sense to reinvent the wheel, and that it’ll be easy to adapt existing code to use it. When you’re done typing this pithy paragraph, you tag it with @channel, giddily dismissing warnings about how many people in what timezones will be getting dinged. “Good! Ding them!”, you cackle. You handle the handful of questions and complaints that arise, explaining that they can still use JSON, that you don’t have to rewrite the project in Erlang, and that yes, it’s weird that Dell used to own RabbitMQ. And VMware. And SpringSource. And that VMware still owns it. Yes, it’s literally a list of companies you wouldn’t expect… can we move on?

Satisfied with how you navigated that, two weeks go by and you’re in a design meeting and someone pipes up asking why RabbitMQ is now part of the design diagram. Your ears prick and you jump in “Hey Frank, did you not see my Slack announcement a couple of weeks ago?” As soon as you say it, you realize that no, he did not — for he was on vacation. You quickly acknowledge it and say you’ll send it to him, and you search for it because you forgot to pin it in the channel, but that’s fixed now.

Two more weeks go by and you come across a defect ticket being worked on, against the old, homegrown asynchronous message service. “How did this even make it into the backlog??” You look at the names on the ticket and DM them “hey guys, there’s some kind of disconnect, because I see this ticket, but we were supposed to stop using the old messaging service.” It’s news to both of them. They remember seeing the Slack announcement, but didn’t realize they should start using Rabbit right away. They thought it was just for new development. “Hmmm… I guess the announcement could’ve been clearer”, you realize.

Two more weeks go by and you’re looking over a giant pull request from Stan when you notice … no … it can’t be. He’s added a whole mechanism for pushing messages over WebSockets! You look at it again, and it’s true. It’s exactly what that cowboy is doing. He just completely ignored the announcement and spent weeks working on this giant component that replicates exactly what Rabbit is supposed to do for us, and fragmented the architecture in the process.

"What Slack announcement is this?"
<you send him the link>
"Oh. Sorry, I don't remember seeing this. I don't normally pay attention to Slack unless someone DMs me."
"Seriously?"
"Yeah, can't concentrate with it dinging me all the time, so I turned notifications off."

After you’re done slamming your head against the desk, he tells you that he actually considered RabbitMQ on his own, before dismissing it because he already needed the WebSocket connection for bidirectional communication and it would’ve made the whole message path more brittle to go over two connections. All of which are good arguments that fit squarely into the “good reason not to” clause.

The story has a happy ending because I’m not GRRM, but it easily could have not. And a fourth one might not. The point here is that there are all kinds of opportunities and good reasons for people to not internalize things. Here are some:

  1. They didn’t get the memo, for various legitimate reasons ranging from being out that day to technical glitches
  2. They didn’t understand all or part of it
  3. They skimmed it, didn’t think it affected them, and promptly dismissed it
  4. They don’t comprehend things well in that medium: some people need to see pictures and read things, some people need to hear them, some people don’t read Slack but do read email, some are the opposite
  5. They heard it, understood it, and forgot it anyway

As a leader, I feel like it is absolutely part of the job to be the positive version of Marty McFly’s 2015 boss, who tells him he’s fired via TV and three different faxes, for some reason. Send that Slack announcement, but also send it in an email, and say it out loud in three different meetings with slightly different people, and iMessage that one guy that doesn’t pay attention to anything but iMessage, and paste it in a document and file it in a good place in your document hierarchy and make sure it uses good, searchable words, and add a joke or a pun or something to make it memorable. And repeat it every time the subject gets anywhere close to it, until people audibly start groaning in your direction. That’s when you’ve succeeded.

In other words, for the important stuff, do whatever is necessary to make sure it gets through to the right people. Go to them. Meet them on their own terms, in whatever way they process information. Write everything five times. People are different, both from themselves and from machines. Embrace those differences, because it’s what makes a team a powerhouse of creativity. So you have to repeat yourself — it’s well worth it.

The Best Testers Are Scientists

It doesn’t take long to appreciate a great software tester. And it doesn’t matter if she’s a manual tester or writes automated tests, because what really matters are the types of tests being run: curious tests. Tests that don’t just discover a bug and quickly document it away in a ticket, along with the state of the whole world at the time of discovery. But instead, tests that try to find the exact circumstances in which the bug occurs.

The more defined those circumstances, the more helpful the ticket is to the developer; ideally, it takes their mind right away to the exact function that is responsible for the bug. In those cases, you can almost see the light bulb go off:

Tester: I’ve only seen the bug on the audio configuration screen, and it usually crashes the app after single-clicking the “source” input, but I’ve seen it a couple of times from the “save” button too. And it seems to only happen after a fresh install on Android 10.

Dev: ohhhh! That’s because the way we handle configuration in Android 10 changed and the file the audio source is saved in doesn’t exist anymore!

This is exactly the kind of dev reaction you want to a bug report. It’s an immediate diagnosis of the problem, which was only made possible by a very well-researched and described bug. But notice how that description could, with changes only to the jargon, have been written by an entomologist:

Entomologist: I’ve only seen the bug on a tiny island off the coast of Madagascar, and it’s usually blue with green spots, but I’ve seen a couple of them with yellow spots too. And it seems to only come out right after sunset in the wet season.

Which is kind of obvious when you think about it, because what do scientists do? They test the software that is our reality. Galileo’s gravity experiment is one of the more famous in history (and likely didn’t happen), but what is it, in software terms? He wanted to know if the rules of our universe took weight into account when pulling things toward the Earth. A previous power user, Aristotle, figured that the heavier a thing was, the faster it would fall. But that user failed to actually do any testing. So thank God that talented testers, like John Philoponus and Simon Stevin, came along and figured out that things mostly fall at the same rate through air, and then bothered to update the documentation.

What Aristotle did was assume the software worked in a certain way. Granted that he didn’t have the requirements to reference, but he probably noticed that you have to kick a heavy ball harder to go the same distance as a lighter ball, and he figured that the Earth kicks all things equally hard. That’s the equivalent of our tester above seeing the “source” input work on Android 9 and not bothering to test it on 10. Or seeing that it worked on the video configuration screen and not bothering to test it on the audio one too.

And that’s okay, because Aristotle was not a tester. He was more like a fanboy blogger. But what testers should be is bona fide scientists, like Simon Stevin, who follow the scientific method:

  1. Ask a question
  2. Form a hypothesis
  3. Make a prediction, based on your hypothesis
  4. Run a test
  5. Analyze the results

In our example with the “source” input, after the tester saw it the first time, she probably did something like this:

  1. “why did it crash?”
  2. “maybe it was because I pressed the ‘source’ input”
  3. “if so, that’ll make it crash again”
  4. Relaunched the app, tried it, it crashed again.
  5. “okay, that was definitely the reason”

Aristotle might stop there and file the bug: “app crashes when using the ‘source’ input”. And the developer would try replicating it on their Android 9 phone and kick the ticket back with “couldn’t replicate”, and that whole cycle would be a waste of time. But our tester asked another question:

  1. “does it crash on this other phone?”
  2. “if it doesn’t, it’s a more nuanced bug”
  3. “I think it’ll crash though”
  4. Tried it on the other phone: it didn’t crash
  5. “what’s different about this phone?”

And she continued the scientific process like that, asking more and more pertinent questions, until the environment our bug exists in had been fully described. Which is exactly what you want in a bug report, because anything less will, in aggregate, be a productivity weevil, wasting both developer and tester time with double replication efforts, conjectures about the tester’s environment, and back-and-forth. A clear, complete bug report does wonders for productivity.
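The hypothesis loop she ran can even be sketched in code. This is a toy, hypothetical harness — the app under test is faked with a stub, and every name is invented for illustration — but it shows the shape of the experiment: each parameter is one variable the tester isolated, and the final table is the bug report a developer can re-run.

```python
def load_audio_config(android_version: int, fresh_install: bool) -> str:
    """Stand-in for the buggy code path: the file the audio source is
    saved in doesn't exist after a fresh install on Android 10."""
    if android_version >= 10 and fresh_install:
        raise FileNotFoundError("audio_source.cfg")
    return "line-in"

def exhibits_bug(android_version: int, fresh_install: bool) -> bool:
    """Run one experiment: does this environment reproduce the crash?"""
    try:
        load_audio_config(android_version, fresh_install)
        return False
    except FileNotFoundError:
        return True

# The tester's conclusions, written down as cases anyone can re-run.
# Each row varies exactly one thing, just like her questions did.
experiments = {
    (9, True):   False,  # Android 9, fresh install: fine
    (10, False): False,  # Android 10, upgraded install: fine
    (10, True):  True,   # Android 10, fresh install: crash
}
```

Whether she actually writes it down as an automated test or as rows in a ticket matters less than the discipline itself: every row is a prediction that was checked, not a guess.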

So then, why not just teach all your testers the scientific method? Because it doesn’t work in the real world. We all learn the scientific method, but few of us become scientists. And I imagine that, just like in any profession, a not-insignificant number of scientists aren’t good scientists. Knowing things like the scientific method is necessary, but not sufficient, to make a good scientist. You also need creativity, in order to ask the interesting questions, and more importantly, curiosity to keep the process going until its natural conclusion — to uncover the whole plot.

Tangentially, curiosity is a hugely important trait in great developers, too. But for testers, even more so.

Be The Lorax

I am the Lorax. I speak for the trees.

“The Lorax”, 1971

This was Dr. Seuss’ favorite of his books. If you haven’t come across it, it’s a great fable about a woodland creature who keeps warning an industrialist to stop cutting down all the Truffula trees; but the guy doesn’t listen and proceeds to destroy the ecosystem.

The Lorax book cover

From time to time I feel like the Lorax, but instead of the trees, I speak for the developers, who are every software company’s most precious resource1. And I think this is a crucial, but often overlooked part of the software manager’s job: litigation.

At times the plaintiff, other times the defendant, but usually in opposition to someone named Taylor from another department, like finance or HR, who doesn’t understand engineers. Sometimes Taylor is someone very high up, who used to be an engineer a lifetime ago, but they’ve forgotten what it’s like, now that their days are filled with meetings about sales projections and synergy. And here you come, Sr. Cat Herder, with a request to change the new dress code policy.

“You might not have heard, but we had a bit of a problem in Marketing, so we had to institute the dress code. We didn’t want to, but you know Legal.”

You see, Taylor’s not a villain. Almost nobody is. They’re just trying to do their best, and one thing people outside of Engineering are pretty bad at doing is putting themselves in an engineer’s shoes. It would be easier for most people to pretend they were a parakeet. But this is where you come in — you who have knowledge of the way of life in the dark cave of Engineering.

So you use your nerdy charm and wit to show that having a policy is fine, but that it just needs to be tweaked, because while the developers have no intention of causing problems, roughly half of them will quit before dressing “professionally” at work. And a third of those have not worn long pants or closed-toe shoes in years — and won’t start now. Not when they can get a good job by just whispering the words “I’m looking for a change” into the ether that’s continuously monitored by eager recruiters. Because as it turns out, those developers are a lot of the best ones we have, and they’re past putting up with well-intentioned policies. And if even one of them leaves, it’ll cost us a ton in recruiting, on-boarding, shipping delay, and general business risk, which is surely worth tweaking the policy to allow cargo shorts and sandals.

“But it’s 45°F outside!”, Taylor exclaims.
“It doesn’t matter, because they’re almost never outside,” you reply, and then continue with a whisper, “and they’re really stubborn.”

If you do this well, the policy will get tweaked before the developers are even told what was in the policy email they deleted, and the builds go on without a blip.

Sometimes you have to be the change you want to see in the company, which is a bit harder. Convincing the powers that be to allow flexible hours, or to get Engineering more expensive laptops, or that the savings were not worth switching to Microsoft Teams — these are the kinds of things that might need a Powerpoint. A beautiful, well-researched, entertaining Powerpoint, which will take a lot of your precious little coding time.

But these are the things you have to do as a good manager and the sacrifices you have to make, because you are the Lorax, and you speak for the devs. Though… hopefully better than the Lorax, in that you actually succeed in preventing the collapse of the ecosystem.

Footnote

  1. Just to be clear: I was using “resource” metaphorically. As much as nature abhors a vacuum, I abhor calling people resources. Trees and laptops and StackOverflow articles are resources — people are not.

Bonuses Don’t Motivate Developers

First, let me reassure you that they don’t: 50 years of research have shown us that, if anything, incentives demotivate employees. And not just developers, but anyone whose job requires some thought beyond mechanistic, rote work like the assembly line.

This is succinctly explained in the most popular RSA Animate video so far (you can watch it below) — a speech given by Dan Pink, who literally wrote the book on motivation. In it, he explains how an experiment funded by the Federal Reserve and conducted by MIT, Carnegie Mellon and the University of Chicago showed that bonuses led to poorer performance for any tasks that required anything above “rudimentary cognitive skill”.

Increasing the bonuses didn’t just not do anything, it actually made people perform worse, and it held true for populations both in the US and in rural India. But this hugely valuable research is mostly ignored, despite being basically ancient by now:

  • Herbert H. Meyer wrote in a 1975 paper that

    “…  merit pay emphasizes the direct relationship between job performance and dollar rewards, thereby detracting from intrinsic motivation in the work itself. A system that would switch the emphasis to rewards for self-development and opportunities for greater responsibility would seem to serve both individual and organizational goals in a more effective manner.”

  • Alfie Kohn, author of another book on motivation, wrote in a Harvard Business Review article in 1993:

    “As for productivity, at least two dozen studies over the last three decades have conclusively shown that people who expect to receive a reward for completing a task or for doing that task successfully simply do not perform as well as those who expect no reward at all. “

  • Joel Spolsky, after quoting the above, wrote in 2000:

    “… any kind of workplace competition, any scheme of rewards and punishments, and even the old fashion trick of ‘catching people doing something right and rewarding them,’ all do more harm than good. Giving somebody positive reinforcement (such as stupid company ceremonies where people get plaques) implies that they only did it for the lucite plaque; it implies that they are not independent enough to work unless they are going to get a cookie; and it’s insulting and demeaning.”

  • Joel again, in 2006, writing about what he calls the “Econ 101 Management Method”:

    “But when you offer people money to do things that they wanted to do, anyway, they suffer from something called the Overjustification Effect. “I must be writing bug-free code because I like the money I get for it,” they think, and the extrinsic motivation displaces the intrinsic motivation. Since extrinsic motivation is a much weaker effect, the net result is that you’ve actually reduced their desire to do a good job. When you stop paying the bonus, or when they decide they don’t care that much about the money, they no longer think that they care about bug free code.”

So the rule is that money does not motivate, with two caveats:

  1. Rote, mechanistic tasks: more money works beautifully in that specific case. Which is why we have bonuses at all, because it worked so well in the factories where Henry Ford pioneered the concept of paying workers a better wage for better performance.
  2. Too little money: if workers think they’re not being paid fairly, it becomes a sticking point and all they think about is how they’re being screwed, which obviously prevents them from performing at their full potential.

Joel had another article in 2006, called “Identity Management Method”, in which he described how to create intrinsic motivation:

“To be an Identity Method manager, you have to summon all the social skills you have to make your employees identify with the goals of the organization, so that they are highly motivated, then you need to give them the information they need to steer in the right direction.”

Fast forward to Dan Pink’s 2009 book and 2010 RSA Animate, and he continues the same idea, breaking it down into three factors that do increase performance:

  1. Autonomy: as a manager, giving your employees autonomy is the best way to get them engaged in the work. He mentions Atlassian’s ShipIt Days as an example of how autonomy leads to great things, and should’ve mentioned Google’s 20% time as well.
  2. Mastery: “the urge to get better at stuff”. This is a big reason why the open source movement exists.
  3. Purpose: an important reason to do what you’re doing. The open source movement ties in here, and also crowd sourced efforts like Wikipedia, but increasingly, companies: Apple, Google, Facebook all have self-invented lofty purposes for their existence, and this inspires their employees.

Bonuses are related to none of those. The only reason to ever dangle bonuses in front of developers is maybe as compensation for the rare big push requiring lots of overtime; and in that case, it’s just to prevent them from feeling exploited. Otherwise, bonuses will actually hurt productivity. And that’s a scientific fact. To get better performance out of your employees, hire smart people and let them be smart. Tell them the company story and why the job is important, then simply get out of the way and help them when they need it.