A Study of Cognitive Biases in Software Development

I recently read a few articles on cognitive biases, decision making, and expert intuition from the field of management research. Then an article popped up from the Communications of the ACM (CACM) dealing with cognitive bias in software development. The CACM article is a small field study that serves up some interesting and potentially quite useful conclusions about how to think about thinking in software development.

The Management Side

On the management/decision research side, I read the article “The Big Idea: Before You Make That Big Decision…”, from the Harvard Business Review (HBR), June 2011, by Daniel Kahneman (winner of the Nobel Memorial Prize in Economics), professor Dan Lovallo, and Olivier Sibony. The article provides a good introduction to the concept of cognitive bias and discusses the risks that biases pose to business decision making. It also offers a series of questions that decision makers should use to analyze business cases more deeply.

The HBR article is very much about big business decisions, which is quite a different world from the daily work of regular software developers. Still, the concept of cognitive biases can be applied to both worlds. In the end, software development is also about making decisions – how to solve an immediate problem, how to formulate a line of code, how to architect a system, which algorithm to use for a particular problem, which APIs to use, etc. Thus, studying the effect of biases in software development is reasonable.

Biases in Decision Making

The HBR article provides a good overview of the concept of cognitive bias. It is based on a cognitive model where thinking is divided into intuitive and reflective.

Intuitive or “system one” thinking is subconscious and automatic. It is used when you just do something without reflecting on what you are doing. It is what is operating when your brain just pops up an idea or a solution to a problem without you making a conscious effort to think about it. As you gain experience and expertise in a field, more and more intuition will be used. Cognitive biases are an unfortunate but also unavoidable side-effect of system one thinking, where the automatic system finds a pattern or creates a story that appears to make sense, but which might have flaws.

The interesting thing about biases is that they are fundamentally subconscious. Knowing that you have biases is not enough to overcome them. You need help from the outside.

Reflective or “system two” thinking is the slow, deliberate thinking we apply when we work something through step by step. It is used when learning something new. When you are aware that you have spent real effort to work through some problem, system two has been in action. If we make a mistake using system two, we are likely aware of it, while a mistake made using system one might show up later to bite us without our noticing it at the time it is made. However, we could not really survive using system two for all decisions – it would take too much time and very likely result in analysis paralysis for anything remotely complex (or getting eaten by a lion).

The HBR article provides a series of examples of how intuitive thinking can influence the decision-making process and the final analyses of business deals or actions, resulting in poor decisions that might seem obviously broken to someone on the outside (while still appearing to make sense within the context of the proposing group). The main message is that when executives review recommendations and analyses, they need to look at the process by which the materials were produced and not just their content. By using a checklist of questions, they can discover the subconscious biases at work in the people preparing a proposal.

Example biases at work in business are saliency bias (making an analogy to other cases), confirmation bias (our old friend), anchoring bias, sunk-cost bias, and self-interest bias. All are mental tools that we use to make sense of a messy world, but they can also lead us astray.

Biases in Software Development

The CACM article was “Cognitive Biases in Software Development”, from the Communications of the ACM, April 2022, by Souti Chattopadhyay, Nicholas Nelson, Audrey Au, Natalia Morales, Christopher Sanchez, Rahul Pandita, and Anita Sarma.

The CACM article is a field study where the authors are looking for the effect of cognitive biases in software development (i.e., programming). They identify a set of biases that can apply to software development and try to ascertain their effect on the work of a small group of real-world professional programmers. They interviewed the programmers and looked at code check-ins and changes. The study is admittedly small, but the results make sense and would seem to offer some real insights (which could be system one applying confirmation bias).

Software Development: Identified Biases

The CACM article groups the biases as shown in the image below. The set of biases is a bit different from those of the HBR article. There are many overlaps, but also some unique biases in each article that I think reflect the different nature of the problems being tackled. Making a big business decision is quite different from deciding how to write a line of code.

Table 4 from the CACM article.

The fact that we see effects like confirmation bias, overconfidence, and the halo effect at work in examples in both articles indicates that the concept captures something universal in human thought and decision making. This is both fascinating and scary. It also shows the (potential) value in science of applying ideas from one field to another.

The CACM article serves to remind us that software development is more than just pure computer science. Obviously, people matter and there are human factors at play when writing code. Cognitive biases can have a real impact on code in a real-world setting.

The Impact of Biases on Code

A key part of the CACM article is the quantification of the impact of biases on code. Read the article for the full details, but in essence they look at code changes (“actions”) and conclude that programmer actions resulting from biases are typically worse than those that do not show bias. Bias-based actions tend to require reversal more often. In the end, the authors claim that biases cause programmers to lose productivity:

Therefore, biased actions lead to significant negative outcomes as participants lost approximately 25% of their entire working time.

The precise numbers might well be debated, but the overall conclusion makes sense. The article provides a series of examples of biased actions and their outcomes (and I would have loved it if there had been many more details provided). The first example they give is one of inadequate exploration:

For example, P4 needed a subset of data from a hashmap which required him to query the hashmap. As he was not familiar with the query interface and did not know how to construct the query, he decided that an easy-fix (CB6) was to instead manually collect the data.

I have seen things like this many times. Instead of doing something in “the right way”, programmers throw together something that gets the job done by, in effect, duplicating information or logic that already exists, simply because it is less effort to write a lot of new code than to understand how existing APIs and data structures work.
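To make the pattern concrete, here is a minimal hypothetical sketch of the kind of situation described (in Python; the InventoryMap class, the data, and the query interface are invented for the illustration, since the article does not show P4’s actual code). The “easy fix” hand-collects the wanted entries into a new structure, duplicating both the filtering logic and the data, while the query interface that already exists would have produced the same subset directly:

    # A hypothetical illustration of the "easy fix" described above. The class,
    # the data, and the query style are invented for this sketch; the article
    # does not show P4's actual code.

    class InventoryMap:
        """A map-like structure that already offers a query interface."""

        def __init__(self, items):
            self._items = dict(items)

        def query(self, predicate):
            """Return the subset of entries for which predicate(key, value) holds."""
            return {k: v for k, v in self._items.items() if predicate(k, v)}

        def items(self):
            return self._items.items()

    inventory = InventoryMap({
        "apple": {"count": 12, "category": "fruit"},
        "carrot": {"count": 7, "category": "vegetable"},
        "banana": {"count": 0, "category": "fruit"},
    })

    # The "easy fix": ignore the query interface and manually collect the data
    # into a new, parallel structure. It works, but it duplicates the filtering
    # logic and leaves a second copy of the data that must be kept in sync.
    fruit_in_stock = {}
    for name, item in inventory.items():
        if item["category"] == "fruit" and item["count"] > 0:
            fruit_in_stock[name] = item["count"]

    # Using the interface that already exists: one expression, one source of truth.
    fruit_in_stock = {
        name: item["count"]
        for name, item in inventory.query(
            lambda k, v: v["category"] == "fruit" and v["count"] > 0
        ).items()
    }

    print(fruit_in_stock)  # {'apple': 12}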

When the programmers in the study were asked, they felt that the three most common biases were:

  • Memory bias
  • Convenience
  • Preconception

However, the researchers’ own analysis of the data did not find memory bias to be that common. Thus, asking programmers about how they think they think might not provide the full answer. Worth thinking about.

One interesting (and obvious in hindsight) observation was that experience did not protect against bias. That makes sense to me – one way to think about building experience in a field and getting better at what you do is that system one thinking expands to cover more cases. Which means you get things done faster, but also that unconscious biases can creep in. You would expect a good seasoned programmer to quickly bash out code to solve a problem, for cases where a beginning programmer would have to stop and think and work things out explicitly (system two thinking).

Reducing the Impact of Biases

Both articles note that there is not really a way for an individual to avoid biases. The automatic system is subconscious in a very deep way. However, the negative impact can be reduced by applying methods, tools, and techniques. Ideally, allowing system one thinking to do most of the work while providing a check on its negative effects by forcing a switch into system two and more conscious reasoning.

The main content of the HBR article is really a checklist to be used to reduce the impact of bias on big business decisions. They advise leaders who receive a proposal from a team to:

  • Check for self-interest bias – pretty obvious what that means
  • Check for affect – has the team fallen in love with the proposal?
  • Check for groupthink – try to find dissenting views
  • Check for saliency bias – too much emphasis on analogies with previous cases?
  • Check for confirmation bias – is there sufficient attention paid to alternatives?
  • Check for availability bias – make sure that all data needed for the decision has been collected
  • Check for anchoring bias – explore the numbers in the proposal, where do they come from?
  • Check for halo effect – is there an assumption that success from one area applies equally in another area, without any real foundation?
  • Check for overconfidence – force the proposal team to consider the proposal from a different perspective and to think about the reactions of other players
  • Check for disaster neglect – has the team really considered the worst case for real?
  • Check for loss aversion – is the team more concerned about risks than gains?

This advice might not be particularly applicable to everyday programming. However, it is still relevant in a software context when it comes to making larger decisions with potential long-term impact on a code base. For example, deciding on which language, programming framework, or API to use. In such cases, it makes sense to have a group investigate the options, and the biases discussed in the HBR article are just as likely to manifest themselves as they are in a business setting.

The CACM article does not offer a crisp checklist. However, it does offer some ideas on helpful practices to reduce the negative impact of biases in software:

  • Stepping back – such as spending time on explicit documentation days or taking clean-code training. Basically, making programmers step out of the daily grind to encourage more reflection.
  • Different perspectives – the advice here is really mostly to get other people into the loop. For example, using pair programming, talking to other teams or other functions (like discussing with a QA/testing department). It can also mean going back and looking at requirements and needs of users.
  • Systematic approach – applying methods that force feedback or reflection into the programming process, such as early design reviews, thinking about the compatibility or future maintainability of code, or coming up with multiple solutions and comparing them. I think this could also be seen as forcing the mind into system two thinking.
  • “RTFM” – read documentation and tips and best-known-methods before starting to code. Follow coding guidelines and coding standards. Basically, avoid just doing things the way they are always done. Understand why, check if existing coding patterns actually agree with the documented behaviors of APIs – get out of “ownership bias” and acknowledge that old ways might be wrong. This goes to a pet peeve of mine.
  • Processes – typical software engineering methods like agile, test-driven development, and user stories, along with coding standards, standard packages and libraries, and the clean-code methods that were part of the discussion in the first point. Applying problem-solving methodologies also belongs here.

Stepping back, the categories appear to overlap a bit. That is to be expected when trying to decompose a gnarly problem like this. In any case, the ideas make sense. Interestingly, none are really new – all are things that have been proposed over the years and many are part of known best practices.

Conclusions

It is quite interesting to look at these two articles together, to get two different perspectives on the same psychological effect. Looking at how two rather different fields both encounter the same ideas provides new angles, especially on the programming side.

Digging into the CACM article and programming, we end up with advice on how to become a better programmer that seems familiar. Despite this, I think it is worth looking at the problem from this perspective. Thinking about how we think can unlock new insights into how teams work.
