This is an important topic for us at Lemma, because we believe that our experience in recognizing and dealing with cognitive and social biases in software development is one of our core value propositions as a company, one that you simply cannot get from pure staff augmentation or from an isolated outsourced engineering team.
As a team works heads-down on a project, optimizing to make constant forward progress, a lot of things can become normal to them, part of the status quo. Small annoyances, inefficiencies or debt may be accepted as reasonable trade-offs at one point in time, but then grow into issues that are never revisited or amended. Left unnoticed or unmanaged, the compound effect can end up crippling the team.
In this article we will explore some of the most common biases that can sneak up on teams and directly affect team morale and performance, and some of the techniques we employ to overcome them.
The following list is not exhaustive and if you’d like to learn more about our thoughts on this topic, we invite you to reach out.
Status Quo Bias
“It is what it is.”
Sometimes teams recognize that something is a problem, but believe it can’t be solved because of previous failed attempts. For example, “the test suite is too slow to run and everyone hates it, but Mark already tried fixing it and there’s nothing we can do about it.” This is a dangerous type of fallacy and (beyond having slow tests, which is bad enough) it can lead to a feeling of learned helplessness that will affect morale and stifle innovation in other places as well.
This problem gets worse over time, as each occurrence compounds and reinforces behaviors in a kind of broken-windows effect. Each time the pursuit of craftsmanship and common sense is relinquished to the ruthless circumstances imposed by deadlines, technical debt, inflexible processes or other factors the team perceives to be out of their control, it’s not just the codebase that takes a hit, but the team’s psychology as well. That hit reverberates over time and manifests in new problems and new compromises, feeding a downward spiral of morale and quality.
Make no mistake, these external pressures (such as deadlines) exist for valid reasons and are not going away. The goal is not to get rid of them, but to understand their trade-offs and impacts in a holistic way, for example when planning how and when to pay down technical debt.
Bandwagon Fallacy
“Who am I to judge?”
Someone may recognize that there is a problem affecting the team or the code, but they may think they shouldn’t bring it up or do something about it because others don’t seem bothered by it.
For example, “The way we do engineering estimations doesn’t make sense, we don’t do enough research and backlog refinement beforehand and we end up changing our priorities and commitments all the time during the sprint... But others seem to like it so I must be wrong, this obviously seems right to everyone else”.
The risk, of course, is that others may be thinking the exact same thing, or that they simply haven’t considered that perspective and are missing out on valuable insight.
Availability Bias
“If you only have a hammer, every problem starts looking like a nail.”
Sometimes an availability bias the team doesn’t realize it has will artificially tip the scales when weighing the pros and cons of designing and selecting a technology stack, or of implementation decisions for new features, infrastructure, testing strategies, etc.
This bias can also manifest in more subtle but pervasive ways in software communities that share a common principle or belief, such as Ruby on Rails’ design mantra of “convention over configuration”. In our opinion, these types of proverbs are the food of the wise, but the liquor of the fool.
Consistency, minimalism and standardization in your technology environment can be great things, but not at the expense of choosing the right tool for the job. As with all other biases, the availability bias is a direct enemy of reasoning from first principles and can lead to unpredictable or suboptimal decisions.
Anchoring Bias
This is the reason planning poker exists: the first data points one hears cement a sense of contextual boundary that reduces the malleability and creativity of one’s reasoning.
For example, if someone says “this task should take about 1 day to complete, right?” one may feel some unseen pressure to reply with “um no, I think it would take at least double that”, whereas if the original question had been “how long do you think this task would take?”, the answer could have been something like “about 1 week”.
This happens when we plan and estimate our work, when we design solutions, when we do anything, really. Think about how much your competitors’ decisions or features influence your own, how difficult it is to forge your own path instead of following others, and how crucial that independence is to innovation.
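To make the mechanism concrete, here is a minimal sketch, in Python, of the blind-reveal step that makes planning poker work. All names and numbers are invented for illustration; the point is simply that everyone commits to an estimate before anyone speaks, and a wide spread becomes a prompt for discussion rather than a number to anchor on.

```python
# A hypothetical sketch of a planning-poker style "blind reveal".
# Estimates are committed independently and revealed simultaneously,
# so the first number spoken can't anchor everyone else's.

def reveal_round(estimates: dict[str, float], spread_threshold: float = 2.0) -> None:
    """Reveal all committed estimates at once and flag a wide spread."""
    for member, days in estimates.items():
        print(f"{member}: {days} days")

    spread = max(estimates.values()) - min(estimates.values())
    if spread >= spread_threshold:  # threshold is arbitrary; tune it per team
        print(f"Spread of {spread} days: discuss assumptions, then re-vote.")

# Example: three engineers estimate the same task without hearing each other first.
reveal_round({"ana": 1.0, "ben": 2.0, "cam": 5.0})
```

The valuable output is not the average; it is the conversation triggered when one person says 1 day and another says 5.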
Not Invented Here
In our experience, NIH, which is also related to the IKEA effect, most commonly affects bigger enterprises as they start developing custom internal tools and services and gain a certain momentum to keep going past the point of diminishing returns.
The moment an organization invests in a center of excellence, a dedicated Platform or Tooling team, or other long-lived (as opposed to mission-driven) initiatives of that nature, it first sees a wave of valuable low-hanging-fruit contributions that consolidate, optimize and enable, followed by another wave of slightly less impactful contributions, followed by another, and another, asymptotically converging on a level of impact that may not live up to the original impetus and expectations.
It’s often in these situations that companies, for example, start developing “their own AWS Lambda” instead of using AWS Lambda. There is a promise of lower costs, reduced vendor lock-in and increased flexibility, but in practice that promise may never be realized, while the real AWS Lambda continues to improve and become more cost-effective every day. Because of the sunk cost fallacy, even in cases where it’s obvious to everyone that switching to AWS Lambda would be the better business decision, some teams still struggle for years with a fragile, unfinished, inferior solution.
That said, this bias absolutely affects small teams as well. As a trivial but illustrative example: think about how many “date-picker” libraries exist out there, and consider how many should actually exist. Of course many are created for fun or for educational purposes, but we’ve still seen many engineering teams spend their time developing custom UI components instead of solving business-domain problems.
Confirmation Bias
This one is particularly sneaky: we may be completely wrong about something, and yet each new thing we learn about the situation can convince us more and more that we are right.
For example, one could conclude that a team is struggling because they are the only team not filling out sprint reports on time. Biases also combine: due to the halo effect, one may intuitively assume that the team must be a low performer in general.
In practice, though, it could be that whether those sprint reports are filled out has no correlation with performance at all. Yet each time a report is late, we become more and more convinced of just how poor the team’s performance must be.
Even in the face of new evidence that indicates otherwise (e.g. business stakeholders directly praising the work done by that team), confirmation bias makes us less receptive to it, and we may discard those data points as flukes, lucky exceptions, polite platitudes or something else.
Another common way this manifests is through Parkinson’s law: “work expands so as to fill the time available for its completion.” One could estimate that a task should take X amount of time, then execute it in exactly X and think “I was right, it did take X. My estimates are 100% accurate”, when it’s entirely possible that the task should have taken X/2.
We could (and would love to) keep listing biases, but we’ll wrap up with this one to keep this article readable. In a future entry, we will focus on other kinds of biases and cultural traits that affect the recruiting process and psychological safety of the team.
Treating Biases
These biases (joined by many others) are incurable, but treatable. There is no silver bullet or set of directions one can follow to treat them, but there are many things that can help.
Support Mechanisms
First of all, one has to actively invest in the performance and well-being of the team. At Lemma, our Engineering Managers coach and support our teams and have a long-lived goal of fostering a work environment and culture in which engineers can do their best work.
Recognizing biases is a key aspect of that mission. One of our priorities in our interview process is to look for experienced, empathetic leaders who know what signs to look for, how to build trust, and how to nurture a culture in which biases are less likely to run rampant and are treated when they are spotted.
Collaboration
We’ve talked a lot in other articles about Peer Review, and we think this is yet another case of how the right team synergy and collaboration can greatly mitigate unconscious biases.
Estimating and planning work together (following best practices like planning poker), designing and reviewing solutions together, getting help from others during implementation, reviewing the resulting implementation together: all of these practices help reduce blind spots, enable debate and the sharing of perspectives, break down knowledge silos and more.
Team “retrospectives” are also useful ceremonies for reflecting on things that may be hindering the team. These can be complemented with other exercises and tools that facilitate individual and/or anonymized feedback, such as 1:1 meetings and monthly anonymous surveys.
Psychological Safety
We believe the topic of psychological safety deserves a future article of its own, so we won’t cover much here, but we couldn’t go without mentioning it in the context of this article either.
Psychological safety is not a direct or sufficient solution to all team problems, but it certainly is a prerequisite to being able to solve them. Without a culture where creativity, empathy, candor and craftsmanship can thrive, there are fewer guardrails to protect us against unconscious biases.
Institutional Knowledge
Institutional knowledge (as opposed to tribal knowledge) can better equip teams to deal with problems.
Keeping a RAID log (risks, assumptions, issues and dependencies), for example, is a simple and effective way to recognize and track problems that are not the highest priority right now, that will have to be addressed over a long period of time, or that only matter under specific circumstances.
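As an illustration only, here is a small Python sketch of the kind of structure a RAID log entry might capture; the fields and values are our assumptions for this example, and in practice a spreadsheet or wiki table works just as well.

```python
# A hypothetical sketch of a lightweight RAID log entry.
# The fields and example values are invented, not a prescribed format.
from dataclasses import dataclass
from datetime import date

@dataclass
class RaidEntry:
    category: str      # "risk", "assumption", "issue" or "dependency"
    summary: str       # what we know today
    revisit_when: str  # the trigger or timeframe for picking this back up
    owner: str
    logged_on: date

entry = RaidEntry(
    category="issue",
    summary="Test suite takes 45 minutes; engineers avoid running it locally.",
    revisit_when="Next quarterly review of CI spend and developer experience.",
    owner="platform team",
    logged_on=date(2025, 1, 15),
)
print(entry)
```

The exact tool matters far less than the habit: write the concern down, give it an owner, and name the condition under which it gets revisited.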
Similarly, creating GitHub issues to track technical debt and discuss it with the team can be the difference between “the tests will never run fast again” and “here is our plan to fix the test suite in the next 3 months”.
Last but not least, we’ve written an entire article before about the importance of documenting technical decisions made along the way. This not only helps us preserve our reasoning over time, but also helps us stay rigorous and disciplined in our analysis, and makes that analysis part of the peer review process.
Citizen Innovation
In our experience, the people doing the work every day are the best source for uncovering ways to innovate and raise the bar for performance. In many cases innovation may already be there, ripe for the picking, going unnoticed.
Lemma teams are routinely asked questions such as:
- What are the pain points the team has?
- What would they change if it were up to them?
- Why haven’t they changed those things already?
- What have they tried to change before, without success?
- What would they add to the project even if it’s not yet on any product roadmap?
- What would they invest in removing or simplifying?
- What are things that they deem as “important” but never quite “urgent”?
- What tools or mechanisms do they have to communicate those things?
- What is working well today that could still be improved?
- What have they learned, read and experienced that inspired them to drive change in the project?
We then work with our clients to communicate any feedback or learnings uncovered and ensure the team and our client are set up for success.
Community
While Lemma has many independent engineering teams, as a whole we operate as a community of practice. We encourage teams to consult with each other (within the boundaries of NDAs and other compliance requirements) and to learn from each other about what processes or tools work best for them, how they overcome challenges they find along the way, etc.
We also host monthly virtual meetups and frequently share news articles, podcasts, books and other resources, to learn together and be part of a community. Needless to say, we also memorialize many of the things we learn and opinions we form along the way in articles such as this one, among other places.
Any time that two teams talk to each other, that’s an opportunity for someone to say “wow, your test suite takes 45 minutes to run?! That’s crazy!” and many times, that alone is enough to dispel the enchantment that one unconscious bias or another had cast upon us.