Security breaches are no fun. There’s much advice about how to improve cybersecurity that boils down to: products, processes and priorities. No one talks about psychology. Now, what on earth does psychology have to do with cybersecurity?

Organizations are spending a lot of money on security products, to the tune of hundreds of billions of dollars, and this spend is growing faster than IT spend overall. [1] Yet losses due to cybercrime are growing even faster, to the tune of tens of trillions of dollars. [2]

Will increased security spending stop this trend? The Director of CISA doesn’t think so. In her keynote at the mWISE Conference 2024, she stated:

“We don’t need more security products — we need more secure products. Technology vendors are building problems into their products, opening doors for attackers.” [3]

How can technology vendors stop “building problems into their products”? Much has been written about how to get development teams to write secure software. Most suggestions boil down to:

  • Educating developers on secure coding.
  • Automating security issue detection with mandatory quality thresholds.
  • Elevating security to a CxO priority, on par with the cost and speed of software delivery.

What is not written about much is that developing software is a social activity, and that the social psychology of developers shapes how they write code. Without understanding that, tools, processes, education and prioritization will not make software secure, because developers won’t have been co-opted into that pursuit.

Secure Software is Quality Software

According to NIST, 65% of reported security vulnerabilities were due to “avoidable, ordinary programming errors”, aka bugs. [4]

That means we can reduce security risks by nearly two-thirds simply by eliminating bugs!
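
To make “avoidable, ordinary programming errors” concrete, here is a small illustration of my own (not taken from the NIST data): the classic injection bug, where user input is spliced into an SQL string. The vulnerable and fixed versions differ by one line:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

    def find_user_unsafe(name: str) -> list:
        # BUG: attacker-controlled input becomes part of the SQL text.
        # Passing "' OR '1'='1" returns every row in the table.
        query = f"SELECT * FROM users WHERE name = '{name}'"
        return conn.execute(query).fetchall()

    def find_user_safe(name: str) -> list:
        # FIX: a parameterized query treats the input as data, not SQL.
        return conn.execute(
            "SELECT * FROM users WHERE name = ?", (name,)
        ).fetchall()

    print(find_user_unsafe("' OR '1'='1"))  # leaks all rows
    print(find_user_safe("' OR '1'='1"))    # returns []
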

What about the other 35% of issues? Those stem from subtler interactions between pieces of code, or even between different programs, where no individual piece of code has an obvious bug. To address those, we need a combination of:

  1. Buy tools that automatically run static and dynamic security tests on software for known security issues, and, importantly, train developers on how to use the output of these tools.
  2. Hire security-focused testers and researchers who look for issues that are less well-known and not checked for automatically.
  3. Educate developers about the subtler aspects of security, e.g. correct use of cryptography, secure secret handling, nuances of authentication and authorization protocols, and least-privilege principles and techniques; a small sketch of one such subtlety follows this list.
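
As one concrete illustration of point 3 (a sketch of my own, not a prescribed curriculum), consider secure secret handling: keep secrets out of source code, and compare tokens in constant time. The environment variable name below is hypothetical:

    import hmac
    import os

    # Read the secret from the environment (or a secret manager) rather
    # than hardcoding it; "API_TOKEN" is a hypothetical variable name.
    EXPECTED_TOKEN = os.environ.get("API_TOKEN", "")

    def token_is_valid(presented: str) -> bool:
        # A plain `presented == EXPECTED_TOKEN` can leak, through timing,
        # how many leading characters matched; hmac.compare_digest
        # performs the comparison in constant time.
        return hmac.compare_digest(presented.encode(), EXPECTED_TOKEN.encode())
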

I am convinced that if your codebase is low-quality, that is, has a lot of technical debt, you won’t be able to eliminate simple programming errors or take more advanced steps to improve security.

If your codebase has a lot of technical debt:

  1. Making changes to code will introduce bugs inadvertently;
  2. It will take longer and longer to make changes to the codebase safely;
  3. Your developers will be so busy fighting deadlines and fixing inadvertent breakages that they won’t have any time to think about security.

You cannot make software secure without first making the codebase high quality.

How to Fail at Improving Quality

Long ago, I set out to improve the quality of my company’s codebase. I failed, miserably. Don’t be me.

Early in my career, after many years of mostly solo coding, I got invited to a startup as the first techie. I whipped up a prototype, we demoed, got funded, and things got real. I hired a small squad of geniuses, all smarter than me, and we built an MVP.

We got some initial success. We added new team members. We added features almost as quickly as we could imagine them. Our codebase grew quickly, and admittedly somewhat haphazardly. After a while, the codebase started feeling as fragile as a Jenga tower: new features would take forever, and surprise bugs kept crashing the release party. I took a critical look at the monster I had helped create.

I found that the quality of our codebase was in the dumps. My team was not at fault: I was an equal contributor to the rot. Some code was so inscrutable that it might as well have been encrypted. Rare was the comment that explained what the code did. Some functions were so long that they would qualify as a PhD thesis. Bugs got through because test coverage was inadequate. When we doubled down on writing tests, we found it hard to do so because much of the codebase was so convoluted that finding all the test scenarios was a major undertaking, and disheartening.

I had to rally the team to save the codebase before it toppled over! I tried a few things, roughly in this order:

  1. Exhortations. I called a team meeting, explained the problem, and showed some examples of problematic code. The team agreed and decided to do better. Result: squat. Nothing changed. Lesson: Good intentions don’t work; you need mechanisms. [5]

  2. Leading by Example. I took up a complex feature, wrote the best code I could, documented it well with beautiful comments, and smothered the code with tests. The team agreed this should be the gold standard. Result: diddly-squat. Nothing changed, just faster. Lesson: People place little value on behaviour modeled by those they see as unlike themselves. [6]

  3. Authority. A senior engineer discovered SonarQube, a code analysis and quality reporting tool. He added it to our build process and showed its reports to the team. We liked it. I told the team to look at what SonarQube reported after every build, and to fix the issues promptly. I started reviewing the reports in our regular team meetings. Result: code quality crept up, as long as I was reviewing the reports. When a crisis and some crunch-time kept me from reviewing them for a few weeks, quality slid back to its original level, presumably because the developers had decided the reports were no longer important to me, and therefore to them. Lesson: Authority doesn’t translate into a self-sustaining movement.

  4. Simplistic Metrics. Desperate, I proposed tracking bug counts per developer, hinting the numbers might influence performance reviews. The team hated the proposal, and I beat a hasty retreat. Still, some junior developers began avoiding work that touched the gnarlier parts of the codebase, to avoid inadvertently introducing bugs. I had to hold several 1:1s to rebuild their trust. Result: the damage to morale was so bad that “squat” felt like progress. Lesson: Avoid overly simplistic performance metrics, especially for measuring people. [7]

At some point, I could not sustain the effort, and gave up. We did ship a product and grew the user base to millions. Still, every new feature was an uphill struggle. It was… unpleasant.

Over time, I learnt more about human behaviour and did better at improving quality. I want to share some principles and techniques with you.

Principle: Social Proof

Social Proof is a discovery in social psychology: people use the actions of others to decide how to behave, especially when they view those others as similar to themselves.

This is a very robust finding, backed by a lot of research. Some examples follow.

  1. A brewery-pub in London started placing a sign on the bar stating, truthfully, that its most popular beer that week was its porter; porter sales doubled almost immediately. [8]

  2. A restaurant chain partnered with researchers to boost sales of certain menu items. Instead of lowering prices or adding expensive ingredients, they simply labeled dishes as “most popular.” Sales jumped 13–20%. Other labels like “House Specialty” or “Chef’s Recommendation” had little to no effect. Turns out, people love what’s already popular: an easy, ethical, and cost-free solution! [9]

  3. Dutch high school students increased fruit consumption by 35 percent when told their peers eat fruit to keep healthy, even though, upon receiving the information, they had claimed no intention to change. [10]

  4. During the COVID-19 outbreak, researchers studied the reasons Japanese citizens chose to wear face masks. Only one reason made a major difference in mask-wearing frequency: seeing other people wearing masks. [11]

  5. Researchers sent messages to households in San Marcos, California, asking them to conserve energy. There were four variants: “The environment will benefit”; “It’s the socially responsible thing to do”; “It will save you significant money on your next power bill”; and finally, the social-proof message, “Most of your fellow community residents do try to conserve energy at home.” The households getting the social-proof message saved 3.5 times as much energy as those getting any of the other messages. [12]

  6. In a further twist, the energy conservation study above had two control groups. Group A got no message about conserving energy at all. Group B got a message simply urging them to save energy. The difference between the two control groups was negligible, showing that simple exhortations don’t work.

The science is clear. People react strongly to the message “others are doing it”, and very little to other messages like “you should do it”, “it is good for you”, “it is good for the world”, etc.

Causes. Why does this happen? One explanation is that humans find it harder to defy other humans than impersonal information. A second is that knowing many others are doing something makes people believe the thing is feasible for them too.

  1. Researchers found that participants were less likely to conform to information from computers than from humans, even when they rated both as equally reliable. fMRI scans showed that defying humans activated the brain’s negative emotion center (the amygdala), causing what researchers called “the pain of independence.” Defying computers didn’t trigger the same emotional response, as there were no social consequences. [13, 14]

  2. Researchers studied households’ willingness to recycle waste when given different reasons to do so, in the Italian cities of Rome, Cagliari, Terni, and Macomer. They concluded that households were most willing to recycle if they believed that many of their neighbours recycled at home, in part because they saw recycling as less difficult to manage. [15]

Similarity. The effect is stronger when people perceive the others as being like themselves, and weaker when they seem different. Sometimes, only the message “other people like you are doing it” works.

  1. Some physicians routinely overprescribe some medicines, such as antibiotics. Of the several attempts to get them to change their behavior, only one method brought about lasting change: comparing their prescription rates to those of their peers. Researchers sent different messages about antibiotic prescribing guidelines to 248 registered clinicians. The most effective message was one that compared a clinician’s antibiotic prescribing rate with those of “top performers” (those with the lowest inappropriate prescribing rates). [16]

  2. Universities want their students to behave in a more inclusive manner towards marginalized groups. Researchers conducted six randomized controlled trials at a large public university in the United States, trying several interventions that targeted participants’ perception of social norms (i.e., what counts as acceptable behaviour towards marginalized groups). The interventions that worked best simply communicated to non-marginalized students that their peers hold pro-diversity attitudes and engage in inclusive behaviours. What worked less well were interventions that tried to raise awareness of implicit bias, or of how widespread subtle discrimination is. [17]

  3. Researchers found that employees were more likely to engage in information sharing if they saw this behaviour modeled by fellow coworkers rather than by managers. [6]

Backfiring. When informed that only a minority performs a desired action, people are reluctant to perform it themselves. If we implore people to stop doing something bad and let slip that, regrettably, many others are doing that bad thing, we might increase the undesirable behaviour. This finding is also robustly supported by research and observation.

  1. Despite guards, fences, warning signs, and the threat of fines, about 2.95% of visitors steal an estimated 11 tons of fossil wood every year from the Petrified Forest National Park. Researchers conducted an experiment in which they alternated a pair of signs in high-theft areas of the park. The first sign urged visitors not to take the fossil wood, with a picture depicting three thieves in action. It nearly tripled theft, to 7.92% of visitors. The second sign also urged visitors not to take the fossil wood, but had a picture depicting a lone thief. This reduced theft to 1.67%. [18]

  2. The US IRS announced that many citizens were cheating on their returns, so the agency was going to strengthen penalties for tax evasion. Tax fraud went up the following year. [19] There are innumerable examples of public service announcements that deplored bad behaviour and asked people to improve, yet failed to have a positive impact and in fact increased the undesirable behaviour.

Social proof of the future. Another quirk of human cognition is the “trend continuation” bias: when people see a change or trend, they expect it to continue. Suppose we cannot truthfully say that many others are doing something. We might still be able to say something about the trend. Because people assume a trend will continue in the same direction, it tells them where others’ behaviour is headed. Thus, trends give us access to future social proof.

  1. To study the effects of different messages on water conservation, researchers invited university students to brush their teeth with a “new” toothpaste over a sink in the lab and rate the toothpaste. The sink was equipped with a hidden water consumption meter. Before the “test”, students were made to wait their turn and given some literature to read. The control group read nothing about water conservation. The first test group read that only a minority of their peers conserved water; this group used more water than the control group! The second group read the same thing, but also that this minority was growing. This group used the least water. [20]

Software Development is a Social Activity

Programming looks like something done in isolation. A developer puts on her headphones, picks up a task from the company’s tracking system, and starts coding. No one would call this “social”. However, when coding, the developer is having a conversation, not in real-time but asynchronously, with the rest of the organization. New code almost always interacts with existing code, and the quality and style of that code inform and affect the developer’s own actions. The developer’s code may change existing code, and the developer makes decisions about where and how much. The developer’s code will often be used by others in the future: what is the right trade-off between readability and performance?

When a developer picks up a problem to solve, she makes many trade-offs using intuition, aka “gut feel”. When should a long function be refactored? What is the right approach when encountering a novel problem: (a) research the problem and all potential solutions thoroughly until she understands them well, regardless of how much time it takes, then select a solution; (b) research as before, but timebox the effort, and make the best possible decision based on whatever was learnt in that time; or (c) search for solutions and just use the first one that seems likely to work?

Software development is a collaborative effort, done under uncertainty. The principle of social proof affects the decisions developers make. It can be used to encourage good behaviour, or it can end up encouraging bad behaviour.

Use Social Proof to Improve Software Quality Sustainably

To improve software quality, one must of course start with the basics: (1) explain why; (2) get some tools to automate what we can, like static code analysis; (3) provide training and make the tools easy to use; (4) add some checks, like a go/no-go checkpoint in your CI/CD pipeline.
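
The go/no-go checkpoint in (4) need not be elaborate. Here is a minimal sketch, assuming your static-analysis tool can emit a JSON report shaped like {"issues": [{"severity": "critical"}, ...]}; the file format and thresholds are illustrative, not any specific tool’s output:

    import json
    import sys
    from collections import Counter

    # Illustrative thresholds, agreed with the team and versioned with the code.
    MAX_ALLOWED = {"critical": 0, "major": 5}

    def main(report_path: str) -> int:
        with open(report_path) as f:
            report = json.load(f)
        counts = Counter(issue["severity"] for issue in report["issues"])
        exit_code = 0
        for severity, limit in MAX_ALLOWED.items():
            if counts[severity] > limit:
                print(f"FAIL: {counts[severity]} {severity} issues (limit {limit})")
                exit_code = 1  # a non-zero exit fails the pipeline stage
        return exit_code

    if __name__ == "__main__":
        sys.exit(main(sys.argv[1]))
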

This is not enough. It is very likely that developers will do the bare minimum to get through the process. The organization will need to make a large investment in risk and compliance to keep quality up.

There is a better way. We can co-opt the principle of social proof to motivate developers and teams across the organization to improve code proactively and reinforce good practices.

After doing the basics, ensure that there is a standard measure of quality across teams. Establish team-level reviews, where the quality of code owned by the team is reviewed. Ensure the chosen tool or dashboard can show metrics for a given team alongside those of other teams and the organization overall.
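
The specific tool matters less than the comparison it makes visible. As an illustration (the team names and scores below are invented), the view each team sees could be as simple as:

    from statistics import mean

    # Hypothetical per-team quality scores, e.g. 100 minus weighted issue density.
    scores = {"payments": 62, "search": 88, "mobile": 71, "platform": 90}

    org_avg = mean(scores.values())
    print(f"Organization average: {org_avg:.1f}")
    for team, score in sorted(scores.items(), key=lambda kv: -kv[1]):
        position = "above" if score >= org_avg else "below"
        print(f"{team:>10}: {score}  ({position} the average)")
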

Initially, authority and inspection are needed to get some positive improvements. Code quality will creep up, with some effort. Find one or two teams that are making progress.

To the teams not making progress, point out the ones that are. Emphasize that while the teams making quality improvements are in a minority right now, the number of such teams is growing.

At some point, enough teams will be engaged that much of the code is high quality. You can now switch to the message that a majority of teams keep code quality high while meeting their deadlines.

Watch the laggard teams absorb and internalize this information, and change.

For even better results, create forums where developers and teams can ask the high performers “how did you do that!?”, and watch your development organization teach itself how to balance speed and quality well.

References

  1. See Double-Digit Revenue Growth for Security Products in 2023 is Forecast to Continue Through 2028, According to IDC

  2. See Statista: Cybercrime Expected To Skyrocket in Coming Years

  3. See CISA Boss: Makers of insecure software must stop enabling today’s cyber villains

  4. See NIST: What Proportion of Vulnerabilities can be Attributed to Ordinary Coding Errors 

  5. Jeff Bezos, founder of Amazon, observed that asking people to try harder doesn’t solve recurring problems. People are already doing their best. They have good intentions. At best, people will put in extra hours and heroically rescue a bad situation. The solution is a change to the system that is preventing people from solving the problem permanently. He coined the term “mechanism” to describe the criteria for acceptable solutions. To learn more, see: (1) AWS Well Architected Framework: Operational Readiness Review: Building mechanisms. (2) How we leverage mechanisms at Amazon? - Zak Islam, Director of Engineering at AWS

  6. See Managers versus co-workers as referents: Comparing social influence effects on within- and outside-subsidiary knowledge sharing by Boh and Wong, 2015, in Organizational Behavior and Human Decision Processes

  7. KPIs, OKRs, targets, and other quantitative metrics are powerful tools, but when applied to knowledge work requiring high judgment, they can easily cause lasting and deep damage. The key is to use them judiciously. I wrote at length about this topic in a previous article.

  8. See The Choice Factory: 25 behavioural biases that influence what we buy by Richard Shotton, 2018.

  9. See Observational Learning: Evidence from a Randomized Natural Field Experiment by Cai et al, 2009, in American Economic Review

  10. See Don’t tell me what I should do, but what others do: the influence of descriptive and injunctive peer norms on fruit consumption in adolescents by Stok et al, 2014, via the National Library of Medicine, National Institutes of Health

  11. See Why Do Japanese People Use Masks Against COVID-19 by Nakayachi et al, 2020, in Frontiers in Psychology

  12. See Normative social influence is underdetected by Nolan et al, 2008, in Personality and Social Psychology Bulletin

  13. See Neurobiological correlates of social conformity and independence during mental rotation by Berns et al, 2005, in Biological Psychiatry 

  14. See Neuroscience and the social origins of moral behavior: How neural underpinnings of social categorization and conformity affect everyday moral and immoral behavior by Ellemers & van Nunspeet, 2020, in Current Directions in Psychological Science

  15. See Distinguishing the sources of normative influence on proenvironmental behaviors: The role of local norms in household waste recycling by Fornara et al, 2011, in Group Processes & Intergroup Relations

  16. See Effect of Behavioral Interventions on Inappropriate Antibiotic Prescribing Among Primary Care Practices: A Randomized Clinical Trial by Meeker et al, 2016, in the Journal of the American Medical Association

  17. See Exposure to peers’ pro-diversity attitudes increases inclusion and reduces the achievement gap by Murrar, Campbell & Brauer, 2020, in Nature Human Behaviour

  18. See Crafting Normative Messages to Protect the Environment by Cialdini, 2003, in Current Directions in Psychological Science

  19. See Social Influence, Social Meaning, and Deterrence by Kahan, 1997 in Virginia Law Review

  20. See Trending Norms: A Lever for Encouraging Behaviors Performed by the Minority by Mortensen et al, 2017, in Social Psychological and Personality Science