View this article on Dark Reading.
Amid a feverish cybersecurity environment, there is a growing chorus for software to be secure by design. In April, the US Cybersecurity and Infrastructure Security Agency (CISA) and the National Security Agency (NSA), aligned with the cybersecurity authorities of Australia, Canada, the United Kingdom, Germany, the Netherlands, and New Zealand to create guidelines aimed at supporting software manufacturers to "embed security-by-design and by-default."
In this new paper, the agencies call on software makers to deploy threat modeling at the design stage. These guidelines follow hot on the heels of news that the US government plans to legislate to make software makers liable for the security of the products they manufacture.
All software developers want to build secure software, so why is it so difficult to do, what does effective security by design look like, and what needs to change to embed it in the software development process?
The sheer prevalence of cybersecurity breaches is evidence of the huge challenge faced by developers trying to build secure software. Striving to get their products to market quickly, software manufacturers are incentivized to take shortcuts on security. And the challenge of designing secure software is becoming more difficult as software architecture grows in complexity, with every sector of the economy being transformed by software. The White House's recently stated intention to hold vendors accountable for poor software security could be seen as an attempt to correct these market incentives.
This is especially the case with supply chains, which are getting ever more complex, making it difficult to predict how different pieces of software will interact. We've seen this challenge in the growing trend of supply chain attacks that have affected businesses, including Air France, KLM, and Nissan in the past year.
Security by Design and Threat Modeling
It is still the case that most software security activity is focused on the end of the development process, and this creates problems. First, application security testing tools that scan code can miss more complex flaws in an application's design. Second, a bug identified once development is complete is costly and time-consuming to remediate.
It is much better to identify and address security flaws before code is written, through the process of threat modeling. There are a number of different approaches to threat modeling, but fundamental to them all is analyzing the design of the system as a cross-functional team: development and security teams come together to identify potential security and privacy issues and to develop a plan to solve or mitigate them.
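To make the process concrete, here is a minimal, hypothetical sketch of one common approach, a STRIDE-style pass over a system's components. Everything in it (the `Component` and `Threat` classes, the rule that components crossing a trust boundary get every STRIDE category flagged) is an illustrative assumption, not a reference to any specific tool or methodology beyond STRIDE itself.

```python
from dataclasses import dataclass

# The six STRIDE threat categories.
STRIDE = [
    "Spoofing", "Tampering", "Repudiation",
    "Information Disclosure", "Denial of Service", "Elevation of Privilege",
]

@dataclass
class Component:
    """A node in the system's design diagram (names are illustrative)."""
    name: str
    crosses_trust_boundary: bool = False

@dataclass
class Threat:
    """One identified threat, with a mitigation to be assigned in review."""
    component: str
    category: str
    mitigation: str = "TBD (assign owner in design review)"

def enumerate_threats(components):
    """Naive pass: flag every STRIDE category for each component that
    crosses a trust boundary. Real sessions prune and prioritize."""
    threats = []
    for comp in components:
        if comp.crosses_trust_boundary:
            for category in STRIDE:
                threats.append(Threat(comp.name, category))
    return threats

system = [
    Component("login-api", crosses_trust_boundary=True),
    Component("internal-cache"),
]
for t in enumerate_threats(system):
    print(f"{t.component}: {t.category} -> {t.mitigation}")
```

In practice the output of such a pass is the agenda for the cross-functional review: each flagged threat is either mitigated, accepted, or ruled out, with an owner recorded.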
So far, so straightforward. So why isn't this happening? There seem to be three main barriers: skills, responsibility, and practicality.
Embedding Security by Design
A fundamental challenge is that many developers enter the workplace without the technical knowledge to build secure software and with little or no experience of threat modeling. It is a skill that requires investment and takes time to learn. The focus of the developer is, understandably, on the functionality they are building, not on how a threat actor might find a vulnerability in that new functionality.
This leads us to the second barrier: a lack of clarity over where responsibility for security at the design stage lies, which means that in many businesses threat modeling falls through the cracks. Despite their fundamental role, development teams often view security as the responsibility of the security team. This is also entirely understandable, given that in most businesses the knowledge of the threat-modeling process and of the security risks is held by the security team. Just as you can't design secure software without the engineers, you can't build secure software without the security team's insight into the evolving attack vectors used by threat actors.
Until these two teams are working together at the very start of the software development process and threat modeling is embedded as a community practice with shared responsibility, this problem won't be solved.
The third barrier is that until fairly recently, traditional approaches to threat modeling have been impractical when developing software on a large scale. For an organization that is building many thousands of applications, the traditional approach to threat modeling, a group in a meeting room with a whiteboard, simply doesn't scale. However, automation of this process is now a reality: a developer can use tooling to generate a threat model containing the relevant threats and countermeasures.
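One way such automation can work, sketched below under stated assumptions, is a rules-based generator: declared properties of each component are mapped to candidate threats and countermeasures. The rule table, property names, and `generate_threat_model` function are all hypothetical illustrations of the pattern, not the API of any real product.

```python
# Illustrative rule table: a declared component property maps to a
# (threat, countermeasure) pair. Real tools ship far richer rule sets.
RULES = {
    "internet_facing": (
        "Credential stuffing / brute force",
        "Rate limiting, MFA, account lockout policy",
    ),
    "stores_pii": (
        "Information disclosure of personal data",
        "Encryption at rest, least-privilege access",
    ),
    "third_party_dependency": (
        "Supply chain compromise",
        "Dependency pinning, SBOM, signature verification",
    ),
}

def generate_threat_model(components):
    """components: dict mapping component name -> set of declared properties.
    Returns a list of threat-model entries derived from the rule table."""
    model = []
    for name, props in components.items():
        # Only properties with a matching rule contribute entries.
        for prop in sorted(props & RULES.keys()):
            threat, countermeasure = RULES[prop]
            model.append({
                "component": name,
                "threat": threat,
                "countermeasure": countermeasure,
            })
    return model

for entry in generate_threat_model({"checkout": {"internet_facing", "stores_pii"}}):
    print(f"{entry['component']}: {entry['threat']} -> {entry['countermeasure']}")
```

The appeal of this pattern is that the security team encodes its expertise once, in the rules, while developers only declare what their components do, which is exactly the shared-responsibility split the guidance calls for.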
This latest guidance from the world's leading cybersecurity agencies should leave us in no doubt that security by design is no longer just best practice — it has to become a fundamental part of software development. The tools exist now to make it possible to achieve, but it must be a shared endeavor, with development and security teams working closely together before a line of code is written.