Building Secure Software: Top 11 Best Practices in Cybersecurity
From Threat Modeling to Fuzzing: All You Need to Know about Software Security
Hello Cyber Builders 🙋🏻‍♂️
This week, I am continuing my series on Software Product Security. I am focusing on the practices CTOs and software engineers should implement in their day-to-day engineering work.
The 11 practices I am highlighting are extracted from various standards, including what NIST sees as a “minimum” practice set for software vendors. I am emphasizing minimum because I know that, for many developers, this is already a lot!
This post describes each practice, its value for software engineers, and what you need to know to implement it.
This post is part of a series:
🔗 The Future of Cybersecurity: Everyone is a Software Producer. As software takes command, a new cybersecurity battlefield - software product security and its supply chain
🔗 Securing Software Supply Chains Start by Empathy. Exploring the Operational Triad of Software Product Security - Developers, Business Teams, and Product Security Officers
🔗 Understanding the Impact of the EU Certification Scheme on Cyber Builders. The new EU Certification Scheme is a nice first move but raises many questions about cost, complexity, and communication.
In This Post
This post discusses 11 practices that CTOs and software engineers should implement for software product security.
The practices include threat modeling, automated testing, static source code analysis, credentials leakage detection, runtime protections, black box testing, code-based structural unit testing, regression testing, fuzzing, automated web scanners, and third-party and open-source dependencies management.
Each practice has value and benefits in enhancing software systems' overall security and integrity.
Implementing these practices can help identify vulnerabilities early, prevent unauthorized access or data breaches, and ensure compliance with coding standards.
Implementing these practices early in the software development process and maintaining them throughout its lifecycle is essential.
Continuous monitoring and vigilance are necessary to address newly reported vulnerabilities and prevent weak links in the software chain.
Implementing these practices requires time, resources, and expertise in cybersecurity, but they are crucial for building robust and secure software.
Threat Modeling
Threat modeling is a proactive approach to securing software systems. It's a method where we abstractly represent the system under consideration, profile potential attacker personas, their possible objectives, and the tactics they might employ. It maps out potential threats to gain a holistic view of security challenges.
In the realm of cybersecurity, threat modeling offers substantial value to software engineers. It can illuminate blind spots in the security design, allowing engineers to address vulnerabilities before they're exploited. It also helps strategically focus their verification efforts, ensuring security measures are effectively placed and robustly tested.
Threat modeling should be implemented early in the software development process. It's an integral part of the design phase: as soon as you have an initial draft of system architecture, you should begin threat modeling. Early implementation allows for detecting and remedying security loopholes before they become deeply embedded in the system's design, saving time and resources.
One critical bottleneck is the complexity of creating accurate models that truly reflect all aspects of the system. Furthermore, predicting potential attacker profiles and their methods requires a solid understanding of evolving cyber threat landscapes. The process can be time-consuming and requires expertise in cybersecurity, which not all software engineering teams possess.
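To make this more concrete, here is a minimal sketch in Go showing how a lightweight threat model can be captured as data next to the code. The components, attacker personas, and mitigations are hypothetical placeholders; frameworks such as STRIDE offer a more structured catalogue of threat categories than this toy example.

```go
package main

import "fmt"

// Threat captures one line of a lightweight threat model: which component is
// targeted, which STRIDE category applies, who the attacker is, and how the
// risk is mitigated.
type Threat struct {
	Component  string
	Category   string // STRIDE: Spoofing, Tampering, Repudiation, Information disclosure, Denial of service, Elevation of privilege
	Attacker   string
	Scenario   string
	Mitigation string
}

func main() {
	// Hypothetical components of a web application; adapt to your own architecture.
	model := []Threat{
		{"API gateway", "Spoofing", "External attacker", "Stolen API token reused from another client", "Short-lived tokens, mutual TLS"},
		{"Database", "Information disclosure", "Malicious insider", "Direct read of customer records", "Least-privilege roles, audit logging"},
		{"Build pipeline", "Tampering", "Supply-chain attacker", "Injected dependency at build time", "Pinned versions, checksum verification"},
	}

	for _, t := range model {
		fmt.Printf("[%s] %s by %s: %s -> mitigated by %s\n",
			t.Component, t.Category, t.Attacker, t.Scenario, t.Mitigation)
	}
}
```

Keeping even a simple table like this under version control, next to the architecture it describes, makes it much easier to revisit the model when the design changes.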
Third-Party and Open-Source Dependencies Management
Understanding your code's security involves monitoring your developments and ensuring that all additional libraries, packages, and services you incorporate into your software are equally secure. This means constantly scanning and checking against databases of known vulnerabilities to ensure there are no weak links in your software chain. Newly reported vulnerabilities may affect existing components, hence the need for continuous vigilance.
For software engineers, this practice is invaluable as it not only strengthens the overall security of your software but also helps you to prevent potential issues that could arise from any weak component in the system. It's a proactive approach that can save time and resources.
This practice is best implemented right from the early development stages and maintained throughout the software's lifecycle. Making it a standard part of your development process ensures that no overlooked component exposes your software to risk.
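As an illustration, here is a minimal Go sketch that asks the public OSV.dev vulnerability database whether a single dependency version has known issues. The package name and version are placeholders; in practice, you would rely on a dedicated scanner (Dependabot, OSV-Scanner, Snyk, and others) wired into your CI rather than hand-rolled queries.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

// query matches the request format of the OSV.dev API (https://api.osv.dev).
type query struct {
	Version string `json:"version"`
	Package struct {
		Name      string `json:"name"`
		Ecosystem string `json:"ecosystem"`
	} `json:"package"`
}

func main() {
	// Placeholder dependency: replace with the packages from your own lock file.
	var q query
	q.Version = "1.3.0"
	q.Package.Name = "github.com/example/somelib"
	q.Package.Ecosystem = "Go"

	body, err := json.Marshal(q)
	if err != nil {
		log.Fatal(err)
	}
	resp, err := http.Post("https://api.osv.dev/v1/query", "application/json", bytes.NewReader(body))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// The response lists known vulnerabilities ("vulns") affecting that version, if any.
	var result struct {
		Vulns []struct {
			ID      string `json:"id"`
			Summary string `json:"summary"`
		} `json:"vulns"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
		log.Fatal(err)
	}
	if len(result.Vulns) == 0 {
		fmt.Println("no known vulnerabilities reported for this version")
		return
	}
	for _, v := range result.Vulns {
		fmt.Printf("%s: %s\n", v.ID, v.Summary)
	}
}
```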
Automated Testing
Automated testing is a method of consistently, accurately, and efficiently executing tests, reducing the need for manual effort. When incorporated into your workflow, it can repeatedly run tests with every new commit or before an issue is closed, making it a powerful tool for detecting software bugs early in development.
The main bottleneck in automated testing can be the initial setup and integration into your existing workflow or issue-tracking system. This may require some time and resources, but once done, it drastically reduces the time spent on manual testing. Other challenges include writing effective test cases and maintaining them as your software evolves.
📌 I have seen many software engineering teams reluctant to move to QA automation. Indeed, there is always something more urgent. QA automation is an investment: over time, it pays off by catching bugs early in the development cycle, bugs that are never even tracked or seen because developers fix them before committing new code.
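Here is a minimal sketch of what this looks like in Go: a table-driven test that `go test ./...` executes automatically on every commit once your CI pipeline calls it. The `Slugify` function is a hypothetical example, defined inline so the snippet is self-contained.

```go
package slug

import (
	"strings"
	"testing"
)

// Slugify is a hypothetical helper turning a title into a URL-friendly slug.
// In a real project it would live in slug.go, with the test in slug_test.go.
func Slugify(s string) string {
	s = strings.TrimSpace(strings.ToLower(s))
	return strings.ReplaceAll(s, " ", "-")
}

// TestSlugify is a table-driven test that runs on every commit once
// `go test ./...` is wired into the CI pipeline.
func TestSlugify(t *testing.T) {
	cases := []struct {
		name, in, want string
	}{
		{"lowercases", "Hello World", "hello-world"},
		{"trims spaces", "  spaced  ", "spaced"},
		{"empty input", "", ""},
	}
	for _, c := range cases {
		t.Run(c.name, func(t *testing.T) {
			if got := Slugify(c.in); got != c.want {
				t.Errorf("Slugify(%q) = %q, want %q", c.in, got, c.want)
			}
		})
	}
}
```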
Static Source Code Analysis
Static analysis tools, essentially code scanners, inspect your source code for possible vulnerabilities and deviations from coding standards. When integrated into CI/CD pipelines, they let you uphold your coding standards across the organization without requiring manual intervention at every stage.
Static source code analysis is not without challenges. It may generate false positives that consume time and effort, leading to delays. Software engineers often push back on these tools when the signal-to-noise ratio is too low.
📌 I recommend starting small and scaling up. Do not begin by scanning the entire code base with the scanner's default ruleset. Start with a sensitive software module and carefully review the rules. If something is found, it is valuable for everyone: it shows early success and builds trust in the tool.
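To illustrate the mechanism, here is a toy checker built with Go's standard `go/ast` package. It parses a single file and flags every call to `exec.Command`, a pattern many rulesets treat as a command-injection risk. Real scanners such as gosec or Semgrep apply hundreds of such rules with far more context; this is only a sketch of how they work under the hood.

```go
package main

import (
	"fmt"
	"go/ast"
	"go/parser"
	"go/token"
	"log"
	"os"
)

// A toy static checker: parse one Go source file and report every call
// to exec.Command, so a human can review it for command injection.
func main() {
	if len(os.Args) < 2 {
		log.Fatal("usage: checker <file.go>")
	}

	fset := token.NewFileSet()
	file, err := parser.ParseFile(fset, os.Args[1], nil, 0)
	if err != nil {
		log.Fatal(err)
	}

	ast.Inspect(file, func(n ast.Node) bool {
		call, ok := n.(*ast.CallExpr)
		if !ok {
			return true
		}
		if sel, ok := call.Fun.(*ast.SelectorExpr); ok {
			if pkg, ok := sel.X.(*ast.Ident); ok && pkg.Name == "exec" && sel.Sel.Name == "Command" {
				fmt.Printf("%s: call to exec.Command, review for command injection\n", fset.Position(call.Pos()))
			}
		}
		return true
	})
}
```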
Credentials Leakage Detection
Detecting hardcoded secrets has become an essential aspect of software development. These secrets include S3 bucket credentials, API tokens, passwords, and private encryption keys. Heuristic tools can help identify them by looking for specific string patterns or for interfaces that require such parameters.
Detecting and eliminating hardcoded secrets can help prevent unauthorized access or data breaches, thereby maintaining the system's integrity and safeguarding sensitive information.
The review for hardcoded secrets should ideally be implemented during the development phase. Integrating this as a part of routine code review checks can make it an integral part of the software development lifecycle, ensuring that security is built in from the ground up.
📌 In the Uber breach, hackers accessed a private GitHub repository used by Uber software engineers, which contained credentials for an Amazon Web Services (AWS) account. This allowed the hackers to infiltrate the AWS account and access data related to Uber's ride-sharing service. The breach compromised the personal information of 57 million Uber accounts worldwide, including names, email addresses, and phone numbers. It also exposed details of over 600,000 driver's licenses.
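This kind of heuristic detection can be sketched in a few lines of Go: a handful of regular expressions run over a file, line by line. Dedicated secret scanners such as gitleaks or trufflehog ship far larger rule sets plus entropy-based checks and are typically run as pre-commit hooks and in CI; the patterns below are only illustrative.

```go
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"regexp"
)

// A few heuristic patterns for common secret formats.
var patterns = map[string]*regexp.Regexp{
	"AWS access key ID":  regexp.MustCompile(`AKIA[0-9A-Z]{16}`),
	"private key header": regexp.MustCompile(`-----BEGIN (RSA|EC|OPENSSH) PRIVATE KEY-----`),
	"hardcoded password": regexp.MustCompile(`(?i)password\s*[:=]\s*["'][^"']{8,}["']`),
}

func main() {
	if len(os.Args) < 2 {
		log.Fatal("usage: secretscan <file>")
	}
	f, err := os.Open(os.Args[1])
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Scan the file line by line and report any match with its location.
	scanner := bufio.NewScanner(f)
	line := 0
	for scanner.Scan() {
		line++
		for name, re := range patterns {
			if re.MatchString(scanner.Text()) {
				fmt.Printf("%s:%d: possible %s\n", os.Args[1], line, name)
			}
		}
	}
	if err := scanner.Err(); err != nil {
		log.Fatal(err)
	}
}
```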
Runtime Protections
Cybersecurity is not only about protecting systems from external threats; it is also about building robust software from the ground up. Programming languages, whether compiled or interpreted, provide various safeguards to prevent common programming errors that can lead to security vulnerabilities.
One key aspect of software runtime protection is dynamic analysis, which involves monitoring the behavior of an application while it is running. This allows for detecting any suspicious or anomalous activities that may indicate the presence of malware or unauthorized actions.
But it might also start with more basic controls, such as detecting out-of-bounds accesses within process memory. Some modern languages like Golang offer this protection by default, making them particularly interesting for developing server-side software.
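The snippet below illustrates this in Go: slice accesses are bounds-checked at runtime, so an out-of-range index triggers a controlled panic instead of silently reading or corrupting adjacent memory, as it could in C.

```go
package main

import "fmt"

// Go performs bounds checking on every slice access at runtime.
// An out-of-range index causes a controlled panic rather than
// undefined behavior on adjacent memory.
func main() {
	defer func() {
		if r := recover(); r != nil {
			fmt.Println("caught runtime error:", r)
		}
	}()

	data := []byte("abc")
	i := 10              // imagine an attacker-influenced index in a real scenario
	fmt.Println(data[i]) // panics: index out of range [10] with length 3
}
```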
Black Box Testing
"Black box" testing is a strategy where you test a system, in this case, software, without knowledge of its internal workings. In other words, you see it as a "black box" with inputs and outputs but no visibility into how it processes them to produce the results. This approach can help ensure that your software meets functional specifications or requirements, behaves as expected when presented with invalid inputs or extreme load conditions (denial of service and overload attempts), effectively handles input boundaries, and adequately manages various combinations of inputs.
One potential bottleneck with this testing methodology is that it may not provide a comprehensive view of the system's functionality. Since it doesn't involve looking at the inner mechanisms of the software, some defects might go unnoticed. This limitation highlights the importance of combining "black box" testing with other methodologies for a more thorough evaluation.
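As a sketch, here is a Go test that drives a hypothetical registration endpoint purely through its public interface and only checks that malformed, overlong, or malicious-looking inputs never cause a server-side failure. The URL and form field are placeholders for your own system under test.

```go
package blackbox

import (
	"net/http"
	"net/url"
	"strings"
	"testing"
)

// baseURL points at the running system under test; this value is a placeholder.
const baseURL = "http://localhost:8080/api/register"

func TestRejectsMalformedInput(t *testing.T) {
	cases := []struct {
		name  string
		email string
	}{
		{"empty input", ""},
		{"overlong input", strings.Repeat("a", 100000) + "@example.com"},
		{"special characters", `" OR 1=1 --`},
	}
	for _, c := range cases {
		t.Run(c.name, func(t *testing.T) {
			resp, err := http.PostForm(baseURL, url.Values{"email": {c.email}})
			if err != nil {
				t.Fatal(err)
			}
			resp.Body.Close()
			// We know nothing about the implementation: we only observe the output
			// and check that bad input never produces a server-side failure.
			if resp.StatusCode >= 500 {
				t.Errorf("%s: got status %d, the service should reject bad input gracefully", c.name, resp.StatusCode)
			}
		})
	}
}
```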
Unit Testing
Creating code-based structural unit tests entails developing test scenarios grounded in the specifics of the software's implementation or the code itself. This approach is valuable for software engineers because it allows for more precise and detailed testing, focusing on individual components and their operation within the system's overall structure.
Sufficient coverage must be ensured to prevent regression when introducing new features or making significant changes to existing code.
In practice, software engineers must introduce unit tests very early in the software engineering cycle. Many people, myself included, would even say to write them before coding the core components and functionality!
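Here is a minimal Go example of a structural unit test: a small, hypothetical password-policy function where each return path gets its own test case. In a real project, the function and its test would live in separate files.

```go
package password

import (
	"strings"
	"testing"
	"unicode"
)

// IsStrong is the unit under test. Each return path below has a matching
// test case, which is what code-based structural testing is about.
func IsStrong(p string) bool {
	if len(p) < 12 {
		return false
	}
	hasDigit := strings.IndexFunc(p, unicode.IsDigit) >= 0
	hasUpper := strings.IndexFunc(p, unicode.IsUpper) >= 0
	return hasDigit && hasUpper
}

func TestIsStrong(t *testing.T) {
	cases := []struct {
		name string
		in   string
		want bool
	}{
		{"too short", "Ab1", false},
		{"no digit", "Abcdefghijkl", false},
		{"no uppercase", "abcdefghijk1", false},
		{"meets policy", "Abcdefghijk1", true},
	}
	for _, c := range cases {
		if got := IsStrong(c.in); got != c.want {
			t.Errorf("%s: IsStrong(%q) = %v, want %v", c.name, c.in, got, c.want)
		}
	}
}
```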
Regression Testing
Regression testing is all about catching specific bugs that have bitten you in the past. Each regression test is designed to spot a particular bug, acting as a spotlight that reveals the presence or absence of that issue.
These test cases can be precious. Indeed, bugs tend to strike back: it is not unusual to see a past issue resurface when a new feature is introduced. Regression tests are like a pre-programmed bug detector, ready to point out if that pesky issue has surfaced again. This can save you time and resources in bug detection, plus give you a heads-up on any potential problems lurking in your code.
Like automated and unit tests, they are an investment. More and more organizations mandate that every severe bug, for example one leading to a loss of availability, has a regression test in place before the fix is considered “done.”
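Here is a sketch of what this looks like in Go, using a hypothetical past bug (an empty basket that crashed a discount calculation, tracked here as a made-up issue #1342) and a test named after the issue so its purpose stays obvious.

```go
package pricing

import "testing"

// applyDiscount once panicked on an empty basket (hypothetical issue #1342).
// The guard below is the fix; the regression test pins it in place.
func applyDiscount(prices []float64, rate float64) []float64 {
	if len(prices) == 0 {
		return nil // the original code indexed prices[0] unconditionally
	}
	out := make([]float64, len(prices))
	for i, p := range prices {
		out[i] = p * (1 - rate)
	}
	return out
}

// TestIssue1342EmptyBasket guards against the exact scenario from the bug report.
func TestIssue1342EmptyBasket(t *testing.T) {
	if got := applyDiscount(nil, 0.1); got != nil {
		t.Errorf("expected nil result for an empty basket, got %v", got)
	}
}
```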
Fuzzing
Fuzzing, or fuzz testing, is a powerful technique that involves bombarding a system with many varied inputs, like throwing spaghetti at a wall to see what sticks. 🙃 A fuzzer is usually programmed to try bug-revealing inputs like extremely long or empty strings and special characters.
Fuzzers are best implemented during the testing phase of a project. They can also be used periodically on existing systems as part of a comprehensive security audit. It's important to note that while fuzzing is highly effective at revealing potential issues, it’s not a silver bullet solution for all security problems.
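Go has had fuzzing built into its toolchain since version 1.18, which makes a minimal example easy to sketch. The `Normalize` function below is a hypothetical stand-in for whatever input-handling code you want to exercise.

```go
package parser

import (
	"strings"
	"testing"
	"unicode/utf8"
)

// Normalize is a hypothetical input-handling function we want to fuzz.
func Normalize(s string) string {
	return strings.ToUpper(strings.TrimSpace(s))
}

// FuzzNormalize feeds Normalize with generated inputs. Run it with:
//
//	go test -fuzz=FuzzNormalize
func FuzzNormalize(f *testing.F) {
	// Seed corpus: the kinds of bug-revealing inputs mentioned above.
	f.Add("")
	f.Add(strings.Repeat("A", 10_000))
	f.Add("emoji 🙃 and \x00 control bytes")

	f.Fuzz(func(t *testing.T, s string) {
		out := Normalize(s)
		// Properties that must hold for any input: no panic (checked implicitly
		// by the fuzzer), and valid UTF-8 in means valid UTF-8 out.
		if utf8.ValidString(s) && !utf8.ValidString(out) {
			t.Errorf("Normalize produced invalid UTF-8 from %q", s)
		}
	})
}
```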
Automated Web Scanners
Software applications, especially those connected to the internet, are exposed to cyber threats. A straightforward and effective way to address these risks is to run a web application scanner.
The best time to implement this solution is during the development process before the software goes live. It's much easier and cost-effective to fix vulnerabilities at this stage rather than dealing with potential breaches when the application is already in use.
Note that your customer will probably use such tools to assess the security of your software. Being ready to answer customer questions is a must-have for software producers.
However, be aware of the bottlenecks of these scanners. They can generate false positives or false negatives, leading to unnecessary fixes or missed issues. Their alerts are often inaccurate because they rely on simple string matching (headers, version numbers) without any understanding of the underlying architecture.
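As a small illustration of the idea, and not a replacement for a real scanner such as OWASP ZAP or Burp Suite, here is a Go sketch that fetches one URL and reports missing security headers, one of the simplest checks such tools perform.

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"os"
)

// Security-related response headers a basic scan often looks for.
var expectedHeaders = []string{
	"Content-Security-Policy",
	"Strict-Transport-Security",
	"X-Content-Type-Options",
}

// A toy "scanner" pass: fetch one page and report which headers are missing.
// Full scanners also crawl the application and probe for injection flaws.
func main() {
	if len(os.Args) < 2 {
		log.Fatal("usage: headercheck <url>")
	}
	resp, err := http.Get(os.Args[1])
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	for _, h := range expectedHeaders {
		if resp.Header.Get(h) == "" {
			fmt.Printf("missing header: %s\n", h)
		}
	}
}
```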
Conclusion
This post has explored 11 practices that CTOs and software engineers should implement for software product security. These practices include threat modeling, automated testing, static source code analysis, credentials leakage detection, runtime protections, black box testing, code-based structural unit testing, regression testing, fuzzing, automated web scanners, and third-party and open-source dependencies management. Each practice has its value and benefits, contributing to software systems' overall security and integrity.
Did I miss something? Which ones are you already implementing? Are there any other practices or insights that should be included in the discussion? I would love to hear your thoughts and engage in further conversation about software product security.
Let’s continue the conversation in the comment section. 👇
Laurent 💚