AppSec Series 0x03: The Secure Development Lifecycle

Alejandro Iacobelli
16 min read · Sep 26, 2021

In previous posts, we’ve discussed some of the main reasons for insecure software. One of the most common misconceptions was that security is usually seen as an add-on instead of a built-in process that must be applied cyclically from the early stages.

According to H. Mouratidis and P. Giorgini, one of the reasons for this misconception is the fact that traditionally, both associated research areas (software engineering and security engineering) have been working independently from the beginning. [1]

In this post, I’ll introduce a software development methodology whose specific focus is joining these two worlds together: the secure development lifecycle (SDL, or SSDLC). Before we start, one disclaimer: I will speak about the SDL on a theoretical and generic level. How to apply this theory to a specific business context or methodology will be discussed in later posts.

First things first, what is software engineering?

Image source: https://blog.testproject.io/2021/03/08/the-world-of-software-development-life-cycle/

According to the Software Engineering Body of Knowledge (SWEBOK), software engineering is defined as “the application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software”. Among the vast number of topics covered in that document, there are some formal steps that all software development should include, independently of the methodology being used. Those steps are:

  • Software Requirements: The first step usually deals with understanding what we must build. Requirements engineering deals with the elicitation, analysis, specification, and validation of software requirements.
  • Software Design: Once we have a formal definition of what the system should do (functional and non-functional requirements), we must define how we should build it. The software design step helps us define the architecture, components, interfaces, and other characteristics of a system.
  • Software Construction: This is where the formal coding starts. This stage involves concepts like code hygiene, error handling, default safes, open source libraries, documentation, programming paradigms, and concurrency or parallelism handling, among a long list of topics.
  • Software Testing: As the SWEBOK states: “Software testing consists of the dynamic verification that a program provides expected behaviors on a finite set of test cases, suitably selected from the usually infinite execution domain”. Here we apply concepts like unit testing, integration testing, system testing, regression testing, performance testing, and fuzzing, among other topics.
  • Software Configuration Management: SCM is a method of controlling the development and modification of software systems and products during their entire life cycle. It affects a product’s whole life cycle by identifying the software items to be developed and avoiding chaos when changes occur.
  • Software Maintenance: Finally, this step deals with the totality of activities required to provide cost-effective support to productive software.

As developers, we need to apply these steps in a way that is functional to our specific context. This is where software development methodologies come to life.

What are software development methodologies?

A software development methodology is a set of rules and guidelines that are used in the process of researching, planning, designing, developing, testing, and maintaining any software product. [2] Simply speaking, these methodologies are ways of wrapping up and organizing these 6 steps defined by SWEBOK in order to deliver quality software.

There are several well-known methodologies out there, like waterfall, prototyping, spiral, V-model, and agile approaches like Scrum or lean, among many others. Generally speaking, we should choose one according to the company’s context and not because it is fashionable. For example, some key factors to consider could be the project owner’s profile, the developers’ technical expertise, project complexity, budget, deadline, risk, or project size.

The main problem with these methodologies is that few or no security concepts are defined in them. This is why, more than a decade ago, Michael Howard and Steve Lipner, among other developers at Microsoft, proposed a new dynamic methodology with one specific goal: deliver software with the fewest vulnerabilities possible before hitting production.

The Secure Development Lifecycle (SDL)

A while ago, Microsoft teams realized that their users were demanding stronger software in terms of security and privacy. They found that the development methodologies they had been using didn’t apply any security notions, and it was time for a new approach, which they called the secure development lifecycle.

Simply speaking, the secure development lifecycle, or SDL for short, is a set of touchpoints (activities, processes, standards, and tools) set across the six engineering stages in order to proactively improve a product’s security. One important thing to mention is that the SDL is not a strict, generic process that must be followed identically everywhere. It is a dynamic process that must be adapted to the company’s development methodology and specific constraints, and improved over time based on clear failure criteria.

Since Microsoft’s first formal publication, the software security community has been adapting and reshaping the methodology according to different contexts and ways of coding. Next, I’ll summarize some of the most important touchpoints to consider. Some of them are meant to be applied before or after a specific stage, while others can be applied at any time.

SDL touchpoints

Disclaimer before we start. The touchpoints that we are about to see are just a quick introduction. How to apply them to our specific company culture and engineering methodology is another discussion.

Software Requirements Stage

At this initial stage we have the chance to complement the functional requirements set by specific stakeholders (like product teams) with non-functional ones. You can think of non-functional requirements as system constraints. Examples of these types of requirements could be “the information being stored must be encrypted with AES-256-GCM-SIV” or “the login must include anti-automation protections, specifically a captcha and a rate limit based on IP”.

A lot of effort has been put into frameworks that can help us express these types of requirements in a structured way (like use/misuse cases, OWASP ASVS, or core security requirements artifacts [3]) and also into adapting them to any methodology out there, like agile [5][8]. One important thing to understand is that for requirements to add value, they need to be as specific as possible. Saying “the application must be protected from remote attackers” or “all data should be encrypted” gives developers no value at all.
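To make the contrast concrete, the “rate limit based on IP” requirement above could be prototyped as a small sliding-window limiter. This is a hedged sketch: the class and parameter names are my own, and a production limiter would usually live in shared infrastructure (a gateway or middleware), not in application code.

```python
import time
from collections import defaultdict, deque

class IpRateLimiter:
    """Allow at most `limit` requests per `window` seconds per client IP."""

    def __init__(self, limit, window):
        self.limit = limit
        self.window = window
        self._hits = defaultdict(deque)  # ip -> timestamps of recent requests

    def allow(self, ip, now=None):
        now = time.monotonic() if now is None else now
        hits = self._hits[ip]
        # Drop timestamps that fell outside the sliding window.
        while hits and now - hits[0] >= self.window:
            hits.popleft()
        if len(hits) >= self.limit:
            return False  # over the limit: reject (or show a captcha)
        hits.append(now)
        return True
```

A requirement phrased at this level of detail (“3 requests per IP per minute on /login”) is testable; “protect the login from automation” is not.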

Software Design Stage

In this second stage we have one of the most important steps in the SDL, called threat modeling. You can think of this as a time-out from daily work in which developers can stop thinking about building and start thinking about breaking, or how things can go wrong.

Threat model exercises are structured ways of reviewing a design in order to look for weaknesses. The main idea is to find as many structural flaws as we can, set a specific priority or risk to each of them and finally look for ways to mitigate them. On one hand, there are tons of really useful methodologies to structure this exercise, like STRIDE, PASTA, attack trees, use-misuse cases, among many other ideas. On the other hand, DREAD or Bug Bars [16] are great prioritization frameworks. There is a great introductory book written by Adam Shostack called “Threat Modeling: Designing for Security”, I strongly recommend you to read it.
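As a minimal illustration of how a methodology like STRIDE structures the exercise, the sketch below crosses each system component with the six STRIDE threat categories to seed a review checklist. The helper names and components are illustrative, not part of any cited framework.

```python
# The six STRIDE threat categories from Microsoft's threat modeling method.
STRIDE = [
    "spoofing",
    "tampering",
    "repudiation",
    "information disclosure",
    "denial of service",
    "elevation of privilege",
]

def review_checklist(components):
    """Cross every component with every STRIDE category to seed a review."""
    return [
        f"{component}: how could {threat} occur here?"
        for component in components
        for threat in STRIDE
    ]

questions = review_checklist(["login API", "session store"])
```

The value is not in the code but in the discipline: every component gets asked every question, so no threat class is silently skipped.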

As a developer, the best way to prepare for this type of exercise is to learn about secure design principles. These were first formalized in 1975 by Jerome Saltzer and Michael Schroeder [4]. Since then, more and more concepts have been added to the list. We will discuss this in more detail in another post.

Software Construction Stage

There are many touchpoints to discuss at this stage, so I will just introduce some of the most important ones: default safes, secure coding training, software composition analysis, banned functions, secret management, and secure coding cheat sheets or guidelines.

Default safes

The main idea behind a default safe is that just by using a component (artifact, framework, web server, code pattern) I inherit security protections or secure ways to build code. Think of it as smuggling secure configurations or protections into the frameworks and toolkits that developers use every day, so they are protected from vulnerabilities almost without knowing it. The more default safes we can build, the easier it will be to achieve stronger applications.

Examples of everyday default safes include toolkits that offer strong cryptographic schemes, like Bouncy Castle [9]; C compiler security flags like “-pie”, which enables full ASLR randomization [10]; decorators like Django’s csrf_protect, which add easy CSRF protection to exposed methods; or React and Angular libraries that encode strings before injecting them into the DOM, reducing the chance of XSS.

There is no question about the effectiveness of default safes. If you are not a believer, just study how the adoption of ORMs (which use prepared statements or parameterized queries by default) has reduced the number of SQL injection vulnerabilities on a massive scale, or how the adoption of managed programming languages has reduced the number of memory-related vulnerabilities like stack- or heap-based buffer overflows.
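The ORM point is easy to demonstrate with Python’s built-in sqlite3 module: the concatenated query falls to a classic injection payload, while the parameterized one treats the same payload as inert data. The schema and payload below are illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name):
    # String concatenation: attacker-controlled input becomes part of the SQL.
    return conn.execute(
        f"SELECT role FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(name):
    # Placeholder binding: the driver keeps data and SQL strictly separate.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "' OR '1'='1"
# The unsafe query leaks the admin row; the parameterized one matches nothing.
```

An ORM or query builder makes the safe form the path of least resistance, which is exactly what a default safe should do.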

Another thing worth mentioning is what Ksenia Peguero found in an interesting piece of research [6]: the closer the default safe is to the development framework, the more secure the outcome. I recommend her talk to understand this a little better.

Secure coding trainings

We can’t provide built-in protections for all the types of vulnerabilities out there, particularly the so-called “business logic” ones. This is why, as a defense-in-depth strategy, training developers on concepts like secure design, common vulnerability patterns, and correctness examples is key.

Some important concepts around trainings. First, mix theory and practice: as humans, we tend to absorb new concepts better if we practice what we’ve learned. Second, know your audience. Explaining internal file access via WebView misconfigurations to backend developers, or buffer overflows to teams that use managed programming languages like Java, is a waste of everybody’s time. Third, don’t follow the OWASP Top 10 blindly; build your own custom top 10 based on internal vulnerability management statistics.

Fourth, developers code, so explain vulnerabilities from the coding perspective, not just the offensive side. Most trainers are not teachers or professors, and explaining totally new concepts to developers is not an easy task; it’s better to start with something they are familiar with. Finally, apply gamification to your strategy. We all love games, so the more gamification you can implement, the more adoption you will get.

Software composition analysis

Code reuse (dependencies) is the building block of today’s software industry. Some research shows that around 70% of a project’s code comes from third-party code [7].

So, how you choose a dependency is key to avoiding vulnerabilities, backdoors, or licensing issues that could affect your company. As developers, there are some checks to be made, like avoiding dependency hell [17] and checking for active vulnerabilities, licensing problems, the age of unresolved issues, project maintainability, quality of code coverage, the number of transitive or circular dependencies, or the maintainers’ trustworthiness.

Another important topic around dependencies comes from how the company’s CI/CD pipeline works. Attack vectors like dependency confusion or typosquatting are possible scenarios to consider too. In another post we will go into more detail about this problem, but for now, remember that as developers we should have clear criteria for what to research before choosing a library.

Banned functions

There are all sorts of functions that we must avoid using, for different reasons. For example, legacy cryptographic schemes like MD4, MD5, SHA-1, RC4, RSA with keys shorter than 2048 bits, or DES, just to mention a few. Other examples of functions related to security problems are all the variants of “eval()”, “strcpy()” or “dangerouslySetInnerHTML”.

The main idea here is that we must provide developers with a full list of functions that must not be used, in a way that is easy for them to consume. Simple ways to integrate these checks into the development stage are an IDE plugin that matches these functions and warns developers not to use them as they code, CI/CD checks, or forking and maintaining patched branches of dependencies that ship these vulnerable functions.
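A minimal sketch of such a check could be a pattern scanner like the one below. The deny-list here is hypothetical; a real one would come from the security team, and a real implementation would hook into the IDE or the CI pipeline rather than run standalone.

```python
import re

# Hypothetical deny-list: pattern -> reason shown to the developer.
BANNED = {
    r"\beval\s*\(": "eval(): arbitrary code execution risk",
    r"\bstrcpy\s*\(": "strcpy(): unbounded copy, prefer bounded alternatives",
    r"\bmd5\b": "md5: broken hash, use SHA-256 or better",
}

def scan(source):
    """Return (line_number, reason) for every banned-pattern hit."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, reason in BANNED.items():
            if re.search(pattern, line):
                findings.append((lineno, reason))
    return findings
```

Simple pattern matching like this produces false positives (for instance, hits inside comments), which is why real tools work on the parsed syntax tree instead of raw text, but the enforcement idea is the same.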

Security baselines/checklists

Depending on the organization’s internal coding policy and the application security team’s alignment (compliance-based or engineering-based), we can have different security baselines to follow. As a developer, it is very important to understand them correctly before shipping features into production. From an engineering point of view, it’s always a good idea to bring in automation in order to enforce these practices efficiently. IDE integrations, pipeline quality gates, or production scans with clear criteria are some good approaches.

AuthN, AuthZ, logging, semantic and syntactic input validation, output encoding, default cryptographic primitives, and standard configurations are some of the most common protections that all applications must have. For example, if you are coding an API and your company supports OAuth 2.0, SAML, or OpenID Connect, you should have clear documentation about when to use each and which best practices must be followed.
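As a small example of the output-encoding baseline, Python’s standard html module is enough to render user data as text instead of markup. The render_comment helper is hypothetical; template engines like Django’s or Jinja2 do this encoding by default.

```python
import html

def render_comment(comment):
    # Encode at output time so any markup inside user data is displayed
    # as literal text instead of being interpreted as HTML by the browser.
    return "<p>" + html.escape(comment) + "</p>"

# A script tag in a comment comes back inert:
# render_comment("<script>alert(1)</script>")
# -> "<p>&lt;script&gt;alert(1)&lt;/script&gt;</p>"
```

A baseline document would state exactly this: where encoding happens (output, not input), which helper to use, and which contexts (HTML body, attribute, URL, JavaScript) need which encoder.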

Software Testing Stage

This stage is where most potential vulnerabilities can be found. Fuzzing, SAST, DAST, BCA, software composition analysis, peer code review, and other sorts of testing techniques (like stress testing your apps or prototype-based testing) are just some of the activities that take place at this stage.

SAST

In a nutshell, static application security testing means that a program will go over your code line by line, looking for potential vulnerabilities and reporting back to you with detailed information. These tools use lots of different techniques to get the job done, like pattern matching, data flow analysis, or taint analysis. The last two techniques model the way data flows through the program at runtime, using the AST as input. If a tainted value reaches a sink function without proper sanitization, a vulnerability is present.

There are tons of tools available. Which tool or combination to choose will depend on your budget, languages, and the types of vulnerabilities you are interested in, among other factors. Remember, these tools usually find low-hanging fruit, so don’t use them as a silver bullet; recall the SAMATE competition conducted by NIST [11]. Second, these tools usually report a fair number of false positives, so you should have a strategy to avoid flooding developers with useless reports. Third, don’t use the default recommendations these scanners show; they are generic, and you need context-based ones. And last, the bigger the codebase, the slower the scan, so if you are thinking about integrating this into a CI/CD pipeline, choose your strategy wisely. For example, splitting the scan is a good idea: scan for the most critical vulnerability classes in the pipeline and leave all the others to an offline scanner.
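To give a feel for taint-style analysis, here is a deliberately tiny sketch over Python’s own AST that flags sink calls fed directly by source calls. The source/sink lists are illustrative, and real SAST engines track taint across assignments, functions, and files, which this toy does not.

```python
import ast

SOURCES = {"input"}        # calls returning attacker-controlled data
SINKS = {"eval", "exec"}   # calls dangerous to make with tainted data

def find_tainted_sinks(source_code):
    """Flag sink calls whose argument is (directly) a source call."""
    findings = []
    for node in ast.walk(ast.parse(source_code)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in SINKS):
            for arg in node.args:
                if (isinstance(arg, ast.Call)
                        and isinstance(arg.func, ast.Name)
                        and arg.func.id in SOURCES):
                    findings.append((node.lineno, node.func.id))
    return findings
```

Note that `eval('1+1')` is not flagged: the argument is a constant, not tainted. That distinction, generalized across the whole data flow graph, is what separates taint analysis from plain pattern matching.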

Security Code Review

Most developers agree that peer review is a good approach to catch business logic errors and bugs, detect collusion scenarios, or enforce good practices. Being trained in secure coding concepts is key to adding security to those reviews.

An important concept to clarify is that code review is not the same as SAST, so it can’t be replaced by it. The main reason is called “context”: SAST, like any tool, can’t understand context, so there is no better defense-in-depth strategy than a formal review by developers. A good idea is to have an easy-to-read cheat sheet that developers can use to remember what to look at and how.

BCA

Binary code analysis basically means scanning the project after it has been built, but before executing it. One advantage that BCA scanners have over SAST solutions is the ability to look at the compiled result and detect vulnerabilities created by the compiler itself. Furthermore, library function code or other code delivered only as a binary can be examined.

Binary analysis can also detect the absence of security features, as well as malware, including backdoors and other unintended functionality introduced by a compromised build process.

SCA

Other things that must be tested are the dependencies chosen in the coding phase. We must find active vulnerabilities, reported backdoors, incompatible licenses, or dependencies banned by the organization for internal reasons.

There are tons of tools that can help us understand which dependencies are vulnerable. Some free examples are the OWASP Dependency-Check project and npm audit; if you prefer commercial ones, Snyk and Black Duck are good choices too. In my opinion, one key factor to consider before choosing a solution is the DX (developer experience) you want to achieve. For example, if your organization has a custom build process with custom feedback feeds, you are probably going to need to blend in. In that case, a solution with an API or a way to extend its functionality is very important to maintain an experience consistent with all the other core teams, like SRE.

Another key point is related to mitigations: what must we do when a vulnerable dependency is found? Just reporting a vulnerable dependency without offering a “safe version” or a “safe new dependency” will produce a negative impact on the developer side, mostly in terms of time. For every vulnerable library and version, find the closest patch or minor version without vulnerabilities. In the best-case scenario (a dependency that follows semver), the update can be made without any code refactoring. If you are going with fully automated PRs and builds, canary deployments can help catch breaking changes early.
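The “closest safe version” idea can be sketched as a small selection function over semver triples. The version lists and vulnerable set below are illustrative, and real tools must also handle pre-release tags, version ranges, and transitive constraints.

```python
def parse(version):
    """Parse 'major.minor.patch' into a comparable tuple of ints."""
    return tuple(int(part) for part in version.split("."))

def closest_safe(current, available, vulnerable):
    """Pick the smallest same-major upgrade that is not known-vulnerable.

    Staying within the same major follows the semver promise of no
    breaking changes, so the update should need no refactoring.
    """
    cur = parse(current)
    candidates = sorted(
        parse(v) for v in available
        if v not in vulnerable and parse(v) >= cur and parse(v)[0] == cur[0]
    )
    return ".".join(map(str, candidates[0])) if candidates else None
```

Reporting “upgrade to 1.3.0” alongside the finding turns a chore into a one-line change; reporting only “1.2.3 is vulnerable” pushes that research onto every affected team.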

DAST

There are vulnerabilities that are easily found by interacting with the running application. DAST is one of the last defense-in-depth vulnerability scanners you can use. Think of DAST as a program that runs specific tests (or requests) against your running application; if the expected response comes back, a vulnerability is present.

There are some things to consider when implementing this type of solution. One, remember that these solutions produce a lot of false positives, so it is important to build an abstraction layer between the DAST solution and the developers in order to filter out the noise. Two, learn from the false positives and avoid showing them every time a scan runs. Three, build an attack surface inventory (protocols, methods, URIs, headers, and parameters) so the scanner knows exactly what to scan and how. Remember, wasting requests on discovery when you already know your surface is a waste of time, traffic, and money.
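In spirit, a DAST check is just “inject a marker, then look for it in the response”. The sketch below simulates that against a stand-in function instead of a live HTTP server; vulnerable_app and the probe payload are hypothetical, and a real scanner would issue actual HTTP requests against the attack surface inventory.

```python
# Stand-in for the target: a function from a request URL to a response body.
# A real DAST tool would send HTTP requests to a running application instead.
def vulnerable_app(url):
    query = url.split("q=", 1)[-1]
    # Reflects user input into HTML without encoding: a reflected XSS flaw.
    return f"<html>You searched for: {query}</html>"

PAYLOAD = "<script>dast-probe-1</script>"  # unique, recognizable marker

def check_reflected_xss(app, base_url):
    """Inject the marker payload and flag if it comes back unencoded."""
    response = app(f"{base_url}?q={PAYLOAD}")
    return PAYLOAD in response
```

An app that HTML-encoded its output would return the marker as `&lt;script&gt;…`, the substring check would fail, and no finding would be raised, which is exactly the behavior difference the scanner is probing for.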

Fuzzing

In a nutshell, fuzzing is a technique that consists of sending “random” values to every input defined by the application in order to find unexpected behaviors that could lead to bugs or vulnerabilities. Think of it as a brute-force approach: it’s easier to send all types of garbage to the application and detect crashes or unexpected behaviors than to understand every possible execution branch of a complex program.

The concept of fuzzing is old and usually applied dynamically (against a running application). There are different approaches to this discipline: whitebox fuzzing, which takes advantage of code-coverage-based search heuristics or symbolic execution to enhance the bug search; structure-aware (greybox) fuzzing, which uses known data structures defined in the program; and structure-unaware (blackbox) fuzzing, which could be as simple as sending /dev/random to every input.

As Patrice Godefroid states in his article “Fuzzing: Hack, Art, and Science”: “Blackbox fuzzing is a simple hack but can be remarkably effective in finding bugs in applications that have never been fuzzed. Grammar-based fuzzing extends it to an art form by allowing user’s creativity and expertise to guide fuzzing. Whitebox fuzzing leverages advances in computer science research on program verification, and explores how and when fuzzing can be mathematically “sound and complete” in a proof-theoretic sense.” [14]
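A blackbox fuzzer really can be this simple. The sketch below throws short random byte strings at a deliberately fragile parser and records every crash. Both functions are illustrative; real fuzzers like AFL or libFuzzer add coverage feedback, corpus management, and input minimization on top of this loop.

```python
import random

def fragile_parser(data):
    """A deliberately buggy target: chokes on any 0xFF byte in the input."""
    if b"\xff" in data:
        raise ValueError("unhandled 0xff byte")
    return len(data)

def fuzz(target, runs=500, seed=1234):
    """Blackbox fuzzing: feed random bytes to the target, record crashes."""
    rng = random.Random(seed)  # fixed seed so runs are reproducible
    crashes = []
    for _ in range(runs):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 16)))
        try:
            target(data)
        except Exception as exc:
            crashes.append((data, repr(exc)))
    return crashes
```

Each recorded crash input is a starting point for triage: minimize it, reproduce it, and decide whether the unexpected behavior is a bug or an exploitable vulnerability.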

Software Maintenance Stage

From the security perspective, we can think of the maintenance stage as the set of activities we can perform in production to keep the software as secure as we can. Some of these activities could be penetration testing, vulnerability assessments, automated scans, crowdsourced initiatives like bug bounties, or continuous monitoring with approaches like WAF or RASP.

Penetration testing

Simply speaking, penetration testing exercises are structured ways of verifying that all the security invariants of my application remain true. For example, if my biometric identity validation feature must guarantee a false positive rate below 5% no matter what, the penetration testing goal should be to find all the possible ways to break that assumption.

When you ask someone to perform a penetration test on your applications, you should specify one or more key goals. The job of the penetration tester is then to achieve those goals, no matter how many vulnerabilities must be found and chained along the way.

Vulnerability assessments

In contrast to a penetration test, the goal of a vulnerability assessment is to check the application against a specific set of vulnerabilities or weaknesses (technical or business logic). Usually, the testing list is defined by the tester and handed over to the client once the assessment is done.

Penetration testing and vulnerability assessments must be combined to achieve a good balance between depth and scope. Many people tend to delegate these types of tests to automated tools, and while this is an effective approach, it is not optimal. Remember that tools don’t understand context, so business logic vulnerabilities are usually out of scope for them. As a tester, you should use them as a complement, not as your only approach.

Crowdsourced Security

We will get into this beautiful philosophy later on, but the main idea behind it is to take advantage of the power of the community for specialized tasks (like vulnerability assessments) instead of trusting only highly trained professionals. This approach has come to the security world through ideas like bug bounties.

The bug bounty approach is simple: instead of paying for highly trained professionals’ time (regardless of their results), just pay for the results of a wide and not necessarily highly trained community. This approach is an excellent complement to penetration testing and vulnerability assessments, particularly in companies with a wide and dynamic attack surface.

WAF

Another complementary approach to apply on this stage is related to monitoring and telemetry. A web application firewall is one of the most common ways to detect potential attacks on production apps.

In a nutshell, a WAF is a layer 7 proxy that reads incoming traffic, matches it against a predefined set of rules, and blocks all the positive hits. From those hits you can learn tons of information that you can use to keep shielding your applications, like potentially dangerous IPs, potential vulnerabilities the application may have, or potential victims of attacks.
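The rule-matching core of a WAF can be sketched in a few lines. The rule set below is a tiny hypothetical sample; production WAFs ship thousands of curated, constantly tuned rules and inspect headers and bodies, not just the request line.

```python
import re
from urllib.parse import unquote

# Tiny illustrative rule set: rule name -> pattern over the decoded request.
RULES = {
    "path-traversal": re.compile(r"\.\./"),
    "xss-probe": re.compile(r"<script", re.IGNORECASE),
}

def inspect(request_line):
    """Return names of rules the (URL-decoded) request matches; empty = pass."""
    decoded = unquote(request_line)  # decode %XX so encoded payloads match too
    return [name for name, rule in RULES.items() if rule.search(decoded)]
```

Note the URL decoding step: matching before normalization is a classic WAF bypass, and the tension between decoding enough (to catch evasions) and not too aggressively (false positives) is where most tuning effort goes.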

Like all approaches, WAFs have drawbacks. Among the main ones is the number of false positives that can be produced in wide attack surfaces or dynamic environments. A bad deploy that starts sending characters like “../” or “<script>” from the client side to the backend could cause a massive self-inflicted DoS. Another drawback is that it is very difficult to analyze the number of hits a normal day can produce. Knowing whether a hit is exploiting a real vulnerability is key, but at 500K hits a day this becomes an impossible task. This is where solutions like IAST and RASP come to life, but more on this in another post.

Caveats

Where to start implementing this program?

One question I’ve always found intriguing is: where should I start if I’m the first application security engineer at the company? In the next posts we will discuss this further, but if you can’t wait, BSIMM and OWASP SAMM are two excellent starting points.

How can I apply this to agile specifically?

We will discuss in other posts the best way to apply these activities to an agile culture. For a state-of-the-art introduction, you should read Agile Application Security, by Laura Bell, Michael Brunton-Spall, Rich Smith, and Jim Bird. [15]

References

[1] Integrating Security and Software Engineering book.

[2] https://dbjournal.ro/archive/17/17_4.pdf

[3] http://computing-reports.open.ac.uk/2004/2004_23.pdf

[4] https://www.cs.virginia.edu/~evans/cs551/saltzer/

[5] https://roberthurlbut.com/Resources/2019/CodeMash/Robert-Hurlbut-CodeMash2019-User-Story-Threat-Modeling-20190910.pdf

[6] https://www.youtube.com/watch?v=FCxorFM3yZk&t=1736s

[7] https://www.synopsys.com/blogs/software-security/open-source-audit-data/

[8] https://safecode.org/publication/SAFECode_Agile_Dev_Security0712.pdf

[9] https://www.bouncycastle.org/

[10] https://developers.redhat.com/blog/2018/03/21/compiler-and-linker-flags-gcc/

[11] https://samate.nist.gov/docs/NIST_Special_Publication_500-283.pdf

[12] https://www.synopsys.com/blogs/software-security/open-source-audit-data/

[13] https://hydrasky.com/network-security/fuzzing-web-application-using-burp-suite-intruder/

[14] https://cacm.acm.org/magazines/2020/2/242350-fuzzing/fulltext

[15] https://www.amazon.com/-/es/Laura-Bell/dp/1491938846

[16] https://docs.microsoft.com/en-us/previous-versions/windows/desktop/cc307404(v=msdn.10)?redirectedfrom=MSDN

[17] https://en.wikipedia.org/wiki/Dependency_hell


Alejandro Iacobelli

Software engineer, penetration tester, bounty hunter, and appsec professor. I like debates, strategic or technical. Feel free to contact me to philosophize.