Software is written by people and, by extension, prone to failure. Depending on the problems this software must solve, the impact of that failure can range from negligible to life threatening. For instance, a crash caused by an unhandled error has a very different impact on an e-commerce search feature than it has on a pacemaker. Ideally, we would focus on delivering secure software, but the world is not ideal, and trade-offs are usually made.
How do we define “secure software”?
From a computer science perspective, if we can prove the correctness of a program (against a given specification), then we can guarantee that the program is secure, because no unspecified outcome can occur. Of course, if the requirements are written with no security in mind, the program won’t be secure either. This is why concepts like security requirements engineering and threat modeling are so important.
The main problem with this theoretical approach is that it is complex (maybe impossible), time-consuming, and not scalable in the real world, at least for now. This is why, instead of proving that a program is correct, we take the opposite approach: we try to find evidence that a program is not correct, and then fix those findings.
Fuzzing, unit testing, regression testing, integration testing, smoke testing, static application security testing, code review, and many other techniques are widely used to find as many incorrect behaviours as possible.
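To make the idea concrete, here is a minimal sketch of the fuzzing approach in Python. The `sanitize` function and its invariant are hypothetical, invented purely for illustration: instead of proving the function correct, we throw many random inputs at it and look for evidence that it violates its own contract.

```python
import random
import string

# Hypothetical validator: intended to strip anything that is not
# a letter or a digit from user-supplied input.
def sanitize(value: str) -> str:
    return "".join(ch for ch in value if ch.isalnum())

# Fuzz-style check: feed random inputs to the function and assert the
# invariant that no output character falls outside the allowed set.
# We never prove sanitize() correct; we only search for counterexamples.
def fuzz_sanitize(iterations: int = 1000) -> None:
    alphabet = string.printable
    random.seed(0)  # fixed seed keeps failing inputs reproducible
    for _ in range(iterations):
        candidate = "".join(random.choice(alphabet) for _ in range(20))
        result = sanitize(candidate)
        assert all(ch.isalnum() for ch in result), (
            f"invariant broken for input {candidate!r}"
        )

fuzz_sanitize()
print("invariant held for all generated inputs")
```

Real fuzzers (AFL, libFuzzer) and property-based testing libraries (Hypothesis) apply this same idea at scale, with smarter input generation and automatic shrinking of failing cases.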
Every unexpected outcome or unintended behaviour is usually called a bug, and depending on how it affects the software, it may or may not be security related. Generally speaking, security-related bugs are called vulnerabilities: bugs that can affect confidentiality, integrity, availability, or accountability.
So, is it possible to build perfectly secure software?
Short answer: No. As the authors of Writing Secure Code put it: “[...] the most secure system is the one that’s turned off and buried in a concrete bunker, but even that is not perfect security [...]”. Even if it were possible to get close to fully secure software, it would be so expensive and take so long to develop that it would probably be outdated before it hit the shelves.
If you think about it, it is not enough to write secure code. You need to make sure that all the components your software is built upon are secure too: libraries, frameworks, operating systems, network components, web servers, compilers, and even firmware versions. As Ken Thompson explained in his Turing Award lecture, “you can’t trust code that you did not totally create yourself.”
Because of this, we tend to define baselines for how secure the software should be. To build this baseline, we ask business owners what degree of risk they are willing to accept on their products (this concept is called risk tolerance or risk appetite). This is usually done by asking carefully crafted questions and trying to figure out how much time and how many resources they are willing to invest.
If the perfectly secure software does not exist, why should I even care?
As a business owner, a reasonable question to ask is: why should I invest time and money in building secure software if we are probably going to be hacked anyway?
Just ask yourself: would you buy a house without doors in the most dangerous neighborhood of the city, or a car without airbags or seatbelts, or deposit all your life savings under your bed in that doorless house? If your answer is yes, this series is not for you. If your answer is no, why do you think your users would?
Out there is a whole world of people (usually called threat agents) with different motivations to tear your product apart (and, by extension, your users), and all the evidence suggests that they are getting better at it.
Some people do it for fun, and others just want to take advantage of your company’s resources. Your competition might use illegal techniques to take you down, or maybe terrorist groups want to hurt your customers through your product.
Attackers are lazy and usually target weak products first. Your main goal should be to continuously increase the cost of a successful attack. Raise the bar beyond what attackers are willing to invest.
In conclusion, if you care about your product, and by extension your clients, you must invest in security. To build secure software, you will need to build a strong culture and a strong engineering process. This takes time, so it is better to start sooner rather than later.
Over the next posts I will share some ideas around software security that might help you understand how to achieve secure software, or at least get close to that goal.