AppSec Series 0x04: Crowdsourcing Security

Alejandro Iacobelli
9 min read · Dec 23, 2021
The power of the crowd

More than a decade ago, Jeff Howe defined a phenomenon that had gone unnoticed for a long time: “Nonprofessionals contribute to the economy more than ever.” He called this phenomenon “crowdsourcing”.

In a nutshell, crowdsourcing means that the crowd can solve specific problems way better than smaller groups of highly trained professionals. This concept is based on ideas like Joy’s law (“no matter who you are, most of the smartest people work for someone else”) and the diversity-trumps-ability theorem [1].

The truth is that there is more than enough evidence to support this claim [2], and there are tons of very successful companies built around these ideas [3]. In this post, I’ll introduce one specific approach in which amateur or professional researchers from all around the globe can help companies reduce their risk by finding vulnerabilities: bug bounty programs.

A little bit of history

Systems have been compromised by creative and skillful individuals for as long as systems have existed. In the 1970s, Captain Crunch, the father of phreaking [6], was one of the first legendary examples. But this story wasn’t always pretty because, for many years, companies tended to antagonize anyone who tried to put their defenses to the test.

We were in a bad situation back then. On the one hand, companies didn’t want to be exposed without proper time to fix their bugs; on the other, the crowd wanted to protect people by exposing those bugs to the world as soon as possible. This confrontation drove these “hackers” to dark corners of the web, where bad actors paid for those vulnerabilities to use them for criminal purposes. Something needed to change, and this is when the concept of responsible disclosure began to take shape [5].

The idea was that it is fine for outsiders to find vulnerabilities, but only if they give the affected company enough time to fix them before going public. The concept has been evolving and gaining popularity over the years [7,8,9]. Nowadays, it has taken many shapes, and one of those goes by the name of bug bounty programs.

Bug bounty programs

Simply speaking, a bounty program’s goal is to reward anyone for detecting and reporting vulnerabilities, as long as a specific set of rules is followed. To implement a bug bounty program, two basic things are needed: a crowd and a platform to interact with that crowd.

Companies like Google or PayPal have their own self-hosted ways to interact with their researchers. The main issue with this approach is that it requires a big investment in the payment process, platform maintenance, user experience, security, marketing, and SEO. This is why most companies go for third-party solutions like HackerOne, Bugcrowd, or Synack, among many other choices [4].

Independently of which approach you choose as a company, there are several topics to consider before starting a bug bounty journey…

Do your pre-work

Running a bounty program is not an easy task, and it’s better to put your house in order before you start. There are some key topics to consider:

  • Vulnerability fixing SLAs: If you are going to pay for vulnerabilities, you had better fix them on time. If you don’t, you will only be throwing your money away. Another reason to fix vulnerabilities right away is to avoid duplicate reports, one of the main causes of researcher turnover. Having a strong internal vulnerability management process is key.
  • Response team capacity: Reviewing reports is not the only task in program management. Payments, conflicting researchers, rewriting reports into a developer-friendly format, and thinking about proper fixes are other important topics to deal with. Building a team with this specific focus is a good idea.
  • Clear your backlog: On one side, if you pay for bugs you already know about, you are misspending your resources; on the other, if someone finds them and you don’t want to pay, you will lose good researchers for wasting their time. Clearing your backlog (or loading it into your platform) before starting down this path is recommended.
  • Build a proper policy: Building a policy is a complex task. You must think about scope, payment criteria, eligible vulnerabilities, clear documentation, and an update scheme, among other topics. We will discuss this later…
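To make the SLA point concrete, here is a minimal sketch of deadline tracking per severity. The severity names and day counts are illustrative assumptions, not a standard; real SLAs depend on your own risk policy:

```python
from datetime import datetime, timedelta

# Hypothetical fix deadlines per severity (illustrative, not a standard).
SLA_DAYS = {"critical": 7, "high": 30, "medium": 60, "low": 90}

def fix_deadline(reported_at: datetime, severity: str) -> datetime:
    """Date by which a reported vulnerability should be fixed."""
    return reported_at + timedelta(days=SLA_DAYS[severity])

def is_overdue(reported_at: datetime, severity: str, now: datetime) -> bool:
    """True if the vulnerability has exceeded its fixing SLA."""
    return now > fix_deadline(reported_at, severity)
```

Even a simple deadline check like this, wired into your vulnerability management process, makes SLA breaches visible before they turn into duplicate reports.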

Prudence is the mother of all virtues

Sun Tzu said, “If you know the enemy and know yourself, you need not fear the result of a hundred battles”. Going all-in, in terms of exposed scope or payment amounts, without knowing your available budget, team capacity, application security maturity, or basic program management dynamics could lead to all sorts of bad scenarios.

For example, if you publish your whole attack surface to the world without a certain maturity in terms of cloud or application security (just to name some examples), you will end up paying tons of money for trivial bugs (e.g., vulnerabilities detected by simple scanners). You are also going to be flooded with false-positive reports, and you had better have a properly sized team to handle them, or all your core indicators will suffer, and by extension, the success of your program (e.g., bad response or payment times).

Don’t underestimate the complexity of managing a program. Start slow, take your time to learn the dynamics, and you will be fine. Unless, of course, your budget and team capacity are unlimited.

Take care of your response times

If you look up any public bounty program, you will see that part of the information being displayed is related to response efficiency. The main reason is that no one wants to work on something that nobody cares about, and response times are a great indicator of how much a company cares.

Generally speaking, there are four big “times”: first response time, triage time, resolution time, and payment time. First response time, also called “ack” time, tells you how quickly your report was received. Triage time means the company is actually working on understanding and validating the vulnerability. Resolution time means, well… that the vulnerability has been fixed. Finally, payment time is the time between your first submission and your payment.
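These four metrics can be computed directly from a report’s lifecycle timestamps. A minimal sketch, assuming hypothetical field names (no platform’s actual API is implied):

```python
from datetime import datetime

def report_times(events: dict) -> dict:
    """Compute the four response-time metrics (in days) from a report's
    lifecycle timestamps: submitted, acknowledged, triaged, resolved, paid.
    The timestamp keys are assumptions made for this example."""
    submitted = events["submitted"]

    def days(ts: datetime) -> int:
        return (ts - submitted).days

    return {
        "first_response_time": days(events["acknowledged"]),  # "ack" time
        "triage_time": days(events["triaged"]),
        "resolution_time": days(events["resolved"]),
        "payment_time": days(events["paid"]),  # submission to payment
    }
```

Tracking these per report, and watching their averages drift, is usually the earliest warning that your response team is under-capacity.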

Keeping all of these times to a minimum is important, but be careful with some caveats. For example, researchers want to be paid right away, but usually payments are made after the vulnerability has been fixed. So if your internal triage process is not mature enough, you will lose researchers, simply because no one wants to get paid six months after their work is done.

This is why most programs choose to pay as soon as the vulnerability is confirmed, even if it is not fixed, but this decision has its own set of consequences. One side effect is that it is highly probable that other researchers will find that same (unfixed) vulnerability, and you will have to tag their reports as duplicates. If this situation goes unchecked for a while, you will end up with tons of researchers leaving your program for wasting their time on already-known (but not fixed) bugs.

Optimizing your internal triage process and SLAs is the best strategy you can go for. All the other workarounds will end up hurting your program sooner or later.

Choose your scope wisely

The term scope usually describes two concepts: the surface (IPs, domains, schemes, ports/protocols, applications) that you allow your researchers to test, and the types of vulnerabilities you are willing to pay for. A wide scope (in both senses of the word) could cause your crowd to focus on unimportant things, while a narrow one could make the assessment very tedious, producing a negative impact on researchers.

When defining a scope, another important concept to keep in mind is its clarity and maintenance cost. The main flows of today’s web applications are usually composed of many subdomains, and maybe half of those are in scope while half are out. As a researcher, it is very tedious to check at every step of the way whether the flow being tested is in or out of scope. On the other hand, keeping an up-to-date list of allowed subdomains (particularly under an agile development methodology) implies a high maintenance cost. A good approach is to simply allow all subdomains that branch off from the main flows you are interested in testing. This significantly reduces maintenance and improves the researcher’s experience.
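The “allow everything under the main flows” approach boils down to a suffix match on domain names. A minimal sketch (the domain names below are made up for illustration):

```python
def in_scope(host: str, scoped_domains: set) -> bool:
    """True if host is one of the scoped domains or any subdomain of them,
    so researchers never have to re-check a per-subdomain allowlist."""
    host = host.lower().rstrip(".")
    return any(
        host == domain or host.endswith("." + domain)
        for domain in scoped_domains
    )
```

For example, with `{"example.com"}` in scope, `api.payments.example.com` matches while a look-alike such as `evil-example.com` does not, because the match requires the dot separator.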

In-scope and out-of-scope vulnerability lists follow the same pattern. Go for a whitelist approach if your out-of-scope vulnerabilities are many, and a blacklist if they are not. Another piece of advice: if a researcher finds a good vulnerability outside your defined scope, it’s always a good idea to give some recognition.

Money and Maturity

Just like any good or service in the economy, bug bounty programs are governed by the law of supply and demand. On one hand, if you pay 1 million US dollars for every valid bug, the whole world will be looking for them. On the other hand, if you just offer a pat on the back, almost no one is going to be interested.

This basic reasoning may suggest that it is a good idea to pay big amounts from the beginning, but it usually is not. At least not without considering other variables, like application security maturity or available budget.

As a universal constraint, all budgets are limited. This means it is our job to make the most of them, so paying big money for low-hanging-fruit bugs is not a good idea (it’s cheaper to run automated scanners). Payment amounts should increase in parallel with your application security maturity.

One quick way to measure maturity is the flow of vulnerabilities discovered over fixed periods of time. For example, if you start by paying 1,000 USD for a critical bug and six months later no one has found a single one, it’s probably time to increase the amount.
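This “flow over fixed periods” signal can be sketched as a simple rate check. The window and threshold below are illustrative assumptions, not recommendations:

```python
from datetime import datetime, timedelta

def should_raise_bounty(found_dates: list, now: datetime,
                        window_days: int = 180, threshold: int = 1) -> bool:
    """Suggest a bounty increase when fewer than `threshold` valid bugs
    of a given severity were found within the last `window_days`."""
    cutoff = now - timedelta(days=window_days)
    recent = [d for d in found_dates if d >= cutoff]
    return len(recent) < threshold
```

Run per severity tier: a dry spell of six months at the critical tier suggests raising that reward, while a steady stream of criticals suggests fixing root causes before paying more.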

Policy: benevolent dictator

Crowds can help companies spot all sorts of creative bugs, but they are even more effective if you specify what you wish to find. Like the benevolent dictator in open source philosophy, you should guide your crowd toward what is important to the company. This is where policies come to life.

A policy is your binding document with your crowd. All the useful information must be there: engagement rules, disclosure rules, payment information, attack surface, out-of-scope vulnerabilities, API signatures, and functional information are some examples.

Keep your policy simple, update it constantly, and focus on what the crowd needs, not on what you need. Use other programs’ policies as inspiration, and always check with your legal team before going public.

Not a silver bullet

Bounty programs are not and must not be used as your sole application security strategy. Remember that any type of offensive exercise proves the presence of vulnerabilities, not their absence. These programs should be just one checkpoint in a more complex SDL strategy.

Another important thing: crowdsourcing is not a replacement for outsourcing. Formal penetration testing must not be replaced by these programs but complemented by them. Remember that most bug bounty programs fit better into a vulnerability assessment category than a penetration testing one.

No crowd, no program

The failure or success of your program will be strongly determined by a single factor: your ability to build and maintain an engaged crowd. This is why the main goal of most commercial bug bounty platforms is building the biggest, most diverse, and most skillful crowd. But despite the help they can provide, it is our job to get as involved as we can with that crowd.

Some tips. First, the crowd doesn’t work for you; you work for them. If you put them at the center of your decision-making process, you will be just fine. Second, the crowd doesn’t just want to talk to us; they want to talk to each other, and “hacking events” are a good place to make this happen. Hosting on-site events or teaming up with security conferences are useful approaches. This gives them space to share ideas and build stronger relationships.

Another good idea is to build a crowd of researchers who are also customers. The more they know your product, the better the bugs they will find. Finally, public recognition is always helpful. You can do this by allowing researchers to make their discoveries public or by publishing a hall of fame with more restricted information, just to name some examples.

References

[1] https://www.pnas.org/content/101/46/16385

[2] https://www.amazon.com/-/es/Jeff-Howe/dp/0307396215

[3] https://craft.co/crowdsourcing-companies?page=4

[4] https://github.com/disclose/bug-bounty-platforms

[5] https://duo.com/labs/research/history-of-vulnerability-disclosure

[6] http://www.dit.upm.es/~pepe/401/5789.htm#!-alone

[7] https://www.nmrc.org/pub/advise/policy.txt

[8] https://cve.mitre.org/docs/docs-2000/cerias.html

[9] http://seclists.org/bugtraq/2000/Jun/182


Alejandro Iacobelli

Software engineer, penetration tester, bounty hunter, and appsec professor. I like debates, strategic or technical. Feel free to contact me to philosophize.