The economics of proving exploitability
It depends
When a security researcher or engineer discovers a new vulnerability, what should they do about it? Should they prove exploitability, go part of the way there, or just fix the bug? As with so many things, the answer is “it depends”.
Side note on exploit markets
Nothing in this post is intended to apply to exploit markets where the value is derived from selling or using exploits to achieve the organization’s goals. In those cases the economics are still complex, but they’re very different from anything I’ve ever worked on. I don’t have many references on this, but a recent podcast episode with Mark Dowd was very interesting.
Microeconomics
Microeconomics studies how individuals and businesses make decisions about allocating scarce resources, and how those decisions interact within markets.
Developer perspective
From the developer’s perspective, the resources being allocated are development and testing time. The decision before the developer is whether to spend that time fixing a “potential” vulnerability or to do something else that might provide more value. That value could be monetary, or simply something more fun than fixing a bug. For a simple vulnerability the patch itself might take only a few seconds, with a few more spent creating a unit test and writing a change description, and a few minutes getting peer review before committing the change.
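As a minimal sketch of what that looks like, assuming a classic fixed-size buffer copy (the function, constant, and test below are hypothetical, not from any real codebase), the patch can be a single bounds check and the test only a few lines more:

```c
#include <assert.h>
#include <string.h>

#define BUF_SIZE 64

/* Hypothetical helper that copies attacker-controlled data into a
 * fixed-size buffer; the bounds check is the entire patch. */
static int copy_message(char *dst, const char *src, size_t len) {
    if (len > BUF_SIZE) {   /* the fix: reject oversized input */
        return -1;
    }
    memcpy(dst, src, len);
    return 0;
}

/* The unit test takes only slightly longer to write than the fix. */
int main(void) {
    char buf[BUF_SIZE];
    char big[BUF_SIZE * 2] = {0};

    assert(copy_message(buf, "hello", 5) == 0);
    assert(copy_message(buf, big, sizeof(big)) == -1); /* would overflow */
    return 0;
}
```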
For simple vulnerabilities it is hardly worthwhile to spend time on proving exploitability unless the affected code is in a particularly hot path. In that case understanding the code more deeply and proving exploitability becomes a better value proposition, because the fix itself could mean millions of dollars in additional compute cycles: a check that adds even a few cycles per call, executed billions of times a day across a large fleet, compounds into a real line item. For a more complex vulnerability, or one requiring an architectural fix, the remediation might cost multiple months or years of human effort. In that case it is clearly better for the development team to fully understand exploitability before committing to the work.
Security perspective
From the security researcher’s perspective, the resource being allocated is their own time: whether to spend a potentially significant amount of it proving exploitability, or to spend it finding more vulnerabilities. As both an internal and external bug hunter I have usually tried to err on the side of finding more potential vulnerabilities.
There is certainly a perverse incentive here: I might work on something for a while only to discover that it isn’t exploitable, and it might seem like I have wasted my time. If I had just reported the potential vulnerability and it was patched, my bug output would appear higher even if the bug ultimately wasn’t exploitable. While I don’t like the idea of a bug output that is higher than it should be, the real question for a large organization is whether it is minimizing the overall cost of its vulnerabilities. (Of course, not fixing any of them and not employing security researchers at all is an option, though this has downstream costs that could mean significantly lower revenue, or no business at all.)
Organization perspective
The developer and security researcher perspectives are at odds here. The larger organization will want to minimize overall costs, but achieving this is complex and spans several groups who may never have communicated directly and might not even work for the same company. There can also be direct monetary consideration involved, as in the case of bug bounties; there the developer has a clear incentive to require more proof of exploitability, to ensure they are not rewarding bugs that should be out of scope.
Who is correct?
There’s inevitably going to be some amount of antagonism in the relationship between security researchers and developers. This exists for many reasons, including embarrassment at having introduced a vulnerability, disbelief that anyone will actually be able to exploit it, and a desire by the security researcher to minimize their effort on a given bug so they can move on to the next one.
Sometimes it’s just about people wanting to prove they’re right about something. (To be clear, I am not immune to this.) Ultimately the question of who is correct is, I think, irrelevant. The real question is how we move forward: fixing potential vulnerabilities at minimal cost, while investing in a better understanding of exploitability when the cost of a fix is too high for it to be a simple one.
Organizational Challenges
When meeting resistance to the urgency of fixing individual vulnerabilities, taking the time to craft a full exploit can provide enormous value. It’s very difficult to argue with a bug report that comes with a working exploit, and it can help establish credibility for the future. I have personally presented working exploits to development teams for their products, and it is incredibly rewarding to watch their thinking change. The working relationship afterwards isn’t always perfect due to competing interests, but in my experience it’s usually much improved.
With an organization that already takes vulnerability reports seriously and has a mature security program, simply patching a bug such as a buffer overflow can be the right choice. If the additional overflow check lands in a critical path, however, it can add a significant compute cost in extra CPU cycles. Spread across billions of devices this adds up; across a dozen it might not matter unless a large portion of the machine’s time is spent looping over that code. In that case it can be worth verifying exploitability first, as the sketch below illustrates.
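As a rough sketch, assuming a hypothetical hot loop (the function names and shapes are illustrative, not drawn from any real system), the same safety property can cost very different amounts depending on where the check lives:

```c
#include <stddef.h>
#include <stdint.h>

/* Naive patch: a bounds check on every iteration of a hot loop adds
 * a compare and branch per element. Cheap once, expensive when this
 * loop runs billions of times per second across a fleet. */
static uint64_t sum_checked(const uint32_t *buf, size_t len, size_t cap) {
    uint64_t total = 0;
    for (size_t i = 0; i < len; i++) {
        if (i >= cap) {
            break;          /* the per-iteration safety check */
        }
        total += buf[i];
    }
    return total;
}

/* The same safety property with the check hoisted out of the loop:
 * clamp once up front, then run with zero added per-iteration cost. */
static uint64_t sum_hoisted(const uint32_t *buf, size_t len, size_t cap) {
    uint64_t total = 0;
    size_t n = len < cap ? len : cap;   /* clamp once, up front */
    for (size_t i = 0; i < n; i++) {
        total += buf[i];
    }
    return total;
}

int main(void) {
    uint32_t data[4] = {1, 2, 3, 4};
    /* Both versions agree; the difference is purely in cycle cost. */
    return (int)(sum_checked(data, 4, 4) - sum_hoisted(data, 4, 4));
}
```

When the check cannot be hoisted like this, its per-iteration cost is permanent, and that is exactly when spending researcher time proving (or disproving) exploitability starts to pay for itself.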
Fun
Lastly, writing exploits is fun. They can be a rewarding challenge and a great learning experience. The process can be a roller coaster of emotion that ends in failure, but it’s incredibly satisfying to produce a working exploit. From that perspective alone there is clear value in creating the full exploit.
Conclusion
I don’t yet have an easy answer to this. In my experience the best outcomes for the organization come when security and development teams work closely together to ensure that each other’s perspectives and current priorities are well understood. A developer who doesn’t know how critical a vulnerability really is may not understand the impact of leaving it unfixed. Similarly, a security researcher who doesn’t understand the current business priorities won’t have as much empathy as they should for the pressures on the development team, and it may not be obvious to them where to draw the line between reporting a bug without proof and demonstrating that it is likely or certainly exploitable.
By participating earlier in the design and development phases, security teams can help minimize the overall cost of shipping products to customers. This is best achieved by establishing a close working relationship with development teams.