How I stop worrying and let a project ship
Consider a scenario similar to the one in the previous post: there’s a big security issue in a project that’s due to ship soon. Instead of stopping the project in its tracks, you decide to let it go. Why is that?
Reasons to let it ship
- You can investigate further and discover whether the security issue is really exploitable; it may turn out not to be. As a side benefit, that investigation may yield mitigation ideas. (More on this later.)
- We’re here to build useful products and make money, not to build a perfect work of art. Maybe after stepping back and thinking about it some more you can convince yourself (and others) that it’s actually a reasonably secure design despite the issue you’ve found.
- You don’t want to damage the relationship with the product team because it will hurt the overall mission.
- The number of people on Earth who have the skill set or access necessary to exploit the issue is in the single digits, and none of them are going to be motivated to do so.
Mitigations
Rather than just letting the project ship unmodified, maybe you can work out some mitigating controls. For example:
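One purely hypothetical illustration: if the issue lives behind a specific code path, you can wrap that path with logging and a coarse rate limit, so exploitation attempts are at least visible and throttled while the real fix lands. A minimal Python sketch, where `monitored` and `export_report` are made-up names standing in for the risky feature:

```python
# Hypothetical compensating control: log every call to a risky code
# path and refuse bursts that exceed the expected legitimate rate.
import logging
import time
from collections import deque
from functools import wraps

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("mitigations")

def monitored(max_calls: int, per_seconds: float):
    """Log each call and throttle bursts over the expected rate."""
    def decorator(fn):
        calls = deque()  # timestamps of recent calls

        @wraps(fn)
        def wrapper(*args, **kwargs):
            now = time.monotonic()
            # Drop timestamps that have aged out of the window.
            while calls and now - calls[0] > per_seconds:
                calls.popleft()
            if len(calls) >= max_calls:
                # A burst here may mean someone is probing the known issue.
                log.warning("rate limit hit on %s; possible abuse", fn.__name__)
                raise RuntimeError("temporarily throttled")
            calls.append(now)
            log.info("call to %s", fn.__name__)
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@monitored(max_calls=5, per_seconds=60.0)
def export_report(user_id: str) -> str:
    # Stand-in for the feature with the known weakness.
    return f"report for {user_id}"
```

None of this fixes the underlying issue; it buys detection and time, which is often the whole point of a mitigating control.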
How I stop projects from shipping at big tech companies
Shipping projects at a big tech company is challenging work. Much has been written on the subject (such as Sean Goedecke’s post on how to ship), but it’s not my area of expertise. Instead, I work to make sure that projects don’t ship. Or rather, that projects that shouldn’t ship don’t ship. I do this for the same reason doctors sometimes measure success in terms of “nobody died who shouldn’t have”. Projects should be built to meet the security needs of the business without introducing risks that are likely to make the project cost more than the revenue it brings in. I want to ensure that we aren’t shipping projects with unnecessary risks when we know how to mitigate those risks.
Locked Out
Five or six years ago I created a Facebook account because I needed one for work. I hadn’t used the account since then, but a few weeks ago I decided to try out Threads. To do that, I dug up my old Facebook password, logged in, and associated my new Instagram/Threads account with the old Facebook account. After a few days, I was greeted with this:
The economics of proving exploitability
It depends
When a security researcher or engineer discovers a new vulnerability, what should they do about it? Should they prove exploitability, go part of the way there, or just fix the bug? As with so many things, the answer is “it depends”.
Side note on exploit markets
Nothing in this post is intended to apply to exploit markets where the value is derived from selling or using exploits to achieve the organization’s goals. In those cases the economics are still complex, but they’re very different from anything I’ve ever worked on. I don’t have many references on this, but a recent podcast episode with Mark Dowd was very interesting.
Ask Why
There are many things that go into good security engineering work. One of these is gaining a deep understanding of how things work. In other words, “asking why” rather than accepting an answer at face value.
Before you read this, watch this 7.5-minute clip of an interview with Richard Feynman.
My natural tendency, and I suspect this is true of many others working in security, is to delve deeply into how things work. When I learn of an interesting new product or feature, I want to take it apart. Some of the recent excitement in machine learning and AI has led me both to play with TensorFlow and PyTorch to understand how models are built and to investigate their internals to understand how they work under the hood. The Machine Learning Crash Course, Andrej Karpathy’s Neural Networks: Zero to Hero videos/notebooks, and Jose Duart’s blog post on updates to the TensorFlow threat model are a few good starting points, if you’re so inclined.
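If “playing with PyTorch to understand how models are built” sounds abstract, here is roughly what that looks like in practice: a minimal sketch (a toy of my own, not code from the resources above) that fits a tiny network to synthetic data.

```python
# Minimal PyTorch sketch: fit a two-layer network to a toy regression task.
import torch
from torch import nn

torch.manual_seed(0)

# Synthetic data: y = 3x + 1 plus a little noise.
x = torch.linspace(-1, 1, 128).unsqueeze(1)
y = 3 * x + 1 + 0.1 * torch.randn_like(x)

model = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

for step in range(500):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()   # autograd walks the graph built by the forward pass
    optimizer.step()

print(f"final loss: {loss.item():.4f}")
```

Single-stepping through a loop like this in a debugger is one way to start on the “under the hood” part.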
Contextual Bloat
Background
Bert Hubert recently wrote an excellent piece on software bloat and its downstream effects on security: “Why Bloat Is Still Software’s Biggest Vulnerability” (IEEE Spectrum). I suggest reading that first, as well as Niklaus Wirth’s 1995 article “A Plea for Lean Software”. This post is mostly an attempt at delineating a strategy to improve the situation in the short term.
One of the examples given in Hubert’s post is a garage door opener that runs on 50 million lines of code. That is a very large number, especially when compared to some of the largest open source projects: Chromium, the open-source project behind Google Chrome, is approximately 40 million lines of code, and the Linux kernel contains 46 million.
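Numbers like these are rough at best: a line count depends heavily on what you include (comments, tests, vendored dependencies, generated code). As a naive sketch of how such a count is produced, assuming you only want non-blank lines in a few common source file types:

```python
# Naive line-of-code counter: non-blank lines in common source files.
# Real tools such as cloc also exclude comments and generated code,
# which is one reason published counts for the same project vary.
import sys
from pathlib import Path

SOURCE_SUFFIXES = {".c", ".h", ".cc", ".cpp", ".py", ".rs", ".go", ".java"}

def count_lines(root: str) -> int:
    total = 0
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix in SOURCE_SUFFIXES:
            try:
                text = path.read_text(errors="ignore")
            except OSError:
                continue
            total += sum(1 for line in text.splitlines() if line.strip())
    return total

if __name__ == "__main__":
    print(count_lines(sys.argv[1] if len(sys.argv) > 1 else "."))
```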