Vulnerability Disclosure on the Shoulders of Search-Engine Giants

Vulnerability disclosure seems to be replacing religion and politics as the top topic for awkward holiday dinners with family. OK, maybe not your family.

There are a few major schools of thought on vulnerability disclosure:

  1. Full Disclosure: Drop bugs on folks without any vendor notice.
  2. Coordinated Disclosure: Drop bugs, but only after at least attempting to work with the vendor to get a patch ready and their PR team on alert.
  3. Disclosure? LOL: Either hold onto bugs for personal use or sell them.

"But Mark, you forgot Responsible Disclosure!", no, I didn't. Responsible disclosure is as annoying of a phrase as ethical hacking. The very thought of trying to objectively qualify actions with "responsible" or "ethical" for these sorts of contexts is exhausting. For our purposes, what people dub as responsible disclosure is usually well aligned with coordinated disclosure.

Looking Backward Before Going Forward

Any good discussion about vulnerability disclosure should start about 15 years ago with RFPolicy from Rain Forest Puppy, better known these days as Jeff Forristal, CTO of Bluebox Security. This text file is considered by most to be the first thoughtfully published document on how to appropriately handle vulnerability disclosure with a vendor.

Even today, researchers describing their own disclosure policies will often mention RFP.

The meat of the document, oversimplified a bit:

"This policy states the 'guidelines' that an individual intends to follow. You basically have 5 days (read below for the definitions and semantics of what is considered a 'day') to return contact to the individual, and must keep in contact with them *at least* every 5 days. Failure to do so will discourage them from working with you and encourage them to publicly disclose the security problem."

A lot has changed since this document was released in 2000: the visibility security research gets, the continued growth of "the scene," the number of folks to drop bugs on (SaaS, anyone?), and the increased use of the CFAA and DMCA against said researchers.

While bug bounty programs are luckily popping up left and right, we've still got a long way to go before disclosure means safe, friendly interactions with vendors rather than something that can land folks in jail for being the messenger of bad news.

Getting Googly with Vulnerability Disclosure

In 2010, Google's security team published a blog post about their own views on critical bug handling and vendor responsiveness. In it, they state that a vendor should fix critical bugs within 60 days of being notified; after that, Google will publicly disclose the issue, fixed or not.

"Accordingly, we believe that responsible disclosure is a two-way street. Vendors, as well as researchers, must act responsibly. Serious bugs should be fixed within a reasonable timescale. Whilst every bug is unique, we would suggest that 60 days is a reasonable upper bound for a genuinely critical issue in widely deployed software. This time scale is only meant to apply to critical issues."

Google has since gone ahead and formed the badass Project Zero and, in line with their earlier timeline, stated:

"Project Zero will report bugs it finds only to the software vendor, and it will give those vendors 60 to 90 days to issue patches before public disclosure. This time frame may be reduced for bugs that appear to be actively exploited."

Yahoo is Evolving Nicely... Thanks to Stamos

Since bringing on Alex Stamos as CISO earlier this year, Yahoo has looked like a completely different company from a security point of view. Whether it's running their bug bounty program through HackerOne or being the first major mail provider to enforce a "reject" policy for DMARC, Stamos has pushed through big changes in very little time.
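
If DMARC is unfamiliar, a "reject" policy is just a record the domain owner publishes in DNS telling receiving mail servers to refuse messages that fail authentication. You can peek at any domain's policy yourself; here's a small sketch using the third-party dnspython package (the _dmarc TXT record location is standard, but the helper itself is mine).

    # pip install dnspython  (this uses the dnspython 2.x resolve() API)
    import dns.resolver

    def dmarc_policy(domain: str) -> str:
        """Fetch the raw DMARC record, e.g. 'v=DMARC1; p=reject; ...'"""
        answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
        return " ".join(r.to_text().strip('"') for r in answers)

    print(dmarc_policy("yahoo.com"))  # look for p=reject in the output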

In their most recent move, Yahoo has stated that they will now disclose bugs within 90 days. This aligns their policy with Google's, and so we have companies best known for their search engine technology leading the charge to simplify vulnerability disclosure and put some heat on vendors to "do the right thing."

OK, and then?

Here's the thing. RFPolicy may be near and dear to the hearts of many security researchers, but a random vendor will laugh at you for using it as your justification for releasing a bug. The "cool vendors" who "get it" will probably already handle the process the right way without batting around a 15-year-old community policy.

These days, we have the work Katie Moussouris did to make ISO 29147 even a thing, but it doesn't cover one of the crucial parts of this equation: the timeline. That makes sense, though; it's not really a battle easily won inside an ISO standard. Agreement is just too messy from too many angles.

Google and Yahoo set precedent. I can e-mail a vendor, tell them that I am following the lead of Internet giants on the disclosure timeline, and hang my hat on that for a while without much fuss. Between ISO 29147 for process and Google/Yahoo for timeline, there's at least hope that vendors being contacted won't just go, "Righttttt, so we're just going to sue you."

The more bugs that come out of Yahoo and Google, the more examples we have of them following their stated timelines and releasing bugs against a wide variety of vendors, big names included. This is good for everyone in research.

The more we can do to support these companies publicly, the more awareness will be given to bug bounty programs, bug handling processes, and disclosure timelines. All of these things make security research a better field to be a part of and maybe keep a couple more people out of legal battles for no good reason.