
Brown M&M's, Tesla, and Security

August 6, 2015

Last week a friend drove me to the airport in his new Tesla. I’d never been in one before so I took the chance to poke around a little. The Tesla Model S has a lot of bells and whistles many cars lack. Most visibly, there is a huge touchscreen computer in the center console which includes a web browser. There is also a built-in 3G or LTE modem so the company can ship over-the-air updates and remotely access and monitor the car.

The first thing I did was visit SSL Labs in the browser to check if the browser’s TLS implementation was up to date.

Tesla’s browser is based on a version of Webkit that was released in 2011. The TLS configuration is completely broken, leaving it open to a variety of attacks that would compromise the authenticity and confidentiality of data transferred. This is a concern when visiting websites, and I’d advise against doing any online banking or private surfing from your Tesla’s browser. But it’s a much bigger concern when you consider what it says about Tesla as a company.
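
SSL Labs automates the full protocol and cipher analysis, but one small piece of it, checking which protocol versions an endpoint will negotiate, fits in a few lines of Go. The sketch below probes a server rather than grading a client the way the SSL Labs browser test does, and the hostname is only a placeholder:

// tlsprobe: report which TLS protocol versions a server will negotiate.
// A rough sketch of one piece of what SSL Labs automates; the target
// hostname is a placeholder.
package main

import (
    "crypto/tls"
    "fmt"
    "net"
    "time"
)

func main() {
    host := "example.com" // placeholder target
    versions := []struct {
        name string
        id   uint16
    }{
        {"TLS 1.0", tls.VersionTLS10},
        {"TLS 1.1", tls.VersionTLS11},
        {"TLS 1.2", tls.VersionTLS12},
    }
    dialer := &net.Dialer{Timeout: 5 * time.Second}
    for _, v := range versions {
        conn, err := tls.DialWithDialer(dialer, "tcp", host+":443", &tls.Config{
            MinVersion: v.id,
            MaxVersion: v.id, // force exactly this protocol version
        })
        if err != nil {
            fmt.Printf("%s: rejected (%v)\n", v.name, err)
            continue
        }
        conn.Close()
        fmt.Printf("%s: accepted\n", v.name)
    }
}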

Why cars need to be secure

Vehicles in general and cars in particular should be held to the highest possible standard with regard to security. We trust not only our lives but the lives of everyone else on the road to them every day. As cars become more integrated with computers and transceivers, the possibility for something to go seriously wrong also increases. The most recent and serious example of this so far is the remote takeover vulnerability recently disclosed in Jeeps.

Disclaimer: I don’t own or have regular access to a Tesla, their software is closed source, they don’t ship a software simulator, and any attempt to poke around more deeply might brick a car that starts at $70,000. I’d love to explore more, but I can’t. Here are some (worst case) possibilities that I can’t exclude yet:

The touch screen system controls both entertainment and functional components of the car. Given what we know about car security it’s possible that vulnerabilities in the web browser could be used to pivot out into more critical functions just by visiting a web site.

Tesla can also ship over-the-air updates to cars in the field. I’m immediately curious if their update framework relies on the same TLS configuration as the web browser. If it does, a malicious attacker could tamper with an update and do anything from bricking the car to driving it off the road.
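
I have no insight into how Tesla's updater actually authenticates updates. The standard defense, though, is to make update authenticity independent of the transport: sign each image with an offline key and have the car verify the signature against a public key pinned at manufacture time before installing anything. A minimal sketch of that idea, assuming ed25519 signatures and made-up file names:

// verifyupdate: check a firmware image against a detached ed25519 signature
// and a pinned public key before handing it to the installer. A sketch only;
// the key, file names, and format are illustrative assumptions.
package main

import (
    "crypto/ed25519"
    "encoding/hex"
    "fmt"
    "log"
    "os"
)

func main() {
    // In a real system this key would be baked into the vehicle at build time.
    pubHex := "3d4017c3e843895a92b70aa74d1b7ebc9c982ccf2ec4968cc0cd55f12af4660c"
    pub, err := hex.DecodeString(pubHex)
    if err != nil || len(pub) != ed25519.PublicKeySize {
        log.Fatal("invalid pinned public key")
    }

    image, err := os.ReadFile("update.img") // hypothetical update payload
    if err != nil {
        log.Fatal(err)
    }
    sig, err := os.ReadFile("update.img.sig") // detached signature shipped alongside
    if err != nil {
        log.Fatal(err)
    }

    // Refuse to touch the payload unless the signature checks out,
    // no matter how it was transported.
    if !ed25519.Verify(ed25519.PublicKey(pub), image, sig) {
        log.Fatal("update rejected: bad signature")
    }
    fmt.Println("signature ok, safe to hand off to the installer")
}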

Brown M&M’s

Again, I have no way to determine the extent of the vulnerability, and it’s possible that the Tesla security team fully sandboxed the browser from the start. I’m not optimistic and here’s why:

Van Halen famously had a clause in their performance contract requiring a bowl of M&M candies be provided backstage, but that the brown M&M’s be removed. Like cars, concerts are technically complicated and, if poorly executed, dangerous to the performers.

From David Lee Roth’s autobiography:

Van Halen was the first band to take huge productions into tertiary, third-level markets. We’d pull up with nine eighteen-wheeler trucks, full of gear, where the standard was three trucks, max. And there were many, many technical errors — whether it was the girders couldn’t support the weight, or the flooring would sink in, or the doors weren’t big enough to move the gear through.

The contract rider read like a version of the Chinese Yellow Pages because there was so much equipment, and so many human beings to make it function. So just as a little test, in the technical aspect of the rider, it would say “Article 148: There will be fifteen amperage voltage sockets at twenty-foot spaces, evenly, providing nineteen amperes …” This kind of thing. And article number 126, in the middle of nowhere, was: “There will be no brown M&M’s in the backstage area, upon pain of forfeiture of the show, with full compensation.”

So, when I would walk backstage, if I saw a brown M&M in that bowl … well, line-check the entire production. Guaranteed you’re going to arrive at a technical error. They didn’t read the contract. Guaranteed you’d run into a problem. Sometimes it would threaten to just destroy the whole show. Something like, literally, life-threatening.

If the security of something so basic and so visible to customers is so broken, I’m deeply suspicious about the rest of the car’s security.


Tesla can and should make the most secure cars in the world.

They are designed to be digital from the ground up. The company has no legacy products, no overbearing parent company, and is trying to earn the trust of drivers as it presents them with a radical new set of technologies.

The PR costs of bad security alone make it a worthwhile investment, and Tesla has an unusually tech-savvy customer base. Companies like Google invest massive amounts of money into the security of their technology, but when Google ships insecure software, very few people die. The stakes for transportation are much higher.

Tesla also has the unusual ability to ship over-the-air updates and immediately fix newly-discovered vulnerabilities in any part of its software. However, zero CVE identifiers have been issued for Tesla products and Tesla has published no security advisories, despite 45 bugs being rewarded through their bounty program.

Here’s how to fix it:

First, open source everything. Tesla cars are remarkably closed today. Their vehicles are also very expensive and there are relatively few on the road. This combination makes it very difficult for security researchers to experiment.

Their current position with closed-source software is particularly at odds with the company’s stated patent philosophy:

Yesterday, there was a wall of Tesla patents in the lobby of our Palo Alto headquarters. That is no longer the case. They have been removed, in the spirit of the open source movement, for the advancement of electric vehicle technology.

We believe that applying the open source philosophy to our patents will strengthen rather than diminish Tesla’s position.

Elon Musk

Second, Tesla should modernize their security bounty program. Bounty programs are important for vendors because they encourage both research and coordinated disclosure. Surely Tesla wants newly discovered vulnerabilities reported to them first instead of at DEF CON or sold to the highest bidder by vulnerability brokers.

This isn’t a new idea, at least to Musk:

You want to be extra rigorous about making the best possible thing you can. Try to find everything that’s wrong with it, and fix it. Seek negative feedback, particularly from friends.

Their current maximum payout is $1,000 and specifically excludes issues related to TLS configuration. Compare this to Google Chrome’s $50,000 maximum, and to the fact that it costs $70,000 just to get your hands on a Tesla.

Update (2015-08-13): At some point after this post was published, the maximum payout from Tesla was bumped to $10,000.

United Airlines recently started offering security bounties in airline miles. Tesla would attract a massive audience of hobbyist and professional security researchers if they offered Powerwalls or even a Model S in exchange for disclosing critical vulnerabilities.

Tesla wants us to believe that they’re the future of cars. At least when it comes to security, it’s time they start acting like it.

For more details on attacks against Tesla cars, you should read this article detailing some vulnerabilities disclosed today by Kevin Mahaffey and Marc Rogers.

Docker Image Insecurity

December 23, 2014

Recently while downloading an “official” container image with Docker I saw this line:

ubuntu:14.04: The image you are pulling has been verified

I assumed this referenced Docker’s heavily promoted image signing system and didn’t investigate further at the time. Later, while researching the cryptographic digest system that Docker tries to secure images with, I had the opportunity to explore further. What I found was a total systemic failure of all logic related to image security.

Docker’s report that a downloaded image is “verified” is based solely on the presence of a signed manifest, and Docker never verifies the image checksum from the manifest. An attacker could provide any image alongside a signed manifest. This opens the door to a number of serious vulnerabilities.

Images are downloaded from an HTTPS server and go through an insecure streaming processing pipeline in the Docker daemon:

[decompress] -> [tarsum] -> [unpack]

This pipeline is performant but completely insecure. Untrusted input should not be processed before verifying its signature. Unfortunately Docker processes images three times before checksum verification is supposed to occur.

However, despite Docker’s claims, image checksums are never actually checked. This is the only section [1] of Docker’s code related to verifying image checksums, and I was unable to trigger the warning even when presenting images with mismatched checksums.

if img.Checksum != "" && img.Checksum != checksum {
    log.Warnf("image layer checksum mismatch: computed %q, expected %q",
        checksum, img.Checksum)
}

Insecure processing pipeline


Decompression

Docker supports three compression algorithms: gzip, bzip2, and xz. The first two use the Go standard library implementations, which are memory-safe, so the exploit types I’d expect to see here are denial of service attacks like crashes and excessive CPU and memory usage.

The third compression algorithm, xz, is more interesting. Since there is no native Go implementation, Docker execs the xz binary to do the decompression.

The xz binary comes from the XZ Utils project and is built from approximately [2] twenty thousand lines of C code. C is not a memory-safe language. This means that malicious input to a C program, in this case the Docker image that XZ Utils is unpacking, could potentially execute arbitrary code.

Docker exacerbates this situation by running xz as root. This means that if there is a single vulnerability in xz, a call to docker pull could result in the complete compromise of your entire system.


Tarsum

The use of tarsum is well-meaning but completely flawed. In order to get a deterministic checksum of the contents of an arbitrarily encoded tar file, Docker decodes the tar and then hashes specific portions, while excluding others, in a deterministic order.

Since this processing is done in order to generate the checksum, it is decoding untrusted data which could be designed to exploit the tarsum code [3]. Potential exploits here are denial of service as well as logic flaws that could cause files to be injected, skipped, processed differently, modified, appended to, etc. without the checksum changing.
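
To make the shape of the problem concrete, here is a simplified, hypothetical tarsum-style digest in Go (not Docker's actual implementation): the untrusted tar stream has to be fully parsed, with every header field and file body fed through the hash, before there is even a checksum available to compare against anything.

// A simplified, illustrative tarsum-style digest. Not Docker's real tarsum;
// it exists only to show that the untrusted archive must be decoded in full
// before a checksum can be produced.
package tarsumsketch

import (
    "archive/tar"
    "crypto/sha256"
    "encoding/hex"
    "fmt"
    "io"
)

func Sum(r io.Reader) (string, error) {
    tr := tar.NewReader(r) // parsing untrusted input starts here
    h := sha256.New()
    for {
        hdr, err := tr.Next()
        if err == io.EOF {
            break
        }
        if err != nil {
            return "", err
        }
        // Hash a deterministic subset of header fields...
        fmt.Fprintf(h, "name=%s;mode=%d;uid=%d;gid=%d;size=%d;typeflag=%c;",
            hdr.Name, hdr.Mode, hdr.Uid, hdr.Gid, hdr.Size, hdr.Typeflag)
        // ...followed by the file contents.
        if _, err := io.Copy(h, tr); err != nil {
            return "", err
        }
    }
    return "tarsum+sha256:" + hex.EncodeToString(h.Sum(nil)), nil
}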


Unpacking

Unpacking consists of decoding the tar and placing files on the disk. This is extraordinarily dangerous: at the time of writing, three other vulnerabilities have already been reported [4] in the unpack stage.

There is no situation where data that has not been verified should be unpacked onto disk.


libtrust

libtrust is a Docker package that claims to provide “authorization and access control through a distributed trust graph.” Unfortunately, no specification appears to exist; it looks like it implements some parts of the JavaScript Object Signing and Encryption specifications along with other unspecified algorithms.

Downloading an image with a manifest signed and verified using libtrust is what triggers this inaccurate message (only the manifest is checked, not the actual image contents):

ubuntu:14.04: The image you are pulling has been verified

Currently only “official” image manifests published by Docker, Inc are signed using this system, but from discussions I participated in at the last Docker Governance Advisory Board meeting [5], my understanding is that Docker, Inc is planning on deploying this more widely in the future. The intended goal is centralization with Docker, Inc controlling a Certificate Authority that then signs images and/or client certificates.

I looked for the signing key in Docker’s code but was unable to find it. As it turns out the key is not embedded in the binary as one would expect. Instead the Docker daemon fetches it over HTTPS from a CDN before each image download. This is a terrible approach as a variety of attacks could lead to trusted keys being replaced with malicious ones. These attacks include but are not limited to: compromise of the CDN vendor, compromise of the CDN origin serving the key, and man in the middle attacks on clients downloading the keys.


Remediation

I reported some of the issues I found with the tarsum system before I finished this research, but so far nothing I have reported has been fixed.

Some steps I believe should be taken to improve the security of the Docker image download system:

Drop tarsum and actually verify image digests

Tarsum should not be used for security. Instead, images must be fully downloaded and their cryptographic signatures verified before any processing takes place.
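
A minimal sketch of the order of operations this implies, with a placeholder layer file and a digest that would come from an already-verified manifest:

// Verify a fully downloaded layer against the digest from a verified
// manifest before any decompression or unpacking happens. A sketch; the
// path and digest are placeholders.
package main

import (
    "crypto/sha256"
    "encoding/hex"
    "io"
    "log"
    "os"
)

func main() {
    expected := "a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4" // from the manifest

    f, err := os.Open("layer.tar.gz") // downloaded blob, untouched so far
    if err != nil {
        log.Fatal(err)
    }
    defer f.Close()

    h := sha256.New()
    if _, err := io.Copy(h, f); err != nil {
        log.Fatal(err)
    }
    actual := hex.EncodeToString(h.Sum(nil))

    if actual != expected {
        log.Fatalf("digest mismatch: got %s, want %s; refusing to process", actual, expected)
    }
    // Only now is it safe to decompress and unpack the layer.
    log.Println("digest ok")
}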

Add privilege isolation

Image processing steps that involve decompression or unpacking should be run in isolated processes (containers?) that have only the bare minimum required privileges to operate. There is no scenario where a decompression tool like xz should be run as root.
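
As an illustration, on Linux the daemon can at least drop the decompressor to an unprivileged user before handing it untrusted input. This is a sketch of the idea rather than a complete sandbox; real isolation would add namespaces, seccomp filters, and resource limits.

//go:build linux

// Run xz as an unprivileged user instead of root. A sketch only.
package sandbox

import (
    "io"
    "os/exec"
    "syscall"
)

// DecompressXZ streams in through "xz -d -c", running the child as
// uid/gid 65534 ("nobody" on many distributions) with an empty environment.
func DecompressXZ(in io.Reader, out io.Writer) error {
    cmd := exec.Command("xz", "-d", "-c")
    cmd.Stdin = in
    cmd.Stdout = out
    cmd.Env = []string{} // no inherited environment
    cmd.SysProcAttr = &syscall.SysProcAttr{
        Credential: &syscall.Credential{Uid: 65534, Gid: 65534},
    }
    return cmd.Run()
}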

Replace libtrust

Libtrust should be replaced with The Update Framework (TUF), which is explicitly designed to solve the real problems around signing software binaries. The threat model is very comprehensive and addresses many things that have not been considered in libtrust. There is a complete specification as well as a reference implementation written in Python, and I have begun work on a Go implementation and welcome contributions.

As part of adding TUF to Docker, a local keystore should be added that maps root keys to registry URLs so that users can have their own signing keys that are not managed by Docker, Inc.
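
What such a keystore might look like is sketched below, assuming a JSON file mapping registry URLs to hex-encoded ed25519 root keys. The format and path are illustrative, not a proposal for Docker's actual on-disk layout.

// A user-controlled keystore mapping registry URLs to pinned root public
// keys, so signatures chain to a key the user chose rather than one fetched
// from a vendor CDN. Format and paths are assumptions.
package keystore

import (
    "crypto/ed25519"
    "encoding/hex"
    "encoding/json"
    "fmt"
    "os"
)

// Example ~/.docker/rootkeys.json:
//   {"https://registry.example.com": "3d4017c3...4660c"}
type Keystore map[string]string

func Load(path string) (Keystore, error) {
    data, err := os.ReadFile(path)
    if err != nil {
        return nil, err
    }
    var ks Keystore
    if err := json.Unmarshal(data, &ks); err != nil {
        return nil, err
    }
    return ks, nil
}

// RootKey returns the pinned root key for a registry, or an error if the
// user has never pinned one, in which case nothing from that registry
// should be trusted.
func (ks Keystore) RootKey(registry string) (ed25519.PublicKey, error) {
    keyHex, ok := ks[registry]
    if !ok {
        return nil, fmt.Errorf("no root key pinned for %s", registry)
    }
    key, err := hex.DecodeString(keyHex)
    if err != nil || len(key) != ed25519.PublicKeySize {
        return nil, fmt.Errorf("invalid root key for %s", registry)
    }
    return ed25519.PublicKey(key), nil
}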

I would like to note that using non-Docker, Inc hosted registries is a very poor user experience in general. Docker, Inc seems content with relegating third party registries to second class status when there is no technical reason to do so. This is a problem both for the ecosystem in general and the security of end users. A comprehensive, decentralized security model for third party registries is both necessary and desirable. I encourage Docker, Inc to take this into consideration when redesigning their security model and image verification system.


Conclusion

Docker users should be aware that the code responsible for downloading images is shockingly insecure. Users should only download images whose provenance is without question. At present, this does not include “trusted” images hosted by Docker, Inc, including the official Ubuntu and other base images.

The best option is to block access to the Docker registry locally, and download and verify images manually before importing them into Docker using docker load. Red Hat’s security blog has a good post about this.

Thanks to Lewis Marshall for pointing out the tarsums are never verified.

  1. Checksum code context.

  2. cloc says 18,141 non-blank, non-comment lines of C and 5,900 lines of headers in v5.2.0.

  3. Very similar bugs have been found in Android, which allowed arbitrary files to be injected into signed packages, and in the Windows Authenticode signature system, which allowed binary modification.

  4. Specifically: CVE-2014-6407, CVE-2014-9356, and CVE-2014-9357. There were two Docker security releases in response.

  5. See page 8 of the notes from the 2014-10-28 DGAB meeting.

SMS Vulnerability in Twitter, Facebook and Venmo

December 3, 2012

Update: Twitter has fixed the issue for users of short codes. Users that use a “long code” should enable the PIN code in their account.

Twitter users with SMS enabled are vulnerable to an attack that allows anyone to post to their account. The attacker only needs knowledge of the mobile number associated with a target’s Twitter account. Messages can then be sent to Twitter with the source number spoofed.

Like email, the originating address of an SMS cannot be trusted. Many SMS gateways allow the originating address of a message to be set to an arbitrary identifier, including someone else’s number.

Facebook and Venmo were also vulnerable to the same spoofing attack, but the issues were resolved after I disclosed them to their respective security teams.



Who is affected

Users

Users of Twitter that have a mobile number associated with their account and have not set a PIN code are vulnerable. All of the Twitter SMS commands can be used by an attacker, including the ability to post tweets and modify profile info.

Service Providers

All services that trust the originating address of SMS messages implicitly and are not using a short code are vulnerable.



Mitigation

Users

Until Twitter removes the ability to post via non-short code numbers, users should enable PIN codes (if available in their region) or disable the mobile text messaging feature.

Twitter has a PIN code feature that requires every message to be prepended with a four-digit alphanumeric code. This feature mitigates the issue, but is not available to users inside the United States.

Service Providers

The cleanest solution for providers is to use only an SMS short code to receive incoming messages. In most cases, messages to short codes do not leave the carrier network and can only be sent by subscribers. This removes the ease of spoofing via SMS gateways.

An alternative, less user-friendly but more secure solution is to require a challenge-response for every message. After receiving an SMS, the service would reply with a short alphanumeric string that needs to be repeated back before the message is processed.
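
A sketch of what that could look like behind an inbound-SMS webhook follows. The form field names and the outgoing send function are placeholders rather than any particular gateway's API; the point is that a command only executes when the claimed sender echoes back a token that was actually delivered to their number.

// Challenge-response for inbound SMS commands: the first message gets a
// short one-time token in reply, and the command only runs when the sender
// echoes the token back. The webhook fields and sendSMS are placeholders.
package main

import (
    "crypto/rand"
    "encoding/hex"
    "fmt"
    "log"
    "net/http"
    "sync"
)

var (
    mu      sync.Mutex
    pending = map[string]struct{ token, command string }{} // keyed by claimed sender number
)

func newToken() string {
    b := make([]byte, 3)
    if _, err := rand.Read(b); err != nil {
        panic(err)
    }
    return hex.EncodeToString(b) // e.g. "a1f30c"
}

func sendSMS(to, body string) { fmt.Printf("-> %s: %s\n", to, body) } // placeholder

func inbound(w http.ResponseWriter, r *http.Request) {
    from := r.FormValue("from") // claimed sender; cannot be trusted on its own
    body := r.FormValue("body")

    mu.Lock()
    defer mu.Unlock()

    if p, ok := pending[from]; ok && body == p.token {
        // Only someone who can actually receive SMS at this number sees the token.
        delete(pending, from)
        fmt.Printf("executing %q for %s\n", p.command, from)
        return
    }
    token := newToken()
    pending[from] = struct{ token, command string }{token, body}
    sendSMS(from, "Reply with "+token+" to confirm: "+body)
}

func main() {
    http.HandleFunc("/sms/inbound", inbound)
    log.Fatal(http.ListenAndServe(":8080", nil))
}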

Disclosure Timelines


Twitter

The issue I filed was initially inspected by a member of their security team, but was then routed to the normal support team, who did not believe that SMS spoofing was possible. I then reached out directly to someone on the security team who said that it was an “old issue” but that they did not want me to publish until they got “a fix in place”. I received no further communication from Twitter.

17 Aug 2012 I notified Twitter about the vulnerability via their web form.
20 Aug 2012 Twitter Security routed my report to their mobile support team.
6 Sep 2012 Twitter asked me not to publish until they have fixed the issue.
15 Oct 2012 I requested an update on the issue and received no response.
28 Nov 2012 I notified Twitter that I would publicly disclose this issue.
4 Dec 2012 I received confirmation that the issue had been resolved.


Facebook

Initially Facebook did not respond to my report on their security vulnerability page. I then emailed a friend who works at Facebook, who facilitated my contact with their security team.

19 Aug 2012 I notified Facebook about the vulnerability via their web form.
6 Sep 2012 I received a response after getting a friend on the engineering team to bump the issue internally.
28 Nov 2012 I received confirmation that the issue had been resolved.

Disclosure: I will receive a bounty from Facebook for finding and reporting this issue to them. The Facebook bounty program requires responsible disclosure and time to resolve internally in “good faith” before publishing.


Venmo

I initially disclosed this issue to Venmo support, as they do not have a published security contact. When I did not receive a response, I notified the Braintree security team (Braintree recently acquired Venmo), who responded very promptly.

29 Nov 2012 I notified Venmo support about the vulnerability.
30 Nov 2012 I notified Braintree security and received a response within 40 minutes.
1 Dec 2012 I received confirmation that Venmo SMS payments have been disabled, mitigating the vulnerability.

Security Disclosure Policy Best Practices

July 6, 2012

Every company with public-facing web applications needs a clear security disclosure policy. This policy serves three main purposes. First, it tells people who discover a vulnerability how to proceed and what your response will be, so they can report problems easily. Second, it tells your employees exactly how to process and respond to these reports. Most importantly, a clear public security policy lets your users know in advance how you will respond to newly discovered vulnerabilities.

As I discovered last week when notifying Heroku of a vulnerability in their build system, even the most progressive, respected companies don’t always get it 100% right.

No matter how well designed your application, users will find bugs. When (not if) that happens, it is vital that the person who discovers the vulnerability knows how to report it.

Bug reporting is a funnel, just like every other part of your web application. If users get confused or decide you are not worth their time, they will leave the site without converting. In this case your “users” may be curious hackers, security researchers, or even your own customers. It’s your job to make reporting as easy, painless, and rewarding as possible. If you fail, you risk not finding out about the vulnerability, which could endanger your users’ data and the company’s future. Remember, the person reporting a security flaw is being friendly and doing you a big favor. Treat them accordingly.

Here’s how:

Security Page

At some point, someone will need to report a security vulnerability. You don’t want to have that conversation in public or in the clear, and it is not a problem you want to outsource to Twitter, Facebook, or a forum. The first thing on your security page should be a dedicated email address for security reports. The second is a PGP public key, which can be used to send you encrypted messages. This page should always be served over HTTPS. Only by encrypting each step in the communication can the sender be sure that the person they are emailing is who they claim to be and that the report has not been intercepted. If the PGP key is not posted already, a responsible hacker might have to email a request for one to be posted, which wastes valuable time. The person discovering the vulnerability might become unavailable by the time the key is posted, causing further delays.


The policies listed on this page should be clear, concise, and friendly. The person who discovers a vulnerability is already in a difficult position and it is important they understand your policies.


A person who accidentally discovers a vulnerability may need to experiment further to see how far the problem goes, and even whether it is in fact a vulnerability. Often it is not possible to determine the nature of a vulnerability without trying to do something that should not normally be possible or allowed. Unfortunately, parts of this process often run afoul of the law in some jurisdictions, meaning that if the hacker wants to report a problem to a company, s/he needs to admit to participating in a possibly illegal activity. Some companies make this difficult situation worse by threatening legal action outright, or by threatening it if the disclosing hacker refuses to sign a retroactive NDA. Minor penetration is a routine part of security testing and user exploration. Regardless of the legal threats available to your company, using them against users, hackers, or researchers who acted without malicious intent is never advisable.

For these reasons it is vital that you indemnify and hold blameless anyone who penetrates your site and, in the process of exploring or experimenting, extracts a small amount of sensitive data, provided they promptly notify you and then destroy any data collected. Without a clear and binding promise of immunity from future prosecution, those who discover vulnerabilities may not notify you at all. Publishing a clear policy that protects the party disclosing an exploit is the first and most important step in building trust.


Details about a vulnerability are worth a great deal of money. This kind of information is valuable not only to your competitors, but also on growing black and grey markets. Many other parties would pay for the details of a vulnerability in order to exploit it themselves. This means that anyone discovering a vulnerability on your site could sell it easily. Knowledge of a vulnerability will always be worth more to your company than to any of these other buyers. However, a hacker asking for money either before or after making a disclosure can make everyone uncomfortable. Avoid this situation entirely by advertising generous bounties ahead of time. Here’s what some companies are currently offering:

Facebook Starting at $500
Mozilla $3,000 flat fee
Google $500-$20,000 depending on severity

The free market determines how much a vulnerability in your system is worth. For products with large user bases, the prices can get very high.

The best companies also make their pre-release code or a staging server with dummy data available for testing with even higher rewards. In this case exploits can be found and remedied before users’ data is ever compromised.


Without the threat of full disclosure, responsible disclosure would not work, and vendors would go back to ignoring security vulnerabilities.

Bruce Schneier

Do not ever try to get hackers or researchers to take a bounty in exchange for not publishing their discovery (and it is their discovery). Unfortunately some companies offer bounties only in exchange for silence. Don’t attach strings to bounties. Many hackers prize recognition higher than remuneration, and there’s no need to deprive them of both. Similarly, security researchers make their living off of their reputation for discovering holes. A great policy is to offer a reasonable no-strings-attached bounty and then double it for coordinated disclosure.

Coordinated Disclosure is when the researcher who discovers a vulnerability notifies the vendor of the product and allows them a reasonable amount of time to fix it before disclosing publicly.

No matter what, remember that someone disclosing a vulnerability to you directly is usually trying to do the right thing.


Respond very quickly. When the crisis ends, the discussion will be about how your company handled the situation. It is crucial that the relevant team begins work immediately instead of waiting until the next day, or for a lawyer’s permission to talk to the person who reported the problem. Any delay will look like either incompetence or a conspiracy to cover up the vulnerability. You can’t afford for customers or the press to think either. Your customers will be busy checking their own data and deciding whether they should change providers; the timing of your response should not give them another reason to leave.

Emails to your security address are your company’s highest priority, period. Your lawyer, investors, and mother can all wait. Messages to that address should be forwarded to the highest ranking member of your security/ops/engineering team on call and the CEO. If you don’t have a 24-hour on-call rotation, create a script with Twilio, Tropo, or Adhearsion that wakes up the CEO or founders. Whoever needs to respond to the problem gets woken up and/or called back from vacation. You asked your customers to trust you with their data (and often their own customers’ as well). Many of those affected would gladly wake up themselves to fix it if they were able. You have an obligation to respond as quickly as you are physically able.
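
As a starting point, here is a rough sketch of that script in Go against Twilio's REST calls endpoint. The credentials, phone numbers, and TwiML URL are placeholders you would supply yourself; Tropo and Adhearsion offer similar capabilities.

// Place a wake-up voice call through Twilio's REST API when mail hits the
// security address. A sketch; credentials, numbers, and the TwiML URL are
// placeholders.
package main

import (
    "log"
    "net/http"
    "net/url"
    "strings"
)

func main() {
    accountSID := "ACxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" // placeholder
    authToken := "your_auth_token"                     // placeholder

    form := url.Values{}
    form.Set("To", "+15555550100")   // on-call engineer or CEO
    form.Set("From", "+15555550199") // your Twilio number
    // TwiML at this URL tells Twilio what to say when the call connects.
    form.Set("Url", "https://example.com/security-alert.xml")

    endpoint := "https://api.twilio.com/2010-04-01/Accounts/" + accountSID + "/Calls.json"
    req, err := http.NewRequest("POST", endpoint, strings.NewReader(form.Encode()))
    if err != nil {
        log.Fatal(err)
    }
    req.SetBasicAuth(accountSID, authToken)
    req.Header.Set("Content-Type", "application/x-www-form-urlencoded")

    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()
    log.Println("Twilio responded:", resp.Status)
}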


Hearing about a vulnerability is a high stress time for your team. Try not to make it worse. They already feel like they screwed up no matter what caused the vulnerability. Don’t have a blame culture–make it clear you’re all on the same team and share a single mission: doing the best you can for your customers.

Your security team needs to be in direct contact immediately with the person who discovered the vulnerability. Any attempts at legal maneuvering will slow down their response and distract the only people who can fix the problem(s). Make it clear that your team’s responsibility is to patch the application, not manage your company’s strategic communications. If developers are free to treat each other as peers without a lawyer leaning over their shoulders, the patch will come sooner.


Customers need to know about a problem once it has been patched. Public disclosure means you get caught with your pants down. Discovery without disclosure means someone has a backdoor to your users’ lives and businesses. Which do you think they would prefer? Your obligation to your customers comes first, regardless of any embarrassment it might cause. If world governments are not able to keep their embarrassing secrets, it is unlikely you will be able to either (for long). Honest and direct communication builds user loyalty; hiding failures that affect your users causes far bigger problems.


From best to worst, the possible outcomes are:

  1. All vulnerabilities caught in pre-release stage.
  2. Vulnerabilities in production code caught and fixed before public disclosure.
  3. Vulnerabilities caught and disclosed to users and company simultaneously.
  4. Vulnerabilities discovered by malicious party and exploited without company or user knowledge.

Always remember when you are dealing with disclosure of a security issue that things could be much worse–you could not know about it.

Vulnerabilities in Heroku’s Build System

July 3, 2012

Update: Heroku’s official response.

Last week I discovered a major security flaw in the Heroku Cedar stack build system. This vulnerability exposed sensitive information including API keys, private keys and server credentials.

Once I realized the extent of the vulnerability, I immediately informed Heroku. I have been in regular contact with their security team and the problem has since been fixed.

Understanding the issue requires operational knowledge of the Cedar stack build system.

Cedar Build Process

Since Heroku runs on Heroku, after receiving a git push of an application, the build request is dispatched to a regular Heroku app named Codon that handles builds. Codon runs a buildpack which compiles the application so that it can be deployed.

Normally apps running on Heroku are entirely isolated from each other using Linux Containers, but to perform builds, Codon runs untrusted buildpack code inside its own container.

Source Code Exposure

I encountered a Ruby exception and backtrace from the Heroku build system while experimenting with custom buildpacks. Ruby backtraces look like this:

app.rb:2:in `foo': undefined method `a' for nil:NilClass (NoMethodError)
        from app.rb:5:in `<main>'

Backtraces include the paths to the source files that encountered the exception. This pointed me to the source files for Codon, which indicated the possibility of gaining read access to the code.

I then ran a custom buildpack that copied the source code into my Heroku app and verified that it was possible to view the source code of Codon.

While examining the source code I discovered that there was another vulnerability that was much more serious than source code exposure.

Sensitive Credential Exposure

Like most Heroku apps, Codon uses environment variables to configure runtime options including sensitive credentials. This ensures that credentials are not checked into version control. However, due to the constraints of Heroku containers, Codon is running as the same user as the buildpack, which is untrusted. This allows the buildpack to dump the environment variables of Codon from the Linux process table:

cat /proc/*/environ

The environment variables exposed included critical credentials such as internal API keys, an SSH private key with access to source code repositories, Redis connection details, and a key with access to their Campfire account.
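
For comparison, a build service that cannot fully isolate builds can still avoid handing its own secrets to untrusted code: replace the child process's environment instead of letting it inherit the parent's, and run the buildpack under a different user so the daemon's /proc entries are not readable. A hypothetical sketch, not Heroku's implementation:

//go:build linux

// Run a buildpack's compile step with a scrubbed environment and a
// different uid. A hypothetical sketch; the paths and uid are placeholders.
package builder

import (
    "os/exec"
    "syscall"
)

func runBuildpack(buildDir string) error {
    cmd := exec.Command("bin/compile", buildDir)
    cmd.Dir = buildDir
    // Pass only what the build needs, never the daemon's own credentials.
    cmd.Env = []string{
        "HOME=" + buildDir,
        "PATH=/usr/local/bin:/usr/bin:/bin",
    }
    // A separate uid keeps the daemon's /proc/<pid>/environ unreadable.
    cmd.SysProcAttr = &syscall.SysProcAttr{
        Credential: &syscall.Credential{Uid: 65534, Gid: 65534},
    }
    return cmd.Run()
}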

Disclosure Timeline

Immediately after discovering this vulnerability, I sent an email to Heroku’s security team to start the disclosure process. I requested a PGP key first, as they did not provide one on their website. Here is the discovery and disclosure timeline:

2012-06-26 19:45 PDT Encountered backtrace and began experimenting.
2012-06-26 20:25 PDT Sent email to Heroku asking for PGP key.
2012-06-26 22:40 PDT Received PGP key from Heroku.
2012-06-26 22:56 PDT Received follow-up email with mobile phone number of a Heroku security engineer.
2012-06-26 22:58 PDT Sent PGP encrypted description of the vulnerability.
2012-06-26 23:06 PDT Received confirmation of receipt.
2012-06-27 12:01 PDT Received confirmation that an interim patch would be pushed in a few hours, and full patch by Tuesday (2012-07-03).
2012-06-28 20:44 PDT Checked validity of credentials, SSH and Campfire keys were still valid.
2012-06-29 16:13 PDT Checked validity of credentials, all credentials were invalid.
2012-07-03 13:35 PDT Received confirmation that the issue had been patched.

Customer Impact

The build system appears to have been vulnerable since the Cedar stack launched about a year ago. Customer applications and credentials could have been compromised at some point due to the credentials exposed by this vulnerability. Anyone who ran applications on Heroku during this period should immediately reset all sensitive credentials and audit their access logs to determine if any infrastructure or data has been accessed.

I suspect that a variant of this vulnerability may exist in other Platform as a Service build systems. Further research is warranted.

Full Disclosure: I remain a Heroku customer with several apps in production, and I have no plans to change platforms. Heroku offered me a paid penetration test contract, but required that I sign a retroactive non-disclosure agreement which would have precluded publishing this article.

If you liked this, then you should check out my article on security disclosure policy best practices.