Google Concedes Google Code Not Good Enough?

Google recently released an update to End-to-End, their communications security tool. As part of the announcement, they said:

We’re migrating End-To-End to GitHub. We’ve always believed strongly that End-To-End must be an open source project, and we think that using GitHub will allow us to work together even better with the community.

They didn’t specifically say how it was hosted before, but a look at the original announcement tells us it was here – on Google Code. And indeed, when you visit that link now, it says “Project ‘end-to-end’ has moved to another location on the Internet”, and offers a link to the GitHub repo.

Is Google admitting that Google Code just doesn’t cut it any more? It certainly doesn’t have anything like the feature set of GitHub. Will we see it in the next round of Google spring-cleaning in 2015?

New Class of Vulnerability in Perl Web Applications

We did a Bugzilla security release today, to fix some holes responsibly disclosed to us by Check Point Vulnerability Research, to whom we are very grateful. The most serious of them would allow someone to create and control an account for an arbitrary email address they don’t own. If your Bugzilla gives group permissions based on someone’s email domain, as some do, this could be a privilege escalation.

(Update 2014-10-07 05:42 BST: to be clear, this pattern is most commonly used to add “all people in a particular company” to a group, using an email address regexp like .*@mozilla.com$. It is used this way on bugzilla.mozilla.org to allow Mozilla Corporation employees access to e.g. Human Resources bugs. Membership of the Mozilla security group, which has access to unfixed vulnerabilities, is done on an individual basis and could not be obtained using this bug. The same is true of BMO admin privileges.)

These bugs are actually quite interesting, because they seem to represent a new Perl-specific security problem. (At least, as far as I’m aware it’s new, but perhaps we are about to find that everyone knows about it but us. Update 2014-10-08 09:20 BST: everything old is new again; but the level of response, including changes to CGI.pm, suggest that this had mostly faded from collective memory.) This is how it works. I’m using the most serious bug as my example. The somewhat less serious bugs caused by this pattern were XSS holes. (Check Point are going to be presenting on this vulnerability at the 31st Chaos Communications Congress in December in Hamburg, so check their analysis out too.)

Here’s the vulnerable code:

my $otheruser = Bugzilla::User->create({
    login_name => $login_name, 
    realname   => $cgi->param('realname'), 
    cryptpassword => $password});

This code creates a new Bugzilla user in the database when someone signs up. $cgi is an object representing the HTTP request made to the page.

The issue is a combination of two things. Firstly, the $cgi->param() call is context-sensitive – it can return a scalar or a list, depending on the context in which you call it, i.e. the type of the variable or structure you assign the return value to. The ability for functions to behave like this is a Perl “do what I mean” feature.

Let’s say you called a page as follows, with 3 instances of the same parameter:

index.cgi?foo=bar&foo=baz&foo=quux

If you call param() in an array context (the @ sigil represents a variable which is an array), you get an array of values:

@values = $cgi->param('foo');
-->
['bar', 'baz', 'quux']

If you call it in a scalar context (the $ sigil represents a variable which is a scalar), you get a single value – CGI.pm gives you the first one:

$value = $cgi->param('foo'); 
-->
'bar'

So what context is it being called in, in the code under suspicion? Well, that’s exactly the problem. It turns out that functions called during hash value assignment are evaluated in list context. However, when the result comes back, the value or values are assigned into the hash as if they were a set of individual, comma-separated scalars. I suspect this behaviour exists because of the close relationship of lists and hashes; it allows you to do stuff like:

my @array = ("foo", 3, "bar", 6);
my %hash = @array;
-->
{ "foo" => 3, "bar" => 6 }

Therefore, when assigning the result of a function call as a hash value, if the return value is a single scalar, all goes as you would expect; but if it’s a list, the second and subsequent values end up being added as key/value pairs in the hash as well. This allows an attacker to override values already in the hash (specified earlier), which may have already been validated, with values they control. In our case, realname can be any string, so doesn’t need validation, but login_name most definitely does – and by the time this code is called, it already has been.

So, in the case of the problematic code above, something like:

index.cgi?realname=JRandomUser&realname=login_name&realname=admin@mozilla.com

would end up overriding the already-validated login_name value, giving the attacker control of the value used in the call to Bugzilla::User->create(). Oops.
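
Here’s a minimal, self-contained sketch of the mechanism (no CGI.pm required – wantarray lets us fake the same context-sensitivity; the values are the ones from the attack URL above):

#!/usr/bin/perl
use strict;
use warnings;
use Data::Dumper;

# Stand-in for $cgi->param(): context-sensitive, like CGI.pm.
# In list context it returns all submitted values; in scalar
# context it returns only the first.
sub param {
    my @values = ('JRandomUser', 'login_name', 'admin@mozilla.com');
    return wantarray ? @values : $values[0];
}

my $login_name = 'attacker@example.com';    # imagine this was validated

# Hash constructors impose list context, so param()'s three return
# values are flattened into the key/value list...
my %args = (
    login_name => $login_name,
    realname   => param(),
);

# ...and the trailing pair silently overrides login_name:
print Dumper(\%args);
# $VAR1 = {
#           'realname'   => 'JRandomUser',
#           'login_name' => 'admin@mozilla.com'
#         };
# (Key order in the output may vary.)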

We found 15 instances of this pattern in our code, four of which were exploitable to some degree. If you maintain a Perl web application, you may want to audit it for this pattern. Clearly, CGI.pm param() calls are the first thing to look for, but it’s possible that this pattern could occur with other modules which use the same context-sensitive return feature. The generic fix is to require the function call to be evaluated in scalar context:

my $otheruser = Bugzilla::User->create({
    login_name => $login_name, 
    realname   => scalar $cgi->param('realname'), 
    cryptpassword => $password});

I’d say it might be wise never to assign a hash value directly from a function call without forcing scalar context.
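
One way to enforce that across a codebase is a tiny wrapper – param_scalar() below is a hypothetical helper, not part of CGI.pm:

# Always returns a single value, so callers can't accidentally
# pull a list into a hash constructor.
sub param_scalar {
    my ($cgi, $name) = @_;
    return scalar $cgi->param($name);
}

my $otheruser = Bugzilla::User->create({
    login_name    => $login_name,
    realname      => param_scalar($cgi, 'realname'),
    cryptpassword => $password});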

Email Account Phishers Do Manual Work

For a while now, criminals have been breaking into email accounts and using them to spam the account’s address book with phishing emails or the like. More evil criminals will change the account password, and/or delete the address book and the email to make it harder for the account owner to warn people about what’s happened.

My mother recently received an email, purportedly from my cousin’s husband, titled “Confidential Doc”. It was a mock-up of a Dropbox “I’ve shared an item with you” email, with the “View Document” URL actually being http://proshow.kz/excel/OLE/PPS/redirect.php. This (currently) redirects to http://www.affordablewebdesigner.co.uk/components/com_wrapper/views/wrapper/tmpl/dropbox/, although it redirected to another site at the time. That page says “Select your email provider”, explaining “Now, you can sign in to dropbox with your email”. When you click the name of your email provider, it asks you for your email address and password. And boom – they have another account to abuse.

But the really interesting thing was that my mother, not being born yesterday, emailed back saying “I’ve just received an email from you. But it has no text – just an item to share. Is it real, or have you been hacked?” So far, so cautious. But she actually got a reply! It said:

Hi <her shortened first name>,
I sent it, It is safe.
<his first name>

(The random capital was in the original.)

Now, this could have been a very smart templated autoresponder, but I think it’s more likely that the guy stayed logged into the account long enough to “reassure” people and so improve his hit rate. If it’s worth spending manual effort convincing people to hand over their creds, that tells us something interesting about the value of a captured email account.

HSBC Weakens Their Internet Banking Security

From a recent email about “changes to your terms and conditions”. (“Secure Key” is their dedicated keyfob 2-factor solution; it’s currently required both to log in and to pay a new payee. It’s rather well done.)

These changes will also enable us to introduce some enhancements to our service over the coming months. You’ll still have access to the full Internet Banking service by logging on with your Secure Key, but in addition, you’ll also be able to log in to a limited service when you don’t use your Secure Key – you’ll simply need to verify your identity by providing other security information we request. We’ll contact you again to let you know when this new feature becomes available to you.

Full details of all the changes can be found below which you should read carefully. If you choose not to accept the changes, you have the right to ask us to stop providing you with the [Personal Internet Banking] service, before they come into effect. If we don’t hear from you, we’ll assume that you accept the changes.

Translation: we are lowering the security we use to protect your account information from unauthorised viewing and, as long as you still want to be able to access your account online at all, there’s absolutely nothing you can do about it.

The Latest Airport Security Theatre

All passengers flying into or out of the UK are being advised to ensure electronic and electrical devices in hand luggage are sufficiently charged to be switched on.

All electronic devices? Including phones, right? So you must be concerned that something dangerous could be concealed inside a package the size of a phone. And including laptops, right? Which are more than big enough to contain said dangerous phone-sized electronics package in the CD drive bay, or the PCMCIA slot, and still work perfectly. Or, the evilness could even be occupying 90% of the body of the laptop, while the other 10% is taken up by an actual phone wired to the display and the power button which shows a pretty picture when the laptop is “switched on”.

Or are the security people going to make us all run 3 applications of their choice and take a selfie using the onboard camera to demonstrate that the device is actually fully working, and not just showing a static image?

I can’t see this as being difficult to engineer around. And meanwhile, it will cause even more problems trying to find charging points in airports. Particularly for people who are transferring from one long flight to another.

LinkedIn Moving to Always-On HTTPS

I didn’t see this article by LinkedIn when it was first posted in October last year. But it warms the heart to see a large company laying out how it is deploying things like CSP, HSTS and pinning, along with SSL deployment best practice, to make its users more secure. I hope that many more follow in their footsteps.
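
For the curious, here’s roughly what two of those headers look like when set from a Perl CGI script – a minimal sketch, with illustrative policy values that are mine, not LinkedIn’s:

#!/usr/bin/perl
use strict;
use warnings;
use CGI;

my $cgi = CGI->new;

# HSTS tells the browser to use HTTPS for all future visits; CSP
# restricts where content may be loaded from. Tune both per site.
print $cgi->header(
    -type                      => 'text/html',
    -Strict_Transport_Security => 'max-age=31536000; includeSubDomains',
    -Content_Security_Policy   => "default-src 'self'",
);
print "<html><body>Served with HSTS and CSP headers.</body></html>\n";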

IE11, Certificates and Privacy

Microsoft recently announced that they were enhancing their “SmartScreen” system to send back to Microsoft every SSL certificate that every IE user encounters. They will use this information to try to detect SSL misissuance on their back-end servers.

They may or may not be successful in doing that, but this implementation raises significant questions of privacy.

SmartScreen is a service which submits the full URLs you visit in IE (including query strings) to Microsoft for reputation testing and possible blocking. Microsoft tries to reassure users by saying that this information passes to them over SSL, but that doesn’t help much. It means an attacker with control of the network can’t use it to see where you are browsing – but if they control your network, they can see a lot about where you are browsing anyway. And Microsoft has full access to the data.

The link to “our privacy statement” in the original SmartScreen announcement is, rather worryingly, broken. This is the current one, and it tells us that “Each SmartScreen request comes with a unique identifier”. That doesn’t contain any personal information, but it does allow Microsoft, or someone else with a subpoena, to reconstruct an IE user’s browsing history. The privacy policy also says nothing about whether Microsoft might use this information to e.g. find out what’s currently trending on the web. It seems they don’t need to provide a popular analytics service to get that sort of insight.

You might say that if you are already using SmartScreen, then sending the certificates as well doesn’t reveal much more to Microsoft about your browsing than they already know. I’d say that’s not much comfort – but it’s also not quite true. SmartScreen has a local whitelist of high-traffic sites, so Microsoft don’t find out when you visit those. However, every certificate you encounter is (I assume) sent to Microsoft, including those for high-traffic sites – after all, they are the most likely to be victims of misissuance. So Microsoft now know about every secure site your browser visits, not just the less common ones.

By contrast, SafeBrowsing – the Firefox and Chrome equivalent of SmartScreen’s original function – uses a downloaded list of attack sites, so the URLs you visit are not sent to Google or anyone else. And Certificate Transparency, the Google approach to detecting certificate misissuance after the fact, which is now being standardized at the IETF, also does not violate the privacy of web users, because it does not require the browser to provide information to a third-party site. (Mozilla is currently evaluating CT.)
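
To illustrate the difference, here’s a sketch of the two architectures – not either vendor’s actual wire protocol, and send_to_server() is a hypothetical stand-in for the network call:

use strict;
use warnings;
use Digest::SHA qw(sha256_hex);

# Reputation-service model: the full URL, query string and all, is
# shipped off to a remote server for checking.
sub remote_check {
    my ($url) = @_;
    return send_to_server('https://reputation.example.com/check', $url);
}

# Downloaded-list model: hashes of known attack sites arrive in bulk,
# and each URL is checked locally – it never leaves the machine.
my %bad_hashes;    # populated from a periodically-downloaded list

sub local_check {
    my ($url) = @_;
    return exists $bad_hashes{ sha256_hex($url) };
}

# Hypothetical stand-in, so the sketch compiles.
sub send_to_server { return 0 }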

If I were someone who wanted to keep my privacy, I know which solution I’d prefer.

How Mozilla Is Different

We’re replacing Firefox Sync with something different… and not only did we publish the technical documentation of how the crypto works, but it contains a careful and clear analysis of the security improvements and weaknesses compared with the old one. We don’t just tell you “Trust us, it’s better, it’s the new shiny.”

The bottom line is that in order to get easier account recovery and device addition, and to allow the system to work on slower devices, e.g. Firefox OS phones, your security has become dependent on the strength of your chosen Sync password, when it was not before. (Before, Sync didn’t even have passwords.) This post is not about whether that’s the right trade-off or not – I just want to say that it’s awesome that we are open and up front about it.
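
For those wondering what “dependent on the strength of your password” means mechanically, here’s a minimal sketch of password-derived keying – PBKDF2 is used for illustration only; the real protocol’s derivation is more involved, so read the linked documentation for the actual scheme:

use strict;
use warnings;
use Crypt::PBKDF2;

my $kdf = Crypt::PBKDF2->new(
    hash_class => 'HMACSHA2',
    hash_args  => { sha_size => 256 },
    iterations => 100_000,
    output_len => 32,
);

# Old model: the key protecting your data was random, unrelated to
# anything you could type. New model, roughly: the key is derived
# from your password, so anyone who can guess the password can
# derive the key too.
my $key = $kdf->PBKDF2('per-account-salt', 'correct horse battery staple');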

Microsoft ‘Mortally Wounds’ SHA-1

Microsoft has announced that CAs in its root program may not issue certs signed using the SHA-1 algorithm, starting just over two years from now, and that Windows will start refusing to recognise such certs just over three years from now.

Make no mistake, this is a huge move and an aggressive timetable. 98% of certificates in use on the Internet today use SHA-1. Any certificate in use on the public web today with an expiry date more than three years in the future will not be able to live out its full life. And it’s also an important and necessary move. SHA-1 is weak, and as computing power increases, it is only getting weaker. If someone came up with a successful second-preimage attack on SHA-1, they could forge a commonly-used intermediate cert from a popular CA and impersonate any website, in a way detectable only by someone who examines certificates very, very carefully.
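
If you want to know whether a cert you rely on is affected, here’s a quick sketch which shells out to the openssl command-line tool (assumed installed) and reports the signature algorithm on a site’s leaf certificate:

use strict;
use warnings;

sub cert_sig_alg {
    my ($host) = @_;
    # Fetch the leaf certificate and ask openssl to describe it.
    my $text = `echo | openssl s_client -connect $host:443 -servername $host 2>/dev/null | openssl x509 -noout -text`;
    my ($alg) = $text =~ /Signature Algorithm:\s*(\S+)/;
    return $alg // 'unknown';
}

# "sha1WithRSAEncryption" means the cert is living on borrowed time.
print cert_sig_alg('www.example.com'), "\n";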

I strongly welcome this, and want to use it as an opportunity to make further improvements in the CA ecosystem. Currently, the maximum lifetime of a certificate under the Baseline Requirements is 5 years. It is due to reduce to 39 months in April 2015. Given that 98% of the certificates on the Internet are going to need to be thrown away 3 years from now anyway, I want to take the opportunity to reduce that figure early.

Long-lived certificates are problematic because CAs understandably strongly resist having to call their customers up and tell them to replace their working certificates before they would naturally expire. So, if there are certificates out there with a lifetime of N years, you can only rely on 100% coverage or usage of an improved security practice after N years. With N = 5, that reduces the speed at which the industry can move. N = 3 isn’t awesome, but it’s a whole lot better than N = 5.

So I will be bringing forward a motion at the CAB Forum to update the Baseline Requirements to reduce the maximum certificate lifetime to 3 years, effective from January 1st 2014.

Living Flash Free

I’ve just got a new laptop, a ThinkPad X230 running Ubuntu 13.10, and I’m going to try living Flash-free. (I know, I know – James Dean, eat your heart out.) I know there are free software Flash implementations, including one from Mozilla, but I’d like to see how much stuff breaks if I don’t have any of it installed. I’ll blog from time to time about problems I encounter.

Face Keys…

I’m currently in a “Cryptography Usability” session at Mozilla Festival, where someone made the point that crypto terminology is complex, and we should simplify it.

Inspired by this, I wondered: instead of “public key” and “private key”, why not call them “face key” and “arse key”? They are an associated pair, but you show one to the public, and keep the other one well hidden. It’s certainly a metaphor with explanatory power…

Ubuntu Full Disk Encryption

Dear Internet,

If you search for “Ubuntu Full Disk Encryption” the first DuckDuckGo hit, titled “Community Ubuntu Documentation”, says: “This article is incomplete, and needs to be expanded”, “Please refer to EncryptedFilesystems for further documentation”, and “WARNING! We use the cryptoloop module in this howto. This module has well-known weaknesses.” Hardly inspiring. The rest of the docs are a maze of outdated and locked-down wiki pages that I can’t fix.

What all of them fail to state is that, as of Ubuntu 12.10, full disk encryption is a simple checkbox in the installer. So I hope this blog post will get some search engine juice so that more people don’t spend hours working out how to do it.

THBAPSA.

Attack Surface Reduction Works

According to the training presentation provided by Snowden, EgotisticalGiraffe exploits a type confusion vulnerability in E4X, an XML extension for JavaScript. This vulnerability exists in Firefox 11.0 – 16.0.2, as well as in Firefox 10.0 ESR – the Firefox version used until recently in the Tor browser bundle. According to another document, the vulnerability was inadvertently fixed when Mozilla removed the E4X library containing it, and the fix reached Tor users when the Tor browser bundle moved to that Firefox version; but the NSA were confident that they would be able to find a replacement exploit that worked against Firefox 17.0 ESR.

Good riddance to E4X.