HSBC Weakens Their Internet Banking Security

From a recent email about “changes to your terms and conditions”. (“Secure Key” is their dedicated keyfob 2-factor solution; it’s currently required both to log in and to pay a new payee. It’s rather well done.)

These changes will also enable us to introduce some enhancements to our service over the coming months. You’ll still have access to the full Internet Banking service by logging on with your Secure Key, but in addition, you’ll also be able to log in to a limited service when you don’t use your Secure Key – you’ll simply need to verify your identity by providing other security information we request. We’ll contact you again to let you know when this new feature becomes available to you.

Full details of all the changes can be found below which you should read carefully. If you choose not to accept the changes, you have the right to ask us to stop providing you with the [Personal Internet Banking] service, before they come into effect. If we don’t hear from you, we’ll assume that you accept the changes.

Translation: we are lowering the security we use to protect your account information from unauthorised viewing and, as long as you still want to be able to access your account online at all, there’s absolutely nothing you can do about it.

The Latest Airport Security Theatre

All passengers flying into or out of the UK are being advised to ensure electronic and electrical devices in hand luggage are sufficiently charged to be switched on.

All electronic devices? Including phones, right? So you must be concerned that something dangerous could be concealed inside a package the size of a phone. And including laptops, right? Which are more than big enough to contain said dangerous phone-sized electronics package in the CD drive bay, or the PCMCIA slot, and still work perfectly. Or the evilness could even occupy 90% of the body of the laptop, while the other 10% is taken up by an actual phone, wired to the display and the power button, which shows a pretty picture when the laptop is “switched on”.

Or are the security people going to make us all run 3 applications of their choice and take a selfie using the onboard camera to demonstrate that the device is actually fully working, and not just showing a static image?

I can’t see this being difficult to engineer around. Meanwhile, it will make it even harder to find charging points in airports, particularly for people transferring from one long flight to another.

LinkedIn Moving to Always-On HTTPS

I didn’t see this article by LinkedIn when it was first posted in October last year. But it warms the heart to see a large company laying out how it is deploying things like CSP, HSTS and pinning, along with SSL deployment best practice, to make its users more secure. I hope that many more follow in their footsteps.
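
For anyone wanting to follow their lead, here is a minimal sketch of what two of those mechanisms might look like in practice, written as a generic Python WSGI middleware. The header values are illustrative assumptions on my part, not LinkedIn’s actual policy.

```python
# Sketch: attach HSTS and CSP headers to every response of a WSGI app.
# HSTS tells browsers to insist on HTTPS for future visits; CSP
# restricts where page content may be loaded from.
SECURITY_HEADERS = [
    # Insist on HTTPS for the next year, including subdomains.
    ("Strict-Transport-Security", "max-age=31536000; includeSubDomains"),
    # Only load resources from our own origin (illustrative policy).
    ("Content-Security-Policy", "default-src 'self'"),
]

def with_security_headers(app):
    """Wrap a WSGI application so every response carries the headers above."""
    def wrapped(environ, start_response):
        def start(status, headers, exc_info=None):
            return start_response(status, headers + SECURITY_HEADERS, exc_info)
        return app(environ, start)
    return wrapped
```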

IE11, Certificates and Privacy

Microsoft recently announced that they are enhancing their “SmartScreen” system so that every SSL certificate every IE user encounters is sent back to Microsoft. They will use this information to try to detect SSL mis-issuance on their back-end servers.

They may or may not be successful in doing that, but this implementation raises significant questions of privacy.

SmartScreen is a service which submits the full URLs you visit in IE (including query strings) to Microsoft for reputation testing and possible blocking. Microsoft tries to reassure users by saying that this information passes to them over SSL, but that doesn’t help much. It means an attacker with control of the network can’t see where you are browsing from this information – but if they control your network, they can see a lot about where you are browsing anyway. And Microsoft has full access to the data.

The link to “our privacy statement” in the original SmartScreen announcement is, rather worryingly, broken. This is the current one, and it also tells us that “Each SmartScreen request comes with a unique identifier.” That doesn’t contain any personal information, but it does allow Microsoft, or someone else with a subpoena, to reconstruct an IE user’s browsing history. The privacy policy also says nothing about whether Microsoft might use this information to e.g. find out what’s currently trending on the web. It seems they don’t need to provide a popular analytics service to get that sort of insight.

You might say that if you are already using SmartScreen, then sending the certificates as well doesn’t reveal much more about your browsing to Microsoft than they already know. I’d say that’s not much comfort – but it’s also not quite true. SmartScreen has a local whitelist for high-traffic sites, so Microsoft doesn’t find out when you visit those sites. However (I assume), every certificate you encounter is sent to Microsoft, including those for high-traffic sites – as they are the most likely to be victims of mis-issuance. So Microsoft now knows every secure site your browser visits, not just the less common ones.

By contrast, SafeBrowsing – the Firefox (and Chrome) implementation of SmartScreen’s original function – uses a downloaded list of attack sites, so the URLs you visit are not sent to Google or anyone else. And Certificate Transparency, the Google approach to detecting certificate mis-issuance after the fact which is now being standardised at the IETF, also does not violate the privacy of web users, because it does not require the browser to provide information to a third-party site. (Mozilla is currently evaluating CT.)
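
The privacy difference is easy to see in sketch form. This is deliberately simplified – the real SafeBrowsing protocol matches hash prefixes and then confirms with a partial-hash request – but the key property holds: with a local list, the URL never leaves your machine.

```python
import hashlib

# SmartScreen-style reputation check: the full URL leaves your machine,
# so the server operator learns your browsing history.
def remote_check(url: str, ask_server) -> bool:
    return ask_server(url)

# SafeBrowsing-style check: compare against a locally downloaded list.
# (The real protocol uses hash prefixes plus a confirmation lookup;
# this sketch just hashes the whole URL.)
LOCAL_BLOCKLIST = set()   # sha256 hex digests, refreshed by periodic download

def local_check(url: str) -> bool:
    digest = hashlib.sha256(url.encode()).hexdigest()
    return digest in LOCAL_BLOCKLIST   # no third party learns the URL
```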

If I were someone who wanted to keep my privacy, I know which solution I’d prefer.

How Mozilla Is Different

We’re replacing Firefox Sync with something different… and not only did we publish the technical documentation of how the crypto works, but it contains a careful and clear analysis of the security improvements and weaknesses compared with the old one. We don’t just tell you “Trust us, it’s better, it’s the new shiny.”

The bottom line is that, in order to get easier account recovery and device addition, and to allow the system to work on slower devices such as Firefox OS phones, your security is now dependent on the strength of your chosen Sync password, when it was not before. (Before, Sync didn’t even have passwords.) This post is not about whether that’s the right trade-off or not – I just want to say that it’s awesome that we are open and up front about it.
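
To illustrate the trade-off (this is a toy sketch, not the actual Firefox Accounts protocol – read the linked documentation for that): once an encryption key is derived from your password, key stretching can slow down guessing, but the password’s strength is the ceiling on your security.

```python
import hashlib, os

def derive_key(password: str, salt: bytes, rounds: int = 100_000) -> bytes:
    # Stretching (many PBKDF2 rounds) makes each guess expensive,
    # but it cannot rescue a weak or reused password.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, rounds)

salt = os.urandom(16)
key = derive_key("hunter2", salt)   # weak password => weakly protected data
```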

Microsoft ‘Mortally Wounds’ SHA-1

Microsoft has announced that CAs in its root program may not issue certs signed using the SHA-1 algorithm, starting just over two years from now, and that Windows will start refusing to recognise such certs starting just over three years from now.

Make no mistake, this is a huge move and an aggressive timetable. 98% of certificates in use on the Internet today use SHA-1. Any certificate being used on the public web today which has an expiry date more than 3 years in the future will not be able to live out its full life. And it’s also an important and necessary move. SHA-1 is weak, and as computing power increases, is only getting weaker. If someone came up with a successful preimage attack on SHA-1, they could preimage a commonly-used intermediate cert from a popular CA and impersonate any website in a way only detectable by someone who examines certificates very, very carefully.
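
If you want to check whether a site you run (or rely on) is affected, here’s one quick way, assuming the third-party pyca/cryptography package is available. Note it only inspects the leaf certificate, and SHA-1 intermediates matter just as much.

```python
import ssl
from cryptography import x509   # third-party pyca/cryptography package

def signature_hash(host: str, port: int = 443) -> str:
    """Return the hash algorithm used to sign a site's leaf certificate."""
    pem = ssl.get_server_certificate((host, port))
    cert = x509.load_pem_x509_certificate(pem.encode())
    return cert.signature_hash_algorithm.name

print(signature_hash("example.com"))   # "sha1" means a replacement is due
```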

I strongly welcome this, and want to use it as an opportunity to make further improvements in the CA ecosystem. Currently, the maximum lifetime of a certificate under the Baseline Requirements is 5 years. It is due to reduce to 39 months in April 2015. Given that 98% of the certificates on the Internet are going to need to be thrown away 3 years from now anyway, I want to take the opportunity to reduce that figure early.

Long-lived certificates are problematic because CAs understandably strongly resist having to call their customers up and tell them to replace their working certificates before they would naturally expire. So, if there are certificates out there with a lifetime of N years, you can only rely on 100% coverage or usage of an improved security practice after N years. With N = 5, that reduces the speed at which the industry can move. N = 3 isn’t awesome, but it’s a whole lot better than N = 5.

So I will be bringing forward a motion at the CAB Forum to update the Baseline Requirements to reduce the maximum certificate lifetime to 3 years, effective from January 1st 2014.

Living Flash Free

I’ve just got a new laptop, a Thinkpad X230 running Ubuntu 13.10, and I’m going to try living Flash-free. (I know, I know – James Dean, eat your heart out.) I know there are free software Flash implementations, including one from Mozilla, but I’d like to see how much stuff breaks if I don’t have any of it installed. I’ll blog from time to time about problems I encounter.

Face Keys…

I’m currently in a “Cryptography Usability” session at Mozilla Festival, where someone made the point that crypto terminology is complex, and we should simplify it.

Inspired by this, I wondered: instead of “public key” and “private key”, why not call them “face key” and “arse key”? They are an associated pair, but you show one to the public, and keep the other one well hidden. It’s certainly a metaphor with explanatory power…
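
The metaphor even works in code. A small sketch using the pyca/cryptography package:

```python
from cryptography.hazmat.primitives.asymmetric import ed25519

# Generated together as a pair...
arse_key = ed25519.Ed25519PrivateKey.generate()   # keep this well hidden
face_key = arse_key.public_key()                  # show this to everyone

# Only the hidden half can sign; anyone with the public half can verify.
signature = arse_key.sign(b"hello world")
face_key.verify(signature, b"hello world")        # raises if forged
```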

Ubuntu Full Disk Encryption

Dear Internet,

If you search for “Ubuntu Full Disk Encryption” the first DuckDuckGo hit, titled “Community Ubuntu Documentation”, says: “This article is incomplete, and needs to be expanded”, “Please refer to EncryptedFilesystems for further documentation”, and “WARNING! We use the cryptoloop module in this howto. This module has well-known weaknesses.” Hardly inspiring. The rest of the docs are a maze of outdated and locked-down wiki pages that I can’t fix.

What all of them fail to state is that, as of 12.10, it’s now a simple checkbox in the installer. So I hope this blog post will get some search engine juice so that more people don’t spend hours working out how to do it.

THBAPSA.

Attack Surface Reduction Works

According to the training presentation provided by Snowden, EgotisticalGiraffe exploits a type confusion vulnerability in E4X, an XML extension for JavaScript. This vulnerability exists in Firefox 11.0 – 16.0.2, as well as in Firefox 10.0 ESR – the Firefox version used until recently in the Tor Browser Bundle. According to another document, the vulnerability was inadvertently fixed when Mozilla removed the vulnerable E4X library and Tor moved the Tor Browser Bundle to a Firefox version without it, but the NSA were confident that they would be able to find a replacement exploit that worked against version 17.0 ESR.

Good riddance to E4X.

Data Unsuitable for Authentication

It used to be that my bank would ring me (on one of the regular occasions when they mistakenly block my debit card, often over payments which have been occurring every month for years) and ask me to authenticate myself. These days, they ring me and make at least an attempt to authenticate themselves to me, which is an improvement. However, the conversation still goes something like this:

Bank: “Sir, if you give me the month and year of your date of birth, I’ll give you the day.”
Me: “But my date of birth is on Wikipedia. Anyone can know what it is. You can’t use it to prove I’m me.”
Bank: “…”

There should be a list of pieces of information which are considered sufficiently “public” or “well known” and so should never be used for authentication – because they are publicly discoverable or because you have to give them to large numbers of people in the course of life. There should be a website listing them, and a hall of shame for companies which use them. Those well-known pieces of information would include:

  • Date of birth
  • Current address
  • Mother’s maiden name
  • Bank account number and sort code
  • Credit card number, expiry and CVC
  • SSN (in the USA)

What else?

In general, companies should instead use information specific to their relationship with that customer. For example “last item you bought” or “last transaction you made” at a bank.
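
A sketch of what such a challenge might look like, with made-up names and data (this is a hypothetical illustration, not any bank’s actual system):

```python
# Challenge the caller with something only the bank and the genuine
# customer should know, instead of semi-public facts like a birthday.
RECENT_TRANSACTIONS = {
    "alice": [("2013-08-01", "Corner Grocer", 23.40),
              ("2013-08-03", "Bookshop", 9.99)],
}

def check_last_merchant(customer: str, claimed: str) -> bool:
    history = RECENT_TRANSACTIONS.get(customer, [])
    return bool(history) and history[-1][1].lower() == claimed.lower()

print(check_last_merchant("alice", "bookshop"))       # True
print(check_last_merchant("alice", "corner grocer"))  # False: not the latest
```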

A BREACH Mitigation Idea

The BREACH attack on SSL was recently disclosed at Black Hat. It depends on an attacker being able to notice a small difference in response size between two very similar requests: if the attacker can get a guess reflected into a compressed response alongside a secret, a correct guess matches the secret, compresses better, and so shrinks the response slightly.
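
You can see the effect in a few lines of Python with zlib – the token here is made up, but the length difference is real:

```python
import zlib

# A response containing a secret token, into which the attacker can
# also get their own guess reflected (e.g. via a query parameter).
SECRET_PART = b"...csrf_token=s3cretvalue..."

def compressed_size(guess: bytes) -> int:
    body = SECRET_PART + b"csrf_token=" + guess
    return len(zlib.compress(body))

# A guess matching the secret's prefix compresses away; a wrong one doesn't.
print(compressed_size(b"s3cretv"))  # smaller
print(compressed_size(b"xqzwjkm"))  # larger
```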

Therefore, various mitigation methods which have been proposed involve attempting to disguise the length of a response. Some have said that this is mathematically infeasible, in that there is no way of disguising it enough without destroying the compression performance to such a degree that you might as well just switch compression off. I am not qualified to assess that claim, but I did have an idea as to how to add arbitrary length to the response in a transparent way.

My suggestion to vary the length is to add some non-determinism to gzip on the server side. This has the significant advantage of being transparent both to the application being served and to the client – it requires no changes to either. This makes it much easier to deploy than mechanisms which pad the content itself, or add data which needs to be explicitly ignored on the client. A quick review of the algorithm suggests that there are several places where you could introduce non-determinism.

The gzip algorithm either reproduces a string of bytes literally or, if that string has occurred before, replaces it with a pointer-length pair referring to the previous occurrence. I suggest that the way to add the most variability to the output length would be to have gzip either not shorten a string at all, or shorten it by less than the maximum possible amount, on some small percentage of the occasions when it would otherwise do a full compression. The question is whether one can add enough length non-determinism to make the leaky length difference hard to detect without many repetitions and statistical analysis (which would drastically slow down the attack), without destroying the compression performance.

This scheme can be used to add arbitrary amounts of additional length (up to the original size of the file) in arbitrary places and patterns, dependent on static or variable factors.

As determinism may be an expected feature of gzip in some contexts, it would probably need to be made an option (which mod_gzip and similar libraries would invoke).

As an opening (and probably terrible) proposal for a static random length increase, I suggest that for each replaced sequence in the file, gzip decides either to replace it fully with a pointer+length, or to do so for all the chars bar the last one (emitting that char as a literal). It should make the sub-optimal choice with a probability of 1/(N+1) for the Nth replacement in the file. This front-loads the non-determinism – the likelihood of the sub-optimal choice falls away as 1/(N+1), i.e. 1/2, 1/3, 1/4, … – meaning that the deleterious effect on large files is not overly large, while small files still get a significant length variation.
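
Here’s a toy sketch of that proposal. A real implementation would patch the match emission inside zlib itself; this just layers the 1/(N+1) coin-flip onto a naive LZ77-style matcher to show where the choice sits:

```python
import random

MIN_MATCH = 4       # don't bother with references shorter than this
WINDOW = 32768      # gzip-style back-reference window

def toy_compress(data: bytes):
    """Naive LZ77-style pass with deliberate, randomised sub-optimality."""
    out, i, n = [], 0, 0
    while i < len(data):
        # Find the longest earlier occurrence of the bytes starting at i.
        best_len, best_dist = 0, 0
        for j in range(max(0, i - WINDOW), i):
            k = 0
            while i + k < len(data) and j + k < i and data[j + k] == data[i + k]:
                k += 1
            if k > best_len:
                best_len, best_dist = k, i - j
        if best_len >= MIN_MATCH:
            n += 1
            if random.random() < 1.0 / (n + 1):
                best_len -= 1           # emit the match one byte short
            out.append(("ref", best_dist, best_len))
            i += best_len
        else:
            out.append(("lit", data[i]))
            i += 1
    return out

# Two runs over the same input can now produce different-length output.
sample = b"the cat sat on the mat; the cat sat on the hat" * 3
print(len(toy_compress(sample)), len(toy_compress(sample)))
```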

This method might also work as a mitigation for CRIME.

MITM Boxes Reduce Network Security Even More Than They Are Designed To

The Tor Project recently discovered that Cyberoam, a manufacturer of Man-In-The-Middle boxes with SSL interception capability, have been embedding the same root certificate in all of their boxes.

Background: SSL is not supposed to be interceptable. The only way to do it is for the intercepting box to be the endpoint of the SSL session and then, after inspecting the traffic, send the information over a different SSL session to the client. Now that (after the Trustwave incident) we have explicitly banned trusted CAs from facilitating this, the box should not be able to generate its own trusted-by-default certificate for the target site. Instead, it generates a cert which chains up to the box’s own embedded root. Therefore, any user of a network whose owners wish to use such a box to inspect SSL traffic will have been asked to import whichever root certificate the box uses into their trusted root store, in order to avoid getting security warnings – the very warnings which would otherwise correctly tell them that their communications are being intercepted.

If each box uses a different root certificate, this is not a big problem. (Well, apart from the general issue of having to permit your employer or school to intercept your secure communications.) However, as noted above, Cyberoam uses the same root in all the boxes they manufacture. This root reuse means that organisations which have tried to use Cyberoam boxes to punch a small hole in their security, for ostensibly reasonable purposes, have actually punched a rather larger one.

If you have trusted this root, your communications could potentially be silently intercepted by anyone who owns a Cyberoam box, not just the legitimate owners of the network you were using. This would be true whether you were on that network or elsewhere (e.g. if you went to another location with your phone or laptop). Furthermore, anyone who purchases a Cyberoam box can try to extract the root’s private key (the boxes may have physical security in place, but that’s just a speedbump) – and then they don’t even need a Cyberoam box to MITM you.
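
One way to notice this sort of interception, if you are on a network you don’t control: record the certificate fingerprint a site shows you over a connection you trust, and compare it later. A sketch (the pinned value is a placeholder, not a real fingerprint):

```python
import hashlib, ssl

# Recorded earlier, over a connection you trust (placeholder value).
PINNED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

def leaf_fingerprint(host: str, port: int = 443) -> str:
    """SHA-256 fingerprint of the certificate a server presents right now."""
    pem = ssl.get_server_certificate((host, port))
    der = ssl.PEM_cert_to_DER_cert(pem)
    return hashlib.sha256(der).hexdigest()

if leaf_fingerprint("example.com") != PINNED_SHA256:
    print("Certificate changed - possible interception, Cyberoam or otherwise")
```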

From reading their online docs, this problem seems to also occur with similar devices from Sonicwall (PDF; page 2) and Fortigate. (Thanks to a commenter on the Tor blog for noticing this.) I suspect that many vendors use this insecure configuration by default.

The Cyberoam default root certificate is not trusted by the Mozilla root store – Cyberoam is not a CA – and we do not plan to take action at this time. However, this is another important lesson in the unintended consequences of intentionally breaking the Internet’s security model. Messing with the Internet security infrastructure breaks things, in unexpected and risky ways. Don’t do it.