Browser Vaccination

Thought 1: In the future, consumer-level browsers will increasingly be connected to trusted sources of sites that the browser should not attempt to visit. Many active anti-phishing schemes (such as the NetCraft toolbar) do something like this. But the lists are composed of URLs which have to be manually reported by users and verified by hand, because it’s impossible for browsers to automatically detect phishing attacks with perfect accuracy.

Thought 2: Currently, if a security hole is discovered in a browser, you generally have to update or make a configuration change to be protected – there’s no way for browser vendors to protect users who take no action. And many users don’t upgrade immediately, if at all.

So… it would certainly be technically possible for browsers to automatically detect sites attempting to exploit fixed security holes. For example, Firefox 1.0.4 could have been written to detect sites attempting to use the Firefox installation API with a javascript: iconURL. Rather than just blocking the exploit attempt, it could then, either automatically or with the user’s permission, report the URL of that site back to a central server, where it could be assessed for placement in a blocklist feed. Such an assessment could be automatic – script a copy of the browser to visit the URL and see whether it detects the exploit too.
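
As a very rough sketch of what that detect-and-report hook might look like (the function names, exploit identifier and reporting endpoint below are all invented for illustration – they are not real Firefox APIs):

```typescript
// Hypothetical sketch: a patched browser hooks the API the old exploit
// abused, and reports the offending page when it sees the known-bad
// pattern. All names and the endpoint URL are illustrative.

interface ExploitReport {
  exploitId: string; // identifier for the fixed hole
  pageUrl: string;   // the page that attempted the exploit
}

// Called by the (patched) install API whenever a site supplies an iconURL.
function checkIconUrl(iconUrl: string, pageUrl: string): void {
  // The fixed hole: iconURL used to be allowed to be a javascript: URL.
  if (iconUrl.trim().toLowerCase().startsWith("javascript:")) {
    // The patched browser is immune, but the attempt is still a signal.
    void reportExploitAttempt({ exploitId: "install-api-iconurl", pageUrl });
  }
}

async function reportExploitAttempt(report: ExploitReport): Promise<void> {
  // In a real browser this would be gated on user consent/preferences.
  await fetch("https://blocklist.example.org/report", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(report),
  });
}
```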

Then, older browsers which had not been upgraded, but which were blocking sites from a list including that feed, would still have some amount of protection from attack. As soon as it had been reported by one user using a new browser, all users using older versions would be vaccinated against attack from that site.
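
The consumer side is even simpler – a minimal sketch of how an older browser might apply such a feed before navigating (the feed URL and JSON format are assumptions):

```typescript
// Hypothetical sketch of the blocklist consumer in an older browser.
// The feed URL and JSON format are invented for illustration.

const FEED_URL = "https://blocklist.example.org/feed.json";
let blockedHosts = new Set<string>();

// Refresh periodically; even an unpatched browser can keep doing this.
async function refreshBlocklist(): Promise<void> {
  const response = await fetch(FEED_URL);
  const hosts: string[] = await response.json();
  blockedHosts = new Set(hosts);
}

// Consulted before every top-level navigation; a hit would trigger the
// block page and a stern "please upgrade" message.
function shouldBlockNavigation(url: string): boolean {
  return blockedHosts.has(new URL(url).hostname);
}
```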

15 thoughts on “Browser Vaccination”

  1. By doing so, the users with the outdated version of the product would actually have a good reason not to upgrade. I think that is not what you want.

    Security fixes for your example, Firefox, should get to the user in an easier way, without the need to download the complete product. And users should be notified in a better way of the fact that their browser is outdated – e.g. by changing the start page to some kind of nicely explained and clearly shown warning. The small icon in the upper right corner is not the solution; I’ve seen too many users already completely ignoring it.

  2. Thought 1 isn’t much of a thought, but thought 2 is thought of the year for me. Gerv, that is genius!

    It has to be implemented, in my opinion, because you can actually protect users who haven’t upgraded – something previously thought impossible. And it can happen without the user doing anything, with the browser just updating a list of bad URLs.

    Then you would have some kind of notification when attempting to visit a bad site, with an option to visit it anyway (only after a big warning), plus something BIG to tell the user to upgrade, and also show them what they can do to mitigate the particular vulnerability until they do upgrade (disable JavaScript, for instance).

    As for getting the master list updated by browsers, this shouldn’t be too difficult. With a lot of the recent vulnerabilities, you see things in the Console like “$URL: Permission denied to …”.

    There would probably have to be an option to disable reporting bad websites, for privacy and other reasons.

    Again great idea. You never cease to impress :) I would love to see this happen. I might even have a go at getting an extension working some time this summer…

  3. If widespread, lists of phishing sites won’t work, for the same reasons that lists of spamming sites don’t, as Mark Pilgrim notes. The biggest reason is this one, I think:

    […] [phishers] will set up fake identities to report real sites and try to poison the list. Are you manually screening new contributions? That won’t scale. Are you not manually screening new contributions? That won’t work either.

    Remember, even more so than spam, phishing makes money.

    Malcolm

  4. Why should users not be up to date?
    With the FF1.1 patching system, would it not be possible to simply automatically download and install patches in the background so the user has nothing to ignore?

  5. In order to make the vaccination work, browsers should stop reporting their version to the server and to JavaScript scripts. Otherwise, the black hats can merely avoid triggering the exploit in newer versions. Exploit scripts might also guess the browser version by sniffing the availability of scripting objects.
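
    To illustrate that second point: something like the following is all an exploit page would need in order to guess the build without ever reading a version string (the probed property name is made up for illustration):

    ```typescript
    // Hypothetical sketch: an exploit page guessing the browser generation
    // by probing for objects/properties instead of reading the UA string.
    // The property name is invented for illustration.

    function looksLikeUnpatchedBuild(): boolean {
      const w = window as any;
      // A patched build might expose (or remove) some scripting object;
      // the attacker only fires the exploit when the probe matches.
      return typeof w.someApiAddedInThePatch === "undefined";
    }

    if (looksLikeUnpatchedBuild()) {
      // ...only now attempt the exploit, so patched builds never see it
      // and therefore never report the page.
    }
    ```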

  6. I wonder how you actually would detect an attack. Let’s say it’s a buffer overflow. If a site triggers the overflow it can very well be that it’s not an attack but just some unexpected input where the unpatched browser would crash or do nothing spectacular at all. I think it is far from trivial to accurately detect attacks. The space of possible, harmless (albeit incorrect) input and the space of malicious input are huge and the overlap ain’t small.

  7. The current trend of publishing blacklists/whitelists of trustworthy websites scares me a lot, for several reasons:

    1. Censorship
    Maybe this is a little paranoid, but I believe that sooner or later the list providers will censor (or be forced to censor) the list for reasons other than security. Look at what happens with Google, and how its search index is manipulated in different countries to comply with local law.

    2. People will switch off their brains entirely
    Many people will put their full trust in these kinds of lists, and it will become harder and harder to teach them how to handle the internet, its risks and the information on it. It’s like all those “parental lock” filters such as Net Nanny: instead of teaching children how to deal with sex, violence and extreme political views, they try to block it all. That just doesn’t work.

    3. False positives
    I would rather let 10 murderers walk free unpunished than put 1 innocent man in jail. Same on the internet. This kind of blacklisting could easily ruin a small company.

    4. Cross-Site Scripting
    XSS can be found on nearly every page out there (see http://www.mikx.de/index.php?p=6 for details). As long as that situation doesn’t change, blacklisting is mostly useless – you just can’t blacklist Google, Lycos or AOL without people refusing to use such a list.

    5. Technical limits
    Detecting malicious scripts is a hard job. Show me a detection script (that works with acceptable performance) and I bet there will be a way to break it. Web-based mailers took years to write proper HTML filters, and there are still HTML injection bugs on Full Disclosure every few weeks – and that is easy(!) in comparison.

  8. I read a good article on this a while ago, and unfortunately I have no clue where it is. The gist of it was that we need to separate the browser from the rendering engine. The main reason people are so apprehensive about upgrading is that they’re worried their favorite extensions won’t work, or that they’ll lose their user data or something; that shouldn’t be a barrier to security patches. I understand that not all security bugs are in the rendering, but a large number of them are, so this would at least cover those. Let the user set options if they really want to, but have the rendering engine upgrade by default without asking the user.

    I don’t know how well this would work from a programming standpoint. I have a suspicion that it wouldn’t be possible with the way Firefox works right now. You won’t be able to rely on compilers, so you’ll have to ship binaries, and that might require quite a bit of work.

    This would also solve the backwards-compatibility issue for web designers, so that they won’t have to test with multiple Firefoxes once we start getting a larger user base. Please, multiple IEs and Netscapes are enough ;)

  9. Just a wild idea: why don’t we develop a Greasemonkey extension that filters exploits out of pages, based on a dynamic list?
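
    As a sketch of that wild idea (the signature patterns below are invented, and a userscript typically runs only after inline scripts have already executed, so this shows the shape of the idea rather than a robust defence):

    ```typescript
    // Hypothetical userscript-style sketch: strip script blocks matching
    // known exploit signatures from a dynamically updated list.
    // The patterns here are illustrative only.

    const exploitPatterns: RegExp[] = [
      // e.g. a signature for the install-API iconURL exploit
      /InstallTrigger\.install\([^)]*javascript:/i,
    ];

    function scrubDocument(doc: Document): void {
      for (const script of Array.from(doc.querySelectorAll("script"))) {
        if (exploitPatterns.some((p) => p.test(script.textContent ?? ""))) {
          script.remove(); // neutralise the matching block
        }
      }
    }

    scrubDocument(document);
    ```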

  10. localhost said: By doing so, the users with the outdated version of the product would actually have a good reason not to upgrade. I think that is not what you want.

    I don’t think so; if they hit a dodgy site, the message would be very stern about telling them to upgrade. And it could never protect against everything; some exploit attempts are not detectable.

    I do agree also that we need smaller updates and perhaps better notification.

    Malcolm said: If widespread, lists of phishing sites won’t work for the same reasons as lists of spamming sites don’t.

    That’s not true – I even covered that point in my blog post. Whether a site is attempting an exploit can be checked automatically, which is not true of phishing sites. So it will scale, and fraudulent reports can be ignored.
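
    For illustration, the list manager’s check might look something like this, assuming some harness for driving a scripted copy of the patched browser (the harness function here is hypothetical):

    ```typescript
    // Hypothetical sketch of the list manager's verification step: a report
    // is only believed if a scripted, patched browser reproduces the exploit
    // attempt. visitWithPatchedBrowser is an assumed harness, not a real API.

    interface Verdict {
      pageUrl: string;
      confirmed: boolean;
    }

    declare function visitWithPatchedBrowser(
      url: string
    ): Promise<{ exploitAttemptsSeen: string[] }>;

    async function verifyReport(pageUrl: string, exploitId: string): Promise<Verdict> {
      const result = await visitWithPatchedBrowser(pageUrl);
      // A fraudulent report simply fails to reproduce and is dropped.
      return {
        pageUrl,
        confirmed: result.exploitAttemptsSeen.includes(exploitId),
      };
    }
    ```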

    Kroc said: With the FF1.1 patching system, would it not be possible to simply automatically download and install patches in the background so the user has nothing to ignore?

    There are privacy and user trust issues with changing software without notifying users at all.

    Henri said: In order to make the vaccination work, browsers should stop reporting their version to the server and to JavaScript scripts.

    Indeed, that would be true – or rather, security updates wouldn’t update the version number. I’m not sure if this is a big or a small disadvantage.

    tr said: I wonder how you actually would detect an attack. Let’s say it’s a buffer overflow.

    I don’t claim that exploits of all holes are detectable in this way. I gave an example of one which was. I suspect most are – for a buffer overflow, you fix the buffer to a large value (beyond any legitimate use) and then detect if there’s overflow.
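
    Sketched in the same style as the snippets above (real overflow fixes live in C/C++, and the length limit here is invented): once the fix caps a field at a generously large maximum, anything beyond it can be treated as a probable exploit attempt.

    ```typescript
    // Hypothetical sketch: after an overflow fix, input longer than any
    // legitimate use is both rejected and flagged. Limit is illustrative.

    const MAX_LEGITIMATE_LENGTH = 4096; // far beyond any real-world value

    function parseField(value: string, pageUrl: string): string | null {
      if (value.length > MAX_LEGITIMATE_LENGTH) {
        // The patched browser is safe either way; oversized input is
        // still a strong hint that the page is probing the old hole.
        reportSuspectedOverflow(pageUrl, value.length);
        return null;
      }
      return value;
    }

    function reportSuspectedOverflow(pageUrl: string, length: number): void {
      console.warn(`Possible overflow probe from ${pageUrl}: ${length} chars`);
    }
    ```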

    Michael Krax said: <several things which worried him about blacklists>

    We need to remember that users will choose to use particular lists; if it doesn’t meet their needs or is overbroad, they can change provider. It’s not comparable to controversial content – browser exploits are almost always visually impossible to detect. How can you teach a user to avoid them? I’ve covered false positives above. XSS is a problem, but there are other ways to crack that nut. And if Google gets XSSed, I would want my browser to stop me going there and tell me to upgrade. I’ve also covered detection above.

    Neil M asked: How does a site get removed from the blacklist? (For whatever reason)

    Implementation detail. But remember, a user can always access the site by upgrading their browser, and it’s only ever added if the exploit is detected by the list manager.

  11. Gerv said: browser exploits are almost always visually impossible to detect. How can you teach a user to avoid them?

    True, most browser exploits are neither automatically nor manually detectable.

    I was thinking about educating people about the risks of the internet and common pitfalls (like not clicking on a popup that says “You are the 100,000,000th visitor and have won a free car!”) – not teaching them how to debug a JavaScript file ;)

    Sadly, many of the people who should take care of education (mainly parents and politicians) prefer some kind of blacklist/censorship method to block content, instead of teaching people how to handle or avoid it. At least this is true in Germany; I’m not sure about other countries. It’s the same dumb idea as music CD copy protection. I fear that if the community establishes a blacklist system, those people will use it as an argument to cut education budgets – instead of accepting the limits of such a system.

    I am not against a blacklist system; I would probably use one myself, as another mitigating factor alongside a firewall and AV software. I just fear such systems for the false promises many marketing people could attach to them.

  12. True, most browser exploits are neither automatically nor manually detectable.

    Of the 12 fixed in 1.0.3 and 1.0.4, I think 8 would be wholly or partially detectable by a patched browser coded to look for them. So I think it would be a significant proportion.