The IE Blog has a post about the new Phishing Filter which will be built into IE 7. Basically, there’s a client-side whitelist and a server-side blacklist: if you turn the filter on, every URL you visit that is not on the whitelist gets sent to Microsoft’s servers to be checked. And if you suspect a site is a phishing site, you can click “Report Phishing Site” on the Tools menu to submit that URL to a queue for manual verification.
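For the curious, here’s roughly what that lookup flow looks like, sketched in Python. The whitelist contents, the stand-in for the remote service, and the verdict strings are all my own invention for illustration; Microsoft hasn’t published the actual protocol details.

```python
# Rough sketch of a whitelist-first, server-blacklist-second check.
# Hosts, verdicts, and the remote-lookup stub are invented placeholders.
from urllib.parse import urlsplit

KNOWN_GOOD_HOSTS = {"example.com", "www.example.com"}  # client-side whitelist

def check_url(url: str) -> str:
    host = urlsplit(url).hostname or ""
    if host in KNOWN_GOOD_HOSTS:
        return "ok"                      # whitelisted: no server round trip
    return query_blacklist_service(url)  # everything else goes to the server

def query_blacklist_service(url: str) -> str:
    # Stand-in for the remote blacklist lookup; a real client would send
    # the (stripped) URL to the vendor's service and parse its verdict.
    return "unknown"

print(check_url("http://unknown.example/login"))  # -> "unknown"
```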
However, for privacy reasons, IE strips the query parameters from a URL before sending it. And this is where the problems with such an approach start to become apparent: what guarantees that the page the human verifier sees (requested without the parameters) is the same one the original reporter saw? Nothing does: a phisher can simply serve the scam page only when the right parameter is present, and something innocuous otherwise.
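To make that concrete, here’s a toy server that does exactly this kind of cloaking. The parameter name and page contents are invented; the trick itself is the point.

```python
# Hypothetical phishing page that hides from a verifier who requests the
# URL with its query parameters stripped. The parameter name ("t") and
# the page contents are invented for illustration.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlsplit, parse_qs

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        params = parse_qs(urlsplit(self.path).query)
        if "t" in params:
            body = b"<h1>Please re-enter your bank password</h1>"  # what victims see
        else:
            body = b"<h1>Welcome to my harmless homepage</h1>"     # what the verifier sees
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8000), Handler).serve_forever()
```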
Server-blacklist-based anti-phishing implementations put you in an arms race, and one in which the phishers hold all the cards. They have 20,000-strong botnets with automatic deployment tools; you have to check every submitted URL by hand. They can invent new ways of obfuscating and redirecting URLs; you are limited by the tools built into your deployed client. They have a large financial incentive; you are giving away a free product.
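To illustrate the asymmetry: with a wildcard DNS record, a phisher can mint a fresh hostname for every email sent, so a blacklist entry created from one victim’s report never matches the next victim’s URL. A sketch, assuming exact-URL matching and with invented hostnames:

```python
# Why exact-match blacklists lose the arms race: a wildcard DNS entry
# lets every phishing mail carry a never-before-seen hostname.
# Hostnames and the blacklist entry are invented for illustration.
import secrets

BLACKLIST = {"http://abc123.phish.example/login"}  # entry from one report

def fresh_phishing_url() -> str:
    # New random subdomain per mail; wildcard DNS resolves them all.
    return f"http://{secrets.token_hex(4)}.phish.example/login"

url = fresh_phishing_url()
print(url, "blocked" if url in BLACKLIST else "not blocked")  # almost surely "not blocked"
```

Generating the URL costs the phisher nothing; getting it verified and blacklisted costs the defender a human review. That ratio is the whole problem.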
There’s no magic bullet, but I believe the correct route is a combination of greater SSL use (which means we need SSL vhosting), stronger certificate field verification and OCSP, in-browser standalone heuristics, and a sprinkling of user education. A minimal amount of the latter is, IMO, sadly unavoidable – it’s very hard to protect people who will put their credit card number into just any web form that asks for it.
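On the SSL vhosting point: the Server Name Indication TLS extension is what makes it possible to serve multiple certificates from a single IP address, because the client names the host it wants before the server picks a certificate. A rough server-side sketch using Python’s ssl module; the hostnames, certificate paths, and port are invented placeholders.

```python
# Sketch of SSL virtual hosting via SNI: one listening socket, one IP,
# but the certificate presented depends on the hostname the client asks
# for. Hostnames and certificate file paths are invented placeholders.
import socket, ssl

contexts = {}
for host in ("shop.example", "bank.example"):
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(f"/etc/certs/{host}.pem")  # per-vhost key + cert
    contexts[host] = ctx

def pick_cert(sock, server_name, default_ctx):
    # Called mid-handshake with the SNI hostname; swap in the matching
    # per-vhost context before the certificate is sent to the client.
    if server_name in contexts:
        sock.context = contexts[server_name]

default = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
default.load_cert_chain("/etc/certs/default.pem")
default.sni_callback = pick_cert

with socket.create_server(("", 8443)) as srv:
    with default.wrap_socket(srv, server_side=True) as tls_srv:
        conn, addr = tls_srv.accept()  # each client now gets the right cert
```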