Inefficient GreaseMonkey Scripts

I’ve been having performance problems with my Firefox (2) for months. Opening lots of tabs at once, or even a particularly large page, would cause the browser to freeze for several seconds. I finally got around to tracking this down, and it turned out to be a couple of GreaseMonkey scripts.

These scripts fire on every page and scan the page text for particular things (in my case, UK postcodes and Bible references) and hyperlinkify them.

The postcode linkifier puts together an XPath expression:

var xpath = "//text()[(parent::" + allowedParents.join(" or parent::") + ")]";

where allowedParents is an array of 50 tag names. It then evaluates it, and applies a complicated postcode regexp (although not quite the full thing):

const pcRegex = /\b[A-PR-UWYZ][A-HK-Y0-9][A-HJKSTUW0-9]?[ABEHMNPRVWXY0-9]? {1,2}[0-9][ABD-HJLN-UW-Z]{2}\b/g;

to all the text. I can imagine that this is inefficient, although I wouldn’t like to guess at which part takes the lion’s share.
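A minimal sketch of the overall pattern, for orientation — the helper name and the link target here are my own invention, the tag list is abridged, and the real script's node surgery is no doubt more careful:

```javascript
// The script's postcode pattern, rejoined onto one line.
const pcRegex = /\b[A-PR-UWYZ][A-HK-Y0-9][A-HJKSTUW0-9]?[ABEHMNPRVWXY0-9]? {1,2}[0-9][ABD-HJLN-UW-Z]{2}\b/g;

// Pure string helper: wrap each postcode in a link (the target URL is
// illustrative, not necessarily what the real script uses).
function linkifyPostcodes(text) {
  return text.replace(pcRegex, function (pc) {
    return '<a href="http://maps.google.com/maps?q=' +
           encodeURIComponent(pc) + '">' + pc + "</a>";
  });
}

// Browser-only part: evaluate the XPath, then rewrite matching text nodes.
if (typeof document !== "undefined") {
  var xpath = "//text()[(parent::p or parent::td or parent::li)]"; // abridged
  var nodes = document.evaluate(xpath, document, null,
                                XPathResult.UNORDERED_NODE_SNAPSHOT_TYPE, null);
  for (var i = 0; i < nodes.snapshotLength; i++) {
    var node = nodes.snapshotItem(i);
    pcRegex.lastIndex = 0; // a /g regexp keeps state between test() calls
    if (pcRegex.test(node.nodeValue)) {
      // Simplified: the real script presumably splits the text node
      // rather than wrapping the whole thing in a span.
      var span = document.createElement("span");
      span.innerHTML = linkifyPostcodes(node.nodeValue);
      node.parentNode.replaceChild(span, node);
    }
  }
}
```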

The Bible refalizer uses a simpler XPath expression:

textNodes = document.evaluate('//text()', document, null,
XPathResult.UNORDERED_NODE_SNAPSHOT_TYPE, null);

but a much more complicated regexp involving various abbreviations of the names of the 66 books of the Bible.
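The shape of that regexp is roughly as follows — a sketch with a handful of sample books and abbreviations of my own choosing, not the script's actual list:

```javascript
// A few sample books with common abbreviations; the real script covers
// all 66 books and many more abbreviation variants.
var books = [
  "Gen(?:esis)?",
  "Ex(?:od(?:us)?)?",
  "Ps(?:a(?:lms?)?)?",
  "Matt(?:hew)?|Mt",
  "John|Jn",
  "Rev(?:elation)?"
];

// Book name, optional dot, chapter, optional :verse and -verse range,
// e.g. "John 3:16" or "Ps 23".
var refRegex = new RegExp(
  "\\b(?:" + books.join("|") + ")\\.? ?\\d+(?::\\d+(?:-\\d+)?)?",
  "g"
);

console.log("See John 3:16 and Ps 23.".match(refRegex));
// matches "John 3:16" and "Ps 23"
```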

So the question is: is there a way of providing this type of full-text search and modify function which has acceptable performance? Does the Firefox platform need to provide more help with this particular use case? It seems to me that it might be quite common.

6 thoughts on “Inefficient GreaseMonkey Scripts”

  1. Many extensions modify the page DOM for linkification or altering links, Adblock removes content, Firefox itself inserts spans for highlighting search results. I think you’re right that these are fairly common cases.

    What I’d really, really like to see is an API that allows us to declare content replacements up front, so that as a page loads, things like ad removal can be achieved by replacement rather than brute-force removal, and linkification can happen before the user ever sees the unlinkified content.

    I’m not sure if Adblock would get any real benefits from this, but Greasemonkey and associated scripts certainly would (not waiting for load to fire or having to deal with collections of nodes would certainly make my day better next time I write a script).
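    Just to make the idea concrete, here is a purely hypothetical shape for such an API — nothing like this exists, and every name here is invented:

    ```javascript
    // Hypothetical: register replacement rules before the page loads, so
    // the platform could apply them as content streams in.
    var rules = [];
    function registerContentFilter(rule) { rules.push(rule); }

    registerContentFilter({
      match: /\bSW1A 1AA\b/g,               // pattern to intercept
      replace: function (m) {               // applied before first paint
        return '<a href="#">' + m + "</a>";
      }
    });

    // Toy "streaming" application of the rules to a chunk of text.
    function applyFilters(textChunk) {
      return rules.reduce(function (text, rule) {
        return text.replace(rule.match, rule.replace);
      }, textChunk);
    }

    console.log(applyFilters("Write to SW1A 1AA."));
    ```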

  2. I had the same symptoms on both trunk & branch which I tracked down to the Text_size_defaulter_dammit script.

  3. > although I wouldn’t like to guess at which part takes the lion’s share.

    Why guess when you can profile?

    Running Postcode Linkify on http://lxr.mozilla.org/seamonkey/source/layout/base/nsCSSFrameConstructor.cpp gives the following results:

    About 60% of the time is spent under nsXPathEvaluator::Evaluate
    About 15% of the time is spent JS-wrapping the C++ objects
    About 4% of the time is spent testing regular expressions

    In this case, the XPath query returns 35165 text nodes, by the way. So it’s not like we’re not running the regexp a lot of times. We are.

    In this case, no modification is happening, because there are in fact no postcodes in this file. I would suspect that unless you have a file with commonly occurring postcodes, the modification parts would be cheap anyway.

    > Is there a way of providing this type of full-text search and modify function
    > which has acceptable performance?

    “Full text search” is easier than “full-text search but only inside these nodes” kinda stuff like the postcode script is doing… For example, just doing the “//text()” XPath on the same file instead of conditioning on the parent reduces the runtime of the XPath expression by a factor of 30 (not 30%, but 30 _times_). It returns twice as many text nodes, so the time taken for JS-wrapping is doubled, as is the time to evaluate the regexp, so the overall runtime is only reduced by 30% or so.
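    A rough harness for reproducing this kind of comparison — the helper and its name are mine, not part of the profile above; in a browser you would pass `document` and can use `XPathResult.UNORDERED_NODE_SNAPSHOT_TYPE` instead of the literal 6:

    ```javascript
    // Evaluate an XPath expression `runs` times against `doc` and report
    // elapsed time plus the node count of the last run.
    function timeXPath(doc, expr, runs) {
      var start = Date.now();
      var count = 0;
      for (var i = 0; i < runs; i++) {
        var result = doc.evaluate(expr, doc, null,
            6 /* XPathResult.UNORDERED_NODE_SNAPSHOT_TYPE */, null);
        count = result.snapshotLength;
      }
      return { ms: Date.now() - start, nodes: count };
    }

    // In a browser, compare e.g.:
    //   timeXPath(document, "//text()", 10)
    //   timeXPath(document, "//text()[(parent::p or parent::td)]", 10)
    ```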

    Moving all the testing into C++ so there’s never any JS involved would of course be faster, as would making JS and XPConnect faster in general. See also moz2.

  4. Jesse: as usual, you rock. I’ve converted both the Bible and Postcode scripts to be AutoLink filters. You should promote this script more :-)

    Boris: Thanks for doing that! Is there a doc on wikimo or somewhere detailing how to do such profiling?

  5. I wonder how fast the following xpath would be:

    "(" + allowedParents.join("|") + ")/text()"

    We’re optimizing some stupid constructs out, like “bar//foo” (which is equivalent to “bar/descendant::foo”), but I don’t think that we’re able to find silly parent:: axes uses.

    The difference here is that the check on the parent node name is done only once per element, rather than once for each of its text nodes.
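    Built out, that suggestion would look like the sketch below. One caveat of mine: without a leading //, the paths are relative to the context node and only match children of the document root, so an absolute form is probably what’s intended:

    ```javascript
    var allowedParents = ["p", "td", "li"]; // abridged from ~50 tags

    // Select the allowed elements first, then take their text() children,
    // instead of filtering every text node on its parent's name.
    var xpath = "(//" + allowedParents.join(" | //") + ")/text()";

    console.log(xpath); // (//p | //td | //li)/text()
    ```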