Tunelines

In churches, we learn new songs from time to time – which is a good thing. This is normally done by the music leader singing the song, and then everyone trying to join in. Those who read music would perhaps like to have the sheet music in front of them, but it’s almost never available: it can’t be projected (those who can’t read music would be lost, and projectors are low resolution), and photocopying it and handing it out is inconvenient and disruptive.

But what if one could take the bare essentials of sheet music and display them alongside the words? What’s most important to people when learning a new tune? I would say two things – note duration, and the pitch difference between the previous note and the current one. Hence, Tunelines, which are inspired by Sparklines, a very simple way of showing a graph of data, usually over time. The idea is that they can be displayed alongside the lyrics while a congregation is learning a song, and removed after a few times once everyone has the hang of it.
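
To make the idea concrete, here is a minimal sketch of my own – not the implementation behind the images in this post – that renders a crude ASCII tuneline from (pitch, duration) pairs: one horizontal run per note, length showing duration, row showing pitch.

```python
# Sketch: render an ASCII "tuneline" from (pitch, duration) pairs.
# Each note becomes a run of underscores whose length reflects its
# duration, on a row that reflects its pitch. The note data below is
# a made-up phrase, not from the song in the post.

def tuneline(notes, chars_per_beat=2):
    """notes: list of (pitch, beats); pitch on any integer scale."""
    pitches = [p for p, _ in notes]
    hi = max(pitches)
    width = sum(int(b * chars_per_beat) for _, b in notes)
    grid = [[" "] * width for _ in range(hi - min(pitches) + 1)]
    col = 0
    for pitch, beats in notes:
        length = int(beats * chars_per_beat)
        for c in range(col, col + length):
            grid[hi - pitch][c] = "_"   # row 0 is the highest pitch
        col += length
    return "\n".join("".join(row) for row in grid)

print(tuneline([(0, 1), (2, 1), (4, 2), (2, 1), (0, 1)]))
```

The “with verticals” variant would just add a `|` wherever adjacent segments sit on different rows.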

There are two variants, one with verticals and one without. I prefer with, as I think it’s easier to follow, but reasonable people may differ. Right-click and “View Image” for a larger version. My example is Before The Throne of God Above.

Picture of some song lyrics with lines alongside them

For various reasons my church has no plans to use these, so I’m shelving this project, but just wanted to put it out there in case it inspires anyone else.

Accessing Vidyo Meetings Using Free Software: Help Needed

For a long time now, Mozilla has been a heavy user of the Vidyo video-conferencing system. Like Skype, it’s a “pretty much just works” solution where, sadly, the free software and open standards solutions don’t yet cut it in terms of usability. We hope WebRTC might change this. Anyway, in the meantime, we use it, which means that Mozilla staff have had to use a proprietary client, and those without a Vidyo login of their own have had to use a Flash applet. Ick. (I use a dedicated Android tablet for Vidyo, so I don’t have to install either.)

However, this sad situation may now have changed. In this bug, it seems that SIP and H.263/H.264 gateways have been enabled on our Vidyo setup, which should enable people to call in using standards-compliant free software clients. However, I can’t get video to work properly, using Linphone. Is there anyone out there in the Mozilla world who can read the bug and figure out how to do it?

UI Hall of Shame: Mailman Moderation UI

The moderation UI provided by Mailman 2.1.14, the version Mozilla uses (note: I don’t know if this is the latest version, and I don’t know if this UI is still present) looks like this. The controls for an individual message are as in this screenshot:

Mailman UI - complex controls

What it should look like is something like this:

Subject                                   | Sender                       | Spam Score | Actions
------------------------------------------|------------------------------|------------|---------------------------------------------------------------------
***SPAM*** Regarding Your Online Account  | account.review@royalbank.com | 4.4        | Reject and Blacklist / Accept and Whitelist / Reject / Accept / Defer
Depto. Comercial                          | comercialsouzasul@oi.com.br  | 4.2        | Reject and Blacklist / Accept and Whitelist / Reject / Accept / Defer


I would love it if someone were to write a Greasemonkey script or similar which did this rearrangement. It would improve my life measurably. Any takers?

Alternative To “Look For The Lock”

Firefox 4.0 will be the first major browser shipped without a ‘lock’ icon for SSL connections. Instead, we have identity indicators like the EV indicator and the domain indicator.

Lots of websites tell users to “look for the lock” to check they are secure. These websites will want to update their text to say something else. It would be awesome if we had already developed some (cross-browser) text and graphics they could use, one text for EV and one for non-EV sites. We could work to make it as simple as possible. We could even create a website which detected the user’s browser and explained what to look for, and also provided instructions for sites who wanted to take our explanation and ship it on their site.

If we don’t do this, we will get site authors writing messages like “look for a green bar” instead of the much more useful “look for site identity”. And another opportunity to improve the security of the web will be lost.

Anyone up for doing this?

Scrolling Usability Fail

Have a quick look at the Firefox UI heatmap. 95% of people explicitly click and drag the vertical scroll slider to scroll. And they do it a lot – an average of 200 clicks per user.

However, using the vertical scrollbar to scroll a web page sucks.

  • It’s a long way from the content area, where the user’s attention should be focussed;
  • It’s thin and hard to hit, particularly if the window is not maximised (hands up if you’ve tried to scroll a page and ended up resizing it because it’s a non-maximised window up against the edge of the screen);
  • At least on Ubuntu, the colour difference between it and the scrollbar track is minimal, making it hard to see;
  • It’s difficult to scroll at a consistent speed, because the distance the page moves for each pixel you drag the slider is proportional to the length of the page. On very large pages, this can lead to you entirely losing your place.
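
That last point is easy to quantify. A sketch, with illustrative window and page sizes of my own choosing:

```python
# Sketch: why thumb-drag scrolling speed depends on page length.
# Dragging the scrollbar thumb by one pixel moves the content by
# roughly (page_height / track_height) pixels, so on a long page a
# tiny slip in the drag jumps you a long way. Numbers are illustrative.

def content_moved_per_thumb_pixel(page_height, track_height):
    return page_height / track_height

short_page = content_moved_per_thumb_pixel(2_000, 800)    # 2.5 px of content
long_page = content_moved_per_thumb_pixel(200_000, 800)   # 250 px of content
print(short_page, long_page)
```

A one-pixel twitch of the hand on the long page moves the content a hundred times further than on the short one – hence the lost place.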

With good colour contrast between the bar and track, the scrollbar is a reasonable UI for seeing where you are in a page, but it’s a poor UI for scrolling.

Obviously, I’m not the first person to notice this. First, we had the wheel mouse, and now we have trackpads with dedicated vertical scroll areas (and sometimes horizontal too). And on some systems it’s possible to scroll a page by holding down a modifier key and moving the mouse, although I can’t for the life of me work out how to do it on Ubuntu. (Update half way through writing: this is what the “Use autoscrolling” preference is in the Firefox advanced options. I would certainly not have worked that out from the name – it’s not about automatically scrolling at all! We should fix that.)

And yet, with all these mechanisms available, 95% of people still pull the mouse all the way over to the side of the screen, click and drag.

Why?

Do the existing mechanisms suck? And can we do something about this? Making scrolling more pleasant would improve my day immensely. Can we turn ‘autoscrolling’ on by default?

Universal Subtitles – Usability Kudos

Universal Subtitles (a Drumbeat project) aims to be “Wikipedia for Subtitles” – they want to see every single piece of online video both subtitled (for the hard of hearing) and translated into multiple languages. To give you some idea of the size of this task, at the moment, 24 hours of video are being uploaded to YouTube alone every minute.

They have a web client for subtitling, which I have just tried out, and I must report that it’s an absolute joy to use. You might imagine that subtitling a video properly would take hours and be really fiddly – but they make it a three-pass process (input, align and check) and the UI is really smooth. Each step is preceded by an instruction video, there are keyboard commands and intuitive drag controls. And it’s all built using web standards.

Sign up for an account and try it out :-)

Simple Scan

A word of praise for “Simple Scan”, the new scanning app in Ubuntu Lucid 10.04. Turn on your scanner, start the application, press “Scan”, watch the image appear before your eyes, drag some borders to crop, and press “Save”. It’s that simple. Great work.

Even more noteworthy: the interface they implemented is even simpler than the one in the spec, which is still option-heavy. Usually, when you implement a UI, it ends up more complicated than the spec due to unforeseen edge cases.

Uploading Screenshots

Further to my previous post… it struck me that a lot of people’s problems would suddenly become much clearer if they could upload a screenshot. However, taking and uploading screenshots is currently a reasonably complex process, especially on XP (where it involves Microsoft Paint).

Here’s today’s usability challenge: can we enhance Firefox in some way to make this easier? For example, it would be great if we could give users instructions like:

  • Right click on the “Browse” button of this file upload control and select “Upload Screenshot”
  • See the timer start to count down from 10 in the top right of your screen
  • Bring the relevant window or tab you want to take a screenshot of to the front
  • Wait for the timer to expire
  • Hit “submit” on the form

Finally…


With the release of Ubuntu 10.04 yesterday, it’s finally possible:

(Comparison). Of course, the number of possibilities for Caps Lock has now gone up from 10 to 14 (you can also now make Caps Lock an additional Hyper, Num Lock or Super if those are options you’ve been waiting for all your life), but at least the important one is there!


Liferea Usability Rant

Another blogpost I wanted to read has disappeared into the ether, and it’s time for another usability rant – this time about Liferea. This is Ubuntu’s recommended feed reader – at least, it’s the only result for a search in the Ubuntu Software Centre for “feed reader” that comes from the officially-supported repository. Although when I asked mpt, who works for Canonical in usability, what the official feed reader was, he said:

https://help.ubuntu.com/9.10/internet/C/internet-otherapps.html recommends Liferea. I think that’s as official as you’d get on that subject.

Which is hardly a ringing endorsement. Anyway, I moved to it for half my RSS feeds (the personal ones) to see if anything was better than Thunderbird’s frankly patchy feed reading support.

OK, it doesn’t have Thunderbird’s “thought you’d read this item? Let me give you another, unread copy of it in the feed” bug, but boy – did the developers actually sit down and try and read feeds with it? The usability is a nightmare.

The main way of reading feeds is an Unread virtual folder, which contains all of the unread items. As you read things and move to the next one, they disappear from here, although you can still find them in the folder for the individual feed.

There’s a “Next Unread Item” button, but no “Previous Unread Item” and no history. So if you accidentally move off the item you are reading, it immediately disappears from the Unread view (because you read it, duh) and there’s no way of finding it again! If you can’t remember which blog it’s in, you have to trawl through 50 feeds, looking at the topmost few entries, and see if you recognise it. And, as you start this process, you realise that the next unread item, which the cursor went to when it went off the one you wanted to read and are now chasing, is also now lost, because it got marked as read when it got highlighted and now you’ve moved the focus off that one too. And you can’t remember anything about that one at all.

Basically, they’ve implemented a browsing application without a Back button. Genius.

That’s OK, you think: I can create a Search folder for “all items newer than a week” and sort it by date, and find my lost items that way. Except that “date” is not one of the options in the search builder. You can search by whether it’s a podcast or not, but not when the wretched thing was written. Great. You have to create a folder of everything, which wedges your machine for 20 seconds every time you open it as it loads 5000 items into a data structure using some sort of naive O(n²) algorithm.

There’s a “Mark Items Read” button on the toolbar. Accidentally click it, and you’ve lost your “to read” queue entirely, with no way of getting it back. And it’s right next to the “Update All” button which is used regularly. “Mark Items Read” might as well be labelled “Cause Me To Scream in Frustration”.

  • You can’t select multiple items. If you try, the first one gets deselected and, because it was now marked as read, disappears from the view!
  • You can never find a feed you want, because the list is in the order they were added, and there’s no sort, only drag-and-drop. Hey, I can practise my manual quicksort!
  • If you decide you want to keep an item in your “to read” queue, and so, having scanned it, press Ctrl-U to mark it back as unread, then Ctrl-N refuses to work. Presumably because it thinks “you’re on the next unread message already, dummy”.

Grr.

Facebook Email Links

In the past, I have commented on the Facebook policy of showing email addresses as graphics rather than text. They sent a cease-and-desist to Chris Finke about his addon to convert them back to proper mailto: links.

I am pleased to announce that Facebook have relented! A Facebook spokesman said:

Showing email addresses in plain text makes it easier for people to use
the information to connect with their friends.

In the spirit of making it even easier for people to use the information to connect with their friends, I’d like to plug Chris Finke’s recently-released “Facebook Email Links” Firefox addon, which does the same thing as the old one, but without the need for OCR. Install it, and you will be able to email your Facebook friends using standards-based email with a single click.

So Long, And No Thanks For The Externalities

And here’s a summary post to round off the series on So Long, And No Thanks For The Externalities (Part 1, Part 2, Part 3). :-)

The concluding points of the paper are that:

  1. We should reduce the cost of security advice to users
  2. We should offer advice whose cost is proportional to the victimization rate
  3. We should retire advice that is no longer compelling
  4. We should prioritize the advice we do give

I agree with all four. 3) and 4) in particular can be hard to persuade people of, particularly geeks and techies who, as well as understanding the advice easily themselves, are often people who like people to have “all the information”. Read some of the comments on my first post about passwords to see examples.

For SSL, I am hoping that we can get to a point where the main piece of security advice on the web is “check the name of the company is correct in the green box”. The EV vetting system should ensure a very low false issuance rate, and the revocation system should ensure minimal damage for any falsely-issued certs. That glance should hopefully take each person half a second per site. It’s not the 0.36 seconds per user per day which the paper argues for, but it’s a whole lot closer than we are now.
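
The arithmetic behind that comparison, with an assumed visit count – the 10 sites/day figure is mine; only the half-second glance and the 0.36-second budget come from the text:

```python
# Sketch of the cost comparison. The number of distinct HTTPS sites
# visited per day is an assumption; the per-site glance time and the
# per-day budget are from the post and the paper respectively.

glance_seconds = 0.5      # per-site identity check
visits_per_day = 10       # assumed distinct HTTPS sites per user per day
daily_cost = glance_seconds * visits_per_day
paper_budget = 0.36       # seconds per user per day, from the paper
print(daily_cost, daily_cost / paper_budget)  # 5.0 s/day, ~14x the budget
```

Still an order of magnitude over the paper’s budget, but far better than reading URLs character by character.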

I’m convinced we can’t protect users fully with zero user education. But we should minimise what that education is. And it shouldn’t involve reading URLs.

Certificate Errors

This is the (belated) third of three posts relating to So Long And No Thanks For The Externalities. At least I got them all out in the same decade :-)

First off, the author notes the often-found result that setting a site’s favicon to the lock symbol will help fool users (as will putting a lock in the page body). Although it won’t solve the entire problem, on general principles my view is that (now that tabbed browsing is established as the way everyone browses) we shouldn’t put favicons in the URL bar, only on the tabs. The principle is that the URL bar should be entirely trusted, and not controllable (apart from the contents of the URL, obviously) by the site being visited.

This, incidentally, is an issue with tabs-on-top. It puts the browser-controlled bit of the UI (the URL bar and toolbar) between two page-controlled bits of the UI (the title/favicon, and the page itself). That makes it harder to educate the user about what is trustworthy and what is not. I can’t see any way around the idea that some bits of the UI a user is faced with are trustworthy and some are not (unless you make it all untrustworthy). If there must be this split, keeping a geographical distinction between the two types must help. Logically, I’m a fan of tabs-on-top. But I can see security disadvantages too.

I absolutely agree that a goal of secure website UI design should be to minimise the number of errors encountered, while still maintaining security. Repeated warnings habituate users into ignoring them – although we have made them harder to ignore in recent Firefoxes. I would be very interested in research which looked at whether this has had an impact on the number of bad certificates out there. The trouble is that it’s hard to measure that number, because just because a scanner can’t trace a cert back to its root, that doesn’t mean that the intended users of that website can’t.

Unfortunately, some suggestions people commonly make for eliminating warnings (e.g. “just quietly show an HTTP UI if the cert is self-signed”) have security holes at the edge cases.

STS is a great example of a way we can improve security without the user noticing anything different. For sites which use it, it entirely solves the problem raised in Section 5.2, where an attacker can intercept and redirect an initial HTTP request before the HTTPS session is established.
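
For context, STS works via a response header, `Strict-Transport-Security` (the mechanism later standardised as HSTS): the browser remembers the policy and rewrites subsequent http:// requests for that host to https:// before they leave the machine. A minimal sketch of parsing such a header – the example value is mine, not from the post:

```python
# Sketch: parse a Strict-Transport-Security response header into its
# directives. A conforming browser remembers the policy for max-age
# seconds, closing the window in which an attacker can intercept the
# initial plain-HTTP request.

def parse_sts(header_value):
    """Return a dict of STS directives, e.g. {'max-age': '31536000'}."""
    policy = {}
    for directive in header_value.split(";"):
        name, _, value = directive.strip().partition("=")
        policy[name.lower()] = value or True   # valueless directive -> flag
    return policy

print(parse_sts("max-age=31536000; includeSubDomains"))
```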

At the end of the section, the author makes the astonishing claim that:

In fact, as far as we can determine, there is no evidence of a single user being saved from harm by a certificate error, anywhere, ever.

Even if that were true, if we removed all certificate errors and just blindly trusted any SSL connection, the situation would change somewhat rapidly. They come close to admitting this in the following sentence:

Of course, even if 100% of certificate errors are false positives it does not mean that we can dispense with certificates. However, it does mean that for users the idea that certificate errors are a useful tool in protecting them from harm is entirely abstract and not evidence-based. The effort we ask of them is real, while the harm we warn them of is theoretical.

The assertion that the benefit of certificate errors (a side effect of certificate checking) is not evidence-based is like asserting that there is no evidence that forbidding guns and grenades on planes reduces the risk of hijacking. No-one wants to try the alternative to gather the evidence!

The fact that most phishing sites don’t use SSL is because users don’t look for SSL or site identity. And that’s a UI and education problem we could fix, if the world put its mind to it. After all, most people now automatically buckle up when they get in a car. And now we have EV, we can build a site identity system on rock rather than sand.

URL Reading

This is the second post about Cormac Herley’s paper called “So Long And No Thanks For The Externalities”, which highlights the cost to users of security advice.

He focusses on 3 areas of advice-giving: Password Rules, URL Reading (to avoid phishing) and Certificate Errors. This blogpost is about URL Reading.

His point is that teaching users to read URLs for protection from phishing is a lost cause. And I think he’s probably right. There is no way we can provide simple, reliable advice in this area – URL syntax is complex enough that anything simple isn’t reliable, and what’s reliable isn’t simple. We need a way to securely replace URLs with a human-readable, unambiguous, verifiable, site or business identifier. And that’s exactly what EV certificates are.

So stay tuned for tomorrow’s instalment on Certificate Errors, where he has something to say about those :-)

Password Rules

Cormac Herley of Microsoft Research recently published a paper called “So Long And No Thanks For The Externalities”, which highlights the cost to users of security advice. His point: user time is not free, and there are a lot of users, so that adds up to a massive cost, which often outweighs the harm that the advice is trying to avoid.

He focusses on 3 areas of advice-giving: Password Rules, URL Reading (to avoid phishing) and Certificate Errors. This blogpost is about Password Rules.

The 7 rules most often cited are these:

  1. Length
  2. Composition (e.g. digits, special characters)
  3. Dictionary membership (in any language)
  4. Don’t write it down
  5. Don’t share it with anyone
  6. Change it often
  7. Don’t re-use passwords across sites.

I won’t recap his entire argument (read the paper) but his point is that they all impose a cost, and don’t actually provide significant mitigation against common attacks.

Basically, he’s right. Here’s my alternative proposal.

Sites should abandon putting any restrictions beyond the most basic (longer than 3 characters, not your own name) on passwords, and should not give advice like the above. Instead, they should measure the strength of the password chosen. If it’s weak, have a 5-unique-strikes-and-lockout policy (unique strikes, so that a client attempting to authenticate multiple times with a mis-typed or old password doesn’t trigger the lockout). If it’s strong, have no lockout (but perhaps a rate limit).

This means that most people can pick weak passwords with a much reduced risk of brute-forcing, and those people who are concerned about denial of service (someone deliberately keeping them locked out) can just pick a stronger password and have the lockout removed. You don’t even need to explain any of this to the user; users who might be disliked enough by those with the technical capability to mount a sustained DoS are few and far between, and are much more likely to pick strong passwords anyway.
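
A sketch of how the proposal could look server-side. The strength check and the account bookkeeping here are crude stand-ins of my own; a real system would estimate entropy properly, store only password hashes, and add the rate limit for strong passwords:

```python
# Sketch of the proposed policy: accounts with weak passwords get a
# 5-unique-strikes lockout; repeats of the same wrong guess (an old or
# mis-typed password) count only once, so a confused legitimate client
# doesn't trigger it. Strength test below is a crude length stand-in.

UNIQUE_STRIKE_LIMIT = 5

class Account:
    def __init__(self, password):
        self.password = password          # real code: store a hash
        self.failed_guesses = set()       # unique wrong passwords seen
        self.locked = False

    def weak(self):
        return len(self.password) < 12    # stand-in for an entropy estimate

    def try_login(self, guess):
        if self.locked:
            return False
        if guess == self.password:
            self.failed_guesses.clear()
            return True
        self.failed_guesses.add(guess)    # repeats of one typo count once
        if self.weak() and len(self.failed_guesses) >= UNIQUE_STRIKE_LIMIT:
            self.locked = True            # strong passwords: rate-limit instead
        return False

acct = Account("tuesday")
for g in ["tuesday1", "tuesday1", "monday", "tues", "Tuesday", "tuesdy"]:
    acct.try_login(g)
print(acct.locked)  # True: the repeated typo counted once, but 5 unique guesses
```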

The principle here is to take the load off the users and put it on the technology. Such a scheme is more complex to implement server-side than a simple “hash the password, compare it to the stored one, if it’s the same, authenticate” mechanism which sites have been using for ever. But it’s likely to provide a much greater increase in security than giving advice which is generally ignored.