This is one of a few articles I’ll write about accessibility. I’ve been working on accessibility here and at Observable, and have collected a few thoughts.
One of the most frequently cited accessibility tests is color contrast: whether text on a page contrasts enough with its background to be readable by people with different types of vision, including, for example, low vision or macular degeneration.
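For context, WCAG 2 scores contrast as a ratio of relative luminances. Here is a minimal sketch of that computation - the function names are mine, but the constants and thresholds come from the spec (SC 1.4.3 requires at least 4.5:1 for normal text at level AA):

```javascript
// Relative luminance of an sRGB color (channels 0-255), per the WCAG 2 definition.
function relativeLuminance([r, g, b]) {
  const linear = (c) => {
    const cs = c / 255;
    return cs <= 0.03928 ? cs / 12.92 : Math.pow((cs + 0.055) / 1.055, 2.4);
  };
  return 0.2126 * linear(r) + 0.7152 * linear(g) + 0.0722 * linear(b);
}

// Contrast ratio: (lighter + 0.05) / (darker + 0.05), ranging from 1:1 to 21:1.
function contrastRatio(a, b) {
  const [hi, lo] = [relativeLuminance(a), relativeLuminance(b)].sort((x, y) => y - x);
  return (hi + 0.05) / (lo + 0.05);
}

// Black on white hits the maximum, 21:1. #767676 on white lands around
// 4.54:1 - roughly the lightest gray that still clears the 4.5:1 AA bar.
console.log(contrastRatio([0, 0, 0], [255, 255, 255]));
console.log(contrastRatio([118, 118, 118], [255, 255, 255]));
```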
According to the automated test suites, many websites have insufficient color contrast. Color contrast is one of the key tests driving claims that the web is inaccessible.
Now, before I proceed, let me say what I’m not saying. I agree that the web has dramatic accessibility problems. The concerns raised by WebAIM are real, and the lived experience of people browsing the web with screen reader software, braille displays, and all kinds of capabilities matters tremendously. There’s a preponderance of evidence that websites are insufficiently accessible for everyday use, and that web developers are unaware of or uninterested in the problem.
Okay, so: automated color contrast testing tools, and perhaps the standards they test against, are bad.
Why are they bad? Because they don’t correctly handle the difference between light text on dark backgrounds and dark text on light backgrounds, and because they require contrast high enough that placeholder text fails. Because massive websites like Apple.com fail it. Or Stripe, arguably the leader in tech-marketing web design. Or articles on web accessibility, like Ethan Marcotte’s “The Web We Broke” - the article itself doesn’t pass the color contrast test.
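One concrete way to see the polarity problem: the WCAG 2 ratio is symmetric, so swapping text and background colors produces the identical score, and the formula cannot, by construction, treat light-on-dark differently from dark-on-light. The same math also shows the placeholder issue. The colors below are illustrative, not taken from any of the sites mentioned:

```javascript
// Minimal WCAG 2 contrast math: linearize each sRGB channel, weight, sum.
const lum = (rgb) =>
  rgb
    .map((c) => c / 255)
    .map((cs) => (cs <= 0.03928 ? cs / 12.92 : ((cs + 0.055) / 1.055) ** 2.4))
    .reduce((sum, cl, i) => sum + cl * [0.2126, 0.7152, 0.0722][i], 0);

const ratio = (a, b) => {
  const [hi, lo] = [lum(a), lum(b)].sort((x, y) => y - x);
  return (hi + 0.05) / (lo + 0.05);
};

const white = [255, 255, 255];
const navy = [0, 0, 128];

// Symmetric by construction: which color is the text never affects the score.
console.log(ratio(white, navy) === ratio(navy, white)); // true

// A typical placeholder gray (#999999) on white scores well under 4.5:1.
console.log(ratio([153, 153, 153], white));
```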
According to the WebAIM report, maybe 1% of sites pass the test:

> Because automatically detectable errors constitute a small portion of all possible WCAG failures, this means that the actual WCAG 2 A/AA conformance level for the home pages for the most commonly accessed web sites is very low, perhaps below 1%.
Is this a failure of epic proportions, or a bad test? I think it’s a little of both. If the test reflected natural perception, then at least a few designers should be able to pass it simply by using contrast that looks adequate. As far as I can tell, that doesn’t happen: nobody ‘falls into success’ with this test - the only people passing color contrast tests are designing to the test. And even those designing to the test, as the experience at Shopify makes clear, end up having to compromise design or usability in exchange.
Here’s what should happen:
Number one: Be honest about uncertainty. Automated accessibility testing is imprecise. The tools regularly report false failures - like testing the contrast between dynamically set colors, or incorrectly computing the color of stacked elements. They also report false successes - like the light-text-on-dark-background case mentioned above. But the only mention of uncertainty in the WebAIM report - in its Methodology section - is that there could be more failures than the ones automatically identified.
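The stacked-elements case is worth spelling out: to score text over a semi-transparent layer correctly, a tool has to alpha-composite the stack down to one effective background color before computing any ratio - a step that, in my experience, checkers often skip, reading the topmost element’s declared color instead. A sketch of that compositing, using the standard source-over formula:

```javascript
// Source-over compositing: blend an overlay color onto the layer beneath it.
// `alpha` is the overlay's opacity in [0, 1]; channels are 0-255.
function composite([r1, g1, b1], alpha, [r2, g2, b2]) {
  const blend = (a, b) => a * alpha + b * (1 - alpha);
  return [blend(r1, r2), blend(g1, g2), blend(b1, b2)];
}

// A 50%-opaque black scrim over a white page: the *effective* background
// is mid-gray, not the white a naive tool would read off the page element.
console.log(composite([0, 0, 0], 0.5, [255, 255, 255])); // [127.5, 127.5, 127.5]
```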
Number two: Recalibrate color contrast rules. Contrast levels should be recalibrated to reflect human perception and, in short, be achievable. The idea that only 1% of the web is currently color-contrast accessible says a lot more about the arbitrary and overbearing nature of the test than about the web. In addition, the testing tools themselves should advertise their own limitations.
Number three: Aim for A first. The WebAIM study writes about A/AA compliance - with AA being the “should” requirements and A being the “must” requirements. Compliance with A should be the first step. Of course, better is better, but as the supposed 1% pass rate implies, we’re a little desperate for some wins.
In short: color contrast tests should be achievable, should accurately reflect vision, and the tooling for them should be clear about what it can and cannot find.