Intro to Accessibility Testing

What's the right approach for accessibility testing? We break down automated, manual, and external audit options so you can make the right choice for your team.

Why test for accessibility?

Accessibility testing is a crucial yet often overlooked step in the release process of any user-facing software component. Incorporating elements of both manual and automated testing, accessibility testing identifies potential incompatibilities, code errors, and poor interface designs that may prevent users from being able to operate your software.

By helping ensure your services are available to everyone, accessibility testing helps organizations reduce external risk, improve code quality, and increase user satisfaction, delivering the best possible customer experience.

One of the most important things we’ve learned is that there’s no one-size-fits-all approach to accessibility testing.

What works best for a large enterprise software team may not work as well for a ten-person startup. The best approach for each team depends on bandwidth, expertise, timeline, and budget.

What is accessibility testing?

Accessibility testing measures how available (or accessible) your user interface is to users who rely on assistive technology. Assistive technology includes hardware (such as eye trackers or tactile displays) and software (such as screen readers) that helps a user consume and operate digital content in their preferred modality. For example, a screen reader transforms text and visual content to audio so users with visual impairments can perceive and operate it.

To be considered compatible with assistive technology, software must conform to legal and best-practice standards. The W3C, the standards body for the web, publishes a common standard known as the Web Content Accessibility Guidelines (WCAG), currently at version 2.1. Many countries and regions also codify accessibility requirements into law, such as Section 508 in the United States and EN 301 549 in the European Union, both of which incorporate WCAG by reference. WCAG 2.1 is widely treated as the industry’s “gold standard” because conforming to it also covers the technical requirements of Section 508 and EN 301 549.

All standards contain numerous criteria by which to measure software for accessibility. For example: are all interface elements labeled? Do images and videos have text descriptions available? Can users operate the software using only a keyboard? If your software meets all the criteria for a particular standard, it is considered compliant (and therefore accessible). If it doesn’t, it can only be considered partially accessible, which exposes your organization to risks such as legal demands, customer churn, and lost revenue.
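For illustration, here is a minimal React/TSX sketch (the component, IDs, and copy are hypothetical, not drawn from any standard’s test suite) showing markup that satisfies three of the criteria above: a labeled control, an image with a text alternative, and a keyboard-operable action.

    // Minimal sketch of markup that satisfies three common criteria.
    // Component, IDs, and copy are illustrative only.
    import * as React from 'react';

    export function SearchForm() {
      return (
        <form role="search">
          {/* Labeled element: the visible label is programmatically
              associated with the input via htmlFor/id */}
          <label htmlFor="query">Search products</label>
          <input id="query" name="query" type="text" />

          {/* Text alternative: screen readers announce the alt text
              in place of the image */}
          <img src="/logo.png" alt="Acme Hardware logo" />

          {/* Keyboard operability: a native <button> is focusable and
              activates with Enter or Space, no extra handlers needed */}
          <button type="submit">Search</button>
        </form>
      );
    }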

How is accessibility testing done?

Accessibility testing can follow one of three approaches:

  • Automated
  • Manual
  • Some combination of both

Automated testing catches straightforward failures, such as missing image descriptions or color contrast problems. Manual testing roots out operational issues like focus traps. Combining automated and manual testing provides the highest level of coverage and the greatest return on investment for an engineering organization of any size.

Automated Testing

Automated testing may involve static code analysis, written behavioral or integration tests, or automated accessibility scans. Keep in mind, though, that automation is only a small part of the puzzle: static code analysis tools and automated scanners catch roughly 30% of all potential failures, including markup errors, unlabeled elements, and color contrast problems.
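As one example of a written test with an accessibility check built in, the sketch below assumes a React project using Jest, React Testing Library, and the jest-axe package; the component under test is hypothetical.

    // Integration test that adds an automated axe check to a component test.
    // Assumes Jest, @testing-library/react, and jest-axe are installed;
    // SignupForm is a hypothetical component.
    import * as React from 'react';
    import { render } from '@testing-library/react';
    import { axe, toHaveNoViolations } from 'jest-axe';
    import { SignupForm } from './SignupForm';

    expect.extend(toHaveNoViolations);

    test('signup form has no detectable accessibility violations', async () => {
      const { container } = render(<SignupForm />);

      // Runs the axe-core rule engine against the rendered DOM and reports
      // violations such as unlabeled inputs or invalid ARIA attributes.
      const results = await axe(container);

      expect(results).toHaveNoViolations();
    });

Because checks like this run in the same suite as functional tests, regressions surface in CI rather than in a later audit.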

In terms of implementation cost, static code analysis offers the best bang for your buck. Code linters such as eslint-plugin-jsx-a11y identify simple code problems early in the development cycle and require very little customization. Most are open-source and well-maintained. Because of their low overhead and easy setup, linters are a great accessibility testing choice for teams of any size.
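To sketch what a linter like eslint-plugin-jsx-a11y flags, the hypothetical component below trips several rules from the plugin’s recommended configuration at lint time, before the code ever reaches a browser:

    // Hypothetical component containing problems that eslint-plugin-jsx-a11y
    // reports when its recommended configuration is enabled.
    import * as React from 'react';

    const addToCart = () => { /* placeholder handler */ };
    const openDetails = () => { /* placeholder handler */ };

    export function ProductCard() {
      return (
        <div>
          {/* jsx-a11y/alt-text: images must have a text alternative */}
          <img src="/product.png" />

          {/* jsx-a11y/anchor-is-valid: an anchor without a real href
              is not a link; use a <button> for actions */}
          <a onClick={addToCart}>Add to cart</a>

          {/* jsx-a11y/click-events-have-key-events and
              jsx-a11y/no-static-element-interactions: a clickable <div>
              is not keyboard-focusable and does not announce itself
              as interactive to screen readers */}
          <div onClick={openDetails}>Details</div>
        </div>
      );
    }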

Automated scanners run against rendered output, such as a staging or production site. Scans catch many of the same errors found through static analysis, but may also flag areas for manual investigation. Most off-the-shelf accessibility scanners charge a subscription fee, though open-source options such as Google Lighthouse are freely available as well. For local development, browser extensions such as axe and WAVE can run accessibility scans on demand in the browser, a great way for developers to check their work before deployment and reduce the burden on QA staff.
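Scans like these can also be scripted. Both the axe extension and Lighthouse’s accessibility audit are built on the open-source axe-core engine, and a sketch like the one below, using Playwright with the @axe-core/playwright package, runs that engine against a rendered page in CI. The URL and WCAG tags are placeholders.

    // Sketch of a scripted scan against a rendered page, e.g. a staging build.
    // Assumes Playwright and @axe-core/playwright are installed;
    // the URL below is a placeholder.
    import { test, expect } from '@playwright/test';
    import AxeBuilder from '@axe-core/playwright';

    test('home page passes an automated axe scan', async ({ page }) => {
      await page.goto('https://staging.example.com/');

      // Restrict the scan to WCAG 2.1 A/AA rules; other axe-core tags
      // widen or narrow the rule set.
      const results = await new AxeBuilder({ page })
        .withTags(['wcag2a', 'wcag2aa', 'wcag21a', 'wcag21aa'])
        .analyze();

      expect(results.violations).toEqual([]);
    });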

Manual Testing

Manual testing catches the roughly 70% of failures that automation can’t yet identify and is strongly recommended to ensure full compliance. During a manual accessibility test, a human tester executes a scenario or user flow to determine whether it can be completed with assistive technology. This reveals functional barriers, such as hidden controls or elements without accessible names, that prevent users from operating your software. Although great strides have been made toward automated behavioral testing, a manual tester is best equipped to assess risk and provide actionable recommendations.

Manual accessibility testing can be conducted either in-house or through an external service provider, depending on the size of your engineering organization. It may be conducted against a single component or feature (a spot audit) or the entire piece of software (a comprehensive audit). Quality assurance personnel are a great choice for internal accessibility testing, provided they are familiar with accessibility standards such as WCAG 2.1. If speed or resources are a concern, vendors such as auditors or usability testing agencies can conduct manual testing as well.

Accessibility Audits

For organizations that are just getting started with accessibility, an initial audit from an external firm (or an internal accessibility expert, if one is available) is the fastest way to begin. An audit is a careful, comprehensive examination of your software’s accessibility vulnerabilities, resulting in a thorough risk analysis report. Auditors may also provide recommendations for code remediation or functionality improvements. These resources can be used to produce external reports such as Voluntary Product Accessibility Templates (VPATs) or to generate internal roadmaps for accessibility remediation work.

Accessibility auditors typically use both automation and manual tests to identify areas of weakness. When conducting manual accessibility testing, an auditor keeps track of the steps needed to reproduce a bug or generate an error. Auditors may assign priority based on the severity of each violation. External accessibility audits usually include a VPAT as well, which can be shared with customers or the general public to indicate the software’s level of accessibility.
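As a rough sketch of the information a single audit finding typically carries (the field names here are hypothetical, not a standard schema):

    // Hypothetical shape of one audit finding; field names are illustrative.
    interface AuditFinding {
      id: string;
      wcagCriterion: string;        // e.g. "1.1.1 Non-text Content"
      severity: 'critical' | 'serious' | 'moderate' | 'minor';
      location: string;             // page or component where the issue occurs
      stepsToReproduce: string[];   // how the auditor triggered the failure
      assistiveTechUsed?: string;   // e.g. screen reader and browser pairing
      recommendation: string;       // suggested code or design remediation
    }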

Recommendations

Technical accessibility testing is our area of expertise, and we strongly recommend starting with a comprehensive audit for teams that are new to accessibility.

Once you have a good understanding of your software’s accessibility risk level and the effort it will take to remediate, consider adopting testing tools and methodologies that best suit your needs. Fully staffed development teams should use open-source software as much as possible, manually test new features before sending them to production, and ensure developers and QA professionals have access to accessibility resources. For remote or contract teams, off-the-shelf solutions such as scanners and on-demand testing may be a better option.

No matter what implementation you choose, accessibility testing will help you to stay ahead of customer and legal demands and continue to deliver high-quality software for all.

As software engineers, we’re well equipped to deliver actionable recommendations to developers while providing detailed risk assessments to company leadership. Contact us with details about your project and we can help you figure out what works best for your organization. As a firm that specializes in accessible technology, we’ve had the privilege to help engineering teams of all sizes solve their accessibility challenges.