WWCode Talks Tech #4: The Must-Haves for an Accessible Application

Written by Nivedita Aggarwal

WWCode Talks Tech

Women Who Code Talks Tech 4     |     Spotify | iTunes | Google | YouTube | Podcast Page
Nivedita Aggarwal, Senior Engineering Manager at Google, gives a talk entitled “The Must-Haves for an Accessible Application.” She discusses the different needs for modified accessibility, the proper way to measure this as a metric, and steps for implementing greater accessibility. 

Accessibility refers to the design of products, devices, services, or environments so that they are usable by people with disabilities. Who are our accessibility users? They could be people with different abilities, the elderly, or kids who may not be able to perform all activities. It could be you and me. We're talking about accessibility because, globally, over one billion people have disabilities and six billion people are temporarily able-bodied, meaning someday they will have a disability. If products are inaccessible, billions of people are put into potentially stressful situations and there is no inclusivity. They need to be able to achieve the same goals as everyone else, although they may use different mechanisms to achieve those goals than the commonly understood approach.

In this accessibility talk, we will focus on four accessibility personas: people who have challenges with vision, hearing, mobility, and cognitive or learning disabilities. This is not an exhaustive list, but it's a good start. A disability can be permanent, such as complete blindness, or temporary, like vision loss during a migraine headache. Trying to read a screen in bright sunlight glare is a great example of experiencing a disability in a limited situation. There are a number of assistive technologies, such as large fonts and magnification, pinch-to-zoom or browser zoom, captions, voice input, the basic keyboard, and even autocomplete. This is in no way an exhaustive list.

There are different ways of measuring how accessible a product is, and there are standards and guidelines defined for us to refer to. In the US, federally funded institutions are required to comply with the civil rights legislation of the Americans with Disabilities Act and Section 508 of the Rehabilitation Act. Section 508 is US federal civil rights legislation that states your electronic product must meet a minimum set of accessibility standards. Federally funded agencies or programs might evaluate products based on accessibility so they're not discriminating against people. The standard most developers rely on is the Web Content Accessibility Guidelines (WCAG), developed by the World Wide Web Consortium, the same international group that defines the standards of the web. A VPAT, or Voluntary Product Accessibility Template, is a document that allows a company or organization to provide a comprehensive analysis of a product's conformance to the accessibility standards set by Section 508 of the Rehabilitation Act.

We will look at how we can make sure we build things that can be operated with the keyboard. This is important for users with motor impairments, but it also keeps your UI in good shape and ensures that everyone using your application has a better experience. Semantics is where we make sure we express our UI in a robust way that works with a variety of assistive technologies. When we have built a web application, we test it by navigating through the elements of the page using keys on the keyboard. The order in which focus proceeds forward or backward through interactive elements via the Tab key is called the tab order. Designing a page with a logical tab order is important.
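As a minimal sketch of the idea (element names here are illustrative, not from the talk): native interactive elements are keyboard-focusable by default, and the tab order follows DOM order, so a logical source order gives a logical tab order.

```html
<!-- Tab order follows DOM order for native interactive elements. -->
<form>
  <label for="name">Name</label>
  <input id="name" type="text">          <!-- 1st tab stop -->

  <label for="email">Email</label>
  <input id="email" type="email">        <!-- 2nd tab stop -->

  <button type="submit">Submit</button>  <!-- 3rd tab stop -->
</form>
```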

We must exercise extra caution when changing the visual position of elements on screen using CSS, because this can cause the tab order to jump around and confuse users who rely only on the keyboard. Ensure that the reading and navigation order, as determined by the code order, is logical and intuitive. Managing focus on a page as you navigate is really important. If you're building a custom select element, you will have to ensure that focus at the component level also works, so that users who rely primarily on the keyboard can still interact with the component or control. For tabindex to work in a group of radio buttons or a combo list, you can set tabindex to -1 for the children and 0 for the currently selected item in the list. The component relies on keyboard event listeners to determine which key the user has pressed, and at that point we can set tabindex to -1 on the previously focused item and 0 on the newly focused one.
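The roving-tabindex pattern described above can be sketched like this for a hypothetical custom radio group (the markup and handler are illustrative, not from the talk):

```html
<!-- Roving tabindex: only the selected item has tabindex="0"; the rest
     are focusable only programmatically via tabindex="-1". -->
<div role="radiogroup" aria-label="Flavor">
  <div role="radio" tabindex="0"  aria-checked="true">Vanilla</div>
  <div role="radio" tabindex="-1" aria-checked="false">Chocolate</div>
  <div role="radio" tabindex="-1" aria-checked="false">Strawberry</div>
</div>
<script>
  const group = document.querySelector('[role="radiogroup"]');
  group.addEventListener('keydown', (e) => {
    if (e.key !== 'ArrowDown' && e.key !== 'ArrowUp') return;
    const items = [...group.querySelectorAll('[role="radio"]')];
    const current = items.indexOf(document.activeElement);
    const step = e.key === 'ArrowDown' ? 1 : items.length - 1;
    const next = (current + step) % items.length;
    items[current].tabIndex = -1;                        // leave the tab order
    items[current].setAttribute('aria-checked', 'false');
    items[next].tabIndex = 0;                            // become the tab stop
    items[next].setAttribute('aria-checked', 'true');
    items[next].focus();
    e.preventDefault();
  });
</script>
```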

You can use a simple anchor tag with a skip-link class to navigate to the main content. This is essential when you think of a user with motor impairments who might be using a switch device. You can find information on what kind of keyboard interaction is expected from a particular component in the WAI-ARIA Authoring Practices design patterns and widgets guide. A responsive drawer panel is a very good example of off-screen content; it is a very common UI pattern, and when it comes to accessibility it can pose an interesting challenge. In that case, we need to ensure that tab focus doesn't move to the off-screen content. To do that, set visibility to hidden and display to none, and when the off-screen content is about to come on screen, change visibility to visible and display to block.
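Both techniques can be sketched together (class names and the id are assumptions for illustration): a skip link as the first focusable element, and drawer styles that keep closed off-screen content out of the tab order.

```html
<!-- Skip link: first focusable element on the page. -->
<a class="skip-link" href="#main-content">Skip to main content</a>

<style>
  /* Closed drawer: display: none removes it from the tab order
     and the accessibility tree while it is off screen. */
  .drawer        { display: none;  visibility: hidden; }
  .drawer.open   { display: block; visibility: visible; }
</style>

<nav class="drawer">Drawer navigation links</nav>
<main id="main-content">Main content</main>
```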

Sometimes a keyboard trap is desirable, such as when a modal appears. Let's say you are working and suddenly a calendar reminder pops up. Tab should move through the modal dialog but not escape back to the main content behind it. Another type of assistive technology is the screen reader, a program that enables visually impaired people to use computers by reading screen text aloud in a generated voice. The user can control what is read by moving the cursor to the relevant area with the keyboard. A screen reader actually creates a user interface for the user based on programmatically expressed semantics.
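A minimal focus-trap sketch for such a modal might look like this (the dialog contents and ids are hypothetical): Tab wraps within the dialog instead of escaping to the page behind it.

```html
<div role="dialog" aria-modal="true" aria-labelledby="dlg-title" id="dlg">
  <h2 id="dlg-title">Calendar reminder</h2>
  <button>Snooze</button>
  <button>Dismiss</button>
</div>
<script>
  const dialog = document.getElementById('dlg');
  dialog.addEventListener('keydown', (e) => {
    if (e.key !== 'Tab') return;
    const focusable = dialog.querySelectorAll('button');
    const first = focusable[0];
    const last = focusable[focusable.length - 1];
    if (e.shiftKey && document.activeElement === first) {
      last.focus();            // wrap backward from the first element
      e.preventDefault();
    } else if (!e.shiftKey && document.activeElement === last) {
      first.focus();           // wrap forward from the last element
      e.preventDefault();
    }
  });
</script>
```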

Imagine if you had to build the same UI for screen reader users only. You would not have to create any visual UI at all, just provide enough information for the screen reader to use. So how would you express the form interface for this particular form? We would be creating an API describing the page structure, somewhat like the DOM API but with far fewer nodes. This enables the screen reader to jump between the high-level sections and then get information about each form element's affordances to know how to fill them in. For the user, the screen reader provides the affordances based on the role alone, without caring about the visual style.

The browser takes the DOM tree and transforms it into a form that is useful for assistive technology: the accessibility tree. Assistive technologies can move from main, to form, to different elements directly. The accessibility tree is what most assistive technologies interact with. Whenever possible, use semantic HTML and native form controls. The native elements give you keyboard support, focus handling, and built-in semantics for free. However, there are situations where we cannot use native elements, and that's where ARIA comes to the rescue.
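For example (the label text is illustrative), a native checkbox gets everything for free, which is why it should be the default choice:

```html
<!-- A native checkbox: keyboard support (Space toggles), focus handling,
     and checkbox semantics in the accessibility tree come for free.
     The label is announced by screen readers and enlarges the click target. -->
<label>
  <input type="checkbox" name="agree" checked>
  I agree to the terms and conditions
</label>
```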

The Web Accessibility Initiative's Accessible Rich Internet Applications specification is good for bridging areas where native HTML can't be used to address accessibility issues. WAI-ARIA, or simply ARIA, works by allowing you to specify attributes on elements that modify the way the element is translated into the accessibility tree. If you create a plain checkbox, users of assistive technologies like VoiceOver will be able to operate it. The screen reader will announce it as a checkbox, tell you that it has a label, and whether its status is checked or not. But what happens if for some reason we decide we need to reimplement this basic checkbox differently, in a div tag or a list item? We know that it should be focusable and should handle the same keyboard interactions as a native checkbox, but what happens when we then start using it with a screen reader? The screen reader gives us no indication that the element is meant to be a checkbox. Sighted users can see the visual cues to understand that it is a checkbox, but there is no announcement for screen reader users, because a bare div has no semantics and assistive technologies ignore it.

Using ARIA allows us to tell the screen reader that there is some extra bit of information here. ARIA attributes always need to have explicit values. Adding the role and aria-checked attributes causes the node in the accessibility tree to have the desired role and state without changing anything else about the node's appearance or behavior. To reiterate, the only thing ARIA modifies is the accessibility tree; it does not change the behavior of the element or its appearance, it will not make the element focusable, nor will it add any keyboard listeners to the element. ARIA can add semantics to an element when no native semantics exist. Often ARIA lets us create widget-type elements that wouldn't be possible with plain HTML. For example, ARIA can add extra label and description text that is only exposed to assistive technology APIs.
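Putting that together, a sketch of the re-implemented checkbox from above might look like this (the id and label text are assumptions): role and aria-checked populate the accessibility tree, while tabindex and the key handler must be added by hand, since ARIA changes semantics only, never behavior.

```html
<div role="checkbox" aria-checked="false" tabindex="0" id="agree">
  I agree to the terms and conditions
</div>
<script>
  const box = document.getElementById('agree');
  function toggle() {
    const checked = box.getAttribute('aria-checked') === 'true';
    box.setAttribute('aria-checked', String(!checked));  // keep state in the accessibility tree
  }
  box.addEventListener('click', toggle);
  box.addEventListener('keydown', (e) => {
    if (e.key === ' ') {       // Space toggles, matching a native checkbox
      toggle();
      e.preventDefault();
    }
  });
</script>
```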

For a taxonomy of the possible values of the role attribute and the associated ARIA attributes that may be used in conjunction with each role, you can refer to the ARIA spec. That is the best source of definitive information about how the ARIA roles and attributes work together and how they can be used in a way that is supported by browsers and assistive technologies. An interesting capability from the ARIA spec is that ARIA can make a part of the page live, that is, inform assistive technology right away when it changes. In the case of an alert, the screen reader might choose to speak to the user immediately, interrupting whatever it was doing. aria-live has three allowable values: polite, assertive, and off. aria-live="polite" tells the assistive technology to alert the user to the change once it has finished whatever it is currently doing.
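A minimal live-region sketch (the id and message text are illustrative): updating the region's text content is what triggers the announcement.

```html
<!-- A polite live region: screen readers announce the new text once
     the user is idle; aria-live="assertive" would interrupt instead. -->
<div aria-live="polite" id="status"></div>
<script>
  document.getElementById('status').textContent =
    'Your changes have been saved.';
</script>
```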

VoiceOver has a tool called the Web Rotor that allows users to navigate through and choose from the headings on a page. Headings make it easier to navigate the page: H1 has more prominence on the page than H2, and H2 has more prominence than H3. On a Mac, we use Command-F5 to turn VoiceOver on, and Command-F5 again to turn it off. This is how someone using a screen reader navigates through the different headings and form elements that you see on the screen. The visual representation will be very different from the way the screen reader presents the headings to them.
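The heading hierarchy the rotor navigates can be sketched like this (the page content is hypothetical); the nesting of levels, not visual size, is what screen readers use:

```html
<!-- A logical heading hierarchy: rotor-style tools build their
     navigation list from these levels. -->
<h1>Store</h1>
  <h2>Laptops</h2>
    <h3>13-inch models</h3>
    <h3>15-inch models</h3>
  <h2>Accessories</h2>
```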

HTML5 introduced new elements that help define the semantic structure of the page. These elements provide structural clues without forcing any built-in styling. Semantic structural elements replace multiple repetitive div blocks and provide a clearer, more descriptive way to define the page structure for both authors and readers. The article element is for self-contained pieces of content like a blog entry or news article. The section element is a completely generic section of a document or application; a section on its own may not make much sense, so we can include a heading inside it as well. The aside element represents content that is tangentially related to the content around it.
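A sketch of such a page skeleton (the content placeholders are illustrative), using the elements just described in place of anonymous divs:

```html
<header>Site banner and navigation</header>
<main>
  <article>                        <!-- self-contained, e.g. a blog post -->
    <h1>Blog post title</h1>
    <section>                      <!-- generic section, given a heading -->
      <h2>First section</h2>
      <p>Section content.</p>
    </section>
    <aside>Tangentially related links</aside>
  </article>
</main>
<footer>Copyright and contact info</footer>
```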

How do you approach testing for accessibility? Use Chrome Developer Tools for testing. Walk through the different use cases or scenarios the user will go through and check whether they can complete the required activities under the constraints listed here, without sound, color, and other cues.

To address hearing-related deficiencies, ask yourself these questions: Is any information conveyed only by sound? Are any positive or negative cues not conveyed by visual or haptic feedback? Does any streaming audio need transcripts, or any video need subtitles?

For vision-related concerns, check whether any information is conveyed only by color. Are red and green the sole indicators for a status or an action? Will my design handle a variety of font sizes and bold fonts? Think about the layout here.

For mobility-related deficiencies, consider this: since switch support depends on VoiceOver, did I do a good and complete job there? Does traversing elements on the screen work correctly for all screens? Are my touch targets large enough to handle big fingers or imprecise presses?

And for cognitive or learning disabilities, you must check: Is the application navigation straightforward? Are the screens overcrowded with data and actions, or are they clear, prioritized, and streamlined?