Improving Basecamp's accessibility - Interview with Michael Berger on how to make an existing web app more inclusive
This March, Basecamp's famed Signal vs Noise blog published a great article about their commitment to improving the inclusivity and accessibility of their core product. In "How we stopped making excuses and started improving Basecamp's accessibility", Michael Berger wrote about his pledge and personal mission to make Basecamp 3 more accessible to all. The article made the rounds because it offers great insight into how such efforts sometimes start not at the management level or from external factors - e.g. lawsuits - but, in this case, in the quality assurance department, where Michael Berger works.
Articles like his are very inspiring - both to seasoned web developers who put an emphasis on a more inclusive web, and to companies and individuals who are just getting started with the topic of web app accessibility. We should not forget that this is Basecamp - a company (or rather, a team with a philosophy) that many people listen to, that blogs regularly, and that publishes influential books.
Following my blog post "Building accessible web apps - where to start?", I reached out to Michael with a series of follow-up questions regarding his article on Signal vs Noise. He was happy to answer them, and in the following, I publish, with his permission, his valuable and insightful answers. Thanks again, Michael, for giving us such interesting insights into your work!
You wrote that Basecamp's accessibility was a topic from time to time, and there were even demonstrations of screen-reader usage. But these occasions had not led to an accessibility agenda. What did you do differently in 2017? Was the decisive point to mark accessibility pain points as "bugs"?
Knowing that we could be delivering a better Basecamp experience for our customers with disabilities - but weren't - created a sort of tension that kept building over time. It was really eating at me! On top of that, we would have these lulls between active QA (Quality Assurance) testing on projects, which created some space for taking on new endeavors of this sort.
The initial work I did was around becoming familiar with the tools used to evaluate and test for accessibility, which gave me the understanding and language that I would need to communicate what I was finding to the rest of our team. At first I created bug reports for everything I would find. But that's not really how we schedule work at Basecamp. Changing our approach to instead raise accessibility issues as bugs during the QA phase of active feature development turned out to be the key to making incremental progress.
Have you consulted external experts during the changes?
Absolutely! The accessibility community is filled with great folks who jump at the chance to help each other out. To build on the many resources that you linked to in your article, there's an accessibility Slack that I've found to be incredibly helpful. If I'm questioning how to interpret the WCAG guidance about how a given element should be coded, I'll first dig through the archive of transcripts, and if I'm still not sure I'll pose my question in the #aria-patterns channel. It's wonderful having a central place to discuss these things.
The Chicago Accessibility Meetup group (there are similar groups in other cities around the world) has also been an important source of support. I’ve learned a lot from the presentations and connected with some great people working in the field. Marcy Sutton (of axe-core/Deque/a11y wins) gave a talk on automated testing last year, right when I was starting to look into automated accessibility testing.
The annual CSUN Assistive Technology Conference is a great event to consider going to, especially when you’re just getting started in the field. I’ve attended the past two years and found it so worthwhile. Being in the presence of people with a variety of disabilities reminds you why you’re doing this work. I’ve gotten a lot out of the sessions, from case studies that directly apply to my work, to topics like "Should Art Be Accessible or Inclusive?” that can expose you to a completely different side of accessibility. And on top of all that, it’s another great chance to connect with people in the field. Like any conference it takes some planning to work around the low occupancy limits for the sessions and other implementation quirks, but overall I’d recommend it.
Which UI element was the biggest challenge to making accessible?
Timely question! The recently released Hill Charts feature took a bunch of work to make accessible. Hill Charts are a way to represent the progress of tasks visually on a bell curve. In addition to the inherently visual means of representing the data, data points on the chart were designed to be manipulated exclusively using a mouse (or a finger on a touch device).
When starting to look at cases like this I first turn to the W3C (World Wide Web Consortium) for example patterns that seem similar to what we're designing. You can play around with the sample elements to get a feel for the expected interactions, the screen reader output, and the markup required to get there.
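One of the W3C's published patterns that maps well onto a draggable chart point is the ARIA slider. As a hedged sketch (the names below are illustrative, not Basecamp's actual code), the keyboard half of that pattern boils down to translating arrow keys into a new position, which is then mirrored into aria-valuenow:

```typescript
// Illustrative sketch of the WAI-ARIA slider pattern's keyboard handling,
// as it might apply to a draggable point on a chart. Not Basecamp's code.

// Compute the point's new position (0-100) for a given key press.
function moveChartPoint(position: number, key: string): number {
  const step = 5; // arbitrary step size for this sketch
  switch (key) {
    case "ArrowRight":
    case "ArrowUp":
      return Math.min(100, position + step);
    case "ArrowLeft":
    case "ArrowDown":
      return Math.max(0, position - step);
    case "Home":
      return 0;
    case "End":
      return 100;
    default:
      return position;
  }
}

// The element holding the point would carry role="slider" plus
// aria-valuemin="0", aria-valuemax="100", and aria-valuenow updated
// with the result of moveChartPoint on every keydown.
```

This is what the W3C's sample slider elements demonstrate interactively: the same movement a mouse drag performs, made reachable from the keyboard and announced by assistive technology.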
Was it hard to get some form of "accessibility budget" internally? Even though you were involved in advance and developers could ask you questions, extra costs certainly arose - as is often the case when a system is subsequently made inclusive?
This is sort of an ongoing conversation at Basecamp. The way that we first approached accessibility – incorporating testing and remediation into our QA process during new feature development – provided a useful constraint. We just need to make a single feature accessible at any given time.
Budgeting time to make pre-existing interactions and flows accessible is a bit more of a challenge. The majority of this work applies to some complex widgets, like the autocomplete and picker elements for Pings (direct messages), @mentions, to-do assignments, the jump menu, and our date pickers. Our modals also need some work to ensure they lock focus and prevent a keyboard user from tabbing into the page behind the modal. Another big one is sprinkling around some aria-live regions so people using a screen reader are informed of new unread items and of flash messages that communicate any errors (or successes) when submitting forms.
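For flash messages, the usual aria-live technique is a visually hidden container whose text content is swapped whenever there is something to announce. A minimal sketch, with helper names that are mine rather than Basecamp's:

```typescript
// Sketch of an aria-live announcer for flash messages. Illustrative only.

type Politeness = "polite" | "assertive";

// Attributes for a visually hidden live region, added once at page load.
function liveRegionAttributes(politeness: Politeness): Record<string, string> {
  return {
    "aria-live": politeness, // "polite" waits for a pause; "assertive" interrupts
    "aria-atomic": "true",   // announce the region's full text, not just the diff
    role: politeness === "assertive" ? "alert" : "status",
  };
}

// Minimal interface so the sketch doesn't depend on DOM typings.
interface Region { textContent: string | null }

// Screen readers announce whatever text lands in the live region.
function announce(region: Region, message: string): void {
  // Real implementations often clear the region and re-set the text after a
  // short timeout so that identical consecutive messages are re-announced.
  region.textContent = message;
}
```

The "polite" setting fits unread-item notifications, while form errors usually warrant "assertive" so they interrupt whatever the screen reader is currently reading.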
The recently launched Hill Charts feature required about an additional week of work to be made accessible. Reflecting on this project prompted us to start getting real about addressing the backlog: What will it take to get to the point where we can officially declare Basecamp 3 “accessible”? Getting to 100% accessibility may not be a realistic target, but I think we can get at least 90% of the way there with a realistic chunk of work.
Did your work and mission to make Basecamp more accessible also affect the development of its iOS, Android, Mac and Windows apps?
Absolutely. Our native apps are composed primarily of web views with a variety of native adornments, so much of the work we do to make Basecamp accessible for the desktop browser automatically applies to the other platforms. Beyond that, we've taken a similar approach to improving accessibility of the mobile apps as we have for the desktop, enhancing accessibility with each new feature as it's developed.
Lately, we've started digging into some of the special hooks included in Apple's UIKit. For example, we can detect if VoiceOver is turned on and tweak the app interface so that the input to filter your list of projects is automatically presented, whereas otherwise it's shown upon swiping down on the list (you'll spot a similar behavior in Apple's native Mail app if you first turn on VoiceOver, then open the app). I frequently turn to Apple's and Google's first-party apps for ideas and inspiration. Both companies do an excellent job designing with accessibility in mind.
Apart from experiencing Basecamp with a screen reader, did you or do you use other accessibility testing tools (like aXe core, tenon.io, WAVE, browser extensions etc.)?
On the desktop, the tool I turn to most often is the aXe Chrome extension. It’s great for generating a report of the types of accessibility failures that can be detected programmatically, like color contrast, semantic heading levels, proper form labelling, etc. We recently started looking into aXe core for automated reporting of these issues during development but we haven’t flipped the switch yet.
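axe-core also exposes a JavaScript API, `axe.run()`, which resolves to a results object containing a `violations` array; that is what makes the "automated reporting during development" idea practical to wire into a test suite or CI script. A hedged sketch - the `summarize` helper is my illustration, and only the call shown in the trailing comment is axe-core's real API:

```typescript
// Simplified shape of the entries in axe-core's results.violations array.
interface Violation {
  id: string;            // rule id, e.g. "color-contrast"
  impact: string | null; // "minor" | "moderate" | "serious" | "critical"
  nodes: unknown[];      // the offending elements
}

// Turn violations into one readable line each, e.g. for a CI log.
function summarize(violations: Violation[]): string[] {
  return violations.map(
    (v) => `${v.impact ?? "unknown"}: ${v.id} (${v.nodes.length} nodes)`
  );
}

// In a page or jsdom environment with axe-core loaded, usage would be:
//   const results = await axe.run(document);
//   summarize(results.violations).forEach((line) => console.log(line));
```

A CI script could then fail the build whenever the summary is non-empty, which is one common way teams "flip the switch" on automated accessibility checks.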
On Android, Google's Accessibility Scanner and TalkBack are the primary tools that I test with. While not specifically related to accessibility testing, I use Vysor to capture screenshots and screen recordings from my phone directly on my Mac.
On iOS, Apple’s Accessibility Inspector built into Xcode is useful for checking to make sure that all elements have labels and touch targets meet the minimum size recommendations. From there, I’ve found it’s mostly a matter of testing manually with VoiceOver to evaluate everything else.
It's good to remember that any automated tool only highlights a subset of accessibility issues. After running any automated testing tool, it's important to test manually using screen readers (on both Mac and Windows), and to check that every interaction functions with the use of a keyboard alone (without using a mouse).
Basecamp is known for putting many of its solutions and snippets under open source license on GitHub (for example https://trix-editor.org/). Is there a chance that you will do the same with your accessibility solutions?
This is definitely something I’d like to see happen. I’m not yet sure what form these tools would take but it would be wonderful if we could use our position to help others, especially other apps built on Rails, develop with accessibility in mind.