I initiated an ad-hoc team and swarming events to rapidly collaborate on designs and discover the most important problems to solve first. CAT is complex and urgently needed serious UX design attention.
CAT is an internal Fidelity tool that agile squads use every day to ensure web-based accessibility compliance before product releases.
The problems proposed by the accessibility team kept getting more complex and more ambiguous at the same time. We also needed to add a set of success measurements aligned to a new company-wide program, and the tool needed to become easier to use out of the box.
I stepped up to the challenge and guided a team of designers, content strategists, and engineers through discovery, ideation, concept flaring, and requirement prioritization during a one-week design sprint.
The team flared out creatively in all directions across disciplines. Together we honed a set of concept visualizations and storyboards. This small team aligned around the most important problems to solve for the product to be successful and usable.
Reducing complexity would be absolutely necessary to shorten the learning curve for new users.
The accessibility success criteria must drive the overall structure and user experience, and educate users as they work.
Engineers are our first-priority users. Their success and efficient workflow are paramount.
I continued iterating with increasingly detailed designs and solicited feedback from partners.
I delivered detailed interaction designs and a Figma prototype for user testing with our internal users.
Software engineers are the primary users, and they are required to validate their code with this automated tool. We needed to make it easy for new users to get up and running without any training.
Automated feedback is prioritized into manageable groups
Testing and fixing all web pages should become faster and easier with a new scoring system that prioritizes the work and focuses on the most important fixes first.
CAT is aligned with the W3C WCAG guidelines.
Details on the scoring system UI
The plan was that all products must meet FAR Level 1 requirements before release in 2021.
In subsequent years, each product team would need to comply with an increasing number of accessibility requirements.
The design challenge was to sort out how Levels 1, 2, and 3 are cumulative and complement each other.
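To make the cumulative-levels idea concrete, here is a minimal sketch of how a prioritized score like the one described might be computed. The weights, issue shape, and function name are my own illustrative assumptions, not CAT's actual scoring model.

```python
# Hypothetical weights: open issues at earlier (required-first) levels
# cost more, so fixing Level 1 issues moves the score the most.
LEVEL_WEIGHTS = {1: 3.0, 2: 2.0, 3: 1.0}

def score(issues):
    """Return a 0-100 score; unfixed issues subtract weight by level."""
    max_penalty = sum(LEVEL_WEIGHTS[i["level"]] for i in issues)
    if max_penalty == 0:
        return 100.0  # no known issues
    open_penalty = sum(LEVEL_WEIGHTS[i["level"]] for i in issues if not i["fixed"])
    return round(100.0 * (1 - open_penalty / max_penalty), 1)

issues = [
    {"id": "img-alt", "level": 1, "fixed": False},
    {"id": "contrast", "level": 2, "fixed": True},
]
```

Re-running the automated tests after a fix simply recomputes the score, which is what lets engineers watch it increase as they work.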
Details on fixing issues, and the rationale behind them, are baked into each one.
As issues are fixed on the server, engineers re-run the automated tests and see their score increase.
Ideally this creates a positive emotional response for a tedious job.
Thoughts on MVP2
We started drawing the line on complexity here for the MVP1 build. Ideally, we would celebrate each success as an emotional bonus.
We also discovered the need for a separate content effort. The current nomenclature for issues is not clear or obvious to everyone.
Manual testing is geared more toward designers and quality assurance folks