
Remote accessibility persona testing

Categories: Access needs, Accessibility, Assistive technology, Testing

[Image description: Team members are in a video conference session. One team member shares their screen with changed colours and is zoomed into the page.]

We have found there is not enough awareness about the different experiences and diverse access needs our users have. We want to highlight barriers that people encounter when using digital services. That’s why we, the Accessibility Team at the Government Digital Service (GDS), have created accessibility personas.

When we wrote about how we made our accessibility empathy lab virtual, we mentioned that we have been successfully running remote accessibility persona testing with service teams. The sessions have been so successful that we are running them more often and with more teams.

We run the sessions by asking members of the team to select a persona profile and install it on their laptops before the remote test begins. Each profile then runs different simulations and assistive technologies. If the profiles are set up beforehand, you can shorten the instructions to just the steps marked "once per device", which makes setting up the shared profiles much quicker.

The team interacts with the prototype or service and completes tasks. These tasks should ideally be set up by the team’s user researcher. They then make notes of any issues that they encounter. At the end each member of the team presents their findings by sharing their screen.

We recommend you involve an accessibility specialist when running remote accessibility persona testing. They will be able to help interpret the issues the team identifies, because some apparent issues arise from unfamiliarity with the tools or their limitations rather than from the service itself.

We found conducting persona testing has a lot of benefits:

  • it’s more engaging than an audit report
  • it’s always best to learn by experiencing something yourself
  • it involves the whole team
  • it helps you test for accessibility issues early on
  • it works well even when team members have no accessibility knowledge
  • it also helps find usability issues, because it helps people look at things in a different way
  • it raises awareness and understanding
  • it helps to think from the perspective of a persona even after the session ends

Conducting persona testing remotely has even more benefits:

  • you can accommodate many more people than fit into the lab
  • collecting findings is more efficient because all members can edit the same document
  • sharing findings is much more engaging as each member of the team gets the chance to share their screen to show the issues and talk about what they learnt
  • it is very easy to reuse the persona profiles any time once everyone has installed them

There are also some disadvantages:

  • the remote setup is browser-based, so you cannot test anything outside of the browser
  • you cannot use more common and reliable versions of assistive technology if they aren’t browser-based; for example, the remote setup uses the ChromeVox extension as a screen reader and not something more common like NVDA
  • the sight-impaired persona, Claudia, usually uses a screen magnifier, but in the browser-based setup she has to zoom into the page; this is also valuable but not the same experience

When using accessibility persona testing, you should be aware that it is not a substitute for testing with real users. You can still conduct user research with people with access needs while remote. A simulation is never a true representation of an impairment. Accessibility persona testing is also not appropriate for testing the compatibility of assistive technologies, browsers and services.

All government departments have users who have access needs. Running accessibility persona testing is one tool in our accessibility toolbox you can adopt to help understand your users and improve your services. It doesn’t cost anything - you only need to be able to install Chrome extensions. There is also no need to visit GDS’s accessibility empathy lab, as it can be done anywhere.

As remote working becomes more commonplace, we're keen to learn more about these challenges, and about how user researchers can best respond to them. So we would like to hear from you: if you have any tips on how to run research with people with access needs remotely, let us know in the comments below.

Comments
  1. Comment by Mike Hughes posted on

    I am feeding back on the blog on this page.

    I continue to be surprised DWP don't understand that the majority of their users with sight issues do not use assistive tech, so I want to first highlight how unhelpful a page and an approach like this is in conveying the reality of low vision.

    To say publicly "the sight-impaired persona, Claudia, usually uses a screen magnifier, but in the browser-based setup she has to zoom into the page; this is also valuable but not the same experience" is to miss that the latter experience is not just "valuable". It is the default experience for most people with low vision given that most simply do not use or even know of assistive tech.

    The advice with the profiles is to do user testing with people who use assistive tech. This is poor practice and ultimately wholly backwards. By all means do that testing but those users represent a small proportion of the sight-impaired population and show that DWP is once again taking a tick box approach to disability and accessibility i.e. only test with people who have a visible representation of their specific field of impairments. So the fact that 90% of people with low vision will not be using assistive tech. is not as important as testing with the 10% who do. This reinforces stereotypes about low vision and could not be a less helpful approach.

    So, use of assistive tech. becomes another means of identifying a person with a valid diagnosis of sight loss, sight-impairment or low vision along with use of white cane, a guide dog, dark glasses or registration.

    It's hard to know where to start with this in terms of stereotyping but suffice to say that the majority of people with sight loss, sight-impairment or low vision have literally none of the above list of items and implicitly suggesting to DWP staff that these would be good indicators (and that is in effect what you're doing here in using assistive tech. as a cipher for low vision) is an approach which perpetuates the view of DWP front line staff that people claiming to have a VI but not having any of the above are basically "making it up". If you genuinely want to change perceptions and let people understand those with lived experience then you need to start talking to those with lived experience and recognise that if you're talking to someone with assistive tech, a guide dog etc. then you're likely already displaying a bias in terms of your perception of what vision loss looks like.

    • Replies to Mike Hughes

      Comment by Anika Henke posted on

      Thank you for your comment. I fully agree it's important to involve users with a wide range of needs in testing.

      Just to clarify, this blog post was written and published by GDS (Government Digital Service), which is part of the Cabinet Office, and doesn’t necessarily represent the views of DWP (Department for Work and Pensions).

      We advise doing user testing with people with all kinds of access needs, not just users of assistive tech. Neither our guidance nor this blog post says to focus on assistive tech during user research. If it did, I agree that wouldn’t be good.

      I fully agree that we need to talk to people with lived experience. That’s exactly what we advise people to do in our guidance.

      The latest low vision survey from WebAIM shows that more than 70% of respondents use magnification software.
      That data is not perfect and probably skewed towards people who are more technically proficient. But there is unfortunately not a lot of data and research around this. If you have more or better data, I’d be grateful for a link to it.

      Through using a persona, we don’t say that people with low vision always use magnification. But Claudia happens to use it.

      Three of the seven accessibility personas have a range of vision impairments. Claudia is sight impaired and uses screen magnification. Ashleigh is severely sight impaired and uses a screen reader. Ron has cataracts and doesn’t use any assistive tech at all.
      In that respect Ron is the kind of low vision user you were looking for.

      Those three personas obviously don’t cover all the different types of low vision users. But it would be impractical to have too many personas.

      A low number of data points will never represent reality. But in order for the personas to be effective, their number needs to be manageable. The aim was to have a wide spread of different impairments. The aim was not to stop people from thinking about other types of people with different impairments and behaviours.

      I understand that focusing on specific needs and only on 7 instead of, say, 100 people might skew someone’s perception. But that is in the nature of using personas and why we make clear this is only one additional way of testing and it is not a substitute for testing with real users.

      If what you are assuming is what people take away from these exercises, that would not be good indeed. Whenever we do these sessions we remind people that they still need to test with real users and that just because something works for one persona doesn’t mean it works for everyone.

  2. Comment by Mike Hughes posted on

    Thank you for your response. I don't have more or better data, but it should be obvious your data is skewed. Indeed the WebAIM research explicitly says "Those with more significant vision loss consistently reported higher levels of internet proficiency." What does proficiency mean though? Does it mean technically proficient, or does it mean "I'm not proficient because it's not accessible to me"?

    Your fundamental issue here is that your best data is from people who manage to make claims. DWP's own stats show take-up amongst those with sight loss is very low indeed, and so that's telling you instantly that your approach to accessibility is failing. Only those with assistive tech get to the point where they can make or follow a claim. Personas etc. do not address this fundamental issue.

    • Replies to Mike Hughes

      Comment by Anika Henke posted on

      We don’t have any insight into what data DWP has or how they are conducting accessibility testing. We work for Cabinet Office and not DWP. We suggest you contact them about your concerns.

      Our advice is to test with a wide range of users with access needs. People should use many more techniques than the one mentioned here.