Hacker News

One thing that frustrates me significantly about accessibility and assistive technology (at least on the web) is the lack of consistency in implementing standards. It's much more insidious than most other standards inconsistencies, because rather than your page looking slightly off in one browser, you can have your site reading perfectly on two of the major screen readers while the third refuses to read it.

It's a massive waste of resources to find the right combination of tech that ends up magically working for all three. The running joke at my workplace is "fix JAWS, break VoiceOver; fix VoiceOver, break NVDA" etc. And yet it's not the kind of thing you want to let go because not many people will have the chance to switch to another device/reader to see if the site works there. Nor should they have to.



Holy tamale, you're not even remotely kidding about this.

I spent several weeks working on a project that had to pass an accessibility audit. I came in partway through and spent weeks getting things to pass (their scanning tool also produced false positives). Finally we were 'done', but then we learned that just meant moving to the next step of the audit, where they'd use JAWS on random app screens and 'just see how things work'.

They worked horribly. But... I didn't have JAWS to test against. I used the company's one license of JAWS on a remote desktop (everything had to be remote; we were not allowed to pull the code down locally at all). JAWS installed, then died; it wouldn't work properly over remote desktop. I used NVDA instead, and that worked (awkwardly, but it worked). I spent days trying to 'patch stuff', and it made everything worse under JAWS.

The 'solution' was "make changes, then email someone at company X to run it through JAWS, and they'll email back what it said, then you can fix things based on that". That was really their solution. Project was put on hold shortly after that, and it was sunset a few months ago (after ... literally millions of dollars poured in over several years).

One of the big lessons from that - build accessibility in from the start - you can't arbitrarily graft this on years later and expect anything to actually 'work'.


This is an absolutely ridiculous way to run an audit. No wonder accessibility on the web is a mess.


To be clear, this was less a problem with the audit company and more with the company I was contracting for (who were submitting their system for audit).

However, the audit company only gave us half of what was needed, in a sense. For the first half, we'd get back a list of "problems" (which, again, were problems under JAWS, though some were problematic under NVDA as well). The results had links to "best practices", so we could head off some other issues ahead of time.

After that was passed, though, they'd just use the system at random and determine what was 'good' or 'acceptable' or not. It felt like the goalposts were constantly being moved. I kept getting asked "when will this be 'done'?" and I kept saying "I don't know what done is, and we only get to learn part of the definition of 'done' every few weeks, so effectively there is no answer. It's done when it's done." Which, of course, no one wanted to hear.

EDIT: and to be clear on your point, it was an absolutely ridiculous way to develop web software. Build a PHP app, but you're not allowed to use Composer or bring in any external/third-party code whatsoever (the network was completely blocked from letting us pull down code at all). They'd gone out of their way to make sure it was not possible to use any external code, so things like security hashing were all hand-rolled.


In my experience, TalkBack works great, especially in concert with Chromium's ARIA implementation. The menace has been VoiceOver, which is inconsistent: in web views it segments elements incorrectly and regularly ignores ARIA labels. I'm not sure how well JAWS works, NVDA seems to work well enough, and I've admittedly not tried Orca with a web browser.

Though honestly, I think we should give up on trying to present exactly the same interface structure to people who are completely blind. An application design that works well for sighted people can easily seem incredibly convoluted when described by a screen reader. I haven't had time to build out a framework for this, but I don't think it would be hard to start delivering something better. The idea is that you would provide a structural description of interactions and content, and the framework would present it through whatever means is available on the platform. That way, in combination with platform detection, it would always present something that functions.
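A minimal sketch of that idea, with all names invented for illustration: describe the interaction once as plain data, then let a platform-specific presenter decide how to surface it.

```javascript
// Hypothetical sketch (all names invented): a structural description
// of one interaction, independent of any visual layout.
const dialog = {
  role: "confirmation",
  prompt: "Delete 3 files?",
  actions: [
    { id: "delete", label: "Delete", destructive: true },
    { id: "cancel", label: "Cancel" },
  ],
};

// A presenter aimed at screen-reader users: linear and explicit,
// with no reliance on spatial arrangement.
function presentSpoken(desc) {
  const actions = desc.actions
    .map((a) => a.label + (a.destructive ? " (destructive)" : ""))
    .join(", ");
  return `${desc.prompt} Available actions: ${actions}.`;
}

// A presenter for sighted users could instead emit a styled dialog;
// both are driven by the same structural description.
console.log(presentSpoken(dialog));
// -> "Delete 3 files? Available actions: Delete (destructive), Cancel."
```

The point of the sketch is the separation: the description carries intent and content, and each platform's presenter is free to pick whatever form works best there.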


I wrote up some ideas about a scriptable, cross-platform accessibility integration system called aQuery (like jQuery for accessibility).

http://donhopkins.com/mediawiki/index.php/AQuery

Morgan Dixon did some wonderful work at the University of Washington called Prefab. Some of the links from my page to his papers are broken, but here's his web site and a demo:

http://morgandixon.net/

https://prefab.github.io/

Prefab: The Pixel-Based Reverse Engineering Toolkit

Prefab is a system for reverse engineering the interface structure of graphical interfaces from their pixels. In other words, Prefab looks at the pixels of an existing interface and returns a tree structure, like a web page's Document Object Model, that you can then use to modify the original interface in some way. Prefab works from example images of widgets; it decomposes those widgets into small parts, and exactly matches those parts in screenshots of an interface. Prefab does this many times per second to help you modify interfaces in real time. Imagine if you could modify any graphical interface. With Prefab, you can explore this question!

https://www.youtube.com/watch?v=w4S5ZtnaUKE

Imagine if every interface was open source. Any of us could modify the software we use every day. Unfortunately, we don't have the source.

Prefab realizes this vision using only the pixels of everyday interfaces. This video shows how we advanced the capabilities of Prefab to understand interface content and hierarchy. We use Prefab to add new functionality to Microsoft Word, Skype, and Google Chrome. These demonstrations show how Prefab can be used to translate the language of interfaces, add tutorials to interfaces, and add or remove content from interfaces solely from their pixels. Prefab represents a new approach to deploying HCI research in everyday software, and is also the first step toward a future where anybody can modify any interface.


That is very cool, thank you for sharing. I'll see if I can put some weekends into my part of the solution. Prefab seems like it may be the only hope for making existing proprietary software accessible; I want to make it easy for businesses with accessibility requirements to do the right thing well.

Maybe in the future we won't need computer vision to reverse-render display elements; one can dream.


I think it's great to use JavaScript to control and combine the best of both worlds: accessibility APIs plus screen scraping / selective screencasting / pattern recognition / computer vision.

For example, you could use the accessibility APIs to find the screen position of the video window in the Skype application, perform facial recognition and tracking, and screencast the video onto a texture of a VR chat application.


I have worked with some Blind clients, paid to fix computer problems in general. I find the Microsoft Windows users prefer Window-Eyes over JAWS. I have a Blind friend and have invested time with Adriane Knoppix, and most recently with Sonar GNU/Linux: http://sonargnulinux.com/ (GNOME with Orca, based on Arch). Orca is pretty decent with Firefox. We recently added Kodi with the screen reader add-on, and it's a whole new world.

I don't think Orca gets enough attention.

A lot of the software I use is difficult to screen-read because (lazy) programmers just don't label things. There's nothing worse than moving around a piece of software and all you hear is "push button, push button", with no idea what the buttons do. And when you take the time to leave feedback, it's like talking to a wall; no one does anything.
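On the web, the fix for the "push button, push button" experience is often a single attribute. A minimal illustration (the markup is built as a string purely to keep the example self-contained):

```javascript
// An icon-only button with no accessible name is announced as just
// "button"; adding an aria-label gives the screen reader a name to
// speak. (A visible text label or alt text would serve equally well.)
function iconButton(icon, ariaLabel) {
  const label = ariaLabel ? ` aria-label="${ariaLabel}"` : "";
  return `<button type="button"${label}>${icon}</button>`;
}

console.log(iconButton("🔍"));
// -> <button type="button">🔍</button>            announced as: "button"
console.log(iconButton("🔍", "Search"));
// -> <button type="button" aria-label="Search">🔍</button>
//                                                announced as: "Search, button"
```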

Inspired by this section of comments, I am going to make sure I label things and am more descriptive on my own web pages.


Assistive tech is still really a cottage industry. Here you are relying on the individual competencies of well-intentioned specialists.

I remember a poignant article by Melanie Reid of The Times (UK), who broke her back horse riding. Melanie vividly described an assistive-device trade show with every type of small manufacturer featuring their kit.



