How are blind people using OpenAI technology?

Discourse forums now expose a URL for finding posts, with a prompt as an argument.

The endpoint

https://SITENAME/discourse-ai/embeddings/semantic-search.json?hyde=false&q=YOURQUERY

will perform a vector similarity search.

Here is an example URL using this forum:

https://community.openai.com/discourse-ai/embeddings/semantic-search.json?hyde=false&q=%22How%20can%20AI%20help%20blind%20users%22

It seems that the YOURQUERY argument will accept a prompt like those given to ChatGPT (I have only tried simple single sentences so far).

The result is JSON. While that may sound less useful to blind users, it is actually more useful: JSON can be easily incorporated into other technologies, so one does not have to parse an HTML page to get the relevant information.
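To illustrate, here is a minimal Python sketch that builds the endpoint URL and fetches the JSON. The endpoint path and `hyde`/`q` parameters come from the URL above; the shape of the JSON response is not documented in this post, so the sketch simply decodes whatever JSON the server returns.

```python
import json
from urllib.parse import quote
from urllib.request import urlopen


def semantic_search_url(site: str, query: str, hyde: bool = False) -> str:
    """Build the Discourse semantic-search endpoint URL for a query."""
    return (
        f"https://{site}/discourse-ai/embeddings/semantic-search.json"
        f"?hyde={str(hyde).lower()}&q={quote(query)}"
    )


def semantic_search(site: str, query: str) -> dict:
    """Perform the vector similarity search and decode the JSON result."""
    with urlopen(semantic_search_url(site, query)) as resp:
        return json.load(resp)


# Build the example URL from this thread (the query is URL-encoded for you).
url = semantic_search_url("community.openai.com", "How can AI help blind users")
print(url)
```

Because the response is plain JSON, the same function can feed a screen-reader-friendly front end, a CLI, or any other tool without HTML scraping.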

2 Likes

I think that Advanced Voice Mode (AVM) needs to evolve to the point that it is a competitive, full-featured alternative to Dragon NaturallySpeaking. Perhaps OpenAI can proactively engage more participants, testers, and employees who are blind or visually impaired to properly and expeditiously enhance AVM in this direction. The blind and visually impaired could certainly use it!

1 Like

My dad is blind and loves using ChatGPT's voice features, but he can only access them with my help. The app is inaccessible and does not work well with Apple's VoiceOver screen reader: it does not read the buttons, so he cannot reach the ChatGPT microphone by himself. PLEASE FIX IT. It would be amazing if it worked for him. Make apps accessible.

5 Likes

Oh, the OpenAI API is very helpful indeed. OpenAI's own tools are not perfect in terms of client-side accessibility. However, I use the OpenAI API to build my own assistant that works with text, audio, and images. It would be amazing to use the Realtime API, but it only supports audio so far, not images, and audio alone is not what I need in my case.

I can’t wait until we have bounding boxes. That way I can click on buttons and whatnot.
Considering “agents” is the buzzword right now, they have to add a feature that parses UI elements properly. Hopefully that will be available via the API, which means I could automate things that I can’t automate with my screen reader and OCR.

Truly, I have to say there is no better time to be alive and blind than right now. Considering this is the worst it is ever going to be, the future is bright, even though I can’t see light, haha.

The OpenAI API is not cheap, though; images, for example, definitely are not. But I build tools for myself so they work the way I want, which means I mostly rely on my own tools rather than other services. I have spoken with OpenAI representatives and reported accessibility issues here and there, and they are responsive. I just need to give myself more time to report the remaining issues with the UI and screen reader compatibility.

2 Likes

Hi All,

Now that we are on GPT-5, what is the update on screen reader functionality?

1 Like