Creating Audio Descriptions for the Blind with DALL·E and ChatGPT 3.5

Hello there,
I work for a non-profit organization in Berlin (Berlin fuer Blinde dot de) that tries to make experiencing the city accessible to blind people.

I want to use ChatGPT 3.5 with DALL·E for picture recognition, in order to generate a detailed text description of an existing picture (e.g. a photo of a historic building) and then let ChatGPT create an audio description for blind people.
That is somewhat the reverse of DALL·E's picture creation, but ChatGPT told me that DALL·E or Microsoft Azure could do it.

I am not a developer, just a user, but I really appreciate my experiences with ChatGPT: the discussions, the translations into many languages, even into plain language for people with cognitive disabilities. So why not use it to produce audio descriptions?

Even Instagram generates some minimal audio descriptions. But how do I integrate DALL·E, and where do I find the button for picture-to-text description in DALL·E?

In case my question is off-topic here, please forward it to a suitable user forum, German or English. Thanks a lot in advance. Greetings, Kai


Hello Kai, welcome to the OpenAI developer forum. Using AI to generate detailed text descriptions of pictures and turn them into audio descriptions for blind people is a fantastic idea. One thing to know up front: DALL·E only creates images, it cannot analyze them, and GPT-3.5 cannot see images either, which is why there is no picture-to-text button there.

To get started, you might explore dedicated image-recognition services such as Microsoft's Computer Vision API or Google's Vision AI, which can analyze images and generate textual descriptions. You can then use a text-to-speech (TTS) service, such as Microsoft's Azure Cognitive Services TTS, to convert these descriptions into audio.
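For whoever ends up building this for you: the pipeline above (image in, text description out, then speech) can also be sketched with the OpenAI API instead of the Azure services. This is only an illustrative sketch under my own assumptions: the helper names are made up for this example, the model names ("gpt-4o", "tts-1", "alloy") reflect the current OpenAI Python SDK and may change, and a vision-capable model is required because GPT-3.5 and DALL·E cannot analyze images.

```python
import base64

def image_to_data_url(image_bytes: bytes, mime: str = "image/jpeg") -> str:
    """Encode raw image bytes as a data URL that the chat API accepts."""
    encoded = base64.b64encode(image_bytes).decode("ascii")
    return f"data:{mime};base64,{encoded}"

def describe_image(client, image_bytes: bytes) -> str:
    """Ask a vision-capable model for a detailed description of a photo.

    GPT-3.5 and DALL·E cannot see images; a vision model is needed here.
    """
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name; any vision-capable model works
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Describe this photo in detail for a blind listener."},
                {"type": "image_url",
                 "image_url": {"url": image_to_data_url(image_bytes)}},
            ],
        }],
    )
    return response.choices[0].message.content

def description_to_speech(client, text: str, out_path: str) -> None:
    """Convert the text description into an MP3 via OpenAI text-to-speech."""
    speech = client.audio.speech.create(model="tts-1", voice="alloy", input=text)
    speech.write_to_file(out_path)  # binary-response helper in the Python SDK

# Usage (requires `pip install openai` and an OPENAI_API_KEY):
#   from openai import OpenAI
#   client = OpenAI()
#   with open("historic_building.jpg", "rb") as f:
#       text = describe_image(client, f.read())
#   description_to_speech(client, text, "audiodescription.mp3")
```

The same three-step structure (encode image, describe, synthesize speech) carries over directly if you swap in Azure Computer Vision and Azure TTS instead.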

I know that you are not a developer, but your vision is inspiring; keep it up. Consider connecting with a developer or AI specialist who can help you bring this idea to life, and you might also explore AI-focused communities or organizations in Berlin for further support.
Your initiative is making a positive impact, and I wish you the best of luck with your project. :muscle: