Improving DALL-E 2 using data from the visually-impaired image description option?

I was having a conversation with ChatGPT about how to use language models to improve image models, and it gave me an idea. You know how when you upload an image to Twitter, it asks if you'd like to add a description of the image for the visually impaired? I don't know what Twitter is doing with that data, but it seems a shame if it isn't going toward image AI. Human-written descriptions paired with images are exactly the kind of training data that would help DALL-E 2 understand your prompts.
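
To make the idea concrete, here's a minimal sketch of what turning that alt text into training data might look like. The tweets.jsonl file and its field names are made up for illustration; Twitter doesn't actually publish its alt-text data, so this assumes you somehow had access to a dump of it.

    # Sketch: pair images with their human-written alt text to build an
    # (image, caption) dataset, the kind used to train text-to-image models.
    # NOTE: "tweets.jsonl" and the "media"/"alt_text"/"url" fields are
    # hypothetical stand-ins for whatever Twitter stores internally.
    import json

    def load_alt_text_pairs(path):
        """Yield (image_url, alt_text) pairs from a hypothetical tweet dump."""
        with open(path) as f:
            for line in f:
                tweet = json.loads(line)
                for media in tweet.get("media", []):
                    alt = media.get("alt_text")
                    if alt:  # keep only images a human actually described
                        yield media["url"], alt

    pairs = list(load_alt_text_pairs("tweets.jsonl"))

The appeal is that these captions are written by people who know the image's context, so they'd plausibly be higher quality than the scraped web alt text these models are usually trained on.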