I have been experimenting with the new DALL-E 3, but why does it output gibberish as Arabic?

Hey guys, I’m using the new DALL-E 3 to generate images and it’s truly mind-blowing. I spent around $30 on DALL-E 2, but I must say DALL-E 3 is on a whole different level.
But I have a question: when I ask DALL-E 3 to generate images with Arabic text like (انا احب امي, “I love my mom”), it doesn’t work at all and the generated text looks like gibberish. Is it because Arabic is not supported, or because Arabic is an RTL language?

Prompt: Photo of a heart-shaped object with the heartfelt Arabic message ‘أنا أحب أمي’ (“I love my mom”) embossed on it.
Output: [generated image]

I do like the little hearts that are part of the Arabic…

Writing words was nearly impossible with prior versions of DALL-E; it is now only barely plausible. The neural activations of layers pretrained on imagery are still very far from rendering text as exact font glyphs.

I suspect they added special layers in the new model just for things like human faces, and for text. You can try again and again.
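If you want to automate the retrying, here’s a rough sketch using the OpenAI Python SDK (v1-style client). I’m assuming an OPENAI_API_KEY in the environment; note that DALL-E 3 returns only one image per request, so you loop and pick the best candidate by eye:

```python
# Rough sketch: request several candidates, since text rendering is
# hit-or-miss. Assumes the openai v1 Python SDK and an OPENAI_API_KEY
# set in the environment.
from openai import OpenAI

client = OpenAI()
prompt = ("Photo of a heart-shaped object with the heartfelt Arabic "
          "message 'أنا أحب أمي' embossed on it.")

urls = []
for attempt in range(4):  # a few retries; adjust to taste
    response = client.images.generate(
        model="dall-e-3",
        prompt=prompt,
        size="1024x1024",
        n=1,  # DALL-E 3 only supports one image per request
    )
    urls.append(response.data[0].url)

for url in urls:
    print(url)  # inspect each candidate and keep the best one
```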

Same problem.

If it’s possible in programming, why isn’t it possible in AI?
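Because in programming, text rendering is deterministic: a rendering engine looks up exact glyph outlines in a font file, while an image model only reproduces the pixel statistics it saw during pretraining. For contrast, here’s a minimal sketch of the “programming” side: overlaying Arabic text on an image with Pillow. It assumes the arabic_reshaper and python-bidi packages (Pillow does not do Arabic letter shaping or RTL reordering on its own) and an Arabic-capable font file; the font path and output name below are placeholders:

```python
# Minimal sketch: deterministic Arabic text rendering with Pillow.
# Assumes: pip install pillow arabic-reshaper python-bidi
# and an Arabic-capable .ttf font on disk (path is a placeholder).
import arabic_reshaper
from bidi.algorithm import get_display
from PIL import Image, ImageDraw, ImageFont

text = "أنا أحب أمي"  # "I love my mom"

# Shape the letters into their joined contextual forms, then reorder
# the logical RTL string into visual order for left-to-right drawing.
shaped = arabic_reshaper.reshape(text)
visual = get_display(shaped)

img = Image.new("RGB", (512, 256), "white")
draw = ImageDraw.Draw(img)
font = ImageFont.truetype("NotoNaskhArabic-Regular.ttf", 64)  # placeholder
draw.text((40, 90), visual, font=font, fill="black")
img.save("heart_text.png")
```

One practical workaround until the models catch up: generate the image without any text, then composite the text yourself like this.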