What the heck is going on with language translations! Why the sneaky removal all of a sudden!?

Oh, and it’s not just OpenAI; it’s everywhere: it’s Google, it’s Amazon, etc. I don’t know what the hell has happened, but the thing I loved most about AI, what I had considered to be one of its most powerful features, is LANGUAGE! Being able to translate, speak, understand, or identify any given language. It brings people together in a world that is separated by so many different tongues, and it has helped me immensely in communicating not only with friends, but with business colleagues and others in different countries.

I remember testing a GPT model not so long ago. I wanted to see if it was able to identify ancient Aramaic. I found this really cool YouTube video (https://youtu.be/Wc22W3bos64?si=20JCJ-rRDkkWadW2) that lets you hear the languages of various countries/civilizations. GPT got many of them right but failed on ancient Aramaic, determining instead that it was Hebrew. OK, no big deal.

Oh, but it doesn’t stop there. Today I had a different language-detection scenario for GPT, and I got some weird response that it “can’t do it.”

I replied: oh, you very much can do it, because you have done it before. What’s going on!? GPT coughed it up: OpenAI pulled language detection from the audio portion of GPT-4. Oh… but it doesn’t stop there.

Does anyone have an Echo device? Try asking your Alexa to translate. SORRY CHARLIE! Alexa responds, “I can no longer translate.” Amazon pulled the feature on October 31, unbeknownst to me. So if you’re in a room with your buddy and you want Alexa to translate for you, which she was able to do before (not entirely that well, per se, because of latency issues, but it was still a viable feature), it’s completely gone!

Oh, you think you can go to Google Translate, click detect language, and then click the microphone, do you? Think again: it’s grayed out and won’t work.

I’m trying not to wear a tinfoil hat here, but there is clearly something going on with language. One of the most important features of AI, I believe, is that it can bring so many people together by breaking the language barrier. I have been able to communicate in ways that I never could before with business colleagues, and especially with personal friends of mine.

What is going on in the realm of language translation and/or language detection?

This is downright sad.

Sounds like you’ve experienced Google’s attempts at browser lock-in. For example, Google has been shown degrading video for a Firefox user agent, while the same video was fast with a Chrome user agent set.

The Google Translate site says:
Important: Supported languages vary by browser. You can translate with a microphone in Chrome and there’s limited support in Safari and Edge.

Also, you may have disabled browser access to the microphone.

With Alexa, try “how do you translate xxx to xxx language” or “how do you say xxx in xxx.” I got one refusal when asking differently, and it told me how to ask. It doesn’t understand spoken Japanese without going into the app and changing something; it apparently just ignores me.

That video is crypto spam. It has signs of being Bark-generated audio: the random amount of noise and the unpredictable voice.

Playground seems happy to understand and perform.


Thank you, Jay. As for the Alexa devices: simply say “Alexa, translate” and she will reply right away and tell you, “I’m sorry, I can no longer translate conversations,” a feature that was disabled on October 31.

GPT told me roughly the same thing! The ability to automatically detect language from audio is no longer supported. Remember, we’re not speaking of text here, but audio or voice.

You offered good advice, though, about the microphone and Google. I’m going to go back and check why that would be turned off, but I was using my Google Translate app, and it would be very rare for my microphone to be off considering how often I use dictation. Also, it isn’t until you select “detect language” that the microphone on Google Translate goes gray. But I’ll play with it some. Generally speaking, though, I am talking about the ability to play or speak an audible language and have the AI or translation app detect the language that is being played or audibly spoken. I feel as if that has just vanished.

Thank you.

Me: GPT, some time ago I asked you to listen to an audio file of a language being spoken, and you were able to tell me what language it was. Why are you no longer able to do that?

GPT 4: The ability to listen to audio files and identify languages was a feature available in a previous version of my capabilities. However, as of my current version, I don’t have the functionality to process or interpret audio files. This change in capabilities is part of the ongoing development and updating of the AI model, where certain features may be added or removed based on various factors like user needs, technological advancements, or policy considerations.

For language identification, you might consider using language identification software or apps that are specifically designed for this purpose. They can analyze the audio and provide you with the information you need.

Unbelievable.

That ChatGPT response can also be avoidance, or simply a mistruth.

The AI itself doesn’t understand spoken-language audio; it can only receive text from Whisper. Proper use of Whisper is for you to provide the input language as an API parameter, and even to pre-prompt it in the spoken language.
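
To make that concrete, here’s a minimal sketch of calling the Whisper transcription endpoint with the language given up front and a short prompt in that language. It assumes the current openai Python SDK; the file name and the Spanish example are just placeholders, not anything from the posts above.

```python
# Minimal sketch, assuming the openai Python SDK (v1.x) and an API key in the
# environment. "clip.mp3" and the Spanish example are hypothetical placeholders.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

with open("clip.mp3", "rb") as audio:
    result = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio,
        language="es",  # ISO-639-1 code for the language you expect (Spanish here)
        prompt="Transcripción de una conversación.",  # optional prompt in that language
    )

print(result.text)
```

If you leave out `language` and request `response_format="verbose_json"` instead, the response should come back with a `language` field containing Whisper’s own guess, which is about as close as the API gets to the “detect the language from audio” behavior being asked about in this thread.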