Can no longer analyze my audio files

I signed up for the Plus membership to ChatGPT because I initially found it was the only AI assistant with the ability to analyze audio files for quality, noise, room amount, etc. I had been running comparisons of short 50-second clips with no problems, getting detailed analysis and comparisons up until a week ago. The first issue was that I apparently hit an upload quota I had no idea about. Through the chatbot, I found I could share Google Drive links to the files to be analyzed without any hit on this quota. At that point, however, I discovered that the backend library, librosa, that was analyzing the files is currently broken. This has been confirmed by both the chatbot and customer support.

My issue is that this has brought my audio processing to a screeching halt, and what would be a relatively quick adjustment to the code is being ignored. It feels like a bait-and-switch: showing me this capability, letting me use it for my business, then taking it away despite my being a paying customer. Was anyone else using this function of ChatGPT? It was hugely valuable to me because of a current deficit in hearing certain frequencies; this analysis has been a very supportive tool that has given me great insight into my recording spaces. Now, with adjustments made, I have no way of running the comparison.

Customer support appears to be ignoring me…giving me what looks like a scripted response 24 to 48 hours after I’ve posted in the chat, and absolutely no responses to my emails.

In my case, this is not a free service where you take what you get. I subscribed and have had an automatic debit from my account…and shortly after, the functionality I needed, and thought I would have, is gone.

If anyone has any suggestions or insights, that would be much appreciated.


Hi,

I don’t have any advice on how to fix this issue, but I’m just chiming in to say I am having the same exact problem with audio files and I also have the paid plus membership with ChatGPT.

When I upload audio files (specifically MP3 files) it says “file uploaded successfully” then it will begin analyzing. However, after this, it proceeds to say, “It looks like the playback ran into a technical issue on my end, so I wasn’t able to listen to the track directly just yet” along with “Playback still isn’t cooperating due to a connection issue on my side—but we’re so close . The file is clean and fully uploaded, so as soon as my audio playback service is back up, I’ll be able to listen and give you full feedback.”

Now, this is my first time using this feature, so I wasn’t sure how it originally worked. But it seems that you and a few others I know of have successfully used this feature in the past, are running into this same issue now, and it has gone unresolved.

So, you’re getting arbitrary errors, but at the end of the day, what I’ve gotten from Support is that the library used to analyze the audio files is not communicating properly: librosa is broken. All that being said, I thought I might have had a workaround, uploading screenshots of the files spit out by my audio editor, but due to the quota nonsense, I still am unable to upload. I cancelled out of frustration, and saw that re-activating would reset the quota. That has turned out to be another false statement. I’m actually getting really fed up with this…I am willing to pay for a tool that gives me what I need. However, I paid for what was advertised, and it still doesn’t work…beyond frustrated.

I’m in exactly the same situation. I signed up last month for the Plus account primarily for audio analysis, which was brilliant. I got a couple of very polished masters and was shocked to discover that the ears had gone. Do we know if they are going to restore this? Without it, the service is of little use to me. Maybe if we make enough noise someone will listen!

So, ironically enough, at the recommendation of the chatbot, I actually crafted an escalation email and sent it over…I’ve gotten a few responses of “it’s being worked on,” but no progress as of now.
I started exporting the 3D frequency analysis and spectrogram out of WaveLab for the audio I needed inspected…that, along with creating an account for VLP (Virtual Listener Panel), is at least keeping me afloat for the time being…
But I agree: like you, I subscribed to Plus with ChatGPT, but with the lone feature that set them apart from other AI platforms being broken, with no ETA on when, or if, it will ever be fixed, I’m contemplating cancelling Plus this week. This, and the stupid exceeding-quota nonsense, has left a bad taste in my mouth.
I can highly recommend VLP. It’s helped me pinpoint and correct some issues I’ve been having recording in my secondary studio. They use a simple credit system: buy the credits you need, but signing up gets you 300 out of the gate. And they are super responsive to any questions or issues you may have. Take a look.

Hi Michael, the VLP does look interesting. I’ll certainly have a look. I’ve been trying to find alternatives to OpenAI, but no joy so far. The bots now seem to say they’ve never been able to ‘listen’ to tracks. I did a couple of masters as an experiment only last week. One of them was excellent, the other average, but I followed the instructions to the letter to see what I’d get. Shame if they don’t bring it back. It was a handy tool and the only reason I subscribed.

So, it was definitely able to “listen” to the tracks. I was able to get some insight confirming what I was hearing. All that being said…I put the same 3D analysis and spectrograms into Perplexity (after no longer being able to analyze audio) and did get different responses. So we may have to take what OpenAI heard with a grain of salt anyway. This hyped-up function, if it ever really was accurate, being gone now is seriously making me reconsider this Plus membership…Between Perplexity and Gemini, I can upload the .png files of my charts and get what may be a more accurate answer anyway.

Yes, I saw Perplexity, but being naturally inclined to laziness, I was hoping to just upload audio and save myself all the screenshots. Having said that, the graphs would be more reliable.

Yes, especially if the AI can’t actually “hear.” The other reason I dig VLP: in its analysis, the feedback it gives is learned hearing, and it gives very useful snapshots of the pieces I was concerned with: brilliance, nasal tone, excess bass, etc. Between that and the screenshots I’ve pulled from WaveLab, I do feel like uploading to a legit AI will get you what you’re looking for. It did for me, anyway.

Also, to be fair, I’ve only been analyzing single vocal tracks, not entire mixes…but the same principle should apply.

Just spent the afternoon installing librosa and associated software for analysis. Got ChatGPT to advise, as it’s all way beyond me. I can now see waveforms and show them to the AI. There are so many conflicts in the audio analysis software that I’m amazed it ever worked in the first place. It’s all the software that OpenAI was using before it went wobbly.
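For anyone else trying the same local setup, here’s a minimal stdlib-only sketch of what “seeing the waveform” means in code: it writes a one-second 440 Hz test tone to a WAV file and reads the samples back as numbers. The filename and tone are made up for illustration; `librosa.load()` would hand you the same samples as floats.

```python
# Write a 1-second 440 Hz test tone as 16-bit mono PCM, then read the
# waveform back and summarise it. No librosa needed for this part.
import math
import os
import struct
import tempfile
import wave

SR = 22050        # sample rate in Hz
FREQ = 440.0      # test-tone frequency in Hz
path = os.path.join(tempfile.gettempdir(), "tone.wav")

# Synthesise the sine wave as 16-bit integer samples.
samples = [int(32767 * math.sin(2 * math.pi * FREQ * n / SR)) for n in range(SR)]
with wave.open(path, "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(SR)
    w.writeframes(b"".join(struct.pack("<h", s) for s in samples))

# Read it back and inspect the waveform numerically.
with wave.open(path, "rb") as r:
    decoded = struct.unpack("<%dh" % r.getnframes(), r.readframes(r.getnframes()))

duration = len(decoded) / SR                    # seconds of audio
peak = max(abs(s) for s in decoded) / 32768.0   # peak level, 0.0–1.0
print(f"waveform: {duration:.2f} s, peak {peak:.3f}")
```

Plotting `decoded` with any charting tool gives you the same waveform picture you’d screenshot for the AI.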

With this info, could one even rely on the accuracy of what ChatGPT was giving back? In one set of files where I reviewed the 3D analysis and spectrogram in both ChatGPT and Perplexity, Perplexity kicked back the exact opposite, even though ChatGPT lined up with the audio analysis.

Thought I would give an update to any of my other audio engineering brothers out here in ChatGPT land…I was recently able to analyze audio again. Apparently they’ve employed another backend script to give pretty precise info like before. Comparing with what I’m getting from Perplexity (only screenshots of 3D frequency analysis and sonograms), it does seem consistent. I am re-subscribing to Plus yet again with the hope that this time, the one feature I need and use remains intact.

Worked for me up until late June, and it was a game changer in terms of improving my mixes. At first the chatbot said some backend code got messed up, but now it has confirmed it’s a product design decision due to cost/time and copyright concerns. Sent the same email to customer support and got a lame response saying it was passed on to the product development team. Kind of a bummer, as I was getting back into it after a few years away and had spent some money upgrading the studio with another UAD Satellite, a 55-inch TV monitor, and the latest NI Komplete bundle, since I was getting results. Ugh.

I’m experiencing the same issue. I had developed a pretty detailed workflow to analyze DJ sets and give reports back on the many qualitative aspects of a set. Was such a bummer when this broke.

This was the detailed explanation the AI had me submit as a problem, but I never heard back.

I’ve been using GPT-4o to analyze .wav and .flac files for DJ set mastering feedback — RMS, LUFS approximation, spectral centroid, phrasing energy, etc.

As of mid-June 2025, all waveform-based analysis workflows are broken due to a librosa + numpy compatibility issue (np.complex was deprecated in NumPy 1.20 and removed in 1.24).

This silently broke previously functional workflows that relied on librosa.load(), rms(), spectral_centroid(), etc.

These functions worked just fine earlier this month — the regression appears to stem from a base environment update.

Any word on whether this is being tracked or triaged? This breaks core creative workflows for DJs, musicians, mastering engineers, and audio producers using GPT-4o interactively.

(Reproduced easily by trying to run librosa.load() on any .wav file.)
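Assuming the diagnosis above is right, a common local stopgap is to restore the removed alias before importing librosa. The sketch below does that, then computes the two metrics this thread leans on (RMS level and spectral centroid) directly in NumPy on a synthetic test tone, so it runs even without librosa installed; `librosa.feature.rms()` and `librosa.feature.spectral_centroid()` use the same definitions, just framewise.

```python
# Stopgap for old librosa builds on NumPy >= 1.24, where the deprecated
# np.complex alias was removed. Re-add it before importing librosa.
import numpy as np

if not hasattr(np, "complex"):   # alias removed in NumPy >= 1.24
    np.complex = complex         # restore the old builtin alias

# The metrics from the workflow above, computed directly in NumPy
# on a 1-second, 440 Hz test tone at half amplitude.
sr = 22050
t = np.arange(sr) / sr
y = 0.5 * np.sin(2 * np.pi * 440.0 * t)

# Overall RMS level of the signal.
rms = np.sqrt(np.mean(y ** 2))

# Spectral centroid: magnitude-weighted mean frequency of the spectrum.
spectrum = np.abs(np.fft.rfft(y))
freqs = np.fft.rfftfreq(len(y), d=1.0 / sr)
centroid = np.sum(freqs * spectrum) / np.sum(spectrum)

print(f"RMS {rms:.3f}, centroid {centroid:.1f} Hz")
```

For a pure 440 Hz tone the centroid lands on the tone itself, which is a handy sanity check before trusting the numbers on real audio.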

As of 8/6/2025, the AI still reports no progress has been made. Very sad.

So this function was working for me all the way till the end of September.

I could upload audio files and have them dissected to a T, but after getting the “np.complex” issue, I got the AI to stop beating around the bush.

It has been disabled because of legal concerns and more, as mentioned above. They’re basically trying to tie up loose ends and refine as guidelines crack down (according to the AI).

The feature started to slowly be removed from users so it didn’t all happen at once. I always thought OpenAI was the best for this feature because I haven’t come across any other like it.

I have it set to where I will get a notification when it comes back because the plan is to bring it back - there’s just no timeline.

If anyone has any updates - please let us know.

Was working for me until a couple days ago. Please bring this feature back. I would also like updates if possible. Thanks