I’m just loving. Omni. So all’s well that progresses well.
Oh! Yay! Me too. It’s fun.
@moz_art , you are correct, sir. The new model is definitely “eager.”
I haven’t tried limiting output much, but I’ve had success in-conversation asking it to limit focus to the current discussion, or to “explain without examples.”
In my custom GPTs, a few have explicit instructions to limit answers to “one short sentence” in certain cases, and that helps, too.
The eagerness is annoying, no doubt. But I’ve noticed some logical errors as well. Errors are gonna happen from time to time; that isn’t new. What changed is that before, if you pointed out an oversight, it would apologetically correct it. Now it seems to systematically ignore specifics and repeat the same erroneous responses… with slightly different wording.
Yes, for 6 days now nothing has worked; the instructions are no longer taken into account, and yet they continue to sell subscriptions without telling people it no longer works.
I think we will have to take action; it can’t continue like this.
I agree. That was my main reason for canceling my subscription and advising my coworkers to do the same. Currently looking into other models suggested in this post.
I think some of the anger in all of this comes from SJ’s support for her fellow voice actors, something that will probably eventually impact live actors as well, considering what we can already do and the pace of technological progress.
What is the market rate for voice overs? Market research suggests that the global voice-over market size was valued at around USD 1.5 billion in 2021 and is expected to reach USD 2.3 billion by 2026, growing at a CAGR of 9%.
It’ll be interesting to see how modern TTS impacts the voice acting industry in the near future. Pretty soon I imagine you’ll be able to get an audiobook for a couple of dollars.
I’ve actually been buying a few for a couple of cents.
Amazon is using something in-house for authors… all automated. They have a $7.99/month unlimited audiobook deal on select books now, I believe…
There definitely are issues when using a chat window for an extensive amount of time with a lot of information it’s trying to recall while responding.
I noticed a new issue that sometimes happens when you ask it to continue generating something like code: it actually starts retyping the entire script from scratch in the middle of a code block.
It only seems to happen after an extensive amount of time using ChatGPT. The only way I’ve found around it is to close the window and start over, which is a pain when you’re not training.
I noticed another issue where the page gives you a response, then says there was an error and forces you to reload, which sometimes removes the response that was provided, and you have to regenerate it.
The crazy thing is, when I first started using GPT-4o it was amazingly fast and did so much that impressed me over GPT-4, but now it’s really just a slower version that breaks more.
Yeah, I’ve noticed all of this myself.
Though the slowdowns seem to usually be around this (9:00 PM Mountain Standard) time of evening. I think it’s just usage spikes.
Yeah, I’ve seen this too. The reload is a bit unpredictable; sometimes what I was working on is where it was meant to be, sometimes it isn’t.
It’s frustrating, but I bet it’ll clear up as they scale up their end for all the new people.
I just cancelled my sub. It’s become worse than worthless to me for coding; a distraction that never bears fruit. Feels like a completely different model than when GPT-4 first launched.
They do this all the time. Release a “new GPT” which is awesome. People subscribe. Then they slowly pull back the computing and make it lazy. It’s DISGUSTING business practices.
I bought it for a full year. I will never forget this.
“I will never forget this”
Stay strong.
It performs on a curve for sure, either based on usage or based on load.
But I think we can all agree it’s not consistent. I don’t know if the API performs any better - haven’t had time to build a proper chatbot.
I think it’s just a lack of capacity. Right after a release there aren’t many people using the new model, so it’s fast and “smart.”
We’re early adopters… we got the benefit of seeing how great the model is/can be, and then the load got too heavy.
Under heavy load, I promise you they have it set up to maintain speed over quality, knowing that the only people needing the more complex results are the only ones who will complain about them. Everyone else is redoing resumes and term papers.
I think it will get better over time; it will have to, if they plan a v5 anytime soon.
jm2c
Thinking about it more, they wouldn’t have the same setup with the API, because it’s a pay-per-transaction service; if they did anything to lower the compute per query based on load, they wouldn’t have a business.
OpenAI is hiding any negative comments as “offensive”.
According to the community rules these posts have been removed for offensive language:
Keep it clean. No profanity, obscenity, or insulting comments, please.
This community offers headroom to voice complaints and criticisms. Posts with inappropriate content will be removed.
Honestly, using this for coding is more aggravating than ever. I tell it not to provide any code in a response unless I specifically ask, and it responds with code. The responses are so long I just end up stopping them most of the time. Something has to change. Memories are nice, but it doesn’t follow rules.
I totally agree. It doesn’t follow rules at all. I tried bouncing between two different chat windows, one using 4o and the other 4 Turbo. Guess which one provided to-the-point, precise, and accurate answers? Hint: not 4o. I remember raging at 3.5 about 2 years ago, and I found myself doing it again with 4o. That’s when I realized I was using a stupid model.