Up until recently, GPT-4-0314 was the “best” model (I still maintain the post-April downgrade left it incredibly inferior to what it was). I am finding today, however, that even this is becoming useless for coding… Cue someone telling me it’s my fault again…
I’m very interested in how you are using GPT to write your novel.
I can see how context is an issue, and by extension the cohesion of the story. I also wonder how well-developed the characters and plot are. Besides the context issue, do you find it lacking in any other areas?
Since my early days with GPT, I never bothered trying to write stories, as it always wanted to wrap things up. It never cared to “plant any seeds” in the story. It was all very shallow.
Can you use older versions with the API? Maybe I can switch and pay for the API only…
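Yes, the API lets you request a dated snapshot by name, e.g. gpt-4-0314, assuming the snapshot is still available to your account. A minimal sketch with the openai Python library (v0.x era); the API key is a placeholder:

```python
# Minimal sketch: pinning an older dated snapshot via the API.
# Assumes the openai Python library (v0.x era) and that the
# "gpt-4-0314" snapshot is still served for your account.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

response = openai.ChatCompletion.create(
    model="gpt-4-0314",  # dated snapshot instead of the rolling "gpt-4" alias
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response["choices"][0]["message"]["content"])
```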
I think it’s totally off topic; maybe we can talk about it in private.
It is off topic, but you might want to consider starting a new topic and possibly turning it into a wiki as a novel (long-form fiction) writing/prompting guide.
I’m sure loads of people would love to read about your successes (and failures along the way), and it could grow into a great resource for the community.
They’re rolling out an update and maybe you haven’t gotten it yet. My version is now July 20th and the message limit is 50 per 3 hours. I had the same issues as you and got really frustrated with the incoherence and forgetfulness, but since this update and setting custom instructions, responses are great again. GPT doesn’t forget what we’re working on, and the response quality is really good for me now, with few errors.
Check if you’re on the newer version and if your limit has increased yet. From my understanding, this is rolled out in batches to ChatGPT Plus subscribers, so perhaps it takes a couple of days before everyone is switched over. The custom instructions can now be set under your profile.
It really seems better than it was 4 days ago. It’s still too soon to tell, but apparently there has been improvement with the July 20th update.
Imagine you went to a seafood restaurant and chose a good-looking fish to be cooked, but they switched it for a lighter fish with worse quality and taste, and still charged you the same price.
This thread is deteriorating into anecdotal statements and mismatched sub-threads. Please keep comments on the topic in the header above.
I have been having the same issues when using the July 20th version of ChatGPT-4 for programming: it doesn’t understand the code, and when I ask it to find bugs it doesn’t find them, or the proposed solution never works. Sometimes it changes the code so much that it no longer makes sense given the initial objective. If I ask it to write a full version, it writes parts and adds comments where I can fill in other parts because they are, in theory, the same; however, in the meantime it has changed the code so much that nothing works.
Hmm… again, from my view, I don’t see big differences between GPT-4’s old and new versions, or between GPT-3.5-turbo’s old and new versions. I actually hope they have been improved, as I haven’t noticed any big issues in my usage. I am surprised by the number of people complaining about it, because I haven’t encountered any problems myself.
The only issue I noticed was some hallucination yesterday and today, but that is probably because we haven’t fed the AI enough data yet. So we are continuously updating documents and functions to see improvements.
I also read an article saying that GPT-4 is worse than the previous version because it could no longer identify prime numbers. However, our company is not using GPT-4 for prime-number calculations, so that won’t count as a downgrade for our use.
My company is only interested in the LLM API that focuses on accepting large tokens, understanding long queries, avoiding hallucinations, reading documents, answering accurately, correctly performing simple math calculations, and responding fast enough through chat.
“Oh, it’s just that you think it is dumber; it’s actually the same”: I don’t believe that anymore. GPT-4 is getting worse with every day. That is basically a fact at this point; if you were using GPT-4 every day, you would notice it right away. GPT-4 is still useful, but you really feel the sudden change in quality that happened over the last few days. I want the best model if I pay for the best model. Please stop, OpenAI.
What type of applications are you using 3.5-turbo for? I tried using it for chat completion on a large set of regulatory docs and found it almost useless. It failed to read/understand the context documents a very large percentage of the time, the very same context documents from which gpt-4 would read and give the exact answer.
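For reference, this is roughly the pattern I mean. The document and question below are placeholder stand-ins, not my actual data, and the code is a sketch rather than my exact setup (openai Python library, v0.x era):

```python
# Sketch of the context-document pattern: same prompt, two models.
# Placeholders: DOC stands in for one regulatory document,
# QUESTION for a question answerable from it.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

DOC = "Section 4.2: Annual reports must be filed by March 31."
QUESTION = "By when must annual reports be filed?"

def ask(model: str) -> str:
    response = openai.ChatCompletion.create(
        model=model,
        messages=[
            {"role": "system",
             "content": "Answer using only the provided document."},
            {"role": "user",
             "content": f"Document:\n{DOC}\n\nQuestion: {QUESTION}"},
        ],
        temperature=0,  # keep the comparison as deterministic as possible
    )
    return response["choices"][0]["message"]["content"]

# In my experience gpt-4 answers from the document, while
# gpt-3.5-turbo frequently ignores or misreads it.
for model in ("gpt-3.5-turbo", "gpt-4"):
    print(model, "->", ask(model))
```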
GPT-4 really has gotten bad in the last couple of days. It makes tons of mistakes even in simple code that used to work 100% on the first try. It forgets things, brushes over things, and gives incomplete responses (like a comment in code, „//fill the rest out accordingly“ - what do I pay you for?!).
But most shockingly, today it made typos. Like, what?! Is this GPT-2 or GPT-4?
When GPT-4 came out, I was so happy to have a decently good programmer at my side all the time. Now it is hardly capable of returning bug-free code that functions as prompted. Stop pruning GPT-4, please. I used to love this product. I’m using it less and less because it simply doesn’t do the job anymore.
Yes. Same experience here. My app is now broken. Using the API, I get very poor quality responses with the final few paragraphs of output not even forming sentences. The same prompt in ChatGPT still works well. Something is seriously amiss with the API version, and it started in the last few days.
I have done several experiments directly comparing the API with ChatGPT - the difference is marked.
Hopefully it will be fixed soon.
Trust me, I’m not an apologist for OpenAI. I think they could do a whole lot better with customer support, and somehow do not seem to understand that how they treat us, and how we come to perceive them, will be mightily important (as it seems totally unimportant to them now) in the future when this whole AI thing shakes out.
HOWEVER, compared to the competition today…
And, this isn’t a one-off. I stopped asking Bard code questions months ago, when I suddenly realized it had no clue about what it was saying. It just made stuff up.
Here, it admits and even explains why it, essentially, lied.
For all its current failings, GPT-4 has never done this to me. When it has given me totally incorrect information, it has nearly always corrected itself (when it could).
I want the current issues with GPT-4 to be fixed, like everyone else here. But, people, it could be worse. A LOT worse.
But I was talking about ChatGPT! I haven’t used the API in a while.
I feel like this started with the update where GPT-4 in ChatGPT became faster.
You have to understand that the reasons these systems give for their behavior are not their actual reasons, and never will be. Their actual reasons come from neuron activations, which they aren’t able to perceive. The stated reason is just as made up as everything else.