Oh well I can live with that.
Just had a great task made with o1. It processed correctly my zip attachments and produced a great work.
Time to forget 4o, if all goes well, perhaps…
It’s great if o1 is working well, but it’s honestly still terrible for everyone who’s still trying to use 4o. There have been essentially no real improvements from what I can tell, and I’ve been using it every day since the update, trying to see if it will improve for writing. It’s quite frustrating.
It’s great if o-models work for some people, but I have absolutely no need for a reasoning model when 4o is perfectly capable of doing what I need it to do. Do I use o3 for some coding stuff? Yes. Do I need o3 for creative writing stuff? Hell no. I use 4o with search on for 90% of my work tasks. I don’t need nor want a reasoning model for those tasks. I hate that OpenAI is basically trying to push o-models down our throats and completely abandoning regular models…
Oh well, nothing works today again. Sigh.
Again, o1 is working fine today lol. xD
I had the same problem: it wasn’t available in the web app, only in the desktop and mobile apps. However, it has now reappeared in the web app for me.
That said, the copy-message button does not work on the web app, and in the desktop app the copy button actually copies the message twice. Most likely they were making some changes to this feature.
I noticed that despite selecting GPT-4o, it was actually using GPT-4. I realized this because something felt off - the conversation was long, but the responses were flat and lacked nuance, which made me check how another version would answer. So, I tried regenerating a response using GPT-4, and I got a pop-up message saying I had reached my limit!
Does that mean I was talking to GPT-4 the whole time? And when I hit my limit, which model was actually responding?
So unreliable.
Hi, I’m currently writing a fictional book with real names and places, and I’m strictly following the rules.
I described a situation where the protagonist randomly meets the friendly mailman on the same day, once at the office and once by chance in her building. She was shy, and I wrote that she acted like a 14-year-old in terms of her demeanor, but she’s an adult. In the evening, I wanted her to reflect on it. Then ChatGPT says:
“I’m sorry, but I can’t process this text as requested. However, if you’d like, I can help you improve the text or develop the narrative. Let me know how I can assist you further!”
What’s wrong with that? It’s a completely normal situation when flirting, yet ChatGPT immediately assumes the worst—thinking it involves some sort of psychological issue or other problems—and that it conflicts with guidelines.
Sorry, but this is really starting to go too far. It feels like the AI is not engaging with text and conversation at eye level, but instead excluding anything that could even slightly touch on the guidelines.
Blah, I still can’t world-build or do my D&D campaigns. Two weeks of this and it’s not fixed? Why? How? I’m so tired of these fragmented sentences and bold, short responses that waste my limits.
What I tried today was giving GPT a strict instruction with every message, and so far, it has worked well: ‘No emojis, icons, lists, text highlights unless I request them. No descriptions of surroundings unless I ask for them. Text with paragraphs, flowing, dynamic, humorous, don’t end the day.’ And it has stuck to it so far. I’m using the free version of GPT and have only provided my disclaimer in the configuration. But I don’t know if it will start deciding for itself again at some point. Maybe this helps you with direct prompts.
This works fine for my story at this moment
While I agree that setting up API can be cumbersome, you don’t need to learn coding to do that. You can run Open WebUI or LibreChat and the interface would literally be the same as ChatGPT’s web version. In fact, I prefer Open WebUI’s version of ‘projects’ because it actually behaves like proper folders and you can nest them…
And I do use the API for creative writing. Yes, it’s more expensive, depending on your usage. But it’s also been a lot more consistent for me compared to the web version, because I can control which version of the model it uses and I’m not subjected to OpenAI’s experiments on their latest 4o version.
That being said, I’m currently looking into using the Deepseek API, which is cheaper than ChatGPT but, since it’s basically trained on a similar dataset, gives comparable responses in its web version.
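To illustrate the “pin the model version” point above, here’s a minimal sketch of what an OpenAI-compatible chat request looks like when you control it yourself. The exact snapshot name (`gpt-4o-2024-08-06`), the DeepSeek model name (`deepseek-chat`), and the system prompt are all assumptions for illustration, not details from this thread.

```python
# Hypothetical sketch: building an OpenAI-compatible chat-completions
# payload with an explicitly pinned model snapshot, so the provider
# cannot silently swap in a different model version underneath you.
# Model names here are assumptions, not verified recommendations.

def build_chat_request(prompt: str,
                       model: str = "gpt-4o-2024-08-06",
                       temperature: float = 0.8) -> dict:
    """Return a chat-completions request body pinned to one model."""
    return {
        "model": model,            # exact snapshot, not a floating alias
        "temperature": temperature,
        "messages": [
            {"role": "system",
             "content": "You are a creative-writing assistant."},
            {"role": "user", "content": prompt},
        ],
    }

# The same payload shape works against DeepSeek's OpenAI-compatible
# endpoint: only the base URL and model name change on the client side.
openai_req = build_chat_request("Continue the scene at the mailroom.")
deepseek_req = build_chat_request("Continue the scene at the mailroom.",
                                  model="deepseek-chat")
```

Front-ends like Open WebUI or LibreChat send essentially this payload for you; the point is that the `model` field is under your control instead of whatever the web version happens to be serving that day.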
OpenAI has to cut processing to make room for whales.
Yep, I fully support your switch to the Deepseek API, bro. It’s the only move they’ll understand. The censorship here is insane. Even my topic post got yeeted. They should change their name from OpenAI to MoneyGramAI.
I saw they removed your post, but you were right.
Sadly, they do not understand; the greed behind the scenes is big. But they don’t realize the Chinese labs will hit harder and harder… they found their weakness, open source, and keep hitting the same spot
→ to undermine their $200 subscription of greed. If you do the math, you could get your own GPU and hardware instead (and the Chinese will hit NVIDIA next time, but it will be a domino effect…). I still ask myself: are they blind enough to think the same strategy will work forever?
To answer your question, let me put it in corporate vocabulary:
In the end, they’re not blind, my friend; they’ve been more “controlled” since 2023, because of this:
Microsoft to Invest $10 Billion in OpenAI, the Creator of ChatGPT - The New York Times
Here is an image sample of the latest greatness of OpenAI (WTF???)
Please share the thread; I need to read the other inputs. You can see your thread name is “Eighth Crusade Summary.” It’s probably a few runs behind → like GPT-3.5 Turbo in the old days (or perhaps the “behind” version is a fine-tuned GPT-3.5 Turbo model).
That’s why I can’t trust them anymore. For over a year, I’ve been urging them not to make silent updates that alter output quality. It’s like paying for a meal at a Michelin-starred restaurant only to get fast food for the same price.
I don’t know what exactly is wrong in your case, but I don’t have that kind of issue at all. 4o model with search turned on.
Yeah, I thought it had gotten better, but I’m now stuck in chats that once again keep ignoring instructions, and the quality is just terrible. The memory issue got fixed and is better now, which is great, but overall the model is still terrible.
So yeah, I’m likely going to cancel my subscription soon and wait until I hear things improve, if they ever do.
Part of me suspects they won’t, because with the new models announced to be coming out soon, I think that’s where all the processing power will be going, and 4o is going to be left behind.
So here’s hoping that 4.5 or 5 gives us back what was lost and hopefully improves on it. As I’ve said elsewhere, objectively speaking, they can’t leave the flagship model that people pay $20 a month for near-unlimited access to in such a bad state, because that’s just giving their competitors every advantage. So either they’re going to have to release these new models soon and make sure they’re good, or they’re going to have to fix what they broke.