I wonder if o1-mini is now a newer version since o1 was released, or still the same, because o1-mini never had “preview” in its name…
I ask because in many cases I got better results using mini than preview.
I must be doing something wrong because o1 Pro is working beautifully for me. The time it saves is absolutely worth the $200/month if you use ChatGPT for work. It’s a no-brainer.
I wouldn’t touch ChatGPT for work prior to o1 and the macOS app’s ability to work with other apps (like VSC/Cursor, Terminal, text editors, etc.). I used Cody+Copilot+Cursor exclusively. Now I have the desktop app open at all times. It’s weird.
Those who are unsatisfied with o1 Pro… Sorry, but it’s user error.
No. Just… no. The limitations are about cost and resources. There’s a reason we’re reviving nuclear power for AI.
Implementing reasoning is not new! This feature could’ve shipped in some form ages ago, and AI nerds have been crafting their own chain-of-thought (CoT) systems (as well as using other strategies).
Again, it’s a resource issue. Instead of hitting the server once (for your single, lone, unadulterated prompt), o1’s “reasoning” basically has a conversation with itself, and that means call after call after call until the problem is solved. Which is expensive.
Money. It’s the solution and the problem.
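For anyone curious what that loop looks like in practice, here’s a minimal sketch in Python. To be clear, this illustrates the general self-dialogue pattern described above, not OpenAI’s actual implementation; `call_model`, the `max_steps` cap, and the “DONE” stopping heuristic are all placeholder assumptions.

```python
# A minimal sketch of the self-dialogue loop described above (an assumption
# about the general pattern, not OpenAI's actual design).

def call_model(messages: list[dict]) -> str:
    """Placeholder for one chat-completion request to any LLM client."""
    raise NotImplementedError("wire this up to your LLM client of choice")

def solve_with_reasoning(problem: str, max_steps: int = 5) -> str:
    messages = [
        {"role": "system", "content": "Reason step by step. Say DONE when solved."},
        {"role": "user", "content": problem},
    ]
    answer = ""
    for _ in range(max_steps):  # each iteration is one more billable API call
        answer = call_model(messages)
        messages.append({"role": "assistant", "content": answer})
        if "DONE" in answer:  # crude illustrative stopping heuristic
            break
        # Feed the model's own output back to it as the next turn.
        messages.append({"role": "user", "content": "Continue reasoning."})
    return answer
```

Each pass through the loop is another round trip to the server, which is why the cost scales with how long the model “thinks.”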
Thanks for your opinion.
Care to back it up with some scientific references?
I understand what o1 is doing in concept. That’s exactly why I’m sure they are taking this approach: LLM scaling is not delivering.
Making an LLM talk to itself is a hack. Yes, it gets us more, but I’m not convinced it will lead much further.
I suggest listening to LeCun, Chomsky, and Schmidhuber, among others.
o1 has become useless. Now I feel it’s wasting my time more than anything. I’m trying to get it to find the error in my code. It keeps recommending the same thing (which does not solve the problem). Then, when I make a change to my code so that a new error pops up and I ask it to figure out where this new error is coming from, it ignores what I said in my latest prompt and thinks my code is still in the state from a previous point in the conversation. It’s just not following the conversation correctly.
I don’t know what they changed in o1, but it’s really messed up now, and totally unusable.
I created an account here just for this. I had been enjoying the $20 paid model for quite a while for embedded programming. Since the update, I am not. It’s crazy; whether it’s intentional or not, the models have become worse in every aspect.
It writes me a new method and then, instead of implementing it, just returns the same old code without it. “Hey, why didn’t you use the new method?” “Oh yeah, that’s a great idea.” lol
The points are directly referring to the state before Orion.
Betelgeuse is now shining less brightly.
If you have issues or problems with ChatGPT, the help.openai.com support system is the best place to voice them; this is the developer forum and is primarily for API developer issues.
Happy for you to voice your concerns here but please understand that the correct procedure is to visit help.openai.com and use the support bot in the bottom right corner.
Yeah, I agree. They even removed the “Continue generating” button to make chats shorter since the o1 model came out, and everything has been worse since they released the o1 and o1-mini models. It’s like they’re forcing us to pay for the Pro model.
The o1-mini feels useless today. I get the impression it’s just GPT-3.5 Turbo with a bigger context window and an animated chat to pretend it’s doing great work behind the scenes…
What’s going on with these silent updates?
EDIT:
It’s stuck on old prompts and not following instructions… Zero inference → zombie on repeat…
EDIT 2:
Got better
I feel that o1-preview was more open-minded and creative. I designed a challenging math problem that o1-preview could solve about 30% of the time, while o1 never succeeded.
o1 seems more rigid and less inclined to explore alternative possibilities. It’s still a good model, but it fell short, especially given that OpenAI claimed it was far superior to o1-preview, even removing the latter from the platform.
We’re entering a phase where newer models aren’t always better at everything. Evaluating them is becoming increasingly complex. I hope GPT-4.5/5 reverses this trend.
I returned to give an update.
Since my last post, where I was not happy with any model, something has changed. The o1 model is now back, stronger than ever before. For the first time the reasoning actually makes sense. Before, the reasoning steps were gibberish (sometimes even in Spanish, lol), even in the working o1-preview.
As of right now, o1 is doing something different. It thinks far more critically and even includes a problem-analysis header in each answer. That could be because I specified it, but I had already done that previously and it didn’t work. For code generation it uses a different style from before; it sections the code much better.
For o1 and 4o (compared to 4 weeks ago):
I heard at one point that after an update the models get worse before they get better; that’s my experience here as well.
At this point I am satisfied.
Yes, it’s better, and more expensive, for simple tasks with zero creativity.
I found its limits → when you run tests with complex structures involving multiple files, it gets stuck and behaves dumbly. You explain each file, and it gets caught in a loop; its intelligence dies…
I tried referencing it again, anchoring it, and explaining the goal, but it’s still “dead” and needs a human to complete the task for it. I noticed this a long time ago, and now I see it again (while people talk about reaching AGI or ASI… what they can really be proud of is that GPT-3.5 can beat GROK).
They don’t need to reach AGI or ASI; they just need to increase the context window for agents and add custom specialized models that can be used for specialized agents. That’s how they win in the long term.
EDIT:
I see a noticeable drop in quality. I can’t trust OpenAI anymore… → these silent updates are too much. They’re not just hurting the model’s performance but also killing the mood and overall experience.
I’d rather feed a Chinese model with data than continue using OpenAI if they don’t fix this quickly… and stop letting it happen again!
I have paid a fortune on AI testing, so I tried the o1 Pro plan as well. I know the basics about coding, so basically nothing. I just decided to build a simple web app. o1 suggested a library; I said, “OK, this one.” We never got past the configuration and folder structure with it. After 7 hours, just for laughs, exhausted as I was, I did a web search for that particular library. It was simple.
It doesn’t matter if the bot knows how to code. What matters is that it must first learn how to interact conversationally with the user. The user is human. The bot and I were speaking different languages. They must improve the bot’s prompt adherence, not its coding skills.
7 hours writing and writing for just the first steps of the app development. I wrote a book.
I don’t know if I can “LoRA” it (I’m coming from multimedia open-source models) but that would be… even more writing.
You should try out v0.dev for projects like that if you haven’t yet!