o1-mini-2024-09-12 is now deprecated in the API.
Please, please, please, do not deprecate o1-2024-12-17.
The amount of hallucination in o3-2025-04-16 makes it nearly unusable, as documented in section 3.3 of the OpenAI o3 and o4-mini System Card (April 16, 2025).
And the tricky part is that o3 is more conversational and better at convincing people its hallucinations are correct.
I am also disappointed that it is no longer available in the ChatGPT interface, which I assume is a cost-cutting measure, but to me it feels like enshittification.
In my humble opinion, it has to do with how we guide it and how complex the issue is. If it's complex, the model will fill in the blanks, generating non-factual or non-existent cases. But this also opens a door when it comes to training on specific data: if you train a model purely on one jurisdiction, it will be less prone to hallucinate on those topics.
Do you mean the “Developer message”? That is generally how you'd guide the model, and it's usually where jurisdiction-specific instructions would be set.
Or are you talking about the training itself?
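For context, here is roughly what I mean by guiding the model with a developer message. This is a minimal sketch assuming the OpenAI Python SDK; the jurisdiction wording and the user prompt are purely illustrative:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# A developer message steering the model toward one jurisdiction and
# explicitly discouraging invented citations. Wording is illustrative.
response = client.chat.completions.create(
    model="o3-2025-04-16",
    messages=[
        {
            "role": "developer",
            "content": (
                "You assist with Australian law only. Cite only cases you "
                "can name precisely; if you are unsure, say so rather than "
                "inventing a citation."
            ),
        },
        {"role": "user", "content": "Summarise the key precedents on unfair dismissal."},
    ],
)
print(response.choices[0].message.content)
```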
The training itself. This case (Australian lawyer caught using ChatGPT filed court documents referencing ‘non-existent’ cases | Australia news | The Guardian) is a good example. I don't think you can build a general app for niche legal areas; training on specific legal domains (e.g., tax law, intellectual property) reduces hallucinations by narrowing the data scope and increasing familiarity with relevant terminology and precedents. But it still remains a mystery to me why the AI invents plausible-sounding but false information…
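To make the training angle concrete, here is a minimal sketch of starting a domain-specific fine-tuning job with the OpenAI Python SDK. The training file name is hypothetical, and the base model is just an example of one that supports fine-tuning (o3 itself may not be fine-tunable this way):

```python
from openai import OpenAI

client = OpenAI()

# Upload a JSONL file of chat-formatted examples from one legal domain.
# "tax_law_examples.jsonl" is a hypothetical file name.
training_file = client.files.create(
    file=open("tax_law_examples.jsonl", "rb"),
    purpose="fine-tune",
)

# Start a fine-tuning job narrowed to that domain, which is the
# data-scope narrowing discussed above.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-mini-2024-07-18",  # example of a fine-tunable base model
)
print(job.id)
```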