You raise some valid concerns! I'd add that precision isn't the only thing the new updates seem to lack. Even the quality of the hallucinations seems to have degraded, for example.
Previously, GPT-4 would sometimes hallucinate in ways that were useful to the project. That still happens, but even the hallucinations are typically less useful now. All of its output feels more “efficient,” more to the point, so literally to the point that it sometimes SKIPS points entirely, giving information so heavily summarized that it doesn't cover everything you asked for.
The following behavior (at least the model feeling lazier) has already been acknowledged publicly by OAI, and they say they're working on it:
It’s like the AI is more focused on getting to the end of the prompt than on correctly resolving it. Ask it for an analysis from the perspective of three different types of people? Each one gets two bullet points and a side tangent, nothing resembling a detailed analysis. Then at the end GPT writes a summary explaining how thoroughly it addressed your request.
If you point this out to GPT, it will apologize and then not fix the problem, either doing the same thing again or dropping a different detail of your request.