Hi everyone,
I’ve been testing the “Projects” feature and noticed a consistent difference in response behavior that might interest others.
Observation: When I start a Project with no initial instructions or files, GPT-4o tends to give shorter, less detailed, and seemingly lower-quality answers to the same basic prompts than it does in a regular chat session with the same model.
In addition, vision capabilities (e.g., image analysis) feel less reliable in Projects: I ran into more hallucinations and misinterpretations when using GPT-4o inside a Project, versus much more accurate results in regular chats.
This makes me wonder:
- Is GPT-4o's context handling somehow limited or sandboxed in Projects?
- Are there model configuration differences we’re not aware of?
- Or does the lack of initial instructions trigger a default behavior that's more conservative or minimal? (A rough way to probe this last point is sketched below.)
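For anyone who wants to poke at that last hypothesis: here's a minimal sketch using the OpenAI Python SDK. To be clear about its limits, the API doesn't expose Projects at all, so this can't reproduce the Projects environment itself; it only isolates one variable, whether the mere absence of a system message makes GPT-4o noticeably terser. The test prompt is just a placeholder, and you'd want many repeated runs, since a single comparison proves little given sampling variance.

```python
# Hypothetical A/B test: does omitting the system message alone make GPT-4o
# respond more tersely? Note: the API does not expose the Projects feature,
# so this only tests the "no initial instructions" variable, not Projects.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = "Explain how HTTP caching works."  # placeholder; use your own prompt

# Case A: no system message, loosely mimicking a Project with no instructions.
bare = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": PROMPT}],
)

# Case B: a generic system message, as a regular-chat-style baseline.
guided = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": PROMPT},
    ],
)

# Compare response lengths; run this in a loop and average for a fair test.
print("no system message:  ", bare.usage.completion_tokens, "completion tokens")
print("with system message:", guided.usage.completion_tokens, "completion tokens")
```

If the two cases come out consistently similar over many runs, the difference I'm seeing is more likely something Projects-specific rather than just the missing instructions.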
Would love to hear if others have experienced the same.
Any insights from the OpenAI team on this behavior?
Thanks!