I recently changed the model parameter from ‘gpt-4.1-mini’ to ‘gpt-5-nano’, and since then I’ve started noticing a few issues:
- The response time is much longer, even though OpenAI claims it is the fastest model in the gpt-5 family.
- The bigger issue is how the model formats its responses. Previous models like ‘gpt-4.1-mini’ consistently returned Markdown-formatted responses, and I never had issues rendering them in the UI. After switching to ‘gpt-5-nano’, however, responses that include code (or mathematical equations) are not formatted reliably: they are either rendered as plain text or formatted inconsistently. I’m sharing screenshots of the original response text (as returned from the API).
Note: The first image is the response text from gpt-4.1-mini and the second one from gpt-5-nano.
Please let me know if anyone else is seeing the same behavior.
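For context, here is a minimal sketch of the kind of call I’m making with the Python SDK. The prompts and the explicit Markdown/LaTeX formatting instruction are illustrative, not my exact production code; the only real change on my side was the value of the model parameter.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-5-nano",  # previously "gpt-4.1-mini"
    messages=[
        # Illustrative system instruction asking for consistent formatting;
        # not my exact production prompt.
        {
            "role": "system",
            "content": "Return code in fenced Markdown code blocks and math in LaTeX.",
        },
        {
            "role": "user",
            "content": "Write a Python function that returns the nth Fibonacci number.",
        },
    ],
)

# The raw text below is what I render in the UI (and what the screenshots show).
print(response.choices[0].message.content)
```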

