Hello,
We are currently using a fine-tuned GPT-4o model via the OpenAI API.
1. Issue Summary
Before the recent OpenAI service outages, this fine-tuned model was responding consistently and reliably, including producing outputs in a strictly defined JSON format as expected.
However, after the outages, we have observed the following issues, despite no changes to our prompts, system messages, or API request parameters:
- The API occasionally returns undefined, empty, or malformed outputs
- Outputs sometimes do not follow the expected JSON schema
- Intermittent API request timeouts when calling the fine-tuned model
2. Example Logs
Below are representative backend logs showing the timeout errors:
[WARNING] FT request failed for sentence s1: OpenAI API request timeout:
[FT MODEL ERROR] Sentence ID: s1
Error: OpenAI API request timeout:
[WARNING] FT request failed for sentence s4: OpenAI API request timeout:
[FT MODEL ERROR] Sentence ID: s4
Error: OpenAI API request timeout:
[WARNING] FT request failed for sentence s8: OpenAI API request timeout:
[FT MODEL ERROR] Sentence ID: s8
Error: OpenAI API request timeout:
3. Key Questions
We would appreciate clarification on the following points:
- Can updates, outages, or internal changes to GPT-4o base models affect the behavior, reliability, or output formatting of existing fine-tuned models?
- Are there any known issues or ongoing incidents related to fine-tuned GPT-4o models following the recent outages?
- Are there recommended mitigations or best practices to:
- Improve output format stability (e.g., strict JSON adherence)
- Reduce or handle timeout errors more effectively
- Is re-fine-tuning or migrating to a newer base model version recommended in this situation?
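As an interim guard on our side while we wait for clarification, we validate every response before use. A minimal sketch, assuming the expected payload is a single JSON object (the function name is ours, not part of the OpenAI SDK):

```python
import json

def parse_strict_json(raw):
    """Parse a model response string.

    Returns the parsed dict, or None if the response is empty,
    malformed JSON, or valid JSON that is not an object.
    """
    if not raw:
        return None
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return None
    return data if isinstance(data, dict) else None
```

Responses rejected by this check are retried or skipped rather than passed downstream, but we would still prefer guidance on preventing the malformed outputs at the source.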
4. Additional Notes
- The correct fine-tuned model name is being passed in all API requests.
- No changes were made to:
- Prompt structure
- Output schema
- Temperature / max tokens
- API client implementation
We rely on this model in production, so any guidance or confirmation would be greatly appreciated.
Thank you for your support.
P.S. Additional Observation Regarding JSON Output
In addition to the issues described above, we have observed a recurring pattern in the malformed JSON responses:
- The model occasionally appends the number 22 (or variations including 22) at random positions within the JSON output.
- This often results in:
- Invalid JSON syntax
- Extra numeric tokens appearing after closing braces
- Corruption of otherwise well-structured JSON responses
This behavior was not present prior to the recent service outage and occurs intermittently under the same prompt and configuration.
Example:
"105":22, "words229":22, "words":22994, "words":22, "words":22, "22":22, "words":2222, "words":22, "words":22, "words":22, "words":22, "words":22, "words":22, "words":22, "words":22994994, "words":22, "words":22, "words":22994, "words":22994, "AT":2222, "words":22, "AT":22, "AT2222,37":22, "words":22, "AT":22, "AT":22, "AT":22, "AT":22, "AT":22, "AT":22, "AT":22, "AT":22, "AT":22, "AT":22, "AT":22, "AT":22, "AT":22, "AT":22, "AT":22, "AT":22, "AT":22994, "words":22
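To flag these corrupted responses automatically, we check whether any tokens trail the first complete JSON value in the raw text (a rough sketch, assuming the payload should be a single JSON document; the function name is ours):

```python
import json

def has_trailing_garbage(raw):
    """Return True if extra tokens (e.g. stray numbers like 22) follow
    the first complete JSON value, or if no valid JSON prefix exists."""
    decoder = json.JSONDecoder()
    text = raw.strip()
    try:
        # raw_decode parses one JSON value and reports where it ends
        _, end = decoder.raw_decode(text)
    except json.JSONDecodeError:
        return True  # not even a valid JSON prefix
    return text[end:].strip() != ""
```

This catches the "extra numeric tokens after closing braces" pattern above, though not stray 22 values injected inside otherwise syntactically valid JSON.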
The system returned to normal operation as of 2025-12-18 00:43 UTC.
