A fine-tuned model (ft:gpt-4.1-2025-04-14:cvm:v1-optimized2:CtvBgA1l) that previously accepted vision inputs (image_url) now rejects them with error 400.
Error code: 400 - {
  'error': {
    'message': 'Invalid content type. image_url is only supported by certain models.',
    'type': 'invalid_request_error',
    'param': 'messages.[1].content.[0].type',
    'code': None
  }
}
It was working with image inputs as recently as last week, January 27th, 2026. I have run the same source code, without any changes, against the same fine-tuned model with image + text input, and now get this error every time. I am certain I have not mistakenly used a different model; this is the same fine-tuned model.
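For reference, here is a minimal sketch of the kind of Chat Completions payload that now triggers the 400. The model ID is the one from my error above; the image URL and prompt text are placeholders, not my actual production values:

```python
# Minimal sketch of the failing request payload (placeholders, not production code).
payload = {
    "model": "ft:gpt-4.1-2025-04-14:cvm:v1-optimized2:CtvBgA1l",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {
            "role": "user",
            "content": [
                # This image_url part is what now gets rejected with
                # "Invalid content type. image_url is only supported by certain models."
                # (note the error param: messages.[1].content.[0].type)
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/sample.png"},
                },
                {"type": "text", "text": "Describe this image."},
            ],
        },
    ],
}

# Sending it with the official SDK would look like (requires OPENAI_API_KEY):
# from openai import OpenAI
# client = OpenAI()
# resp = client.chat.completions.create(**payload)
```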
Thanks for the confirmation, I should’ve tested that before posting.
Just to clarify, your screenshots are showing that it is not a problem on my end?
Correct: I get the same API rejection using a fine-tuned gpt-4.1 model with image input on the Chat Completions endpoint, where it was working before, so it's not just you.
Yep. I tested my fine-tuned model (ft:gpt-4.1-2025-04-14:personal::CaD5lQEs) in the playground. Whenever I add an image input, it replies with an empty assistant response. At first it was spamming empty assistant messages like your earlier replication, but now it returns just one, followed by a message like “Rate limit reached for ft:gpt-4.1 in organization org-xxx on tokens per min (TPM): Limit 800000, Used 800000, Requested 765.”
We have also been seeing this issue for the past week and are unable to hit a fine-tuned model with image URLs, where previously (before Feb 2nd) it was working fine.
Is OpenAI aware of this/planning to restore functionality? @OpenAI_Support
If so, where can we track progress?
It’s 100% a bug on their end. I’ve been running four fine-tuned 4.1 models with images for 4 months and never had any issues, but since February 2nd, none of them accept images anymore. Just to be sure, I decided to run a new fine-tune using images on a 4.1 model to test. OpenAI ALLOWS you to fine-tune a model with images, but then doesn’t allow you to use images with it afterwards LOL.
Start requesting refunds, in dollar figures, for all your gpt-4.1 training credits - and it doesn't have to be a model trained on vision, because this makes any fine-tuned model unusable for the image-input pattern you want. That might get it fixed.
Apologies for the inconvenience. I understand this behaviour can seem vague, but after taking a look: officially, image input support is model-dependent. While the base gpt-4.1-2025-04-14 does support text + image input, vision fine-tuning is (today) documented as supported on gpt-4o-2024-08-06. So if your fine-tuned model is ft:gpt-4.1-..., the platform is effectively treating it as not vision-enabled for the image_url message type (hence the error “only supported by certain models”). As workarounds, you can use base gpt-4.1 for vision inference and keep your structured extraction prompt there, or re-train with vision fine-tuning on gpt-4o-2024-08-06 if you truly need “fine-tuned + vision.” Apologies again for the handling; I will ask our team to improve the error messaging so it is less vague.
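In case it helps anyone while this is unresolved, one way to implement the first workaround is to route image-bearing requests to the base model and keep text-only requests on the fine-tune. This is only a sketch; the model IDs below are placeholders you would replace with your own:

```python
# Route requests: messages containing images go to the base vision model,
# text-only messages keep using the fine-tuned model.
# Both model IDs are placeholders; substitute your own.
FINE_TUNED_MODEL = "ft:gpt-4.1-2025-04-14:org::abc123"
BASE_VISION_MODEL = "gpt-4.1-2025-04-14"


def has_image_part(messages):
    """Return True if any message contains an image_url content part."""
    for msg in messages:
        content = msg.get("content")
        if isinstance(content, list):
            if any(part.get("type") == "image_url" for part in content):
                return True
    return False


def pick_model(messages):
    """Choose the model based on whether the request includes images."""
    return BASE_VISION_MODEL if has_image_part(messages) else FINE_TUNED_MODEL
```

You would then call `client.chat.completions.create(model=pick_model(messages), messages=messages)` instead of hard-coding the fine-tuned model ID. Not a fix, obviously, since the fine-tuned behaviour is lost on the vision path.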
That is not an “inconvenience”. Using images with fine-tuned gpt-4.1 has worked since day 1.
How about asking the fine-tuning team at OpenAI, not the documentation, why inference is failing and output is going into loops of empty messages on applications that have been working long-term - and gpt-4.1 is clearly documented under “vision fine-tuning” on Azure?
I suggest you refund the training fees for every single gpt-4.1 DPO and supervised fine-tuning job ever run by any org if you are going to change the supported features after a year, removing the real-world utility - and that should be an official position, not what an AI told you.
I haven’t used my fine-tuned models extensively at all, having only fine-tuned them recently, and as a student, the ~$200 fine-tuning cost is not insignificant for me. Since the model previously supported vision inputs and that capability appears to have changed, I wanted to ask whether there is any possibility of a refund or credit adjustment in this situation. I would be very grateful for any guidance.
I hope you will revert everything, with a sincere apology to all the API users.
Otherwise, you should be ready for the majority of enterprise users to go fine-tune Gemini models instead, because Google doesn’t add a 50% fee for them, and there won’t be this kind of headache of a model simply stopping working after people have invested a lot of effort in your product.
Hey everyone, apologies for the inconvenience. We looked into this with our fine-tuning team and the issue has now been fixed. Can someone please take a look and confirm? Thank you!