Fine-tuned Model Vision Capabilities Lost

A fine-tuned model (ft:gpt-4.1-2025-04-14:cvm:v1-optimized2:CtvBgA1l) that previously accepted vision inputs (image_url) now rejects them with error 400.

Error code: 400 - {
  'error': {
    'message': 'Invalid content type. image_url is only supported by certain models.',
    'type': 'invalid_request_error',
    'param': 'messages.[1].content.[0].type',
    'code': None
  }
}

It was working last week, on January 27th, 2026, when I sent image inputs. I have run the same source code without any changes, using the same fine-tuned model with image + text input, but now get this error. I am certain I have not mistakenly used a different model instead of this specific fine-tuned one.
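For anyone trying to reproduce, here is a minimal sketch of the request shape that triggers the 400. The image URL and prompt are placeholders; the payload matches the `param` the error points at (`messages.[1].content.[0].type`).

```python
# Minimal reproduction sketch of the Chat Completions request that now
# fails with a 400 on fine-tuned gpt-4.1 models. URL/prompt are placeholders.

def build_messages(image_url: str, prompt: str) -> list:
    """Build a message list where the user turn (index 1) carries an
    image_url content part at index 0 -- the exact path named by the
    error's `param`: messages.[1].content.[0].type."""
    return [
        {"role": "system", "content": "You are a vision assistant."},
        {
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": image_url}},
                {"type": "text", "text": prompt},
            ],
        },
    ]

messages = build_messages("https://example.com/photo.png", "Describe this image.")

# Guarded API call -- needs OPENAI_API_KEY and network access:
# from openai import OpenAI
# client = OpenAI()
# client.chat.completions.create(
#     model="ft:gpt-4.1-2025-04-14:cvm:v1-optimized2:CtvBgA1l",
#     messages=messages,
# )  # currently raises openai.BadRequestError (400) as shown above
```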

4 Likes

Yep, it doesn’t get more confirmed than this - look at the model name (the fine-tune identifier suffixed to the base model name and org):

Chat Completions: validation blocked the call

Responses API: infinite loop

Vision-trained ft:gpt-4o - OK on both endpoints.

2 Likes

Thanks for the confirmation, I should’ve tested that before posting.
Just to clarify, your screenshots show that it is not a problem on my end?

Given that I get the same API rejection using a fine-tuned gpt-4.1 model with image input on the Chat Completions endpoint that was working before: it’s not just you.

2 Likes

We’re having this issue now as well; we had to switch to a non-fine-tuned model to make requests.

1 Like

I am also getting the same error. Is the only fix not to use a fine-tuned model?

Is there any update on how this can be solved? The best model I have, with gpt-4.1 functionality, isn’t working anymore.

Try this: do you have any success running your fine-tuned model on the Responses API? Is the output damaged for you there as well?
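For reference, the Responses API takes a different content shape than Chat Completions (`input_image`/`input_text` parts instead of `image_url`/`text`), so a sketch like the following can be used to check whether the same fine-tuned model misbehaves on that endpoint too. Model ID and URL are placeholders.

```python
# Sketch of the equivalent Responses API input payload for testing the
# same fine-tuned model on the other endpoint. URL/prompt are placeholders.

def build_input(image_url: str, prompt: str) -> list:
    """Responses API content parts use input_image / input_text types,
    not the image_url / text types of Chat Completions."""
    return [
        {
            "role": "user",
            "content": [
                {"type": "input_image", "image_url": image_url},
                {"type": "input_text", "text": prompt},
            ],
        }
    ]

payload = build_input("https://example.com/photo.png", "Describe this image.")

# Guarded API call -- needs OPENAI_API_KEY and network access:
# from openai import OpenAI
# client = OpenAI()
# client.responses.create(
#     model="ft:gpt-4.1-2025-04-14:personal::CaD5lQEs",  # your fine-tune ID
#     input=payload,
# )
```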

Yep. I tested my fine-tuned model (ft:gpt-4.1-2025-04-14:personal::CaD5lQEs) in the playground. Whenever I add an image input, it replies with an empty assistant response. At first it was spamming empty assistant messages like your earlier replication, but now it returns just one, followed by a message like “Rate limit reached for ft:gpt-4.1 in organization org-xxx on tokens per min (TPM): Limit 800000, Used 800000, Requested 765.”

1 Like

Still waiting on a resolution here. I ran into the same issue, where a multimodal fine-tuned model breaks down at inference time with image input.

2 Likes

Can confirm this is still broken.
Any update or workaround?

1 Like

We have also been seeing this issue for the past week and are now unable to hit a fine-tuned model with image URLs, where previously (before Feb 2nd) it was working fine.

Is OpenAI aware of this/planning to restore functionality? @OpenAI_Support
If so, where can we track progress?

It’s 100% a bug on their end. I’ve been running four fine-tuned 4.1 models with images for 4 months and never had any issues, but since February 2nd, none of them accept images anymore. Just to be sure, I decided to run a new fine-tune using images on a 4.1 model to test. OpenAI ALLOWS you to fine-tune a model with images, but then doesn’t allow you to use images with it afterwards LOL.

2 Likes

Start requesting refunds, in dollar figures, for all your gpt-4.1 training credits - and your model doesn’t have to be one trained on vision, because this makes any fine-tuned model unusable with the image input pattern you want. It might get fixed.

1 Like

Hey Everyone,

Apologies for this inconvenience. I understand this behaviour can seem vague, but after taking a look: officially, image input support is model-dependent. While the base gpt-4.1-2025-04-14 does support text + image input, vision fine-tuning is (today) documented as supported on gpt-4o-2024-08-06. So if your fine-tuned model is ft:gpt-4.1-..., the platform is effectively treating it as not vision-enabled for the image_url message type (hence the error “only supported by certain models”). As workarounds, you can use base gpt-4.1 for vision inference and keep your structured extraction prompt there, or re-train with vision fine-tuning on gpt-4o-2024-08-06 if you truly need “fine-tuned + vision.” Apologies again for the handling; I will ask our team to improve the error handling so it is less vague.
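For anyone who needs to keep shipping in the meantime, the first workaround (fall back to base gpt-4.1 for vision requests) can be sketched like this. The call function is injected so the fallback logic is testable without network access; in production it would wrap `client.chat.completions.create`, and the exception class stands in for the SDK’s 400 `BadRequestError`.

```python
# Hedged sketch: try the fine-tuned model first, fall back to base gpt-4.1
# if the platform rejects image input. Names here are illustrative.

BASE_VISION_MODEL = "gpt-4.1-2025-04-14"

class VisionNotSupported(Exception):
    """Stand-in for the 400 'image_url is only supported by certain
    models' rejection raised by the SDK."""

def complete_with_fallback(call_model, fine_tuned_model, messages):
    """Return (model_used, response); retry on the base model when the
    fine-tuned one rejects image input."""
    try:
        return fine_tuned_model, call_model(fine_tuned_model, messages)
    except VisionNotSupported:
        return BASE_VISION_MODEL, call_model(BASE_VISION_MODEL, messages)

# Demo with a fake call that mimics the current breakage:
def fake_call(model, messages):
    if model.startswith("ft:gpt-4.1"):
        raise VisionNotSupported()
    return "ok"

model_used, resp = complete_with_fallback(
    fake_call, "ft:gpt-4.1-2025-04-14:org::abc123", []
)
# model_used == "gpt-4.1-2025-04-14", resp == "ok"
```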

1 Like

That is not an “inconvenience”. Using images with fine-tuned gpt-4.1 models has worked since day one.

How about asking the fine-tuning team at OpenAI, rather than the documentation, why inference is failing and output is going into loops of empty messages in applications that have been working long-term - while gpt-4.1 is clearly documented under “vision fine-tuning” on Azure?

I suggest you refund the fine-tuning training fees for every single gpt-4.1 DPO and supervised fine-tuning run that has ever taken place by any org, if you are going to change the supported features after a year and remove the real-world utility - assuming that is an official position and not just what an AI told you.

There has always been success - going back to just after release.

3 Likes

I haven’t used my fine-tuned models extensively at all, having only fine-tuned them recently, and as a student, the ~$200 fine-tuning cost is not insignificant for me. Since the model previously supported vision inputs and that capability appears to have changed, I wanted to ask whether there is any possibility of a refund or credit adjustment in this situation. I would be very grateful for any guidance.

1 Like

I have followed up with the team, as I expect this may be due to a misunderstanding somewhere within the organization.

2 Likes

I hope you will revert everything, with a major apology to all the API users.

Otherwise, you should be ready for the majority of enterprise users to go and fine-tune Gemini models instead, because they don’t add a 50% fee for them, and there won’t be this kind of headache with any model in the future, where something just stops working after people have invested a lot of effort in your product.

2 Likes

Hey everyone, apologies for the inconvenience. We looked into this issue with our fine-tuning team and have now fixed it. Can someone please take a look and confirm? Thank you!

5 Likes