Evals with custom endpoint model

Hi there,
I’m trying to run an evaluation against a Qwen/Qwen3-VL-4B-Instruct model that I’ve hosted on a local machine using vLLM.

I’ve set up a custom model provider, and the test from the UI works properly — I can see that a message is sent from OpenAI to my server.

Then I try to set up an eval run with this code:

client.evals.runs.create(
    name="Image Input Eval Run",
    eval_id=eval_object.id,
    data_source={
        "type": "completions", # tried both completions and responses
        "source": {"type": "file_content", "content": evals_data_source},
        "model": "Qwen/Qwen3-VL-4B-Instruct",
        "input_messages": {"type": "template", "template": input_messages},
        "sampling_params": {
            "seed": 42,
            # "text": {"format": {"type": "json_schema", **OUTPUT_SCHEMA}},
        },
    },
)

But I always receive

Error code: 400 - {'error': {'message': 'Image inputs are not supported for sampling model: Qwen/Qwen3-VL-4B-Instruct. Try again with a vision model.'}}

Even though my model does indeed support images in the chat completion API.
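To show what I mean, here’s roughly how I confirmed image support by calling the vLLM server’s chat completions endpoint directly (outside the Evals API). The base URL, port, and image URL are just placeholders for my local setup:

```python
# Sketch: send one multimodal message straight to the local vLLM server to
# confirm the model itself accepts images. URLs/ports below are assumptions
# about a typical vLLM deployment, not part of the Evals run above.
import json
from urllib import request

def build_image_message(prompt: str, image_url: str) -> dict:
    """OpenAI-style chat message with one text part and one image_url part."""
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    }

payload = {
    "model": "Qwen/Qwen3-VL-4B-Instruct",
    "messages": [
        build_image_message("Describe this image.", "https://example.com/cat.png"),
    ],
}

# Uncomment to actually hit the server (assumes vLLM on localhost:8000):
# req = request.Request(
#     "http://localhost:8000/v1/chat/completions",
#     data=json.dumps(payload).encode(),
#     headers={"Content-Type": "application/json",
#              "Authorization": "Bearer EMPTY"},
# )
# resp = json.loads(request.urlopen(req).read())
# print(resp["choices"][0]["message"]["content"])

print(json.dumps(payload, indent=2))
```

A direct call like this returns a normal completion, which is why I’m confident the model handles images fine.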

I’ve noticed that even when my server is offline the error is the same. Am I correct to assume that I can’t send images to an external model at all, or am I doing something wrong?

I’ve also tried sending requests to Qwen3 models via OpenRouter, but I receive basically the same error.