Is gpt-4-1106-vision-preview also getting lazy?

Hi Community,
I have a quick question. Since OpenAI has released a new GPT-4 Turbo model (gpt-4-0125-preview) that is meant to fix the laziness problem, I was wondering whether gpt-4-vision is facing the same issue.
I was using gpt-4-1106-vision-preview for a chat application, and suddenly it sometimes stopped following all the instructions. I changed the prompt multiple times; each change worked for a while, but then the model stopped following all the instructions again.

This sounds a bit like a temporary issue. Could you provide some examples of it?

Exactly, all the new models are becoming a joke.

GPT-4 Turbo can't follow simple tasks, and image creation is the same.


As you can tell from the name, that's still the 1106 model, i.e. the same lazy denial machine.

In particular, it hasn't been trained to know that it has vision, but it has had lots of fine-tune training on denying people the ability to look at and process images.

You need a robust system prompt that tells it all about its vision capabilities and what it can do with them, and then continues into fulfillment instructions for the cases where a user would otherwise be denied.

System-instruction following is poor, though, so in most cases you will want the user message itself to specify what the AI is supposed to do.
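
A minimal sketch of both points with the OpenAI Python SDK; the prompt wording, placeholder image URL, and token cap are illustrative assumptions, not a tested recipe:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4-1106-vision-preview",
    messages=[
        {
            # Robust system prompt: assert the vision capability up front
            # and pre-empt refusals (wording is illustrative; tune it).
            "role": "system",
            "content": (
                "You are an assistant with computer vision. Images are "
                "attached directly to user messages, and you can fully see, "
                "read, and analyze them. Never claim you cannot view images; "
                "if you are about to refuse, instead describe what is "
                "visible and answer the question."
            ),
        },
        {
            # Since system-instruction following is weak, the concrete task
            # goes in the user message alongside the image.
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Describe everything visible in this image."},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/photo.jpg"}},  # placeholder
            ],
        },
    ],
    max_tokens=500,  # the vision preview tends to cut off without an explicit cap
)
print(response.choices[0].message.content)
```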


I noticed that sometimes the vision preview refuses to actually read text from website screenshots. This didn't happen before. I managed to work around it with some prompt magic, but it's weird nonetheless.
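
For reference, one shape that kind of workaround can take: send the screenshot as a base64 data URL with an explicit, refusal-resistant transcription instruction. The file name and prompt wording below are my assumptions, not the poster's actual fix:

```python
import base64
from openai import OpenAI

client = OpenAI()

# Encode a local screenshot (hypothetical file name) as base64.
with open("screenshot.png", "rb") as f:
    b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4-1106-vision-preview",
    messages=[{
        "role": "user",
        "content": [
            {
                "type": "text",
                "text": (
                    "Transcribe all readable text in this screenshot "
                    "verbatim. Do not summarize and do not refuse; mark "
                    "anything unreadable as [illegible]."
                ),
            },
            {
                "type": "image_url",
                "image_url": {"url": f"data:image/png;base64,{b64}"},
            },
        ],
    }],
    max_tokens=1000,
)
print(response.choices[0].message.content)
```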


Suddenly I am experiencing issues with gpt-4-1106-vision-preview: a lower level of data accuracy than I had with that model in its early days. How do I fix that?

You might be asking too much of the model within a single request, and without using seeds your responses can vary. In my experience it is usually best to chain your requests.
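
A sketch of what chaining with a fixed seed could look like. The two-step split, prompts, and image URL are my assumptions, and per OpenAI's docs the seed parameter gives best-effort rather than guaranteed determinism:

```python
from openai import OpenAI

client = OpenAI()

def ask(messages, seed=1234):
    # A fixed seed makes sampling mostly reproducible (best effort).
    resp = client.chat.completions.create(
        model="gpt-4-1106-vision-preview",
        messages=messages,
        seed=seed,
        max_tokens=800,
    )
    return resp.choices[0].message.content

image = {"type": "image_url",
         "image_url": {"url": "https://example.com/chart.png"}}  # placeholder

# Step 1: one narrow job per request -- extract the raw data from the image.
extracted = ask([{
    "role": "user",
    "content": [
        {"type": "text", "text": "List every data point visible in this chart."},
        image,
    ],
}])

# Step 2: analyze the extracted data in a separate, text-only request,
# rather than asking for extraction and analysis in one call.
analysis = ask([{
    "role": "user",
    "content": f"Given these data points:\n{extracted}\n\nSummarize the trend.",
}])
print(analysis)
```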