GPT-4 with Vision: Mirror Image Perception Issue?

I’ve noticed that GPT-4 with Vision seems to be reversing right and left consistently.
Does this model perceive things in a mirrored way?
Does anyone know about this?


Hi and welcome to the Developer Forum!

Interesting finding! Do you have some examples?


Not OP, but here are some from my testing: whenever it looks at a car, it reports LH and RH reversed relative to industry terminology.

So I tried it with the same picture of that car and got the same result. The AI is referring to the driver's perspective.

But it's not mirroring pictures:


I can understand why it refers to the damage as being on the "left side", but my preliminary prompts establish that I work in collision repair. I was hopeful that would be enough to activate the weights from any automotive-related training.

I can say for a fact, having worked in both the US and the UK, that the universal left and right on a car is from the perspective of the driver's seat inside the car, facing forward. The idea that left and right change depending on LH or RH drive (the side the steering wheel is on) is nonsensical.

Guess it still needs some work.

But I specifically asked it for the damaged side from the driver's perspective and it said "left", which is just wrong. It had no trouble with the arrow, so it's still not clear to me.
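One workaround worth trying is to pin the frame of reference in a system prompt instead of relying on the model to infer it from context. Here's a minimal sketch of how that request could be structured with the OpenAI chat-completions message format; the model name, image URL, and prompt wording are my own assumptions, not anything from OP's setup:

```python
# Sketch: disambiguating "left/right" by stating the frame of reference
# up front, so the model doesn't have to guess between the camera's and
# the driver's perspective. Model name and image URL are placeholders.

def build_vision_request(image_url: str) -> dict:
    """Build a chat-completions payload that fixes the LH/RH convention."""
    system_prompt = (
        "You are assisting a collision-repair estimator. "
        "Always describe a vehicle's sides from the driver's seat, "
        "facing forward (industry LH/RH convention), never from the "
        "camera's point of view. State which convention you are using "
        "in every answer."
    )
    return {
        "model": "gpt-4-vision-preview",  # placeholder model name
        "messages": [
            {"role": "system", "content": system_prompt},
            {
                "role": "user",
                "content": [
                    {"type": "text",
                     "text": "Which side of this car is damaged?"},
                    {"type": "image_url",
                     "image_url": {"url": image_url}},
                ],
            },
        ],
    }

payload = build_vision_request("https://example.com/damaged-car.jpg")
```

No guarantee it stops the mirroring, but in my experience an explicit convention in the system message gets followed more reliably than background context like "I work in collision repair".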


Sounds like a good ol’ hallucination to me. It’s made a decision that the damage is to the left and is now justifying its own faulty logic.
