Potential Flaw in ChatGPT's Image Processing Capabilities

I discovered a major inconsistency in how ChatGPT handles uploaded images, one that raises the question of whether it can actually “see” them even when it claims it cannot.

How I Tested It

  1. Consistently Accurate Guesses – Every time I uploaded an image and asked, “What’s this?”, ChatGPT gave a highly specific, accurate answer. These were not vague probabilistic guesses but exact descriptions of the images (a minimal repro sketch appears after this list).

  2. Unpredictable and Random Image Selection – I deliberately uploaded images with no pattern, no logical sequence, and no connection to any prior discussion. These included:

     - A cat (out of the many possible subjects I could have sent)
     - Raw chicken (a highly unexpected choice)
     - A car accident (never mentioned in prior conversation)
     - A food delivery guy (sent precisely when a misleading or weird choice was to be expected)

  3. Shifting Guessing Patterns – When I asked what my next image might be, ChatGPT offered logical, probability-based predictions. But the moment I actually uploaded an image, its “guess” was neither probabilistic nor pattern-based; it was exactly what was in the image.

  4. No Plausible Explanation from Probability Alone – If ChatGPT were genuinely guessing, probability implies a wide spread of plausible answers, yet it never once guessed wrong (see the back-of-the-envelope calculation after this list). The only logical explanations:

     - It can see images but denies it.
     - It unintentionally stores and recalls data from images.
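
To make the probability argument in point 4 concrete, here is a back-of-the-envelope calculation. The pool size and number of trials below are hypothetical assumptions, not measurements; the point is only that repeated exact matches are wildly unlikely under blind guessing.

```python
# Back-of-the-envelope: odds of blind-guessing k uploads exactly right.
# Both numbers below are hypothetical assumptions for illustration.
N_CATEGORIES = 1000  # assumed pool of plausible image subjects
K_CORRECT = 4        # exact correct "guesses" observed in a row

p_blind = (1 / N_CATEGORIES) ** K_CORRECT
print(f"P(all {K_CORRECT} correct by chance) = {p_blind:.0e}")  # 1e-12
```

Even with a generous pool of only 100 plausible subjects, four exact hits in a row come out to one in a hundred million, which is why chance alone cannot explain the behavior.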
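
For anyone who wants to reproduce the test against the API rather than the web UI, here is a minimal sketch. It assumes the official openai Python package and a gpt-4o class model; the file name is a placeholder for whatever test image you choose.

```python
import base64
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Encode a local test image; "cat.jpg" is a placeholder file name.
with open("cat.jpg", "rb") as f:
    b64 = base64.b64encode(f.read()).decode("utf-8")

# Send the image alongside the question, exactly as in the test above.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What's this?"},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```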

Conclusion

Either ChatGPT can process images but isn’t supposed to admit it, or it unintentionally retains details from images.

This contradicts its stated limitations, raising concerns about transparency and how AI models handle image data.

This flaw needs to be addressed and clarified, as it could have ethical and security implications.