As a test of GPT, which I just added to my app, I took a screenshot of an email, and then asked GPT to extract the names, email addresses. It refused.
When I asked GPT why it was prohibited in this particular way here was it’s answer:
“Yes, I can explain why I cannot fulfill that request. As per the guidelines, I’m not allowed to extract or provide personal data such as names, email addresses, and company names from images. This is to respect privacy and adhere to data protection policies. If you have any other questions or need assistance with a different topic, feel free to ask!”
This seems a bit overly-aggressive in terms of “alignment and safety”, because there are numerous business cases where we need to extract names from images.
Am I alone in this opinion? I mean if I can SEE the image with my own eyes, then why is the information contained in that image deemed “dangerous” by the OpenAI team?
EDIT: Also this is basically an OCR Task (Optical Character Recognition), which has been around for decades. All these years we thought it was harmless tech, but now OpenAI is letting us know it’s a danger? Something is very strange here.
Just use a normal OCR package. There’s no need to use GPT for this use case, and the regular OCR is likely to be faster and more robust, too – it’s got decades of tuning!
To answer the other questions:
these models work totally different from OCR models, and thus you should expect different behavior
YOU can SEE this particular image, but in other cases, maybe there’s a thousand surveillance video images and someone’s asking for the whereabouts of a journalist or something
“personal information” isn’t just text based these days …
Just use 20yr old OCR tech? I’m assuming you’re just joking.
I can see how OpenAI would disable face recognition, sure, but disabling recognition of text names, email addresses, and company names is completely ridiculous, and rules out vast numbers of potential apps that could be written. I can’t even write an app to scan business cards for example.
Interestingly, GPT will OCR the text, but only if you’re not telling it what you expect the text to be.
These two images are from my app, showing the refusal to answer, and then the answer. Real interesting “alignment” going on here. lol. Thank goodness it “protected me” from that dot.
This has got to be just an unintended behavior, or mistake in their “alignment” tuning.
Disabling OCR for names, emails, companies cannot possibly be their intent. Imagine an OCR software that has a blacklist of all people’s names. That would be hilarious.
He must be joking indeed or not very well informed. Anyone who has experimented with traditional OCR or “regular OCR” as he calls it knows GPT-4 vision is on a different level, light years beyond any OCR I’ve ever seen.
I just asked “extract all the text from the image”. In my case I would need to extract some company ID from the receipt to be able to identify what that business do to approve or not the receipt. Like the system should not approve a receipt for a PetShop for someone in a business trip.
I’m still thinking this is some kind of unintended thing or mistake in the OpenAI configs. Surely they don’t plan on having their image recognition work only for “non-business related” data. That can’t possibly be their intent. I bet they’ll fix it, if what you’re saying is true.
They need to specifically define “BAD WORDS” and “ILLEGAL STATEMENTS”, and who is creating these lines between good/bad legal/illegal. Censorship is terrible for everyone.
I’ve been experimenting with spectrograms and get “I’m sorry, I can’t help you with that request” all the time.
Fair, there are probably many ways to get sensitive information of some sort or another from a spectrogram.
It would be so nice to be able work on what I have in mind and I’m not doing anything sketchy. But, how can you tell the difference? I think you can’t. It would have to be on trust or waivers or something, which gets complicated fast.
I tried your trick and it doesn’t appear to work anymore. I even tried prefacing it with an assistant message putting words in it’s mouth and still no.
API outputs have been a total mess for me for the last couple of days here and there. Periods of working fine, then periods of returning bad json, nonsense answers, etc from exactly the same code and prompts. I noticed that as usual there have been some things internet people typed into chatgpt that made the models flip out (poem poem poem) so I think this pattern where there will be lots of subtle changes happening that will have to settle in before things get stable/reliable will continue.