Does anyone know why I have to be verified as human for every conversation, with each verification taking 10 actions? Is it purposely trying to reduce the efficiency of my use?
Wait, do you mean captchas?
Interesting, I haven’t ever seen any captcha on ChatGPT.
And no, OpenAI isn’t trying to slow down your work.
I guess they’re implementing captchas because you can use Selenium to get a free gpt-3.5-turbo API via the web interface.
More likely than not, it is because you are allowing third-party scripts and products to run within ChatGPT.
uBlock Origin → dashboard → my filters, add:
||tcr9i.chat.openai.com
||featuregates.org
||api-iam.intercom.io
||events.statsigapi.net
||statsigapi.net
||js.intercomcdn.com
||widget.intercom.io
The first is the one to target. It is required for platform authentication, but if you are already logged in to ChatGPT, it can be turned off. The others are for A/B testing, tracking, session replay, customer support, etc. Comment out any of these with an exclamation point if they prevent Plus features from working.
Report if this stops the pain.
I had the same experience: after some months of ChatGPT+ activity, mostly on GPT-4, a Captcha “attack” started on almost every third prompt or so. My typical usage is to make a few prompts and come back with follow-ups after a while, sometimes even after some hours.
I was really annoyed by that point. Then, after about a week, the Captcha spam suddenly stopped. Luckily, that behaviour is gone now.
Maybe OpenAI could share some details on what triggers this massive Captcha verification?
The captchas are a feature/issue that comes from Cloudflare, the CDN used for the ChatGPT web interface.
To gain a little more insight into your specific situation, you can request a download of your data. Somewhere in there (I think it is the model comparison file) you can see a “cf bot score”.
If this number is too low, say below 10, you are being detected as a bot, with the corresponding consequences.
From there you can start to test what influences your specific score: for example, deactivating plug-ins, using a different VPN server or no VPN at all, trying a different browser, etc.
When the number gets closer to 100 you will not encounter this issue any more.
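To save scrolling through the export by hand, here is a minimal Python sketch. The exact layout of model_comparisons.json isn’t documented, so the record structure and key names below are assumptions based on the metadata block quoted later in this thread; adjust them to whatever your export actually contains.

```python
import json

def bot_scores(records):
    """Collect Cloudflare bot scores from export records.

    Assumes each record may carry a "metadata" dict with a
    "cf-bot-score" field stored as a string (layout is a guess).
    """
    scores = []
    for record in records:
        meta = record.get("metadata", {})
        if "cf-bot-score" in meta:
            scores.append(int(meta["cf-bot-score"]))
    return scores

# Against the real export, something like:
# with open("model_comparisons.json") as f:
#     print(bot_scores(json.load(f)))

# Inline sample mirroring the metadata block seen in an export:
sample = [{"metadata": {"cf-verified-bot": "false",
                        "cf-threat-score": "0",
                        "cf-bot-score": "90"}}]
print(bot_scores(sample))  # → [90]
```

If most of your scores sit near 100 you are fine; a run of low values lines up with the captcha sessions.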
Ultimately, Cloudflare of course does not reveal how it calculates this score, to prevent users from automating the web GUI and causing heavy traffic.
It is unlikely that OpenAI will ever come forward and make statements about the inner workings of Cloudflare, so I wouldn’t wait for it.
Sorry, but that’s incorrect.
The Captchas are a product called Arkose Matchkey.
On the product page where they sell this, Arkose doesn’t even demonstrate the “extreme level” challenges, which are just purple nonsense.
Cloudflare has its own CAPTCHA, as used on Microsoft Bing, but it is usually just a “click if you are a human” check that appears more often, rather than a “this is a bot that must not pass” barrier.
Alright, maybe.
But then this information should be included in the chat history download as well; otherwise it would be a violation of the GDPR to collect data about a user without sharing it on request.
I may look it up later.
csp-report.browser-intake-datadoghq.com
Thanks, but it’s not working.
Well thanks for trying and reporting back.
I don’t get the captchas myself, so I can’t try to make them stop with technical measures.
The first site is also used for logging in, so it should be commented out with “!”, but the others are guinea-pig experiments run on ChatGPT users, which you can opt out of by not letting those sites communicate.
It’s strange that normal access triggers Datadog to submit a CSP report.
Ok, I did take some time to dig into this again and found, unsurprisingly, that Arkose services can be delivered via the Cloudflare CDN. One sets up a worker and can then integrate the Arkose functionality into the service just like a more standard Cloudflare solution.
Whether the data in the model_comparisons.json file actually refers to the values from the Cloudflare or the Arkose products is unclear, but I would assume it’s the Cloudflare information.
"metadata": {
/.../
"cf-verified-bot": "false",
"cf-threat-score": "0",
"cf-bot-score": "90"
}
In this case one can still correlate the values. For example, the last (and first) time I got to see one of the Arkose captchas was after a week of messing around with headful Selenium browser sessions, when my cf-bot-score dropped to 5. That’s how I learned about this in the first place.
It also makes a lot of sense that users cannot simply disable the anti-bot measures by blocking traffic from inside their browser. It is also leading away from the core of the problem:
It is highly unlikely that an average user will be detected as a bot. More likely, some type of extension, browser, automation, or whatever has been caught, and now OP and the bot manager are playing the good old “catch me if you can”.
And this is totally fine as well.
This is the developer forum, and legit developers report that while developing plug-ins their own traffic is rated as “unusual activity”. At the same time, it makes a lot of sense to integrate browser extensions on top of the existing web UI.
In short: I’d suggest checking all the solutions suggested in this thread and using the forum search for more ideas on getting rid of the bot status.
I tried that. I did not receive the verifications, but when I sent a message to GPT-4 there was no reaction either.
Wouldn’t it at least somewhat defeat the purpose of trying to filter bot traffic if OpenAI were to divulge exactly what they view as likely bot traffic?
Google, Amazon, and others have had to deal with that for years which is why they don’t divulge much about their algos…
Well, if normal usage behaviour causes unnerving captcha challenges, then maybe it’s a sign that their approach to bot detection is wrong.
But, for example, I tend to stay in longer sessions without activity, so maybe that causes the problems. If I knew I should close the session more often, I’d rather do that than solve their captcha puzzles.
But it could also be that they take a random approach to this, so every user, independent of their behaviour, origin, or time of day, will have to solve those puzzles from time to time.
Or it’s just ChatGPT itself that isn’t sure whether the other side it speaks to is human or another LLM trying to get trained.
Not really. There’s always going to be some overlap & difficulties when it comes to catching bots because it’s pretty damn easy for automated browsers to imitate a real browsing session, while users can do a lot of things that accidentally imitate bot behavior.
Agreed, but obviously you haven’t experienced these issues before; otherwise you would be less understanding. We are not talking about a captcha from time to time, we are talking about one every second or third prompt.
I don’t see why it matters if I have dealt with it.
I have, but only when I accidentally change my window size or enter with responsive mode.
Don’t get me wrong. I think OpenAI has very strict policies and also find them frustrating at times. I’m just saying, you are part of a minority overlap and may want to figure out why your browser is being flagged.
As I mentioned before, I don’t experience these annoying problems anymore. It suddenly started, and it suddenly stopped. When it started, it almost made the whole service unusable for me. Why did it start? Why did it stop? I don’t know.
My impression is that I had the least to do with it. And I don’t want to figure out why “my browser is being flagged”. Because my computers are kept very well in shape and I am very conservative about which sites I am visiting, which emails I open and which browser plugins I am using.
Sounds more like a bug on OpenAI’s side to me. Or in ML terms: a classifier problem with a non-SOTA F1 score.