Verifying an organization account appears to be next to impossible, so why put people through these hoops?

The entire verification process is flawed. For a start, you only get a single verification attempt per account, because the “refresh” button is dysfunctional.
Then with each new account you can go to that “Persona” service, upload your passport seven times until it recognizes it, and then fail the face verification anyway.

I can open a bank account and invest half a million in an online portfolio without any issues, and I can found a corporation, no problem. But using an OpenAI account is made this difficult for no reason.

Of course there is always the option to message support. From all my messages I got one copy/paste response asking me to provide a screenshot. From there on, nothing.

2 Likes

I’m having the same issue. I’m just an individual user from Switzerland, but I happen to have a Chinese name. I would really love to know exactly what the problem with this verification is.

1 Like

Hey @NotAlife and @V12, thanks for flagging this, and sorry for the hassle.

We’re aware there are a few bumps with the organization verification flow right now:

  • After finishing verification, you might need to hard refresh your page (Ctrl+Shift+R or Cmd+Shift+R) to see it update.
  • If the first attempt didn’t go through, the “request a new link” button might not work correctly.
  • If your verification was denied, the page might not update right away until you refresh.

The team’s working on making this whole process smoother. In the meantime, feel free to share any error messages you're seeing (as long as they don't have any personal info in them 😅) and I can take a look!

3 Likes

It failed the first time. When I try again, I get this error from Persona:

Asking for a new link is not working.

1 Like

Hmm! I see. Thanks for trying anyway.

To clarify:

  • Org verification is only required for access to specific gated features, like o3/o4 models or reasoning summaries.
  • If you’re seeing “request a new link” but can’t proceed, it likely means your org was already evaluated (and denied) based on internal trust criteria. Unfortunately, it seems retries aren’t currently supported in that state, but we’re working on better messaging and recovery options.
  • If you believe this was a false positive, please write to support@openai.com with your org ID. We can escalate for manual review as needed.
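
For API users, this gating typically surfaces as an error response rather than a page you can retry. Here's a minimal sketch of spotting it programmatically; the marker phrases below are assumptions based on community reports of the error text, not documented API behavior, so adjust them to what you actually see:

```python
def looks_verification_gated(error_message: str) -> bool:
    """Heuristic check for org-verification gating in an API error string.

    The phrases below are assumptions based on community reports of the
    error wording, not a documented contract. Adjust to what you observe.
    """
    markers = (
        "verify organization",
        "must be verified",
        "organization verification",
    )
    msg = error_message.lower()
    return any(marker in msg for marker in markers)


# Example: the kind of message users report when calling a gated model.
sample = "Your organization must be verified to use the model `o3`."
print(looks_verification_gated(sample))  # expected: True
```

In practice you would run this on the message of the exception raised by your API client, so your error handling can distinguish a verification gate from a rate limit or outage.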
1 Like

✅ I had the same issue. I invited my wife as an Admin, and she was able to verify the organization with her ID.

1 Like

This is absolutely useless!!!

I have tried four times and had to create three member accounts just to get a new verification link!!

Each one failed, and it isn’t because my docs were wrong, fake, or out of date! It’s the step that verifies who you are by capturing an image. As soon as the little grey rotating head pops up, the camera on my device goes out of focus and the verification fails. Just to be sure I wasn’t going mad, I tried on two different phones, and both failed.
This is so frustrating!!!

1 Like

Perhaps explain why this is required, and why people should have to produce videos of themselves posing with their ID, with their face recorded from multiple angles.

The help page justification about bad actors is obviously a ruse.

It seems that OpenAI is trying to force this as a feature gate on anything new it offers: outright denial of a fine-tuning method, on top of blocking an image-creation model and reasoning summaries. Blocking streaming on the o3 model is nothing other than degrading a service that costs the same, for which there can be no legitimate justification other than to further force this ID-verification mechanism. And since “you have to pay more” works as a bypass for models like o3 and o4-mini, any other justification one could speculate on (such as discouraging distillation) is dispelled.

What is the actual motivation and profit to be had by OpenAI?

  • what is the nature of the relationship and partnership between OpenAI and withpersona.com, such that this VC creation under two years old put “OpenAI” at the top of its vendors page before this was even announced?
  • what is the data sharing arrangement?
  • what services is OpenAI providing to this company in exchange for the identity service?

Then, can you guarantee the points below? Or is the answer instead, “no, that’s the entire motivation behind this”?

  • That AI will never be trained on PII from images and videos containing biometrics, such as ID cards and videos of people,
  • That outsourced human graders will never be handed biometric information without accountability,
  • That the whole scheme isn’t currently just another “fake AI”, a “magic algorithm” that is entirely human-powered,
  • That images generated by API AI models will never be watermarked with a link to personally identifying information,
  • That biometric information will not be used as a product that can be bought and sold,
  • That what OpenAI or this company is doing would not be multiple violations of the terms and conditions if an API user were doing it,
  • etc.

The legal safeguards this corporation grants itself in its forced agreements, such as “no class actions”, don’t seem merely boilerplate; they seem to claim complete non-accountability for any misuse or data breach, and to allow anything without justification.

Then, as I pointed out in another thread, the exact mechanisms employed and the forced motivations are direct violations of many jurisdictions’ laws, including mine.

Produce an answer, please. (Also, for the person answering as a representative, you can send me a recognizable photo of your personal ID and a video of holding that ID, use of which I will be held harmless, ensuring parity in this bilateral trust.)

1 Like

I’m not OpenAI, but they seem to be very, very scared of others distilling their models off-platform. They’re worried about someone recording reasoning summaries and then training other models on them in place of reasoning tokens.

So many of the “open weights” models are just distillations of OpenAI models, so the fear isn’t unfounded. They don’t want reasoning traces, o3, and image generation being copied until they already have competition.

OpenAI’s business model isn’t just selling the platform and the compute. They’re selling the ability to use LLMs that only they have access to. So naturally, the realization that you can easily just train off their outputs, and walk away with basically the same thing, terrifies them. Hence, dystopian ID checks.
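
For context on why training off outputs is so easy: a distillation dataset is just captured prompt/response pairs. Here's a hedged sketch of packing such pairs into the chat-style JSONL commonly used for fine-tuning; the field names follow the widespread convention, but treat the exact schema as an assumption, since each trainer expects its own format:

```python
import json


def to_finetune_jsonl(pairs):
    """Format captured (prompt, response) pairs as chat-style JSONL records.

    Illustrative sketch of how distillation datasets are typically assembled;
    the "messages"/"role"/"content" schema is a common convention, not a
    guarantee about any particular training pipeline.
    """
    lines = []
    for prompt, response in pairs:
        record = {
            "messages": [
                {"role": "user", "content": prompt},
                {"role": "assistant", "content": response},
            ]
        }
        lines.append(json.dumps(record))
    return "\n".join(lines)


# One captured exchange becomes one JSONL line of training data.
captured = [("What is 2+2?", "4")]
print(to_finetune_jsonl(captured))
```

Each API response you log is one more line of ready-made training data, which is exactly why off-platform distillation is hard to prevent with anything short of identity gating.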

It isn’t very “open” but whatevs… we have affordable API for most other stuff so it could always be worse.

TLDR: They don’t want DeepSeek taking notes that are too detailed.

2 Likes

This gets my attention. If this is going to gate my access to new stuff, it kinda feels like I should up the pace of finding alternatives. Hmm.

2 Likes

This is prob the reason. The only question then is: how is it decided who gets verified and who doesn’t?

1 Like

You’re not in a sanctioned country and you don’t match the ID of anyone they banned.

1 Like

Anyone can get someone else to verify their ID for the org, like in the example in this thread. As with a lot of the controls put in place in the financial sector, the intent behind the ID verification seems justifiable, but in practice all it’s doing is making the lives of good actors difficult. Bad actors will find a way around it, which will then prompt OpenAI to introduce even more cumbersome user-driven verification. They need to figure out a better way to identify bad actors (based on behaviour, etc.) while maintaining the great developer experience we have been enjoying.

1 Like

There’s no evidence Persona does any of that, and I also discounted distillation training as the motive, since you can just pay your way to tier 4 (or steal API keys) and still be blocked despite a verification status of “OK”, as others discovered.

Persona’s entire job is to tell OpenAI who you are. What do you propose OpenAI does with this information?

If OpenAI bans you they eat your credits. Not doable.

You say it like this is similar to walking into a store and stealing something from a shelf. Or that whoever’s API key it is won’t notice that they’re being billed 100x more than normal.

1 Like

Oh, people notice. You can see all the anecdotes of insecure, key-leaking developers coming here after Chinese hackers drain hundreds of dollars in one go, obviously with a job of data exfiltration from OpenAI already in mind.

I point out that because you can upgrade tiers by simply paying and thereby receive o3 and o4-mini, the justification for the validation on the help.openai.com pages is further debunked.

Does anyone know if you can verify on a Plus account (personal)? Or does it need to be a Business/Team account? I’m having the same issues with repeatedly failed Australian passport scans, and now my link has expired and won’t refresh.
I’m on Plus and wondered if upgrading would help?

1 Like

You do not need to have any ChatGPT account at all (although they give you one anyway).

The API is a separate, unconnected system with its own organization.

Pretty much: if it says expired and you can’t get it to verify, that is par for the course with this service. Hit your favorite internet review site and take “withpersona.com” down a peg, so that other businesses know not to get entangled with this failure, and then email support@openai.com to see if they’ll fix yours, where before they just said “sorry”.