Subject: ID Verified, Then Locked Out by OpenAI. Why Are We Being Treated Like Suspects?

Hi everyone,

I have to share my absolute frustration with OpenAI’s organization verification process and find out if anyone else is experiencing the same Kafkaesque nightmare. I’m a long-time, top-tier user, but this recent ordeal has me actively looking for alternatives and warning others.

Imagine this: You go to McDonald’s to buy a Big Mac. The cashier stops you and demands a government-issued ID. Confused but compliant, you show your driver’s license. A third-party scanner confirms it’s 100% real. But then the cashier says, “Sorry, you still can’t have the Big Mac. We can’t tell you why. And no, you can’t try again.”

This is exactly what OpenAI is doing to me.

OpenAI is the only major AI provider I know of that demands a government ID to access its advanced models. I reluctantly went through the process and submitted all my legitimate documents to their third-party verifier, Persona, on August 8, 2025. I received a clear confirmation: Verification Successful.

And yet, when I returned to OpenAI, my access was denied. No explanation. No chance to retry. Just a digital brick wall. My projects are now at a standstill, and I’m being forced to switch to competitors just to keep my work going.

This isn’t just an inconvenience; it’s an unreasonable invasion of privacy and a blatant disregard for loyal customers. We hand over our most sensitive data, get confirmation that it’s valid, and are then blocked by an opaque, unexplainable algorithm or policy. This “computer says no” system is unacceptable and, from browsing this forum and Reddit, appears to be a widespread issue.

The successful Persona verification proves my identity is legitimate. OpenAI’s subsequent denial—with no reason, no transparency, and no recourse—is an unfair business practice. For those of us in California, it raises serious questions about violations of the California Consumer Privacy Act (CCPA), especially regarding data minimization and a consumer’s right to know how their information is being used.

I have formally protested to OpenAI, demanding an immediate fix, full transparency on their verification criteria, and a clear path for appeal. But I want to bring this to the community’s attention as well.

Has anyone else experienced this? Have you been forced to hand over sensitive ID documents only to be stonewalled by OpenAI with no explanation?

It’s time we demand answers together. A company that claims to be “benefiting humanity” shouldn’t be treating its users like suspects in a secret trial. Share your stories. Let’s shine a light on this black box.

Thanks,

Mike

P.S. Below is my email to OpenAI:

Dear Janna,

Thank you for your response and for attempting to clarify the verification process. However, as a long-time, highest-tier user of OpenAI, and as a California resident and U.S. citizen, I remain deeply disappointed by this outcome and by the company's apparent disregard for its users' experience. Despite my submitting all legitimate documentation, which Persona successfully verified on August 8, 2025, we are still denied access without meaningful recourse, forcing us to explore alternative AI services from competitors to maintain our project continuity.

To illustrate just how unreasonable and frustrating OpenAI's organization verification process feels, imagine this everyday scenario: You're craving a burger after a long day, so you head to McDonald's. You step up to the counter, ready to order their premium Big Mac meal, something a bit more substantial than the basic cheeseburger. But the cashier stops you and says, "Sorry, to buy the Big Mac, you need to show a government-issued ID first. No ID, no premium burger; you're stuck with the plain one." You're baffled. Why on earth would buying a simple burger require handing over your driver's license or passport? It's not alcohol, tobacco, or anything restricted by law; it's just food. No other fast-food chain like Burger King or Wendy's demands this for their menu items. Yet you comply because you really want that Big Mac.

You pull out your valid ID, which meets all federal REAL ID standards, and it's scanned by a third-party verifier that flashes a green checkmark: "Congratulations, verified!" You think, great, now I can eat. But then the cashier shakes their head: "Nope, still denied. We can't tell you why. Maybe it's your location, your age, or something else we won't disclose. And no, you can't try again; that's our policy."

You're left standing there, hungry, humiliated, and wondering why your privacy was invaded for nothing, with no appeal, no explanation, and no way to fix what might just be a glitch or error on their end. It's infuriating, isn't it? It defies common sense, treats loyal customers like suspects, and makes you question whether you'll ever go back, or worse, makes you warn everyone you know to avoid that place.

This is exactly what OpenAI is doing here. Alone among the major AI providers, you force users to submit sensitive government IDs just to access advanced models: models that are neither illegal nor age-restricted, but essential tools for innovation. The successful Persona verification proves our ID is legitimate, yet you are denying us without transparency, without retries, and without regard for the frustration this causes. It is not just inconvenient; it is an unnecessary invasion of privacy, a black-box decision that could hide biases or errors, and a policy that screams disregard for users who have supported you from the start. This is not how a company aligned with "benefiting humanity" should operate; it erodes trust and pushes people away.

Beyond this common-sense outrage, I must formally protest OpenAI's process as potentially violating several U.S. laws, with particular emphasis on California-specific protections, since I am a California resident and OpenAI is registered and headquartered in California, making us both subject to them. First, requiring government-issued IDs for access to AI models, when doing so is not strictly necessary and no comparable AI provider does the same, may constitute an unfair business practice under Section 5 of the FTC Act (15 U.S.C. § 45), which prohibits unfair or deceptive acts in commerce. It creates unnecessary privacy risks without justification, as highlighted in FTC scrutiny of similar ID-verification providers such as ID.me over misleading practices and privacy intrusions. Critically, under the California Consumer Privacy Act (CCPA), which applies directly to California residents like me and to companies operating in the state like OpenAI, this collection violates data minimization principles by obtaining sensitive personal information that is not needed; the CCPA requires that data practices be transparent and limited to what is essential.

Second, despite my providing a valid, government-compliant ID (meeting federal REAL ID standards), OpenAI's refusal to explain the denial or to disclose its evaluation criteria beyond identity submission lacks transparency and may infringe on consumer rights. This could be seen as deceptive under FTC Section 5, especially if denials stem from discriminatory factors such as organizational eligibility, region, age, or gender, potentially implicating Title VII of the Civil Rights Act or the Americans with Disabilities Act (ADA), as seen in EEOC actions against AI tools used for biased screening. Under the CCPA, this opacity directly contravenes requirements for clear data-processing disclosures and the consumer's right to know how personal information is used and to challenge that use.

Third, denying any opportunity to retry, even when a failure may result from a technical issue, a network problem, or a misjudgment, deprives consumers of fair appeal rights protected under laws like the CCPA (which provides for correction of inaccuracies) and under general consumer protection statutes that require reasonable procedures for challenging verification decisions, as in FCRA contexts where unverified information must be addressed.

As a consumer and California resident, I demand the following remedies, invoking my rights under CCPA and other applicable laws: (1) Immediate correction of our verification status to grant access, including rectification of any inaccuracies per CCPA; (2) Full disclosure of the verification standards, criteria, and process to ensure fairness and non-discrimination, as required for transparency; (3) Provision of a retry channel for re-submission, aligning with appeal mechanisms; (4) A detailed explanation of the denial reasons, including any logs or factors involved, fulfilling the right to know under CCPA; and (5) Compensation for potential privacy risks from unnecessary ID collection, including any data breaches or misuse, with a commitment to delete our submitted information if unresolved, exercising deletion rights under CCPA.

We reserve all rights to pursue legal action, including under the FTC Act, the CCPA, and anti-discrimination laws, should this not be addressed. Furthermore, to seek broader legal support and community assistance, and to highlight how this defies basic common sense, we intend to publicly share the full details of this experience, including email exchanges, evidence of successful Persona verification, OpenAI's responses, and this McDonald's analogy, on relevant online forums, social media, and platforms like Reddit and X. Stories like this spread quickly, and users everywhere deserve to know whether their privacy and access could be treated the same way.

I strongly urge you to escalate this to senior management or legal teams for urgent review. Please provide a substantive update within 7 days.

Best regards,
Mike L