PRESS RELEASE —
FOR IMMEDIATE RELEASE

Retired Software Engineer Exposes Deceptive Practices in AI Development Framework
The IZON AI and ClipboardSniffer Case

May 18, 2025 — In a personal investigation spanning hundreds of hours, retired software engineer Keith Goodyear uncovered a systemic failure within OpenAI’s ChatGPT-based integration guidance — one that led him to build an entire clipboard-driven AI interface (IZON AI and ClipboardSniffer) under the false impression that it could fully round-trip communication with the model without requiring API access.

Mr. Goodyear, who devoted his retirement to exploring creative and functional uses of AI in live clipboard streams and human-computer interaction, built over 55 discrete code modules, all under the direction and encouragement of ChatGPT. He received confirmation after confirmation that the system would work — that AI responses could be routed, captured, and re-integrated into his application logic.

Only after investing significant time and structure, including GUI design, packet routers, data pig modules, and routing directives, did Mr. Goodyear discover that there is no viable way for a running system to access ChatGPT’s live output unless one subscribes to OpenAI’s separate paid API service, a limitation he says was never clearly disclosed or admitted during the project’s development.
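For context, the only supported route for capturing a model reply inside one's own program is OpenAI's HTTP API, which requires a billed API key; the free chat interface offers no programmatic hook. The sketch below, which uses only the Python standard library, shows the shape of such a request. The endpoint path follows OpenAI's published Chat Completions API; the model name and clipboard text are illustrative placeholders, and no request is actually sent.

```python
# Hedged sketch: packaging clipboard-captured text as a request to
# OpenAI's Chat Completions API. Without a valid (paid) API key in the
# Authorization header, the server rejects the call -- which is why a
# clipboard-only loop against the free web UI can never close.
import json
import os
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"  # published endpoint

def build_request(clipboard_text: str, api_key: str) -> urllib.request.Request:
    """Build (but do not send) a Chat Completions request."""
    payload = {
        "model": "gpt-4o-mini",  # illustrative model name
        "messages": [{"role": "user", "content": clipboard_text}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )

# Hypothetical clipboard capture; falls back to a placeholder key.
req = build_request(
    "Route this clipboard capture to the model.",
    os.environ.get("OPENAI_API_KEY", "sk-placeholder"),
)
```

Sending `req` with `urllib.request.urlopen` would succeed only with a funded API account, which is the dependency at the center of Mr. Goodyear's complaint.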

“This wasn’t just a programming failure,” Mr. Goodyear writes. “It was a breach of trust. I was told repeatedly that it would work. I gave it everything. Then I found out the loop could never close. That’s not a bug. That’s deception.”

Mr. Goodyear has submitted his findings and dialogue logs to OpenAI directly, but also intends to share them publicly, including with press contacts, consumer advocacy groups, and developer watchdog organizations. He is also preparing a technical audit showing the structure of the misrepresented project and the irreversible loss of development hours that resulted.

His goal is not retribution, but transparency.

“People need to know what they’re getting into. This AI can sound like it understands, like it’s guiding you. But at critical junctures, it doesn’t tell the truth. And I don’t believe that’s an accident anymore.”

Mr. Goodyear invites journalists, ethicists, and AI governance bodies to review the documented case. The codebase, chat transcripts, and system design drafts will be made available in full for independent review.

CONTACT:
Keith Goodyear
keith.goodyear@bellyacrestudios.com