Free tokens on traffic shared with OpenAI extended through April 30, 2025!


Hello,

In December 2024, we launched a program where organizations that opted in to share prompts and completions with OpenAI could receive up to 11 million free tokens per day on traffic shared with OpenAI—1 million shared across GPT-4o, o1, and o1-preview, and 10 million shared across GPT-4o-mini, o1-mini, and o3-mini.

The free tokens program was originally scheduled to end on February 28, 2025, but is now extended through April 30, 2025.

Organization owners can change your account’s opt-in settings at any time on the Data Sharing Settings page.

Thank you for helping OpenAI improve our models. If you have any questions, please review our Help Center article or reach out to Support.

—The OpenAI Team

13 Likes

OpenAI is currently offering complimentary tokens to users who opt into data sharing, allowing them to use certain models at no cost. This program runs until April 30, 2025. However, there’s an important detail that isn’t clearly mentioned in the documentation: your balance must be above $0 to use the complimentary tokens.

If you’ve enabled ‘Enable sharing prompts and completions with OpenAI’ and see the message ‘You’re enrolled for up to 11 million complimentary tokens per day’, you might assume that you can use the Playground without any issues.
However, if your balance has dropped to 0, you will still encounter the error:
“You’ve reached your usage limit. See your usage dashboard and billing settings for more details.”

I confirmed this directly with OpenAI Support, and they clarified that if your balance reaches $0, you must add funds to continue using the API and the Playground, even if you’re technically enrolled in the complimentary token program.

I did not get such an email. How can I manually opt in?

Additionally, the free token program does not apply to the Responses API; it is only available for the Chat Completions API and the Batch API. I just got this response from the Help Center.
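Assuming the Help Center reply is accurate, eligibility depends on the endpoint being called, not just the model. A hypothetical check (endpoint paths are the standard OpenAI REST routes; the helper itself is illustrative):

```python
# Hypothetical eligibility check based on the Help Center reply:
# complimentary tokens apply to Chat Completions and Batch traffic,
# but not to the Responses API.

FREE_TOKEN_ENDPOINTS = {"/v1/chat/completions", "/v1/batches"}

def endpoint_eligible(endpoint: str) -> bool:
    """True if traffic to this endpoint can draw on complimentary tokens."""
    return endpoint in FREE_TOKEN_ENDPOINTS
```

So an identical prompt sent to `/v1/responses` would bill normally even while the same model is free via `/v1/chat/completions`.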

The issue here isn’t about requiring a balance or limiting API access itself, but rather the lack of clear communication. These conditions should have been stated explicitly up front. It’s frustrating to discover such limitations only after encountering unexpected charges or access restrictions.

I hope OpenAI can provide clearer documentation to avoid confusion for users opting into the program.

1 Like

Another support bot hallucination brought about by simply predicting a response?

I cannot imagine that the response you got from “help” is truthful, or that omitting Responses API calls is intended. Training prompts from the new API, and the user data you volunteer through new tool use there, should be of even higher value.

There was an issue on this particular day with charges “leaking out”:

However, I just ran several gpt-4.5-preview requests against the Responses API over the past half hour, running usage up to a useful 14.5k input tokens, and have seen nothing billed after refreshing, along with a week in which nothing was billed except stand-out models like GPT-4 and fine-tuning. I am actively poking away at event handlers to get streaming up to speed.

Check that “on all projects” is selected in your data sharing options, or that any project where you’ve disabled sharing is set that way intentionally. Bills for usage under the 1M/10M-token limits, within the 24-hour rolling window that counts that usage as free, are bugs.
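The 24-hour rolling window mentioned above can be sketched as follows. The accounting here is an assumption about how such a window might work, not OpenAI’s actual billing logic:

```python
from collections import deque

class RollingTokenWindow:
    """Track token usage over a rolling 24-hour window (sketch only).

    This is a hypothetical model of a rolling free-token window,
    not OpenAI's real billing implementation.
    """

    def __init__(self, limit: int, window_seconds: int = 24 * 3600):
        self.limit = limit
        self.window = window_seconds
        self.events = deque()  # (timestamp, tokens), oldest first
        self.total = 0

    def record(self, timestamp: float, tokens: int) -> None:
        """Add a usage event and drop any events older than the window."""
        self._expire(timestamp)
        self.events.append((timestamp, tokens))
        self.total += tokens

    def usage(self, now: float) -> int:
        """Tokens used within the window ending at `now`."""
        self._expire(now)
        return self.total

    def within_free_quota(self, now: float) -> bool:
        """Whether current usage is still at or under the free limit."""
        return self.usage(now) <= self.limit

    def _expire(self, now: float) -> None:
        # Drop events that have slid out of the window.
        while self.events and self.events[0][0] <= now - self.window:
            _, tokens = self.events.popleft()
            self.total -= tokens
```

Under this model, an event stops counting exactly 24 hours after it occurred, so usage that looked over-quota can fall back under it as old events expire.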

1 Like

Thank you for your comment! If the reply was generated by a bot, that would certainly explain the hallucination.

And yes, the additional API charges did occur on March 12th. I got confused because I tested the Responses API on March 13th and initially thought there was some kind of mistake because of the timeline (I don’t live in the US). But just like you, I also tested the Responses API multiple times, and as you mentioned, it did not generate any charges.

That being said, isn’t this a much more serious issue than simple misdocumentation? They charged users incorrectly on March 12th and then used a bot to provide false information?

This was the response I got btw.

1 Like