During the private beta we want to learn as much as we can: how the product is used, what needs improvement, and what safety concerns exist. To that end, we will prioritize access based on a range of criteria, such as:
- Is the use case outlined in the application promising, and does it fall within our use-case guidelines?
- Is the applicant likely to push and explore the product's capabilities in broad, novel, and safe ways?
- Will the applicant be a positive member of a diverse community, and will they be an effective beta tester?
We do not know how long it will take for you to gain access. As we refine the product based on feedback, and gain confidence in the quality of the experience, we’ll be gradually increasing the number of people with access. We’re excited for everyone to try this product, but it will take some time before we’re able to meet this goal.
I have access to OpenAI Codex on my personal account, can I get access for my team on our organization account as well?
If you are interested in Codex access for your team or organization, please have them submit the Codex waitlist form.
OpenAI Codex will start as a free trial at no additional cost to you.
OpenAI Codex usage will appear on your Usage page. You can select a specific day on the graph to view all requests made to Codex models on that day.
Yes. We ask that you follow the approval process for Going Live before releasing your app publicly. You can begin this process by reviewing the Going Live page and submitting a Pre-Launch Review Request. Please note that OpenAI Codex is still in beta preview and you use it at your own risk. If you choose to use code generated by Codex for commercial purposes or in production environments, we recommend that you carefully test, review, and vet the code.
No. When planning to release a new application or new functionality within an existing application, please submit a Pre-Launch Review Request.
For example, if your GPT-3 application was previously approved and you would like to incorporate Codex functionality into it, you will need to submit a new request and gain approval before releasing it to your users.
Yes, the Sharing and Publication policy applies to OpenAI Codex; however, there is an exception that allows you to livestream demonstrations of the Codex API. See the Livestreaming and demonstrations policy for full details.
Codex will include two models at release: davinci-codex and cushman-codex. The maximum context length (length of the combined prompt and completion) is 4096 tokens for davinci-codex and 2048 tokens for cushman-codex.
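These limits can be checked before sending a request. Below is a minimal sketch, assuming you already have a token count for your prompt (for example, from your own tokenizer); the helper name `fits_context` and the token counts in the example are illustrative, only the model names and limits come from the text above.

```python
# Context window sizes (combined prompt + completion tokens) per the Codex docs.
MAX_CONTEXT = {
    "davinci-codex": 4096,
    "cushman-codex": 2048,
}

def fits_context(model: str, prompt_tokens: int, max_completion_tokens: int) -> bool:
    """Return True if the prompt plus the requested completion fits the model's window."""
    return prompt_tokens + max_completion_tokens <= MAX_CONTEXT[model]

# A 3000-token prompt with a 1000-token completion fits davinci-codex (4096),
# but even a 1500-token prompt with a 600-token completion overflows cushman-codex (2048).
```

This keeps you from submitting requests that would be rejected, or silently truncated, for exceeding the context window.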
Occasional bursts of 2-3 concurrent calls are fine. If you need more concurrency than that, we're happy to discuss further; please contact us through our help center.
The current rate limit is 600 requests per minute or 150,000 davinci tokens per minute. If you hit this limit, you will receive a 429 error.
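A common way to handle 429 responses is to retry with exponential backoff. The sketch below assumes hypothetical stand-ins: `RateLimitError` for however your client surfaces a 429, and `send_request` for the call that makes the request.

```python
import time

class RateLimitError(Exception):
    """Stand-in for a client exception raised on an HTTP 429 response."""

def with_backoff(send_request, retries=5, base_delay=1.0):
    """Retry send_request on rate-limit errors, doubling the wait each attempt."""
    for attempt in range(retries):
        try:
            return send_request()
        except RateLimitError:
            if attempt == retries - 1:
                raise  # out of retries; surface the error to the caller
            time.sleep(base_delay * (2 ** attempt))
```

Doubling the delay between attempts gives the per-minute window time to clear instead of hammering the API with immediate retries.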
OpenAI Codex is a descendant of GPT-3 and was trained on data that includes public source code from GitHub and natural language. This gives the model a broad knowledge of programming, and the ability to generate code and comments in multiple programming languages. You can read more about earlier versions of Codex in the OpenAI research paper “Evaluating Large Language Models Trained on Code.”
Currently, the Codex model cannot be fine-tuned via the fine-tuning API.
At this time Codex users cannot fine-tune the OpenAI Codex models for other languages. If there is a language you would like the model to better understand, please reach out to us through our help center with details about the language and your use case.
Feedback, questions, or concerns of any kind? Please contact us through our help center.