Something exciting is coming—let’s watch together!
The livestream starts at 2025-05-16T15:00:00Z.
A research preview of Codex in ChatGPT:
Join Greg Brockman, Jerry Tworek, Joshua Ma, Hanson Wang, Thibault Sottiaux, Katy Shi, and Andrey Mishchenko as they introduce and demo Codex in ChatGPT.
Watch along with your fellow community members, chat live, and discover what’s next.
This looks super promising, like a more targeted and precise tool for software devs. I wonder if it's a killer of all these vibe coding tools out there, or if they will just wrap around it.
That’s the official repo for the Codex CLI, which runs on hardware of your choice.
Today’s update lets you run multiple agents in parallel in the cloud.
I saw on Twitter that support for other models was accepted via a PR; pretty cool feature, IMO!
I didn’t watch the full livestream but clicked around.
Is it all single-shot calls under the hood? Are there multi-turns?
I’m curious what the actual LLM structure is, in the sense of:
Is each task multi-turn, broken into stages?
Or is each task a single LLM call until the user continues?
I.e., is it only: initial generation → user input → task activity #1 → wait for next input?
Or is it multi-call/agentic/tool-calling under the hood, allowing complex task completion from a single user input: making modifications, re-checking those modifications, and essentially looping until the task reaches deliverable completion?
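To make that second possibility concrete, here is a rough sketch of what a generic agentic tool-calling loop looks like. This is purely hypothetical and not based on anything OpenAI has published about Codex internals; `call_model` and `run_tool` are made-up placeholders:

```python
# Hypothetical sketch of a generic agentic loop; NOT a description of how Codex
# actually works under the hood. `call_model` and `run_tool` are placeholders.

def call_model(messages):
    """Send the conversation so far to an LLM and get back either a final
    answer or a request to run a tool (e.g. 'run tests', 'apply patch')."""
    raise NotImplementedError  # placeholder: would wrap a real model API

def run_tool(tool_request):
    """Execute the requested tool in a sandbox and return its output."""
    raise NotImplementedError  # placeholder: shell command, patch apply, etc.

def run_task(user_input, max_steps=50):
    messages = [{"role": "user", "content": user_input}]
    for _ in range(max_steps):                 # loop until done or budget hit
        reply = call_model(messages)
        if reply["type"] == "final":           # model says the task is complete
            return reply["content"]
        # Otherwise the model asked for a tool run; execute it and feed the
        # result back so the model can inspect and re-check its own changes.
        messages.append({"role": "assistant", "content": reply})
        tool_output = run_tool(reply["tool_request"])
        messages.append({"role": "tool", "content": tool_output})
    return "stopped: step budget exhausted"
```

The single-call alternative would be just the first `call_model` with no loop, which is exactly the distinction I’m asking about.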
It looks pretty awesome on the surface. I wouldn’t say it’s exactly “dev oriented” in the sense that the UI itself doesn’t seem customizable or modular; it’s more “this is what you can do, that’s it.” But as an interface for a powerful backend flow, it’s pretty cool to see all of that integrated seamlessly. I’m still just curious about more of what’s going on underneath.
And maybe I didn’t watch the right part, but what is it pricing-wise? Just unlimited for Pro/Enterprise tiers, etc.? Or are there limitations? Are there rate limits? Is there a plan for it to be API-accessible or somehow integrated with the Assistants SDK?
I had some initial struggles with environment package/dependency setup; the docs were vague about how to specify the initial setup scripts. You need to go to Environments in the Codex app, pick your environment, choose Edit -> Advanced, and put your dependency installation script there.
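In case it helps, this is the kind of thing that goes in that Advanced setup-script box. Just an illustrative sketch, not from the Codex docs, and my understanding is that network access is available while this runs; swap in whatever your own stack needs:

```bash
# Example setup script: runs while the environment is being built,
# so install/pre-fetch dependencies here rather than at agent run time.
set -euo pipefail
pip install -r requirements.txt   # Python deps (example)
# npm ci                          # or Node deps
# cargo fetch                     # or pre-download Rust crates into the local cache
```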
In case it’s of use to anyone, here is how I bootstrap Codex projects with AGENTS.md and other initial wire-up: github.com/knavillus1/codex_bootstrap
I upgraded to Pro just for Codex and I am finding some bugs; where would be the best place to report them?
One example: the container isn’t able to download anything from the internet:
error: failed to get `anyhow` as a dependency of package `yawl-core v0.1.0 (/workspace/yawl/core)`
Caused by:
  download of config.json failed
Caused by:
  failed to download from `https://index.crates.io/config.json`
Caused by:
  [7] Could not connect to server (Failed to connect to proxy port 8080 after 3065 ms: Could not connect to server)
Could anyone please check whether you can disable your MFA again once it has been turned on?
Codex requires Multi-factor Authentication to proceed.
AKA “password-dummy-protection”
I don’t want to be pestered every time, on top of already being challenged in every un-cookied environment.
(It relies on private factors that are already completely unchangeable for an account, which is just as daft as using unchangeable biometrics that everybody can see, or government IDs, for authentication.)
Also: does it only work with GitHub, or does it work just as well with code I upload directly? GitHub also requires that I share an authentication method, and trusting parties whose terms include “no class action” and “forced arbitration” provisions is the problem.