Join @romainhuet and Channing Conger from OpenAI’s research team to explore how Codex gets better when it can visually check its work. Watch them sketch ideas on a whiteboard, then bring them to life using vision, voice, and Best-of-N.
The vision capabilities make the iteration loop much tighter: instead of describing UI changes, you can show Codex what you want.
Try your first multimodal tasks: chatgpt.com/codex
Docs: https://developers.openai.com/codex
Let us know what Codex creates for you or what you’d like to see next!