Build Hour: Image Gen – Learn to Create & Edit with the API (May 29)

Join us tomorrow for a live Build Hour session and learn how to generate and edit images with OpenAI’s newest image-generation model.

:spiral_calendar: When: 2025-05-29T17:00:00Z
:studio_microphone: Speakers:
:bust_in_silhouette: Bill Chen – Solutions Architect, OpenAI
:bust_in_silhouette: Jordan Garcia – Head of AI Engineering, Gamma

What You’ll Learn:

  • :sparkles: Generate and edit images using OpenAI’s latest image generation model
  • :toolbox: Explore real use cases with live walkthroughs
  • :light_bulb: Follow along with code from the shared repo
  • :red_question_mark: Ask questions during the live Q&A

Register here


:brain: See you at Build Hour: Image Gen!
Come build with us and explore what’s possible with OpenAI’s newest image generation tools.

:test_tube: In the Meantime… Try This Demo!

Check out this fun project by @edwinarbus:

:backhand_index_pointing_right: cutemorphic.vercel.app


Lesson 1:
Fake it until you make it, with input images reduced to 512 px that cannot actually be “edited”.

Lesson 2:
Follow along at home right here.

Lesson 3:
Enjoy this deep of a moat:

Train your AI on gen-alpha language like “cooked”

Oops, may have broken it.

This worked.

About an hour later it worked.


That’s what one of the intermediate progress images looks like when using the internal image tool on the Responses endpoint. The tool has a parameter for how many partial images are streamed.
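Collecting those streamed partials can be sketched roughly like this. The event type and field names below follow the documented streaming events for the Responses API image generation tool, but the event dicts here are stand-ins, not a live API call:

```python
# Sketch: accumulating partial-image events from a Responses API stream.
# Event shapes are assumptions based on the published streaming events;
# a real stream would come from the SDK's streaming client instead.
import base64

def collect_partials(events):
    """Gather partial images (by index) and the final image from event dicts."""
    partials = {}
    final = None
    for event in events:
        if event.get("type") == "response.image_generation_call.partial_image":
            # Each partial arrives as base64; index counts up to the
            # partial_images value requested on the tool.
            partials[event["partial_image_index"]] = base64.b64decode(
                event["partial_image_b64"])
        elif event.get("type") == "response.completed":
            final = event.get("image_b64")
    return partials, final

# Fake stream standing in for a real response:
fake_events = [
    {"type": "response.image_generation_call.partial_image",
     "partial_image_index": 0,
     "partial_image_b64": base64.b64encode(b"low-res bytes").decode()},
    {"type": "response.completed", "image_b64": "final-b64"},
]
partials, final = collect_partials(fake_events)
print(len(partials), final)  # 1 final-b64
```

The partial count you ask for on the tool is what determines how many of those intermediate events you see before the completed one.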

That’s also about as far as ChatGPT will get if vision doesn’t like the produced image.

Putting a language AI on the web, where the progress appearance clearly reveals which endpoint is in use, and hoping that an end user doesn’t escape their input container, is chancy, though. Below, no image is going to arrive… but I’ve got control of an internal iterator and all the unknown recipient messages I want returned.


Report on what could be learned within an hour:

  • We have a new “imagen” based on GPT-4o
  • Watch it make a picture of me in Ghibli style, and greener, symptomatically dark
  • Here’s some vibe-coding with Responses API calls
  • It can make better text
  • Here’s someone demonstrating their product, with it making garbled text in presentation decks
  • Here are three novice questions that were not actually seen in the Q&A sidebar

I also noted the side-stepping in the use of the term “mask-free editing”.
