Introducing BookLed: a paper book augmented with generative AI

Hi everyone!

I’m excited to share a project I’m working on called BookLed.

BookLed is a hardcover paper book with embedded electronic hardware, designed for experimenting with new ways of storytelling using generative AI.
With a BookLed, some Python code, and an OpenAI API account, you can explore the world of AI storytelling starting from the pages of a paper book!


The BookLed GitHub repository collects open-source Python and Jupyter Notebook examples that explore interfacing this paper book with the OpenAI APIs.

See the BookLed wiki.


So, when you connect BookLed to your PC and launch a Jupyter Notebook, the Python code identifies the book’s current page and delivers multimedia content synchronized with the turning of the pages:

  • The book’s soundtrack.
  • Content generated according to the instructions of a prompt.
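A minimal sketch of how such a page-synchronization step might look. The content mapping and the page-change logic below are my illustrative assumptions, not BookLed’s actual code (in practice the page number would come from the book’s hardware, e.g. over a serial port):

```python
# Hypothetical sketch: decide which content to deliver when the detected
# page changes. The mapping and page values are illustrative only.
PAGE_CONTENT = {
    1: {"soundtrack": "intro.mp3", "prompt": "Describe the forest at dawn."},
    2: {"soundtrack": "river.mp3", "prompt": "Narrate the river crossing."},
}

def content_for_page(page, last_page):
    """Return the content for a newly detected page, or None if the
    page has not changed (or has no associated content)."""
    if page == last_page or page not in PAGE_CONTENT:
        return None
    return PAGE_CONTENT[page]
```

A main loop would then repeatedly read the current page from the hardware, call `content_for_page`, and play the soundtrack or send the prompt to the API when it returns something.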


See the BookLed video.

I’m open to feedback and suggestions from the community to improve BookLed and guide its future development. I look forward to your insights!

Thank you!


That’s a pretty neat idea. Are you planning to make it interactive?


Hi jr.2509,

Currently, there are two ways to interact with the paper book:

  • Turn the pages to navigate forward or backward through the story.
  • Use a small navigation switch to, for example, answer YES/NO questions posed by the LLM.

The original project envisioned additional interaction methods, but these have not been implemented yet in order to keep the overall hardware cost low.


Welcome to the OpenAI dev community forums!

Thanks for adding the project tag. If you keep updates in this thread, it’s easier for everyone to keep up to date on your progress.

Are you storing API keys on the book, then? As a writer myself, I’m curious about the tech.

Hi Paul,
Thank you for your question.

The current version of BookLed is designed for makers and AI enthusiasts who want to experiment with generative AI integrated with very low-cost hardware.
Therefore, it is assumed that the BookLed user is fairly experienced, has their own OpenAI API account, and knows how to write their API key to a .env file within the Python environment.
As such, the API key is stored in the Python environment and not within the book itself.
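For example, reading the key from a .env file can be done with a few lines of standard-library Python (a minimal sketch; in practice the `python-dotenv` package is commonly used for this):

```python
# Minimal .env reader (stdlib only); python-dotenv offers a fuller version.
import os

def load_env(path=".env"):
    """Parse KEY=VALUE lines from a .env file into os.environ,
    skipping blank lines and comments; existing variables win."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())
```

After calling `load_env()`, the OpenAI client can pick up `OPENAI_API_KEY` from the environment as usual.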

If the BookLed project were to reach the general public, the management of API keys and the pricing/billing model would need to be reconsidered and made more user-friendly.

Since you’re a writer too, I’d love to hear your thoughts on this new narrative model I’m proposing.
In BookLed, there are two components:

  • A “fixed” and immutable part, which is the content printed on paper (created by the writer).
  • A “dynamic” and modifiable part, which is the content generated in real-time by the generative AI.

The AI is constrained by a predetermined path, dictated by the narrative printed on the pages.
While the story’s framework remains unchanged, the AI can interpret and present it in infinitely diverse ways.
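One way to express that constraint is in the prompt itself. A hedged sketch (the wording and function below are illustrative, not BookLed’s actual prompt):

```python
def build_page_prompt(printed_text, style):
    """Build a prompt that pins the AI to the printed narrative while
    letting it vary the presentation. Illustrative wording only."""
    return (
        "You are narrating a printed book. The fixed text of the "
        "current page is:\n"
        f'"{printed_text}"\n'
        f"Retell this passage in a {style} style. Do not change the "
        "events, characters, or outcome; only the presentation may vary."
    )
```

The printed page supplies the immutable framework; the style parameter (or any other reader-driven input) supplies the dynamic part.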

What are your thoughts?

Over the next month, I will continue experimenting and look forward to the new GPT-4o APIs to further enhance the interaction with the BookLed.


What kind of further interaction did you imagine? Would you like to suggest something I could try?

It was just a thought off the top of my head that it could be nice to have interaction between the reader and the book’s content as you go through the pages, such as the reader making a comment or asking a question and the model then providing an additional response.

Based on the picture, it looked like this was a children’s book. As parents often read books together with their children and discuss the content, the model responses could perhaps further enrich this interaction.


Hi jr.2509,
thanks for sharing your ideas with me. I really appreciate it.

With the APIs available at the moment, while waiting for the truly multimodal GPT-4o API, I could test the interaction you propose with code following this diagram:

It would work like this:

  • Wait for user audio input.
  • When the user makes a request, transcribe it to text via the OpenAI Whisper API.
  • Process it with an appropriate prompt through a GPT-4o API call.
  • Produce the response as MP3 audio via TTS.
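The steps above could be sketched roughly like this with the openai Python SDK (an untested sketch under my own assumptions: model names, the `alloy` voice, and parameters may need adjusting, and error handling is omitted):

```python
def voice_turn(audio_path, system_prompt, out_path="reply.mp3"):
    """One voice-interaction turn: speech -> text -> LLM -> speech.
    Sketch only; assumes the openai Python SDK v1 and a valid key in
    the OPENAI_API_KEY environment variable."""
    from openai import OpenAI  # imported here so the sketch stays optional
    client = OpenAI()

    # 1. Transcribe the user's audio request with Whisper.
    with open(audio_path, "rb") as audio:
        text = client.audio.transcriptions.create(
            model="whisper-1", file=audio
        ).text

    # 2. Elaborate the request with GPT-4o, guided by the book's prompt.
    reply = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": text},
        ],
    ).choices[0].message.content

    # 3. Speak the reply back as MP3 via TTS.
    speech = client.audio.speech.create(model="tts-1", voice="alloy", input=reply)
    speech.stream_to_file(out_path)
    return reply
```

The outer loop would simply record audio from the microphone, call `voice_turn`, and play back the resulting MP3.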

All this complexity could become unnecessary as soon as the folks at OpenAI make the multimodal (audio in → audio out) GPT-4o API available.

I’ll try to implement your suggestion in Python code that I will add to the Jupyter Notebook examples section of BookLed’s GitHub repository.

As soon as it’s ready, I’ll post a test video here.

Thanks again for the suggestion.



This is cool. Reminds me of point and click adventure games back in the day. Good luck!


Thank you Ronald.

The Secret of Paper&AI Island!
:rofl: :rofl: :rofl:


Hi everyone.

Today I posted the BookLed project and the open-source Python code on the makers website.

I hope this helps spread the project.