Hi everyone — I recently completed my first AI-assisted concept album and want to share the process with this community as a reflection on using OpenAI and other tools for emotionally structured, longform creative work.
Tools & Workflow
The Hours We Never Counted is a one-hour cinematic synthwave project composed of 23 interconnected scenes across a Prelude, four Acts, and an Epilogue — no vocals, just music, visuals, and narrative flow.
It was built using:
- GPT-4o (in ChatGPT) — to develop the narrative structure, pacing, scene titles, emotional tone, and the style-of-music prompts fed to Suno AI to generate each track
- DALL·E 3 (via ChatGPT) — for visual generation, using its inpainting feature to refine specific areas, particularly to improve anatomical realism in hands and correct initially distorted details
- Suno v4 — for all music generation
- Wondershare Filmora — for sequencing, visual integration, and post-production
The goal was to explore how AI could support storytelling without words — using sonic and visual cohesion alone.
Watch the Project
The Hours We Never Counted (YouTube)
Thoughts & Questions for Fellow Builders
I’d love to hear how others are experimenting with:
- Longform AI-generated content — especially in music, video, or storytelling
- Combining OpenAI with other tools into cohesive creative workflows
- Pacing and emotional continuity across multi-part AI compositions
How are you approaching narrative or emotional structure in your own AI-powered creative work?
Thanks for building the tools that make experiments like this possible — I'm looking forward to learning how others are exploring similar ground.