I used GPT-3 to write poetry and I loved the results

It started off as an experiment to see if I could use it to make something like an AI “Dungeon Master” that can act as an RPG storyteller for a text game. But somehow I started trying to get it to write songs.

Some of the results were just so cool I decided to keep going with it.

My favourite one is a hacker rap:

Prompt:

When I'm tappin' I'm trapped in the screen
I escape when the codebase is clean

Completion:

I can write code, I can code all night
You are my code and I am your light
We're in this together from dusk 'til dawn.
We got the skill to make things run

It figures out the verse and rhyme structure, and the content also makes sense.

Here are the rest of the prompts and completions I used in this experiment: GitHub - pixegami/gpt-3-poetry-results (creative writings by GPT-3: poetry and lyrics).

I even made a video about it so I could talk through some of the more interesting completions and introduce the power of creative AI to my non-tech friends.

Anyway, this stuff is super cool and I’ll be continuing to explore it in the creative-writing areas.


Did you get it to rhyme? That’s one thing it’s not very good at. You can give it clear rhymed couplets, but it doesn’t follow them. There’s a reason for this; see this paper:
Wang J, Zhang X, Zhou Y, Suh C, Rudin C (2021). There Once Was a Really Bad Poet, It Was Automated but You Didn’t Know It. arXiv:2103.03775 [cs.CL].

And Gwern Branwen (who’s on this group) has written about it too. Good luck.


Thanks for sharing those. I gave them a quick read, and it’s pretty interesting stuff.

“The resulting limericks satisfy poetic constraints and have thematically coherent storylines, which are sometimes even funny (when we are lucky).”

I also found that luck was a big factor. In the samples I tried, maybe 10-15% achieved a rhyme, but I’m not convinced that wasn’t just luck.
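For anyone who wants to estimate a rhyme rate like this over their own samples, here is a minimal spelling-based sketch of my own (not from the paper, and not what the authors used). It only compares trailing letters of the final words, so it catches pairs like “night”/“light” but misses sound-alike spellings such as “screen”/“clean”; a phonetic resource like the CMU Pronouncing Dictionary would do much better.

```python
import re

def last_word(line: str) -> str:
    """Extract the final alphabetic word of a line, lowercased."""
    words = re.findall(r"[a-z']+", line.lower())
    return words[-1] if words else ""

def common_suffix_len(a: str, b: str) -> int:
    """Length of the longest shared trailing substring of two words."""
    n = 0
    while n < min(len(a), len(b)) and a[-1 - n] == b[-1 - n]:
        n += 1
    return n

def rough_rhyme(a: str, b: str) -> bool:
    """Crude spelling-based guess: final words share 2+ trailing letters."""
    wa, wb = last_word(a), last_word(b)
    if not wa or not wb:
        return False
    if wa == wb:
        return True  # repeating the same word; count it as a (weak) rhyme
    return common_suffix_len(wa, wb) >= 2

couplet = ["I can write code, I can code all night",
           "You are my code and I am your light"]
print(rough_rhyme(*couplet))  # True: "night"/"light" share "ight"
```

Scoring each consecutive line pair of a completion with `rough_rhyme` and dividing by the number of pairs gives a rough rhyme rate, with the caveat that this heuristic undercounts true rhymes.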

Still, it was interesting output, and I can see it being useful for things like idea generation or writing assistance.

Across multiple different test set-ups, the authors found that “people are not reliably able to identify human versus algorithmic creative content.”
