Write two haiku about pizza.
Pizza is great
I love to eat pizza
Pizza is my favorite
Try it as a one-shot?
Yeah, it seems to get words that are used in poetry, including rhyming or the typical length of a stanza, but it doesn’t understand syllables.
I have the same issue with a sonnet.
It’s true! GPT cannot count. Here’s an example:
Task: Count the 1 's.
1000101 = 3
101010000000001 = 4
11011001 = 5
00001 = 1
0010001010001001011111111110001 = 16
1110 = 3
0010101 = 3
000000000001111111111100000000000 = 16 (GPT's answer; there are actually 11 ones)
GPT failed in a number of other cases too, for example:
11100 = 4
100000000010000100000 = 5
10101 = 5
Edit:
Actually, it does way better when you separate each digit into its own token (see the tokenizer check after the examples below):
Task: Count the 1 's.
1 0 0 0 1 0 1 = 3
1 0 1 0 1 0 0 0 0 0 0 0 0 0 1 = 4
1 1 0 1 1 0 0 1 = 5
0 0 0 0 1 = 1
0 0 1 0 0 0 1 0 1 0 0 0 1 0 0 1 0 1 1 1 1 1 1 1 1 1 1 0 0 0 1 = 16
1 1 1 0 = 3
0 0 1 0 1 0 1 = 3
1 0 1 0 1 = 3 (GPT's response, correct!)
1 1 1 0 1 = 4 (GPT's response, correct!)
1 1 1 1 0 0 0 1 1 1 1 0 0 1 0 1 1 1 1 1 0 0 0 0 0 1 = 15 (GPT's response, correct!)
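If you want to see why the spacing helps, here's a quick check with the tiktoken library (assuming it's installed; I used the cl100k_base encoding, so the exact splits differ per model, but the pattern holds):

```python
# Minimal sketch, assuming tiktoken is installed (pip install tiktoken).
# Compares how a binary string tokenizes with and without spaces between digits.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # GPT-4-era encoding; other models use other tables

packed = "0010001010001001011111111110001"
spaced = " ".join(packed)  # "0 0 1 0 0 0 1 ..."

for label, text in [("packed", packed), ("spaced", spaced)]:
    ids = enc.encode(text)
    pieces = [enc.decode([i]) for i in ids]
    print(f"{label}: {len(ids)} tokens -> {pieces}")

# Packed digits get merged into multi-digit tokens, so the model never "sees"
# individual 0s and 1s; spaced digits come out roughly one token per digit,
# which is why the counting works much better in the second prompt above.
```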
I asked all of the assistants to “write haiku”. Here are the results:
DaVinci: Beneath the trees | Cool shade and fresh breeze | a perfect summer day
Curie: A quiet evening | Only the moon and stars | And the sound of my breathing
Babbage: A cool breeze in the air | A leaf falls from the tree | A bee buzzing around
Ada: A star-filled night | The moonlight shines | On all those stars | And makes their light dances | To the love that is in me.
None of them are right!
Davinci original or text-davinci? Curious how the text-davinci would do.
text-davinci-002. I also used text-curie-001, and so on.
Count the 1 's.
ab11cd = 2 (example given in the prompt)
abcd111efg = 3 (correct)
hi111jklmn111op = 6 (correct)
111111111111111111111111111 = 31 (wrong, 21 ones)
000000000001111111111100000000000 = 21 (wrong, 11 ones)
25 days later, and not a single admin fixed GPT.
I’m having the same issue trying to get the AI to write a haiku. I chained two more prompts to check its understanding, but I’m not sure why it isn’t getting it. Counting syllables in a collection of words seems like it should be straightforward.
# prompt: 'Write a haiku about: the sun'
Haiku:
Rays heat the Earth below | A bright fire in the sky | A warm peace for all to know
# prompt: 'How many total syllables are in the following words? {haiku from 1st prompt}'
Haiku syllable count:
17 syllables #wrong, there are 19
#prompt: 'Does the following poem fit the definition of a haiku?: {haiku from 1st prompt}'
Is the AI's haiku a haiku?
No, this poem does not fit the definition of a haiku. Haiku is a form of Japanese poetry consisting of three lines. The first line has five syllables, the second line has seven syllables, and the third line has five syllables.
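Rather than asking the model to grade its own syllable count (the second and third prompts above), the 5-7-5 check can be done in code. Here's a rough sketch with a naive vowel-group heuristic; it's only approximate, and it assumes the poem comes back with one haiku line per row:

```python
# Rough sketch of a programmatic 5-7-5 check. The syllable heuristic is naive
# and will miscount some words (e.g. "fire"), so treat it as illustrative only.
import re

def count_syllables(word: str) -> int:
    """Very rough heuristic: count groups of vowels, with a silent-e fix."""
    word = word.lower()
    groups = re.findall(r"[aeiouy]+", word)
    n = len(groups)
    if word.endswith("e") and not word.endswith(("le", "ee")) and n > 1:
        n -= 1  # drop a silent final 'e'
    return max(n, 1)

def check_haiku(poem: str) -> bool:
    """Return True if the poem's lines count out to 5-7-5 under the heuristic."""
    lines = [l for l in poem.splitlines() if l.strip()]
    counts = [sum(count_syllables(w) for w in re.findall(r"[A-Za-z']+", l)) for l in lines]
    print("per-line syllable counts:", counts)
    return counts == [5, 7, 5]

poem = "Rays heat the Earth below\nA bright fire in the sky\nA warm peace for all to know"
print("is a haiku:", check_haiku(poem))
```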
GPT-4 is better…but alas not perfect either…
All the models have problems with words and syllables, usually because they’re trained on tokens, which don’t map cleanly onto syllables.
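To make the mismatch concrete, here's a quick check you can run with tiktoken (again an assumption on my part that you have it installed; the specific splits vary by encoding and model):

```python
# Minimal sketch, assuming tiktoken is installed; splits vary by encoding/model.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

for word in ["pizza", "favorite", "moonlight", "breathing"]:
    pieces = [enc.decode([i]) for i in enc.encode(word)]
    print(word, "->", pieces)

# Tokens come out as whole words or arbitrary sub-word chunks, not syllables,
# so the model never directly observes syllable boundaries or counts.
```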
If you haven’t checked it out yet, this page by Gwern has a lot of great stuff…
Few-shot, in-context learning is your friend and will get you the results you want (or at least close to them).
Agreed. I haven’t found many cases where 2 or 3 examples at most can’t get it done. It’s more expensive, but it works…
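For example, a few-shot prompt with the syllable breakdown spelled out might look like this. The example haiku, the annotations, and the model name are my own invention, and the call uses the current openai Python client, so adapt it to whatever endpoint you're actually on:

```python
# Minimal sketch, assuming the openai v1 Python client and OPENAI_API_KEY set in the environment.
# The few-shot example (with an explicit syllable breakdown) is invented for illustration.
from openai import OpenAI

client = OpenAI()

prompt = """Write a haiku (5-7-5 syllables) about the given topic.
Show the syllable breakdown, then the haiku.

Topic: autumn
Syllables: crisp(1) leaves(1) drift(1) down(1) slow(1) | cold(1) wind(1) whis(1)pers(1) through(1) bare(1) trees(1) | the(1) year(1) ex(1)hales(1) light(1)
Haiku:
Crisp leaves drift down slow
Cold wind whispers through bare trees
The year exhales light

Topic: the sun
Syllables:"""

resp = client.chat.completions.create(
    model="gpt-4",  # any chat-capable model you have access to
    messages=[{"role": "user", "content": prompt}],
    temperature=0.7,
)
print(resp.choices[0].message.content)
```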
I’m new to working with LLMs and AI, so training on tokens is new to me. I’m not actually that interested in writing haikus, just using it as a way to learn the concepts. So the workaround is appreciated, but as a newcomer it seems like a gaping hole that GPT-4 is unable to understand syllables despite an otherwise robust understanding of the English language.
Although the AI understands words in terms of tokens, isn’t there a way it could be taught the syllables of words from a dictionary that simply links a syllable count to each word?
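On the dictionary idea: such a resource does exist outside the model. The CMU Pronouncing Dictionary bundled with NLTK maps words to phoneme sequences, from which a syllable count falls out. A sketch, assuming nltk is installed and the cmudict corpus has been downloaded:

```python
# Minimal sketch, assuming nltk is installed and the cmudict corpus has been
# fetched once via nltk.download("cmudict").
from nltk.corpus import cmudict

pron = cmudict.dict()  # word -> list of pronunciations (lists of phonemes)

def syllables(word):
    """Count syllables from the CMU pronunciation: vowel phonemes end in a stress digit."""
    prons = pron.get(word.lower())
    if not prons:
        return None  # CMUdict doesn't cover every word
    return sum(1 for phone in prons[0] if phone[-1].isdigit())

for w in ["pizza", "favorite", "syllable", "evening"]:
    print(w, syllables(w))
```

The catch is that the model itself only ever sees tokens, so a lookup like this has to live outside the model, either to verify and regenerate its output or to inject the syllable breakdown into the prompt.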
Thank you for explaining some of the concepts involved. Taking your approach of providing a few examples where I break up the syllables explicitly, I’m getting a much better rate of 17-syllable haikus, although the AI still falters some of the time. It seems certain kinds of words still aren’t counted with the right number of syllables. Of course, it would be exhausting to provide examples covering every syllabic pattern in English word formation.
By providing examples in our prompt, are we “training” the AI via the API, or does the “learning” from my example haikus persist only for that single prompt and response?
Another question: aside from the AI’s trouble with syllables versus tokens, why doesn’t it already understand the structure of a haiku? (Of course, underlying that would be a grasp of syllables.) For example, people commonly use AI to complete repetitive forms of writing like cover letters, and that form seems to be “known” to the AI; we don’t need to provide examples of how a cover letter is structured before it can respond with something we would passably accept as a cover letter.
Let me know where my misunderstandings lie here. Thanks in advance!