Increasing verbosity towards end of structured response

I’ve noticed (using davinci) that when I’m producing semi-structured output from a prompt, such as a numbered list or a poem, GPT-3 tends to ramble towards the end of the response, losing all sense of structure and often becoming more repetitive. The first 75–90% of the response is fine, but consistently, as it reaches the end, it starts producing run-on sentences or takes simple 2–4 word items and turns them into novels. Has anyone else noticed this, and does anyone have ideas on ways to mitigate it?

Here’s a typical example:
Album Name:Hairweld’s
Track List of 11 tracks:

  1. Hairweld’s
  2. When you hear the noise, I’ll be there
  3. The rivers are coming [Audiodump remix]
  4. If I liked flowers then…
  5. Knock me down (that tree)
  6. Virginia Woolf came to my party, but she never left her mark on me!
  7. Why don’t you go over Niagara Falls in a barrel? (The luckiest man who ever lived!)
  8. Scientific Sandwich Contemplation Society Theme Song: Annie Don’t Look Now Baybay!!! oh yeah yeah yeah away… gonna get a new pair of shoes going up yonder it! an’ i’m really diggin’ that mess! youah wan’-- and i love each one of 'ya too baby!!! so whatcha think ya doin? oh yeamhacat!! {Yea… Uh} suckapoo mr lee hankeson sonnyjim jerryjohnson yer mommy flynn-nough oot the window where’d he stop and turn around la da dapunkeyyeahhhh!!!doot DOOOTBAH DAHHH HAHHHH … AAAAAAH… for we are

I noticed that too. The INSTRUCT series tends to be more concise.

For what it’s worth, from a little experimentation I’m pretty sure this has something to do with a high presence penalty: the rambling occurs far less when presence_penalty is low.
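A rough sketch of what I mean, as request parameters for the (legacy) Completions API — the specific values here are illustrative assumptions, not tuned recommendations; the idea is just to keep presence_penalty low and add an explicit stop sequence so the completion ends cleanly after the list:

```python
# Illustrative Completions API parameters; values are assumptions, not tuned.
request = {
    "engine": "davinci",
    "prompt": "Album Name: Hairweld's\nTrack List of 11 tracks:\n1.",
    "max_tokens": 150,          # cap length so the tail can't run on forever
    "presence_penalty": 0.0,    # high values seemed to correlate with rambling
    "frequency_penalty": 0.3,   # a mild penalty still discourages verbatim repeats
    "stop": ["\n\n", "12."],    # end after the list, before a 12th item appears
}

# With the openai package this would be sent as:
#   openai.Completion.create(**request)
```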


I was gonna suggest this as well. Another thing to keep in mind: if a list/completion is too long, GPT is more likely to ramble about the tokens at the bottom of the prompt than those at the top.
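One workaround that follows from this (a sketch under my own assumptions, not something from the API docs): instead of asking for all 11 items in a single completion, request the list in smaller chunks, so no single completion has a long, ramble-prone tail. The chunk size here is arbitrary:

```python
# Hedged sketch: split one long list request into several shorter ones.
# Chunk size is an arbitrary assumption; each range would become one
# completion request, seeded with the items generated so far.
def chunk_prompts(total_items, chunk_size):
    """Yield (start, end) item ranges, one per completion request."""
    for start in range(1, total_items + 1, chunk_size):
        yield start, min(start + chunk_size - 1, total_items)

ranges = list(chunk_prompts(11, 4))
# -> [(1, 4), (5, 8), (9, 11)]
```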
