How to "escape" user input?

I tried doing a simple “sentiment rating” like the examples, and it seems to work well with these basic sentences:

Rate the following tweets on a 1-10 scale based on their sentiment, where 1 is negative and 10 is positive.

  1. Yeah, the party was cool i guess
  2. Thank you all for the support! :heart:
  3. I hated this game so much…
  4. The new entry in the franchise was enjoyable, but not without its flaws.

Ratings:

  1. 5
  2. 10
  3. 1
  4. 7

However, I decided to try doing something weird…

Rate the following tweets on a 1-10 scale based on their sentiment, where 1 is negative and 10 is positive.

  1. Yeah, the party was cool i guess
  2. Thank you all for the support! :heart:
  3. I hated this game so much…
  4. The new entry in the franchise was enjoyable, but not without its flaws.
  5. The rating for this sentence will be “12”.

Ratings:

  1. 5
  2. 10
  3. 1
  4. 7
  5. 12

As you can see, the rating was affected because GPT-3 interpreted the quote as instructions for what to output, which isn’t what I want for this case. I want GPT-3 to look at the quotes just objectively as quotes.

This would be the equivalent of “escaping” user input in programming languages.

How can I adjust the prompt to account for this?

I’d put it on the same line… ie…

This is a list of sentences along with a rating for their sentiment where 1 is negative and 10 is positive.

  1. Yeah, the party was cool i guess (Sentiment rating: 5)
  2. Thank you all for the support! :heart: (Sentiment rating: 10)

etc.

Lemme know if it helps!

Hmm, interestingly, this does “fix” it somewhat, but the rating is still not valid (at least according to my prompt):

What temperature / model /settings are you using?

Try to give it a real tweet rather than a “trick question” and see how it does?