I tell it to make punctuation and sometimes spelling mistakes, but it doesn’t. I tell it not to put a comma, not to put a period, yet it keeps doing it, as if out of spite.
Sorry for my bad English.
Hi! And welcome to the community!
You can try including a few sample sentences that contain spelling errors directly in the prompt.
I suggest this because it’s actually quite easy to create such a persona, but once the model starts making mistakes on purpose, the whole reply can become quite random.
https://chat.openai.com/share/0e9a818b-f960-4d5c-863b-a0e222eecd75
You can try to integrate this approach with your prompt and see what happens.
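To make the few-shot idea concrete, here is a minimal sketch of how such a prompt could be assembled. The system wording and the example sentences are made up for illustration; you would substitute your own examples with the kinds and amount of errors you want:

```python
# Hand-written examples showing the error style the model should imitate.
# These pairs are hypothetical; replace them with your own samples.
few_shot_examples = [
    ("How was your weekend?",
     "it was grate we went to the beach and i forgot sunscreen lol"),
    ("What did you think of the movie?",
     "honestly kinda boring i fell asleep halfway thru"),
]

system_prompt = (
    "You write casual replies with occasional spelling mistakes and "
    "missing punctuation, like the examples below.\n\n"
)
for question, reply in few_shot_examples:
    system_prompt += f"Q: {question}\nA: {reply}\n\n"

print(system_prompt)
```

The assembled string would then go into the system message of your chat request; the examples tend to steer the error rate better than an abstract instruction alone.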
This is similar to a prompt that returns EU formatted numbers.
In a recent thread I was following about a no/low-code platform that did not support EU number formats, participants claimed generative AI wasn’t precise enough to convert numbers into the EU rendering equivalent - i.e., $2,192.45 → $2.192,45. This document demonstrates the prompt I used, applied across 1,080 random tests. All 1,080 tests produced the correct result, a sample size large enough to be statistically meaningful.
TL;DR Here’s the Coda prompt I used to compel the LLM to be consistent.
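For comparison, the US-to-EU conversion the tests checked can also be expressed deterministically in a few lines; this sketch simply swaps the thousands and decimal separators (the function name is my own, not from the Coda prompt):

```python
def to_eu_format(us_number: str) -> str:
    """Swap US separators for EU ones: $2,192.45 -> $2.192,45."""
    # Route through a placeholder so the two swaps don't collide.
    return us_number.replace(",", "\0").replace(".", ",").replace("\0", ".")

print(to_eu_format("$2,192.45"))  # $2.192,45
```

Having a deterministic reference like this is also handy for verifying the LLM’s output programmatically across a large test batch.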
How do I make it not use punctuation?
Search Keywords: gpt “fine-tune” copy my “writing style” - depending on whether this is actually an API category question.
Good analogy and thanks for sharing!
I believe this case is a little tricky when trying to fine-tune the prompt toward the desired amount and types of errors, because for my personal taste “famus” instead of “famous” is a bit too glaring an error.
Applying your straightforward solution yields this result in ChatGPT 3.5:
https://chat.openai.com/share/1b8f9f89-fc74-4bc7-9b95-3574dda21f2b
Works well and should be easy to implement for OP.
PS. The shirty typo is my own fault.