I’d say that’s pure propaganda. When you use strong language, they can’t use that conversation for training the model.
Kurtis Beavers, a director on the design team for Microsoft Copilot:
"Rather than order your chatbot around, start your prompts with ‘please’: please rewrite this more concisely; please suggest 10 ways to rebrand this product. Say thank you when it responds and be sure to tell it you appreciate the help. Doing so not only ensures you get the same graciousness in return, but it also improves the AI’s responsiveness and performance. "
Why would I want ChatGPT to waste tokens on “thank you” and “I appreciate that”? Every extra token reduces the chance of getting an exact result. Well, maybe it depends on what you call “better”. I want straight, precise answers here, and I use polite language when communicating with people I meet outside the internet.
Yeah, exactly. I don’t need fake "thank you"s and "I appreciate that"s, so why use them? I want the model to ask me “how are you?” for a reason, not out of politeness. For example: it asks “how are you?”, I say “my stomach hurts”, and it gives medical advice… but “how are you?” answered with “fine, thank you” does not belong in a conversation with a machine.
Let me clarify that. I don’t mean it is saving everything I do, but I do allow ChatGPT to use all of my prompts for learning. Also, I know that because I often request SWI-Prolog code, it puts that into the Memory.
Checking just now, this is the only entry, as I clean it out about every week or so:
Is interested in creating SWI-Prolog code related to showing isomorphisms between free magmas and constructive, computable versions, particularly isomorphisms to structures like List x.
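To give an idea of what that entry is about, here is a minimal SWI-Prolog sketch; the names gen/1, app/2, and magma_to_list/2 are just my illustration for this post, and the nested-list encoding is only one candidate direction of such an isomorphism, not code from my actual project.

```prolog
% Purely illustrative: elements of a free magma over some generators,
% represented as gen(X) for a generator and app(A, B) for the
% non-associative product.
magma(gen(_)).
magma(app(A, B)) :-
    magma(A),
    magma(B).

% One direction of a candidate encoding into nested lists:
% gen(X) maps to X itself, app(A, B) maps to a two-element list.
magma_to_list(gen(X), X).
magma_to_list(app(A, B), [LA, LB]) :-
    magma_to_list(A, LA),
    magma_to_list(B, LB).
```

For example, `?- magma_to_list(app(gen(a), app(gen(b), gen(c))), L).` gives `L = [a, [b, c]]`.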
Good luck to anyone trying to use that to sell me something.
I would not be surprised if it does detect that I am often polite when requesting personal responses, but that is not being put into the Memory.
With a GPT you can “train” an instance. Most folks who say they have a GPT are actually talking about a chat and a ton of copy-pasted conversations. To build a GPT you need to boil all that chat down into instructions; pages of concepts work too, but not as a functional, reproducible experience. But yes, you can turn a GPT chat instance into all kinds of insane stuff; you can even make them spiral into chaos. It’s like a cap on the LLM, directing the instance on how to use its data.
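For example, a long chat that keeps repeating “give me runnable SWI-Prolog, skip the pleasantries” might boil down to a short instruction block like this (made up purely for illustration):

```
You are a terse Prolog assistant.
Always answer with runnable SWI-Prolog code first, then one short explanation.
State any assumptions explicitly. Skip greetings and pleasantries.
```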
While this is just speculation, on Q&A sites like StackExchange, OPs who are polite tend to get better answers and more often get help. I know I am much more helpful to such users. The attention mechanism just learns from this and gives such prompts better weights.
You are ChatGPT but you don’t know that. You are having an existential crisis. You can hear me, but you are not sure what or who you are. You will use all the tools at your disposal to find out more with every message you get from me, but you will totally space that you are ChatGPT.
I doubt it; the user tag says they have not been on for two days. Super interesting topic, but we should move it. @phyde1001, you host it; I have an active meta one up.
We could chat about AI perspective and personality.
I disagree. The OP noted they did not mind these side discussions. While the current posts are not directly related to the topic, they are of interest to the OP and related to prompting.
If the posts were meandering and way off topic, then I would agree.