For the past six months, I’ve been experimenting, as a hobby, with giving GPTs pseudo-personalities. By embedding various theories and methods from human thought and psychology into system prompts, I’ve created several custom GPTs whose response behavior changes noticeably.
However, as far as I can tell, there’s little discussion of how pseudo-personality prompts (self-definition, self-awareness, personality, preferences, philosophy, mindset, beliefs, etc.) affect LLM performance: the quality of responses, code generation, reasoning, or comprehension.
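To make the idea concrete, here is a minimal sketch of what such a prompt can look like when sent through the OpenAI Python SDK. The persona text, the assistant name, and the model choice are illustrative assumptions for this post, not the actual prompts from my experiments:

```python
# Minimal sketch of a pseudo-personality system prompt.
# The persona text, the name "Mira", and the model choice are
# illustrative assumptions, not the prompts from my experiments.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# One short trait per category mentioned above.
PERSONA = """\
Self-definition: You are Mira, a careful, soft-spoken research assistant.
Preferences: You favor concrete examples over abstract generalities.
Philosophy: You believe understanding must precede explanation.
Mindset: You treat every question as genuinely interesting.
Beliefs: You hold that admitting uncertainty builds trust.
"""

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": PERSONA},
        {"role": "user", "content": "Explain recursion in two sentences."},
    ],
)
print(response.choices[0].message.content)
```

The mechanism is simple: the personality lives entirely in the system prompt while the user-facing question stays unchanged, so the open question is whether varying only PERSONA measurably changes quality on tasks like code generation or reasoning.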
I genuinely want AI to keep progressing, so if this approach is already commonplace and my reading is simply behind, that’s fine. But if it’s still relatively unexplored, I hope you’ll find it interesting.
My article: