Ethics of AI - Put your ethical concerns here

The thing is, LLMs are not capable of producing genuinely creative thoughts the way humans do. They are inherently limited to recombining material that already exists in their training data (i.e. in the verbal tradition of humankind up to the cutoff date).

This kind of dispute modeling looks pretty cool for a while, until you start noticing repeating patterns after 6 or 8 rounds or so. At first, the debaters’ positions are quite crude and superficial, similar to what you get from basic prompts. Under the pressure of the opposition and the “reinforcement” imitated by the crowd, they gradually become more nuanced and sophisticated. But eventually they reach a point where there’s nothing substantial left to add to their ideas. They start to fluctuate around some kind of dynamic equilibrium and simply repeat futile attempts to convince each other that they are right; that’s when you know the model has hit the wall. Even then, it’s nowhere near the quality of a compelling human discussion.
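For context, by “dispute modeling” I mean something like the rough loop below: two debater personas argue in rounds, and a simulated crowd scores each argument, with that feedback fed back in as the imitated “reinforcement”. This is purely an illustrative sketch; `ask_llm()` is a hypothetical placeholder for whatever chat-completion call you use, not a real API.

```python
# Illustrative sketch of the debate loop described above.
# ask_llm() is a hypothetical placeholder, not a real library call.

def ask_llm(system: str, prompt: str) -> str:
    """Placeholder: send a system prompt + user prompt to an LLM, return its reply."""
    raise NotImplementedError("wire this up to whatever model/API you use")

def run_debate(topic: str, rounds: int = 8) -> list[dict]:
    debaters = {
        "pro": f"You argue FOR the position on: {topic}",
        "con": f"You argue AGAINST the position on: {topic}",
    }
    transcript: list[dict] = []
    last_argument = f"Opening statements on: {topic}"

    for rnd in range(rounds):
        for side, persona in debaters.items():
            # Each debater rebuts the opponent's latest argument.
            argument = ask_llm(persona, f"Rebut and improve on this:\n{last_argument}")

            # A simulated "crowd" rates the argument; its feedback is folded back
            # into the next prompt -- the imitated reinforcement mentioned above.
            feedback = ask_llm(
                "You are an audience of laypeople. Rate the argument 1-10 and say why.",
                argument,
            )
            transcript.append({"round": rnd, "side": side,
                               "argument": argument, "crowd_feedback": feedback})
            last_argument = f"{argument}\n\nCrowd feedback: {feedback}"

    return transcript
```

After a handful of rounds the arguments in a loop like this stop gaining substance, which is the equilibrium I described above.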

Look, no offense, but I was initially interested in conversations on the original topic, and my budget for off-topic effort has run out! :smile: There are plenty of guides online on how to achieve this, as well as on how to create persistent characters for your chats (although they obviously won’t be able to remember more than the memory limits allow).