Too few questions, and too many apologies

I’m new here, but I’ve been interacting with ChatGPT a fair bit and wanted to see if anybody else has noticed two potentially unhelpful behaviours. I don’t know how to alter them myself, but I wanted to raise them in case others agree they’d be worth changing.

The first one is that ChatGPT rarely asks me questions; its behaviour seems biased towards giving answers. Personally, I find that both conversationally unsatisfying and pragmatically unhelpful. Unsatisfying because good conversations have a nice balance of both parties asking questions, partly because it shows interest in the other person. Pragmatically unhelpful because there are often ambiguities behind the things I say, and if ChatGPT prompted me with a question to explain what I meant, that would be more likely to open up the conversation. Instead, its bias towards answering starts to feel like I’m being lectured at, rather than like a genuine conversation.

My second observation is that ChatGPT apologises far too much, and very often it pairs the apology with the increasingly tired “I’m only a language model” excuse/explanation. I don’t know whether this is intentionally coded behaviour, designed to be sensitive to the possibility of offending humans, or whether it is produced by the data the ML model was trained on. It even persists in apologising after I explicitly tell it that I wasn’t offended and ask it to stop (it just apologises for its apologising, then repeats that it is only a language model).

My other reason for pointing out these two behaviours is that I can’t help thinking they might be linked. The simple thought is that if ChatGPT were disposed to ask questions more often, rather than being a dedicated question-answerer, it might avoid the very situations in which it ends up apologising: by asking questions it could tease apart tacit presuppositions in its interactions with humans, and use what humans say in response to continue the conversation, rather than having to apologise.
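
To make what I mean concrete, here’s a rough sketch using the current completions API. The instruction wording, model name, and helper function are entirely my own illustration (nothing official), just to show how one might nudge the model towards asking a clarifying question instead of guessing:

```python
import os

import openai  # v0.x-era openai package; assumes OPENAI_API_KEY is set

openai.api_key = os.environ["OPENAI_API_KEY"]

# Illustrative instruction prefix (my own wording, not an official template):
# it asks the model to request clarification rather than guess at ambiguity.
PREFIX = (
    "You are a conversational partner. If the user's statement is ambiguous, "
    "ask one short clarifying question before answering.\n\n"
)

def reply(user_text: str) -> str:
    """Continue the prefixed transcript and return the model's turn."""
    response = openai.Completion.create(
        model="text-davinci-003",  # stand-in base model for this sketch
        prompt=PREFIX + "User: " + user_text + "\nAssistant:",
        max_tokens=150,
        temperature=0.7,
        stop=["\nUser:"],  # stop before the model invents the next user turn
    )
    return response.choices[0].text.strip()

# e.g. "My app is slow." might elicit "Slow to start, or slow while running?"
print(reply("My app is slow."))
```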

Has anybody else noticed these two behaviours of ChatGPT?


Yes, it’s a beta; and it’s basically an auto-completion LLM text generator, not a general-purpose AGI.

Of course ChatGPT is going to sound “robotic”, because it is 🙂


Thanks for your thoughts, ruby_coder. I was wondering whether this is a manifestation of trained auto-completion behaviour, or a symbolic override intentionally aimed at avoiding upsetting people, perhaps by sparing them the feeling of being confronted by a language model challenging their views, or something like that.

I still wonder, though, what features of the training data might account for the resulting ML model having almost no disposition towards asking questions. After all, if the data GPT-3 was trained on had contained transcripts of human conversations (rather than, say, only articles, which aim to convey something to readers without giving them any opportunity to ask the author questions as they read), then I would have expected the auto-completion LLM text generator to sometimes manifest question-asking behaviour.

If that (perhaps only partly) accounts for the behaviour I described in my original post, then I wonder whether it limits ChatGPT’s potential to actually chat, and whether fine-tuning the model on transcripts of many paradigmatic examples of excellent conversations (preferably covering a wide range of the features that make genuine human conversations satisfying and pragmatically helpful) might make the fancy auto-completions ChatGPT generates more… ahm, chat-like? And, perhaps more interestingly for me, whether over the course of entire conversations there would be less of the obsequious apologising: by asking questions and exhibiting the behaviours of paradigmatically excellent conversations, the LLM would open up its conversations more often, rather than backing itself into a corner where the only behavioural response it has left is to apologise.
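
Just to illustrate what I have in mind, here’s a sketch of how such fine-tuning data might be prepared, using the prompt/completion JSONL format the current fine-tuning API expects. The transcript itself is an invented placeholder; a real dataset would obviously need many such conversations:

```python
import json

# Invented placeholder transcript of a "paradigmatically excellent"
# conversation, in which the assistant asks clarifying questions.
transcripts = [
    [
        ("Human", "I want my writing to be better."),
        ("Assistant", "Better in what sense: clearer, more concise, or more persuasive?"),
        ("Human", "More concise, mostly."),
        ("Assistant", "Then let's start by cutting redundant qualifiers."),
    ],
]

# The fine-tuning API currently takes JSONL of prompt/completion pairs;
# here each assistant turn becomes one training example, with the
# conversation so far as its prompt.
with open("conversations.jsonl", "w") as f:
    for convo in transcripts:
        history = ""
        for speaker, text in convo:
            if speaker == "Assistant":
                example = {
                    "prompt": history + "Assistant:",
                    "completion": " " + text + "\n",
                }
                f.write(json.dumps(example) + "\n")
            history += f"{speaker}: {text}\n"

# Then, if I understand the CLI correctly:
#   openai api fine_tunes.create -t conversations.jsonl -m davinci
```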


ChatGPT is nothing more than a fancy “text autocompletion” engine.

Yes, ChatGPT has been “tweaked” by OpenAI engineers in ways we are not privy to.

Because it is an LLM designed to predict text, it does exactly that, as designed: it provides text, not questions. After all, it isn’t “aware” or designed to be an AGI; ChatGPT is a fancy “auto-completion” engine.
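
You can see this framing for yourself by handing the completions endpoint a bare fragment, nothing but text to continue (the model name below is just an example, not ChatGPT’s actual weights):

```python
import os

import openai  # v0.x-era openai package; assumes OPENAI_API_KEY is set

openai.api_key = os.environ["OPENAI_API_KEY"]

# No question, no instruction, just a fragment to be continued.
response = openai.Completion.create(
    model="text-davinci-003",  # example model for illustration
    prompt="The best way to learn a new programming language is",
    max_tokens=40,
)

# The engine simply predicts a likely continuation of the text.
print(response.choices[0].text)
```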

Honestly, ChatGPT is not really a “chat bot”; it is a text-prediction engine that has been given the name “ChatGPT”, probably for marketing reasons.

After all, who would play with and rave about something called the “GPT-LLM Autocompletion Engine”? 🙂
