I’ve noticed some odd behavior where the responses I’m getting look a lot like the prompts I’ve been giving it. Is this possible? Does the system have some kind of feedback mechanism?
To answer that, it helps to understand what a “response” actually is and how this technology works.
When GPT-3 generates text from an input, what’s really happening is that it predicts which text is most likely to come next, given the input and the sampling settings. It’s not surprising that the continuation reminds you of the input, because it is conditioned directly on it.
If instead you mean the generated text reminds you of your *previous* prompts, that’s a coincidence: the model is stateless between requests, so there is no feedback mechanism. The resemblance is most likely because you used similar context in those prompts.
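That next-step prediction can be sketched with a deliberately tiny stand-in model (a hypothetical hand-made lookup table, not GPT-3). The point it illustrates is that each generated token is chosen purely from the text accumulated so far, so the continuation is always anchored to the prompt, and nothing carries over from one call to the next:

```python
def next_token(context, table):
    # Condition only on the most recent context (here, the last two
    # characters); a real model conditions on the whole prompt.
    return table.get(context[-2:])

def generate(prompt, table, max_new=10):
    # Stateless generation loop: the output depends only on what is
    # passed in, so every call starts fresh -- no memory of past prompts.
    text = prompt
    for _ in range(max_new):
        tok = next_token(text, table)
        if tok is None:
            break
        text += tok
    return text

# Hypothetical "model" for illustration: maps a 2-char context to the
# character it predicts comes next.
model = {"he": "l", "el": "l", "ll": "o"}

print(generate("he", model))  # -> "hello"
```

Because the table has no entry for "lo", generation stops there; with a different prompt (or a different table), you would get a different continuation, which is all the resemblance you are observing amounts to.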