-
Consider: Would OpenAI put a supervisor or special training on ChatGPT so that it cannot be employed for repetitive data processing tasks or operations that look like training data extraction? That’s one plausible scenario without seeing exactly what’s going on with all of your inputs and seeing what kind of refusal you are receiving. You can prompt against that behavior to make the AI always output particular text to start instead of having the choice of saying “I’m sorry”.
-
The AI uses sampling to choose randomly from probabilities of output for each word token it emits. If the first token is 50% “call a function”, and then after that, 75% the knowledge function, you are going to get very different answers on runs. That is by design. Always the same output means an answer will never have a better variation, and there will never be something better for you to press the upvote button on. The sequences of words become very inhuman.