I’ve been trying to gauge the variance of GPT-4’s responses to a prompt across a set of data, and I’m limited by the 25-messages-per-3-hours cap. ‘Regenerate response’ seems to be a way around that limitation, but my protocol so far has been to clear the chat entirely, and I don’t know whether regenerating is equivalent.
Has there been any official word? Are those pathways equivalent?
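For concreteness, by “gauging variance” I mean sampling the same prompt several times and comparing the outputs. A minimal sketch of that idea against the API instead of the web UI, assuming the pre-1.0 `openai` Python package and a hypothetical `prompt` string (the API has its own rate limits rather than the web cap, and is billed separately):

```python
import openai

openai.api_key = "sk-..."  # your key here

prompt = "..."  # hypothetical: the prompt whose variance I want to gauge

samples = []
for _ in range(5):
    completion = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,  # default sampling temperature
    )
    samples.append(completion.choices[0].message.content)

# Compare the samples however you like (length, diffs, a grading rubric, ...).
```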
From my observations, the regenerate-response feature represents a clean slate: my assumption is that only the previous messages in the chain (!) are injected as context, and the discarded answer is not.
For example: ask for a step-by-step task list, then prompt the model to finish the first task. Then go back, rewrite that prompt to ask for the second task, and repeat until done. You will notice that the results of the previous task have to be passed in manually as context in the rewritten request, or the model will start hallucinating what the solution to task 1 looks like (see the sketch below).
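A way to picture this, assuming the web UI behaves like the stateless chat-completions API underneath (an assumption on my part, not anything OpenAI has confirmed): ‘regenerate’ would simply resend the same messages list, and an edited prompt would only carry the messages above it, so anything from the discarded branch has to be pasted back in by hand. A rough sketch with hypothetical message contents:

```python
import openai

history = [
    {"role": "user", "content": "Create a step-by-step task list for X."},  # hypothetical prompt
    {"role": "assistant", "content": "1. ...\n2. ...\n3. ..."},             # the returned task list
    {"role": "user", "content": "Finish task 1."},
]

# "Regenerate" = resend the exact same history; the discarded attempt is not in the context.
attempt_1 = openai.ChatCompletion.create(model="gpt-4", messages=history)
attempt_2 = openai.ChatCompletion.create(model="gpt-4", messages=history)

# "Edit the prompt to ask for task 2" = branch off the same prefix; the solution to task 1
# is only in context if you paste it in yourself.
edited = history[:-1] + [
    {"role": "user", "content": "Here is the solution to task 1: <paste it here>. Now finish task 2."}
]
attempt_3 = openai.ChatCompletion.create(model="gpt-4", messages=edited)
```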
On a different note, I am part of the faction that claims to experience severe performance issues with GPT-4. My workaround is to regenerate the bad responses, and I often observe higher quality on the second attempt, without providing any additional input, of course.