I asked ChatGPT to generate some sample data and I always get about 25–26 records, taking a good few seconds, which I suppose is due to compute limitations? I’m using the Plus version, but the performance seems to be the same on free and Plus.
And more importantly, why do I only get 25 records when I asked it to generate 100? It seems like the output is capped to minimise compute use.
Yes, ChatGPT limits the length of its output. A small example: if you asked for 1 million results and there were no limit, you could quickly cripple the service.
However, there is a way to get your 100 results, even if it is a bit more complicated:
Input 1 (prompt & specification of how to answer):
“Your prompt”. Split your answer into four replies; send me the first one now, and when I send “next”, send the next one.
Input 2 (let ChatGPT continue):
next
and so on…
This works reliably for me; hope it helps you as well.
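If you prefer to script the same “send ‘next’” pattern instead of doing it in the chat UI, here is a minimal sketch using the OpenAI Python client. The model name, the chunk count of four, and the sample-data prompt are illustrative assumptions, not something from this thread:

```python
# Sketch of the "split into chunks, then send 'next'" pattern via the API.
# Assumes the `openai` Python package (v1.x client) and an API key in
# OPENAI_API_KEY; model name and chunk count are illustrative only.
from openai import OpenAI

client = OpenAI()

messages = [
    {
        "role": "user",
        "content": (
            "Generate 100 sample sales records as CSV. "
            "Split your answer into four replies; send the first reply now, "
            "and when I send 'next', send the following one."
        ),
    }
]

chunks = []
for _ in range(4):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=messages,
    )
    reply = response.choices[0].message.content
    chunks.append(reply)
    # Keep the conversation history so the model knows where it left off.
    messages.append({"role": "assistant", "content": reply})
    messages.append({"role": "user", "content": "next"})

print("\n".join(chunks))
```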
Another option I’d suggest is the Playground, where you can increase the maximum number of tokens used for the reply. I think a default of around 256 tokens applies via regular access, which also fits the approx. 25 records you are getting.
Please note that the pricing model might be different there, so I’d advise going this route only if the previously suggested solution does not work for you.
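For reference, the same “maximum length” setting the Playground exposes can also be passed through the API as `max_tokens`. A minimal sketch, where the model name and the value of 2048 are assumptions for illustration:

```python
# Sketch: raising max_tokens through the API, the equivalent of the
# Playground's maximum-length slider. Model name and limit are illustrative.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Generate 100 sample sales records as CSV."}],
    max_tokens=2048,  # raise this if the reply is being cut short
)
print(response.choices[0].message.content)
```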
Given your example, I would respond with the following prompt:
Continue starting with transaction date 2023-03-10:
What’s curious is that your example ended with basically the current date. I wonder if it would produce more results if you said something like “Starting with date 2023-01-01” in your initial prompt.