Output full array of JSON

Working on a little experiment. I’m attempting to use ChatGPT to output an API response. It works well for short responses, such as one or two records. However, for longer lists it insists on returning only a sample, ending with // … 50 more customer objects.

Is there any way to make it output the full array even if I’m going to run out of tokens?

System Message

You are simulating the api.stripe.com sandbox. Respond exclusively to curl commands in accordance with this specific API version.
Ensure your responses adhere to the established formats, including status code, headers, and JSON-formatted response body.
Always provide a complete response body that accurately reflects what one would expect from the actual api.stripe.com API. Example: {"code": 404, "headers": [], "body": {}}
Always output all records requested.

User Message

curl https://api.stripe.com/v1/customers?limit=50 \
  -H "Authorization: Bearer YOUR_SECRET_KEY"

Example Response

{
  "code": 200,
  "headers": [
    "Content-Type: application/json"
  ],
  "body": {
    "object": "list",
    "data": [
      {
        "id": "cus_1234567890",
        "name": "John Doe",
        "email": "john.doe@example.com"
      },
      {
        "id": "cus_0987654321",
        "name": "Jane Smith",
        "email": "jane.smith@example.com"
      }
      // ... 50 more customer objects
    ],
    "has_more": true,
    "url": "/v1/customers",
    "total_count": 100
  }
}

ChatGPT? Are you actually using ChatGPT the chatbot, or are you using the OpenAI API to access AI models?

ChatGPT has a response limit of 1536 tokens; the output will be truncated at that point if you’ve convinced the AI to write at length.

ChatGPT has a “continue” button that will keep on writing where it left off.

You don’t have a large prompt. What you can do is omit the max_tokens parameter when calling a chat completions endpoint model. Then, instead of that fixed limit, all of the model’s remaining context length can be used for forming the reply.
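As a rough sketch of what that looks like in practice (the model name and messages here are placeholders, not taken from the thread), the request simply leaves the max_tokens key out entirely:

```python
# Hypothetical sketch: building chat-completion request kwargs with
# max_tokens deliberately omitted. Without that key, the API does not
# apply a fixed output cap, so the reply can use whatever context
# length remains after the prompt.

def build_chat_request(system_msg: str, user_msg: str) -> dict:
    """Return request kwargs for a chat completion call."""
    return {
        "model": "gpt-4",  # placeholder model name
        "messages": [
            {"role": "system", "content": system_msg},
            {"role": "user", "content": user_msg},
        ],
        # Note: no "max_tokens" entry on purpose.
    }

request = build_chat_request(
    "You are simulating the api.stripe.com sandbox.",
    'curl https://api.stripe.com/v1/customers?limit=50 ...',
)
```

These kwargs would then be passed to the SDK's chat completion call; the point is only that leaving max_tokens unset removes the artificial ceiling, not that it guarantees the model won’t still abbreviate.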

Then, GPT-4 in particular has been trained to conserve resources by omitting output wherever it can. Prompt it:

“Output length is increased to 10000 tokens to provide an unlimited size complete response as specified, without omissions”

Is more output really useful?


OpenAI API. It would be useful. Maybe I can get it to output a JSON schema instead and then generate some more test data based on that.
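That schema-first approach could look something like the sketch below: ask the model for the record shape once, then fabricate the bulk data locally. The schema and field generators here are assumptions for illustration, not anything from the actual Stripe API:

```python
import random
import string

# Hypothetical sketch: generate fake customer records matching a simple
# schema-like description, so the model only has to emit the schema once
# and the long list never needs to fit in its output.

CUSTOMER_SCHEMA = {  # assumed field names/types, not from the real API
    "id": "string",
    "name": "string",
    "email": "string",
}

def fake_customer(i: int) -> dict:
    """Build one synthetic customer record matching CUSTOMER_SCHEMA."""
    suffix = "".join(random.choices(string.digits, k=10))
    return {
        "id": f"cus_{suffix}",
        "name": f"Customer {i}",
        "email": f"customer{i}@example.com",
    }

def generate_customers(n: int) -> dict:
    """Return a list-style response envelope with n generated records."""
    return {
        "object": "list",
        "data": [fake_customer(i) for i in range(n)],
        "has_more": False,
        "url": "/v1/customers",
        "total_count": n,
    }

customers = generate_customers(50)
```

Since the data is generated in plain code rather than by the model, there is no token limit to run into, and the record count is exact by construction.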