API not returning full table


I have a longer list of data which I'm asking GPT to put into a data table. It does fine until around 50 rows, where I get this:

“…[Please note that the table continues as per the list provided, maintaining the format and including all wines listed.]…”

I'm not sure how to get the remaining text via the API. Here is my code:

def ask_gpt_question(extracted_text, question):
    response = openai.ChatCompletion.create(
        model="gpt-4-vision-preview",  # the model name was omitted above; substitute whichever model you use
        messages=[
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": question},
                    {"type": "text", "text": extracted_text},
                ],
            }
        ],
    )
    print(response)  # Print the response for debugging purposes
    parsed_text = response["choices"][0]["message"]["content"]
    return parsed_text.strip()


Welcome to the community!

It seems you're facing a challenge structuring a longer list of data into a data table using the GPT model through the API. The response you quoted, where the model says the table continues, is likely due to token limits on the model's output.
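One way to confirm whether it is a token limit is to check the `finish_reason` field on the response. A quick sketch, assuming the same openai-python v0.x response shape as the code above:

```python
# Sketch: inspect why the completion stopped.
def stopped_for_length(response):
    # "length" means the reply hit the token limit mid-output;
    # "stop" means the model ended the reply on its own.
    return response["choices"][0]["finish_reason"] == "length"
```

If `finish_reason` is `"stop"`, the model chose to end the table on its own, and sending less data per request is the more reliable fix.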

To address this, you can modify your code to handle longer lists by dividing them into smaller chunks and processing them sequentially. Here’s a general approach:

def create_data_table(data_list):
    # Split the data into chunks of manageable size
    chunk_size = 10  # Adjust this based on your requirements
    chunks = [data_list[i:i+chunk_size] for i in range(0, len(data_list), chunk_size)]

    # Initialize an empty result string
    result = ""

    # Iterate through chunks and generate responses
    for chunk in chunks:
        # Send this chunk's rows as the text, with the instruction as the question
        parsed_text = ask_gpt_question(", ".join(chunk), "Create a data table with the following rows:")

        # Append the parsed text to the result
        result += parsed_text + "\n"

    return result

This approach splits your data into smaller chunks, processes each chunk separately, and then concatenates the results. Adjust the chunk_size variable based on your specific needs and token limitations.
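To illustrate just the splitting step, here is the same chunking expression run on a small hypothetical list (no API call involved):

```python
# Hypothetical 25-row list, chunked the same way as above.
data_list = [f"wine {i}" for i in range(1, 26)]
chunk_size = 10
chunks = [data_list[i:i+chunk_size] for i in range(0, len(data_list), chunk_size)]

# 25 rows with chunk_size 10 gives chunks of 10, 10 and 5 rows.
sizes = [len(c) for c in chunks]  # [10, 10, 5]
```

Each chunk then becomes one API request, so no single response has to carry the whole table.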


Yep, once you've seen AI-generated text and code enough times, you know exactly what it looks like…


I'm new here; it's good to know that you explain how to use ChatGPT.

The issue you are facing is that this model has not had its "laziness" fixed: it will do just about everything possible to terminate output at around 800 tokens, a habit from its supervised training.

If you have a task that clearly should not stop mid-output (wasting your money when it does), I would use a full and real GPT-4 model, such as gpt-4-0613. This is also likely a task that gpt-3.5-turbo-0301 could complete without devolving into an incomplete output that tells you to do the rest yourself, or acting as if it is only writing an example.
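Whichever model you pick, if a reply is still cut short you can apply the common "continue" trick: feed the truncated reply back as an assistant message and ask the model to resume. A sketch, where `send` is a placeholder for your own API-calling function (not something from the posts above):

```python
def continue_table(messages, partial_reply, send):
    """Ask the model to resume a truncated reply.

    `send` is a stand-in for whatever function sends a list of
    messages to the API and returns the assistant's text.
    """
    followup = messages + [
        # Show the model its own truncated output...
        {"role": "assistant", "content": partial_reply},
        # ...and ask it to pick up exactly where it stopped.
        {"role": "user",
         "content": "Continue the table exactly where you left off. Do not repeat rows."},
    ]
    # Stitch the original fragment and the continuation together.
    return partial_reply + "\n" + send(followup)
```

You can loop this until the reply no longer ends mid-table, though each round re-sends the conversation and costs extra tokens.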


@_j Your answer is very good; it will help here.
