How can I get the GPT-4 API to avoid displaying detailed code to shorten the response time?

I use the GPT API’s function calling to first read data through the external function queryData, and then use that data to draw charts. Because the amount of queried data is large, after I submit the query results the API responds with code for drawing the chart; since there is so much data, listing each data point individually in that code is very time-consuming. How can I get the GPT-4 API to avoid displaying detailed code, to shorten the response time?

def call_required_functions(client, required_actions, projectid, thread, run,
                            sessionid, news_api_key, object_list, running_status,
                            assistant_id):
    if not run:
        return
    tool_outputs = []
    print("length of required_actions:", len(required_actions["tool_calls"]), required_actions)
    for action in required_actions["tool_calls"]:
        print("action", action)
        func_name = action["function"]["name"]
        arguments = json.loads(action["function"]["arguments"])
        if func_name == "queryData":
            outputs_dataframe = queryData(arguments["object"], arguments["parameter"],
                                          arguments["starttime"], arguments["endtime"],
                                          projectid)
            outputs = dataframe2str(outputs_dataframe)
            send_msg(sessionid, projectid, outputs, running_status)
            tool_outputs.append({"tool_call_id": action["id"], "output": outputs})

Your formatted code is back, above.

I think I understand the issue:

  • You ask the AI to write Python code compatible with the dataframe being returned by a function call.
  • The AI does what you ask, writing Python that includes all the data sent to it.
  • Perhaps the AI even repeats back what it received first (which could be reduced by prompting, even prompting in the function return).

I think I understand the solution being sought:

You don’t want code with the full data embedded in it - you just want working code with placeholder data, and you can record the full data the function sent separately?

  1. Do we have enough information to provide a truncated and elided version of the return data with the same structure?
    Yes. From the existing code snippet, we can see that the data is returned as a dataframe, then converted to a string and sent to GPT-4. This is enough to insert a truncation or summarization step that yields a smaller dataset while preserving the same columns and data types.

  2. Proposed solution (high-level):
    • After receiving the full dataset in your Python application, create a secondary “mini-dataset” (e.g., only the first 5–10 rows of the DataFrame). Convert that mini-dataset to a string for GPT-4.
    • In parallel, keep the full dataset in Python memory (or saved on disk). GPT-4 will generate charting code or other logic with placeholders referencing that smaller example.
    • You then apply that code to your full dataset locally, thus maintaining the same structure but without flooding GPT-4 with huge arrays or lists.
    • This approach shortens the GPT-4 response and avoids the overhead of verbose code that includes every data point.

  3. Example amendments to code:

Below is a conceptual example of how you might modify your existing function call logic. Imagine a helper function, truncated_dataframe2str, that returns only a small (or representative) subset of the data for GPT-4’s use:

import json

# Example helper function to produce an "elided" version of the data
def truncated_dataframe2str(df, max_rows=10):
    # Take only the first `max_rows` rows
    truncated_df = df.head(max_rows).copy()
    truncated_str = truncated_df.to_csv(index=False)  # or any other format
    # Optionally add some note indicating data was truncated
    truncated_str += "\n[Data truncated for AI output. Full dataset is stored locally.]"
    return truncated_str

def call_required_functions(client, required_actions, projectid, thread, run,
                            sessionid, news_api_key, object_list, running_status,
                            assistant_id):
    if not run:
        return

    tool_outputs = []
    print("length of required_actions:", len(required_actions["tool_calls"]), required_actions)
    
    for action in required_actions["tool_calls"]:
        print("action", action)
        func_name = action["function"]["name"]
        arguments = json.loads(action["function"]["arguments"])
        
        if func_name == "queryData":
            outputs_dataframe = queryData(
                arguments["object"],
                arguments["parameter"],
                arguments["starttime"],
                arguments["endtime"],
                projectid
            )
            
            # Store the full dataset locally or pass it to the rest of your app
            # e.g., full_data = outputs_dataframe.copy()
            
            # Generate a small/truncated dataset for GPT-4
            truncated_output_str = truncated_dataframe2str(outputs_dataframe, max_rows=10)
            
            # Send only the truncated data back to GPT-4
            send_msg(sessionid, projectid, truncated_output_str, running_status)
            
            # Keep track of which tool call got which truncated data
            tool_outputs.append({
                "tool_call_id": action["id"],
                "output": truncated_output_str
            })
    
    return tool_outputs
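To make the idea concrete, here is a self-contained demo of the truncated_dataframe2str helper sketched above, run against a synthetic stand-in for the DataFrame that queryData might return (the helper name and truncation note are illustrative, not a library API):

```python
import pandas as pd

# Same helper as sketched above (hypothetical name, not a library function)
def truncated_dataframe2str(df, max_rows=10):
    # Take only the first `max_rows` rows
    truncated_df = df.head(max_rows).copy()
    truncated_str = truncated_df.to_csv(index=False)
    # Note for the model that the data was truncated on purpose
    truncated_str += "\n[Data truncated for AI output. Full dataset is stored locally.]"
    return truncated_str

# Synthetic stand-in for a large query result
df = pd.DataFrame({
    "timestamp": pd.date_range("2024-01-01", periods=1000, freq="h"),
    "value": range(1000),
})

preview = truncated_dataframe2str(df, max_rows=5)
print(preview)  # CSV header, 5 rows, and the truncation note
```

The model now sees the exact column names and value formats it needs to write generic charting code, but only a handful of rows.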

Notes:
• queryData and dataframe2str in your current code would be replaced or supplemented by truncated_dataframe2str, as shown above.
• You could further refine truncated_dataframe2str to add dummy entries or placeholders, so the schema (column names, data types) still matches but the actual data is minimal.
• In your prompt to GPT-4, you can specify that it should reference these shortened data arrays only as illustrations, not attempt to handle the full dataset in-line.
• After GPT-4 generates the chart code (or other logic) using the truncated dataset, you can apply that code to your full dataset in your local environment.
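On the second note above (dummy entries that preserve the schema), one possible sketch is a helper that copies the real column names and dtypes but fills in synthetic values. The helper name and the placeholder values are assumptions, not anything from your existing code:

```python
import numpy as np
import pandas as pd

# Hypothetical helper: keep the real schema (column names and dtypes) but
# replace the values with a handful of synthetic placeholder rows.
def placeholder_frame(df, n_rows=3):
    dummy = {}
    for col in df.columns:
        dtype = df[col].dtype
        if np.issubdtype(dtype, np.datetime64):
            dummy[col] = pd.date_range("2000-01-01", periods=n_rows, freq="h")
        elif np.issubdtype(dtype, np.number):
            dummy[col] = np.zeros(n_rows, dtype=dtype)
        else:
            dummy[col] = ["placeholder"] * n_rows
    return pd.DataFrame(dummy)

# Synthetic stand-in for a real query result
real = pd.DataFrame({
    "ts": pd.date_range("2024-01-01", periods=500, freq="h"),
    "temperature": np.linspace(10.0, 30.0, 500),
    "site": ["A"] * 500,
})
dummy = placeholder_frame(real)
print(dummy.dtypes)  # same columns and dtypes as `real`, only 3 rows
```

Sending `placeholder_frame(outputs_dataframe)` instead of the raw frame gives GPT-4 everything it needs to write schema-correct code while exposing essentially no data.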

By combining internal (full) vs. truncated (example) data, you ensure GPT-4 doesn’t spin off massive code listings but still obtains enough data structure to generate meaningful chart or processing instructions.
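The last step - applying the generated code to the full dataset locally - could look like the sketch below. The generated_code string is an illustrative stand-in for whatever the assistant returns; in practice you would extract it from the reply, and you should review model-generated code before executing it:

```python
import pandas as pd

# Full dataset stays local; only the truncated preview ever went to GPT-4
full_data = pd.DataFrame({"x": range(100), "y": [v * v for v in range(100)]})

# Illustrative stand-in for code returned by the model
generated_code = """
def summarize(df):
    # written against a 10-row preview, but generic over any frame
    return df["y"].max()
"""

namespace = {}
exec(generated_code, namespace)          # define the function locally
result = namespace["summarize"](full_data)  # apply it to the FULL dataset
print(result)  # 9801
```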

Of course, with only one snippet of code, and no prompts or other information about external functions, guessing is required.


Thanks. It is OK to truncate some of the data when the task is to query data and create a chart, but when the task is to query data and produce a data report from the full data, the data cannot be truncated. Some time ago the GPT API did not show the Python code block in the response, but now it does, and that really impacts the response time.