Using gpt-3.5-turbo to interpret business data and charts

Hi,

I have an app that generates plots like this one

The comment “Sales of technology…” is generated with this code:

    def run_Open_Ai(prompt, nberOfResponses):
        namingParams = get_naming_params()
        configParams = get_config_params()
        openAiKey = configParams[namingParams["openAiKey"]]
        openai.api_key = openAiKey
        response = openai.Completion.create(
            engine="text-davinci-003",
            prompt=prompt,
            max_tokens=300,
            n=nberOfResponses,
            stop=".",
            temperature=0.6,
        )
        return response

My prompt contains the data of the chart:

    Explain the dataframe in one short sentence. Show what best explains the
    change in sales between 2011 and 2014 years. Do not consider items called
    other or unknown.

              Technology     Furniture  Office Supplies
    Period
    2011    8.276521e+05  7.559110e+05     6.755911e+05
    2012    1.023442e+06  8.579435e+05     7.950575e+05
    2013    1.277305e+06  1.117724e+06     1.010590e+06
    2014    1.616102e+06  1.378056e+06     1.305652e+06
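For reference, here is a hypothetical sketch of how a prompt like the one above could be assembled from the dataframe with pandas. The `build_prompt` helper and the way the instruction is concatenated are my assumptions, not the app's actual code; only the instruction text and the figures come from the prompt shown above.

```python
import pandas as pd

def build_prompt(df):
    # Hypothetical helper: prepend the instruction, then append the
    # dataframe rendered as plain text (pandas' default to_string layout).
    instruction = (
        "Explain the dataframe in one short sentence. "
        "Show what best explains the change in sales between 2011 and 2014 years. "
        "Do not consider items called other or unknown."
    )
    return instruction + "\n" + df.to_string()

# Sales data as it appears in the prompt above.
df = pd.DataFrame(
    {
        "Technology": [827652.1, 1023442.0, 1277305.0, 1616102.0],
        "Furniture": [755911.0, 857943.5, 1117724.0, 1378056.0],
        "Office Supplies": [675591.1, 795057.5, 1010590.0, 1305652.0],
    },
    index=pd.Index([2011, 2012, 2013, 2014], name="Period"),
)
```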

I would like to upgrade to the gpt-3.5-turbo model, and I have modified my code like so. I did not touch the prompt:

    def run_Open_Ai(prompt, nberOfResponses):
        namingParams = get_naming_params()
        configParams = get_config_params()
        openAiKey = configParams[namingParams["openAiKey"]]
        openai.api_key = openAiKey
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=prompt,
            max_tokens=300,
            n=nberOfResponses,
            stop=".",
            temperature=0.6,
        )
        return response

I get an “InvalidRequestError”:

      File "/root/mparanza_code/variance_analysis.py", line 25189, in adjust_stacked_column_plot
        message=generate_message_with_open_ai(df,chosenChart,paramDict,chartDict,key,message)
      File "/root/mparanza_code/variance_analysis.py", line 31862, in generate_message_with_open_ai
        response=run_Open_Ai(prompt,nberOfResponses)
      File "/root/mparanza_code/variance_analysis.py", line 31966, in run_Open_Ai
        temperature=0.6
      File "/root/anaconda3/lib/python3.7/site-packages/openai/api_resources/chat_completion.py", line 25, in create
        return super().create(*args, **kwargs)
      File "/root/anaconda3/lib/python3.7/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 160, in create
        request_timeout=request_timeout,
      File "/root/anaconda3/lib/python3.7/site-packages/openai/api_requestor.py", line 226, in request
        resp, got_stream = self._interpret_response(result, stream)
      File "/root/anaconda3/lib/python3.7/site-packages/openai/api_requestor.py", line 623, in _interpret_response
        stream=False,
      File "/root/anaconda3/lib/python3.7/site-packages/openai/api_requestor.py", line 680, in _interpret_response_line
        rbody, rcode, resp.data, rheaders, stream_error=stream_error

Is this because gpt-3.5-turbo cannot process this kind of data, or is there an issue in my prompt?

Thanks

Fabio

Solved it by simply reading the docs… no issue at all.
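For anyone landing here with the same error: the Chat Completions endpoint expects `messages` to be a list of role/content dicts, not the raw prompt string. A minimal sketch of the fix (the `build_messages` helper name is mine):

```python
def build_messages(prompt):
    # ChatCompletion.create expects `messages` as a list of
    # {"role": ..., "content": ...} dicts, not a plain string.
    return [{"role": "user", "content": prompt}]
```

In the chat version of `run_Open_Ai`, pass `messages=build_messages(prompt)` in place of `messages=prompt`, and read the reply from `response["choices"][0]["message"]["content"]` rather than the old `response["choices"][0]["text"]`.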