Error with OpenAI Python Library when Using API with stream=True

Are there any known issues with the OpenAI Python library when using the API with the stream=True parameter?

I’m consistently encountering the following error after a few requests:

“Must provide an ‘engine’ or ‘deployment_id’ parameter to create a <class ‘openai.api_resources.chat_completion.ChatCompletion’>”.

I would greatly appreciate any suggestions or guidance regarding this. I’m unsure if I’m making any mistakes in my implementation.

This is my streaming function in Python, if it’s of any use to you:

def stream_openai_response(prompt):
    # Pre-1.0 openai library: with stream=True, create() returns a
    # generator of incremental chunks rather than one full completion.
    response = openai.ChatCompletion.create(
        model='gpt-3.5-turbo',
        messages=[{"role": "user", "content": prompt}],
        max_tokens=1000,
        stream=True,
    )
    for event in response:
        yield event
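For reference, each yielded event in the 0.x library is a dict-like chunk whose content arrives piecewise under `choices[0]["delta"]`. A minimal sketch of that shape and of joining the deltas back into text, using hand-written sample chunks rather than real API output:

```python
# Sketch only: the sample_events below imitate the chunk format the
# pre-1.0 openai library yields with stream=True; they are not real
# API responses.

def join_stream(events):
    """Concatenate the 'content' pieces from a stream of chunk dicts."""
    parts = []
    for event in events:
        for choice in event.get("choices", []):
            delta = choice.get("delta", {})
            if "content" in delta:
                parts.append(delta["content"])
    return "".join(parts)

# Hand-written chunks: role first, then content deltas, then a final
# empty delta carrying the finish_reason.
sample_events = [
    {"choices": [{"delta": {"role": "assistant"}}]},
    {"choices": [{"delta": {"content": "Hello"}}]},
    {"choices": [{"delta": {"content": ", world"}}]},
    {"choices": [{"delta": {}, "finish_reason": "stop"}]},
]

print(join_stream(sample_events))  # -> Hello, world
```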

If you can share your function call, that would be really helpful for debugging the problem.

Thanks @Foxalabs and @udm17 for your suggestions.

This is what my OpenAI call looks like:

def openAI_call(model, message, max_token):
    response = openai.ChatCompletion.create(
        model=model,
        messages=message,
        max_tokens=int(max_token),
        temperature=0.7,
        top_p=1,
        frequency_penalty=0,
        presence_penalty=0,
        n=1,
        stream=True,
    )
    for chunk in response:
        choices = chunk.get("choices", [])
        if choices and "delta" in choices[0]:
            delta = choices[0].get("delta", {})
            # print(delta)
            if "content" in delta:
                text = delta.get("content", "")  # default to an empty string, not a dict
                yield (text + '\n').encode('utf-8')
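Since this generator yields UTF-8 encoded bytes (one line per content delta), which is the shape a streaming HTTP response body typically wants, here is a small sketch of the consuming side. The `fake_openAI_call` below is a stand-in for the real function so it runs without an API key:

```python
# Sketch only: fake_openAI_call imitates the byte-per-delta output of the
# openAI_call generator above, without hitting the API.

def fake_openAI_call():
    """Stand-in: yields one UTF-8 encoded line per content delta."""
    for text in ("Hello", "world"):
        yield (text + "\n").encode("utf-8")

# A caller (e.g. a client reading the streamed body) joins and decodes:
received = b"".join(fake_openAI_call()).decode("utf-8")
print(repr(received))  # -> 'Hello\nworld\n'
```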

Are you using the async client? If yes, try this:

...
async for chunk in await response:
    choices = chunk.get("choices", [])
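To illustrate the async consumption pattern end to end, here is a runnable sketch that stubs out the stream with a plain async generator. With the real 0.x async client you would call `await openai.ChatCompletion.acreate(..., stream=True)` instead of `fake_acreate()`; the stub only exists so the example runs without an API key:

```python
import asyncio

async def fake_acreate():
    """Stand-in for acreate(stream=True): yields chunk-shaped dicts."""
    for text in ("Hel", "lo"):
        yield {"choices": [{"delta": {"content": text}}]}

async def collect():
    # Same delta-extraction logic as the sync version, but with async for.
    parts = []
    async for chunk in fake_acreate():
        choices = chunk.get("choices", [])
        delta = choices[0].get("delta", {}) if choices else {}
        if "content" in delta:
            parts.append(delta["content"])
    return "".join(parts)

result = asyncio.run(collect())
print(result)  # -> Hello
```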