Bug when using the OpenAI API with Celery/Redis

Hello,
There seems to be a bug when using the OpenAI API inside a Celery/Redis shared task. When running the following function:

from celery import shared_task
import openai


@shared_task
def some_ai_function():
    message = "Hello, write a message about testing "
    print(message)
    # Call the Chat Completions API via the module-level client (openai >= 1.0)
    completion = openai.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {
                "role": "user",
                "content": message,
            },
        ],
    )
    print(completion.choices[0].message.content)

I get the following error:
[ERROR/MainProcess] Process 'ForkPoolWorker-2' pid:86900 exited with 'signal 5 (SIGTRAP)'
[ERROR/MainProcess] Task handler raised error: WorkerLostError('Worker exited prematurely: signal 5 (SIGTRAP) Job: 0.')
Traceback (most recent call last):
  File "…/python3.11/site-packages/billiard/pool.py", line 1264, in mark_as_worker_lost
    raise WorkerLostError(
billiard.exceptions.WorkerLostError: Worker exited prematurely: signal 5 (SIGTRAP) Job: 0.

I'm not sure exactly why. When I use LangChain with Mistral, or with my own models, it works really well, but it crashes whenever I use anything touching the OpenAI API. Could someone tell me why?
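
For anyone trying to reproduce this, one quick sanity check is to run the task body in the current process, with no worker and no fork, to see whether the OpenAI call itself succeeds there. A minimal sketch, assuming the some_ai_function task from the snippet above is importable:

# Run the task body directly in the parent process (no Celery worker, no fork).
some_ai_function()

# Or go through Celery's task machinery but still execute locally and synchronously.
some_ai_function.apply()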

There seems to be a broader issue with Celery in general (it is not specific to Redis)…

I am using Celery with SQS.
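
For context, here is a minimal sketch of what a Celery-with-SQS setup generally looks like; the app name and region are placeholders, and it needs kombu[sqs]/boto3 plus AWS credentials available to the worker:

from celery import Celery

# SQS broker; credentials are picked up from the AWS environment.
app = Celery("tasks", broker="sqs://")
app.conf.broker_transport_options = {"region": "us-east-1"}  # placeholder region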

The issue is 100% related to openai, because it started right after I updated the library.

(WorkerLostError('Worker exited prematurely: signal 11 (SIGSEGV) Job: 0.'),)

By the way, it is specific to the use of client.chat.completions, with the client coming from OpenAI.
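
To spell out the call shape in question: it goes through an explicit client instance rather than the module-level openai object used in the first snippet. A minimal sketch, where the model and prompt are just examples and OPENAI_API_KEY is assumed to be set:

from openai import OpenAI

client = OpenAI()  # explicit client instance, reads OPENAI_API_KEY from the environment
completion = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "ping"}],
)
print(completion.choices[0].message.content)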

The issue does not appear when running completions with MS Autogen.

I cannot downgrade.