Asynchronous use of the library

Hello everyone,

I’ve recently started working with the new version of the OpenAI library for Python, and I’m a bit stuck on implementing asynchronous calls properly. I’m not entirely sure how to apply async with the latest features of the openai library.

Could someone please provide a detailed explanation or example of how to use the async functionalities in the new library?


Our production asynchronous chat and completions with backoff have stopped working with the new library. Any help would be greatly appreciated.

I'm having similar issues; have you found a solution?
Thanks

In the latest version of the OpenAI Python library, the acreate method has been removed. Instead, you can use the AsyncOpenAI class to make asynchronous calls. Here’s an example of how you can use it:

import asyncio
from openai import AsyncOpenAI

client = AsyncOpenAI()  # reads OPENAI_API_KEY from the environment

async def main():
    # await is only valid inside an async function
    response = await client.chat.completions.create(
        model="gpt-4",
        messages=messages,   # your list of chat messages
        tools=functions,     # optional: tool/function definitions
        temperature=0.0,
        tool_choice=None,
    )
    return response

asyncio.run(main())

This code was found in a forum post here.
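For anyone who had exponential backoff wired to the old acreate, the same pattern still works by wrapping the new client.chat.completions.create call. Below is a hedged sketch: the retry helper itself is generic asyncio code, and only the commented usage lines assume the AsyncOpenAI client and messages from the example above. In production you would likely catch openai.RateLimitError rather than bare Exception.

```python
import asyncio
import random

async def with_backoff(coro_factory, max_retries=5, base_delay=1.0):
    """Retry an async call with exponential backoff plus jitter.

    coro_factory must be a zero-argument callable returning a fresh
    coroutine each time (a coroutine object can only be awaited once).
    """
    for attempt in range(max_retries):
        try:
            return await coro_factory()
        except Exception:  # in real code, catch openai.RateLimitError etc.
            if attempt == max_retries - 1:
                raise
            delay = base_delay * (2 ** attempt) + random.uniform(0, base_delay)
            await asyncio.sleep(delay)

# Usage with the new client (assumes `client` and `messages` exist):
# response = await with_backoff(
#     lambda: client.chat.completions.create(model="gpt-4", messages=messages)
# )
```

Passing a lambda (a factory) instead of an awaited call is what lets the helper re-issue the request on each retry.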

Please note that you might encounter some issues when using the AsyncOpenAI class at scale. For example, creating a new client per request might lead to a memory leak, while reusing a single global AsyncOpenAI client across many concurrent requests might lead to httpx.PoolTimeout errors. This information was found in another GitHub issue on the OpenAI Python API library.
