OpenAI python module: using Azure and OpenAI at the same time

Hi,

Currently, some properties like api_type, api_base, api_key, etc. are set at the module level, which makes it hard to use both Azure and OpenAI at the same time from the same Python program.

Ideally this would be implemented as a class, so multiple instances could be instantiated with different api_key values, etc. (or some other mechanism).
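For concreteness: with the pre-1.0 SDK the configuration lives in module-level globals, so pointing the library at the other provider means mutating state shared by every caller in the process. A minimal sketch of the conflict (the Azure endpoint and keys are placeholders):

    import openai

    # Configure the module globals for Azure:
    openai.api_type = "azure"
    openai.api_base = "https://my-resource.openai.azure.com/"  # placeholder endpoint
    openai.api_version = "2023-05-15"
    openai.api_key = "<azure-key>"
    # ... Azure calls happen here ...

    # To call OpenAI proper, the same globals have to be flipped back:
    openai.api_type = "open_ai"
    openai.api_base = "https://api.openai.com/v1"
    openai.api_version = None
    openai.api_key = "<openai-key>"
    # Any other thread still mid-request against Azure now sees the wrong config.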


Have you found a way to do this yet? I am also trying to run both services at the same time.

Just a hacky fork + resetting module params

Same!
I tried doing a manual HTTP request for the OpenAI calls, but somehow I can’t get the stream to work properly (see the sketch of the raw streaming format just below)…
Could you share your “hacky fork” @cuties06.roam? :slight_smile:
Thanks
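For what it’s worth, the raw streaming response is just server-sent events: each chunk arrives on a `data: ` line and the stream ends with `data: [DONE]`. A rough, untested sketch with requests (model and prompt are placeholders):

    import json
    import os

    import requests

    # Streamed chat completion over plain HTTP; the body is server-sent events.
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-3.5-turbo",
            "messages": [{"role": "user", "content": "Hello!"}],
            "stream": True,
        },
        stream=True,  # tell requests not to buffer the whole response
    )
    for line in resp.iter_lines():
        if not line or not line.startswith(b"data: "):
            continue
        payload = line[len(b"data: "):]
        if payload == b"[DONE]":
            break
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"].get("content", "")
        print(delta, end="", flush=True)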

I ended up doing the below.
(The code uses DALL-E from OpenAI while all other APIs go through Azure; forking first keeps the OpenAI settings confined to a child process, so the parent’s module-level Azure config is never touched. I no longer use it, since the latest openai module on PyPI now works with Azure/DALL-E.)

    # XXX: temp hack
    import os
    import pickle

    import openai

    if openai.api_type == 'azure':
        # Fork a child process so the OpenAI settings never leak into the
        # parent's module-level Azure configuration.
        pipe_read, pipe_write = os.pipe()
        pid = os.fork()
        if pid != 0:
            # Parent: read the pickled result back from the child and reap it
            # (assumes the pickled payload fits in a single 4 KiB read).
            os.close(pipe_write)
            urls = pickle.loads(os.read(pipe_read, 4096))
            os.close(pipe_read)
            os.waitpid(pid, 0)
        else:
            # Child: stop inherited threads (private API, best effort), then
            # repoint the module-level config at the OpenAI endpoint.
            import threading
            for thread in threading.enumerate():
                if thread is not threading.current_thread():
                    thread._stop()
            openai.api_type = 'open_ai'
            _api_base = openai.api_base
            openai.api_base = 'https://api.openai.com/v1'
            _api_version = openai.api_version
            openai.api_version = None
            openai.api_key = os.environ.get('OPENAI_API_KEY')

            urls = openai.image_create(...)  # image generation call (args elided)

            # Send the result back to the parent and exit without cleanup.
            os.write(pipe_write, pickle.dumps(urls))
            os.close(pipe_write)
            os._exit(0)
    else:
        urls = openai.image_create(...)  # args elided

Each API in the library accepts per-method overrides for the configuration options. If you want to access the Azure API for chat completions, you can explicitly pass in your Azure config. For the transcribe endpoint, you can explicitly pass the OpenAI config. For example:

Credit to Stack Overflow user Krista:

import os
import openai

api_response = openai.ChatCompletion.create(
    api_base=os.getenv("AZURE_OPENAI_ENDPOINT"),
    api_key=os.getenv("AZURE_OPENAI_KEY"),
    api_type="azure",
    api_version="2023-05-15",
    engine="gpt-35-turbo",
    messages=[
    {"role": "user", "content": "Hello!"}
    ],
    max_tokens=16,
    temperature=0,
    top_p=1,
    frequency_penalty=0,
    presence_penalty=0,
)
print(api_response)



audio_file = open("minitests/minitests_data/bilingual-english-bosnian.wav", "rb")
transcript = openai.Audio.transcribe(
    api_key=os.getenv("OPENAI_API_KEY"),
    model="whisper-1",
    file=audio_file,
    prompt="Part of a Bosnian language class.",
    response_format="verbose_json",
)
print(transcript)
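If you are on the 1.x version of the openai package, the class-based approach asked for at the top of the thread is built in: you can construct independent OpenAI and AzureOpenAI clients and use them side by side. A short sketch (the deployment name and environment variables are placeholders):

    import os

    from openai import AzureOpenAI, OpenAI

    # Two independent clients; no shared module-level state.
    openai_client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
    azure_client = AzureOpenAI(
        api_key=os.environ["AZURE_OPENAI_KEY"],
        api_version="2023-05-15",
        azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    )

    # Same call shape; the Azure client takes the deployment name as `model`.
    azure_reply = azure_client.chat.completions.create(
        model="gpt-35-turbo",  # Azure deployment name (placeholder)
        messages=[{"role": "user", "content": "Hello!"}],
    )
    openai_reply = openai_client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Hello!"}],
    )
    print(azure_reply.choices[0].message.content)
    print(openai_reply.choices[0].message.content)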