Issue with OpenAI API Migration on Windows: Error with Version 1.51.2

Hi all, when I try to run my code it gives me the following error. I have been researching solutions but nothing is working.

Error: You can run openai migrate to automatically upgrade your codebase to use the 1.0.0 interface.

Alternatively, you can pin your installation to the old version, e.g. pip install openai==0.28

A detailed migration guide is available here:

The migration tool is not supported on Windows. I am running the latest version of the openai package, 1.51.2, and the old version it suggests downgrading to is deprecated.

code:

import os
import openai
from dotenv import load_dotenv, find_dotenv

# Load the .env file to get the OpenAI API key
load_dotenv(find_dotenv())

# Set up the OpenAI API key from the environment
openai.api_key = os.getenv("OPENAI_API_KEY")

# Configuration for model and generation
temperature = 0.3
max_tokens = 500

# Chat-based model input (for gpt-3.5-turbo or gpt-4)
chat_messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "list out 10 facts about AI"}
]

# Prompt for text-based models (for text-davinci-003, text-curie, etc.)
text_prompt = "list out 10 facts about AI"

# Function for chat-based models like gpt-3.5-turbo and gpt-4
def get_chat_completion(model):
    try:
        completion = openai.ChatCompletion.create(
            model=model,
            messages=chat_messages,  # Chat models use 'messages'
            temperature=temperature,
            max_tokens=max_tokens
        )
        return completion['choices'][0]['message']['content']  # Get the response
    except Exception as e:
        return f"An error occurred: {e}"

# Function for text-based models like text-davinci-003
def get_text_completion(model):
    try:
        completion = openai.Completion.create(
            model=model,
            prompt=text_prompt,  # Text completion models use 'prompt'
            temperature=temperature,
            max_tokens=max_tokens
        )
        return completion['choices'][0]['text']  # Get the response
    except Exception as e:
        return f"An error occurred: {e}"

# Define the model you want to use
model = "gpt-3.5-turbo"  # Use for chat models (or gpt-4)
# model = "text-davinci-003"  # Uncomment to use a text-completion model

# Check if the model is chat-based or text-based and call the appropriate function
if "gpt-3.5" in model or "gpt-4" in model:
    print(get_chat_completion(model))  # For chat models like gpt-3.5-turbo
else:
    print(get_text_completion(model))  # For text models like text-davinci-003

Chatbots cannot write the new code you need. Your code doesn’t seem to be based on the current API reference.

This working snippet can serve as an example. Since it asks the AI for Python code, it will also reproduce the same symptoms of non-understanding you describe.

import openai

response = openai.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Python OpenAI chatbot program?"}],
)
print(response.choices[0].message.content.strip())

You should refer to the API reference on the forum sidebar, starting with chat completions. Then you’ll be able to say “my code” when you can’t make THOSE usage examples work.
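For reference, here is a minimal sketch of how the chat portion of the original script might look against the 1.x client interface. It assumes OPENAI_API_KEY is set in your environment or a .env file, and it only covers the chat path, since text-davinci-003 and the other legacy text-completion models have been retired:

import os
from dotenv import load_dotenv, find_dotenv
from openai import OpenAI

# Load the .env file so OPENAI_API_KEY is available
load_dotenv(find_dotenv())

# In 1.x you create a client object instead of setting openai.api_key
client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

def get_chat_completion(model="gpt-3.5-turbo"):
    try:
        completion = client.chat.completions.create(
            model=model,
            messages=[
                {"role": "system", "content": "You are a helpful assistant."},
                {"role": "user", "content": "list out 10 facts about AI"},
            ],
            temperature=0.3,
            max_tokens=500,
        )
        # Responses are typed objects: use attribute access, not dictionary keys
        return completion.choices[0].message.content
    except Exception as e:
        return f"An error occurred: {e}"

print(get_chat_completion())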