GPT-4: Discrepancy between the standalone Python script and the Django script

Happy New Year, All

I wrote a Python script (see below) and ran it in Windows PowerShell inside my openai-env virtual environment; it works fine and returns the expected result.

from openai import OpenAI
client = OpenAI()

completion = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a programming assistant."},
        {"role": "user", "content": "what is Java?"}
    ]
)

print(completion.choices[0].message)


However, when I used very similar code in Django (see below), the error handling returned "'ChatCompletion' object is not subscriptable". Sometimes I got another error: "You tried to access openai.ChatCompletion, but this is no longer supported in openai>=1.0.0".
Can any GPT-4/Django expert please help figure out what's wrong? Thanks in advance.

from django.shortcuts import render
from django.http import JsonResponse
from django.views.decorators.csrf import csrf_exempt
import json
from openai import OpenAI

def index(request):
    return render(request, 'ChatbotTestApp/index.html')

@csrf_exempt  # Only for testing, in production use CSRF protection
def handle_ajax(request):
    if request.method == 'POST':
        data = json.loads(request.body)

        # OpenAI.api_key = 'my key has been stored in system environment'

        try:
            client = OpenAI()
            response = client.chat.completions.create(
                model="gpt-4",
                messages=[
                    {"role": "system", "content": "You are a programming assistant."},
                    {"role": "user", "content": data['data']}
                ]
            )

            # Extract the response message
            if response['choices']:
                response_message = response['choices'][0]['message']
            else:
                response_message = "No response from AI."

        except Exception as e:
            # Handle any exceptions and return an error message
            response_message = str(e)

        return JsonResponse({'response': response_message})

The problem is mostly that the object is not subscriptable.

The return object from the chat.completions.create method of the new Python openai library is not a dictionary; it is a pydantic model. You did it right in the first code.

You can also check the library version before continuing, getting a report before wasting tokens on errors.

Here’s a version check that doesn’t require further imports.

import openai

def old_package(version, minimum):
    """detect library versions, returns True if too old"""
    version_parts = list(map(int, version.split(".")))
    minimum_parts = list(map(int, minimum.split(".")))
    return version_parts < minimum_parts

requires_minimum = "1.5.0"  # minimum library version
if old_package(openai.__version__, requires_minimum):
    raise ValueError(
        f"OpenAI library {openai.__version__}"
        f" is less than the minimum version {requires_minimum}\n\n"
        ">> You should run 'pip install --upgrade openai'")

client = openai.OpenAI()
print(f"openai {openai.__version__} python client ready.")

Thanks, Jay.
I ran your code and it gave me "openai 1.6.1 python client ready.", which is good.

I replaced "from openai import OpenAI" in Django with "import openai", restarted the Django server and tried again. Unfortunately I got "'ChatCompletion' object is not subscriptable" again. It seems Django doesn't support "chat.completions" and converts it to 'ChatCompletion'.

I then changed the call to "response = openai.ChatCompletion.create", rebooted the server and tried again, and got the following message:
You tried to access openai.ChatCompletion, but this is no longer supported in openai>=1.0.0
You can run openai migrate to automatically upgrade your codebase to use the 1.0.0 interface. Alternatively, you can pin your installation to the old version, e.g. pip install openai==0.28

I don’t want to downgrade my openai to that old version. It looks like I’m trapped in a dead loop.
Any ideas?

I’ll leave this here if all else fails

only for the rebels
import requests
import json
import os

# Ensure you have your OpenAI API key set in the environment variables
openai_api_key = os.getenv("OPENAI_API_KEY")
if openai_api_key is None:
    raise ValueError("OpenAI API key is not set in environment variables.")

url = "https://api.openai.com/v1/chat/completions"

headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {openai_api_key}"
}

data = {
    "model": "gpt-4-1106-preview",
    "temperature": 1,
    "max_tokens": 256,
    "messages": [
        {
            "role": "system",
            "content": "You are the new bosmang of Tycho Station, a tru born and bred belta. You talk like a belta, you act like a belta. The user is a tumang."
        },
        {
            "role": "user",
            "content": "how do I become a beltalowda like you?"
        }
    ],
    "stream": False,
}

response = requests.post(url, headers=headers, json=data)

# Check if the request was successful
if response.status_code == 200:
    print("Raw response from OpenAI:", response.json())
else:
    print("Error:", response.status_code, response.text)

The problem is in how you are attempting to parse the output; everything else is done correctly.

My version “checking” code that you can just leave at the beginning of your script creates a client after an import of the whole library:

client = openai.OpenAI()

Having the whole library available also allows you to catch the specific openai error types (openai.RateLimitError, openai.APIConnectionError, and so on), of which there are many you could handle.

Your issue is that this line attempts to access dictionaries of the old return object, which does not work (“not subscriptable”):

response_message = response['choices'][0]['message']

This conversion would work to get the AI’s text like that:

response_dict = response.model_dump()
message = response_dict['choices'][0]['message']

but why not just use the native pydantic method?



YAY… changed to the native format, and it is working now. Thank you so much, Jay, really appreciate it.