There seems to be a rather strange and persistent issue:
ChatGPT will commonly suggest using the OpenAI API to solve a particular problem - especially so if you happen to ask it to use “a” cloud LLM for this purpose.
Here’s a script that it just generated for a text formatting task:
import openai

# Set up OpenAI API key
openai.api_key = 'your-api-key'

# Function to process a chunk of text
def process_chunk(chunk):
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You are a helpful assistant that fixes formatting issues."},
            {"role": "user", "content": f"Please fix the formatting in this text:\n{chunk}"}
        ]
    )
    return response['choices'][0]['message']['content']

# Read your large text file
with open('large_text_file.txt', 'r') as file:
    text = file.read()

# Split text into chunks (adjust chunk size based on token limits)
chunk_size = 8000  # Adjust according to model's token limit
chunks = [text[i:i+chunk_size] for i in range(0, len(text), chunk_size)]

# Process each chunk
processed_chunks = [process_chunk(chunk) for chunk in chunks]

# Combine processed chunks
processed_text = ''.join(processed_chunks)

# Save processed text back to file
with open('formatted_text_file.txt', 'w') as output_file:
    output_file.write(processed_text)
At first glance, it looks okay, but when I tried running it (after adding my API key, of course), I got this error:
You tried to access openai.ChatCompletion, but this is no longer supported in openai>=1.0.0 - see the README at https://github.com/openai/openai-python for the API.
You can run `openai migrate` to automatically upgrade your codebase to use the 1.0.0 interface.
I had a hard time spotting what was wrong from the API docs, but it turns out to be an interface change introduced in v1.0 rather than a simple capitalisation issue, as this (further) ChatGPT output explains:
In OpenAI's Python library version 1.0.0 and above, you need to instantiate a client and use client.chat.completions.create() instead of openai.ChatCompletion.create(). Below is an updated version of your code:
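For reference, here's a minimal sketch of what process_chunk looks like under the 1.x interface (assuming openai>=1.0.0; the chunking and file I/O from the original script are unchanged):

import openai

# New in openai>=1.0.0: instantiate a client instead of setting a module-level key
client = openai.OpenAI(api_key='your-api-key')

def process_chunk(chunk):
    # client.chat.completions.create() replaces openai.ChatCompletion.create()
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You are a helpful assistant that fixes formatting issues."},
            {"role": "user", "content": f"Please fix the formatting in this text:\n{chunk}"}
        ]
    )
    # Responses are now typed objects rather than dicts
    return response.choices[0].message.content

The `openai migrate` command mentioned in the error message apparently performs essentially this rewrite automatically.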
My point is this:
I understand that the models have training cutoffs (or at least those not using real-time search do). But … you’d think OpenAI could figure out a way to keep the web UI models in “sync” with the latest syntax in their own APIs, especially given that there aren’t that many of them.
Consider it a feature request!