I figured out why some configurations could delete and others couldn't, in a realization that was obvious in hindsight given the error message:
Send the API key, along with matching project AND organization, in the delete call.
This Python SDK script will:
- use the model name you hard-code
- dump the environment-variable values picked up by the openai client
- call the models endpoint to make sure the model exists, printing its info
- delete the model only if you type 999
- check the models endpoint again to confirm the deletion
The environment variables to set are documented in the code comments:
import openai

client = openai.OpenAI()  # set environment variables first

print(f" org: {client.organization}\n",  # OPENAI_ORG_ID
      f"proj: {client.project}\n",       # OPENAI_PROJECT_ID
      f" key: {client.api_key}\n")       # OPENAI_API_KEY

model_to_delete = "gpt-3.5-turbo-0125"  # YOUR ft: model ID (too lazy for UI)

try:
    status1 = client.models.retrieve(model_to_delete)
    print("retrieved model info from API")
    for key, val in status1:  # pydantic model iterates as (field, value) pairs
        print(f"{key}: {val}")
except Exception as e:
    raise ValueError(e)

choice = input("*" * 40 + "\nif all info looks correct, "
               "enter '999' to irreversibly delete the model: ")
if choice == "999":
    try:
        status2 = client.models.delete(model_to_delete)
        print(status2)
    except Exception as e:
        print("-- delete operation failed!!")
        print(e)
    try:
        status3 = client.models.retrieve(model_to_delete)
        print(status3)
        print("Model is still there!!")
    except Exception as e:
        print("Model is gone, call returned error!!")
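If you don't have the fine-tuned model ID handy (and are, like me, too lazy for the UI), one way to find it is to list the models the key can see and keep only the ones with the "ft:" prefix. A sketch, where `fine_tuned_ids` is just a helper name I'm introducing here:

```python
def fine_tuned_ids(model_ids):
    """Keep only fine-tuned model IDs (they carry the 'ft:' prefix)."""
    return [m for m in model_ids if m.startswith("ft:")]

if __name__ == "__main__":
    import openai
    client = openai.OpenAI()  # same environment variables as the script above
    for model_id in fine_tuned_ids(m.id for m in client.models.list()):
        print(model_id)
```

Paste whichever ID it prints into model_to_delete above.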