Your self-answer didn’t come through; only a single line of cURL appeared.
I dumped a curated openai.yaml (the source the API reference is generated from) into an AI and asked it to give us more of an overview than the terse list of parameters:
The OpenAI API documentation provided includes several endpoint methods under various categories. Here’s a list of each documented endpoint method, along with a brief description of its purpose:
General Endpoint Methods:
Create Batch (/batches [POST])
Purpose: Creates and executes a batch from an uploaded file of requests. This allows large batches of API requests to run asynchronously (see the sketch after this list).
List Batches (/batches [GET])
Purpose: Lists all batches created by your organization, providing a way to monitor and manage multiple batch operations.
Retrieve Batch (/batches/{batch_id} [GET])
Purpose: Retrieves detailed information about a specific batch using its unique batch ID.
Cancel Batch (/batches/{batch_id}/cancel [POST])
Purpose: Cancels an in-progress batch, useful for stopping a batch execution that is no longer needed or was created in error.
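Based on that overview, here is a minimal sketch of the create and cancel calls. The request fields (input_file_id, endpoint, completion_window) and their example values are my reading of the spec, not something I have actually run:

import os
import requests

API_BASE = "https://api.openai.com/v1/batches"
HEADERS = {"Authorization": f"Bearer {os.getenv('OPENAI_API_KEY')}"}

def create_batch(input_file_id):
    """POST /batches: start an asynchronous batch from an uploaded requests file.
    Body fields are assumptions taken from the spec, not verified here."""
    body = {
        "input_file_id": input_file_id,      # ID of a previously uploaded .jsonl file
        "endpoint": "/v1/chat/completions",  # which API the batched requests target
        "completion_window": "24h",          # how long the batch is allowed to take
    }
    response = requests.post(API_BASE, headers=HEADERS, json=body)
    response.raise_for_status()
    return response.json()

def cancel_batch(batch_id):
    """POST /batches/{batch_id}/cancel: stop an in-progress batch."""
    response = requests.post(f"{API_BASE}/{batch_id}/cancel", headers=HEADERS)
    response.raise_for_status()
    return response.json()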
Detailed Methods for Listing and Monitoring Batches:
List Batches (/batches [GET])
This endpoint method is used to list all the batches that have been submitted under an organization. It provides paginated results and includes various filters to narrow down the list based on criteria like creation date or status. Key parameters include:
after: A pagination parameter to get batches created after a certain batch ID.
limit: Limits the number of batches to retrieve at once (default is 20).
The response includes details about each batch, such as the batch ID, status, and counts of total, completed, and failed requests.
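Here is a hedged sketch of walking through every page with those two parameters; it assumes the list response carries a data array and a has_more flag like other OpenAI list endpoints, which I haven’t confirmed for batches:

import requests

def list_all_batches(api_key, page_size=20):
    """Page through /v1/batches using the 'after' cursor and 'limit'.
    Assumes 'data' and 'has_more' fields in the response (unverified)."""
    url = "https://api.openai.com/v1/batches"
    headers = {"Authorization": f"Bearer {api_key}"}
    batches = []
    after = None
    while True:
        params = {"limit": page_size}
        if after:
            params["after"] = after
        page = requests.get(url, headers=headers, params=params).json()
        data = page.get("data", [])
        batches.extend(data)
        if not data or not page.get("has_more"):
            break
        after = data[-1]["id"]  # cursor: last batch ID on this page
    return batches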
Retrieve Batch (/batches/{batch_id} [GET])
This method is specifically for retrieving detailed information about a particular batch identified by batch_id. It gives comprehensive details about the batch’s execution status, including timestamps for various stages of the batch process (like when it was created, when it entered progress, when it was finalized, etc.), and any errors encountered.
This is useful for monitoring the progress and outcome of a specific batch operation.
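Something like the following would turn that into a readable timeline; the exact timestamp field names (in_progress_at, finalizing_at, and so on) are my guess from the description above, so treat them as placeholders:

import datetime
import requests

def print_batch_timeline(api_key, batch_id):
    """GET /v1/batches/{batch_id} and print lifecycle timestamps plus error counts.
    Field names below are assumptions based on the reference's description."""
    url = f"https://api.openai.com/v1/batches/{batch_id}"
    headers = {"Authorization": f"Bearer {api_key}"}
    batch = requests.get(url, headers=headers).json()
    for field in ("created_at", "in_progress_at", "finalizing_at",
                  "completed_at", "failed_at", "cancelled_at"):
        timestamp = batch.get(field)
        if timestamp:
            print(f"{field:15s} {datetime.datetime.fromtimestamp(timestamp).isoformat()}")
    print("request_counts:", batch.get("request_counts"))
    print("errors:", batch.get("errors"))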
The endpoints provided for batch operations in the OpenAI API allow users to efficiently manage large-scale asynchronous processing of API requests, which can be crucial for applications requiring bulk data processing or batch inference tasks.
(The real base URL is https://api.openai.com/v1/batches, which is mentioned over half a dozen times in the input context, so GPT-4-Turbo had every chance to get it right…)
Here is a plausible Python utility. I don’t have any batches of my own to list, so I haven’t been able to actually exercise the menu-driven retrieval and display with this tool.
import os
import requests


def fetch_batches(api_key):
    """Fetches all batches from the API and returns them as a list."""
    url = "https://api.openai.com/v1/batches"
    headers = {"Authorization": f"Bearer {api_key}"}
    response = requests.get(url, headers=headers)
    if response.status_code == 200:
        return response.json().get('data', [])
    else:
        print("Failed to fetch batches")
        return []


def display_batches(batches):
    """Displays the batches in a numbered list."""
    print("Select a batch to view details (or 99 to exit):")
    for index, batch in enumerate(batches):
        print(f"{index + 1}. Batch ID: {batch['id']}, Status: {batch['status']}")
    print("99. Exit")


def fetch_batch_details(api_key, batch_id):
    """Fetches and displays details of a specific batch."""
    url = f"https://api.openai.com/v1/batches/{batch_id}"
    headers = {"Authorization": f"Bearer {api_key}"}
    response = requests.get(url, headers=headers)
    if response.status_code == 200:
        batch = response.json()
        print("Batch Details:")
        print(batch)
    else:
        print("Failed to fetch batch details")


def main():
    API_KEY = os.getenv("OPENAI_API_KEY")  # Fetch API key from environment variable
    if API_KEY is None:
        print("Error: OPENAI_API_KEY environment variable not set.")
        return
    while True:
        batches = fetch_batches(API_KEY)
        display_batches(batches)
        try:
            choice = int(input("Enter your choice: "))
            if choice == 99:
                break
            elif 1 <= choice <= len(batches):
                fetch_batch_details(API_KEY, batches[choice - 1]['id'])
            else:
                print("Invalid choice, please try again.")
        except ValueError:
            print("Please enter a valid number.")


if __name__ == "__main__":
    main()
Is there a way to “delete/archive” previous batches? (I don’t think so, but maybe it’s just not documented yet.)
And I also suppose there is no listing of batches anywhere in the web interface, right?
I haven’t tried it myself, but it’s probably worth sending a DELETE request, similar to the Files API. The listing endpoint was working before it was documented; if you’re lucky, the same is true for deletion!
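If someone wants to probe it, a throwaway call like this would settle it quickly. This is purely speculative: the endpoint isn’t documented anywhere, so expect a 404 or 405 if it doesn’t exist.

import os
import requests

def try_delete_batch(batch_id):
    """Speculatively send DELETE /v1/batches/{batch_id}, mirroring the Files API.
    Undocumented: a 404/405 response would mean batches can't be deleted this way."""
    url = f"https://api.openai.com/v1/batches/{batch_id}"
    headers = {"Authorization": f"Bearer {os.getenv('OPENAI_API_KEY')}"}
    response = requests.delete(url, headers=headers)
    print(response.status_code, response.text)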