Hi there,
I am currently fine-tuning GPT-4o mini via the OpenAI dashboard for a binary classification project. Once the model is ready, I plan to evaluate its performance on a test set of 1,000 samples. However, since the free tier is limited to 200 requests per day, I have restricted my evaluation to 200 texts.
I encountered the following error:
RateLimitError: Error code: 429 - {'error': {'message': 'Rate limit reached for gpt-4o-mini in organization org-jg2v1DbkC2MArlIpxJtnPnze on requests per day (RPD): Limit 200, Used 200, Requested 1. Please try again in 7m12s. Visit https://platform.openai.com/account/rate-limits to learn more. You can increase your rate limit by adding a payment method to your account at https://platform.openai.com/account/billing.', 'type': 'requests', 'param': None, 'code': 'rate_limit_exceeded'}}
I have tried the delayed_completion function described in the OpenAI Cookbook, but it did not resolve the issue, presumably because spacing requests out over time cannot help with a per-day (RPD) cap. I also set max_tokens to 1, as recommended, since I only need a single-token label.
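For completeness, the only other mitigation I know of is retrying with exponential backoff when a 429 arrives. Here is a minimal sketch (the name completion_with_backoff and the retry counts are my own; it uses the RateLimitError exported by the v1 Python SDK, and it helps with per-minute limits but obviously cannot get around a daily cap):

import time
from openai import OpenAI, RateLimitError

client = OpenAI()

def completion_with_backoff(max_retries: int = 5, **kwargs):
    """Retry a chat completion with exponential backoff on 429 errors."""
    for attempt in range(max_retries):
        try:
            return client.chat.completions.create(**kwargs)
        except RateLimitError:
            time.sleep(2 ** attempt)  # wait 1s, 2s, 4s, 8s, 16s
    raise RuntimeError("Still rate limited after retries")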
Is there any way to manage the rate limits effectively on free requests, without increasing the limit? One idea I am considering is checkpointing partial results and spreading the evaluation over several days (see the sketch below). Any other suggestions or alternatives would be greatly appreciated.
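Here is a minimal sketch of that idea, assuming the classify_text function and texts list from my code further down; the file name predictions_checkpoint.json and the load/save helpers are hypothetical. Each run classifies at most 200 new texts and saves progress after every request, so the next day's run can resume where it left off:

import json
import os

CHECKPOINT_PATH = "predictions_checkpoint.json"  # hypothetical checkpoint file
DAILY_BUDGET = 200  # free-tier requests per day

def load_checkpoint() -> dict:
    """Load previously saved predictions, keyed by sample index."""
    if os.path.exists(CHECKPOINT_PATH):
        with open(CHECKPOINT_PATH) as f:
            return json.load(f)
    return {}

def save_checkpoint(preds: dict) -> None:
    """Persist predictions after every request so nothing is lost."""
    with open(CHECKPOINT_PATH, "w") as f:
        json.dump(preds, f)

preds = load_checkpoint()
budget = DAILY_BUDGET
for i, text in enumerate(texts):
    if str(i) in preds:   # already classified on a previous day
        continue
    if budget == 0:       # stop before hitting the daily cap
        break
    preds[str(i)] = classify_text(text)
    budget -= 1
    save_checkpoint(preds)

Run once per day; after five runs the checkpoint would cover all 1,000 samples, and the metrics below could be computed on the full test set.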
Thank you!
Here is my code:
import time

from openai import OpenAI
from sklearn.metrics import precision_recall_fscore_support

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable
model_id = "MY_FINE_TUNED_MODEL_ID"  # placeholder for my fine-tuned model name
texts = [...]        # placeholder: the 200 test texts I am evaluating
true_labels = [...]  # placeholder: the matching gold labels (1 = "true", 0 = "false")

# Calculate the delay based on the per-minute rate limit
rate_limit_per_minute = 20
delay = 60.0 / rate_limit_per_minute  # 3 seconds between requests

# Define a function that adds a delay to a chat completion API call
def delayed_completion(delay_in_seconds: float = 1, **kwargs):
    """Delay a completion by a specified amount of time."""
    # Sleep for the delay, then call the API and return the result
    time.sleep(delay_in_seconds)
    return client.chat.completions.create(**kwargs)
# Function to classify a text using the fine-tuned OpenAI model
def classify_text(text):
    response = delayed_completion(
        delay_in_seconds=delay,
        model=model_id,
        messages=[
            {"role": "system", "content": "Your task is to analyze the text and determine if it contains elements of propaganda. Based on the instructions, analyze the following 'text' and predict whether it contains the use of any propaganda technique. Return only predicted label. ['true', 'false']."},
            {"role": "user", "content": text}
        ],
        temperature=0,
        max_tokens=1
    )
    # Extract the prediction and map it to a binary label (case-insensitive, to be safe)
    prediction = response.choices[0].message.content
    return 1 if prediction.strip().lower() == "true" else 0
# Collect predictions
predictions = [classify_text(text) for text in texts]
# Compute Precision, Recall, F1 for Macro and Micro averages
precision_macro, recall_macro, f1_macro, _ = precision_recall_fscore_support(true_labels, predictions, average='macro')
precision_micro, recall_micro, f1_micro, _ = precision_recall_fscore_support(true_labels, predictions, average='micro')
# Display the results
print(f"Macro-F1: {f1_macro:.4f}")
print(f"Micro-F1: {f1_micro:.4f}")
I appreciate any help!