After printing the training file id (Training_file_ID: file-la2K7W0nne2yt0Cd1hMUtu4A), the traceback is:
Traceback (most recent call last):
File "C:/Users/lingo/python_scripts/gpt_finetune/finetune_test.py", line 11, in <module>
response = openai.FineTuningJob.create(training_file=training_file_id, model="gpt-3.5-turbo")
File "C:\Users\lingo\AppData\Local\Programs\Python\Python38\lib\site-packages\openai\api_resources\abstract\createable_api_resource.py", line 57, in create
response, _, api_key = requestor.request(
File "C:\Users\lingo\AppData\Local\Programs\Python\Python38\lib\site-packages\openai\api_requestor.py", line 298, in request
resp, got_stream = self._interpret_response(result, stream)
File "C:\Users\lingo\AppData\Local\Programs\Python\Python38\lib\site-packages\openai\api_requestor.py", line 700, in _interpret_response
self._interpret_response_line(
File "C:\Users\lingo\AppData\Local\Programs\Python\Python38\lib\site-packages\openai\api_requestor.py", line 765, in _interpret_response_line
raise self.handle_error_response(
openai.error.InvalidRequestError: File file-la2K7W0nne2yt0Cd1hMUtu4A is not ready
Is this due to system overload, or have I made a mistake I just don’t see? Any ideas appreciated.
The file object has a status field, which can be either uploaded, processed, pending, error, deleting or deleted.
You can only proceed with the fine-tune once the file is processed.
To run the fine-tune, you can call the Retrieve File endpoint with exponential backoff and check the status within the file object it returns; once it is processed, you can create the fine-tune job, for example as sketched below.
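A minimal sketch of that polling loop, assuming the legacy openai 0.x Python library from the traceback; wait_until_processed is a hypothetical helper name, not part of the library:

```python
import time
import openai

def wait_until_processed(file_id, max_wait_seconds=600):
    """Poll the Retrieve File endpoint with exponential backoff until the
    file's status is "processed" (hypothetical helper, not part of openai)."""
    delay, waited = 1, 0
    while waited < max_wait_seconds:
        status = openai.File.retrieve(file_id)["status"]
        if status == "processed":
            return
        if status in ("error", "deleted"):
            raise RuntimeError(f"File {file_id} ended up in status {status!r}")
        time.sleep(delay)
        waited += delay
        delay = min(delay * 2, 60)  # back off exponentially, capped at 60 s
    raise TimeoutError(f"File {file_id} not processed after {max_wait_seconds} s")

wait_until_processed(training_file_id)
response = openai.FineTuningJob.create(training_file=training_file_id, model="gpt-3.5-turbo")
```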
I have now resolved this issue by modifying my app to ensure that there is a brief delay between my “Upload files” command and the start of fine-tuning. The fine-tuning command was being issued while the training file status was still “uploaded” and not yet ready.
I just went off and made a coffee, and when I got back I was able to run the fine-tuning job. In a later test I just counted to 20, pressed the “Run” button, and it ran. I guess it depends on the server load at the OpenAI end.
Hey @terencelewis06, since you are using the openai Python library to run your scripts, you can also use the helper function wait_for_processing, as such:
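A short sketch, assuming the legacy openai 0.x Python library shown in your traceback:

```python
import openai

# Blocks until the uploaded training file finishes processing.
openai.File.wait_for_processing(training_file_id)

response = openai.FineTuningJob.create(
    training_file=training_file_id,
    model="gpt-3.5-turbo",
)
```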
It will block until the file is processed, and you can then proceed with the fine-tune. It has a default wait time of 30 minutes, but this can be adjusted by setting the named function parameter max_wait_seconds, like so:
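For instance, to wait at most five minutes (same assumptions as the sketch above):

```python
# Wait at most 5 minutes instead of the default 30 before giving up.
openai.File.wait_for_processing(training_file_id, max_wait_seconds=300)
```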