How can I train and use a fine-tuned model with a CUDA GPU?

I was able to set up CUDA for machine-learning models with PyTorch. Now I’m wondering how I can do the same for OpenAI fine-tuned models, since a GPU is far faster than a CPU for machine learning.
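For comparison, this is the PyTorch pattern I mean; a minimal sketch where the model and batch are just placeholders:

```
import torch
import torch.nn as nn

# Pick the GPU when one is available, otherwise fall back to the CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Linear(10, 2).to(device)        # placeholder model, moved onto the device
batch = torch.randn(4, 10, device=device)  # inputs must live on the same device
output = model(batch)
```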

You can’t train the GPT-3 models on your own machine, but I believe you can still fine-tune GPT-2 locally.
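If local GPU fine-tuning is what you’re after, the usual route for GPT-2 is the Hugging Face transformers library. A minimal sketch, assuming a plain-text file `train.txt`; the file name and hyperparameters are placeholders, not anything from this thread:

```
from transformers import (GPT2LMHeadModel, GPT2TokenizerFast, Trainer,
                          TrainingArguments, TextDataset,
                          DataCollatorForLanguageModeling)

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# TextDataset chunks a plain-text file into fixed-length token blocks
# (deprecated in recent transformers releases, but fine for a sketch)
dataset = TextDataset(tokenizer=tokenizer, file_path="train.txt", block_size=128)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

args = TrainingArguments(output_dir="gpt2-finetuned",
                         num_train_epochs=1,
                         per_device_train_batch_size=2)

# Trainer moves the model to a visible CUDA GPU automatically
Trainer(model=model, args=args, train_dataset=dataset,
        data_collator=collator).train()
```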

Welcome to the forum.


Welcome @im.zlimkt

I’d highly recommend that you read the fine-tuning docs.

Hi Paul, thanks for your reply. I consider the fine-tune creation step, `!openai api fine_tunes.create -t "train.jsonl" -m ada`, as training. I attached a screenshot of the output here.
[Screenshot of the fine_tunes.create output]

Because it does take some time. I experimented with a training data set of only 10 lines, and it took several minutes. If I use a 1,000-line training set, I guess it will take several hours. That’s why I’m wondering whether I can use a GPU to make it faster. Feel free to correct me if I’ve misunderstood fine-tuning; I’m still exploring it 🙂
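For reference, each line of my `train.jsonl` is a prompt/completion pair; made-up examples of the format:

```
{"prompt": "Where is the Eiffel Tower? ->", "completion": " Paris."}
{"prompt": "Where is Big Ben? ->", "completion": " London."}
```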

Hi sps, thanks for your reply. I did read the fine-tuning docs. I consider the
```
openai api fine_tunes.create -t <TRAIN_FILE_ID_OR_PATH> -m <BASE_MODEL>
```
step as training.
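For what it’s worth, the same job can also be started from Python instead of the CLI. A minimal sketch against the legacy 0.x `openai` Python library (the era this thread’s CLI commands come from); the file name is a placeholder:

```
import openai  # legacy 0.x library, matching the CLI used in this thread

# Upload the training file, then start the fine-tune job on OpenAI's servers;
# "train.jsonl" is a placeholder file name
upload = openai.File.create(file=open("train.jsonl", "rb"), purpose="fine-tune")
job = openai.FineTune.create(training_file=upload["id"], model="ada")
print(job["id"], job["status"])
```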

The fine-tuning isn’t happening on your computer; it’s happening on OpenAI’s servers, and the CLI is just displaying the progress. So you have no control over the hardware or the speed of the process.
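Since the job lives server-side, you can also disconnect and reattach to it later. A sketch with the same legacy 0.x library; the job ID `ft-abc123` is a made-up placeholder:

```
import openai

# The job keeps running on OpenAI's servers even if your notebook disconnects;
# look it up again by ID ("ft-abc123" is a made-up placeholder)
job = openai.FineTune.retrieve(id="ft-abc123")
print(job["status"])  # e.g. "pending", "running", "succeeded"

# Print the server-side progress events recorded so far
for event in openai.FineTune.list_events(id="ft-abc123")["data"]:
    print(event["message"])
```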

Oh, got it. I thought it was the same as Hugging Face models. Now I understand.

A follow-up question: if so, it means I’m handing my data over to OpenAI entirely. Is there any data-security concern?

That’s up to you to decide based on your security requirements. The fine-tuned model is only available to your org account, but your data is stored on their servers.
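If the retention bothers you, you can at least list and delete the training files you uploaded; a sketch, again with the legacy 0.x library (the file ID is a placeholder):

```
import openai

# List the files uploaded under your org, then delete any you no longer want stored
for f in openai.File.list()["data"]:
    print(f["id"], f["filename"])

openai.File.delete("file-abc123")  # placeholder ID from the listing above
```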