I wonder about the mechanism of GPT-3.5's fine-tuning. Does it mean that if I create a fine-tuned GPT-3.5, a new weights file will be stored for me on OpenAI's servers? If so, isn't the storage cost high? What's more, fine-tuning for GPT-4 (and maybe, in the future, GPT-5) is on the way; won't the size of these new weights files quickly consume the servers' disk space? Thanks a lot!
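For reference, this is roughly the flow I'm asking about, as seen from the user's side (a minimal sketch using the OpenAI Python SDK v1.x; the file name and model are placeholders, and this only shows the public API, not how the resulting weights are stored internally):

```python
# Minimal sketch of the user-facing fine-tuning flow (OpenAI Python SDK v1.x).
# The training file name is illustrative; all the user ever gets back is a
# fine-tuned model ID, never a downloadable weights file.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1. Upload the JSONL training data (chat-format examples).
training_file = client.files.create(
    file=open("training_data.jsonl", "rb"),
    purpose="fine-tune",
)

# 2. Start a fine-tuning job on gpt-3.5-turbo.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)

# 3. Check the job; once it finishes, `fine_tuned_model` holds an ID such as
#    "ft:gpt-3.5-turbo:my-org::abc123" that is passed to chat.completions.
job = client.fine_tuning.jobs.retrieve(job.id)
print(job.status, job.fine_tuned_model)
```

So my question is really about what sits behind that returned model ID on OpenAI's side.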