Mechanism of fine-tuning GPT

I wonder about the mechanism of GPT-3.5's fine-tuning. Does it mean that if I fine-tune GPT-3.5, a new weights file will be stored for me on OpenAI's servers? If so, isn't the storage cost high? What's more, fine-tuning for GPT-4 (and maybe GPT-5 in the future) is on the way; will the new weights files quickly consume the servers' disk space? Thanks a lot!