Fine-tunes visibility across multiple organizations

Hi there!

My OpenAI account belongs to two organizations. The idea was to use one org for creating fine-tunes and dev experiments, and the other for production, so that prod isn’t affected while we play around in dev. But even though the user belongs to both organizations, fine-tunes are only visible within the organization they were created in (is that expected behaviour, by the way? It seems a bit odd).

Does anyone know any alternative way that would allow us to split development/experimentation and production usage of fine-tunes?


Hi @sergei.perminov

Welcome to OpenAI dev forum.

Fine-tunes belong to the org that they were created in. To my knowledge, they can’t be moved from one org to another.

As I understand it, you can have multiple fine-tunes within one org and use whichever one you want in production while using the others for testing/dev. Once you have tested a model and want to promote it to production, simply switch the model name in your code.
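That "switch the model name" step can be as small as a config lookup keyed on the deployment environment. A minimal sketch, assuming hypothetical fine-tuned model IDs and an `APP_ENV` environment variable (both made up for illustration):

```python
import os

# Hypothetical fine-tuned model IDs; substitute the IDs from your own
# fine-tuning jobs.
MODELS = {
    "dev": "ft:gpt-3.5-turbo:my-org:experiment-3:abc123",
    "prod": "ft:gpt-3.5-turbo:my-org:stable:xyz789",
}

def model_for_env(env=None):
    """Pick the fine-tuned model ID based on the deployment environment."""
    env = env or os.getenv("APP_ENV", "dev")
    return MODELS[env]
```

Promoting a tested model to production then becomes a one-line change to the `MODELS` mapping rather than an edit scattered across the codebase.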

1 Like

Do you mean that the same model would have to be created twice, in the sense of fine-tuning two separate models but with the same settings?

Edit: found it

I suppose that’s an easy solution. You could also consider building a gateway to separate the usage and consumption of the organization that runs the prod version of the fine-tuned model from the organization responsible for creating it.
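One piece of such a gateway could be routing each request under a specific organization via the documented `OpenAI-Organization` request header, so that each org's own quota and rate limits apply. A minimal sketch, assuming hypothetical org IDs and environment-variable names for the API keys:

```python
import os

# Hypothetical mapping from organization ID to the env var holding
# that organization's API key.
ORG_KEYS = {
    "org-dev-example": "OPENAI_KEY_DEV",
    "org-prod-example": "OPENAI_KEY_PROD",
}

def build_headers(org_id):
    """Build request headers that pin an API call to one organization,
    so that organization's quota and rate limits are charged."""
    api_key = os.environ[ORG_KEYS[org_id]]
    return {
        "Authorization": f"Bearer {api_key}",
        # Documented OpenAI header selecting which org a request bills to.
        "OpenAI-Organization": org_id,
    }
```

The gateway would then attach these headers before forwarding each request to `api.openai.com`, keeping dev and prod traffic accounted separately even if they hit the same model.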

1 Like

Successively fine-tuning a model used in production isn’t an issue because fine-tuning creates a new model and doesn’t affect the previously fine-tuned one.

1 Like

Hi folks, thanks for your answers!

I suppose I failed to describe my question well, so let me reformulate it:
our intention is to use the same fine-tuned model for two purposes, dev experiments and prod usage, and we want to separate these scenarios so that different quotas and rate limits apply.

Organizations seemed perfect for that purpose, since each org has its own quota, but it turned out that fine-tunes are not shared between organizations even when my OpenAI account has access to both.

Training two fine-tunes on the same dataset wouldn’t be an ideal solution because, if I’m not mistaken, there is always a random element in training, so two fine-tunes will never be exactly the same.

This, right here, is the problem with what you’re asking for. You want to, effectively, be able to double your rate-limits.

You can’t do that.

Taken to the logical conclusion, this would have the effect of making rate-limits meaningless—just make 1,000 organizations!

1 Like

Hi, very interesting thread!

I understand your answer, @elmstedt, regarding the rate limiting.

But I’m facing the same problem @sergei.perminov mentions, and it’s more of a “Docker”-style approach with OpenAI models: build once, use it in different environments, and ensure it’s the same entity (code/app/model) that runs.

Do you have any advice to achieve something like that?

Is the output of a fine-tuning job exportable from one organization to another? That would allow reusing a model as-is across orgs (I’m pretty sure it isn’t).

If you fine-tune a model using e.g. PyTorch, you can save the model with its parameters and reload it elsewhere. That would be a cool feature at least :slight_smile:
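For comparison, the PyTorch pattern being alluded to looks like this: save the learned parameters once, then rebuild the same architecture anywhere and load the exact weights back in. A minimal sketch with a toy model standing in for a fine-tuned network:

```python
import torch
import torch.nn as nn

# A toy model standing in for a fine-tuned network (hypothetical).
model = nn.Linear(4, 2)

# Save only the learned parameters -- the usual PyTorch pattern.
torch.save(model.state_dict(), "model.pt")

# In another environment, rebuild the same architecture and load
# the exact same weights back in.
restored = nn.Linear(4, 2)
restored.load_state_dict(torch.load("model.pt"))
restored.eval()

# Both copies produce identical outputs: it is the same entity.
x = torch.randn(1, 4)
assert torch.equal(model(x), restored(x))
```

This is the "build once, run anywhere" property the post above asks for; with the hosted API, no equivalent export of fine-tuned weights is available, so the model stays inside the organization that created it.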