What do you think the likelihood is that OpenAI will offer a service to fine-tune ChatGPT on your company’s own data? Feeding it your company’s code, documents, Slack messages, etc. would largely solve the documentation problem that most businesses have. AWS and Google already offer services like this, but obviously they’re not as good as ChatGPT yet.
Go watch Sam Altman’s keynote from a few weeks ago. He says he expects the next unicorn companies will focus on fine-tuning models for various verticals. I think OpenAI is going to focus on building these LLMs and let third parties handle the fine-tuning.
I’m exploring gpt-index right now to build indexes of our company’s source code. So far I’ve built an app that works on the command line and builds a working context index from my console input. I can load individual files or whole directories into the index, then query directly against it. It’s all prompt engineering for now, but I’m going to incorporate results and refinements into the workflow so that I can curate and fine-tune certain models.
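The load-then-query loop above can be sketched roughly like this. To keep it self-contained I’m using a toy word-overlap score instead of gpt-index’s actual embedding-based retrieval (which then feeds the top chunks to the LLM as context); the function names and paths are my own illustration, not gpt-index’s API:

```python
import os

def load_directory(index, directory):
    """Read every file under a directory into the index as path -> text."""
    for root, _dirs, files in os.walk(directory):
        for name in files:
            path = os.path.join(root, name)
            with open(path, encoding="utf-8", errors="ignore") as f:
                index[path] = f.read()

def query(index, question, top_k=3):
    """Rank indexed documents by word overlap with the question.
    gpt-index does this step with embeddings and then stuffs the
    best-matching chunks into the prompt; this toy version just
    returns the most relevant file paths."""
    terms = set(question.lower().split())
    scored = sorted(
        index.items(),
        key=lambda kv: len(terms & set(kv[1].lower().split())),
        reverse=True,
    )
    return [path for path, _text in scored[:top_k]]
```

A command-line wrapper around these two functions (read a line, dispatch to load or query) is basically the workflow described above.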
What Sam Altman was referring to here is that people are going to take open-source models and fine-tune them (or re-train them) to do their own thing. This is already happening! I don’t think OpenAI is going to let third parties fine-tune their models, because I don’t think OpenAI will open-source their models and let others run a fine-tuned model outside their API. It would be cool if they did, but we’ll see. I know they have open-sourced some of their models, but their big models (GPT-3)? No way.
You can do this to a limited extent. ChatGPT, for instance, maintains a context window of about 3,000 words to keep track of the current chat. That of course is not a lot, but it can be helpful to feed in new data (just ask ChatGPT to read it). I have been able to get it to write an article this way about a subject it did not have enough specific data on.
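Pasting data into the chat is really just spending that context window, so it helps to budget it yourself. A minimal sketch, assuming the ~3,000-word figure above (the helper name is my own, not an OpenAI API):

```python
def fit_context(documents, question, word_budget=3000):
    """Pack as many document excerpts as fit into a fixed word budget,
    reserving room for the question, then build a single prompt you can
    paste into the chat. The last document is truncated if needed."""
    budget = word_budget - len(question.split())
    parts = []
    for doc in documents:
        if budget <= 0:
            break
        words = doc.split()
        take = words[:budget]
        parts.append(" ".join(take))
        budget -= len(take)
    context = "\n\n".join(parts)
    return f"Read the following material:\n{context}\n\nNow answer: {question}"
```

Anything that doesn’t fit simply never reaches the model, which is why this approach only works “to a limited extent” for large document sets.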
I tried to fine-tune a few models using the recommended best-practice training approach. My expectation was that a fine-tuned model extends the existing OpenAI models, so that I could use their ability to communicate very well PLUS the knowledge from my fine-tuning data. Maybe it’s a silly question, but testing my fine-tuned models yields nothing really great: at best they find the piece of knowledge related to the prompt and generate something around it, but not with the quality I would expect from GPT-3/4.
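For what it’s worth, OpenAI’s (pre-chat) fine-tuning endpoint trains on JSONL files of prompt/completion pairs, and it mainly teaches the model a pattern of responding rather than injecting new general knowledge, which may partly explain those results. A sketch of preparing such a file, following the separator and stop-sequence conventions from OpenAI’s data-preparation guidelines (the example pair and file name are made up):

```python
import json

def write_finetune_file(pairs, path):
    """Write (prompt, completion) pairs in the JSONL format OpenAI's
    fine-tuning endpoint expects: one JSON object per line with a
    'prompt' and a 'completion' key. The prompt ends with a fixed
    separator and the completion starts with a space and ends with a
    stop token, per the data-preparation guidelines."""
    with open(path, "w", encoding="utf-8") as f:
        for prompt, completion in pairs:
            record = {
                "prompt": prompt + "\n\n###\n\n",
                "completion": " " + completion + " END",
            }
            f.write(json.dumps(record) + "\n")
```

You would then upload the file and start a job with the `openai` CLI; the base model only learns the mapping demonstrated by these pairs.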
Is it correct that a fine-tuned model is NOT an extension of OpenAI’s whole knowledge of the world?