Starting from today we have the possibility to fine-tune GPT on our own data.
Reading the fine-tuning page, I don't fully understand whether fine-tuning now makes it possible to add knowledge to GPT.
After fine-tuning GPT, does it answer from our data (and only our data) without the need to give context in the prompt?
Fine-tuning adds new ways of processing information, not the information itself.
The best analogy I know is that of a writer: if you fine-tuned the model on their life's work, you would then be able to generate literature in their style, but not ask questions about their books' contents. You may get incidental inclusion of information, commonly used key words, perhaps catchphrases, but it would not become an encyclopaedic repository of their work.
Embeddings (OpenAI Platform) are a better way to store a large corpus of work for later retrieval by semantic similarity.
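As a minimal sketch of that retrieval idea: embed the corpus once, embed the query, and rank documents by cosine similarity. The corpus, vectors, and helper names below are illustrative assumptions standing in for real embedding-API outputs.

```python
# Sketch of retrieval by semantic similarity. The toy 3-dimensional
# vectors stand in for real embedding vectors from an embeddings API.
from math import sqrt

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

# Pretend corpus: each document stored alongside its (toy) embedding.
corpus = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.8, 0.2],
    "api rate limits": [0.0, 0.2, 0.9],
}

def retrieve(query_embedding, top_k=1):
    ranked = sorted(corpus.items(),
                    key=lambda item: cosine_similarity(query_embedding, item[1]),
                    reverse=True)
    return [name for name, _ in ranked[:top_k]]

# A query whose embedding is close to "refund policy" retrieves that
# document, which you would then paste into the prompt as context.
print(retrieve([0.85, 0.15, 0.05]))  # -> ['refund policy']
```

The retrieved text is what actually carries your knowledge into the model, at query time, via the prompt.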
Yes, this was my understanding too. But then why does OpenAI send emails like this? If it is as you say, what's the news? It has always been like that:
Starting today, you can now fine-tune GPT-3.5 Turbo for custom use cases. Read more about the new fine-tuning capabilities in our latest [blog post]
Fine-tuning use cases
Since the release of GPT-3.5 Turbo, developers and businesses have asked for the ability to customize the model to create unique and differentiated experiences for their users. With this launch, developers can now run supervised fine-tuning to make this model perform better for their use cases. In our early results, we have seen developers achieve:
- Improved steerability
- Reliable output formatting
- Consistent custom tone
In addition to increased performance, fine-tuning also enables businesses to shorten their prompts while ensuring similar performance.
Fine-tuning costs are broken down into two buckets: the initial training cost and usage cost:
- Training: $0.008 / 1K Tokens
- Usage input: $0.012 / 1K Tokens
- Usage output: $0.016 / 1K Tokens
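To make the quoted rates concrete, here is a quick back-of-the-envelope calculation; the token counts are hypothetical, only the per-1K prices come from the email above.

```python
# Worked example of the fine-tuning rates quoted above.
# Token counts are invented for illustration.
TRAIN_PER_1K = 0.008    # training, $ per 1K tokens
INPUT_PER_1K = 0.012    # usage input, $ per 1K tokens
OUTPUT_PER_1K = 0.016   # usage output, $ per 1K tokens

training_tokens = 100_000   # tokens in the training file
input_tokens = 50_000       # prompt tokens sent at inference time
output_tokens = 20_000      # completion tokens generated

training_cost = training_tokens / 1000 * TRAIN_PER_1K          # $0.80
usage_cost = (input_tokens / 1000 * INPUT_PER_1K
              + output_tokens / 1000 * OUTPUT_PER_1K)          # $0.92
print(f"training ${training_cost:.2f}, usage ${usage_cost:.2f}")
```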
The news is that you can now fine-tune gpt-3.5-turbo, the conversational chat model; prior to today, fine-tuning was only available for the completion base models.
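For reference, chat-model fine-tuning takes training data as a JSONL file where each line is a conversation in the chat messages format (the conversation content below is made up):

```python
import json

# One training example for chat-model fine-tuning: the .jsonl training
# file contains many lines shaped like this. The content is invented.
example = {
    "messages": [
        {"role": "system", "content": "You answer in a pirate voice."},
        {"role": "user", "content": "Where is my order?"},
        {"role": "assistant", "content": "Arr, yer parcel be on the high seas!"},
    ]
}

line = json.dumps(example)  # one line of the .jsonl training file
print(line)
```

Note how this trains the *style* of the assistant turn, which fits the writer analogy above: the model learns how to respond, not a knowledge base to answer from.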