Fine-tuning myths / OpenAI documentation

Like @RonaldGRuckus said, OpenAI themselves add knowledge with embeddings, not fine-tunes! In particular: run semantic search with embeddings, stuff the prompt with the retrieved text, and ask GPT to use that as context when answering the question.
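Roughly, the flow looks like this. A minimal sketch, assuming the `openai` Python package (the older v0.x-era API, so `openai.Embedding` / `openai.ChatCompletion`); the document chunks and the `embed` helper are my own illustration, not anything from OpenAI's docs:

```python
import numpy as np
import openai

# Hypothetical knowledge chunks you want GPT to answer from
chunks = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support hours are 9am-5pm ET, Monday through Friday.",
]

def embed(text):
    # text-embedding-ada-002 was the standard embedding model at the time
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=text)
    return np.array(resp["data"][0]["embedding"])

question = "When can I get a refund?"
chunk_vecs = [embed(c) for c in chunks]
q_vec = embed(question)

# ada-002 vectors are unit-length, so a dot product is cosine similarity
scores = [float(q_vec @ v) for v in chunk_vecs]
best = chunks[int(np.argmax(scores))]

# Stuff the best-matching chunk into the prompt as context
answer = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": f"Answer using only this context:\n{best}"},
        {"role": "user", "content": question},
    ],
)
print(answer["choices"][0]["message"]["content"])
```

In practice you'd precompute and store the chunk embeddings (a vector DB, or just a NumPy array on disk) and only embed the question at query time.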

NOW, however, we have seen GPT answer questions via fine-tunes, if, when you train it, you set your epochs really high, at least 16 (up from the default of 4). You can certainly try that too! It is possible, but then you are locked into a fine-tune that you can’t easily add knowledge to, which matters if your knowledge changes.
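If you want to try it, here is a sketch of kicking off a high-epoch job, again assuming the v0.x `openai` package and the legacy fine-tunes endpoint (where 4 epochs was the default); the file name is illustrative:

```python
import openai

# Upload your prepared prompt/completion JSONL training data
training = openai.File.create(
    file=open("knowledge_prepared.jsonl", "rb"),
    purpose="fine-tune",
)

# Raise n_epochs from the default of 4 to 16 so the model can actually
# memorize the training data -- this is also exactly what risks overfitting
job = openai.FineTune.create(
    training_file=training["id"],
    model="davinci",
    n_epochs=16,
)
print(job["id"])
```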

You can do it with fine-tunes! But it’s not optimal.

Try both! But I, and most folks, prefer the embedding route, even though it is more work.

Here is the Mega-Thread on high-epoch fine-tunes. Good luck, and don’t overfit!
