We "trained" our own model through a ChatGPT conversation. When we access the API, can the model answer based on that conversation? If so, how should I set the API parameters? Or can I only achieve this by passing the training content to GPT as conversation history?
For now, fine-tuning is only available for "completion" models such as davinci. The "chat" models such as gpt-3.5 and gpt-4 are not fine-tunable yet. So yes, you will need to include the earlier conversation in your prompt. You can also use embeddings to supply more context to the model.
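To illustrate the fine-tuning route for completion models: the legacy fine-tuning workflow expects a JSONL training file with one `{"prompt": ..., "completion": ...}` object per line. The example conversation turns and separator tokens below are made up; only the JSONL shape is the point.

```python
import json

# Hypothetical conversation turns to turn into fine-tuning data.
# The legacy fine-tuning format for completion models (e.g. davinci)
# is JSONL: one {"prompt": ..., "completion": ...} object per line.
turns = [
    ("What is your return policy?", "You can return items within 30 days."),
    ("Do you ship internationally?", "Yes, we ship to over 50 countries."),
]

# "\n\n###\n\n" and " END" are illustrative separator/stop conventions,
# not required values.
lines = [
    json.dumps({"prompt": q + "\n\n###\n\n", "completion": " " + a + " END"})
    for q, a in turns
]
jsonl = "\n".join(lines)
print(jsonl)
```

You would then upload this file and start a fine-tune job against a completion model; the chat models won't accept it for now.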
Maybe you can try davinci for now and fine-tune that until gpt-3.5/gpt-4 become fine-tunable.
Thank you for your response. So, if we want to use a chat model, we can only achieve the desired effect by providing the historical conversation content.
When you interact with a language model, it treats each request as an independent instance, meaning it doesn't inherently recall previous interactions. Therefore, to keep a conversation's context, you need to include the relevant parts of the previous interactions each time you ask a new question.
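Concretely, that means you maintain the message list yourself and resend it on every request. A minimal sketch, where the system prompt and conversation turns are invented for illustration:

```python
# The chat API is stateless: each request must carry the full prior
# conversation in the `messages` list, or the model won't remember it.

history = [
    {"role": "system", "content": "You are a helpful assistant."},
]

def add_turn(history, role, content):
    """Append one conversation turn; the whole list is sent each call."""
    history.append({"role": role, "content": content})
    return history

add_turn(history, "user", "My name is Alice.")
add_turn(history, "assistant", "Nice to meet you, Alice!")
add_turn(history, "user", "What is my name?")

# In a real call you would pass the entire list every time, e.g.:
# response = openai.ChatCompletion.create(
#     model="gpt-3.5-turbo", messages=history
# )
print(len(history))  # every turn so far goes into the next request
```

The trade-off is that the history counts against the model's context window, which is why people prune old turns or summarize them.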
But you should also look into embeddings/vectorization and fine-tuning.
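The embeddings approach works roughly like this: embed your documents once, embed each query, retrieve the most similar document, and prepend it to the prompt as context. A toy sketch with made-up 3-d vectors standing in for real embeddings (a real system would get them from an embedding model such as text-embedding-ada-002):

```python
import math

# Toy retrieval sketch: pick the stored document whose (fake) embedding
# is closest to the query's (fake) embedding by cosine similarity.
docs = {
    "refunds": [0.9, 0.1, 0.0],
    "shipping": [0.1, 0.8, 0.2],
}
query_vec = [0.85, 0.15, 0.05]  # pretend embedding of "How do I get a refund?"

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# The best match is what you would stuff into the prompt as context.
best = max(docs, key=lambda name: cosine(docs[name], query_vec))
print(best)
```

This is why embeddings help with the context problem: instead of resending the whole history or fine-tuning, you only inject the few passages relevant to the current question.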