I tried to use ChatGPT to develop an FAQ chatbot by fine-tuning the model on a Chinese Q&A knowledge base.
During our trial, we encountered a few issues, listed below:
- We fine-tuned the model through the ChatGPT API on our Chinese Q&A knowledge base, following the same approach as in these links: a. "Fine-tuning GPT-3 Using Python to Create a Virtual Mental Health Assistant Bot" by Amogh Agastya (Better Programming); b. Google Colab. My question is whether this is the right way to use the ChatGPT API to fine-tune the model on the given dataset.
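For reference, a minimal sketch of preparing the Q&A pairs for fine-tuning, assuming the current chat-format JSONL expected by the OpenAI fine-tuning endpoint (the linked article targets the older GPT-3 prompt/completion format, so your files may look different). The `faq_pairs` sample and the system message are hypothetical placeholders for your knowledge base:

```python
import json

# Hypothetical sample Q&A pairs; replace with your Chinese knowledge base.
faq_pairs = [
    {"question": "如何重置密码？", "answer": "请在登录页面点击“忘记密码”并按提示操作。"},
    {"question": "客服工作时间是？", "answer": "客服工作时间为每天 9:00 至 18:00。"},
]

def to_chat_example(pair):
    """Convert one Q&A pair into a chat-format training record
    (one JSON object per line in the JSONL training file)."""
    return {
        "messages": [
            {"role": "system",
             "content": "You are an FAQ assistant. Answer only from the knowledge base."},
            {"role": "user", "content": pair["question"]},
            {"role": "assistant", "content": pair["answer"]},
        ]
    }

def write_jsonl(pairs, path="faq_train.jsonl"):
    """Write one training example per line; ensure_ascii=False keeps
    the Chinese text readable in the file."""
    with open(path, "w", encoding="utf-8") as f:
        for pair in pairs:
            f.write(json.dumps(to_chat_example(pair), ensure_ascii=False) + "\n")

write_jsonl(faq_pairs)
```

The resulting file would then be uploaded and used to start a fine-tuning job (in the current Python SDK, via `client.files.create(...)` and `client.fine_tuning.jobs.create(...)`).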
- Given the same question, running the ChatGPT model several times produces different answers. Is it possible to always get the same answer as the one in the Q&A database for the same question?
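The variability mostly comes from sampling parameters. A small sketch of request settings that reduce it, assuming the chat completions API (the helper name `build_faq_request` and the seed value are my own; even with these settings the API does not guarantee bit-identical outputs):

```python
def build_faq_request(question, model="gpt-3.5-turbo"):
    """Request parameters aimed at reducing answer variability:
    temperature=0 makes sampling effectively greedy, and the `seed`
    parameter (where supported) further stabilises outputs across
    calls, though determinism is still best-effort."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": question}],
        "temperature": 0,
        "top_p": 1,
        "seed": 42,  # arbitrary fixed seed for reproducibility
    }

params = build_faq_request("如何重置密码？")
# These parameters would be passed to client.chat.completions.create(**params).
```

If the answers must match the database verbatim, generation alone cannot guarantee that; a lookup (exact or embedding-based) against the Q&A database, with the model used only to match the question, is the more reliable design.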
- Given a question, the model sometimes returns an answer missing some information compared with the answer to the same question in the Q&A database, or, on the contrary, an answer with additional information. Is this because of model settings such as max_tokens = 100? If so, how can we fix this so the answers match the ones in the Q&A database?
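A low max_tokens can explain the *missing* information: the response is cut off, and the API signals this with `finish_reason == "length"`. A sketch of checking for that, using a mocked response dict in the chat-completions shape (illustration only, not a live API call):

```python
def is_truncated(response):
    """True if the completion stopped because it hit the max_tokens
    budget ('length') rather than finishing naturally ('stop')."""
    return response["choices"][0]["finish_reason"] == "length"

# Mocked response for illustration; a real call returns this shape
# from the chat completions endpoint.
mock_response = {
    "choices": [
        {"message": {"role": "assistant", "content": "部分回答…"},
         "finish_reason": "length"},
    ]
}

if is_truncated(mock_response):
    # Retry with a larger budget (e.g. max_tokens=500) so the full
    # knowledge-base answer fits.
    pass
```

The *additional* information is a different issue: it comes from the model generating beyond your data, not from max_tokens, so raising the limit will not remove it; tighter prompting and temperature=0 help, but only returning stored answers directly guarantees an exact match.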
- Given the same questions, we tried both the ChatGPT portal and the ChatGPT API (both using the default model without fine-tuning), and the model in the ChatGPT portal seems to give more reasonable answers than the one behind the API. Why are there such large differences between the answers from the ChatGPT portal and the ChatGPT API?
Looking forward to your answers. Much appreciated!