Talking Random Continuous Rubbish

So, I have a company that provides hardware, and I'm looking for the bot to do simple things such as directing people to returns information, shipping, trade prices, etc. I've created a simple training file with 100 questions in it. In the system prompt I tell GPT it's a support agent working for an electronics company.

I had pretty impressive results with the API just by feeding it a list of instructions as the first message in the conversation, but I want to take it to the next level with much more help from the training data.

I’ve made 5 models today using different versions of GPT-3.5 and different iterations of the training file, and each trained model just talks complete rubbish.

It doesn’t even answer any of the questions.

e.g. I asked it what the returns policy was and it gave me an essay on Airbnb. I asked it who it works for: it said a book shop, then a shoe shop, then told me its name was Zoe, then said its name was Scott, and then told me it had no idea why it was saying these things.

I know others have had better results with prompting at the start of the conversation rather than with training, but surely training must be better than this?

It’s completely unusable; the trained models just talk complete garbage.

I can upload a rule that says, if asked your name, you reply “I am Scott” or “I work for ABC Hardware”, and it will never say any of the things it’s told to; instead it picks random rambling garbage.

What’s going on?!

Just to confirm: you are talking about fine-tuning?

I’m guessing your training file contains domain-specific & factual information?

Fine-tuning should be for behavioral changes.

What you’re looking for is a RAG system, which you can accomplish using Retrieval, or, if you want more control (and to spend less money), you can use a vector database with embeddings and a similarity search.
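As a rough illustration, here's a minimal sketch of the embeddings route, assuming the official openai v1 Python client; the model names, FAQ entries, and shop name are all placeholders:

```python
# Minimal RAG sketch: embed the FAQ once, then answer each query by
# retrieving the most similar entry and letting the model answer from it.
import numpy as np
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

FAQ = [  # placeholder question/answer pairs
    ("What is the returns policy?", "Returns are accepted within 30 days."),
    ("How much is shipping?", "Shipping is £14 to the UK and EU."),
]

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-ada-002", input=texts)
    return np.array([d.embedding for d in resp.data])

faq_vectors = embed([q for q, _ in FAQ])  # computed once, up front

def answer(user_question):
    qv = embed([user_question])[0]
    # Cosine similarity between the query and every stored FAQ question
    sims = faq_vectors @ qv / (np.linalg.norm(faq_vectors, axis=1) * np.linalg.norm(qv))
    _, best_answer = FAQ[int(np.argmax(sims))]
    chat = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": f"You are a support agent for ABC Hardware. Answer using only this fact: {best_answer}"},
            {"role": "user", "content": user_question},
        ],
    )
    return chat.choices[0].message.content
```

At 150 Q&A pairs an in-memory array like this is plenty; a proper vector database only becomes worth it at much larger scale, and the retrieval step stays the same idea.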

Lastly, you can add rules in the prompt. You could do it with some slight fine-tuning, maybe, but for the cost it doesn’t really make sense (unless you had a lot of rules, and even then, you would implicitly apply those rules by “showing” rather than “telling”).

Thank you very much for your reply. Have I got this wrong, then? I thought you used fine-tuning to give GPT questions and answers, e.g. the question “How much is shipping?” with the answer “Shipping is £14”.

The prompt works well for doing this, i.e. providing it in the first message of the chat (“you are an advisor, shipping is £14, we open at 10am”, etc.), but there are over 150 questions and answers I wanted to train it on.
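For reference, that prompting approach looks roughly like this in code; a sketch assuming the openai v1 Python client, with the shop details made up:

```python
# Sketch of the "facts in the first message" approach described above,
# using the openai v1 Python client; the shop details are placeholders.
from openai import OpenAI

client = OpenAI()

FACTS = (
    "You are a support advisor for ABC Hardware. "
    "Shipping is £14. We open at 10am. "
    "Returns are accepted within 30 days with proof of purchase."
)

resp = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": FACTS},
        {"role": "user", "content": "How much is shipping?"},
    ],
)
print(resp.choices[0].message.content)
```

The catch is that every fact rides along on every request, so 150 Q&A pairs means paying for those tokens on each call, which is what pushes people toward retrieval.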

OpenAI have documentation for fine-tuning which includes this example, which is exactly what I am trying to do:

{"messages": [{"role": "system", "content": "Marv is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "What's the capital of France?"}, {"role": "assistant", "content": "Paris, as if everyone doesn't know that already."}]}

I just have 150 entries very similar to this.
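(For anyone following along, turning a JSONL file like that into a fine-tuned model takes two API calls; a sketch with the openai v1 Python client, where "data.jsonl" is a placeholder filename:)

```python
# Sketch: upload a JSONL training file and start a fine-tuning job.
from openai import OpenAI

client = OpenAI()

training_file = client.files.create(
    file=open("data.jsonl", "rb"),  # placeholder filename
    purpose="fine-tune",
)

job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)
print(job.id)  # poll client.fine_tuning.jobs.retrieve(job.id) for status
```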

The model can learn things as a by-product of fine-tuning, but there are a lot of issues with trying to fine-tune factual information. The biggest one (off the top of my head) is: how do you update that information?

In the docs it states:

Fine-tuning lets you get more out of the models available through the API by providing:
- Higher quality results than prompting
- Ability to train on more examples than can fit in a prompt
- Token savings due to shorter prompts
- Lower latency requests

Some common use cases where fine-tuning can improve results:
- Setting the style, tone, format, or other qualitative aspects
- Improving reliability at producing a desired output
- Correcting failures to follow complex prompts
- Handling many edge cases in specific ways
- Performing a new skill or task that’s hard to articulate in a prompt

In your example you’ll notice the important part: “Marv is a factual chatbot that is also sarcastic.” GPT models have known since forever that Paris is the capital of France. In their example they are adding sarcasm, i.e. behavioral training. They were not educating the model; it already knew the answer (which Marv should deliver sarcastically).

Check out the documentation for embeddings.

https://platform.openai.com/docs/guides/embeddings

Looks like I have completely misunderstood. Thank you.

I guess this means GPT is no use as a customer service chat bot.

Of course it is!

You may want to consider creating a GPT using ChatGPT Plus and seeing how retrieval works.

I’ll look into it thank you.

The other idea I had was having, like, six chatbots with different briefs in the opening message, e.g. “you are an assistant, you only deal with shipping; we ship to the EU and UK, shipping is £x, you get your tracking here www…”, etc.,

but I understood that this then creates a large chat conversation going back and forth, which eats more money than using training would.
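If it helps, a cheap routing call can keep that from ballooning: only the one short brief that matches the question travels with each request. A rough sketch, assuming the openai v1 Python client, with placeholder briefs:

```python
# Sketch of the "several specialist bots" idea: a cheap routing call picks
# a topic, and only that topic's short brief is sent with the question.
from openai import OpenAI

client = OpenAI()

BRIEFS = {  # placeholder briefs
    "shipping": "You are an assistant; you only deal with shipping. "
                "We ship to the EU and UK. Shipping is £14.",
    "returns": "You are an assistant; you only deal with returns. "
               "Returns are accepted within 30 days.",
}

def route(question):
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Reply with exactly one word, the topic of the "
                        "question: shipping or returns."},
            {"role": "user", "content": question},
        ],
    )
    topic = resp.choices[0].message.content.strip().lower()
    return BRIEFS.get(topic, BRIEFS["shipping"])  # crude fallback

def ask(question):
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": route(question)},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content
```

Since only one brief is sent per call, the per-request token cost stays close to a single small prompt rather than all six briefs at once.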