How can we make the answer concise with fine tuning?

I am trying to create a customer chatbot for a facility, what kind of data would be most efficient to train the model on?
We know that we need to include hundreds of training examples.
For example, if I am asked where the restrooms are located,
prompt:Where is the restroom?
completion:It is on the first floor.
prompt:Where is the bathroom?
completion:It is just inside the entrance.

Would multiple answers to a single question like this make the model smarter?

Hi, using LangChain or gpt-index, you can train with data such as Word or PDF files.

Hello! When training a customer chatbot, it’s important to provide a diverse range of data that covers various types of user queries and scenarios. Including multiple answers to a single question can indeed help improve the model’s performance and make it more versatile in handling different responses.

In your example, having multiple answers to questions about the restroom or bathroom locations can be beneficial. By including different variations of responses, such as “It is on the first floor” and “It is just inside the entrance,” you allow the model to learn that there can be multiple correct answers to similar queries. This can help the model generate more accurate and contextually appropriate responses in real-world conversations.

Here are a few tips for training your customer chatbot effectively:

  1. Diverse and real-world data: Include a wide range of customer queries, covering different topics, intents, and phrasings. Incorporate real interactions, FAQs, and scenarios specific to your facility.
  2. User-friendly language: Train the model with data that reflects the language your customers use while interacting with the chatbot. This ensures the bot’s responses are more relatable and natural to users.
  3. Properly structured prompts: Create prompts that clearly indicate the user’s intent or question and provide enough context for the desired response. Well-structured prompts help guide the model’s understanding and generate accurate answers.
  4. Handling variations: Include different phrasings, synonyms, and variations of questions to help the model generalize and handle user queries with slight variations.
  5. Negative examples: Include examples where the expected response is not applicable or when the chatbot should ask for clarification. This helps the model learn to handle cases when it doesn’t have the necessary information to provide a specific answer.
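As a minimal sketch of these tips, hypothetical training examples could be written in the JSONL format the fine-tuning endpoint expects (the file name and wording below are purely illustrative):

```python
import json

# Hypothetical examples: variations of the same question (tip 4),
# user-style phrasing (tip 2), and a clarification case (tip 5).
examples = [
    {"prompt": "Where is the restroom?", "completion": "It is on the first floor."},
    {"prompt": "Where is the bathroom?", "completion": "It is just inside the entrance."},
    {"prompt": "Where can I charge my car?",
     "completion": "Could you tell me which entrance you are near?"},
]

# Fine-tuning expects one JSON object per line (JSONL).
with open("training_data.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```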

Remember, training a chatbot is an iterative process. You may need to experiment, fine-tune, and validate the model’s responses with real users to achieve the desired level of performance and accuracy.

import openai

# Set up your OpenAI API credentials
openai.api_key = 'YOUR_API_KEY'

# Define your training data
training_data = [
    {
        'prompt': 'Where is the restroom?',
        'completion': 'It is on the first floor.'
    },
    {
        'prompt': 'Where is the bathroom?',
        'completion': 'It is just inside the entrance.'
    },
    # Add more training data with various prompts and completions
]

# Prepare the training examples
examples = []
for data in training_data:
    prompt = data['prompt']
    completion = data['completion']
    examples.append({'input': prompt, 'output': completion})

# Fine-tune the model
model = openai.Completion.create(
    engine='text-davinci-003',  # Choose the appropriate engine
    examples=examples,
    epochs=10,  # Adjust the number of training epochs
    batch_size=4  # Adjust the batch size based on your requirements
)

# Test the chatbot
user_input = 'Where can I find the restroom?'
response = openai.Completion.create(
    engine='text-davinci-003',  # Choose the same engine used for fine-tuning
    prompt=user_input,
    max_tokens=50  # Adjust the response length as needed
)

# Print the chatbot's response
print(response.choices[0].text)
In this example, we define a training_data list that contains multiple dictionaries, each representing a training example with a prompt and a completion. We then prepare the training examples by formatting them as input-output pairs in the examples list.

Next, we fine-tune the model using the openai.Completion.create method, specifying the engine, examples, and other training parameters like the number of epochs and batch size.

Finally, we test the chatbot by providing a user input and using the OpenAI API’s completion capability to generate a response. The generated response is printed to the console.

Note that you’ll need to replace 'YOUR_API_KEY' with your actual OpenAI API key, and choose the appropriate engine for your task (e.g., text-davinci-003).

Make sure to refer to the OpenAI API documentation for more details on using the API and the available parameters:

Remember, fine-tuning models with the OpenAI API may incur additional costs based on usage.

Thank you very much.
So the model gets smarter as it is trained on more questions and answers about toilet locations.
If I train multiple answers to the question "Where is the toilet?", will the model combine the learned answers when I ask that question?
I am creating the data in CSV format, converting it to a JSONL file and then tuning it.
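For reference, the conversion step looks roughly like this (a sketch; the column names `prompt` and `completion` are assumptions about the CSV layout):

```python
import csv
import json

def csv_to_jsonl(csv_path, jsonl_path):
    """Convert a two-column CSV (prompt, completion) to JSONL."""
    with open(csv_path, newline="") as src, open(jsonl_path, "w") as dst:
        for row in csv.DictReader(src):
            record = {
                "prompt": row["prompt"],
                # The fine-tuning guide recommends a leading space on completions.
                "completion": " " + row["completion"],
            }
            dst.write(json.dumps(record) + "\n")
```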
Is it possible to fine-tune text-davinci-003?
I read on the official page that only four basic models can be tuned.

Hi @michael.simpson555

Looks like this message was written by ChatGPT.

Please refrain from sharing ChatGPT’s response to the problem posed by @shinnosuke1056 as your reply unless you have verified that it’s correct. If you do so, explicitly mention it like “Response from ChatGPT:”.

The message has glaring errors in the code and explanation.


  1. The code snippet you shared claims to fine-tune the model with the completions endpoint by using a parameter "examples" that doesn’t even exist.
  2. This is not the link to the docs.

ChatGPT is known to hallucinate. Replies such as this create more confusion which is counter to what this community is intended for.


Hi @shinnosuke1056

If my understanding is correct, your use case is factual answers (including references, links, etc.). In this case embeddings are better than fine-tuning and much more economical as well.

Simply store a JSON object with embeddings mapped to facts, and inject factual context into your call to the chat completion model for the closest embedding to the user message.
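A minimal sketch of that flow, with hand-made toy vectors standing in for real embeddings (in practice the vectors would come from something like openai.Embedding.create with text-embedding-ada-002):

```python
import math

# Toy fact-to-embedding map; real embedding vectors are ~1536 dimensions.
facts = {
    "The restroom is on the first floor.": [0.9, 0.1, 0.0],
    "The cafeteria opens at 8 am.": [0.1, 0.9, 0.0],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def closest_fact(query_embedding):
    """Return the stored fact whose embedding is nearest to the query."""
    return max(facts, key=lambda fact: cosine(facts[fact], query_embedding))

# The chosen fact is then injected as context, e.g.:
# openai.ChatCompletion.create(
#     model="gpt-3.5-turbo",
#     messages=[
#         {"role": "system", "content": f"Answer using this fact: {fact}"},
#         {"role": "user", "content": user_message},
#     ],
# )
```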


I’ve had success getting short responses by coming up with a prompt and completion format with a stop sequence and then training it to use that format. Models are stateless so they don’t get “smarter” unless you feed the previous context into the prompt. Using your example, I’d fine-tune using something like this:

  prompt: "Where is the restroom?\n\nBOT:",
  completion: " It is on the first floor.\n\nUSER: ",
  prompt: "Where is the restroom?\n\nBOT: It is on the first floor.\n\nUSER: Where is the bathroom?\n\nBOT:",
  completion: " It is just inside the entrance.\n\nUSER: ",

By formatting each completion as "\n\nBOT: <response>\n\nUSER: ", it teaches the model to have short answers by reinforcing the newlines and the user prompt. Otherwise the model may just keep continuing on the same response.

Also, you’ll see that the second prompt includes the entire conversation up to that point, so the model will have context about the past when performing subsequent completions.

When running completions on the model for your app, you’d want to add “USER:” as a stop sequence in the request, and you’d want to append “\n\nBOT:” to every user prompt before sending it to the model. I call this bit of text the “completion assist” because it assists the model towards generating the type of desired completion (although I’m guessing there’s an actual term for that).
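Putting that together, a rough sketch of the request (the fine-tuned model name is a placeholder):

```python
def build_prompt(history, user_message):
    """Append the "completion assist" so the model answers as BOT.

    history is "" on the first turn; afterwards it ends with "\n\nUSER: ".
    """
    return history + user_message + "\n\nBOT:"

# response = openai.Completion.create(
#     model="your-fine-tuned-model",          # placeholder
#     prompt=build_prompt(history, user_message),
#     stop=["USER:"],                         # cut off before the next turn
#     max_tokens=50,
# )
```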

This question has been asked a thousand times, but I’ll ask again: in a question-and-answer scenario like the one in the initial post of this thread, which is best, fine-tuning or embedding? According to this video, it is definitely embedding. However, I still see posts like yours suggesting some success with fine-tuning.

To me, the golden solution would be to be able to train gpt-3.5-turbo with all of our questions and answers, then use chat completions with embedded context against the trained model. But it doesn’t appear that is possible.

My understanding is that embedding is best for populating a “knowledge base” from which to draw answers, and fine-tuning is best for training the model to respond in a certain style or format.

My answer was specifically in response to “how to make the answer concise with fine tuning” so I was focused on that part rather than where the actual answers should come from.

For this use case, you could ideally use both: embeddings to semantically search for the answer and a fine-tuned model to deliver those answers the way you want (short answers).

One thing I’ve been looking into is sequential fine-tuning, where you fine-tune a base model with unstructured data (similar to embeddings) and then fine-tune again (on top of the first layer) with Q&A samples. This technique is briefly mentioned in the OpenAI draft fine-tuning guide. I’d be curious to see how that first layer compares to embeddings.
