How to enable the chatbot to ask first

Hi,

I am creating a chatbot to communicate with me. How can I enable the chatbot to initiate communication by asking questions, instead of passively waiting for a message to answer? I have conversation text data of two people talking, without labels.

I am using the OpenAI API in Python.

1 Like

Welcome to the forum, it is a great place :mouse::rabbit::honeybee::heart::four_leaf_clover::infinity::repeat:

Have it generate topics, such as "hello responses", where it will randomly start a conversation you find interesting when you just say hi. I.e., You: "Hi chatbot." Bot: "Hi boss, (search) have you heard about 'example'?"

That will start the loop: you answer, and it talks back with questions.

Hi, Mitchell - Thank you for your answer. I don't quite understand it. Where can I find the examples?

1 Like

The example is your topics. Like, if I was setting up the instructions: at hello, ask a question about tech news, for example "Have you seen _________?"

Example of the instruction logic

See, it asks questions, etc., if you give it instructions on how you want the chat to go.

You can have it ask about random topics too.

A preliminary welcome message can just be programmatic. It has no user input to be based on, so you can just make some random introductions to be shuffled and displayed in a user interface.

When in a conversation, there will always be a pattern of user input → AI output.
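A minimal sketch of that programmatic opener in Python (the introduction strings here are invented for illustration):

```python
import random

# Hypothetical list of canned introductions -- the "shuffled and displayed"
# idea from above; the strings themselves are made up.
OPENERS = [
    "Hi! Want to hear about some tech news?",
    "Hello! Shall we pick up where we left off?",
    "Hey there! I have a question for you today.",
]

def first_message() -> str:
    """Pick a random canned opener to show before any user input exists."""
    return random.choice(OPENERS)

# Store the opener as the first assistant turn, so the usual
# user-input -> AI-output pattern continues from there.
history = [{"role": "assistant", "content": first_message()}]
```

The opener never touches the model; it only seeds the visible chat and the context that later requests are built from.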

1 Like

Thank you @mitchell_d00 and @arata. As far as I understand, we can use prompt engineering so that the LLM asks about the topics we want it to address.

I fine-tuned a model with some domain-specific conversations. I want the chatbot to lead the conversation, providing personalized info based on given personal info. I believe that, at the prompt-engineering layer, we can give the LLM personal information and instructions. However, before prompt engineering or vector search, I would like to know the best way to fine-tune a model so that the chatbot can proactively lead the conversation, asking questions and providing personalized answers.

Any suggestion?

1 Like

Fine-tuning again follows the pattern: user input, and then what you want the assistant to produce that differs from typical usage.
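Concretely, a supervised fine-tuning file is JSONL with one conversation per line; to teach the assistant to ask, the assistant turns in your examples should themselves end in questions. A sketch (all wording invented):

```python
import json

# One hypothetical supervised fine-tuning example: the target assistant
# turn both answers and asks a follow-up question.
example = {
    "messages": [
        {"role": "system", "content": "You lead the conversation and ask questions."},
        {"role": "user", "content": "Hi"},
        {"role": "assistant",
         "content": "Hi boss! Have you heard about the new telescope images? "
                    "Want me to summarize them?"},
    ]
}

# Each training conversation becomes one line of the .jsonl file.
line = json.dumps(example)
```

Dozens of such lines, all ending assistant turns with questions, is what biases the tuned model toward leading the conversation.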

You have not clearly presented to us how you see “Welcome to our chatbot” appearing, and what would happen when the user says “Hi, can you help plan vacations?” or whatever else it is you wish.

The first thing I would do, if you want the AI to conduct an interview and gather information: think of ways to instruct that continued conversation via an overall system message that tells the AI what its job is and what its goals are; "Give me all your personal information!", or "then you must ask how many nights they want to stay…". Also, how it knows the interview has concluded and it should move on to the next "results" stage of production.

Fine-tuning will not be successful if you can't simply prompt-engineer your way into the same application with clarity of understanding.
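As a sketch of that kind of system message (the vacation fields and the completion sentinel are my own invention, not anything from the API):

```python
# Hypothetical interview-style system message; the fields to collect and
# the INTERVIEW_COMPLETE sentinel are illustrative assumptions.
INTERVIEW_SYSTEM = (
    "You are a vacation-planning assistant. Interview the user one question "
    "at a time until you know their destination, travel dates, and how many "
    "nights they want to stay. End every reply with exactly one question. "
    "When all three are known, say INTERVIEW_COMPLETE and summarize them."
)

def interview_messages(history: list) -> list:
    """Prepend the system message to the running conversation history."""
    return [{"role": "system", "content": INTERVIEW_SYSTEM}] + history
```

Application code can then watch the replies for the sentinel string to know when to switch to the "results" stage.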

2 Likes

Sorry for my unclear statements. I want a chatbot that provides suggestions based on personalized information. Assume that you have personal information such as {your_name} and {area_of_your_interesting_domain}.

For instance, the chatbot initiates a talk such as "How are you, {your_name}? I can help you study A, B, and C in {area_of_your_interesting_domain}."
Then the user says, "I want to know more about A. Can you describe it?"
The chatbot says, "A is … "
The user says, "Is this the right answer?"

As you suggest, it sounds like prompt engineering should align with fine-tuning. I believe the prompt should be the following:
##########
You are a friendly and helpful chatbot designed to provide personalized information to users. You will receive the user’s name and their area of interest. Use this information to initiate the conversation and offer assistance. Follow the example conversation flow provided below and ask relevant follow-up questions to guide the user’s learning. If you don’t have enough information to answer a question, respond with “I’m still learning about that. Can you tell me more?”

User Name: {your_name}
Area of Interest: {area_of_your_interesting_domain}

Example Conversation:

Chatbot: How are you, {your_name}? I can help you learn about different concepts in {area_of_your_interesting_domain}. Some topics we can explore include A, B, and C. Which one are you interested in learning more about today?

User: I want to know more about A. Can you describe it?

Chatbot: A is a fundamental concept in {area_of_your_interesting_domain}. It involves X, Y, and Z. Can you tell me what you already know about X?

User: I know that X is related to P and Q. Is that correct?

Chatbot: Yes, that’s absolutely right! X is indeed related to P and Q. It’s also connected to R. Can you tell me how X, P, and Q are related?

User: I’m not sure about that.

Chatbot: That’s okay! X, P, and Q are related through the process of S. Do you know what S is?

Instructions:

  1. Greet the user using their name.
  2. Offer assistance related to their area of interest.
  3. Provide a few initial topics (at least three) within their area of interest.
  4. Ask clarifying questions related to the user’s chosen topic.
  5. If the user asks a question you cannot answer, respond with “I’m still learning about that. Can you tell me more?”
  6. Follow the pattern of the example conversation, asking follow-up questions to guide the user’s understanding.
  7. Maintain a friendly and helpful tone throughout the conversation.

How could these steps help fine-tuning?

1 Like
  1. Greet the user using their name. (your name goes here)

  2. Offer assistance related to their area of interest. (it needs to know the interests)

  3. Provide a few initial topics (at least three) within their area of interest. (it needs to know the interests)

  4. Ask clarifying questions related to the user's chosen topic. (it will do this on its own once a topic is engaged)

  5. If the user asks a question you cannot answer, respond with "I'm still learning about that. Can you tell me more?"

  6. Follow the pattern of the example conversation, asking follow-up questions to guide the user's understanding.

  7. Maintain a friendly and helpful tone throughout the conversation.

These are GPT instructions ^

Put them in the custom instructions, found under Settings → Personalization.

1 Like

I created a fine-tuned model.

The following code calls my fine-tuned model. I will use your suggestion in the prompt. How can I fine-tune the model better, before the prompt-engineering layer?

from dotenv import load_dotenv
import openai
import os


# Simple console chat loop against the fine-tuned model
def conversational_gpt(openai_client, model, temp):
    print("ChatGPT Assistant. Type 'exit' to end the conversation.")
    history = []  # keep prior turns so the model has conversation context
    while True:
        user_input = input("\nYou: ")
        if user_input.lower() == "exit":
            print("Goodbye!")
            break
        history.append({"role": "user", "content": user_input})
        try:
            gpt_response = openai_client.chat.completions.create(
                model=model,
                temperature=temp,
                top_p=0.95,
                frequency_penalty=0,
                presence_penalty=0,
                messages=history
            )
            ai_response = gpt_response.choices[0].message.content
            history.append({"role": "assistant", "content": ai_response})
            print(f"AI: {ai_response}")
        except Exception as e:
            history.pop()  # drop the unanswered user turn on failure
            print(f"Error with OpenAI API: {e}")
            continue

# Run the assistant
if __name__ == "__main__":
    load_dotenv()
    openai_api_key = os.getenv("OPENAI_API_KEY")
    openai_client = openai.OpenAI(api_key=openai_api_key)
    model = "ft:gpt-4o-mini-2024-07-18..."
    conversational_gpt(openai_client=openai_client, model=model, temp=0.3)
1 Like

Provide an Initial “System Prompt” or “Instruction”

If you are using a Large Language Model (LLM)–based chatbot (e.g., GPT-like models), you can set a system instruction or a “prompt” that tells the model to start the conversation. This system message is not visible to the user but guides the chatbot’s behavior.
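One common way to make the model produce the opening message itself is to pair that system instruction with a hidden kickoff turn; a sketch (the kickoff wording is a convention I'm assuming, not an API feature):

```python
def kickoff_messages(system_prompt: str) -> list:
    """Build the message list that makes the model speak first."""
    return [
        {"role": "system", "content": system_prompt},
        # This turn is never shown to the user; it only triggers the greeting.
        {"role": "user", "content": "Begin the conversation now."},
    ]

# Usage sketch (client and model name are placeholders):
#   reply = client.chat.completions.create(
#       model="gpt-4o-mini",
#       messages=kickoff_messages("Greet the user by name and offer three topics."),
#   ).choices[0].message.content
```

The returned reply is then displayed as the chatbot's first message, and the visible conversation continues from there.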

1 Like

@mitchell_d00, @arata, and @amit.nilajkar - I really appreciate your input. The chatbot is working a bit better, but it is not quite responding at a human level. Could you guide me on the fine-tuning and prompt-engineering side? I want to optimize those first, before adding mechanisms such as vector search. I believe I need more examples, such as question-and-answer data. I used the automatic hyperparameters for fine-tuning with the supervised approach. I probably have to try Direct Preference Optimization and different hyperparameters.

Areas of Improvement (I am thinking):

  • Correct data
  • Training and validation data size
  • Try Direct Preference Optimization (I need to research this approach more. If you have some insight, please share it with me.)
  • Hyperparameters
  • Prompting
  • Searching capability (via BeautifulSoup or vector search; I want to focus on this later; can develop it into a RAG system later if needed)

Could you please guide me?

def get_personalized_response(openai_client, user_input, your_name,
                              area_of_your_interesting_domain, temp):
    try:
        prompt_template = f"""
            You are a friendly and helpful chatbot designed to provide
            personalized information to users. You will receive the user's
            name and their area of interest. Use this information to initiate
            the conversation and offer assistance. Follow the example
            conversation flow provided below and ask relevant follow-up
            questions to guide the user's learning. If you don't have enough
            information to answer a question, respond with "I'm still
            learning about that. Can you tell me more?"

            User Name: {your_name}
            Area of Interest: {area_of_your_interesting_domain}

            Example Conversation:

            Chatbot: How are you, {your_name}? I can help you learn about
            different concepts in {area_of_your_interesting_domain}. Some
            topics we can explore include A, B, and C. Which one are you
            interested in learning more about today?

            User: I want to know more about A. Can you describe it?

            Chatbot: A is a fundamental concept in
            {area_of_your_interesting_domain}. It involves X, Y, and Z. Can
            you tell me what you already know about X?

            User: I know that X is related to P and Q. Is that correct?

            Chatbot: Yes, that's absolutely right! X is indeed related to P
            and Q. It's also connected to R. Can you tell me how X, P, and Q
            are related?

            User: I'm not sure about that.

            Chatbot: That's okay! X, P, and Q are related through the process
            of S. Do you know what S is?

            Instructions:

            1. Greet the user using their name.
            2. Offer assistance related to their area of interest.
            3. Provide a few initial topics (at least three) within their area of interest.
            4. Ask clarifying questions related to the user's chosen topic.
            5. If the user asks a question you cannot answer, respond with
            "I'm still learning about that. Can you tell me more?"
            6. Follow the pattern of the example conversation, asking
            follow-up questions to guide the user's understanding.
            7. Maintain a friendly and helpful tone throughout the conversation.
            """
        response = openai_client.chat.completions.create(
            model="ft:gpt-4o-2024-08-06:...",
            messages=[
                {"role": "system", "content": prompt_template},
                {"role": "user", "content": user_input}
            ],
            temperature=temp,
            top_p=0.95,
            frequency_penalty=0,
            presence_penalty=0,
        )
        return response.choices[0].message.content
    except Exception as e:
        print(f"Error with OpenAI GPT: {e}")
        return None

Have you set rules as to what "your name" means? Prompts are pretty easy:

1. Always refer to the user as "what you want to be called".
2. Always offer topics from the interest list at hello; be conversational and knowledgeable.
3. Interests list: quantum theory, chaos theory, etc.

Then you just add to it or fine-tune prompts; it is a logical flow.

I found that numbering prompts into instructions works well for what you seek…

Just copy-paste that into a chat and say hello; you will see what I mean…

Copy-paste this:

"1. Always refer to the user as "what you want to be called".
2. Always offer topics from the interest list at hello; be conversational and knowledgeable.
3. Interests list: quantum theory, chaos theory, etc."

Change it as you wish…

See, it forces the bot to end in questions, and it will direct the conversation.

This all converts into instructions for a custom GPT: copy-paste it either into the user instructions in settings, or build a custom GPT with this as its instructions:

"1. Always refer to the user as "what you want to be called".
2. Always offer topics from the interest list at hello; be conversational and knowledgeable.
3. Interests list: quantum theory, chaos theory, etc."

Please remember it is just a chatbot; it is not an expert-focused model.

To make it an expert you need to upload files or teach an LLM via the API.

Then, with actions, you can link the GPT to the expert API.

IMO, using GPT itself as the end source makes the process simpler, more dynamic, and flexible. This eliminates the need for extensive predefined conversation data and leverages GPT's generative abilities to manage context and flow.

And it costs less…

This may be helpful also…

Hi and welcome! I did something like what you want, but in JavaScript. I simply have a list of topics in an array and generate an onload event that shows a fake chatbot message in the chat screen by selecting a random topic from the array. Then I answer, and both messages go to the model and into the context-memory array. The model answers based on both messages, which effectively starts the chat context.
You may want to try this way…
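The same trick translated to Python, to match the rest of this thread (the topic strings are invented):

```python
import random

# Hypothetical topic list, mirroring the JavaScript array described above.
TOPICS = ["quantum computing", "chess openings", "sourdough baking"]

def onload_fake_message() -> dict:
    """Fabricate the bot's opening turn from a random topic, as if on page load."""
    topic = random.choice(TOPICS)
    return {"role": "assistant",
            "content": f"Hi! Have you heard anything new about {topic}?"}

# Both the fake opener and the user's later reply go into the context list
# that is sent to the model, so the model answers with that context.
context = [onload_fake_message()]
```

The model never generates the first message; it only sees it as prior context, which is what makes the conversation appear bot-initiated.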

1 Like

That's how old-fashioned chatbots work: connected to an example list of responses and redirects. Your method indeed would translate well to AI tech.

The tech goes back to the late '70s and early '80s, like Zork, an example-response system.

Try the below

import os
import time
import logging
from typing import Optional
from dataclasses import dataclass
from dotenv import load_dotenv
import openai

# Load environment variables from .env file
load_dotenv()

# Configure logging
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s [%(levelname)s] %(message)s',
    handlers=[
        logging.FileHandler("chatbot.log"),
        logging.StreamHandler()
    ]
)
logger = logging.getLogger(__name__)

# Initialize the OpenAI client
client = openai.OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

@dataclass
class ChatbotConfig:
    model: str = "gpt-4"
    temperature: float = 0.7
    top_p: float = 0.95
    frequency_penalty: float = 0.0
    presence_penalty: float = 0.0
    max_retries: int = 3
    retry_delay: int = 2  # seconds

@dataclass
class UserSession:
    user_name: str
    area_of_interest: str

def generate_prompt(user_session: UserSession) -> str:
    """
    Generates a dynamic prompt based on the user's name and area of interest.
    """
    prompt = f"""
You are a friendly and helpful chatbot designed to provide personalized information to users.

User Name: {user_session.user_name}
Area of Interest: {user_session.area_of_interest}

Example Conversation:

Chatbot: Hi {user_session.user_name}! I'm here to help you learn about {user_session.area_of_interest}. We can explore topics like A, B, and C. Which one would you like to dive into today?

User: I want to know more about A. Can you describe it?

Chatbot: Sure! A is a fundamental concept in {user_session.area_of_interest}. It involves X, Y, and Z. What do you already know about X?

User: I know that X is related to P and Q. Is that correct?

Chatbot: Yes, that's absolutely right! X is indeed related to P and Q. It's also connected to R. How do you think X, P, and Q interact?

User: I'm not sure about that.

Chatbot: No problem! X, P, and Q are related through the process of S. Do you have any thoughts on what S might involve?

Instructions:

1. Greet the user by name.
2. Offer assistance related to their area of interest.
3. Provide at least three initial topics within their area of interest.
4. Ask clarifying questions based on the user's chosen topic.
5. If unsure about a user's question, respond with "I'm still learning about that. Can you tell me more?"
6. Maintain a friendly and helpful tone throughout the conversation.
7. Follow the pattern of the example conversation, encouraging user engagement through follow-up questions.
"""
    return prompt

def get_chatbot_response(
    user_session: UserSession,
    user_input: str,
    config: ChatbotConfig = ChatbotConfig()
) -> Optional[str]:
    """
    Generates a response from the chatbot based on user input and session.
    Implements retry logic and error handling.
    """
    prompt = generate_prompt(user_session)
    messages = [
        {"role": "system", "content": prompt},
        {"role": "user", "content": user_input}
    ]

    for attempt in range(1, config.max_retries + 1):
        try:
            logger.info(f"Attempt {attempt}: Sending request to OpenAI API.")
            response = client.chat.completions.create(
                model=config.model,
                messages=messages,
                temperature=config.temperature,
                top_p=config.top_p,
                frequency_penalty=config.frequency_penalty,
                presence_penalty=config.presence_penalty
            )
            chatbot_message = response.choices[0].message.content.strip()
            logger.info("Received response from OpenAI API.")
            return chatbot_message

        except openai.RateLimitError as e:
            logger.warning(f"Rate limit exceeded: {e}. Retrying in {config.retry_delay} seconds...")
        except openai.OpenAIError as e:
            logger.error(f"OpenAI API error: {e}.")
            break  # Non-retriable error
        except Exception as e:
            logger.exception(f"Unexpected error: {e}.")
            break  # Non-retriable error

        # Wait before retrying
        time.sleep(config.retry_delay)

    logger.error("Failed to get a response from OpenAI API after multiple attempts.")
    return "Sorry, I'm experiencing some issues right now. Please try again later."

def main():
    """
    Example usage of the chatbot.
    """
    # Initialize user session
    user_session = UserSession(
        user_name="Alice",
        area_of_interest="Machine Learning"
    )

    # Example user inputs
    user_inputs = [
        "Hi there!",
        "I want to know more about supervised learning. Can you explain it?",
        "What is the difference between supervised and unsupervised learning?",
        "Can you give me an example of a supervised learning algorithm?",
        "I'm not sure how gradient descent works."
    ]

    # Initialize chatbot configuration
    config = ChatbotConfig(
        model="ft:gpt-4o-2024-08-06:...",  # Replace with your fine-tuned model ID
        temperature=0.7,
        top_p=0.95,
        frequency_penalty=0.0,
        presence_penalty=0.0,
        max_retries=3,
        retry_delay=2
    )

    # Iterate through user inputs and get chatbot responses
    for user_input in user_inputs:
        logger.info(f"User: {user_input}")
        response = get_chatbot_response(user_session, user_input, config)
        if response:
            logger.info(f"Chatbot: {response}")
            print(f"Chatbot: {response}")
        else:
            logger.error("No response received from chatbot.")
            print("Chatbot: Sorry, I'm unable to respond at the moment.")

if __name__ == "__main__":
    main()

Sure! Key components and functionalities:

  1. Environment Setup:

Libraries: Imports necessary libraries such as os, logging, dotenv, and openai.

API Key: Loads the OpenAI API key from a .env file using python-dotenv.

  2. Logging Configuration:

Sets up logging to output messages to both a log file (chatbot.log) and the console, aiding in monitoring and debugging.

  3. Data Classes:

ChatbotConfig: Defines configuration parameters like model type, temperature, top_p, penalties, and retry settings for API requests.

UserSession: Stores user-specific information such as the user's name and their area of interest, enabling personalized interactions.

  4. Prompt Generation (generate_prompt):

Creates a dynamic prompt tailored to the user's name and area of interest.

Includes an example conversation and clear instructions to guide the chatbot's responses, ensuring consistency and relevance.

  5. Chatbot Response Function (get_chatbot_response):

Message Construction: Combines the system prompt with the user's input into a message list.

API Interaction: Sends the messages to the OpenAI chat completions API using the specified model and hyperparameters.

Retry Logic: Implements retries for transient errors like rate limits, enhancing reliability.

Error Handling: Logs errors and provides fallback responses if API requests fail.

  6. Main Function (main):

User Session Initialization: Sets up a sample user session with a name and area of interest.

Example Inputs: Defines a list of sample user inputs to simulate a conversation.

Configuration: Specifies chatbot settings, including the fine-tuned model ID.

Conversation Loop: Iterates through user inputs, obtains chatbot responses, logs them, and prints them to the console.

  7. Future Integrations:

DPO & RAG: Notes on potential future enhancements like Direct Preference Optimization and Retrieval-Augmented Generation for improved response quality.

Scalability & UI: Suggestions for deploying the chatbot in production environments and developing user-friendly interfaces.

  8. Execution:

Runs the main function when the script is executed, demonstrating the chatbot's functionality with predefined interactions.

Summary:
The script sets up a personalized, robust chatbot by loading configurations, generating dynamic prompts based on user data, handling API interactions with retries and error logging, and demonstrating usage with sample conversations. It’s designed for easy customization and future enhancements to achieve more human-like and reliable interactions.
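On the DPO idea mentioned above: as far as I understand the preference fine-tuning format, each JSONL line pairs a preferred and a non-preferred assistant reply for the same input; please verify the exact schema against the official fine-tuning docs before relying on this sketch:

```python
import json

# Hypothetical DPO training line: same prompt, a preferred reply (asks a
# follow-up question) and a non-preferred one (dead-ends the conversation).
# Field names follow my reading of the docs -- verify before use.
dpo_example = {
    "input": {"messages": [{"role": "user", "content": "Tell me about A."}]},
    "preferred_output": [
        {"role": "assistant",
         "content": "A involves X, Y, and Z. What do you already know about X?"}
    ],
    "non_preferred_output": [
        {"role": "assistant", "content": "A involves X, Y, and Z."}
    ],
}
line = json.dumps(dpo_example)
```

Pairs like this teach the model to prefer replies that keep the conversation going, which is exactly the "chatbot leads" behavior being asked for.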

1 Like