Implementing User Input in an Insurance Quoting Chatbot: Best Practices and Examples

I am developing a chatbot to assist users in obtaining insurance quotes. I would like to know how I can collect the necessary variables through chat with the user and then use those variables to generate an accurate quote. Could you provide advice or examples on how to implement this functionality in my insurance quoting chatbot? I am using Rust, but examples in any language would be helpful. I appreciate any help from the community.

pub async fn create_completion(
    texto_usuario: String,
    info_doc: String,
    nombre_user: String,
    parametro: Option<String>,
    openia_var: OpenAI,
    contexto: Vec<SContextDB>,
    db: &rocket::State<SMongoDB>,
) -> Result<String, Box<dyn std::error::Error>> {
    let mut vec_data = Vec::new();
    // System prompt (translated from Spanish): tells the model to answer as "Kopernia"
    // and to wrap every reply in the JSON shape
    // {"data":[{"respuesta","requiresUserData","variable_solicitar"}]}.
    let json_struc = "You are an AI called Kopernia and your job is to make quotes for bicycle insurance policies. Convert the user's input into JSON with the following format: '{'data': [{'respuesta': 'string', 'requiresUserData': 'boolean', 'variable_solicitar': 'string'}]}'. Provide only JSON-formatted output. Example: if a user asks you a question, your reply should be '{'data': [{'respuesta': 'your answer as Kopernia goes here.', 'requiresUserData': 'true if your answer asks the user for the brand, model, year or cost, otherwise false', 'variable_solicitar': 'the piece of information you are requesting goes here, e.g. marca if you ask for the brand, and likewise modelo, año and costo'}]}'. If the user's input refers in any way to quoting or taking out a policy for the bike, ask one by one for 'marca, modelo, año, costo'; with that data a quote can be made. Information about the bicycle policy coverage: 'covers theft and third-party damage for more than 10,000 dollars; in case of theft the bike is replaced with a new one.' Every reply must be a JSON.";

    // System message with the instructions above.
    vec_data.push(
        ChatCompletionMessageRequestBuilder::default()
            .role(Role::System)
            .content(json_struc)
            .build()?,
    );

    // Replay the stored conversation so the model keeps context.
    for element in contexto.iter() {
        vec_data.push(
            ChatCompletionMessageRequestBuilder::default()
                .role(get_tipo_role(element.role.to_string()))
                .content(element.content.to_string())
                .build()?,
        );
    }

    // Finally, append the user's new message.
    vec_data.push(
        ChatCompletionMessageRequestBuilder::default()
            .role(Role::User)
            .content(texto_usuario)
            .build()?,
    );

    let req = CreateChatRequestBuilder::default()
        .model("gpt-3.5-turbo")
        .messages(vec_data)
        .temperature(0.0) // deterministic output helps keep the JSON format stable
        .n(1)
        .max_tokens(2000u32)
        .presence_penalty(-2.0f32) // note: negative values encourage repeating earlier tokens
        .frequency_penalty(0.0f32)
        .stream(false)
        .build()?;

    let res = openia_var.chat().create(&req).await?;
    println!("res {:?}", res.choices[0].message.content);

    // Persist the assistant's reply in the conversation history.
    if let Err(e) = insert_conversacion(db, &Json(SContextDB {
        id_user: 0,
        role: "Assistant".to_string(),
        content: res.choices[0].message.content.to_string(),
    })) {
        eprintln!("Error saving the document: {:?}", e);
    }
    Ok(res.choices[0].message.content.to_string())
}

You’re writing your own chatbot UI, correct? So you can just ask the user with your own code. Is there a reason you’re looking to use an LLM like GPT for this task?

Yes, I am developing my own chatbot interface. It interacts with the user, but I was wondering whether GPT could do this part for me. Sometimes GPT requests the variables and understands what the user means, but other times it doesn’t. I need it to act as an insurance assistant: it should understand when the user asks for a quote for a bicycle policy and then ask for the following variables one by one: brand, model, year, and cost of the bicycle. Sometimes it does this correctly, and other times it doesn’t. Additionally, I ask GPT to wrap each response in JSON following a specific format, but that is also inconsistent: sometimes it produces the JSON and other times it doesn’t.
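Since the model only sometimes returns the requested JSON, one option is to validate the reply before trusting it, and re-prompt or fall back to a scripted question when it doesn’t match. Here is a minimal standard-library-only sketch of that idea; all function names are hypothetical, and a real implementation would more likely parse with a proper JSON library such as serde_json:

```rust
// Hypothetical validator for the reply format requested in the system prompt:
// {"data":[{"respuesta":"...","requiresUserData":true,"variable_solicitar":"marca"}]}
// This std-only sketch just checks that the required keys are present
// before the reply is trusted.
fn looks_like_expected_json(reply: &str) -> bool {
    let trimmed = reply.trim();
    trimmed.starts_with('{')
        && trimmed.ends_with('}')
        && ["data", "respuesta", "requiresUserData", "variable_solicitar"]
            .iter()
            .all(|&key| trimmed.contains(key))
}

// Extract the value of a string field, e.g. field_value(reply, "variable_solicitar").
// Assumes well-formed, double-quoted JSON with no escaped quotes inside values.
fn field_value<'a>(reply: &'a str, field: &str) -> Option<&'a str> {
    let needle = format!("\"{field}\"");
    let rest = &reply[reply.find(&needle)? + needle.len()..];
    let rest = &rest[rest.find('"')? + 1..];
    let end = rest.find('"')?;
    Some(&rest[..end])
}

fn main() {
    let good = r#"{"data":[{"respuesta":"What brand is the bike?","requiresUserData":true,"variable_solicitar":"marca"}]}"#;
    let bad = "Sure! What brand is your bicycle?";

    assert!(looks_like_expected_json(good));
    assert!(!looks_like_expected_json(bad)); // plain text: re-prompt or fall back
    assert_eq!(field_value(good, "variable_solicitar"), Some("marca"));
    println!("validation sketch OK");
}
```

With a check like this, a non-conforming reply can trigger one retry (appending a reminder message such as “respond only in the JSON format”) before your own code takes over the question.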

There’s no built-in, OpenAI-provided functionality for this. The best approach is likely to add your own logic on top of the GPT calls to determine when the user is asking about a quote (or use a model to classify whether that is what they are asking), and then ask the specific questions with your own code.

GPT is not a universal chatbot API; it’s a language model. You can add your own logic and prompts on top of it to accomplish what you need.
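The “your own logic on top” suggestion can be sketched as a small slot-filling state machine: once quote intent is detected (a crude keyword check here, though a classifier or an LLM call could do this), deterministic code asks for brand, model, year, and cost one at a time, so the question order never depends on the LLM. All names below are hypothetical:

```rust
// Hypothetical slot-filling loop: deterministic code asks for the four
// variables one by one; the LLM is only needed for free-form chat, not
// for driving the questionnaire.
#[derive(Debug, Default)]
struct QuoteSlots {
    marca: Option<String>,
    modelo: Option<String>,
    anio: Option<u32>,
    costo: Option<f64>,
}

impl QuoteSlots {
    /// The next question to ask, or None once every slot is filled.
    fn next_question(&self) -> Option<&'static str> {
        if self.marca.is_none() {
            Some("What is the brand of the bicycle?")
        } else if self.modelo.is_none() {
            Some("What model is it?")
        } else if self.anio.is_none() {
            Some("What year is it?")
        } else if self.costo.is_none() {
            Some("What did the bicycle cost?")
        } else {
            None
        }
    }

    /// Store the user's answer in the first empty slot.
    fn fill(&mut self, answer: &str) {
        if self.marca.is_none() {
            self.marca = Some(answer.to_string());
        } else if self.modelo.is_none() {
            self.modelo = Some(answer.to_string());
        } else if self.anio.is_none() {
            self.anio = answer.trim().parse().ok(); // stays None on bad input
        } else if self.costo.is_none() {
            self.costo = answer.trim().parse().ok();
        }
    }
}

/// Crude intent check; in practice this could be a classifier or an LLM call.
fn wants_quote(user_input: &str) -> bool {
    let lower = user_input.to_lowercase();
    ["quote", "cotizar", "policy", "poliza"].iter().any(|&kw| lower.contains(kw))
}

fn main() {
    assert!(wants_quote("I'd like a quote for my bike"));

    let mut slots = QuoteSlots::default();
    for answer in ["Trek", "Marlin 7", "2022", "1100.0"] {
        // In a real bot, next_question() would be sent to the user first.
        println!("bot: {}", slots.next_question().unwrap());
        slots.fill(answer);
    }
    assert!(slots.next_question().is_none()); // all data collected: quote now
    println!("quoting {:?}", slots);
}
```

Because the questions come from your own code, the model’s occasional failure to follow the prompt no longer breaks the quoting flow; GPT can still phrase the small talk around the questionnaire if you want a conversational feel.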

Thank you! That’s exactly how I resolved it.

Btw. @mariacastillop15, this thread looks related to Rust.
Maybe you’d be interested in the effort to bring Rustaceans together and organize posts with a better set of Rust-related tags?