Playground vs App: how it works

Hi everybody!
I have a prompt which works in the Playground but not in my application.
Here is my code:

import OpenAI from 'openai'
import express from 'express'
import cors from 'cors'
import bodyParser from 'body-parser'
import { Client } from "@googlemaps/google-maps-services-js";

const app = express()

const PORT = process.env.PORT || 3001

const client = new Client({});

app.use(bodyParser.json());

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY // key redacted; load it from the environment
})
app.use(cors())

app.use(express.json())

let welcomeMessage = ""

// Express.js endpoint to get a welcome message
app.get('/api/welcome', async (req, res) => {
  try {
    const response = await openai.chat.completions.create({
      model: "gpt-4", // Use an appropriate model
      messages: [{ // Start of a conversation with no prior context
        role: "system",
        content: "Start by saying 'Hello! Welcome to the geography lesson. Do you want to start learning geography?'."}], //"Generate a short friendly and engaging welcome message for a geography discussion chatbot."
        max_tokens: 4096,
        temperature:1,
        top_p:1,
        frequency_penalty:0,
        presence_penalty:0
    });
    welcomeMessage = response.choices[0].message.content.trim(); // Store the welcome message
    res.json({ message: welcomeMessage });
    //res.json({ message: response.choices[0].message.content.trim() });
  } catch (error) {
    console.error('Error fetching welcome message:', error);
    res.status(500).json({ error: 'Failed to fetch welcome message' });
  }
});

  // Assuming you already have the '/api/welcome' endpoint

// Endpoint to handle user messages and respond
app.post('/api/conversation', async (req, res) => {
    const { userMessage } = req.body; // Get the user message from the request
    
    try {
      // Send the user message to OpenAI's API
      const response = await openai.chat.completions.create({
        model: "gpt-4", // Adjust according to the available models
        messages: [{role: "system", content: welcomeMessage}, // Use the stored welcome message as the first message
        {role: "user", content: userMessage},
        {role: "assistant", content: ""}, // Leave the assistant's response empty for now
        {role:"system",content:"Ask question related to geography"},
        {role:"system",content:"If user answers correctly, reply with 'Excellent job!!!'"},
        //{"role":"system","content":"You are a geography teacher.You ask random geography questions.When user answers correctly you give feedback by saying 'Excellent job!!!'.When student answers incorrectly you provide hints or the correct answer."},
        //{"role":"system","content":"Ask random geography questions"},
        // {"role":"system","content":"When student answers correctly give feedback by saying 'Excellent job!!!'."},
        // {"role":"system","content":"When student answers incorrectly provide hints or the correct answer."},
      {"role":"user","content":userMessage}], // Send the user's message for processing
        max_tokens: 4096,
        temperature:1,
        top_p:1,
        frequency_penalty:0,
        presence_penalty:0
      });
  
      // Return the generated response to the frontend
      res.json({ botResponse: response.choices[0].message.content });
    } catch (error) {
      console.error('Error in conversation handling:', error);
      res.status(500).json({ error: 'Failed to process the conversation' });
    }
  });

  app.post('/api/city-info', async (req, res) => {
    const { cityName } = req.body;
  
    try {
      const prompt = `Provide detailed information about ${cityName}.`;
      const response = await openai.chat.completions.create({
        model: "gpt-4",
        messages: [{"role":"system","content": prompt}],
        max_tokens: 150,
      });
  
      res.json({ info: response.choices[0].message.content });
    } catch (error) {
      console.error('Error fetching city information:', error);
      res.status(500).json({ error: 'Failed to fetch city information' });
    }
  });
  
  app.post('/api/location', async (req, res) => {
    const { location } = req.body;
    try {
        const response = await client.geocode({
            params: {
                address: location,
                key: process.env.GOOGLE_MAPS_API_KEY, // Google Maps API key from the environment
            },
        });
        const { lat, lng } = response.data.results[0].geometry.location;
        res.json({ lat, lng });
    } catch (error) {
        console.error(error);
        res.status(500).send('Error retrieving location data');
    }
});

app.listen(PORT, () => console.log(`Server started on http://localhost:${PORT}`))

and I get this


Hey @iofclip, we don’t see any error in the output image. Can you share more information on what error you’re getting in your application?

The conversation does not follow the system prompts; its flow is not the one I expect. In the Playground it flows as expected.
This is what I get in the Playground:

If the response works as expected in the OpenAI Playground but not in your application, the issue likely lies in the way your code handles or structures the API requests and responses.

The Playground handles the dialog history for you, but in code you need to pass the dialog history or previous context manually. I am no expert in JavaScript, but I think what you need to do is make sure that the `messages` array in your payload correctly represents the desired conversational history and follows the logical flow expected by the model.
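For context, here is a minimal sketch of what "passing the history manually" can look like in an Express app like yours. The `histories` map, `MAX_TURNS` limit, and function names are assumptions for illustration; the actual OpenAI call is left out so the history logic stands on its own:

```javascript
// Sketch: keep the full dialog per conversation and resend it on every turn,
// the same way the Playground resends everything when you press Submit.
const MAX_TURNS = 20; // assumed cap so the history does not grow without bound

const histories = new Map(); // conversationId -> array of {role, content}

// Build the messages payload: system prompt first, history in order,
// and the newest user question last.
function buildMessages(conversationId, userMessage) {
  const history = histories.get(conversationId) || [];
  return [
    { role: "system", content: "You are a geography teacher. Ask random geography questions and grade the answers." },
    ...history,
    { role: "user", content: userMessage },
  ];
}

// After the API replies, record both sides of the turn so the next
// request carries the full context.
function recordTurn(conversationId, userMessage, assistantReply) {
  const history = histories.get(conversationId) || [];
  history.push({ role: "user", content: userMessage });
  history.push({ role: "assistant", content: assistantReply });
  // Drop the oldest turns once the history gets too long.
  while (history.length > MAX_TURNS * 2) history.shift();
  histories.set(conversationId, history);
}
```

You would call `buildMessages` before each `openai.chat.completions.create` call and `recordTurn` after it returns.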

Try adding this before you send the API request (logging whatever array you pass as `messages`):

console.log("Sending to OpenAI:", JSON.stringify(messages));

The console displays what you see in the photo below.

The system message is the first, persistent message that programs the behavior.

After that, you pass back the assistant response (what the AI wrote) with the role "assistant", just as you see in the Playground.

Everything you see in the playground chat is sent every time you press submit.

Your last screenshot also shows two “Sending to OpenAI” logs.

I didn’t understand that well. Can you provide me with an example?

If I copy the discussion in the Playground with ‘View Code’ and paste it into my application, will it work?

Yes, pasting the code is a good way to reproduce the same output via the API, although to really chat you need to manage the conversation yourself: pass back the user questions and assistant answers in a continuing loop, keeping the chat history to a limited length.

Here is the ‘View Code’ output from a quick chat:

from openai import OpenAI
client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": "The assistant is a friendly interviewer who is curious about the user and asks short questions for the entertainment of an audience.",
        },
        {
            "role": "user",
            "content": "Hi, can I chat?",
        },
        {
            "role": "assistant",
            "content": "Of course! Welcome to our show! Let's start with something light. What's your favorite way to relax after a long day?",
        },
        {
            "role": "user",
            "content": "I like just sitting back and playing on the guitar, or seeing what's new on TV.",
        },
        {
            "role": "assistant",
            "content": "That sounds wonderful! Playing the guitar, huh? Can you tell us more about your musical journey? When did you start playing and what kind of music do you enjoy playing the most?",
        },
        {
            "role": "user",
            "content": "It all started about 15 years ago. I just needed an outlet other than piano keyboard for my ideas. And brain exercise!",
        },
    ],
    temperature=1,
    max_tokens=256,
    top_p=0.2,
)

You can’t end the messages with an assistant response; otherwise there is no new user input to process and answer.

Then you only need to add the extraction of `message.content` from the API return, or watch for and process tool calls if you have gone to the next level, specified tools, and written code for them.
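In the Node.js app from the original question, that extraction and pass-back step might look like the sketch below. The response shape mirrors the Chat Completions return value; `history` and the helper names are assumptions for illustration:

```javascript
// Sketch: pull the assistant text out of a chat.completions response.
// message.content can be null when the model returns a tool call instead.
function extractReply(response) {
  const content = response.choices[0].message.content;
  return content ? content.trim() : null;
}

// Append both sides of the turn to the running history, so the next
// request can resend the whole conversation.
function appendReply(history, userMessage, response) {
  const reply = extractReply(response);
  history.push({ role: "user", content: userMessage });
  if (reply !== null) {
    history.push({ role: "assistant", content: reply });
  }
  return reply;
}
```

You would call `appendReply` right after `await openai.chat.completions.create(...)` resolves, then send `reply` to the frontend.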

What do you mean by this? Do you mean the assistant’s message can be blank?

I mean to say that the messages must go system, user, assistant, user, with the most recent question last.

If you copy from the Playground and send system, user, assistant, user, assistant, you’d essentially be asking the AI to either write more or repeat what it wrote; there’s no clear purpose to sending that assistant message last.
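One way to catch that ordering mistake early is a small guard before the API call. This is only a sketch; the function name is made up:

```javascript
// Sketch: verify the payload ends with a user message before calling the API.
// Failing fast here is easier to debug than a confusing model reply.
function assertSendable(messages) {
  if (messages.length === 0) {
    throw new Error("messages is empty");
  }
  const last = messages[messages.length - 1];
  if (last.role !== "user") {
    throw new Error(`last message should be from the user, got "${last.role}"`);
  }
}
```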


Thanks a lot, I will try it, and I will come back if needed.


The console shows the same thing, and it jumps to the last question. Maybe something is wrong in my application, right?

PLEASE DELETE MY COMMENT IF NOT ALLOWED.

Here’s a blog I wrote about multi-turn chat with OpenAI, and I hope it helps you clear up the concept.
