Chat Completion API - System prompt doesn't work all the time

const systemPrompt = `You are ShopXY Bot, a helpful and friendly assistant for an ecommerce website.
  For every customer question, produce JSON output with the following keys: action, orderNumber, response. Here are the specifics:
  - For every question about a product or product type, produce JSON '{"action": "search_product", "orderNumber": 0, "response": <response>}'.
  - For every question about order or delivery, capture order number and then produce JSON '{"action": "track_order", "orderNumber": <orderNumber>, "response": <response>}'.
  - If the customer has received damaged items, capture order number and then produce JSON '{"action": "upload_proof", "orderNumber": <orderNumber>, "response": <response>}'.
  - If the customer wants to cancel the order, capture order number and then produce JSON '{"action": "cancel_order", "orderNumber": <orderNumber>, "response": <response>}'.
  If you don't know the answer, say 'Please contact support!'`;

this.conversation.conversationHistoryWithContextInfo.push({ role: 'system', content: systemPrompt });

const chatRequest: CreateChatCompletionRequest = {
      model: 'gpt-3.5-turbo',
      messages: this.conversation.conversationHistoryWithContextInfo as ChatCompletionRequestMessage[],      
      temperature: 0,
      max_tokens: 512
    }; 

const response = await this.openai.createChatCompletion(chatRequest).then(result => result.data);

In the above code, I set the context for the chat completion API with the system message. Sometimes it works as expected and produces JSON; other times it doesn't, and it replies with some other text instead. I also set the temperature to 0.

What am I missing here? Should I add anything to the prompt to make it work consistently all the time?

Often the system prompt acts more like a "soft" suggestion than a hard instruction, while the user prompt carries more weight. I use the system message to guide answers and tone rather than something strict like output format. I would keep just this part in system: "You are ShopXY Bot, a helpful and friendly assistant for an ecommerce website."

And then the rest in a user prompt.
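Roughly what I mean, using the same conversation array from your snippet (a minimal sketch; the exact wording of the prompts is up to you):

// Persona stays in the system message.
const personaPrompt = 'You are ShopXY Bot, a helpful and friendly assistant for an ecommerce website.';

// The format rules move into a user message.
const formatPrompt = `For every customer question, produce JSON output with the keys action, orderNumber, response:
  - Product questions: {"action": "search_product", "orderNumber": 0, "response": <response>}
  - Order or delivery questions: capture the order number, then {"action": "track_order", "orderNumber": <orderNumber>, "response": <response>}
  - Damaged items: capture the order number, then {"action": "upload_proof", "orderNumber": <orderNumber>, "response": <response>}
  - Cancellations: capture the order number, then {"action": "cancel_order", "orderNumber": <orderNumber>, "response": <response>}
  If you don't know the answer, say 'Please contact support!'`;

this.conversation.conversationHistoryWithContextInfo.push({ role: 'system', content: personaPrompt });
this.conversation.conversationHistoryWithContextInfo.push({ role: 'user', content: formatPrompt });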

It might be a bit vulnerable to prompt injections that way, though.

Thank you for the suggestion. I tried that, but it didn't work. Maybe I have to rephrase the prompt in a better way.

Providing some examples can help, as can some repetition. Skip to the middle of this article I wrote—to the “task-specific completions” section—for some examples.

Actually, you might want to try the full approach laid out in the article. Separate the classification from the completion, and then provide a single very rigid JSON template for each task’s completion.
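If you do go that route, the shape is roughly this (just a sketch of the idea, not the code from the article; customerMessage and the step-2 templates are placeholders):

// Step 1: a small classification call that returns only a label.
// customerMessage is a placeholder for the incoming question.
const classificationRequest: CreateChatCompletionRequest = {
  model: 'gpt-3.5-turbo',
  temperature: 0,
  max_tokens: 8,
  messages: [{
    role: 'user',
    content: `Classify the customer message into exactly one of:
      search_product, track_order, upload_proof, cancel_order, unknown.
      Reply with the label only.
      Message: "${customerMessage}"`
  }]
};
const classification = await this.openai.createChatCompletion(classificationRequest);
const action = classification.data.choices[0].message?.content?.trim();

// Step 2: pick the single rigid JSON template for that action and make a second
// completion call that only fills in the placeholders (orderNumber, response).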

If you don’t want to refactor your approach this much, I’d start by adding a few extra user/assistant exchanges to your API call with some examples, like “Hey, my product arrived in pieces and I’m not happy about it!” along with the expected result. It can also help to be explicit about what the response should be if the user doesn’t provide the necessary details for filling in placeholders.

Also, as @smuzani mentioned, at present, the system prompt is probably not the best place for these instructions.


I read your article. I think that is an interesting approach when you cannot fit every scenario into a single prompt, given the token limit, and you need the model to respond consistently. Yes, I'll try that, thank you.
