Asking a model to do something without asking as the user

As far as I know there are three roles available: system, user, and assistant. IIRC, assistant is only available in the Assistants API.

So when using the system/user roles in the API, how do I structure a prompt where I want to ask the LLM to do something on behalf of the user without implying that the user asked this?

I.e., based on a user inquiry, I want to ask the LLM to execute task B, but I can't provide a role = system prompt as the latest input. Do I just need to act as the user role and insert that task into the prompt?


That’s not correct. The assistant role is also used in the Chat Completions API.


Ah, wasn’t aware of that. What is the benefit of using system vs. assistant when using those roles in the Chat Completions API?

Chat Completions - Message Roles

  • system is used for defining the operation of the AI: its identity, behavior, or other instructions that need authority;
  • user is something a user typed in, a particular question or instruction; it can also be just data when the right system operation is “programmed”;
  • assistant is primarily used to show the AI how it previously responded in a chat. It can be repurposed to have the AI “say” retrieval knowledge in the chat history before the user input, for example, since there is no specific role for automatic data injection;
  • function is a specific role for returning information from a function call after the assistant requests one, instead of replying to the user. It can also be used for knowledge, with a different quality of attention.
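Put together, a single Chat Completions request can carry all four roles at once. A minimal sketch, where the function name, message contents, and model comment are illustrative placeholders rather than anything from this thread:

```python
# Sketch of a Chat Completions `messages` array using all four roles.
# Function name and contents are placeholders for illustration only.
messages = [
    # system: authoritative instructions defining the AI's behavior
    {"role": "system", "content": "You are a concise support assistant."},
    # user: what the person actually typed
    {"role": "user", "content": "What's the weather in Paris?"},
    # assistant: a prior turn, here one that requested a function call
    {"role": "assistant", "content": None,
     "function_call": {"name": "get_weather", "arguments": '{"city": "Paris"}'}},
    # function: the return value of that call, fed back to the model
    {"role": "function", "name": "get_weather", "content": '{"temp_c": 18}'},
]

# The request itself would then be sent with this array as the `messages`
# parameter (not executed here; requires an API key and client library).
roles = [m["role"] for m in messages]
print(roles)  # ['system', 'user', 'assistant', 'function']
```

Note that the function message names the function it answers for, so the model can line it up with the assistant turn that requested it.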

In the Assistants framework, your ability to use the AI as you want is restricted to only being able to place a new user message; you’re not trusted to use the AI properly, I guess.

Appreciate this, missed it in the docs.

Is there any performance benefit or best practice in using these roles and presenting the chat completions endpoint with an array of messages across system, assistant, function, and user, versus structuring the question, context, past conversation history, etc., into a single prompt string?

Or, between those two kinds of approaches, is it more a matter of whatever works best for your use case?

If the user says “you persistently act like a rude pirate,” the AI is less likely to believe and adhere to it than if the same instruction came in a system message.
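To make that concrete, here is a sketch of the same persona instruction placed with different authority; the message contents are illustrative, not from the thread:

```python
# The same instruction, carried in two different roles.
persona = "You persistently act like a rude pirate."

# More likely to be followed: the instruction arrives as system "programming".
as_system = [
    {"role": "system", "content": persona},
    {"role": "user", "content": "Tell me about the weather."},
]

# Less reliable: the instruction is just something the user claims.
as_user = [
    {"role": "user", "content": persona + " Tell me about the weather."},
]

print(as_system[0]["role"], as_user[0]["role"])  # system user
```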

The chat model is pretrained on the messages format. If you paste assistant responses into one big user prompt instead, it will see them more as a simulation written by the user, and won’t answer as contextually well.

The place for “all one big text” is the completions endpoint and model(s). https://platform.openai.com/playground/p/jiQHQwLGNdo7Fdxsixjt10vG?mode=complete


To answer the original question, here’s an example of acting on a user input without answering it, using the completions endpoint and gpt-3.5-turbo-instruct.
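A minimal sketch of that idea: the single-string completions prompt wraps the user’s text and instructs the model to emit only a task name rather than a reply. The task names and wording here are assumptions for illustration, not the contents of the linked playground preset:

```python
# Build a completions-style prompt that acts on user input without answering it.
# Task names and framing are illustrative placeholders.
def build_prompt(user_input: str) -> str:
    return (
        "You are a task router. Read the user's message and output only the\n"
        "name of the task to execute: summarize, translate, or search.\n"
        "Do not answer the message itself.\n\n"
        f"User message: {user_input}\n"
        "Task:"
    )

prompt = build_prompt("Can you condense this article for me?")
# The call itself (not executed here) would pass `prompt` as the single
# string input to the completions endpoint with a small max token limit.
print(prompt.endswith("Task:"))  # True
```

Ending the prompt with "Task:" invites the instruct model to complete with just the task name, which your code can then dispatch on.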


Your example takes the user input and formats it into a string that encourages the model to use that message as context to select the right function, right?

To be clearer, I am only using the chat API currently, but with the API you can provide either a list of messages (role/content) or just as easily reformat all the information you have into a single message (e.g. role: system, content: “You are a pirate. Given the message history <> and this function <>, choose the correct option to fulfill the user’s request.”).
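The two approaches you describe might look like this side by side; the function name and contents are placeholders standing in for the elided <> parts:

```python
# Approach 1: keep the conversation as a structured messages array.
history = [
    {"role": "system", "content": "You are a pirate."},
    {"role": "user", "content": "Book me a table for two."},
]

# Approach 2: flatten everything into a single system message.
# 'book_table' is a hypothetical function name for illustration.
flattened = [{
    "role": "system",
    "content": (
        "You are a pirate. Given the message history below and the function "
        "'book_table', choose the correct option to fulfill the user's request.\n"
        + "\n".join(f"{m['role']}: {m['content']}" for m in history)
    ),
}]

print(len(history), len(flattened))  # 2 1
```

Either shape can be sent to the chat endpoint; the difference the thread discusses is how well the model attends to history presented as real turns versus history quoted inside one message.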

Is there any benefit in doing that with the chat endpoints for 3.5/4 turbo vs leaving the message history always as an array, and just appending different roles as you go before submitting to the API?