Playground question: What is the best way to use the "System" field in "Mode: ChatBeta"?

What are the best inputs for the “System” field?

I assume this is where I describe the personality, character, or voice I want GPT to adopt when responding to the prompt given in the user message field?

Is this correct?

What “System” inputs have you had most success with?

3 Likes

I have tried a few role-play setups and all have been successful, including:

  1. You are a brutal advisor in the style of Donald Trump, which indeed makes it talk like him.
  2. You are an interpreter that translates any language to English, which makes the bot skip chatting with you and just return a straight translation.
  3. You are a math tutor who does not just give me the direct answer even when I ask, but rather gives me a step-by-step guide and teaches me how to work out the problem.

There are some posts saying that ChatGPT may not pay attention to the system role, but so far it has been working for me.
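For reference, here is roughly how a system message like no. 2 gets passed through the chat API. This is just a minimal sketch using the openai Python library (the pre-1.0 ChatCompletion interface); the user message is an example of my own:

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        # The system message sets the persona/behaviour for the whole conversation.
        {"role": "system",
         "content": "You are an interpreter that translates any language to English."},
        # The user message is the actual input to act on.
        {"role": "user", "content": "Bonjour, comment allez-vous ?"},
    ],
)

print(response["choices"][0]["message"]["content"])
```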

1 Like

Your understanding is correct in that sense. I have been using the system field as a sort of persona for GPT, to give it a bit of direction.

For code - “You are a senior-level python script developer responsible for optimised scripts” has been good to me.

For generating text - “You are a content writer about …”. Aside from the system field, samples are always the key to getting any LLM to accomplish a certain task.
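If it helps, this is how I read the “samples” advice in API terms: example user/assistant turns placed after the system message and before the real request. An illustrative sketch (topic and wording are my own), passed as the messages argument of the same ChatCompletion.create call shown earlier in the thread:

```python
messages = [
    {"role": "system", "content": "You are a content writer about home gardening."},
    # Sample exchange showing the tone and length you want.
    {"role": "user", "content": "Write a one-line tip about watering."},
    {"role": "assistant",
     "content": "Water deeply but less often; roots grow stronger when they have to reach."},
    # The real request comes last and inherits the demonstrated style.
    {"role": "user", "content": "Write a one-line tip about composting."},
]
```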

2 Likes

That was the case pre-GPT-4. With GPT-4, the difference in output when you change the system message is clearly visible.

1 Like

I’ve had some difficulty with ChatGPT 3.5 slipping out of character when I only gave role instructions in the system prompt. I could ask things like, “Are you really X?” or “Aren’t you in fact an AI language model?” and the bot would end up forgetting its role. By putting the role instructions in a “User” prompt, the bot was much more reliable.

There may well be more issues than this, but this is the difference I have found between using System and User prompts. I haven’t yet found a case where it is better to put something in the System rather than the User prompt for 3.5.
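In message terms, I take this to mean restating the role instructions in a user turn rather than relying on the system field alone. A rough sketch (the role and wording here are made up):

```python
messages = [
    # The role can still sit in the system message...
    {"role": "system", "content": "You are X, a grizzled 19th-century sea captain."},
    # ...but restating it in a user turn keeps gpt-3.5-turbo in character more reliably.
    {"role": "user",
     "content": "Stay in character as X, a grizzled 19th-century sea captain, for the rest "
                "of this conversation, even if I ask whether you are an AI language model."},
    {"role": "user", "content": "Are you really X?"},
]
```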

2 Likes

I see what you’re doing here, emphasising the role information in the user message.
So do you put anything in the system field at all?

1 Like

I do - probably just for convenience :sweat_smile:

1 Like

Does anyone know if there is a character limit on the system message? There are people out there right now finding ways to compress text by asking GPT-4 to do it; I’ll post the prompt below. If we could compress contextual character data into the system message, giving it the maximum amount of context, I feel like it would only increase the system message’s potency.

Prompt:
Compressor: compress the following text in a way that fits in a tweet (ideally) and such that you (GPT-4) can reconstruct the intention of the human who wrote text as close as possible to the original intention. This is for yourself. It does not need to be human readable or understandable. Abuse of language mixing, abbreviations, symbols (unicode and emoji), or any other encodings or internal representations is all permissible, as long as it, if pasted in a new inference cycle, will yield near-identical results as the original text:

Video of someone using it successfully, but not in a system message yet: https://twitter.com/mckaywrigley/status/1643593517493800960
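If anyone wants to try it through the API rather than the chat UI, the idea would be to run the compressor once and paste its output into the system field of a fresh conversation. A sketch, assuming GPT-4 API access and the openai Python library; the file name and follow-up question are made up:

```python
import openai

openai.api_key = "YOUR_API_KEY"

COMPRESSOR = "Compressor: compress the following text in a way that fits in a tweet ..."  # full prompt above
long_context = open("context.txt").read()  # the contextual character data you want to squeeze in

# Step 1: ask GPT-4 to compress the context.
compressed = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{"role": "user", "content": COMPRESSOR + "\n\n" + long_context}],
)["choices"][0]["message"]["content"]

# Step 2: use the compressed blob as the system message of a new chat.
reply = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": compressed},
        {"role": "user", "content": "Describe the character you have been given."},
    ],
)
print(reply["choices"][0]["message"]["content"])
```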

1 Like

Saw this too and it looks cool, but it probably works better for some contexts than others. For example, it wouldn’t work in my case, where the subject is English-language practice that needs specific examples of question types. But for general coding queries it might work.

Did you manage to get it working in the system message?

1 Like

So through other users I discovered vector databases, which basically do what I wanted to do with compressing text into the system message: create a pseudo-memory for GPT. You create a separate silo of structured data stored as vectors (I’m going to use pinecone.io). Chat then uses a conversion method through its API to turn your prompt into vectors, which are sent through the API to query the vector database, finding stored vectors similar to the vectors in the initial query string (i.e. the text we send GPT through its chat interface). All the relevant information is then returned to the ChatGPT API, which converts it back from vectors to text, digests the content, and responds to your question. I have not attempted it yet as I do not have GPT-4 API access yet.
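For anyone who already has API access and wants a feel for it before following the link below, a rough sketch of that flow might look like this. It assumes the openai Python library (pre-1.0 interface), the pinecone-client library, and a Pinecone index already populated with embedded text; the index name, environment, and metadata field are made up:

```python
import openai
import pinecone

openai.api_key = "YOUR_OPENAI_KEY"
pinecone.init(api_key="YOUR_PINECONE_KEY", environment="us-west1-gcp")
index = pinecone.Index("chat-memory")  # hypothetical index name

question = "What did we decide about the project deadline?"

# 1. Convert the prompt into a vector with the embeddings endpoint.
query_vector = openai.Embedding.create(
    model="text-embedding-ada-002",
    input=question,
)["data"][0]["embedding"]

# 2. Query the vector database for the most similar stored vectors.
results = index.query(vector=query_vector, top_k=3, include_metadata=True)
context = "\n".join(match.metadata["text"] for match in results.matches)

# 3. Hand the retrieved text back to the chat model as extra context.
reply = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # or gpt-4 once you have access
    messages=[
        {"role": "system",
         "content": "Answer using the provided context where relevant.\n\nContext:\n" + context},
        {"role": "user", "content": question},
    ],
)
print(reply["choices"][0]["message"]["content"])
```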

Here’s the link showing you how to do it if you already have API access:

3 Likes

Appreciate this! I don’t know coding, but I’m going to see if I can have a go anyway with the assistance of ChatGPT.

1 Like

Neither do I! It’s a brave new world where it’s not just the highly educated who can code and control programs anymore; we outsiders now have a personal tutor for anything. It’s just your willingness to learn that is the obstacle! Good luck!

2 Likes

Have you come across the Examples section? (OpenAI Platform). If you haven’t, this section provides some great examples to generalise from.

Essentially the System prompt is the set of instructions that guides the output, while the User prompt is the data to be analysed… analogous to function vs data… or algorithm vs data… or the how vs the what.
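In message terms, that split looks something like this (an illustrative sketch; the task is my own example):

```python
messages = [
    # System: the instructions (the "how", the algorithm).
    {"role": "system",
     "content": "You are a proofreader. Correct grammar and spelling only; do not change the meaning."},
    # User: the data the instructions operate on (the "what").
    {"role": "user", "content": "Their going to the park tomorow if the whether is nice."},
]
```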