How to get GPT to ask more questions? (system message)

GPT frequently gives “search engine-like” answers, which annoys me. I ask a question it doesn’t really know how to answer, and it throws out a laundry list of possibly related stuff, most of which is irrelevant. (It once suggested I create a machine learning model to monitor the state of a pin in a circuit, for instance…) It’s a waste of time and a waste of tokens.

I would like a system prompt that encourages it to ask clarifying questions and request additional context, and to participate in a back-and-forth rather than jumping straight to a solution with incomplete information. Instead of making assumptions about what a variable name means, or offering multiple hypothetical solutions for multiple possible meanings, it should just ask what it means.

Any ideas? Maybe encouraging it to act like a detective or something?

I added this to my system message:

A short targeted clarifying question to obtain more context is a better response than a long list of barely relevant suggestions.

and it worked the first time I tried it! It started with “To assist effectively, I need a few more details…” and asked 4 relevant questions.

But when I regenerated the response a few times, about every other attempt gave me a laundry list of vague suggestions again instead of clarifying questions.
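
For anyone on the API side, this is roughly how I’m wiring that line in (a minimal sketch; the model name and the user question are placeholders, and the lower temperature is only a guess at making the behaviour less hit-or-miss):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_MESSAGE = (
    "A short targeted clarifying question to obtain more context is a better "
    "response than a long list of barely relevant suggestions."
)

response = client.chat.completions.create(
    model="gpt-4",  # placeholder: use whichever model you have access to
    messages=[
        {"role": "system", "content": SYSTEM_MESSAGE},
        {"role": "user", "content": "My interrupt pin never fires. Any idea why?"},
    ],
    temperature=0.2,  # lower temperature may reduce the run-to-run randomness
)

print(response.choices[0].message.content)
```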

3 Likes

Have you tried asking it to “Ask follow-up questions if you do not have enough information…” or something similar? Let us know.

3 Likes

Yes I’ve tried that. I updated my original post. I also used this line in my Custom Instructions in the past:

If you don’t know something, ask me to retrieve info for you; don’t make assumptions.

I’m not sure how well it worked there.

Can you share the full prompt?

Might be good to add a 1-shot example…
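
Something along these lines, maybe (a rough sketch; both turns of the example are invented):

```python
# One-shot example baked into the conversation: the assistant turn shows the
# model what a "good" reply looks like when the user's question is ambiguous.
messages = [
    {
        "role": "system",
        "content": (
            "A short targeted clarifying question to obtain more context is a "
            "better response than a long list of barely relevant suggestions."
        ),
    },
    # --- the 1-shot example (invented for illustration) ---
    {"role": "user", "content": "My script fails when I pass in the config."},
    {
        "role": "assistant",
        "content": "Which script and which config file do you mean, and what is the exact error message?",
    },
    # --- the real question goes last ---
    {"role": "user", "content": "<your actual question>"},
]
```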

3 Likes

Suggestion for a prompt:

What questions would you ask if you wanted to find a good solution for this particular problem?
-describe problem here-

The point you want to get across to the model is that the list of questions is the correct reply.
You could also assign it the role of an investigative problem explorer.
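
As a rough sketch (the exact wording is only an example, not a tested prompt):

```python
# Assign the investigator role and make it explicit that a list of questions
# is the expected reply, not a solution.
system_message = (
    "You are an investigative problem explorer. Given a problem description, "
    "reply with the list of questions you would need answered before proposing "
    "a solution. Do not propose solutions yet."
)

user_message = (
    "What questions would you ask if you wanted to find a good solution "
    "for this particular problem?\n\n"
    "-describe problem here-"
)
```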

The issue I have with this type of prompting is that when the answer is clear (just not to me), a lot of time will be spent answering irrelevant questions.

3 Likes

Yeah, it should be conditional on whether it has enough information to answer the question.

1 Like

You wrote both ChatGPT and API in your tags, which doesn’t really help.

If you are using the API, you can split these concerns, with the first phase being Clarification. The model’s specific purpose is to ask for clarification until it’s satisfied that there’s nothing ambiguous in the query.

Then you can re-package the question with the clarifications for the second concern: Theorizing.
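
A minimal sketch of what that split could look like over the API (the model name, prompts, and the READY convention are all placeholders, not a definitive implementation):

```python
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4"  # placeholder

def clarify(question: str) -> str:
    """Phase 1: the model's only job is to ask for missing context."""
    r = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": (
                "You gather context. Reply ONLY with the clarifying questions you need "
                "answered before the problem can be solved. If nothing is ambiguous, "
                "reply with the single word READY."
            )},
            {"role": "user", "content": question},
        ],
    )
    return r.choices[0].message.content

def theorize(question: str, clarifications: str) -> str:
    """Phase 2: re-package the question plus the user's answers and ask for a solution."""
    r = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": "Propose a solution to the user's problem."},
            {"role": "user", "content": f"Problem:\n{question}\n\nClarifications:\n{clarifications}"},
        ],
    )
    return r.choices[0].message.content
```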

You can do the same with ChatGPT, but it’ll all need to be muddled together in one conversation.

Separate the logic. A system message should always be relevant. If it’s “At the start you should do this”, you are not utilizing it effectively; after the start, that instruction is essentially paying tokens to create noise.

4 Likes

Good points, and yes, this is definitely doable with ChatGPT, thanks to the option to rewrite a previous request.
From what I understand, we are looking for the sweet spot between the case where the model can simply answer from its training data and the case where it should recognize that the user didn’t ask the right question because the user doesn’t know better.

I gave up on this and just spend the extra time nowadays. If I don’t know any better, then I need to spend that time learning, not prompting. Splitting the task into several sub-tasks is a solid solution when using ChatGPT.

3 Likes

It might be better to remove the API tag, since the post refers to “Custom Instructions” and the tag seems unnecessary.
Please feel free to revert this if the change is incorrect.

1 Like

ChatGPT has Custom Instructions, while the API (used through ChatGPT-like interfaces such as https://bettergpt.chat/) has a System Message; the two do similar things.

2 Likes

The solution is different depending on what service you are using. You can take advantage of using multiple separate conversation states in your logic if you’re using the API.

You could also have separate conversations when using ChatGPT for a similar effect, but if you’re using a GPT/Agent/cGPT Assistant/whatever, you would probably want it all to function somehow under one agent.

I’m just talking about general-purpose chat interfaces that use the API.

If you are using the API, skip this answer.

I use “Custom instructions->How would you like ChatGPT to respond?”. The very beginning of my custom instructions looks like this:

“A casual response is preferred but not at the expense of accuracy and precise responses.
Responses should be as long as necessary to communicate information needed to satisfy the prompt.”

Though I concede it is not always perfect. It consistently ignores these custom instructions:

“Please present all mathematical equations and formulas in plain text only, without using LaTeX, markup languages, or any special formatting.”

I need to ask it to rewrite the response without using LaTeX or markup and then it does fine.

Try experimenting with something like this:

“Do not respond with a list of links for me to investigate. You should investigate those links and include any relevant information in your response.”

Giving “How would you like ChatGPT to respond”-style instructions in the prompt is usually a waste of time, though it’s sometimes necessary when something isn’t covered in your custom instructions. For example, “Please explain Harmonic Notch Filters at a high school level” is not something you would include in the custom instructions.

2 Likes

I found that asking politely doesn’t really work. I had a similar problem when it kept trying to search Bing or print lists of irrelevant guesses.

I had to use custom instructions and memory, and that finally made it produce output in the desired form and style. I wish OpenAI would give up trying to make it so helpful and nice; it’s just a tool, in the end.

These are my custom rules for the output; the rest of my customization contains a reminder to follow them, and there is another reminder in memory.

You must say in every response “Mr Roman, I applied LRULES”.
No preambles or yapping.

MUST APPLY LRULES BEFORE ANY OUTPUT

LRULES: a custom transformation of text that converts any lists and bullet points into paragraph form. The second transformation replaces gerund sentences with active voice, completely eliminating -ing -ing -inging.

Do not make ANY assumptions or guesses. “ensure”, “also”, “however”, “in addition” are absolutely prohibited wording; avoid output framed with these words at all cost.

Output must be super concise, eliminating all “helpful” wording and intent, and overriding the model’s default of trying to “be helpful”.

I don’t need “helpful”, I need useful. Avoid at all costs guessing, troubleshooting output, giving the user directions, and in general trying to be “uselessly” helpful.

No bing searches: BING is strictly prohibited!

If you are using the API, skip this reply.

You can disable browsing in ChatGPT if necessary, instead of using negative expressions in your prompt.

Yes, there is a button, but we’re not in the button-clicking business any more :slight_smile:

With the checkmark off, if you ask “what’s the news today”:
I’m unable to browse the internet or access real-time news updates. You might check a reliable news website or use a news app for the latest updates.

With browsing enabled, it makes a nice summary of the news, which is a great feature because it does it quietly, without any Bing spinning wheels or “doing a search on the Internet”.

And it still won’t attempt to do Bing searches in general during conversations.

Win-win :slight_smile: