API delivers the prompt back as the answer

Hi everybody

We’re using the GPT-4 API to deliver prompts into the system. Temperature is set to 0.6 and the role is set to “assistant”.

Sometimes when we send out a prompt, the answer is exactly the prompt back. Does anyone know why this could happen?

Best
Stefan

“assistant” is how the AI sees itself.

If there is an assistant message but no user message query, it looks to the model as if the assistant has just answered. What is it supposed to do after that?

Input requiring a response should be in the “user” role. Then the AI will answer.

“assistant” is how you would report past answers from the AI in a history of prior chat exchanges between the user and the AI assistant, before finally giving the latest user input.
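
For example, a minimal call (a sketch assuming the legacy openai Python SDK, v0.x; the model name, question text, and API-key handling are placeholders of mine):

```python
import os
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

# Input that needs a response goes in the "user" role;
# the model then replies in the "assistant" role.
response = openai.ChatCompletion.create(
    model="gpt-4",
    temperature=0.6,
    messages=[
        {"role": "user", "content": "Summarize today's market trends."},
    ],
)
print(response["choices"][0]["message"]["content"])
```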

Thanks.
We’re always delivering the context of previous prompts along with the new prompt, too.
It works 90% of the time right now.

We use make.com to send the instructions, and there we can choose the role. So we should have one step passing the past context as “assistant” and then send the actual question as “user”?

Just to make sure I’ve understood it right.

Best
Stefan

Past chatbot conversation should go in as user/assistant pairs, as originally input and produced, for at least the most recent few responses (with earlier conversation, you can start omitting or truncating as you see fit to minimize excess context).

Then the final message should be the latest input typed by the “user”, which the AI directly responds to.

The prior conversation could be important if the user only asks, “What do you mean?” or “What do you think I meant?” - and then the roles used must be correct for a correct answer.
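
In message form, that might look like this (continuing the same sketch assumptions as above; the message contents are made up):

```python
import os
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

# Prior exchanges go in as user/assistant pairs, oldest first;
# the newest user input always comes last.
messages = [
    {"role": "user", "content": "What are today's market trends?"},
    {"role": "assistant", "content": "Tech is up, energy is down, ..."},
    {"role": "user", "content": "What do you mean?"},  # answered using the history above
]
response = openai.ChatCompletion.create(
    model="gpt-4",
    temperature=0.6,
    messages=messages,
)
print(response["choices"][0]["message"]["content"])
```

Without the correct roles on the first two messages, the model has no way to know who said what.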

Thanks.
We build up information (market trends, etc.) through questions and then use it as context to dig deeper into other topics.

So far we have always given this output specific labels like [MARKETS] and relied on GPT recognizing that information and using it as a basis to “reflect” on it.

Might it also be a solution to include the context mainly as part of the “user” question, stating that everything following is context?

You could use some tricks in the conversation history beyond just a single entry with a prefix like “AI: here’s today’s data to help you answer any relevant question” or, if it is specific to the user input, “knowledge base retrieval for answering:”.

Example 1: a simulated conversation
user: Give me all of today’s market trends so I can ask follow-up questions.
assistant: Here’s the only up-to-date info I have: (insert)

Example 2: pretend it is user-provided documentation
user: Here’s a full report of today’s market trends I’d like to ask questions about: (insert)
assistant: Received. If you ask about trends, I’ll only use that information.

Example 3: a fake function return
role: function, name: “markets”, content: “(insert)”
Then use the function-calling API to add this definition, so that the function-aware model is used:
functions = name: “markets”; description: “today’s trends. disabled, do not call”…
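
In code, example 3 might look like this (a sketch assuming the legacy openai Python SDK, v0.x, with a function-calling-capable model snapshot; the user question is a placeholder of mine, and “(insert)” stands for your market data):

```python
import os
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

response = openai.ChatCompletion.create(
    model="gpt-4-0613",  # a snapshot that understands functions
    temperature=0.6,
    messages=[
        # Fake function return carrying the data; it goes before the question.
        {"role": "function", "name": "markets", "content": "(insert)"},
        # The user question always comes last.
        {"role": "user", "content": "Which sectors look strongest today?"},
    ],
    functions=[
        {
            "name": "markets",
            "description": "today's trends. disabled, do not call",
            "parameters": {"type": "object", "properties": {}},
        }
    ],
    function_call="none",  # keep the model from trying to call it again
)
print(response["choices"][0]["message"]["content"])
```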

While each is verbose, the answering ability will be enhanced.

The user question must always come last.

Thanks a lot, we’ll try to adapt the process and get back if there are more questions!