How to send messages "from" the application?

In an application with an integrated assistant, what are some good ways to send a message “from” the application itself, as opposed to the user?

For example, in a todo list app with reminders, we may want to ping the assistant when a timer runs out. Something like:

{"role": "application", "content": "Timer expired. Ask the user how it's going."}

That is, if custom "role" values were allowed.

It seems we would have to format the user message in some special way, e.g. with square brackets indicating a message that doesn’t come directly from the human user.
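Roughly something like this, if we went the bracket route (just a sketch; the model name and the exact wording are placeholders):

```python
# Sketch: inject an application event as a specially formatted "user" message.
# Assumes the openai Python package; model name and wording are placeholders.
from openai import OpenAI

client = OpenAI()

messages = [
    {"role": "system", "content": "You are the assistant inside a todo-list app."},
    {"role": "user", "content": "Remind me to check on my French studying in 25 minutes."},
    {"role": "assistant", "content": "Sure, I'll check in with you in 25 minutes."},
    # When the timer fires, the app appends a bracketed pseudo-event:
    {"role": "user", "content": "[Automated message from the application, not the user: "
                                "the 25-minute timer expired. Ask the user how the studying is going.]"},
]

response = client.chat.completions.create(model="gpt-4o", messages=messages)
print(response.choices[0].message.content)
```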

Or maybe we could have a function call like subscribe_to_app_alerts() that the AI calls at the start of every conversation, and we return a tool output message every time there’s a subsequent app alert…
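A rough sketch of that tool idea (the tool name and call id are made up; note that the API expects each tool message to answer a specific tool_call_id from an assistant turn, so streaming repeated alerts through one subscription call would be bending that contract):

```python
# Sketch: deliver an app alert as the tool output answering a prior
# subscribe_to_app_alerts() call. Tool name and call id are illustrative only.
from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "subscribe_to_app_alerts",
        "description": "Subscribe to alerts (timers, reminders) raised by the host application.",
        "parameters": {"type": "object", "properties": {}},
    },
}]

messages = [
    {"role": "system", "content": "At the start of each conversation, call subscribe_to_app_alerts."},
    {"role": "user", "content": "Hi!"},
    # The assistant turn in which it called the tool (as stored from an earlier response):
    {"role": "assistant", "content": None, "tool_calls": [{
        "id": "call_123",  # placeholder for the id the API actually returned
        "type": "function",
        "function": {"name": "subscribe_to_app_alerts", "arguments": "{}"},
    }]},
    # Later, the application injects an alert as that call's output:
    {"role": "tool", "tool_call_id": "call_123",
     "content": "ALERT: the 25-minute study timer expired. Ask the user how it's going."},
]

response = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)
print(response.choices[0].message.content)
```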

Or we could use developer messages.
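E.g. something like this, assuming a model that accepts the "developer" role (on older models "system" plays the same part; the wording is just a placeholder):

```python
# Sketch: the same event injected as a "developer" message instead.
# Newer chat models accept the "developer" role; wording is a placeholder.
messages = [
    {"role": "developer", "content": "You are the assistant inside a todo-list app."},
    {"role": "user", "content": "Remind me to check on my French studying in 25 minutes."},
    {"role": "assistant", "content": "Will do."},
    {"role": "developer", "content": "Application event: the 25-minute timer expired. "
                                     "Check in with the user about their studying."},
]
```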

Anyone have ideas or experience with this sort of pattern?

Running an input through an AI implies you want generated language that is unique, based on some sort of question-answering or intelligence, with a varying context.

If you don’t have a question needing an answer or a deliverable, the best way is to not use an AI at all for things that a plain computer program can do basically for free, and without messing up the quality of someone’s AI chat session history.

For example, it would be silly to send such a “developer” message on its own, because the same output could be produced by the user interface, and compliance would be better if the “application” language directly accompanied a user turn.

You can see the AI can barely distinguish “developer” from “user”, a constant issue in your own applications with this demoted role. You’d certainly improve it by labeling the message “automatic notification from application interface” or such…

If the text is meant for the AI’s understanding but is being sent by an application, include such application-based messaging only where it can accompany a user input and where the user actually wants a response.
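A sketch of that pairing, with my own placeholder phrasing; the application note only rides along when there is a real user turn that wants an answer:

```python
# Sketch: the application context rides along with a genuine user turn,
# rather than being sent as a standalone instruction. Phrasing is illustrative.
messages = [
    {"role": "developer", "content": "Automatic notification from application interface: "
                                     "the user's 25-minute study timer expired two minutes ago."},
    {"role": "user", "content": "Okay, I'm back. What was I supposed to do next?"},
]
```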

Why have an AI make “how it’s going” messages, when you can randomly pull from a pre-determined list of inactivity messages?

Or at least, don’t grow an existing conversation for a non-conversational AI task. You might schedule “Get the user their customized daily news report”, an AI-powered task not pinned to a specific chat history.

That is: the whole re-engagement response is a separate application call, with instructions like “You observe a conversation that happened in the following text block, where a user has not asked for new input from our AI product for a while. Give them a tailored invitation back to the platform, with a call-to-action based on their interests and past tasks, which we can then send.”
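A sketch of that separate call (model name, prompt wording, and transcript are placeholders); it reads the transcript as plain text and never grows the chat history:

```python
# Sketch: a one-off re-engagement task that observes the transcript
# without appending anything to the stored conversation.
from openai import OpenAI

client = OpenAI()

transcript = """user: I'm studying for a French exam on Friday.
assistant: Bon courage! Want me to set a study timer?
user: Yes, 25 minutes please."""

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": (
            "You observe a conversation in the following text block, where a user has not "
            "asked for new input from our AI product for a while. Write a tailored invitation "
            "back to the platform with a call-to-action based on their interests and past tasks."
        )},
        {"role": "user", "content": transcript},
    ],
)
print(response.choices[0].message.content)
```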

“From an application itself…” I ran that inquiry for you, and the AI model wasn’t so affected by the prior developer message, which could have been placed as a post-prompt, that it became a patient tutor.

If you’d like to experiment a bit with surrounding user messages with developer messages, where “tuning up” a response would be the goal, I have a “playground” where you can do this – something the OpenAI platform site’s “chat” does not let you see.

Hi _j, I appreciate the thorough reply.

> Why have an AI make “how it’s going” messages, when you can randomly pull from a pre-determined list of inactivity messages?

In the example I had in mind, it would say something more specific, like “How’s it going with studying for the French exam you were telling me about earlier?” I.e., where we do require some intelligence to come up with the message, and we do want a reply from the AI, just not in direct response to a chat message.

> That is: the whole re-engagement response is a separate application call, with instructions like “You observe a conversation that happened in the following text block, where a user has not asked for new input from our AI product for a while. Give them a tailored invitation back to the platform, with a call-to-action based on their interests and past tasks, which we can then send.”

Something like this might work, but could get quite complicated.
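Maybe something along these lines (purely a sketch, helper names made up): generate the nudge in a separate call, then splice only the assistant’s message back into the stored history so the conversation can continue from it.

```python
# Sketch: generate the check-in out-of-band, then splice only the assistant's
# message into the saved conversation. Helper names are hypothetical.
from openai import OpenAI

client = OpenAI()

def generate_check_in(history):
    """Produce a context-aware nudge from the existing conversation."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=history + [{
            "role": "user",
            "content": "[Automated message from the application, not the user: the study "
                       "timer expired. Check in about what the user said they were working on.]",
        }],
    )
    return response.choices[0].message.content

def deliver_check_in(history):
    nudge = generate_check_in(history)
    # Only the assistant's nudge is stored; the synthetic trigger message is dropped,
    # so the visible history remains an ordinary user/assistant conversation.
    history.append({"role": "assistant", "content": nudge})
    return nudge
```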

If anyone from OpenAI is reading, I would appreciate custom roles.

Just taking a stab: it seems like you’re talking about the need for a generic event handler and callback. You are referring to the API and not ChatGPT, right?

Handling events in my code is no problem. The issue is how best to structure conversations in the API when the assistant is sometimes replying to a chat message from a human and sometimes replying to an event within the application.

Yes, I am a developer using the API.

Sounds like you have some real complexities going on here…

Maybe if you could supply a process chart with pseudocode, someone could help you - that’s what I would do.

Typically we have:

  • Conversations with users in a chat format, with a history of their own messages being answered;
  • Tasks, automation, and data processing performed by AI.

Don’t mix the two, and you will have a happy life.
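Something like this separation, as a sketch (all names are illustrative): one code path extends the chat history, the other runs stand-alone AI tasks that merely read it.

```python
# Sketch of keeping the two paths apart. All names are illustrative.
from openai import OpenAI

client = OpenAI()

def reply_to_user(history, user_text):
    """Chat path: the history only ever grows with real user and assistant turns."""
    history.append({"role": "user", "content": user_text})
    response = client.chat.completions.create(model="gpt-4o", messages=history)
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

def run_ai_task(instructions, context_text):
    """Task path: a one-off completion that may read context but never joins the chat."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": instructions},
            {"role": "user", "content": context_text},
        ],
    )
    return response.choices[0].message.content
```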
