Get topic/subject of the conversation

When we ask a question in a new chat, ChatGPT automatically gives the chat a subject name.
I need the same thing with the API. Is there any way to do this without sending the whole conversation again and asking the bot to give it a name?

Hi @Chronos,

I’ve read the documentation and was not able to find this functionality anywhere.

In general the name is given very early in the conversation and does not change at a later point, so you might consider getting the name early on and sticking to it.

But I think that would be a great functionality to add :slight_smile:

Hi @Chronos

This is trivial (sorry to use that word) to implement in the API.

You can simply take the first prompt, truncate it to the length you wish, and use that string as the topic / subject of the chat.

This is how I do it in my OpenAI API lab (and I think this may also be how the ChatGPT UI does it, but I have not checked).

It’s a very simple few lines of code, TBH.

It’s not really necessary to add this to the API. It’s easy code that a developer using the API can write in a few lines (see the appendix for a simple example).



Appendix: Sample (Ruby) Code

```ruby
prompt = "How many planets are in the solar system?"
subject = prompt[0..21] + "..."  # first 22 characters, then an ellipsis
puts subject
# => How many planets are i...
```

Hi @ruby_coder,

thank you, this is a good solution. Not quite as good as what the ChatGPT UI does, but it should work for 99% of cases.

So I think this would be one step further than just taking the first line.
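Staying with the truncation idea, one small step further is cutting at a word boundary instead of mid-word. A minimal sketch; the method name and the 24-character limit are just illustrative choices, not anything from the API:

```ruby
# Truncate the first prompt at a word boundary instead of mid-word.
# max_len is an arbitrary budget for the subject line.
def subject_from_prompt(prompt, max_len = 24)
  return prompt if prompt.length <= max_len

  cut = prompt[0...max_len]
  # Drop a trailing partial word (and the whitespace before it), if any.
  cut = cut.sub(/\s+\S*\z/, "")
  cut + "..."
end

puts subject_from_prompt("How many planets are in the solar system?")
# => How many planets are in...
```

Prompts shorter than the limit come back untouched, so very short first messages simply become the subject as-is.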


You are welcome.

Well, developers are free to write their own method which is more than one line of code, LOL

My point is that it does not need to be a feature in the API. That is the kind of code devs write and all I did was provide a one-liner which works fine for me in my app. Of course, you are free to modify or create your own fantastic method.

You could send your prompts and completions to some other model and ask it to summarize them in 24 chars or less, if that pleases you, of course. The sky is the limit.
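For the "ask another model" route, here is one way the request could look, sketched against the public chat completions endpoint. The payload shape follows the OpenAI API docs; the instruction wording, model choice, and token limit are just example values:

```ruby
require "json"

# Build a chat completions request body that asks the model to turn the
# user's first message into a short chat title.
def title_request_body(first_prompt)
  {
    model: "gpt-3.5-turbo",
    messages: [
      { role: "system",
        content: "Summarize the user's message as a chat title of 5 words or fewer." },
      { role: "user", content: first_prompt }
    ],
    max_tokens: 16
  }.to_json
end

body = title_request_body("How many planets are in the solar system?")

# Sending it (requires OPENAI_API_KEY in the environment), roughly:
#   require "net/http"; require "uri"
#   uri = URI("https://api.openai.com/v1/chat/completions")
#   req = Net::HTTP::Post.new(uri, "Content-Type" => "application/json",
#                             "Authorization" => "Bearer #{ENV["OPENAI_API_KEY"]}")
#   req.body = body
#   res = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(req) }
#   title = JSON.parse(res.body).dig("choices", 0, "message", "content")
```

This costs an extra API call per chat, which is the trade-off against the free truncation approach above.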



Typically a full product using GPT will have other models in the background doing small tasks such as classification. For example, I have an ada bot which determines whether a question is directly related to a company's products, and therefore makes a request to my database for context.

Your best bet is unfortunately to re-send the complete conversation. You should be doing this regardless, for things such as summarization to reduce token length.
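The re-send-everything pattern usually comes with trimming old turns to stay under a token budget. A rough sketch of that; real token counting needs a tokenizer, so this uses a crude words-as-tokens stand-in purely for illustration, and the budget is an arbitrary example:

```ruby
# Keep the most recent messages whose combined "word count" fits the
# budget, dropping the oldest turns first. Words stand in for tokens here.
def trim_history(messages, max_words: 100)
  kept = []
  total = 0
  messages.reverse_each do |msg|
    words = msg[:content].split.size
    break if total + words > max_words
    kept.unshift(msg)  # preserve chronological order
    total += words
  end
  kept
end
```

In practice you would summarize the dropped turns (with another completion call, or a cheaper model) and prepend that summary as a system message, rather than discarding them outright.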

You don’t need an absolute powerhouse like GPT-4 for this, though, or even GPT-3. There are many free open source models that you can spend your computer’s resources on instead of cold hard cash.

For fun, I asked GPT-3.5 the following:

Good day 3.5. Let’s visualize a wonderful relaxing environment to begin our conversation.

Hello! That sounds like a great idea. Let’s imagine a peaceful environment where we can have a relaxing conversation.
Close your eyes and picture yourself on a beautiful white sand beach with crystal clear water gently lapping at your feet. The sun is shining down on you, and a light breeze is blowing, carrying the scent of saltwater and coconut oil. You can hear the sound of seagulls in the distance and the gentle rustling of palm leaves overhead.
As you look around, you see colorful umbrellas and beach chairs dotting the shoreline, and people lounging and enjoying the peaceful surroundings. The water is so clear that you can see schools of tropical fish swimming by, and if you look out to the horizon, you can see sailboats and yachts gliding across the sparkling water.
Take a deep breath and feel the relaxation wash over you as you let go of any stress or worries. Now, let’s begin our conversation. What would you like to talk about today?

I sent this conversation to philschmid/flan-t5-base-samsum · Hugging Face

And got: “Imagine a relaxing environment” for the title.

Thanks all. Personally, since OpenAI already processes the prompt and knows its context, I think it would be a good feature to be able to get the subject directly from the response as a ~5-word string. But other ways are available for now.