Threads in GPT-3 (like ChatGPT)?

One of the greatest features of ChatGPT, IMHO, is the threading capability: I can create and maintain different threads by subject, mood, context, etc. But how can I mimic this with the GPT-3 API? It seems like I always run into limits on input, output, or both.

Has anyone been successful in doing this?

For subject and context, you can squeeze them into the context window by summarizing the conversation history as you go. For mood/personality, you can edit the original response according to what Anthropic calls a “constitution,” which is outlined here with a video and my summary thereof:
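To make the summarizing idea concrete, here's a rough sketch using the pre-1.0 `openai` Python package and `text-davinci-003`. The model name, prompt wording, and the `chat_turn` helper are just placeholders, not a definitive implementation:

```python
import openai

openai.api_key = "YOUR_API_KEY"

def complete(prompt, max_tokens=256):
    """One completion call against a GPT-3 model (legacy SDK)."""
    resp = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=max_tokens,
        temperature=0.7,
    )
    return resp["choices"][0]["text"].strip()

def chat_turn(summary, user_message):
    """Answer the user while keeping a compressed 'thread' summary
    instead of the full conversation history."""
    prompt = (
        f"Conversation summary so far:\n{summary}\n\n"
        f"User: {user_message}\nAssistant:"
    )
    answer = complete(prompt)

    # Re-summarize so the running context stays small enough
    # to fit in the prompt on the next turn.
    new_summary = complete(
        "Summarize the following exchange in a few sentences, "
        "keeping names, decisions, and open questions:\n\n"
        f"{summary}\nUser: {user_message}\nAssistant: {answer}"
    )
    return answer, new_summary

# One "thread" is just a summary string you persist per subject/mood/context.
summary = ""
answer, summary = chat_turn(summary, "Let's plan a blog post about beekeeping.")
print(answer)
```

Each thread then becomes nothing more than a stored summary string you load before the next call.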

Gotcha. Kind of wondering if it's worth trying to square-peg-round-hole this, or if it's better to just wait for the ChatGPT API?

It’s definitely doable, and will likely become more easily doable in the near future. There are already YouTube tutorials that explain how to create a ChatGPT-like experience using GPT-3. Here are a couple that I have saved for later:

I’ve also run into this problem and haven’t found a way around it, even with the strategies above. For one feature, where I wanted to generate structured responses based on a list of input data, the fine-tuning approach was able to take over the heavy lifting and reduce the prompt size.
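For reference, the fine-tuning route looked roughly like this. The data, separators, and field names are just an example of the GPT-3 prompt/completion JSONL format, and the CLI command in the last comment is the legacy `openai` tool; treat the whole thing as a sketch rather than my exact pipeline:

```python
import json

# Each training example pairs the raw input data (prompt) with the
# structured response we previously had to coax out of a long prompt.
examples = [
    {
        "prompt": "Items: apples, 3; bananas, 5\n\n###\n\n",
        "completion": " {\"apples\": 3, \"bananas\": 5} END",
    },
    # ... hundreds more pairs ...
]

# GPT-3 fine-tuning expects one JSON object per line (JSONL).
with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# Then, roughly: openai api fine_tunes.create -t train.jsonl -m davinci
```

Once the fine-tuned model has learned the output structure, the prompt only needs the new input data, which frees up a lot of the context window.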

However, I’m working on a new writing-assistant service and am having trouble getting around this when generating segments of a text that could end up longer than the token limit. We don’t want to repeat the same point on a given topic when we call the API a second time. There’s the option of adding “don’t include info about topic A or topic B” to the prompt, but I’ve found that negations like that don’t work particularly well.
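To make that concrete, the segment-by-segment calls look roughly like the sketch below (simplified; the model name, outline, and prompt wording are placeholders, and the “do not repeat” line is exactly the kind of negation that tends to get ignored):

```python
import openai

openai.api_key = "YOUR_API_KEY"

def generate_segment(outline_point, covered_topics):
    """Generate one segment of a longer text, telling the model
    which topics earlier segments already covered."""
    avoid = ", ".join(covered_topics) or "nothing yet"
    prompt = (
        f"Write the next section of an article about: {outline_point}\n"
        # The negation mentioned above -- in practice the model
        # often ignores instructions phrased this way.
        f"Do not repeat information about: {avoid}\n\nSection:"
    )
    resp = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=400,
    )
    return resp["choices"][0]["text"].strip()

covered = []
for point in ["history of beekeeping", "choosing a hive", "seasonal care"]:
    segment = generate_segment(point, covered)
    covered.append(point)
    print(segment, "\n")
```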

Obviously the token-limit increase in GPT-4 will help with this, but I’d still be pro adding a threading option to the API, because the problem will likely recur, just in a larger form 🙂