How to preserve the context/session of a conversation with the API?

Hello @antonio258p, welcome to the OpenAI forum/community! We’re delighted to have you here =D


Greetings,

Thank you for raising this question on the OpenAI Developer Forum. You’re asking how to maintain conversation context or session continuity when using the OpenAI API, so that it behaves like the web version of ChatGPT, which appears to retain context from previous messages.

In the website version of ChatGPT, contextual continuity is achieved by resending all prior messages with each new request, which keeps the conversation coherent. When interfacing with the API, however, the responsibility for managing and preserving conversation history rests with the developer.

To replicate similar functionality through the API, developers must structure their API requests as a sequence of messages within a conversation. Each message in this sequence includes a “role” (system, user, or assistant) and “content” (the text of the message). By including all prior messages of the conversation in each API request, developers give the model the context it needs to produce relevant responses.

You can also use recursive summarization strategies, interleaving summarization calls with your regular requests so that older parts of the conversation are condensed before being resent. Simple examples of summarization are available on the OpenAI Platform, and a short sketch appears after the main example below.

There are several related topics in this forum; I also suggest searching for and reading through other threads, as they help a lot too, and of course, whenever possible, try to help others as well!

For example, using Python, one could structure an API call to maintain context as follows:

import openai

openai.api_key = "your-api-key"

conversation_history = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What's the weather like today?"},
    {"role": "assistant", "content": "I'm sorry, I cannot provide real-time weather information."},
    # Continue adding messages as the conversation progresses
]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=conversation_history
)

print(response['choices'][0]['message']['content'])

This example demonstrates that the complete conversation history must be included in the conversation_history list with every request, ensuring a coherent exchange with the model. To keep the context going across turns, append each new user message and assistant reply to the list before the next call, as sketched below.
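Here is a minimal sketch of such a multi-turn loop, building on the example above; the ask() helper and the hard-coded follow-up questions are illustrative choices, not part of the OpenAI API:

def ask(conversation_history, user_message):
    # Add the new user message to the running history
    conversation_history.append({"role": "user", "content": user_message})
    # Send the full history so the model sees every previous turn
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=conversation_history
    )
    answer = response['choices'][0]['message']['content']
    # Store the assistant's reply so the next request keeps the full context
    conversation_history.append({"role": "assistant", "content": answer})
    return answer

print(ask(conversation_history, "Then what can you help me with?"))
print(ask(conversation_history, "Can you give me an example?"))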

  • Kindly refer to the official OpenAI API documentation for the most current guidelines on structuring API requests and managing conversation context, as methodologies may evolve over time.
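On the recursive summarization idea mentioned earlier: once the history grows long, you can condense older messages into a short summary so that each request stays within the model’s context window. The following is a minimal sketch using the same pre-1.0 openai library as above; the helper names and the thresholds (compact after 20 messages, keep the last 6 verbatim) are arbitrary illustrative choices:

def summarize_messages(messages):
    # Ask the model itself to condense the given messages into a short summary
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=messages + [
            {"role": "user", "content": "Summarize the conversation above in a few sentences."}
        ]
    )
    return response['choices'][0]['message']['content']

def compact_history(conversation_history, max_messages=20, keep_recent=6):
    # Leave short histories untouched
    if len(conversation_history) <= max_messages:
        return conversation_history
    system_prompt = conversation_history[:1]        # keep the original system message
    recent = conversation_history[-keep_recent:]    # keep the latest exchanges verbatim
    older = conversation_history[1:-keep_recent]    # everything in between gets summarized
    summary = summarize_messages(system_prompt + older)
    return system_prompt + [
        {"role": "system", "content": "Summary of the earlier conversation: " + summary}
    ] + recent

# Run this on the list before each new request once the conversation grows long
conversation_history = compact_history(conversation_history)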

For further exploration and inspiration, you might find these resources valuable:

  1. Streamlit LLMs Gallery: Discover a collection of applications powered by Large Language Models (LLMs) on the Streamlit Gallery. These applications showcase the diverse capabilities of LLMs across various use cases, from text generation to language translation. Explore the potential of LLMs in real-world applications by visiting the Streamlit LLMs Gallery: Streamlit LLMs Gallery.
  2. ChatGPT Streamlit App: Experience the magic of ChatGPT through the ChatGPT Streamlit app. This app lets you have interactive conversations with the model, allowing you to witness its language capabilities firsthand. Engage in a dialogue and see how the model responds to your input: ChatGPT Streamlit App.
  3. LangChain Tutorial - Build a Text Summarization App: To delve deeper into the world of natural language processing (NLP), explore the LangChain tutorial on building a Text Summarization app. This tutorial walks you through the process of creating an app that condenses lengthy text into concise summaries. Learn how to utilize LangChain, Streamlit, and LLMs to develop this application step by step: LangChain Tutorial - Build a Text Summarization App.
  4. LangChain Docs | :parrot::link: LangChain
  5. Memory | :parrot::link: LangChain

Best regards =D

