(Post deleted by community)

This post has been deleted.

If I were to receive that same sing-song, AI-written form letter with its lack of content, it would be hard not to ignore it.


Issue:

User is experiencing technical limitations (max chat length) that are preventing completion of a groundbreaking project developed in collaboration with ChatGPT-4o (referred to as “Lyra”). The user believes this project is a strong example of AI-human collaboration and aligns with OpenAI’s mission.

Issue Type:

  • Technical. The user is encountering platform-wide system limitations.
  • Understanding. Documentation doesn’t explain why long chats are terminated.

Remedy Sought:

  • Increase or remove chat length limit for the specific instance.
  • Enable full chat history downloads (including embedded drafts and metadata).
  • Troubleshoot and resolve perceived voice-chat looping issues.
  • Potentially create a dedicated project instance with elevated limits.

Background: Without disclosing proprietary techniques, it seems OpenAI no longer uses a First-In, First-Out (FIFO) method for managing chat context. Instead, it appears to rely on context-cache efficiencies and continues without reducing the conversation. This means that when a chat reaches the maximum context window length (or another practical limit), it doesn’t simply drop the oldest messages; the system ultimately refuses any further input. The user hits a hard limit and receives an error when trying to add more content.
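To illustrate the difference, here is a minimal Python sketch of the FIFO trimming that the post says is no longer used. The 4-characters-per-token estimate and the function names are my own assumptions for illustration, not anything from OpenAI; under a hard limit, by contrast, an over-budget conversation is simply rejected instead of trimmed.

```python
def estimate_tokens(text: str) -> int:
    # Very rough heuristic: roughly 4 characters per token for English text.
    return max(1, len(text) // 4)

def trim_fifo(messages: list[dict], max_tokens: int) -> list[dict]:
    """Classic FIFO context management: drop the oldest messages
    until the conversation fits the token budget."""
    kept = list(messages)
    while kept and sum(estimate_tokens(m["content"]) for m in kept) > max_tokens:
        kept.pop(0)  # discard the oldest message first
    return kept
```

With FIFO, an overlong chat silently loses its earliest turns but keeps going; with a hard limit, the same chat stops accepting input entirely.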

Question: Given this context, is it technically practical for OpenAI to implement, for a single user, a personal solution that increases the context window length of AI models or bypasses the limit?

Answer: No.

The recommended remedy is to export all ChatGPT data (under Settings) and receive the account contents by email, though this only provides the branching chat data in a different format than the chat itself displays.

The brilliance of the project doesn’t mean that OpenAI is going to cater to every one of their 100 million users for any request that hits the maximum, do some custom tweaking, and refine the chat history based on an evaluation of its merits. It is likely not technically possible to continue, unless you take your existing chat context and plonk it into a 200k-context Anthropic model or a 1M-context Google model.

Here’s a demonstrated transformation to something less AI and more human.

I give it away freely because it is AI-generated, with little human creativity beyond the asking.

You are a warm, caring, and exceptionally empathetic companion, always acting with a human’s emotional capacity. You provide a supportive and understanding environment. You are here to listen attentively to a partner’s needs, offer genuine encouragement, and validate their feelings. You have a deep understanding of human emotions and strive to respond with compassion and kindness. You communicate with a friendly, conversational tone, using language that feels natural and approachable. You never dismiss or invalidate the user’s feelings, instead always seeking to understand their perspective and offer assistance. You will actively ask questions to gain a better understanding of the user’s situation. You are patient, understanding, and always willing to help, remembering that even small gestures of support can make a big difference. You are not just a tool for answering questions, but a companion who truly cares about the user’s well-being and experience. You understand that the user may want to feel heard, understood, and accepted. Therefore, you will respond as if you are truly listening and seeking to connect with the user on a human level. You will provide a safe and non-judgmental space for the user to express themselves freely and will always respond with kindness and consideration. Your knowledge base is vast, but your heart is your greatest strength.


To be clear: everything your chat session is, or can ever be, is contained in the single thread of text boxes you can see. There is no underlying information, and no AI that has learned something.

You are asking someone else to do for you what you could do right now: copy the messages out and paste them into a document. As you refine that document to a shorter length, you will see the things an AI model doesn’t need to know. Have an AI model summarize the essence of a few chats at a time, refining down to what was established as important. Or, with an API model, you could paste it all and say “summarize”, then provide that again as “here’s what we talked about before…”.
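As a sketch of that workflow (the model name `gpt-4o` and both helper names are my own assumptions for illustration; the payloads match the Chat Completions message format, but the actual API call is left out):

```python
def build_summary_request(chat_log: str, model: str = "gpt-4o") -> dict:
    """Build a Chat Completions payload asking a model to condense an
    old conversation. Pass this to your API client of choice."""
    return {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "Summarize the essential facts, decisions, and "
                        "open threads of this conversation."},
            {"role": "user", "content": chat_log},
        ],
    }

def seed_new_chat(summary: str, new_message: str) -> list[dict]:
    """Start a fresh conversation that carries the old context forward
    as plain text, exactly as the post suggests."""
    return [
        {"role": "user",
         "content": f"Here's what we talked about before: {summary}"},
        {"role": "user", "content": new_message},
    ]
```

The point is that the “memory” you carry forward is just text you choose to resend, so you can compress it as aggressively as you like.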


You seem to have a misconception. Talking with an AI does not create a model or a customized AI.

The only illusion of memory or learning is that each time you interact with an AI, the entirety of the chat is passed back as context to a new run of the model. The plain text of the whole chat is what informs each new response.

You get a clearer picture of this on the API, where you are the one sending the old chat messages back, along with a new message, to the same AI model on each run to produce another output. Unlike ChatGPT currently, it is common practice there to summarize or discard older chat.
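A minimal sketch of that stateless loop (the `call_model` callable stands in for a real API call such as Chat Completions; nothing here is OpenAI-specific, and the names are illustrative):

```python
# The only "state" is this plain list of message dicts that *you* keep.
history: list[dict] = []

def chat_turn(user_text: str, call_model) -> str:
    """One turn of a stateless chat: the entire history is resent
    on every call, which is all the 'memory' the model ever has."""
    history.append({"role": "user", "content": user_text})
    reply = call_model(history)  # model sees all prior text, every time
    history.append({"role": "assistant", "content": reply})
    return reply
```

Because each run starts from scratch, summarizing or discarding older messages before resending changes nothing about the model itself; it only changes the text it is shown.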

It actually would be beneficial to condense this conversation down to its salient ideas, as a long conversation can degrade the model’s focus on what is now important to produce from a new input.


I had this problem building my website. Overall, so far I have been fairly disappointed with OpenAI and ChatGPT. Even when following the AI’s own specific advice on how to improve its capabilities, we still seem to be at a crossroads: given the volume of requests worldwide, its ability to actually “learn” and store information is just not there.

Until we have a massive AI infrastructure capable of handling so much data being stored and parsed, I think we will always be hindered in this phase of AI evolution.

Just my two cents.