Hello OpenAI team,
We’ve encountered a recurring limitation: users who paste large code snippets or documents into ChatGPT receive an error indicating that the input exceeds the maximum allowed size. This disrupts the user experience, especially for developers working with large codebases or documentation.
Proposed Solution
Instead of returning an error when the input exceeds the context limit, we suggest a fallback mechanism:
- Temporary Local Storage: When the user’s input surpasses the context threshold, the system automatically stores the overflowing content in temporary, user-scoped storage (e.g., within the session or a secure local buffer).
- Deferred Vectorization: Once the user completes their input (either explicitly or via a timeout), the stored text is chunked and vectorized, similar to how project memory and document uploads are handled today.
- Seamless Retrieval: The stored content can then be recalled or referenced through natural-language queries, letting users continue working with large inputs in a scalable, intelligent way.
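To make the proposal concrete, here is a minimal sketch of the store-then-chunk-then-vectorize fallback. Everything in it is hypothetical: the `CONTEXT_LIMIT` value, the whitespace tokenizer, and the hashing-based "embedding" are stand-ins so the example runs without a model API; a production version would use a real tokenizer and embedding model.

```python
import hashlib
import math

CONTEXT_LIMIT = 100  # hypothetical per-message token budget

def tokenize(text):
    # Naive whitespace tokenizer standing in for a real tokenizer.
    return text.split()

def chunk(tokens, size=25, overlap=5):
    # Fixed-size chunks with overlap, as document-upload pipelines often use.
    step = size - overlap
    return [" ".join(tokens[i:i + size]) for i in range(0, len(tokens), step)]

def embed(text, dims=8):
    # Toy hashing "embedding" so the sketch is self-contained;
    # a real system would call an embedding model here.
    vec = [0.0] * dims
    for tok in text.split():
        h = int(hashlib.md5(tok.encode()).hexdigest(), 16)
        vec[h % dims] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

class OverflowBuffer:
    """User-scoped buffer that stores, chunks, and vectorizes overflow."""

    def __init__(self):
        self.entries = []  # list of (chunk_text, vector) pairs

    def accept(self, user_input):
        # Keep what fits in the live context; divert the rest to the buffer.
        tokens = tokenize(user_input)
        head, overflow = tokens[:CONTEXT_LIMIT], tokens[CONTEXT_LIMIT:]
        for c in chunk(overflow):
            self.entries.append((c, embed(c)))
        return " ".join(head)

    def query(self, question, k=2):
        # Cosine-style retrieval over the stored chunks (vectors are unit-norm).
        qv = embed(question)
        scored = sorted(self.entries,
                        key=lambda e: -sum(a * b for a, b in zip(qv, e[1])))
        return [text for text, _ in scored[:k]]
```

Usage: `buf.accept(long_text)` returns the in-context portion while the remainder is chunked and indexed; `buf.query("...")` later recalls the most relevant stored chunks instead of the user ever seeing a hard-stop error.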
This approach eliminates the hard-stop error, improves usability for technical users, and aligns with how memory systems like project storage already operate.
Cheers,
The Kruel.ai Team