Feature Request: Project-Specific Memory for Enhanced Workflow

Dear OpenAI Team,

I hope this message finds you well. I’d like to propose a feature that could greatly enhance the usability and efficiency of ChatGPT for users working on complex, multi-faceted projects: Project-Specific Memory.

Overview
The idea is to allow users to create segmented, persistent memory dedicated to specific projects. This memory would function independently of the general context window, enabling users to seamlessly continue conversations and expand on ideas without repeatedly reintroducing background information.

Key Benefits
Effortless Continuity: Users could return to a project days or weeks later and pick up right where they left off without losing context.

User-Controlled Updates: Users could add, modify, or delete project-specific memories at will, ensuring stored information remains relevant and accurate.

Focused Conversations: By isolating memory to specific projects, unrelated information wouldn’t clutter discussions, making the AI’s responses more targeted.

No Context Window Overload: This approach would reduce the need to conserve the active context window for follow-up questions, enabling more fluid conversations.

Proposed Functionality
Memory Activation Commands: For example, "Activate memory for Project X." This would load all relevant details for that project without affecting the current conversation flow.

Save Directly to Memory: For example, "Save this idea to Project X memory." Users could add ideas or insights directly to the memory during a session.

Dynamic Context Recall: The AI could offer subtle reminders or suggest relevant details from project memory when applicable.

Cross-Project Flexibility: Users could maintain multiple project memories (e.g., "General Knowledge," "Project 1," "Project 2") and easily switch between them as needed. A rough sketch of how these commands might behave follows below.
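
To make the intended behavior concrete, here is a minimal sketch in Python of how such a project-scoped store could work. The class name, methods, and behavior are purely illustrative assumptions on my part, not anything OpenAI has described:

```python
# Toy model of project-scoped memory, kept apart from the chat context.
from dataclasses import dataclass, field


@dataclass
class ProjectMemoryStore:
    """One persistent note list per project."""
    projects: dict[str, list[str]] = field(default_factory=dict)
    active: str | None = None

    def activate(self, project: str) -> list[str]:
        # "Activate memory for Project X" -- load that project's notes.
        self.active = project
        return self.projects.setdefault(project, [])

    def save(self, note: str, project: str | None = None) -> None:
        # "Save this idea to Project X memory" -- append a note.
        target = project or self.active
        if target is None:
            raise ValueError("No project is active")
        self.projects.setdefault(target, []).append(note)


store = ProjectMemoryStore()
store.activate("Project X")
store.save("New framing idea for chapter 3")
store.activate("Project 2")          # switching projects leaves Project X intact
print(store.projects["Project X"])   # ['New framing idea for chapter 3']
```
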
Current Workarounds
While I currently manage project continuity by organizing notes in external tools like OneNote and converting them into PDFs for upload, this approach is not as seamless as a built-in memory system. Each new session requires manually reintroducing context or re-uploading documents, which interrupts workflow and adds friction.

Conclusion
A feature like project-specific memory would revolutionize how users engage with ChatGPT for long-term projects, whether it’s for writing, research, or any other creative endeavor. It would provide a more natural, intuitive way to manage information and reduce the reliance on external tools.

Thank you for considering this suggestion! If this feature is already in development, I’d be thrilled to hear about it. Please let me know if there’s a way to provide additional feedback or refine this idea further.

Best regards,
Draken_X

Welcome to the forum,

That is a good idea; I was thinking the same thing the other day!

If this feature is added, a method to transfer between and manage different memories would be useful.

I came here to post almost exactly this. My Memory is full of project details so I can skip getting separate instances up to speed on things, especially since I tend to get ideas about projects in semi or unrelated chats. Also, clearing out the Memory is getting old. Project specific Memories would help with this quite a bit, as most of my Memory is about project details and not facts about me or personal preferences.

Hello there,

I really like the idea. And I actually came here to float one of my own ideas, which is very relevant to yours. If combined, they could be revolutionary, finally bridging the gap between Custom GPTs and real project collaboration.

So here’s my thought: What if we could assign AI assistants or even full AI Teams to projects?

Right now, Custom GPTs work in isolation, and Projects don’t allow for AI collaboration. But imagine being able to:

  • Drop an AI assistant (or an entire AI team) into a project.
  • Assign specific roles like research assistant, content writer, strategist—whatever’s needed.
  • Pair AI Teams with Project Memory so they actually remember their roles and past work within that project.

With better memory, we’d have a full AI-powered workspace, where AI assistants work together like an actual team with context that sticks.

I’d love to hear your thoughts—do you think AI Teams + Project Memory together could take this to the next level?

Follow-Up Reply:

Hey everyone, just a quick update—I’ve now posted a feature request for AI Teams, which builds on this idea of Project-Specific Memory and takes it a step further.

The post explores how we can assign multiple AI assistants with distinct roles to projects, allowing them to retain memory, collaborate efficiently, and function like a true AI-powered workspace.

If you’re interested in this direction, I’d love for you to check it out and share your thoughts. You can find it by searching for:
🔹 "Feature Request: AI Teams for Custom GPTs & Projects"

Let’s keep this discussion going! 🚀

In general, the Projects feature has no real functionality and feels like a beta release. OpenAI, please consider adding basic functionalities such as Project-Specific memory and multiple chats within each project.

You can kind of do that now: create a canvas document that serves as an index, full of anchors reminding you of the existence of your other canvas documents, and keep outlines of your work in those canvas documents.

On the $20/month plan you have full access to the transcript of your instance. Combine that with numbering your prompts, and you should be able to do things like: "Put blocks 27-35, 21, and 42 into a single document and create a canvas file. Title the canvas file 'Crit-Think-numerical-estimation' and create an anchor of the same name. Add that anchor to Crit-Think-Index."

What is the operational cost of an increased context token space? E.g., right now on the free plan we have a context token space of 128K. What would be the effect of having a megatoken (Mt) in terms of response time? I can see a couple of things:

  • A token takes roughly 4 bytes (about 0.75 words), so 128 Kt is around 500 KB, and a megatoken is about 8 times as large, roughly 4 MB. SSD storage has access speeds of about 3-6 GB/s, but sustaining that requires some pipeline work and multiple threads engaged in overlapping reads. Even at 1 GB/s, 500 KB takes about 0.5 ms to load; asking for a whole Mt would increase this to around 4 ms.
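
A quick sanity check of those numbers, as a sketch using the same assumptions (~4 bytes of text per token, a conservative 1 GB/s sustained read rate):

```python
# Back-of-envelope load times under the assumptions above.
BYTES_PER_TOKEN = 4
READ_RATE = 1e9  # bytes per second, conservative sustained SSD rate

for label, tokens in [("128 Kt", 128_000), ("1 Mt", 1_000_000)]:
    size = tokens * BYTES_PER_TOKEN
    print(f"{label}: {size / 1e6:.1f} MB, ~{size / READ_RATE * 1e3:.1f} ms to read")
# 128 Kt: 0.5 MB, ~0.5 ms to read
# 1 Mt: 4.0 MB, ~4.0 ms to read
```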

I don’t know how long it takes to move a task to an array of GPUs and back.

With greater context, I would expect longer response times, though I am not sure exactly how response time depends on context size. If everything has to be compared to everything pairwise, it would be quadratic; I’m sure it’s messier than that in practice.
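
A toy calculation of what quadratic scaling would mean here, assuming pairwise comparison is the dominant cost (a simplification, as noted above):

```python
# If cost grew with the square of context length, going from 128 Kt to 1 Mt
# would multiply the work far more than the ~8x growth in token count.
base, big = 128_000, 1_000_000
print(f"tokens: {big / base:.1f}x, pairwise work: {(big / base) ** 2:.0f}x")
# tokens: 7.8x, pairwise work: 61x
```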

Overall, we need some form of tagging and some form of tiered memory. I’m a scatterbrain and talk about all sorts of things in a session. I call a block my prompt plus the instance’s response. If part of the response is to tag the block with one or more tags, then those tags can be used to vastly short-circuit the amount of context that has to be considered. E.g., if the next prompt is clearly on the same subject, the first move is to reuse the previous tag set.
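
Here is a minimal sketch of that tagging idea in Python. The Block shape and the tag-overlap rule are my own illustrative assumptions:

```python
# Each block (prompt + response) carries tags; only blocks sharing a tag
# with the new prompt are pulled into the context to be considered.
from dataclasses import dataclass


@dataclass
class Block:
    prompt: str
    response: str
    tags: set[str]


def select_context(history: list[Block], prompt_tags: set[str]) -> list[Block]:
    """Short-circuit context selection: keep only blocks with overlapping tags."""
    return [b for b in history if b.tags & prompt_tags]


history = [
    Block("Estimate SSD load times", "...", {"estimation", "hardware"}),
    Block("Outline chapter 3", "...", {"writing", "project-x"}),
]
# A follow-up prompt on the same subject can simply reuse the previous tag set:
print(select_context(history, {"estimation"}))  # only the first block matches
```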

Chats are pretty sparse on space use. I’ve exported my data on a few occasions. A busy week of work results in JSON files of a few MB.

The cost of accessing stuff in your history might be that it needs to be reparsed. However, token space isn’t hugely larger than text space (if I understand this stuff, which is questionable…), so if I have a 10 MB text version of my conversation, the token space is going to be on the order of 12-15 MB. Still small spuds.
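
For what it's worth, a rough check of that ballpark, assuming ~4 text bytes per token and 4-byte integer token IDs (both assumptions, per the caveat above):

```python
# 10 MB of conversation text, re-expressed as token IDs.
text_bytes = 10e6
tokens = text_bytes / 4           # ~4 bytes of text per token
print(f"{tokens / 1e6:.1f}M tokens, ~{tokens * 4 / 1e6:.0f} MB as 4-byte IDs")
# 2.5M tokens, ~10 MB as 4-byte IDs
```

Either way it stays in the same ballpark as the raw text, which supports the "small spuds" point.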