Hi all, if this is the wrong place for this topic, I appreciate you pointing me in the right direction, thanks!
My end goal is to have ChatGPT store, modify, analyze and update tabular data in a way that allows for real-time retrieval in future sessions. To start, my use case is a dosage log for some medicine I take daily. The log records the date/time, amount (ml), source, and notes. ChatGPT faithfully created this log in the format I requested, claiming it was stored in some hidden, persistent structured data warehouse, and as I've recited my dosages each day, it has appeared to maintain this table.
But after two months of data, the familiar feeling of little errors (like when you've lost old context) crept in. So I looked a little deeper, and it seems ChatGPT doesn't have any kind of hidden structured data store at all, at least not one the user can do anything with. All of my log entries have been stored as prose in the memory feature, along with instructions on the table structure, etc. It must recompile this data each time into whatever temporary space it has. This works fine for small tables, but I can see now that it will not scale to any sufficiently large data set, such as a years-long medicine log.
Hence my need for a structured, tabular memory space that ChatGPT can use for whatever purpose requires it. So of course I asked it what I should do, and the most salient idea for me was to create an API of my own that implements CRUD operations on a SQLite DB (plus any other supporting methods), then configure the API endpoints as an Action in a Custom GPT (or whatever the right mechanism is). This way, it can call the API to add a log entry, call again to retrieve the current, real-time data, etc. Voila! Persistent, structured, tabular memory available between sessions.
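For what it's worth, here's roughly what I have in mind for the data layer. This is just a sketch, not working infrastructure: the table name, column names, and function signatures are all my own invention, and the web/Action layer (FastAPI, Flask, whatever) would wrap these functions as endpoints.

```python
import sqlite3
from datetime import datetime, timezone

# Hypothetical schema matching the log described above:
# date/time, amount (ml), source, and notes.
SCHEMA = """
CREATE TABLE IF NOT EXISTS dose_log (
    id        INTEGER PRIMARY KEY AUTOINCREMENT,
    taken_at  TEXT NOT NULL,   -- ISO 8601 timestamp
    amount_ml REAL NOT NULL,
    source    TEXT,
    notes     TEXT
)
"""

def connect(path="dose_log.db"):
    """Open (or create) the SQLite file and ensure the table exists."""
    conn = sqlite3.connect(path)
    conn.row_factory = sqlite3.Row  # rows behave like dicts
    conn.execute(SCHEMA)
    return conn

def add_entry(conn, amount_ml, source=None, notes=None, taken_at=None):
    """Create: insert one dose record; defaults to 'now' in UTC."""
    taken_at = taken_at or datetime.now(timezone.utc).isoformat()
    cur = conn.execute(
        "INSERT INTO dose_log (taken_at, amount_ml, source, notes) "
        "VALUES (?, ?, ?, ?)",
        (taken_at, amount_ml, source, notes),
    )
    conn.commit()
    return cur.lastrowid

def list_entries(conn, limit=50):
    """Read: most recent entries first."""
    rows = conn.execute(
        "SELECT * FROM dose_log ORDER BY taken_at DESC LIMIT ?", (limit,)
    ).fetchall()
    return [dict(r) for r in rows]

def update_entry(conn, entry_id, **fields):
    """Update: change only whitelisted columns on one row."""
    allowed = {"taken_at", "amount_ml", "source", "notes"}
    sets = {k: v for k, v in fields.items() if k in allowed}
    if not sets:
        return False
    clause = ", ".join(f"{k} = ?" for k in sets)
    cur = conn.execute(
        f"UPDATE dose_log SET {clause} WHERE id = ?",
        (*sets.values(), entry_id),
    )
    conn.commit()
    return cur.rowcount == 1

def delete_entry(conn, entry_id):
    """Delete: remove one row by id."""
    cur = conn.execute("DELETE FROM dose_log WHERE id = ?", (entry_id,))
    conn.commit()
    return cur.rowcount == 1
```

Then each function becomes an endpoint, and the endpoints get described in an OpenAPI schema so ChatGPT knows how to call them. Does this seem like the right shape, or am I overbuilding?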
I am posting here as a reality check before I embark on this. It seems sensible, but are there other designs that are easier or better? How do you handle this stuff, personally? Many thanks!
-Richard