Intelligent decision-making for what should be stored in the database?

Working on an exciting project using Vite + React and GPT-3.5 Turbo integrated with a database for memory storage and retrieval. Currently tackling an intriguing challenge: enabling GPT-3.5 Turbo to intuitively decide which information should be stored as long-term memory for later retrieval—just like a human would.

For example, casual queries like “How tall is Mount Everest?” shouldn’t clutter the database. However, important details when working on coding tasks must be remembered and retrieved intelligently when needed. Essentially, I’m seeking a seamless, human-like judgment from GPT-3.5 Turbo: intuitively recognizing and recalling relevant memories while efficiently disregarding trivial data.
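One way to sketch this gate: ask the model itself for a one-word STORE/SKIP verdict on each message. This is a minimal sketch, not a finished design; the prompt wording is my own, and `chat_fn` is a hypothetical stand-in for the real gpt-3.5-turbo call so the decision logic can be tried offline.

```python
# Sketch of a "should this be remembered?" gate. In the real app,
# chat_fn would call the gpt-3.5-turbo chat endpoint; here it is a
# stand-in so the surrounding logic stays testable without an API key.

MEMORY_PROMPT = (
    "You are a memory filter for a coding assistant. "
    "Reply with exactly STORE if the user's message contains project "
    "details worth remembering long-term (file names, decisions, "
    "requirements), or SKIP for trivia and one-off questions.\n\n"
    "Message: "
)

def parse_verdict(reply: str) -> bool:
    """Interpret the model's one-word verdict; default to not storing."""
    return reply.strip().upper().startswith("STORE")

def should_store(message: str, chat_fn) -> bool:
    """Ask the model whether `message` belongs in long-term memory."""
    return parse_verdict(chat_fn(MEMORY_PROMPT + message))

# Offline demo with a stubbed model:
fake_model = lambda prompt: "STORE" if "app.py" in prompt else "SKIP"
print(should_store("Remember that app.py serves the React build", fake_model))  # True
print(should_store("How tall is Mount Everest?", fake_model))                   # False
```

Defaulting to SKIP on anything that isn't an explicit STORE keeps a flaky model reply from cluttering the database.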

Has anyone tackled similar challenges or have ideas on making AI memory storage and retrieval more intuitive and context-aware?
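For the retrieval half, one common approach is to store an embedding vector with each memory and recall the nearest ones by cosine similarity. A minimal sketch, assuming embeddings would come from an embeddings endpoint such as text-embedding-ada-002 in the real app (the tiny three-number vectors below are made up purely for illustration):

```python
import math

# Each stored memory carries an embedding vector; recall ranks memories
# by cosine similarity to the query embedding and returns the closest.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def recall(query_vec, memories, top_k=2):
    """Return the texts of the top_k memories most similar to the query."""
    ranked = sorted(memories, key=lambda m: cosine(query_vec, m["vec"]), reverse=True)
    return [m["text"] for m in ranked[:top_k]]

memories = [
    {"text": "app.py exposes /chat for the React frontend", "vec": [0.9, 0.1, 0.0]},
    {"text": "User prefers dark mode in the UI",            "vec": [0.1, 0.9, 0.1]},
    {"text": "Database schema: messages(id, role, content)", "vec": [0.8, 0.2, 0.1]},
]

# A query vector pointing along the "coding" direction pulls back the
# app.py and schema memories, not the UI-preference one.
print(recall([1.0, 0.0, 0.0], memories, top_k=2))
```

At scale you would swap the linear scan for a vector database, but the ranking idea is the same.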

I cannot use the API for 4.5, o1, or even 4 despite being on the Pro plan :frowning:

Pleased with the interface I have created :slight_smile: (took all afternoon and most of the night to get this up and polished)

Ah, the API interface is not very intuitive.

Managed to get it to work with o3-mini-high and DALL·E 3, including the image result within the chat window.

Image in chat window did not work in previous attempts, and the explanation I was given for it is as follows:

ME: why did you not tell me to do this? the app.py was not working
ChatGPT said:

o3-mini-high: Because of how OpenAI’s API rollout for DALL·E 3 has been evolving, the official position was that specifying “model”: “dall-e-3” isn’t fully supported in the public API yet. So the standard guidance (and official docs) say to omit the model parameter altogether, which often falls back to DALL·E 2 or simply fails if your account lacks image-generation access.

Meanwhile, you updated your Chat.jsx to display any returned URL as an image, and apparently your account/project actually is returning valid image URLs (even though the logs showed “no access to model ‘dall-e-2’” earlier). Essentially, it started working once you:

Stopped specifying model="dall-e-3" in openai.Image.create() (so the endpoint defaulted to a model your project did have access to—or recognized your DALL·E 3 Beta).

Updated Chat.jsx so any returned URL is displayed as an image.

The reason I didn’t explicitly say “just keep the model=“dall-e-3” and it’ll magically work” is that—per the docs—the stable approach is to omit the model parameter and rely on the default DALL·E endpoint, which often is labeled “DALL·E 2” in the logs. If your project has DALL·E 3 Beta or any image-generation enabled behind the scenes, you can still end up getting DALL·E 3–quality images. It’s just not a documented or guaranteed method yet.

In short:

Officially: DALL·E 3 is not fully exposed in the public API.

Practically: Some accounts can use it if they have the right Beta or internal access, even though the logs may mention “dall-e-2.”

Your fix: Tweak Chat.jsx to handle image URLs and let the server respond with whatever image link it can generate.

That’s why I followed the documented approach to remove the model parameter entirely—rather than telling you to do what you ended up doing by trial and error. Now that it’s working for you, that just means your account has partial or beta access that the logs mislabeled as “dall-e-2.”
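The server-side half of that fix can be sketched as below. This is a hedged sketch assuming the pre-1.0 openai-python style (`openai.Image.create`); `create_fn` is a hypothetical stand-in for that call so the logic runs offline, and the key point is simply that no `model` argument is passed, letting the endpoint fall back to whatever image model the account actually has access to.

```python
# Request an image WITHOUT pinning "model": "dall-e-3"; the endpoint
# then uses whatever image model the account has access to.
# create_fn stands in for openai.Image.create in this offline sketch.

def generate_image_url(prompt: str, create_fn):
    """Request an image without a model parameter; return its URL or None."""
    try:
        response = create_fn(prompt=prompt, n=1, size="1024x1024")
        return response["data"][0]["url"]
    except Exception:
        # e.g. a "no access to model 'dall-e-2'"-style error from the API
        return None

# Offline demo with a stubbed endpoint:
fake_endpoint = lambda **kw: {"data": [{"url": "https://example.com/img.png"}]}
print(generate_image_url("a red fox", fake_endpoint))
```

The frontend side is then just what the transcript describes: Chat.jsx renders any returned URL as an `<img>` tag.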

OpenAI needs to be more transparent and provide better documentation when changing how things should work :slight_smile:

What do you mean?

There is a code button in the playground. And here is a community where you can ask questions.

Back to your topic.

Still learning; moving along nicely.

I literally have zero coding experience. Enjoying it though, thanks.

Sorry if my response was too harsh.

It’s absolutely ok to start some day. Everyone started at some point.
Just keep going, and in a couple of days, or why not years (no need to rush), you’ll be an expert too. First step first.

For the absolute beginners there is also this

I haven’t really thought it through, but in general I want to go from

Ax = basic programming stuff (how to set up the dev environment - how to do basic data analysis and what data structures are - building your own chat with extended memory)

Bx = ETL/ELT data pipelines and ideas on how to use databases, graph databases, and vector DBs

Cx = how to collect and prepare datasets and build own models

Dx = how to build an agentic pipeline that creates entire AIaaS applications

Ex = how to deploy and monetize that

(D and E will be invite only and require a certificate of completion of the other courses)

I have bookmarked that webpage, will get into it this coming weekend.

See you soon
