I chatted with ChatGPT and proved a concept, and ChatGPT agreed with me, but after a while it went back to insisting on the wrong concept. Can ChatGPT learn anything from its own answers?
ChatGPT is a generative AI model that predicts the next piece of text based on the text that came before it.
Think of ChatGPT as a fancy auto-completion engine.
The model is trained on data up to a cutoff date (in 2021 or 2022, depending on the version), and anything you say is only retained for the length of the current conversation's context. You can add persistence yourself: write a script and set up a container so your prompts go through a client such as aichat, then point that system at a persistence database that stores and replays the conversation history. It slows things down a little, but it works.
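The persistence idea described above can be sketched roughly as follows. This is a minimal illustration, not the actual setup: messages are stored in SQLite and the full history is replayed into each new prompt, so the model "sees" past turns even across sessions. `call_model` is a hypothetical stand-in for whatever chat API or tool (e.g. aichat) you route prompts through.

```python
import sqlite3

class ConversationStore:
    """Persists chat messages so history survives between sessions."""

    def __init__(self, path=":memory:"):
        # Pass a file path (e.g. "chat_history.db") to persist across runs.
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS messages ("
            "id INTEGER PRIMARY KEY AUTOINCREMENT, role TEXT, content TEXT)"
        )

    def add(self, role, content):
        self.db.execute(
            "INSERT INTO messages (role, content) VALUES (?, ?)",
            (role, content),
        )
        self.db.commit()

    def history(self):
        # Replay the whole stored conversation, oldest first.
        rows = self.db.execute(
            "SELECT role, content FROM messages ORDER BY id"
        ).fetchall()
        return [{"role": r, "content": c} for r, c in rows]


def call_model(messages):
    # Hypothetical model call: a real version would send `messages`
    # to a chat API. Here we just echo the last message back.
    return "echo: " + messages[-1]["content"]


if __name__ == "__main__":
    store = ConversationStore("chat_history.db")
    store.add("user", "2 + 2 = 4, agreed?")
    reply = call_model(store.history())  # model sees all prior turns
    store.add("assistant", reply)
    print(len(store.history()))
```

The extra database read on every turn is what causes the slowdown mentioned above, and the replayed history is still limited by the model's context window, so a real setup would eventually need to truncate or summarize old turns.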