Differences between ChatGPT and Typical AI Expectations

When I began using OpenAI's services, I picked up right away on the difference between standard API calls to models like davinci and ChatGPT. ChatGPT was far better at maintaining a conversation and providing specific information.
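Here's roughly what I mean by the two styles of use. This is just a sketch using the legacy openai Python package (pre-1.0); the model names and parameters are assumptions for illustration, not exactly what I ran:

```python
# Rough sketch of the two interaction styles (legacy openai package, pre-1.0).
import openai

openai.api_key = "sk-..."  # placeholder

# Plain completion: you hand the model raw text and it continues it.
completion = openai.Completion.create(
    model="text-davinci-003",  # assumed davinci-style model
    prompt="What is the capital of Florida?",
    max_tokens=50,
)
print(completion.choices[0].text)

# Chat-style: the prompt is structured as a conversation, which is
# closer to how ChatGPT is used.
chat = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # assumed ChatGPT-backed model
    messages=[{"role": "user", "content": "What is the capital of Florida?"}],
)
print(chat.choices[0].message.content)
```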
For a while, I was using ChatGPT like I would my Google Home. That didn't last long, as I began to notice incorrect statements; the model delivered them with such conviction that it was hard to tell at first. I honestly don't have much experience working with AI technology like this, so I started using the ChatGPT service to get a feel for how it's different. I think I finally grasped the way it works a day or two ago, and I'm curious whether an expert can help me refine my understanding.
The way I understand it, ChatGPT is basically a really advanced autocomplete machine. You give it a prompt, and it responds with how that text might plausibly continue, based on the information it was trained on. For example, ChatGPT appears to answer questions because, typically, when a question is asked, it is followed by an answer. If you give it a prompt like "What is the capital of Florida?", then based on its training data it might respond with "The capital of Florida is Tallahassee," followed by some mostly accurate facts about Tallahassee.
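To make the autocomplete idea concrete, here's a toy sketch of "pick the most likely next word over and over." The lookup table is completely made up; a real model scores every token in its vocabulary with a neural network instead of looking things up:

```python
# Toy "autocomplete" model: maps text seen so far to a made-up most likely next word.
TOY_MODEL = {
    "What is the capital of Florida?": "The",
    "What is the capital of Florida? The": "capital",
    "What is the capital of Florida? The capital": "of",
    "What is the capital of Florida? The capital of": "Florida",
    "What is the capital of Florida? The capital of Florida": "is",
    "What is the capital of Florida? The capital of Florida is": "Tallahassee.",
}

def generate(prompt: str, max_words: int = 10) -> str:
    """Repeatedly append the 'most likely' next word until we run out."""
    text = prompt
    for _ in range(max_words):
        next_word = TOY_MODEL.get(text)
        if next_word is None:
            break
        text = f"{text} {next_word}"
    return text

print(generate("What is the capital of Florida?"))
# -> What is the capital of Florida? The capital of Florida is Tallahassee.
```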
From what I can tell, the main difference here is that you can't actually give the model instructions to follow. You can give it a prompt that includes instructions, and if its training data contains something that resembles that prompt, it'll construct a response using that data.
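In other words (and this is just my guess at how it's wired up), the "instructions" are just more text stitched onto the prompt; the model never executes them, it only continues the combined string:

```python
# Sketch of my mental model: instructions are just prepended text.
# Whether they get "followed" depends on what continuations the
# training data makes likely, not on any instruction-executing machinery.
def build_prompt(instructions: str, user_input: str) -> str:
    return f"{instructions}\n\nUser: {user_input}\nAssistant:"

prompt = build_prompt(
    "Answer in one short sentence.",  # hypothetical instruction
    "What is the capital of Florida?",
)
print(prompt)  # this whole string is what the model is asked to continue
```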
I'm sure there are many nuances I'm missing here. Feel free to supply constructive criticism. Thanks in advance.