You cannot rely on technical details from chatting with these GPT models. GPT is not an exact science: these are language models with roughly a 20% hallucination rate, give or take, which means they will confidently answer questions “the best they can” based on their pre-training data (or the lack of it, to be honest). As a result, these models will often produce very professional, perfect-sounding information which is totally (or partially) inaccurate.
There are many documents on the net covering the generalized GPT architecture, but these docs are very technical. A good search will turn up helpful results.
Regarding fine-tuning: many people are attempting to force these GPT models to give nearly exact answers specific to their own interests or business via the fine-tuning process. However, this approach is sub-optimal.
If you want a process (like a chatbot) that gives very specific replies to specific questions, you are better off putting a rules-based process in front of GPT3 (perhaps using embeddings to help match prompts to rules) rather than wasting time and energy fine-tuning. That way, the bot replies based on the rules you provide; and only when a prompt does not match anything in the rules engine does it fall back to GPT3 for a reply. A minimal sketch of this pattern is shown below.
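To make that concrete, here is a minimal sketch of the “rules first, GPT3 as fallback” idea, assuming the legacy openai Python client. The rule table, model names, similarity threshold, and helper functions are placeholder examples for illustration, not a recommendation of any specific setup:

```python
import numpy as np
import openai  # assumes the legacy (v0.x) openai Python client

# Hypothetical rule table: canonical questions mapped to fixed, hand-written answers.
RULES = {
    "What are your opening hours?": "We are open 9am-5pm, Monday to Friday.",
    "How do I reset my password?": "Use the 'Forgot password' link on the login page.",
}

def embed(text):
    # Get an embedding vector for the text (model name is just an example).
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=text)
    return np.array(resp["data"][0]["embedding"])

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Pre-compute embeddings for the rule questions once, at startup.
RULE_EMBEDDINGS = {q: embed(q) for q in RULES}

def answer(prompt, threshold=0.9):  # threshold is a guess; tune it on your own data
    # 1) Rules engine first: match the incoming prompt against the known questions.
    prompt_emb = embed(prompt)
    scores = {q: cosine(prompt_emb, e) for q, e in RULE_EMBEDDINGS.items()}
    best_q = max(scores, key=scores.get)
    if scores[best_q] >= threshold:
        return RULES[best_q]  # exact, rule-based reply that you control
    # 2) No rule matched: fall back to a plain GPT-3 completion.
    resp = openai.Completion.create(
        model="text-davinci-003", prompt=prompt, max_tokens=200
    )
    return resp["choices"][0]["text"].strip()
```

The point is that the answers you actually care about come from the table you wrote yourself, and GPT3 only handles whatever falls through the rules.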
I see many people here trying to fine-tune GPT3 models so they reply like a rules engine or expert system. This is suboptimal, as mentioned above.
Note
If you also notice, there are people here chatting philosophically with these GPT3 models, some trying to prove their own personal, human beliefs based on replies from these models. In reality, though, they are in many ways talking philosophy with a lunatic, because all the GPT3 models have a statistically significant hallucination rate, so you can get them to reply to just about anything.
The models are not perfect. They are beta models with a normal, high hallucination rate. Don’t rely on them for factual technical information; you must confirm what they “say”, for sure. This “fact” is also spelled out in the OpenAI “Terms of Service”, BTW.