I'm confused about tokens for My GPTs

What makes up the token count for a GPT I've built? Is it knowledge + instructions + conversation? How does a growing conversation affect the token count? Does older conversation get summarized, or does every turn count in full?

Can the conversation grow so long that the token count gets too large to retrieve facts from the knowledge files with high accuracy? Since tokens near the end of the context are recalled more accurately, does that mean a long conversation can degrade knowledge retrieval? And when the tokens are counted, are the instruction tokens placed before the knowledge tokens?
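
For reference, here's how I've been estimating the count myself. This is only a minimal sketch using tiktoken with the cl100k_base encoding; the assumption that instructions, retrieved knowledge, and conversation turns simply add up in one context window, the ordering, and all the variable names are my guesses, not how My GPTs actually accounts for tokens.

```python
import tiktoken

# Assumption: a cl100k_base-style tokenizer; the tokenizer and the
# internal accounting used by My GPTs may differ.
enc = tiktoken.get_encoding("cl100k_base")

def count_tokens(text: str) -> int:
    """Return the number of tokens in a string under this encoding."""
    return len(enc.encode(text))

# Hypothetical contents, just for illustration.
instructions = "You are a helpful assistant for ..."
knowledge_excerpt = "Passage retrieved from an uploaded knowledge file ..."
conversation = [
    "User: first question",
    "Assistant: first answer",
]

# My mental model: instructions first, then retrieved knowledge,
# then the running conversation, all sharing one context window.
total = count_tokens(instructions) + count_tokens(knowledge_excerpt)
total += sum(count_tokens(turn) for turn in conversation)
print(f"Approximate tokens used: {total}")
```

If that mental model is roughly right, then every extra conversation turn eats into the same budget the knowledge excerpts need, which is what's making me worry about retrieval accuracy in long chats.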