Hello,
I have been looking for more detailed information about the models themselves. I was able to find the basics, such as token pricing and the maximum context length, but I couldn't really find anything about how the models were specifically trained (I'm aware of what the GPT acronym stands for), or whether any portion of the context window is reserved by the OpenAI API services, etc.
I am pursuing this information because I am interested in integrating LLM models with the NAO robot; a minimal sketch of what I have in mind is included below. I am also curious about how to handle timing issues, such as response latency during live interaction. Any insights you could provide on these topics would be greatly appreciated.
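For context, here is a rough sketch of the kind of integration I mean, assuming the OpenAI chat completions REST endpoint (called with `requests`) and the legacy NAOqi Python SDK; the robot IP, model name, and API key handling are placeholders for my own setup, not a recommendation.

```python
# -*- coding: utf-8 -*-
# Sketch only: send a prompt to the chat completions endpoint, then have NAO speak the reply.
# Assumptions: `requests` is installed, OPENAI_API_KEY is set in the environment, and the
# legacy `naoqi` Python SDK is available. IP, port, and model name are placeholders.
import os
import requests
from naoqi import ALProxy  # legacy NAOqi Python SDK

NAO_IP = "192.168.1.10"  # placeholder robot address
NAO_PORT = 9559          # default NAOqi port

def ask_model(prompt):
    """Send a single user prompt to the chat completions endpoint and return the reply text."""
    response = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": "Bearer " + os.environ["OPENAI_API_KEY"]},
        json={
            "model": "gpt-4o-mini",  # placeholder model name
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

def speak_on_nao(text):
    """Have the robot say the text out loud via its text-to-speech service."""
    tts = ALProxy("ALTextToSpeech", NAO_IP, NAO_PORT)
    tts.say(text.encode("utf-8"))  # NAOqi expects byte strings under Python 2

if __name__ == "__main__":
    reply = ask_model("Introduce yourself as a NAO robot in one sentence.")
    speak_on_nao(reply)
```

The latency of the `ask_model` call is exactly the timing concern I mentioned above, since the robot is idle while waiting for the response.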
Thank you.