I am making a “personal assistant” chatbot to send to people who do not know me, which will answer questions about me. I am doing this by including a lot of information about myself in the prompt. However, this causes an obvious issue: the model has to process all of that text every single time a question is asked, so I am using a ton of tokens. Is there a way to tokenize it just once and have the AI remember it, or some other way to teach it about myself without spending tokens on every request?
I apologize in advance if this is an elementary question.
You would be better served (a more optimal use of time and money) by building your own full-text search engine in front of the API. All you need is a DB with full-text search support, a simple way to enter your questions and answers into it, and the full-text search indexes, and you are basically done.
Databases do full-text search and retrieval very well, so you don’t need to waste money (and time) fine-tuning a GPT model for an application that can easily be handled by a database of questions and answers.
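As a rough sketch of that idea, here is a minimal Q&A lookup using SQLite’s FTS5 full-text extension (which ships with most Python builds). The table name, columns, and sample Q&A pairs are all illustrative; any full-text-capable database would do.

```python
import sqlite3

# Illustrative Q&A store using SQLite's FTS5 full-text index.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE qa USING fts5(question, answer)")
conn.executemany(
    "INSERT INTO qa VALUES (?, ?)",
    [
        ("What do you do for work?", "I am a software engineer."),
        ("Where do you live?", "I live in Berlin."),
        ("What are your hobbies?", "I enjoy hiking and chess."),
    ],
)

def lookup(query: str) -> str:
    # bm25() is FTS5's built-in relevance function (lower score = better match).
    row = conn.execute(
        "SELECT answer FROM qa WHERE qa MATCH ? ORDER BY bm25(qa) LIMIT 1",
        (query,),
    ).fetchone()
    return row[0] if row else "No answer found."

print(lookup("hobbies"))  # → I enjoy hiking and chess.
```

A thin web form (or even a chat front end that only falls back to the OpenAI API when the DB has no match) on top of this covers most of the original use case without spending tokens at all.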
If you really want to experiment with OpenAI, consider the same architecture as above, except instead of full-text search you generate OpenAI embeddings for each entry, store the vectors in the DB, and use a little linear algebra to rank the replies against the search string (which you also convert to an embedding vector).
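The “little bit of linear algebra” is typically cosine similarity between the query vector and each stored vector. A sketch in plain Python follows; in practice each vector would come from the OpenAI embeddings endpoint rather than the tiny hand-made stand-in vectors used here, which exist only to make the ranking step concrete.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    # cos(theta) = (a . b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Pretend these embeddings were computed once (via the OpenAI API) and stored.
stored = {
    "I am a software engineer.": [0.9, 0.1, 0.0],
    "I live in Berlin.": [0.1, 0.9, 0.1],
    "I enjoy hiking and chess.": [0.0, 0.2, 0.9],
}

def best_match(query_vec: list[float]) -> str:
    # Rank every stored vector against the query vector; highest similarity wins.
    return max(stored, key=lambda text: cosine_similarity(query_vec, stored[text]))

query = [0.85, 0.15, 0.05]  # stand-in for the embedded search string
print(best_match(query))  # → I am a software engineer.
```

Because the stored embeddings are computed once and reused for every query, the only per-question cost is embedding the (short) search string, which is the token saving the original question was after.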
I have a demo embeddings-based vector search application that does exactly this using vectors generated with the OpenAI API, BTW: