I followed the embeddings tutorial, which uses text-davinci-003 for completions and text-embedding-ada-002 for embeddings.
When I increase max_tokens I get this error:
This model’s maximum context length is 4097 tokens, however you requested 4605 tokens (1605 in your prompt; 3000 for the completion). Please reduce your prompt; or completion length.
When I put my prompt into OpenAI’s tokenizer it tells me my prompt is 15 tokens (66 characters). Is it counting something else in the prompt token count?
Yes, I can get it to work if I turn max_tokens down; I'm just perplexed at how it's arriving at its prompt token count. 1605 tokens is over 100x the 15 my prompt should be.
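For reference, I suspect the discrepancy comes from how the tutorial assembles the final prompt: it prepends the retrieved document sections to the question before calling the completions endpoint, so the API counts all of that injected context, not just my 15-token question. A rough sketch of that assembly (function names and the prompt template here are illustrative, not the tutorial's exact code, and the ~4-characters-per-token figure is OpenAI's rule of thumb, not an exact count):

```python
def build_prompt(question: str, context_sections: list[str]) -> str:
    """Hypothetical version of the tutorial's prompt assembly: the
    top-ranked sections from the embedding search get pasted in
    ahead of the user's question."""
    header = "Answer the question using the provided context.\n\nContext:\n"
    context = "\n---\n".join(context_sections)
    return f"{header}{context}\n\nQuestion: {question}\nAnswer:"

def rough_token_count(text: str) -> int:
    # Rule of thumb: ~4 characters per token for English text.
    # For exact counts, use the tiktoken library instead of this estimate.
    return max(1, len(text) // 4)

question = "What did the author say about embeddings?"

# Stand-in for the document sections returned by the embedding search.
sections = ["Some retrieved passage text. " * 50 for _ in range(4)]

prompt = build_prompt(question, sections)
print(rough_token_count(question))  # small: matches the tokenizer page
print(rough_token_count(prompt))    # hundreds: closer to what the API reports
```

If that's right, the tokenizer page and the API are both correct; they're just measuring different strings, and the fix is to retrieve fewer or shorter sections (or lower max_tokens) so context + completion fits in the 4097-token window.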