It looks like there is a length limit on the prompt when I use the code completion API. Is there any other way to let the model read the repository?
Thanks.
Unfortunately not.
With Davinci the limit is 4,096 tokens (about 3,000 words), and with the other models it is 2,048 tokens (about 1,500 words).
That limit also includes the tokens in the prompt you send to get the completion.
The word count will vary with the language and the complexity of the content.
For Codex, the Davinci model accepts about 8,000 tokens; Cushman accepts fewer.
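As a rough workaround, you can split the repository text into prompt-sized chunks and send them one at a time. A minimal sketch: `chunk_words` is a hypothetical helper, and the 0.75 words-per-token ratio is only an approximation of the ratio implied above (4,096 tokens ≈ 3,000 words) — real counts from the API tokenizer will differ.

```python
def chunk_words(text, max_tokens=2048, words_per_token=0.75):
    """Yield word chunks whose estimated token count stays under max_tokens.

    Assumes roughly 0.75 words per token; this is an approximation,
    not the model's actual tokenization.
    """
    max_words = int(max_tokens * words_per_token)
    words = text.split()
    for i in range(0, len(words), max_words):
        yield " ".join(words[i:i + max_words])

# Example: a 5,000-word file splits into chunks of at most 1,536 words
# under a 2,048-token budget.
chunks = list(chunk_words("word " * 5000, max_tokens=2048))
```

Each chunk can then be sent as its own prompt, leaving headroom for the completion itself, since the limit covers prompt and completion together.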