Codex seems to be quite restricted in file size, and the files that would need it most (larger ones) cannot fit into the Codex prompt. Are there any best practices for mitigating this? I believe the token limit cannot be worked around, so what's left is to split the file into chunks for the prompts and reconstruct it afterwards. Any suggestions on the best way to do this?
What are you trying to do exactly? With a bit more detail, I might be able to help.
I’m trying to build an OpenAI-based codemod. The idea is to give an example input and output, and let Codex handle the rest. It works decently on smaller files, but the files where codemods are needed most (larger files) go over the token limit. It also loses effectiveness the larger the file is: it becomes more and more likely to just give up the further it proceeds through the file.
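One way to approach the split-and-reconstruct idea is to chunk the source at natural boundaries (top-level functions, classes, imports) rather than at arbitrary line counts, transform each chunk independently, then rejoin them. Here's a minimal sketch for Python source files using the stdlib `ast` module; the `transform` function is a hypothetical placeholder for the per-chunk Codex call, and the whole thing assumes the codemod is local enough that each chunk can be rewritten without seeing the rest of the file:

```python
import ast


def split_top_level(source: str) -> list[str]:
    """Split Python source into top-level chunks (imports, functions,
    classes) so each chunk fits into a separate prompt.

    Note: a sketch only — it doesn't capture decorators, comments between
    definitions, or blank-line formatting exactly.
    """
    tree = ast.parse(source)
    lines = source.splitlines()
    chunks = []
    for node in tree.body:
        # end_lineno requires Python 3.8+
        chunks.append("\n".join(lines[node.lineno - 1:node.end_lineno]))
    return chunks


def transform(chunk: str) -> str:
    # Placeholder: in the real codemod this would send the example
    # input/output pair plus this chunk to the model and return the
    # rewritten chunk. Identity pass here so the sketch is runnable.
    return chunk


def codemod(source: str) -> str:
    # Transform each chunk independently, then reconstruct the file.
    return "\n\n".join(transform(c) for c in split_top_level(source))
```

Keeping chunks at definition boundaries also tends to help with the "gives up partway through" problem, since each prompt is short and self-contained instead of one long file the model has to stay coherent across.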