Can you add gitpython to the list of installed packages in the Code Interpreter sandbox? I want to be able to upload a repo and have the AI read through the diffs, etc.
Also, it tried to use the SpaCy and NLTK libraries, which are installed but don’t actually work: the sandbox has no internet access and can’t download their models. You should pre-download those models and install them in the virtual environment, too.
Code Interpreter (CI) does not (as of yet) have access to the live internet. So, even if you had access to the packages you want, the model would not be able to make use of them.
Currently, the only way to give data/files to CI is to use the upload button. You can upload an archive file (zip, rar, 7z, tar, etc.) and CI will unpack it in your sandbox, so if you gather all of the files of interest from the repository you want to analyze, that might be a possibility.
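As a rough sketch of that workflow (the `my-repo` path is just a placeholder for wherever your local clone lives), the standard library can do the packing:

```python
import os
import shutil

# Hypothetical: a local clone of the repository you want CI to analyze.
repo_dir = "my-repo"
os.makedirs(repo_dir, exist_ok=True)  # stand-in for an actual `git clone ...`

# Pack the clone into my-repo.zip; upload this archive via CI's upload
# button and it will be unpacked in the sandbox.
archive_path = shutil.make_archive("my-repo", "zip", root_dir=repo_dir)
print(archive_path)
```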
You can get the diff for each individual commit by appending .diff to the commit URL.
Using the latest commit (at the time of this writing) for OpenAI evals as an example, you would simply add .diff to the end of that commit’s URL.
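The URL pattern is simple enough to show in a couple of lines (the commit SHA below is a placeholder, not a real evals commit):

```python
# Appending .diff to a GitHub commit URL yields the raw diff for that commit.
# <commit-sha> is a placeholder; substitute an actual commit hash.
commit_url = "https://github.com/openai/evals/commit/<commit-sha>"
diff_url = commit_url + ".diff"
print(diff_url)
# Download diff_url with your browser (or curl) and upload the saved
# .diff file to Code Interpreter, since CI itself cannot fetch it.
```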
“I want to run a language model within my python within my language model”
The AI language libraries are probably there for their utility functions.
Imagine if they added tiktoken to it, without it needing to hit the internet for dictionaries: “How many tokens of conversation history were passed this turn? Then, to fool the history mangler, summarize everything we’ve talked about.”
Maybe someone from OpenAI will stumble across this. It seems they are focused more on the next tweaks for ChatGPT than on any API development.
Apologies, from your first post in this topic I mistakenly thought you wanted to be able to simply point CI at an online repo (e.g. github) and have it pull and interpret the diffs.
I believe I now understand what it is you’re hoping to achieve.
With respect to the SpaCy and NLTK libraries, I would guess (at least) one reason why the pre-trained models are not pre-downloaded and installed is because of the size requirements in having them in everyone’s virtual environments—but maybe that’s not as large a concern as I am imagining. I’ve not used either (I only “dabble” in python) so I could be entirely off-base there.
I’m not entirely sure why you think this would be simpler. I’m sure it’s easier for you, but it is perhaps asking more of the model than you should.
I always find I get better results when I go out of my way to meet the model where it is. In this case I would expect that to be generating the diff files myself in a way which is easy for the model to extract the information it needs from the diffs to answer any questions I would have.
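As a sketch of what I mean, using Python’s standard difflib with toy file contents standing in for two versions of a file (not your actual repo):

```python
import difflib

# Toy before/after contents standing in for two versions of one file.
old = ["def greet():\n", "    print('hi')\n"]
new = ["def greet(name):\n", "    print(f'hi {name}')\n"]

# Produce a unified diff (the same format `git diff` emits), which the
# model can read directly from an uploaded .diff file.
diff = "".join(
    difflib.unified_diff(old, new, fromfile="a/greet.py", tofile="b/greet.py")
)
print(diff)
```

Saving output like this to a `.diff` file per commit, then uploading the lot as an archive, hands the model exactly the text it needs to answer questions about the changes.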
Regardless, with respect to your original post, this isn’t the proper venue in which to reach anyone at OpenAI with this type of decision making authority. Your best bets would be to provide in-chat feedback and submit a suggestion via help.openai.com.
Lastly, I’m changing the category of this topic to ChatGPT since it relates to using chat.openai.com and not the development of a plugin for ChatGPT.