You can already use your API key in Visual Studio Code and Android Studio Canary … you can also check your code with the REPL in aichat: `cargo install aichat --force`
This sounds an awful lot like:
I’d really like to pay more money, to use the features you already provide, where you do all the work for me instead of expecting me to optimize my process.
I stopped working on it when they gave me official API / Discord access, but it was working the last time I touched it… GitHub - 0m364/Bard: Python SDK/API for reverse engineered Google Bard
Exactly! Let’s all like and comment to get OpenAI’s attention.
You don’t need OpenAI to create an API just for that, if you can create a REPL for yourself like LangChain does with its Python interpreter tool.
see
https://python.langchain.com/docs/modules/agents/toolkits/python
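A minimal self-hosted version of such a REPL tool, in the spirit of LangChain’s `PythonREPL` (this is an illustrative sketch, not LangChain’s actual code), could look like:

```python
# Minimal sketch of a self-hosted Python "REPL tool" an LLM could call.
import io
import contextlib

def run_python(code: str) -> str:
    """Execute a code string and capture anything it prints.

    WARNING: exec() runs arbitrary code; a real deployment would sandbox
    this (subprocess, container, resource limits).
    """
    buf = io.StringIO()
    try:
        with contextlib.redirect_stdout(buf):
            exec(code, {"__builtins__": __builtins__})
    except Exception as e:  # return the error so the model can self-correct
        return f"{type(e).__name__}: {e}"
    return buf.getvalue()

# The model emits code, you run it, and feed the output back:
print(run_python("print(sum(range(10)))"))  # → 45
```

The key design choice is returning exceptions as text instead of raising, so the model sees its own errors and can retry.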
Unpopular opinion, but I find Langchain a huge pain to work with. I find the results are hit-and-miss and the docs to be incredibly unclear. With the function calling API OpenAI has introduced, I almost bat 1000 for exactly what I want it to do.
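For context, the function-calling pattern boils down to: describe your function as a JSON schema, pass it to the chat API, and dispatch the model’s `function_call` message to local code. The sketch below simulates the model’s message rather than making a network call; `get_weather` and its schema are made-up examples:

```python
import json

# JSON-schema description you'd pass as `functions=[...]` to the chat API
get_weather_schema = {
    "name": "get_weather",
    "description": "Get the current weather for a city",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

def get_weather(city: str) -> str:
    # stub implementation for illustration
    return f"22°C and sunny in {city}"

def dispatch(function_call: dict) -> str:
    """Route the model's function_call message to local code."""
    args = json.loads(function_call["arguments"])
    if function_call["name"] == "get_weather":
        return get_weather(**args)
    raise ValueError(f"unknown function {function_call['name']}")

# Simulated model response (this is what the API returns in `function_call`):
print(dispatch({"name": "get_weather", "arguments": '{"city": "Oslo"}'}))
```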
I think this is a good point, but it needs a lot of improvement when it comes to uploading files directly to the chat. I noticed that it is limited to a maximum of 300 lines of code per paste.
The “it” that you are referring to is likely not the API, as one would discover from the topic title, “please make a code interpreter API”. You are likely talking about ChatGPT, and pasting text into it; ChatGPT already has a code interpreter mode available to some users.
1530 lines of Python code, 12229 tokens plus system prompt and instructions, analyzed by the AI via the API:
The application is structured as a chatbot interface using the PySide2 library for the user interface. It consists of several classes and functions that work together to create the chatbot interface.
The main class is the `ChatbotApp` class, which is a subclass of `QMainWindow` and represents the main window of the application. It contains the UI elements such as the chat history area, input box, and various buttons for interacting with the chatbot.

The `ChatbotApp` class has functions for initializing the UI, handling user input, sending messages to the chatbot, and updating the chat history. It also includes functions for managing the chat history, such as adding, editing, and deleting messages.

Other classes include `JPushButton`, which is a subclass of `QPushButton` with custom styling and functionality; `AutoTextEdit`, which is a subclass of `QPlainTextEdit` with additional features for handling user input; and `MessageText`, which is a subclass of `QTextEdit` for displaying chat messages.

There are also utility classes such as `CustomVBoxLayout` and `CustomSplitterHandle` that provide custom layouts and handle resizing of UI elements.

The application uses signals and slots to handle user interactions and update the UI in response to events. It also includes error handling for API requests and provides options for changing the AI model used by the chatbot.
Overall, the application is designed to provide a user-friendly chatbot interface with features such as message editing, role switching, and customizable AI models.
(The analysis does miss some features and methods, and it also isn’t exactly the “describe the structure” response that was requested.)
The point of “code interpreter” is to allow the AI to write and execute its own code, to provide calculated answers and output beyond the ability of the AI language model intelligence. Or “keep rewriting and testing this function until it returns the right value” as one possible case.
And the point of a similar function via the API, which can be done with your own code now, is to extend that to your own pay-per-use chatbot without Alzheimer’s.
In ChatGPT with CI, you can actually upload your own code or class to the virtual environment’s datastore via the UI for the AI to execute, but the output returned must be in a function format the AI-written code shim can understand. Or the AI can print the whole file within the Python environment for its own analysis.
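The “keep rewriting and testing this function until it returns the right value” loop can be sketched as below. `ask_model` is a hypothetical stand-in for a chat-completion call; here it just returns a canned buggy attempt and then a fix, so the sketch runs offline:

```python
# Sketch of an AI self-correction loop: generate code, test it, feed the
# failure back, retry. `ask_model` is a hypothetical stand-in for an LLM call.
def ask_model(prompt, feedback=None):
    # A real implementation would call a chat-completion API here.
    # For illustration we "fix" the code once feedback arrives.
    if feedback is None:
        return "def add(a, b):\n    return a - b"   # first (buggy) attempt
    return "def add(a, b):\n    return a + b"       # corrected attempt

def passes(code):
    """Exec the candidate function and check it against a known test case."""
    ns = {}
    exec(code, ns)
    try:
        assert ns["add"](2, 3) == 5
        return True, ""
    except AssertionError:
        return False, "add(2, 3) should be 5"

feedback = None
for attempt in range(5):
    code = ask_model("write add(a, b)", feedback)
    ok, feedback = passes(code)
    if ok:
        break
print(f"succeeded on attempt {attempt + 1}")  # → succeeded on attempt 2
```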
I think that is partially true. Yes, indeed, it is a plugin, but perhaps based on a fine-tuned model that is not publicly available.
API is a vague word. It can be huge or small. I think a very small API would be very useful. Three things would be on my wish list:
- Uploading and downloading files
- Using the existing or similar ChatGPT API on the Code Interpreter
- Allowing the Code Interpreter to interact with other plugins.
All those things should be technically pretty easy. We can already upload/download files, including code, by hand. Well, it will not run your uploaded code directly, but it will generate equivalent code and run it.
Since it is a plugin, it should be able to route prompts to other plugins when necessary.
OpenAI states “Disabling internet access limits the functionality of our code sandbox, but we believe it’s the right initial tradeoff. Third-party plugins were designed as a safety-first method of connecting our models to the outside world.” But, enabling internet access for data only, not code, should be safe enough, shouldn’t it?
That’s not unpopular at all. I think more and more people are starting to realize exactly what you’ve stated: that LangChain adds an unnecessary layer of complexity that contributes close to zero value.
I understand your point of view about LangChain. It’s true that LLMs and the ecosystems around them have grown tremendously this year, and that many of the capabilities LangChain initially enabled are less necessary now. However, I think it’s important to remember that LangChain was a groundbreaking project that made advanced LLM techniques accessible to thousands of people. It was a major step forward in the development of natural language processing, and it paved the way for many of the advances we’re seeing today.
I also think it’s important to consider the use cases for LangChain. Even though LLMs have improved significantly in the past 6 months, there are still many situations where LangChain can be a valuable tool. For example, LangChain can be used to:
- Build applications that use multiple LLMs.
- Create applications that require a high degree of customization.
- Integrate LLMs with external data sources.
- Build applications that are robust and scalable.
Of course, whether or not LangChain is easy enough to implement or worth the effort to do so is certainly debatable. However, I believe that LangChain still has a lot to offer, and I’m excited to see how it’s used in the future.
I definitely agree that Code Interpreter should be upgraded to an API.
Yes, I would love access to Code Interpreter via the API. It would be great to allow users to upload an Excel file, ask questions about it, generate graphs, etc.
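The “upload a spreadsheet, ask questions, make graphs” workflow is roughly what Code Interpreter generates behind the scenes; a hand-written equivalent with pandas might look like the sketch below (the DataFrame is built inline so it runs without an actual file; in practice you’d call `pd.read_excel(path)` on the upload):

```python
import pandas as pd

# In a real app the user uploads an .xlsx and you'd call pd.read_excel(path);
# here we build an equivalent DataFrame inline so the sketch is self-contained.
df = pd.DataFrame({"region": ["N", "S", "N", "S"],
                   "sales": [100, 80, 120, 90]})

# "Ask a question about it": e.g. total sales per region.
summary = df.groupby("region")["sales"].sum()
print(summary)

# "Generate graphs": pandas plotting via matplotlib (left commented out so
# the sketch runs headless).
# ax = summary.plot.bar()
# ax.figure.savefig("sales.png")
```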
Isn’t this literally the concept of an API?
I have been tinkering with something similar GitHub - pgalko/BambooAI: A lightweight library that leverages Language Models (LLMs) to enable natural language interactions, allowing you to source and converse with data.
Very alpha at the moment, but the initial results are encouraging. It runs and executes code locally, with access to the internet.
Just brainstorming, but what if the code sandbox were on the customer’s side, via something like Docker containers?
1. That is a great idea. It’s also something you can already implement pretty much on your own if you were inclined.
2. I think OpenAI would be at least somewhat reluctant to facilitate that. There’s a not-insubstantial risk in allowing the AI the ability to run code in the wild. With a cloud sandbox they can at least ensure it’s locked down and can’t hurt anything.
Of course (1) is a great counter-argument to (2), but I think lowering the barrier to entry too much creates the potential for a PR nightmare for OpenAI through misuse or abuse.
But, who knows?
Honestly, I think a great intermediary step would be the ability to link it with an Azure Notebook. (I’d prefer Google Colab but, you know, Microsoft…)
But, sincerely, a great idea! Also, it reduces computational loads for OpenAI, so win-win?
Like you said, OpenAI would probably be reluctant to have a sandbox they don’t control.
With cloud providers, OpenAI can scale up on sandboxes on demand so they can just price the code interpreter API to reflect their cost.
Now, for an open-source implementation of the code interpreter, a Docker-like environment would absolutely make sense.
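The customer-side sandbox idea can be sketched as a plain `docker run` wrapper. This is a minimal illustration, not OpenAI’s implementation; it assumes a local Docker daemon and the `python:3.11-slim` image, and `build_sandbox_cmd` / `run_in_sandbox` are names invented for this sketch:

```python
import subprocess

def build_sandbox_cmd(code: str) -> list:
    """Docker invocation for running untrusted model-generated Python.

    `--network none` keeps the sandbox offline (mirroring OpenAI's stated
    tradeoff); `--rm` throws the container away afterwards.
    """
    return [
        "docker", "run", "--rm",
        "--network", "none",      # no internet inside the sandbox
        "--memory", "256m",       # cap resources
        "python:3.11-slim",
        "python", "-c", code,
    ]

def run_in_sandbox(code: str, timeout: int = 10) -> str:
    """Execute the code in a throwaway container and return its output."""
    result = subprocess.run(build_sandbox_cmd(code),
                            capture_output=True, text=True, timeout=timeout)
    return result.stdout or result.stderr
```

On a machine with Docker installed, `run_in_sandbox("print(2 + 2)")` would run the snippet in an isolated, offline container.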
I found some similar ideas:
Are you working at OpenAI?
Please tell us whether to wait for a CI API release or build on top of unofficial solutions…