I have an interesting idea, but I’m not sure whether it is allowed. If GPT allowed 2 to 3 conversations to run at the same time, we could assign different tasks to different conversations and then return their results to one conversation for centralized analysis, which would open up more possibilities.
If this behavior were allowed, we might be able to do a better job of linking plugins and assigning tasks to GPT.
Note that most users do not have the API activated, so this discussion is limited in scope to that case.
Also, I understand that OpenAI will not adopt this proposal easily, since it could put more load on the servers and make it easier for some people to abuse the system.
It’s currently possible to do that pretty easily with some clever descriptions within the same conversation and a single prompt.
GPT-4 is great at understanding intent and coming up with proper flows of its own, and it does follow the tasks assigned. It makes errors, but far fewer, and it is able to correct them overall, until you hit the context limit of course.
Didn’t get your point, sorry
Please elaborate.
Maybe you are referring to something similar to HuggingGPT or AutoGPT, which could still be implemented and integrated with the current setup fairly easily, without many lines of code.
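For instance, here is a rough sketch of what that orchestration could look like: several independent conversations (separate message histories), each given its own role, with their results routed back to a coordinator conversation. The model name, prompts, and `ask()` helper below are illustrative assumptions, and the real HuggingGPT/AutoGPT projects are considerably more elaborate than this.

```python
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4"  # assumed model name

def ask(history, user_message):
    """Append a user message to one conversation's history and return the reply."""
    history.append({"role": "user", "content": user_message})
    reply = client.chat.completions.create(model=MODEL, messages=history)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

# Separate "conversations", each with its own goal.
workers = {
    "research": [{"role": "system", "content": "You gather relevant facts for the task."}],
    "planning": [{"role": "system", "content": "You produce a step-by-step plan for the task."}],
}
coordinator = [{"role": "system", "content": "You merge partial results into one final answer."}]

task = "Organize a one-day workshop on prompt engineering."
partials = {name: ask(history, task) for name, history in workers.items()}

merge_request = "Combine these partial results into a single answer:\n\n" + "\n\n".join(
    f"[{name}]\n{text}" for name, text in partials.items()
)
print(ask(coordinator, merge_request))
```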
Thank you very much for replying to this question, which I had not thought through in depth.
My original idea was to imagine each GPT conversation as an employee. In real life, one person’s abilities are always limited, but working in a team makes you much more efficient and lets you create more interesting things.
In this scenario, if we set different goals for different conversations, load different functional plug-ins, and then let them communicate with each other, we may get more accurate and complete answers.
For example, GPT often won’t find its own mistakes; if we assign another conversation to check and correct the problems, we may make more interesting discoveries.
Of course, maybe AutoGPT already does this (I’m not too familiar with the project).
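Roughly, I imagine the “second conversation as a reviewer” part working something like the sketch below. This is only an illustration assuming API access; the model name, prompts, and `ask()` helper are placeholders, not how AutoGPT actually does it. One conversation drafts an answer, a separate conversation with its own history checks it, and the draft is revised from the critique.

```python
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4"  # assumed model name

def ask(history, message):
    """Send a message within one conversation history and record the reply."""
    history.append({"role": "user", "content": message})
    answer = client.chat.completions.create(model=MODEL, messages=history).choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

# Two independent conversations with different goals.
writer = [{"role": "system", "content": "Answer the user's question directly."}]
reviewer = [{"role": "system", "content": "Check answers for mistakes and suggest corrections."}]

question = "Explain in two sentences why the sky is blue."
draft = ask(writer, question)
critique = ask(reviewer, f"Question: {question}\n\nDraft answer:\n{draft}\n\nList any errors and how to fix them.")
revised = ask(writer, "A reviewer left these comments; please revise your answer accordingly:\n" + critique)
print(revised)
```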