TL;DR: Drop your thoughts on what you think OpenAI's strategy will look like over the next months. Do you think that plugins like Browse, Code Interpreter, and third-party ones will become accessible via the API? Or will they try to build everything into their own UI and want devs to build plugins for that?
I am a dev building exploratory applications for companies. When deciding what to build, it's important to consider how we expect OpenAI's strategy to develop. For example, building a code interpreter with LangChain now is likely wasted time, because OpenAI's interpreter will soon be available and will be better than anything we build ourselves. Some thoughts:
- Some companies are exploring using the API to build their own internal bot with access to internal systems, say a "General Motors Bot". Building APIs connecting such a bot to MS software (so basically anything, be it Office, Azure, Power Platform) doesn't make much sense. Similarly, indexing internal information with Ada-002 embeddings might make some sense, but still raises the question of doing it in a custom bot vs. a ChatGPT plugin. Currently companies are reasonably cautious about connecting anything to ChatGPT plugins, but it's only a matter of months until a corporate version with data protection etc. is offered. Thus I don't think it makes sense to build any "General Assistant" for a corporation.
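To make the embedding-indexing idea concrete, here's a minimal sketch of indexing internal docs and retrieving the best match by cosine similarity. The `embed` function is a self-contained stub standing in for a call to OpenAI's `text-embedding-ada-002` endpoint; in a real build you'd swap in the actual API call, and the documents are made up for illustration.

```python
import math

def embed(text: str) -> list[float]:
    # Stub standing in for an Ada-002 API call. Here we just hash
    # character bigrams into a tiny unit vector so the example runs
    # offline; a real build would call the OpenAI embeddings endpoint.
    vec = [0.0] * 16
    for a, b in zip(text.lower(), text.lower()[1:]):
        vec[(ord(a) * 31 + ord(b)) % 16] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(u: list[float], v: list[float]) -> float:
    # Both vectors are unit-normalized, so the dot product is the cosine.
    return sum(a * b for a, b in zip(u, v))

# "Index" the internal documents once, then retrieve per query.
docs = [
    "Q3 vacation policy for engineering",
    "Azure deployment runbook for the billing service",
    "Office seating plan, building B",
]
index = [(d, embed(d)) for d in docs]

def retrieve(query: str) -> str:
    q = embed(query)
    return max(index, key=lambda pair: cosine(q, pair[1]))[0]
```

The open question is exactly where this loop lives: inside your own bot, or handed to ChatGPT as a retrieval plugin.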
- OpenAI and others (Google, MS) are competing for dominance in the personal assistant market. All players will try to build an "Everything Assistant". OpenAI might partner with MS there, only delivering the technology and letting MS's Copilot win this race, but I can well imagine that their alliance is only temporary and OpenAI will try to have their own platform competing against MS. I imagine that in three years every larger system will have a chat UI that can fully operate the entire system, and everyone will have a "main bot" that asks the system-specific bots to do certain things.
- This raises the question: do you think all the plugins in the ChatGPT web UI will become available via the API reasonably soon as well? For some software UI applications, I think it makes sense to build things now, but in some cases this needs a Python code interpreter or internet search. Does it make sense to develop this manually with, say, LangChain, or can I reasonably expect the API to offer this anyway?
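For clarity on what "develop this manually" means: the DIY version of a code-interpreter plugin is essentially a tool that executes model-generated Python and feeds the output back into the conversation (which is what a LangChain tool would wrap). A minimal sketch, where `fake_model` is a stand-in for a chat-completion call that has been prompted to answer with code:

```python
import contextlib
import io

def run_python(code: str) -> str:
    """Execute model-generated Python and capture its stdout.
    (A real deployment would sandbox this; bare exec on untrusted
    model output is unsafe outside a demo.)"""
    buf = io.StringIO()
    namespace: dict = {}
    with contextlib.redirect_stdout(buf):
        exec(code, namespace)
    return buf.getvalue().strip()

def fake_model(question: str) -> str:
    # Stand-in for an API call that emits Python code for the question.
    return "print(sum(i * i for i in range(1, 11)))"

question = "What is the sum of the squares of 1..10?"
code = fake_model(question)
answer = run_python(code)  # fed back to the model as the tool result
```

It's a few dozen lines to prototype, but making it safe and robust is real work, which is exactly why I suspect it's wasted effort if OpenAI ships the same thing natively in the API.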
I'd be curious to hear any thoughts on what you think the strategy of OpenAI (& MS/Google) looks like, and how it should influence our actions as devs!