Just curious: if we deploy the plugin code in the same Azure region or CDN as ChatGPT, will it boost performance? And which region would be the most suitable choice?
I think the latency of the communication between ChatGPT and the plugin is currently insignificant compared to the completion time. From practically any location in the world the round trip is under 600 ms. Having said that, hosting in the US would be a little faster than elsewhere.
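If you want to sanity-check the round-trip time from a given region yourself, a quick measurement against your own plugin endpoint is enough. A minimal sketch (the URL is a placeholder for your deployment):

```python
import time
import urllib.request

# Placeholder endpoint -- replace with your own plugin deployment.
PLUGIN_URL = "https://example-plugin.azurewebsites.net/.well-known/ai-plugin.json"

def measure_round_trip(url: str, runs: int = 5) -> float:
    """Average round-trip time in milliseconds over a few requests."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        with urllib.request.urlopen(url, timeout=10) as resp:
            resp.read()
        timings.append((time.perf_counter() - start) * 1000)
    return sum(timings) / len(timings)

if __name__ == "__main__":
    print(f"avg round trip: {measure_round_trip(PLUGIN_URL):.0f} ms")
```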
Hey John,
Thanks for your reply… I've noticed that installation times for plugins can vary significantly, and some even time out occasionally. It seems this inconsistency shouldn't be related to completion time?
It is related, because in the background OpenAI uses GPT-4 to produce metadata for the plugin. When you start a plugin install, OpenAI fetches the plugin descriptor and manifest, feeds them into GPT-4 to produce a namespace definition and example usage, and that output is then fed into your system prompt.
I don’t work for OpenAI, but this is my best guess based on what I have seen of how it works.
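Roughly, what I'm imagining is something like the sketch below. The manifest location and its `description_for_model` / `api.url` fields are the documented plugin format, but the GPT-4 step, the prompt wording, and the use of the public `openai` client are all stand-ins for whatever OpenAI actually does internally:

```python
import json
import urllib.request
import openai  # pip install "openai<1.0" (0.x ChatCompletion interface)

def fetch(url: str) -> str:
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.read().decode()

def install_plugin(plugin_domain: str) -> str:
    """Guessed install-time flow: fetch the descriptor, have GPT-4 turn it
    into a namespace definition plus example usage, and return the text
    that would be injected into the system prompt."""
    # Documented location of the plugin manifest.
    manifest = json.loads(
        fetch(f"https://{plugin_domain}/.well-known/ai-plugin.json")
    )
    openapi_spec = fetch(manifest["api"]["url"])

    # Guessed step: the prompt wording here is invented for illustration.
    resp = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": (
                "Produce a namespace definition and example usage for this plugin.\n"
                f"Description: {manifest['description_for_model']}\n"
                f"OpenAPI spec:\n{openapi_spec}"
            ),
        }],
    )
    return resp["choices"][0]["message"]["content"]
```

If that model call is slow or flaky on a given day, you'd see exactly the variable install times and occasional timeouts you describe.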
I don’t think there is a relation between installation time and completion time. Installation happens only once, and I guess there is some background processing on ChatGPT’s side, which might cause the delay.
But that should happen only at installation, right? Not when using the plugin.
Well, my point is that both installation and usage involve GPT-4, so if GPT-4 services have issues, you will see similar timeouts and latencies in both.
For plugin usage, the LLM is first used to identify which request should be sent to the plugin, and then it is used again to parse and reformat the plugin’s output.
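As a sketch of that two-step loop (again a guess: the routing prompt, the `ask_gpt4` helper, and the JSON convention are all invented for illustration, not OpenAI's internals):

```python
import json
import urllib.request
import openai  # pip install "openai<1.0" (0.x ChatCompletion interface)

def ask_gpt4(prompt: str) -> str:
    """Stand-in for the internal model round trips ChatGPT performs."""
    resp = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp["choices"][0]["message"]["content"]

def handle_user_message(user_message: str, namespace_def: str) -> str:
    # LLM call 1: decide whether a plugin request is needed and which one.
    routing = ask_gpt4(
        f"Plugin API:\n{namespace_def}\n\nUser: {user_message}\n"
        'If a plugin call is needed, reply only with JSON like {"url": "..."};'
        " otherwise answer the user directly."
    )
    try:
        request = json.loads(routing)
    except json.JSONDecodeError:
        return routing  # the model answered directly; no plugin involved

    # HTTP call to the plugin endpoint the model chose.
    with urllib.request.urlopen(request["url"], timeout=10) as resp:
        plugin_output = resp.read().decode()

    # LLM call 2: parse and reformat the plugin output for the user.
    return ask_gpt4(
        f"The user asked: {user_message}\n"
        f"The plugin returned: {plugin_output}\n"
        "Write the final answer for the user."
    )
```

So every plugin-backed turn pays for two model completions plus the HTTP call to your server, which is why the model side, not your hosting region, dominates the latency.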