For the last two months I’ve been unable to call my plugin. I either get a 429 error from within ChatGPT (on the ‘conversation’ request specifically, as seen in Chrome’s network inspector) or a stream error (in that case the ‘conversation’ request returns a 200, but shows no preview or response in the network inspector).
One time it showed a long sequence of requests whose responses seemed to correspond to the ‘using plugin’ preview window: each response was one token longer than the previous, as if it were spending its entire limited communication budget on filling out the plugin query preview. Unfortunately I didn’t copy that sequence and haven’t been able to replicate it.
On my side, my server is essentially the demo server at https://github.com/openai/plugins-quickstart/blob/main/main.py. It never logs these POSTs from the plugin call. It does correctly serve everything else: the logo image, the schema, and so on. I’m able to send a local query from Insomnia and the server fields it correctly.
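(For context, the check I mean is a catch-all request log on the server, roughly like the sketch below. This assumes the quickstart’s Quart setup from main.py; the hook itself is just illustrative, not part of my plugin logic.)

```python
# Illustrative only: a catch-all logger added to the quickstart-style Quart app,
# to confirm whether any POST from ChatGPT ever reaches the server at all.
import quart
import quart_cors
from quart import request

app = quart_cors.cors(quart.Quart(__name__), allow_origin="https://chat.openai.com")

@app.before_request
async def log_every_request():
    # Print method, path, caller address, and raw body for each incoming request.
    body = await request.get_data()
    print(f"{request.method} {request.path} from {request.remote_addr}: {body!r}")

# ... the quickstart's /todos, /logo.png, /.well-known/ai-plugin.json and
# /openapi.yaml routes go here, unchanged ...

if __name__ == "__main__":
    app.run(debug=True, host="0.0.0.0", port=5003)
```

Nothing fancy, just enough to see whether the traffic ever arrives.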
I am able to use the Wolfram plugin with no problems.
Does anyone know what to do about this? It’s completely blocking my development.