Localhost Dev Plugins not working anymore?

Same here. Hoping localhost plugins come back! :heart:

2 Likes

Totally agree. A 25 msg/3 hr cap makes any dev testing impossible.

2 Likes

I agree. 25 msg/3 hr doesn’t make sense for plugin usage. If I could use plugins with 3.5, the limit would make more sense.

2 Likes

And here is another super frustrating thing: while testing the plugin after the endpoints finally worked, multiple responses within the same prompt somehow count against the limits. I was only on my second prompt when I gave it a task to execute that involves 4 steps; it made so many errors and kept explaining (and fixing) them, and at the end of one of those responses (still within the same prompt) I got the message that I had exceeded the cap!
How is that possible? Does it even make sense to count responses within the same prompt against the cap?
To my surprise, the 3.5 model also made far fewer errors executing exactly the same prompt. I was super excited to see how GPT-4 would do using the tools, given the "Sparks of AGI" paper, but my testing has been a bit underwhelming (or I had very high expectations :smiley: ). But that’s another story.

With that said, @logankilpatrick, I will stop working on further development of the plugin until the cap is lifted to a fair value and/or the 3.5 model, which was quite good at executing general tasks, is brought back. Honestly, this is becoming super frustrating: we do these things for free and for the benefit of the people, but we shouldn’t have to suffer during the process ourselves :slight_smile:
And I don’t think I’m asking for much here, because you folks simply took something that was working and replaced it with something that doesn’t work, at least under the current constraints. Taking down the 3.5 model with plugins on the day you made plugins Beta was kind of a shot in the foot, IMHO.

Thanks.

2 Likes

Totally agree with you, @ksaid.79.
Google is also releasing similar plugin features; maybe it’s worth trying that :slight_smile:

2 Likes

I’m probably going with Bard with plugins if I get access to them. Real-time data + Bard is probably going to be more useful for domain-specific matters than ChatGPT-4 alone. 25 messages/3 hours for plugins makes zero sense for day-to-day work.

I want to use it for day-to-day work, not just as a hobbyist.

1 Like

We are exploring many options here. In the short term, devs will have a higher usage cap for testing. The UI says 25 messages an hour, but the actual limit is higher; I just pinged the team to confirm the exact number. It will not be reflected in the UI, only on the server side.

8 Likes

And I am glad localhost is working; sorry for the delay in getting it fixed. The team was heads-down fighting all kinds of fires as we rolled out.

In general, we are at the very very beginning of the plugin journey, thanks for all the feedback here!

5 Likes

Is it supposed to be back for all users? I am still experiencing the issue.

1 Like

Thanks @logankilpatrick you are really helpful :smiley:

Thank you for your prompt response, @logankilpatrick.

@logankilpatrick
Hey there, I was wondering if the team followed up on what the maximum limit is for plugin developers. Also, do you think we’ll be able to use Code Interpreter, browsing, and plugins with 3.5, or will we have to switch to 4 for all the new beta features? I’m trying to get a rough understanding of the plan, because my team and I want to know how this will impact our development and use of plugins.

1 Like

Hey folks, currently still running into this issue. My network tab doesn’t show any request to localhost, and my service doesn’t show any connection logs either (other than the logs generated when initially installing the plugin).

I did validate that the actual OpenAPI spec content is coming from my service (by changing some fields and re-installing the plugin).

Should this be working? If I need to go with hosting the plugin I suppose I can, but I really wanted to develop locally first.

It’s working fine so far here.
What is the output when it tries to call your endpoint? The screenshot I shared above should give you an idea of how it used to fail.
Unfortunately, in the course of my testing I found lots of silent places where this can fail (assuming you are not running into the same issue), and OpenAI no longer provides a devtools panel for plugins, so we can’t debug more closely :frowning:

Just to make sure things are working from the platform side as expected, try a known-working version of your plugin, or use the todo plugin on GitHub as a test, and see whether the endpoint gets called.
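For anyone stuck at this step, a quick way to check whether the platform reaches your machine at all is a throwaway local server that logs every incoming path, including the manifest fetch. This is only a sketch: the port (3333) and the stub manifest contents are assumptions, so substitute whatever your real plugin uses. `/.well-known/ai-plugin.json` is the manifest path the plugin install flow fetches.

```python
import http.server
import json
import threading

PORT = 3333  # assumption: any free local port your manifest points at
requests_seen = []  # paths of every request the platform actually makes

class LoggingHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        requests_seen.append(self.path)
        print("request:", self.path)  # visible in your terminal
        if self.path == "/.well-known/ai-plugin.json":
            # Hypothetical stub manifest so the install step has something to fetch.
            body = json.dumps({"name_for_model": "debug_stub"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, fmt, *args):
        pass  # silence the default access log; we print paths ourselves

server = http.server.ThreadingHTTPServer(("127.0.0.1", PORT), LoggingHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
print(f"listening on http://127.0.0.1:{PORT}, watch for request lines")
```

If you never see a request line while installing or invoking the plugin, the problem is upstream of your service (browser, network, or the platform itself); if you see the manifest fetch but no API calls, the spec or the model’s tool selection is the likelier culprit.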

I was also having the same problem. I was trying it in Safari and couldn’t get it to work. When I switched to Chrome, it worked for me.

Yes, there is an issue with Safari: it blocks non-HTTPS requests to localhost. I will add this to the docs. I tried to find a workaround, but it’s not easy to get it working with Safari.

1 Like

I am on chrome, but still see the same issue.

Last I tested was on Friday, with the text-davinci-002-plugins model. Thought I’d mention it, even though it’s probably not related to the model you are using.