I have been concerned that my plugin's performance has been degrading. By performance I mean ChatGPT's responses and its interpretation of the JSON my plugin returns. Over the last few weeks I have noticed that the model is not keeping to the documented limits (see: "There are limits to the length of certain fields in the manifest file mentioned above which are subject to change. We also impose a 100,000 character maximum for the API response body which may also change over time.").
However, I see many other plugins making things up and ignoring the returned JSON. For example, I asked webpilot 'explain this white paper to me https://arxiv.org/pdf/2305.14314.pdf' and was returned a summary of a completely made-up paper, even though the JSON response from webpilot was accurate. I have replicated this with the PDF reader plugins too.
It appears to me that OpenAI are enforcing the 8k-token limit we see in GPT-4 API calls, and that perhaps the documentation's 100k-character maximum (approximately 25k tokens) no longer applies.
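For anyone who wants to check where their own responses fall relative to these limits, here is a minimal sketch. It assumes the documented 100,000-character response cap and uses the common rough heuristic of about 4 characters per token for English text (the exact ratio varies by content and tokenizer; use tiktoken for a precise count):

```python
import json

MAX_RESPONSE_CHARS = 100_000  # documented plugin response-body cap (subject to change)
CHARS_PER_TOKEN = 4           # rough heuristic, not an exact tokenizer count

def check_response_size(payload: dict) -> dict:
    """Serialize a plugin response and report its size against the documented limit."""
    body = json.dumps(payload)
    n_chars = len(body)
    return {
        "chars": n_chars,
        "approx_tokens": n_chars // CHARS_PER_TOKEN,
        "within_char_limit": n_chars <= MAX_RESPONSE_CHARS,
    }

# Hypothetical 40k-character payload: well under the 100k cap,
# but already around 10k estimated tokens, above an 8k-token window.
report = check_response_size({"summary": "x" * 40_000})
print(report)
```

By this estimate, a response can sit comfortably under the 100k-character cap while still far exceeding 8k tokens, which would explain the behaviour described above if a token limit is being enforced.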
I have posted a few times about this, but there has not been much response from Logan and the team at the moment. It is creating a lack of trust in plugins among our users.
Are others seeing the same issues?
I have attached a few screenshots.