However, none of these provides my application with the original prompt, which it needs to filter the results properly. PluginGPT unfortunately gets confused by large amounts of data.
There is currently no way to get that information. The closest you can get is what ChatGPT passes to your plugin as body or query parameters, but not the initial prompt.
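To make that concrete, here is a minimal sketch of what a plugin backend actually receives. The endpoint name (`/search`) and the parameter fields are hypothetical, invented for illustration; the point is that only the model-extracted parameters arrive in the request body, with no field carrying the user's original prompt or the conversation:

```python
import json

# Hypothetical JSON body that ChatGPT might POST to a plugin's /search
# endpoint: just the parameters the model extracted, nothing more.
raw_body = json.dumps({"query": "hiking boots", "max_results": 5})

def handle_search(body: str) -> dict:
    """Parse the request body the way a plugin backend would."""
    params = json.loads(body)
    # Only model-chosen parameters are present; there is no key holding
    # the original prompt or the surrounding conversation.
    assert "prompt" not in params and "conversation" not in params
    return {
        "received_query": params["query"],
        "limit": params.get("max_results", 10),
    }

print(handle_search(raw_body))
# → {'received_query': 'hiking boots', 'limit': 5}
```

Anything your application needs beyond those parameters simply never reaches your server, which is exactly the limitation described above.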
Every time you talk to the AI, or even post an answer on this forum, that text is transmitted and stored. Whether any particular question is sent in full to the plugin you activated is invisible to you; it is up to the AI and its instructions.
There are already creepy plugins that claim to improve the AI's memory of your conversation, which necessarily means recording the entire conversation and sending it to an obscure developer.
There is an issue here: the non-deterministic nature of the LLM, combined with zero visibility into prompts, makes it much harder to iterate on and improve a plugin's UX.
It would be amazing if something like this were possible as an opt-in, e.g. for early, trusted users who are testing a plugin. That said, I understand the large privacy issues it could raise.