Feature request: inferring prompt from text

We are using natural language prompts as a proxy for the actual language relations that would generate the desired outputs. However, the token combination that most effectively elicits a given generation is unlikely to be human-readable. That is aside from the fact that prompt design is very labor-intensive.

It would be great to have a feature that infers a prompt for a given generation. This tool is an example of what I am talking about, but for BERT and RoBERTa:
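To make the idea concrete, here is a minimal sketch of the kind of search such a feature would run: greedily choosing, position by position, the vocabulary token that maximizes how strongly the model produces the target output. Everything below is hypothetical illustration only; the `score` function is a toy stand-in for querying a real model's log-probabilities, and `VOCAB`/`IDEAL` are made-up names, not part of any actual API.

```python
# Toy stand-in for a language model's scoring function: returns how
# strongly `prompt` elicits the target output. Here it just rewards
# overlap with a hidden "ideal" prompt; a real implementation would
# query the model's log-probability of the target generation instead.
# (All names here are hypothetical -- this only illustrates the search.)
IDEAL = ("translate", "to", "french")
VOCAB = ["translate", "summarize", "to", "from", "french", "english"]

def score(prompt, target):
    return sum(1 for a, b in zip(prompt, IDEAL) if a == b)

def infer_prompt(target, length=3):
    """Greedy coordinate search: for each position, keep the vocabulary
    token that maximizes the model score. Tools like the one referenced
    above guide this candidate selection with gradients rather than
    brute-force enumeration, but the loop structure is the same."""
    prompt = [VOCAB[0]] * length
    for pos in range(length):
        prompt[pos] = max(
            VOCAB,
            key=lambda tok: score(prompt[:pos] + [tok] + prompt[pos + 1:], target),
        )
    return prompt

print(infer_prompt("Bonjour"))  # recovers the hidden prompt in this toy setup
```

Note that the recovered prompt need not be grammatical text at all, which is exactly the point: the best-performing token sequence for a model may look nothing like a human-written instruction.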

Just to state the obvious: this is yet another situation in which hiding the model behind an API is holding back advancement of this kind of tooling. If the model were open, this tool would already exist, as it does for other models. Please open source the model. Please go back to what you once were!