I’ve been thinking about developing a built-in “Prompt Assist” feature that could be enabled to give users feedback on their inputs before submission. This could help refine queries and avoid common pitfalls like ambiguous prompts or unintended references.
Key features could include:
• Suggestions for clarifying and enhancing prompts based on common usage patterns.
• Real-time feedback on potential errors or ambiguities in user inputs.
• Enhancements to user experience by reducing confusion and improving the quality of interactions with GPT models.
Such a feature could make AI tools more accessible and user-friendly, particularly for new users or those engaging in complex queries. Does anyone have insights on the technical challenges or potential impact of implementing such technology?
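To make the idea more concrete, here is a minimal sketch of what a pre-submission check might look like. The rule names, word lists, and thresholds are illustrative assumptions of mine, not part of any existing API; a real implementation would likely use a model-based classifier rather than simple heuristics.

```python
# Hedged sketch of a "Prompt Assist" pre-submission check.
# All rules and thresholds below are illustrative assumptions.
import re
from dataclasses import dataclass

@dataclass
class Suggestion:
    issue: str
    advice: str

# Single words that often signal ambiguity when the prompt itself
# contains no clear referent for them.
VAGUE_REFERENCES = {"it", "this", "that", "these", "those", "them"}
VAGUE_QUALIFIERS = {"better", "nicer", "more", "less", "similar"}

def review_prompt(prompt: str) -> list[Suggestion]:
    """Return suggestions to show the user before the prompt is submitted."""
    suggestions: list[Suggestion] = []
    words = re.findall(r"[a-z']+", prompt.lower())

    if len(words) < 5:
        suggestions.append(Suggestion(
            issue="Very short prompt",
            advice="Consider adding detail about the subject, style, or goal."))

    refs = sorted({w for w in words if w in VAGUE_REFERENCES})
    if refs:
        suggestions.append(Suggestion(
            issue=f"Possible unresolved references: {', '.join(refs)}",
            advice="Name the object or concept explicitly; the model may not share your context."))

    quals = sorted({w for w in words if w in VAGUE_QUALIFIERS})
    if quals:
        suggestions.append(Suggestion(
            issue=f"Vague comparisons: {', '.join(quals)}",
            advice="State what you are comparing against, or give a concrete target."))

    return suggestions

if __name__ == "__main__":
    for s in review_prompt("Make it look better than that one"):
        print(f"- {s.issue}: {s.advice}")
```

Something this lightweight could run client-side with no extra latency, while a more capable version could call a small model to rewrite or annotate the prompt.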
=========
Additionally, I want to share what recently inspired this idea. I was thinking about the lack of clarity around the prompt for in-painting/out-painting: it doesn’t receive any context from the conversation, but the user might be unaware of that and use wording that is unclear or ambiguous on its own. This leads to unintended results, and the user may not understand why it is happening. Perhaps if the in-painting prompt were pre-scanned to detect this, the user experience could be improved, along with the tool’s accessibility.
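As a rough illustration of that pre-scan idea, the sketch below flags wording that relies on conversation history the in-painting step will not see. The phrase list is my own assumption for demonstration, not a documented behaviour of any image-editing API.

```python
# Hedged sketch: warn about conversation-dependent wording in an
# in-painting prompt that will be interpreted without chat history.
# The pattern list is an illustrative assumption.
import re

CONTEXT_DEPENDENT_PATTERNS = [
    r"\b(it|them|that one|this one)\b",
    r"\b(as before|like before|same as (above|earlier|last time))\b",
    r"\b(the previous|the earlier|the last) (image|version|prompt)\b",
]

def warn_if_context_dependent(inpaint_prompt: str) -> list[str]:
    """Return warnings for wording that assumes context the
    in-painting prompt will not actually receive."""
    warnings = []
    lowered = inpaint_prompt.lower()
    for pattern in CONTEXT_DEPENDENT_PATTERNS:
        for match in re.finditer(pattern, lowered):
            warnings.append(
                f"'{match.group(0)}' may be ambiguous: the in-painting prompt "
                "is interpreted on its own, without the conversation history.")
    return warnings

print(warn_if_context_dependent("Replace it with the same hat as before"))
```

Even a simple check like this could surface a gentle note such as “this prompt is interpreted without the conversation, so spell out what ‘it’ refers to,” which would cover the exact confusion described above.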