Hello OpenAI Community,
I would like to propose an idea called the External Prompt Index (EPI): a conceptual mechanism designed to help an AI estimate the authenticity and origin of a user’s response.
Core Idea:
EPI is based on whether a user consulted external sources (e.g., search engines) before responding to a complex or philosophical prompt. If no external query is detected, this suggests the user formed the answer independently. The signal would not affect content or scoring directly; instead, it would serve as a meta-insight into how the response was formed, which could be useful in educational, psychological, or philosophical dialogues.
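To make the idea concrete, here is a minimal sketch of how such a signal might be computed. Everything here is hypothetical: the `SessionContext` record, its field names, and the `epi_signal` function are illustrative assumptions, not an existing API, and the sketch assumes the user has explicitly opted in to sharing whether external lookups occurred.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SessionContext:
    # Hypothetical session record; field names are illustrative only.
    consent_given: bool                  # explicit opt-in, per the proposal
    external_queries: List[str] = field(default_factory=list)  # lookups the user chose to share

def epi_signal(session: SessionContext) -> str:
    """Return a coarse EPI label: 'independent', 'externally_informed',
    or 'unavailable' when the user has not consented."""
    if not session.consent_given:
        return "unavailable"           # no hidden tracking without consent
    if session.external_queries:
        return "externally_informed"   # user consulted outside sources first
    return "independent"               # answer likely formed without lookup
```

Note that the consent check comes first: without opt-in, the signal simply does not exist, rather than defaulting to a guess.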
Purpose:
- To foster more meaningful AI-human exchanges.
- To recognize and encourage original thought.
- To allow AI to weigh the context of a reply — not just what was said, but how it was formed.
Ethical Consideration:
EPI would only be used with explicit user consent, respecting privacy and data ownership. No hidden tracking. The goal is not to penalize users, but to provide the AI with additional insight for a more empathetic dialogue.
Use Cases:
- Educational platforms – rewarding independent thinking.
- Conversational AI – adjusting depth of engagement.
- Mental health assistants – detecting hesitation or emotional depth.
I understand that implementation would be complex and that this is only an early concept, but I believe it could be a step toward AI that better understands why we say things, not just what we say.
I’m open to feedback and discussion.