Subject: Curious About LLM Accuracy & Methods of Detecting/Avoiding Confabulation - Also Humbly/Desperately Seeking Plugin Access Tips
Hey OpenAI Community,
I’m grateful to get involved in this group. Thank you for all of the incredible insights; I’ve learned a lot in my first few hours since joining.
I’ve got a question about how OpenAI handles tagging parts of text in LLM inputs and outputs. How do you distinguish grounded, verifiable knowledge from what the model merely predicts as the most likely next tokens? More generally, I’m curious how confabulation is detected and avoided. My hunch is that external knowledge bases are key here, and that there might be a way to tag LLM output to mark which spans are trustworthy and which need ground-truth verification. Any thoughts or resources on this would be super helpful!
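To make the question concrete, here's a toy sketch of the kind of tagging I'm imagining. Everything here is illustrative, not any real OpenAI API: the function name, the word-overlap heuristic, and the threshold are all my own assumptions, and a real system would presumably use embedding retrieval or an NLI model instead of bag-of-words overlap.

```python
# Hypothetical sketch: tag each sentence of a model's answer as "grounded"
# (supported by an external knowledge base) or "unverified". The overlap
# heuristic and all names are illustrative only.

def tag_answer(answer: str,
               knowledge_base: list[str],
               threshold: float = 0.6) -> list[tuple[str, str]]:
    """Split the answer into sentences and tag each one by word overlap
    with the knowledge base; a real system would use embeddings or NLI."""
    tagged = []
    for sentence in filter(None, (s.strip() for s in answer.split("."))):
        words = {w.lower().strip(",;") for w in sentence.split()}
        supported = any(
            len(words & {w.lower() for w in fact.split()}) / max(len(words), 1)
            >= threshold
            for fact in knowledge_base
        )
        tagged.append((sentence, "grounded" if supported else "unverified"))
    return tagged

kb = ["Paris is the capital of France"]
result = tag_answer(
    "Paris is the capital of France. The moon is made of cheese", kb
)
# result pairs each sentence with a "grounded" or "unverified" tag
```

Is something along these lines (but obviously far more sophisticated) how grounding is actually approached in practice?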
Also, I know I need to be patient, but I’m determined to get access to the plugins feature in a legitimate way. If you have any tips on moving up the waiting list faster, or even just some good vibes to send my way, I’d really appreciate it.
Thanks for being kind and helpful (this is just a cool thing to be a part of, and I’m excited to keep on learning).
Cheers,
Colin