Topic | Replies | Views | Activity
Measuring hallucinations in a RAG pipeline | 2 | 710 | September 4, 2024
Medium Post: Grounding LLM's - Part 1 | 22 | 828 | August 23, 2024
How to prevent ChatGPT-4 from answering questions that are outside our Context | 3 | 208 | August 16, 2024
The model is mixing up the input data | 1 | 111 | July 14, 2024
Has regular gpt-4 model changed for the worse by any chance? | 11 | 1057 | June 17, 2024
Dramatic rise in hallucinations with Assistants v2 API / gpt-4-turbo | 3 | 710 | May 2, 2024
Using Assistant API Retrieval Hallucinates | 30 | 4299 | April 15, 2024
Are there any ideas to reduce hallucinations via community-moderated databases? | 1 | 623 | January 12, 2024
Prompting Assistants leads to more non-truths around capabilities | 0 | 427 | November 11, 2023
Gpt-4-1106-preview hallucinated about its own API | 0 | 836 | November 9, 2023
Hallucination in retrieval augmented chatbot (RAG) | 24 | 10629 | December 19, 2023
GPT-4 hallucinates and ignores my instructions in descriptions in plugin openapi spec | 5 | 1021 | September 8, 2023
When you get bored: Try 1-token prompted infinite hallucinations | 5 | 847 | January 21, 2024
Fine tuned model is hallucinating | 4 | 1276 | December 24, 2023
GPT-4 keeps lying instead of saying "I don't know"! | 6 | 6174 | December 19, 2023
ChatGPT hallucinating with very long plugin response | 4 | 1571 | December 16, 2023
Hallucinations Gone Wrong! | 7 | 1032 | January 21, 2024