Hi,
I was going through this DeepLearning.AI course, offered in association with OpenAI:
learn.deeplearning.ai/courses/chatgpt-prompt-eng/lesson/2/guidelines
In the video, it is stated that Boie is a real company, the product name is not real, and the response the model gives to the prompt is incorrect.
Code:

import openai  # requires the OpenAI Python SDK v1.x

client = openai.OpenAI()

def get_completion(prompt, model="gpt-3.5-turbo"):
    messages = [{"role": "user", "content": prompt}]
    response = client.chat.completions.create(
        model=model,
        messages=messages,
        temperature=0
    )
    return response.choices[0].message.content

prompt = f"""
Tell me about AeroGlide UltraSlim Smart Toothbrush by Boie
"""
response = get_completion(prompt)
print(response)
The tutor says that the response is incorrect because Boie is a real company but the product is not.
Questions:
- Why does the model 'invent' an incorrect answer? LLMs are an example of supervised learning, so if we never taught the model about this product, which does not exist, why did it produce an answer at all?
- What level of confidence can users of GPT have when using these models? As a user, I would always be in a dilemma about whether the model returned a correct answer or not.