Today I was messing around with o1-mini in the API, and I asked it the classic question: how many r's are in the word "strawberry"? I expected it to answer 3, but it somehow answered 2. So I added a "Huh" as a follow-up, hoping the model would rethink its answer. That strategy has worked for models like GPT-4o and Claude 3.5 Sonnet, but not for o1: it answered "There is 1 r in the word strawberry", which no other model has done. Why does this happen in the API but not in the ChatGPT app?
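
For anyone who wants to reproduce this, here's a minimal sketch of the kind of call I mean, assuming the official `openai` Python SDK and an `OPENAI_API_KEY` in the environment (the exact prompt wording is just illustrative):

```python
# Minimal repro sketch; assumes the official openai Python SDK (pip install openai)
# and OPENAI_API_KEY set in the environment. Model name "o1-mini" as in the API.
from openai import OpenAI

client = OpenAI()

# First turn: the counting question.
response = client.chat.completions.create(
    model="o1-mini",
    messages=[
        {"role": "user", "content": "How many r's are in the word strawberry?"},
    ],
)
answer = response.choices[0].message.content
print(answer)

# Second turn: feed the model's own answer back and add "Huh" to see if it rethinks.
followup = client.chat.completions.create(
    model="o1-mini",
    messages=[
        {"role": "user", "content": "How many r's are in the word strawberry?"},
        {"role": "assistant", "content": answer},
        {"role": "user", "content": "Huh"},
    ],
)
print(followup.choices[0].message.content)
```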