Today I was messing around with o1-mini in the API and decided to ask it the typical question: how many r's are in the word "strawberry"? I expected it to answer 3, but it somehow answered 2. So I followed up with a "Huh" to get the model to rethink its answer, a strategy that has worked for models like GPT-4o and 3.5 Sonnet, but not for o1. o1 answered "There is 1 r in the word strawberry," which no other model has done. Why does this happen in the API but not in the ChatGPT app? A rough sketch of what I'm running is below.
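For anyone who wants to reproduce this, here's roughly the kind of call I'm making, as a minimal sketch with the official Python SDK. The exact prompt wording and the follow-up turn structure here are approximate, not my verbatim session:

```python
# Minimal sketch: ask o1-mini the strawberry question, then nudge with "Huh".
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [
    {"role": "user", "content": "How many r's are in the word strawberry?"}
]
first = client.chat.completions.create(model="o1-mini", messages=messages)
print(first.choices[0].message.content)  # I got "2" here

# Append the model's answer plus a "Huh" and ask again in the same thread.
messages.append({"role": "assistant", "content": first.choices[0].message.content})
messages.append({"role": "user", "content": "Huh"})
second = client.chat.completions.create(model="o1-mini", messages=messages)
print(second.choices[0].message.content)  # here it came back with "1 r"
```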