o1-mini failing the strawberry question?

Today I was messing with o1-mini in the API, so I decided to ask it the typical question of how many r's are in the word strawberry. I was expecting it to answer 3, but it somehow answered 2. So I added a "Huh" so the model might rethink its answer. This strategy worked for models like GPT-4o and Claude 3.5 Sonnet, but not for o1-mini, which answered "There is 1 r in the word strawberry" — something no other model has done. Why does this happen in the API but not in the ChatGPT app?
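For anyone who wants to reproduce this, here's a minimal sketch of the experiment using the `openai` Python SDK. It assumes you have an `OPENAI_API_KEY` set and API access to the `o1-mini` model; the exact prompt wording is my own, not necessarily what was used in the post. It also prints the ground-truth count so you can compare.

```python
import os

# Ground truth: count the r's in "strawberry" deterministically.
ground_truth = "strawberry".count("r")
print(ground_truth)  # → 3

# Only hit the API if a key is available (assumption: the standard
# OPENAI_API_KEY environment variable is used for auth).
if os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI

    client = OpenAI()
    resp = client.chat.completions.create(
        model="o1-mini",  # model name as given in the post
        messages=[
            {
                "role": "user",
                "content": "How many r's are in the word strawberry?",
            }
        ],
    )
    print(resp.choices[0].message.content)
```

Without a key it just prints the correct count; with one, you can compare the model's answer against it, and send a follow-up "Huh" turn to test the rethink trick.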