Can someone explain why OpenAI’s models are not “online” like Google’s? What I mean is that while Gemini has direct access to real-time information, ChatGPT does not and has a knowledge cutoff. Why can’t OpenAI do the same? Is it difficult (in terms of software or hardware), or does being online have some disadvantages?
The second question is: what are the possible reasons for the significant speed gap between the two models? Gemini returns answers instantly, while ChatGPT needs to think for a bit before it starts typing.
Gemini and Microsoft Copilot can offer real-time information because Google and Microsoft run their own search engines, so fresh search results can always be reflected in their responses.
OpenAI, on the other hand, does not have its own search engine and relies on Microsoft Bing, which I believe is the reason for the difference.
As for the speed difference, I suspect it’s because each company has optimized its language models for search to a different degree.
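The kind of integration described above is often called retrieval-augmented generation: the assistant runs a search first and feeds the results into the model’s prompt, so the answer can include information newer than the training cutoff. Here is a minimal sketch of the idea; the `search_web` stub and the prompt wording are invented for illustration, and a real assistant would call an actual search API at that step.

```python
# Hypothetical sketch of search "grounding" (retrieval-augmented generation).
# search_web() is a stand-in for a real search API call (e.g. to Bing).

def search_web(query: str) -> list[str]:
    # Stub: pretend these snippets came back from a live search engine.
    return [
        "Snippet 1: a fresh headline relevant to the query...",
        "Snippet 2: another recent result...",
    ]

def build_grounded_prompt(user_question: str) -> str:
    # Fetch fresh results, then prepend them so the model can draw on
    # information newer than its knowledge cutoff.
    snippets = search_web(user_question)
    context = "\n".join(f"- {s}" for s in snippets)
    return (
        "Use the search results below to answer.\n"
        f"Search results:\n{context}\n\n"
        f"Question: {user_question}"
    )

print(build_grounded_prompt("What happened in the news today?"))
```

Note that each grounded answer also requires an extra search round trip before generation even starts, which is one plausible (though unconfirmed) contributor to the latency differences discussed here.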
Thanks for the explanation. I guess that’s why Sama said they wanted to build their own search engine.