I explored different LLMs, but OpenAI's LLM responses are much faster than the others'. How do they manage to respond in such a short span of time?
Welcome to the forum, Shrimad.
Basically in two ways. The first is simply more compute: more GPUs in the servers. The second is streamlining the execution pipeline: bug fixes, new ways of achieving the same result in fewer steps, and new algorithms that get more out of the existing compute infrastructure.
Thanks! I'm curious about the algorithms behind this streaming approach. If you're comfortable sharing, could you tell me more about them?
I wish I knew! I’m not an OpenAI staff member, just a developer who likes the ecosystem they have created.
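
What I can say is that, from the client side, a big part of the *perceived* speed is streaming: the server sends tokens as they are generated instead of waiting for the whole completion to finish. OpenAI's internal serving algorithms aren't public, but here's a minimal sketch of consuming a streamed response with the public Python SDK. The model name and prompt are just placeholders, and you'd need an `OPENAI_API_KEY` in your environment.

```python
# Minimal streaming sketch using OpenAI's public Python SDK (pip install openai).
# Assumes OPENAI_API_KEY is set; model name and prompt are placeholders.
from openai import OpenAI

client = OpenAI()

# stream=True tells the server to send tokens as they are generated,
# so the first words appear almost immediately rather than after the
# entire completion has been produced.
stream = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "Explain token streaming in one paragraph."}],
    stream=True,
)

# Each chunk carries a small delta of the response text; print it as it arrives.
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()
```

The total generation time is roughly the same either way; streaming just moves the time-to-first-token way down, which is what makes the response *feel* fast.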