A bunch of new announcements at Google Cloud Next. Here’s a supercut of the event.
Nothing too impressive, imho, but Google has been around for decades now and has a great foundation. Competition is heating up, which will accelerate AI development even more.
Your thoughts?
Google certainly has an advantage in owning its own search engine, doesn't it?
However, their announcements can sometimes come across as a bit overly dramatic.
I believe that progress in multimodal vision and text, as well as video generation, will continue.
ASICs like TPUs are useful for both training and inference, but I think they're limited in versatility.
I’m also concerned about whether data centers and power plants will be sufficient.
For sure! Haha.
Yeah, this is what I meant by their “foundation”… They’ve got a lot of infrastructure. However, they’re fighting with Microsoft/OpenAI, Amazon, Facebook, and others for available compute…
One of the big jokes at Dev Day 23 was “we need GPUs…” That said, Nvidia seems to be making a lot of progress, which might mean more compute for all - or at least for those with $$ to pay for it.
I want a super high-performance GPU that runs on solar power and fits in a smartphone🤣
Soon, likely!
The other advantage they have is all their (mostly) stable apps like Gmail, Docs, etc., where they can just hook an LLM into them for $xx+ extra per month.
With all these tools charging subscription fees, it’s going to be even harder to compete unless you can bundle everything for a low monthly price?
Google has already hit me up at least a dozen times asking me to spend money on their AI.
TPUs complement GPUs in meeting demand, and even when new high-performance GPUs are released, older GPUs continue to be used.
Rather than the gap widening so that only those with $$ can afford these resources, I see things heading in a direction where computational resources are distributed to many more people😃