Personally, I'm skeptical. Not about the product or its utility, but about the "LAM" itself. Not because it isn't possible, but because of the way they present the ability to connect apps.
In the demo, they say you log into apps via their websites, they don't store data, and they don't reverse engineer anything. Let's take them at face value for a second. How are they logging into Android or web apps on a different device, then? Where is the large action model coming into play?
They contradict themselves between their website and their presentation.
Video:
“with the rabbit hole web portal… the way we designed the infrastructure… we do not store any of your 3rd party credentials, we never save your username and password… the authentication happens on their respective apps”
Website:
“LAM completes these tasks for you on virtual environments in our cloud, from basic tasks such as booking a flight or reservation to complex ones like editing images on Photoshop or streaming music and movies”
If they are using virtual environments in the cloud, they are storing your credentials in at least one way. They need to be able to retrieve them and log in on your behalf. Even if what they hold is just an encrypted session token rather than your actual username and password, that token can still log in as you, which is storing a credential in every sense that matters.
Furthermore, look at the actual items in the demo: Spotify, Uber, Pizza Hut, a travel booking, etc. All of these services have one thing in common: they have conventional APIs.
So the Rabbit R1 appears to be just a heavily quantized 7B LLM running on a handheld, with access to conventional APIs via a LangChain- or plugins-style system. Nothing wrong with this, but it's not a "large action model" running most of your everyday apps.
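To be concrete about what I mean by a "plugins-style system": an LLM turns your speech into a structured action, and ordinary glue code dispatches it to a conventional REST API. Here's a minimal sketch of that pattern. Everything in it is a hypothetical stand-in (the function names, the endpoint, the token), not Rabbit's actual code:

```python
# Minimal sketch of the "intent -> conventional API" pattern.
# parse_intent, the endpoint, and the token are all hypothetical.
import json
import urllib.request

def parse_intent(transcript: str) -> dict:
    # A real system would prompt an LLM to emit structured JSON like this.
    # Hardcoded here to keep the sketch self-contained.
    return {"service": "pizza", "action": "order", "item": "large pepperoni"}

def dispatch(intent: dict) -> str:
    # Route the structured action to the matching conventional API.
    if intent["service"] == "pizza":
        payload = json.dumps({"item": intent["item"]}).encode()
        req = urllib.request.Request(
            "https://api.example-pizza.com/v1/orders",  # hypothetical endpoint
            data=payload,
            headers={
                "Authorization": "Bearer <stored-token>",  # note: a stored credential
                "Content-Type": "application/json",
            },
            method="POST",
        )
        # urllib.request.urlopen(req)  # commented out: the endpoint is made up
        return f"Ordered: {intent['item']}"
    return "No integration for that service"

print(dispatch(parse_intent("order me a large pepperoni pizza")))
```

Note that even this toy version needs a stored bearer token to act on your behalf, which is exactly the credentials point above.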
This is not to say they can't build a real "LAM" by the time they launch, but in its current form, any one of us could make a mobile app that does the same thing, down to the push-to-talk and the on-device AI.
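Even the push-to-talk front end is a few lines with off-the-shelf tools. This sketch uses the real speech_recognition Python package with local Whisper transcription; wiring it into the dispatcher sketched above is the hypothetical part:

```python
# Push-to-talk front end using off-the-shelf packages.
# pip install SpeechRecognition openai-whisper pyaudio
import speech_recognition as sr

recognizer = sr.Recognizer()

def push_to_talk() -> str:
    """Listen for one utterance and transcribe it locally with Whisper."""
    with sr.Microphone() as source:
        recognizer.adjust_for_ambient_noise(source)  # brief calibration
        audio = recognizer.listen(source)            # blocks until speech ends
    # recognize_whisper runs the Whisper model on-device, no cloud call
    return recognizer.recognize_whisper(audio, model="base")

if __name__ == "__main__":
    transcript = push_to_talk()
    print("Heard:", transcript)
    # From here, feed the transcript into an intent parser + API
    # dispatcher like the sketch above.
```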
The "teach" mode is the only time it looks like the LAM is actually being used, and I could've sworn another set of researchers came out with a similar AI a few weeks ago. If I remember correctly, that other AI shipped as a broad base model that you had to teach tasks yourself on your own PC before you could use it. Could they just be using that for this demo, and is that why it's the only time we see the LAM in action?
I don't necessarily think the R1 is a bad product, but I wish the marketing weren't deceptive. They have already sold four batches of this thing off these promises, and when people realize you have to teach it anything that doesn't already have an API integration, they're going to be disappointed. I hope things change before launch, but I am not convinced at the moment.