Thoughts on Rabbit R1 Device tech? (Natural Language OS)

Personally, I am skeptical. Not about the product or its utility, but about the use of the “LAM”. Not because it isn’t possible, but because of the way they present the ability to connect apps.

In the demo, they say you log into apps via their websites, that they don’t store data, and that they don’t reverse engineer anything. Let’s take them at face value for a second. How are they logging into Android or web apps on a different device, then? Where does the large action model come into play?

They contradict themselves on their website versus in their presentation.

Video:
“with the rabbit hole web portal… the way we designed the infrastructure… we do not store any of your 3rd party credentials, we never save your username and password… the authentication happens on their respective apps”

Website:
“LAM completes these tasks for you on virtual environments in our cloud, from basic tasks such as booking a flight or reservation to complex ones like editing images on Photoshop or streaming music and movies”

If they are using virtual environments in the cloud, they are storing your credentials in at least one way. They need to be able to retrieve them and log in, even if they are encrypted by the applications themselves.

Furthermore, consider the actual items in the demo: Spotify, Uber, Pizza Hut, a travel booking, etc. All these demo services have one thing in common: they have conventional APIs.

So the Rabbit R1 appears to be just a heavily quantized 7B LLM running on a handheld, with access to conventional APIs via a LangChain- or plugins-type system. Nothing wrong with this, but it’s not a “large action model” running most of your everyday apps.
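To make the claim concrete, here is a minimal sketch of the kind of plugins-style setup described above: a small model only needs to emit a structured call, and ordinary code routes it to a conventional API wrapper. Everything here is hypothetical — the tool names (`order_pizza`, `book_ride`) and the JSON shape are made up for illustration, not Rabbit’s actual internals.

```python
import json

# Hypothetical registry of conventional API wrappers. In a real system
# these would hit the services' actual ordering/booking APIs.
def order_pizza(size: str, topping: str) -> str:
    return f"ordered a {size} pizza with {topping}"

def book_ride(destination: str) -> str:
    return f"ride booked to {destination}"

TOOLS = {"order_pizza": order_pizza, "book_ride": book_ride}

def dispatch(llm_output: str) -> str:
    """Route the model's structured JSON output to the matching API wrapper."""
    call = json.loads(llm_output)
    fn = TOOLS[call["name"]]
    return fn(**call["arguments"])

# A quantized on-device model only has to produce JSON like this; the
# heavy lifting is plain API plumbing, not a "large action model".
print(dispatch('{"name": "book_ride", "arguments": {"destination": "airport"}}'))
```

Any of the demoed services with a public API fits this pattern, which is the point of the skepticism: nothing about it requires a new class of model.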

This is not to say that they can’t deliver a real “LAM” by the time they launch, but in its current form, any one of us could make a mobile app that does the same thing, down to the push-to-talk and the on-device AI.

The “teach” mode is the only time when it looks like the LAM is actually being used, and I could’ve sworn another set of researchers came out with a similar AI a few weeks ago. If I remember correctly, that other AI wasn’t a broad enough model out of the box, so you had to teach it yourself on your own PC before you could use it. Could they just be using that for this demo, and is that why it’s the only time we’re seeing it in action?

I don’t necessarily think the R1 is a bad product, but I wish the marketing weren’t deceptive. They have already sold four batches of this thing off these promises, and when people realize that you have to teach it anything that doesn’t already have an API integrated, they’re going to be disappointed. I hope things change before then, but I am not convinced at the moment.

3 Likes

I think everyone is making it more complicated than it actually is. They have their main cloud-based AI, and each Rabbit R1 will have its own virtual machine running in that cloud. When you go to their web portal, you’ll be seeing your own virtual machine’s menu running in the cloud. You log into the apps of your choosing, so the credentials aren’t saved anywhere other than in whatever app you are logging into. From that point on, when you ask your R1 to do something (with whichever app), it’ll still be logged in, because the virtual machine is still running. You should never have to log in again, and Rabbit won’t have your credentials. Now, if you were to log out all devices in said app, you’d probably have to go sign in on the Rabbit web portal again.

The cloud-based AI will always have everything it learned from you on your specific virtual machine running in the cloud. If for some reason they need to restart their main computers, they lose power, or they have a crash, all information in your virtual machine could be lost, requiring you to log back in and teach it everything all over again… unless they have some failsafe that creates an image of all the virtual machines every so often, in case of power loss or whatever. I could be completely wrong, but I doubt it; I’ve used VMs for years, and this makes the most sense as a way to have the security/privacy they claim to have.

The issue I see is that they probably don’t actually own whatever cloud it’s running on. They are probably paying another company to host this AI, which then connects to each virtual machine that they are also leasing. If they at least owned the VMs, they would at minimum be able to keep those going if the host randomly restarted their AI.

The other problem is that even if they do own it all, they will probably have issues when 10k people get their device the same week and keep flooding the AI with random requests, essentially creating a DDoS attack on themselves. Or on New Year’s Eve, when a million people pull out their Rabbit and tell it to send a happy New Year’s text, or 100,000 people in the same time zone ask it to display the countdown at the same moment, it’ll probably crash or have extreme latency issues for some.

Either way, it seems like an awesome idea, but releasing an app that does the same thing on Android and iPhone would make more sense. They probably decided against it because then Meta, Google, Amazon, Apple, Microsoft, and many more would all release their own proprietary version (copy) right after. So making it hardware-based gives them an opportunity to make more money in the short term: the device probably costs under $50 USD a unit to produce, which leaves them with $150 USD per unit for the software. There is no way this many people would have purchased an app version for their phone at even 25% of that price. I mean, if it were a downloadable app for $37 USD, would you try it?

1 Like

Frederik Pohl first conceived of the ubiquitous smart device in his “The Day after Forever” (1949).

It’s a story about a fireman who is killed in a fire and cryogenically stored. When his insurance claim earns enough interest to pay for his resuscitation, he awakens to a society several hundred years in the future.

There are more than likely others, but that’s one of the very first.

1 Like

My thoughts on the device are that its purpose is to move human data from the West to the East. There is no innovation aside from the data being offloaded from Apple and Google and loaded onto CCP-controlled servers.

Our government is for sale, as all governments are, and our data (and hence control over it and us) is also for sale. This is a weapon of mass destruction, one that the enemy installs, targets, and launches willingly.

I’m sure you’ve seen those YouTube videos where they demonstrate a ChatGPT-type tool that lets you put an entire list of requests, parameters, and constraints into a single request. That’s all I’m talking about.

If those requests would require four or five different apps, like in their example of booking a vacation, you make one request and everything is handled.

The big thing that would make it all work is that these companies start building APIs for Rabbit. The companies that build the apps could be paid a certain fee for each API call as part of their revenue.

Another example in your work life might be something like taking a data set and having Rabbit put it into Excel and build graphs, pivot tables, whatever you desire, and then maybe take the resulting data and load it into a Microsoft Access database so that the data can be shared and updated by all the staff, etc.

Even with all this said, as I stated above, it seems this would be easily doable as a single app on a cell phone, which controls all the other apps, rather than as a hardware device. I would agree with all of you raising that as a concern.

1 Like

This is a brutal chicken-and-egg problem. Why would companies build APIs for Rabbit unless it already has a massive userbase? But it can’t build one without those APIs already in place. Function-calling LLMs? Maybe?

I believe this is close to what OpenAI envisions with their GPTs. They captured a massive userbase and are now saying “Developers, look at our walled garden with a massive userbase, come create your products here, don’t mind the fire in the corner”.

I see this as also what companies like Zapier are trying to accomplish.

I can definitely see it.

If I knew what kind of food I wanted, the temperatures I like my house to be at and when I’ll be there, and when I wanted my car to warm up, I could tie something into my schedule app to perform these tasks for me (because there’s nothing to be searched and assumed).

Right?? This is really what I’m trying to see. I can see LLMs being built-in to phones for user intent/routing, maybe (which is really what Google Assistant, Siri does). I’m perfectly fine with running more complex services via a third-party cloud or even directing it to my home network.

Realistically, if phones can be upgraded to accommodate these LLMs and the LLMs are highly coveted, well, then phones will have them. Similar to cameras.

3 Likes

Didn’t want to start a whole new thread for this, but it’s related to the custom device vs smart phone…

Should OpenAI release a dedicated ChatGPT Smartphone?

  • Yes
  • No
  • Maybe?
0 voters

Lol why am I always late to the game when I find great discussions like this :rofl:.

I’m curious about this tech, but more or less because I’m the guy that would break it open so I could build my own and tinker with it. I did similar with Amazon Echo devices :face_with_hand_over_mouth:

I’m going to be perfectly real here, people are not getting creative enough with this stuff yet. This is like having an iPod in my pocket with my phone all over again. God that was an awkward annoying era.

If I had millions of dollars I’d make a GPT-based spellbook/notebook. And it would talk like Weiss from Nier. Now that is an extra “device” I’d keep around. Practical, multi-functional, and it has a reason for existing beyond “it interacts with your phone for you because reaching into your pocket to use your phone is soo much effort”.

4 Likes

This. Exactly! :joy: That feeling when one item scratched the other while pulling it out of the pocket. Ooof.

I think you’re right. It’s on the right track but not there yet. Like, maybe if I had glasses this would work? I think it would direct through the phone, anyways.

I am so down to becoming a Wizard. Especially if dueling is possible

1 Like

I’m skeptical about using audio only. I often prefer to write than speak, especially when an important action is at stake.

1 Like

RIGHT

Honest to God, I think we could genuinely get there really soon. You have no idea how much I’m actually dying to build a prototype for something like that on my own. I’m waiting until I can get my hands on Qualcomm’s new chip coming later this year to actually build a talking notebook. But the cool part is, apart from that, all the tools are actually already there to build something like this, just saying.

At this point, can someone with money please steal my idea so we can get this ball rolling? :rofl:

I wanna ask GPT to shoot fireballs out of my notebook dammit.

2 Likes

Not a smartphone, but another tech piece like what Rabbit did.

In their keynote, they said that you could press a button to open up a keyboard if you wanted to type instead.

Now that would be awesome.

1 Like

Some info on 500 ms response time and reducing latency… comments by the Microsoft CEO as well…

The phones will eventually build the same thing. Instead of us opening an app, we can ask Siri to book an Uber ride, and Siri will know I am referencing the Uber app. And if they have to integrate a LAM, then great. I see this as similar to the function-calling feature in the ChatGPT Assistants API; I have built a desktop app that converts a user prompt into structured input and calls the requested function, which then executes the action/API.
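The function-calling setup described in that paragraph can be sketched roughly like this, offline and with a mocked model response. The tool name (`book_uber_ride`) and its fields are made up for illustration; the schema follows the general JSON-schema style used by function-calling APIs, not any vendor’s exact contract.

```python
import json

# A hypothetical tool schema in the JSON-schema style that
# function-calling LLM APIs accept. Nothing here is a real integration.
RIDE_TOOL = {
    "type": "function",
    "function": {
        "name": "book_uber_ride",
        "description": "Book a ride via the Uber app on the user's behalf",
        "parameters": {
            "type": "object",
            "properties": {
                "pickup": {"type": "string"},
                "destination": {"type": "string"},
            },
            "required": ["pickup", "destination"],
        },
    },
}

def execute_tool_call(tool_call: dict) -> str:
    """Turn the model's structured tool call into an action (stubbed here)."""
    args = json.loads(tool_call["function"]["arguments"])
    if tool_call["function"]["name"] == "book_uber_ride":
        return f"booking ride from {args['pickup']} to {args['destination']}"
    raise ValueError("unknown tool")

# What a model's tool-call response might look like (mocked, no API call made):
mock_call = {
    "function": {
        "name": "book_uber_ride",
        "arguments": '{"pickup": "home", "destination": "SFO"}',
    }
}
print(execute_tool_call(mock_call))
```

The model’s only job is turning natural language into that small JSON payload; executing the action is conventional glue code, which is why a phone assistant could plausibly do the same.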

They do require you to sign up with your credentials. I am thinking they use an API integration with an OAuth flow and save your refresh token instead of your actual user credentials; the refresh token lets them continuously access the app on your behalf without needing to save your actual creds…
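For the curious, the refresh-token grant being described looks roughly like this. This is a sketch of the standard OAuth 2.0 flow, not anything confirmed about Rabbit: the token endpoint and client values are placeholders, and only the request body is built here (nothing is actually sent).

```python
from urllib.parse import urlencode

# Hypothetical token endpoint -- a real integration would use the
# third-party service's actual OAuth server.
TOKEN_URL = "https://auth.example.com/oauth/token"

def build_refresh_request(refresh_token: str, client_id: str,
                          client_secret: str) -> tuple:
    """Build the POST body for the OAuth 2.0 refresh-token grant.

    The service stores only the refresh token, never the user's password:
    whenever the access token expires, this grant mints a fresh one.
    """
    body = urlencode({
        "grant_type": "refresh_token",
        "refresh_token": refresh_token,
        "client_id": client_id,
        "client_secret": client_secret,
    })
    return TOKEN_URL, body

url, body = build_refresh_request("rt_abc123", "rabbit-client", "s3cret")
print(body)
```

If this is what they do, then “we never save your username and password” can be literally true while they still hold a long-lived credential that acts on your behalf.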

This can work until phones build it themselves, which is bound to happen…

1 Like

I bought one, mainly because I’m curious about what will happen when I introduce it into a Qubes OS ecosystem…

3 Likes

GOD. Seriously. Can investor-hungry companies just CHILL??

Put some RESPEK on THA NAME.

This reviewer is great though. Loved his review for Humane (tf name is that lol).

Seriously. All of these products are just feet in the door. “FIRST” comments on youtube. Provide no value, waste resources, clog up the actual insightful comments, and are generally just a waste of space.

“pls buy me because in the future it’ll actually be useful”

Yeah. Sure. There are so many products out there that ride the hype train, make lots of futuristic promises, get released, are shit, and yet somehow pop up a year later and manage to carry the hype on their back. Oh, wait… Tesla…? LOL. No Man’s Sky I think is THE ONLY EXCEPTION (maybe Cyberpunk?).

Thanks for sharing.

For a tangent, here’s an interesting article that I can resonate with. I don’t think it’ll happen in the next 10 years (unless hardware really does just keep getting better and better; maybe a mixture of augmented-reality glasses and then a smartphone as a remote?), but I can 100% see the future of these technologies being in glasses.

3 Likes

Yeah, that’s a great story. It’s so much better than Starfield too, lol…

If you peer a few more years forward, though, we’re headed toward post-capitalism, I think. People look at me like I’m less crazy now than they did a few years back when I say that, heh… Still sounds crazy, though!

2 Likes

Turns out that it’s just an Android app after all.

TL;DR: someone was able to pull the APK off the device and install it on their phone.

4 Likes

Is this product even ethical at this point? That is insane. Their own app store…inside of the app… wheezee with generative UI for each app as well… WWHEEEZEEE

/me dies of irony overflow

I mean, LOL. I just… I seriously can’t. I have nothing else at this point. This is gold.

This could have literally just been an app. I think I’d actually rather use the ChatGPT app than this

I swear to the Android lords. The reviewer made an interesting point about it having a touch screen… but not supporting touch for anything besides the keyboard.

High-level Executive Bob: “How can we differentiate this from a phone and not appear like an app?”
High-level Executive Steve: “What if we got rid of the touch screen and used a scroll wheel?”

Monay monay monaaayyyy

1 Like