Thoughts on Rabbit R1 Device tech? (Natural Language OS)

Their device is $200, and it costs about $100 to train a small LLM, so I see the hardware as a toy that is currently needed to use their service. But it is so cheap that it's no big deal. Eventually they will sell access to their service to makers of AR glasses or whatever, or sell their whole company to Google. The killer feature is the ability to train custom actions and then use voice commands to invoke those new capabilities.


I'll still be carrying my phone for its interface, if not its computation, so it doesn't make sense to have a second device with basically the same hardware.

Imagine something that's ~$20 and roughly the size of an AirTag, with a microphone and speaker, so you can quickly interface with your favorite AI without having to pull your phone out :laughing:

This is basically already a thing, but it's only as powerful as the smart products and sensors you have, and most people don't own enough "smart products" to really make use of it.


The newer iPhones have that Action Button (or whatever it's called); it's easy to imagine devices shipping with a dedicated assistant mode that's easier to access than ever before.


Yeah, something like that would be amazing. I just don't want to carry an iPhone in my chest pocket; it's a bit big for that :rofl:

1 Like

maybe we can shrink them back to 2010 sizes as the screen loses importance :laughing:

1 Like


Now, if they could turn the R1 into an update for my phone, I would spend $29.99 for sure. As it is, it's not a practical device, though it definitely has some interesting capabilities. The one thing that excites me is that it gives us an idea of the direction we're heading.


Many of you are completely missing their main point, which is that currently you have to use individual apps for individual tasks.

What this allows is authenticating with all these different apps and then making requests that call each of those apps for you, so it's all part of one request instead of, say, ten separate ones.
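To make the idea concrete, here is a minimal sketch of that "one request, many services" pattern. The service names, handlers, and intent phrases are all invented for illustration; this is not how Rabbit's system actually works, just the shape of the concept.

```python
# Hypothetical sketch: one spoken request fans out to several
# pre-authenticated services instead of many separate app sessions.
# All names below are invented for illustration.

def dim_lights(level):
    return f"lights dimmed to {level}%"

def play_music(playlist):
    return f"playing '{playlist}'"

def set_temperature(degrees):
    return f"thermostat set to {degrees}C"

# A single "intent" maps to multiple service calls.
INTENTS = {
    "movie night": [
        lambda: dim_lights(20),
        lambda: play_music("Film Scores"),
        lambda: set_temperature(21),
    ],
}

def handle_request(phrase):
    """Dispatch one request to every service it implies."""
    actions = INTENTS.get(phrase.lower(), [])
    return [action() for action in actions]

print(handle_request("Movie night"))
```

The point is the dispatch layer: the user issues one phrase, and the assistant holds the credentials and performs each underlying call, rather than the user opening three apps.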

I just went through this very same problem after Christmas, when I got all these things that were supposed to help make my house a smart home, but each required its own individual app to do individual things like dimming lights, turning on music, switching from one music streaming service to another, changing the room temperature, etc. I really don't want anything to do with apps anymore. I just want to issue voice commands.

1 Like

Wonder if this device could operate other devices? Like, could I tell it to play Netflix on my phone? I assume that could be set up with some amount of effort, but it would be nice if it could do it right out of the box.

Get Home Assistant; I've been using it for several years, and it solves that problem entirely.

It’s free & open source btw :wink:
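For anyone curious what that looks like in practice, Home Assistant exposes a REST API where you call a service like `light/turn_on` with a long-lived access token. A rough sketch using only the standard library; the host, token, and `entity_id` below are placeholders you'd replace with your own setup:

```python
# Sketch: calling Home Assistant's REST API to turn on a light.
# HA_URL, TOKEN, and the entity_id are placeholders for your own setup.
import json
import urllib.request

HA_URL = "http://homeassistant.local:8123"  # your Home Assistant host
TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"      # created under your HA user profile

def build_service_call(domain, service, entity_id):
    """Build a request for POST /api/services/<domain>/<service>."""
    data = json.dumps({"entity_id": entity_id}).encode()
    return urllib.request.Request(
        f"{HA_URL}/api/services/{domain}/{service}",
        data=data,
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_service_call("light", "turn_on", "light.living_room")
# urllib.request.urlopen(req)  # uncomment to actually send the call
print(req.full_url)
```

A voice assistant front end then just needs to map phrases to service calls like this one.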

Not sure what you have been reading. The main issue here is that it’s a separate portable device.

To be fair, this is and has been a difficult task with smart home technology, mainly because each company wants to use and control its own crappy technologies. But as @N2U has posted, open source has come through, and hopefully that and/or Matter will set the standards.

Trying to abstract all of these third-party services behind a single voice command is a near-impossible task; the result will be buggy, slow, unreliable, or simply not work.

I truly don't understand this. I like apps. They are designed specifically to deliver the information I need to make an educated decision. In a food delivery app I can see all the places nearby, whether they have discounts, their reviews, and their menus within a minute. The visual interface is powerful, intuitive, and provides a lot of information.

There's this strange obsession with making things dead simple and packed together, and I don't get it. Even if I had a voice assistant capable of all of these things, I would still want it to open the apps for me and navigate through them with me. But again, it would probably be quicker just to use the powerful visual interface than to blindly argue with a voice assistant.


It's a Chinese company bought by Baidu, Raven Tech.

1 Like

Hello, the Rabbit R1 is created by Rabbit Tech, not Raven Tech. (Pretty similar names, so it would be easy to get them confused.)

Jesse Lyu is the founder and CEO of both companies.

1 Like

You're forgetting that you can also use your SIM card in the Rabbit R1, so I want a Rabbit R1.

Why can’t Rabbit R1 just be an app on my iPhone? I already carry my iPhone, it’s got a camera, microphone, speaker, etc. I just need the Rabbit OS, running as an app. Is there already an app on iOS that mimics this?

1 Like

For on-device LLMs on your iPhone, I would look into Apple's recently released "Ferret" LLM. It's a small multimodal LLM that can run on devices like your iPhone.

So expect these small capable AI models to hit your devices that you already own, or perhaps come packaged with newer iPhones in the future.

If this takes off, I would expect various device manufacturers to deploy these small LLMs on their devices.

I am not sure an "app" version on your phone will suffice, because of the computing power required to run the model. It may only become available on phones that have hardware capable of running the models locally on the device.


Would be nice if it were built into the Apple Watch. I don't see it replacing my phone.

1 Like

This has to be the iPhone moment of the AI/LLM era.

In an old BBC TV series, 'Star Cops', the lead character has such a device, which he used to sort, collate, refine, and extrapolate information. I think the series was from the '80s.
The device in that series is now just about a reality in the Rabbit R1!
Its power in sorting and sifting information into a digestible amount for human consumption is amazing, and with such a small delay.
The other holy grails, interacting with tech in a natural human way and real-time translation into other languages, are just around the corner.
I think it's one to watch.