Rabbit, Run
Smells like Updike in here.
Rabbit’s R1 is a square plastic box, designed by square plastic box designers Teenage Engineering, containing a screen, a camera, a mic, a speaker, some awkward physical controls, and an internet connection to Perplexity A.I.’s chatbot, which lets you ask questions and receive incorrect answers after an uncomfortably long wait.
As Marques Brownlee discovered in his barely-review “Rabbit R1: Barely Reviewable,” the screen is actually a touchscreen, but you’re (mostly) not allowed to use it as one. Brownlee listed some of the other ways the Rabbit R1 shares functionality with a rock:
“…it can't set alarms, can't set timers, it can't record videos, can't record photos, can't send emails, there's no calendar built in.”
It sounds kind of relaxing? You might think this is bad, but by A.I. gadget standards “it doesn’t work” is table stakes. The less it works now, the more blank space there is to imagine the amazing things it could do in the future. The Atlantic’s Caroline Mimbs Nyce raised a more immediate objection: “As with so many AI products, the R1 is fueled more by hype than by a persuasive use case. (So many of its functions could, after all, be done on a smartphone.)”
So why is the R1 confined to a dedicated device that amounts to the world’s most useless smartphone? Why isn’t it just an app? You know where this is going. This weekend in Android Authority, Mishaal Rahman wrote that he:
…got the Rabbit R1 launcher up and running again on a stock, unrooted Android device (a Xiaomi 13T Pro), thanks to help from a team of reverse engineers including ChromMob, EmilyLShepherd, marceld505, thel3l, and uwukko. We were able to go through the entire setup process as if our device was an actual Rabbit R1. Afterwards, we were able to talk to ChatGPT, use the Vision function to identify objects, play music from Spotify, and even record voice notes.
Yes, it’s just an app. Android Central’s Jerry Hildenbrand argued that just because you can install it on a stock Android phone, that doesn’t make it an app. But his argument also requires us to believe the “Pixel camera app” isn’t an app, so I’m not buying it. It’s an Android app. Here’s a video of it running on an Android phone, which it can do because it’s an Android app.
Building on top of the Android open source platform is a perfectly reasonable approach to develop a device like this, even if it’s kind of scammy to make a big deal in the marketing about an imaginary “Rabbit™ OS” and issue statements to the press claiming that “rabbit r1 is not an Android app.” No one who works in tech would expect it to be anything but an Android app, which makes it strange that Rabbit even tried to keep that a secret. I wonder if the company has anything else it would like to keep secret?
Building on research by Emily Shepherd, Ed Zitron documented his belief that it does:
In November 2021, a company called Cyber Manufacture Co raised $6 million for its “Next Generation NFT Project GAMA,” about a week after it incorporated with the Secretary of State of California.
…I’m talking about it because the CEO of GAMA was and is Jesse Lyu, the co-founder of Rabbit, the company that makes the purportedly AI-powered R1 device, and that Cyber Manufacture Co. is the same company as Rabbit Inc.
In excruciating, punishing detail, Zitron chronicles how Rabbit pivoted from the ruins of the NFT hype bubble directly into the A.I. hype bubble, led by a team with a history of overpromising and underdelivering who seem keen to keep their NFT-peddling days under wraps. Zitron also found some evidence that Rabbit may not currently be authorized to do business in California. He sums up by expressing his concern that:
Lyu — and, for that matter, Rabbit/Cyber Manufacture Co. — has failed to be transparent with its customers, the press, and the public writ large. And yet he demands an unbelievable amount of faith from users of the Rabbit R1, who must provide their usernames and passwords for any service they wish to connect to the device by remotely connecting into a virtual machine and typing them.
Yeah, about that remote connection stuff…
Ok, first let’s step back: so far we have learned that the Rabbit R1 doesn’t work on its own terms as an A.I. assistant. We’ve learned that far from needing to be an orange plastic teenage box, the R1 is an Android app that could, in theory, be running on the phone you already have. And we’ve learned that the company behind it seems shady at best.
But one of the biggest selling points of the R1, and the feature that occupies the largest share of this device’s Great Imaginary Future Where It Works (the GIFWIW, trend it) is the “Large Action Model” or LAM. Rabbit’s website claims that the LAM is:
…a model that learns how to use any software it runs into, takes actions, and gets better over time. It learns by studying how people use online interfaces, and then it is able to operate those interfaces in the same way that a human would. Importantly, it also understands natural-language inputs, so you can speak to it like you would speak to a personal assistant. You ask it to do something, and it takes care of it for you.
This is, how you say, bullshit. Zitron, above, linked to a video by Aaron White showing that the way you give Rabbit your login credentials for the four (4) services the LAM currently claims to support is to go to a website that (semi-secretly) opens a VNC session to a remote server, where the service is running in another browser, and log in there with your real username and password. “It feels like they’re doing auth hijacking on you?” says White, because that is exactly what they’re doing. Rabbit calls this the “Rabbit Hole.” Listen, I don’t know what they were thinking either.
“so, here's everything we did to achieve this in action:” — xyzeva (@xyz3va), May 7, 2024
This week some more reverse engineers demonstrated that when you go down the Rabbit Hole you’re also not connecting to a custom “Rabbit™ OS” there, but to regular Ubuntu Linux running on an AWS c6a.12xlarge instance, a cloud computer with 48 vCPUs and 96 GB of memory. It’s a big, powerful instance, and there’s no way it’s not shared with many, if not all, other R1 users. @xyz3va posted videos showing how she broke out of the VNC browser to a shell and ran Doom and Minecraft on the remote server.
This server is storing and running your login sessions so the R1 can try (and usually fail) to call you an Uber or play music on Spotify. How secure are these active logins you’re storing on this cloud server you don’t have any control over, and which penetration testers are already posting videos of themselves getting a shell and installing remote scripts on? It’s “isolated” and “well-contained” according to Rabbit. So who can say.
What @xyz3va can say, though, is that there’s no evidence to be found of a Large Action Model on this backend. The server appears to be running a web app testing package called Playwright, which software developers would use to automate the otherwise tedious task of opening an app and performing a set of predefined actions in it, to make sure that new code updates didn’t unexpectedly break something. If that’s all gibberish to you, just take my word for it as a former web app developer that this is a hilariously janky way to run a consumer-facing product.
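If you want a flavor of what that actually looks like, here’s a minimal sketch of the kind of thing Playwright is built for: launch a browser, click through a fixed, pre-scripted flow, and check that it still ends up where it should. (This is my own illustrative example, not anything recovered from Rabbit’s backend; the URL and selectors are made up.)

```typescript
// A minimal Playwright script: open a page, walk through a fixed
// sequence of steps, and verify the expected page loads. Hypothetical
// URL and selectors, purely for illustration.
import { chromium } from 'playwright';

async function main() {
  const browser = await chromium.launch({ headless: true });
  const page = await browser.newPage();

  // Pre-scripted "actions," exactly as a developer would write (or record) them.
  await page.goto('https://example.com/login');
  await page.fill('#username', 'test-user');
  await page.fill('#password', 'hunter2');
  await page.click('button[type="submit"]');

  // The whole point: confirm the flow still lands on the expected page.
  await page.waitForURL('**/dashboard');
  console.log('login flow still works');

  await browser.close();
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```

That’s it: a robot replaying steps somebody wrote down ahead of time. Keep that in mind for what comes next.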
But in contrast to all the relatively harmless questionable decisions Rabbit has tried to cover up, the company’s CTO Peiyuan Liao publicly took credit on X for exactly this janky system. We’re meant to believe that the LAM is somewhere else, “studying how people use online interfaces” and then generating the scripts that Playwright uses to mimic a person clicking buttons in a web browser.
But... why bother using A.I. for any of that? The whole idea of Playwright is that it can record a human using a web app and then play that human’s actions back. Why would you train an A.I. on a human using Uber and then use that A.I. to simulate a human using Uber to generate a Playwright file that Playwright can use to use Uber for you on a remote AWS instance, in the unlikely event a chatbot can puzzle out what you wanted it to do for you in the Uber app?
Why would anyone do any of that, rather than just open Uber on their own goddamn phone?
Also: Here’s Lily Herman’s Formula 1 newsletter Engine Failure about the Formula 1 content creator market but actually about all content creator markets. Liz wrote about Tim Apple’s gross crush fetish ad, which everybody hated. Casey Johnston on Anne Hathaway’s hips and Lyz Lenz on Anne Hathaway’s middle-aged fantasy. Luke Winkie went on the Creed reunion cruise and wondered “If ‘the worst band of the 1990s’ is suddenly good, does that mean all music is good now?” No, it just means everyone has lost the ability to distinguish good things from bad things. You know who evolved and changed his mind but never lost that ability? Steve Albini. Tom Scocca also wrote a good appreciation of Albini’s work, and Albini’s Grub Street diet was an all-time classic.
And Finally, Sabrina Imbler asks “Want To Eat This Snake? What If It Was Dead, Bleeding From The Mouth, And Covered In Poop? What Then?” Well I guess I’d really have to think about it, Sabrina.
Today’s Song: Chappell Roan, “Good Luck, Babe!”
Thanks Music Intern Sam, who would like to add that his radio show NO CHILL returns tonight at 9pm PT on the Chinatown stream at kchungradio.org.