What’s old is new and what’s new maybe isn’t what we want
Joshua Topolsky: Rabbit’s R1 AI Companion Is Not the Future You Were Promised
Right now, every time you want to do something like that with your phone, you’re jumping into and out of individual apps; the r1 supposes a future where all your apps connect into a central piece of AI-driven software, disappear, and then are acted on in the background when you request a specific action. For instance, you might say something like “I need an Uber where I am right now and I want to go home” and then instead of opening the app and tapping in all your requests, the r1’s LAM will do it for you.
What they’re describing is screen scraping and automated UI testing, which have been around forever. Maybe there’s some LLM magic that makes this special, but it gives me real “we’re doing something old but calling it AI to get more attention” vibes.
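For a sense of how old this trick is: classic screen scraping is just walking another app’s UI markup and pulling out the bits you care about. Here’s a minimal sketch using only Python’s standard-library HTML parser, against hypothetical markup a ride-hailing app’s web view might render (the class names and prices are invented for illustration):

```python
from html.parser import HTMLParser

# Hypothetical markup a ride-hailing app's web view might render.
SAMPLE_HTML = """
<div class="ride-option"><span class="name">Standard</span><span class="price">$12.50</span></div>
<div class="ride-option"><span class="name">XL</span><span class="price">$19.80</span></div>
"""

class RideScraper(HTMLParser):
    """Collect (name, price) pairs by watching class attributes."""
    def __init__(self):
        super().__init__()
        self.current = None    # which field we're inside, if any
        self.rides = []        # list of (name, price) tuples
        self._pending_name = None

    def handle_starttag(self, tag, attrs):
        # Remember whether we just entered a "name" or "price" span.
        classes = dict(attrs).get("class", "")
        if tag == "span" and classes in ("name", "price"):
            self.current = classes

    def handle_data(self, data):
        if self.current == "name":
            self._pending_name = data.strip()
        elif self.current == "price":
            self.rides.append((self._pending_name, data.strip()))
        self.current = None

scraper = RideScraper()
scraper.feed(SAMPLE_HTML)
print(scraper.rides)  # [('Standard', '$12.50'), ('XL', '$19.80')]
```

None of this requires a language model; tools for driving and reading another program’s interface have existed for decades. The open question is whether an LLM makes the “decide what to click” part reliable, not whether the clicking itself is new.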
Soon after was this bit about the concept of one voice interface that lets you do everything:
This, on its face, is a brilliant idea, and most likely the future state of computing for the vast majority of users.
I once again must say I highly doubt that the future state of computing is primarily voice interfaces. I think screens are great and they’re not going anywhere, even if voice ends up doing more than it does today.