The Verge - Reviews

Apple iPad Pro (2024) review: the best kind of overkill

Apple’s latest high-end tablet is a marvel of hardware design still in need of the software and accessories to really make it sing. But wow is it fun to use.

The new iPad Pro is a genuine achievement in tablet design. It’s the closest thing I’ve ever seen to the vision that a tablet should feel not like a computer but, rather, like a piece of glass in your hand. I’m honestly not sure how you’d shrink it more; the USB-C plug I use to charge the 13-inch Pro I’ve been testing is already thicker than the iPad itself. It’s a light, fast, remarkable machine.

But does that really count for much anymore? The iPad has been a hardware triumph for years, plenty fast and light and long-lasting for just about anything you could do with it. The problem has always been the software: by forcing the device to run a locked-down, oversimplified operating system, Apple has prevented this ultraportable and ultrapowerful machine from becoming the full-fledged computer so many users want it to be.

The way Apple seems to see it, the iPad’s appeal is greater than the sum of its parts. No, you can’t do some of the things you’d do on a MacBook. But you can hold it in your hands in bed. You can draw on the screen. You can play mobile games. Everyone at Apple speaks of the iPad’s “versatility” as its main selling point — the fact that it’s a jack-of-all-trades is a feature, not a bug. The hard part about trying to do everything, though, is that it’s really hard to do everything well.

Apple’s case for the iPad Pro seems to be that this is the device for the future. It has the processor, screen, accessories — everything you’ll need to be ready for the next decade of your computing life. Because pretty soon, AI will change everything, and you’ll be glad you had all the power to run it well. That might well be true! But none of it is real yet. And besides, the most important parts of that future will happen on the screen, not behind it.

This new iPad Pro feels, in many ways, like the finale of the 14-year history of the iPad, all the pieces finally in place. It also feels, as ever, like a futuristic device plagued by software stuck firmly in the past, one I’m not sure I’d recommend to most people.

I do love it, though.

A magic pane of glass

I’ve done almost all of my testing on one of the highest-end versions of the iPad Pro: a 13-inch space black model with 1TB of storage, 16GB of RAM, and a built-in cellular connection. That’s a $2,099 tablet right there. Add in the $129 Pencil Pro and the new $349 Magic Keyboard, and I’m reviewing $2,577 worth of iPad — the amount you would spend on a high-end laptop. You can get it cheaper, of course, though the Pro is never exactly inexpensive: the 11-inch model starts at $999 and comes with 256GB of storage and 8GB of RAM. (That entry-level storage option is double what it used to be, which is a nice change but still spendy.)

No matter which Pro you buy, though, you get access to the three most important new things about this model: the chip, the screen, and the design.

A photo of the two sizes of iPad Pro.
The 13-inch iPad Pro is enormous. But it’s much lighter now.

Let’s do the chip first, because it’s important and also slightly confusing. The Pro runs on the M4 processor, a brand-new chip Apple designed specifically to accommodate the Pro’s new screen and design, and it’s as fast as you’d hope. In my benchmark tests, the M4-powered Pro scored about 50 percent higher than the previous M2-powered model. In practice, it definitely doesn’t feel 50 percent faster, but it does feel faster.

Apps load and close a half-beat faster with the M4, even complex games run perfectly smoothly (I still can’t believe how good Call of Duty: Warzone Mobile looks on this device), and iMovie renders video noticeably more quickly than on the 11-inch M2 Pro I’ve been using for a couple of years. Individually, these aren’t earth-shattering upgrades, but particularly if you do a lot of intense photo and video work, or just love a long Warzone session, it’s a real performance bump. And in all my testing, I’ve never noticed the device getting hot in my hands. Occasionally very slightly warm, maybe, but that’s it.

The top-tier models of Pro — with 1TB or 2TB of storage — get the best M4, with an additional performance core in the CPU. Yay for more power, I guess, but I’d be astonished if there were any way to tell the difference in everyday use. In most cases, the iPad’s raw performance hasn’t been an issue for a very long time.

The M4’s main practical purpose is to power the new OLED display. Apple’s new “Tandem OLED” setup basically smashes two OLEDs together to get a sharper, brighter panel. Apple calls it Ultra Retina XDR, which is a ridiculous name, but whatever, it works beautifully. All of the traditional upsides of OLED are immediately apparent: since OLEDs control each pixel individually, you get much richer blacks, so the letterboxes above and below a video just disappear into the bezel, and photos look much more dynamic. Colors are incredibly vibrant — to the point of occasionally looking too contrasty and HDR-y to my eyes. The Pro’s peak brightness is significantly higher than the new Air’s, too, which is tough to pull off with an OLED.

A photo of the iPad Pro, showing a live view of a parking lot.
The Pro’s OLED display is a big step up from anything on an iPad before.

The only downside I’ve noticed in the display so far is that the OLED seems to pick up a little more glare and reflection than the Air’s LCD panel. When I’m using it outdoors, that has meant cranking the brightness higher than I’d like just to see everything on the screen. But that’s a tiny complaint; this screen looks fantastic — and I haven’t noticed the battery draining any faster at max brightness than before.

On the design front, the new Pro is more of a refinement than a redesign, but the difference is still pretty remarkable. The thinness is one thing — at 5.1mm thick for the 13-inch model and 5.3mm for the 11-inch, they’re the thinnest iPads yet — but the weight is what really gets me. The 13-inch Pro I’ve been testing weighs about a quarter of a pound less than last year’s model, which doesn’t sound like much but is very noticeable when I’m holding this big slab of glass in my hands on the couch. I’ve always thought the larger-size iPads were way too big to actually use, but I’ve been holding and using this one a lot. It’s so thin and light that I’ve worried about it being fragile. So far, it’s been sturdy.

A photo of the edge of an iPad Pro, showing the speaker and USB-C port.
LOOK HOW THIN THAT IS.

The only other big design change here is that Apple finally — finally — put the front-facing camera in the correct spot: in the middle of the long side of the iPad. This is very much a landscape-first device now, but that’s a good thing! The iPad, in general, absolutely is a landscape-first device. I’m not particularly impressed with the quality of the front-facing camera, but it’s fine, and it’s much more useful now.

Apple doesn’t seem to have sacrificed anything in the name of being thin and light. As a pure design and engineering exercise, it’s a home run.

Feature creep

There are basically two types of iPad users. (This is an oversimplification, but go with me.) The first type wants a simple way to send emails, read news, do the crossword, look at photos, and browse the web. For those people, the new iPad Pro is total overkill. Everything about it is a little better than the new Air or even the newly cheaper base iPad, but not so much better that I’d recommend splurging unless you really want that OLED screen. (If you do, please know: I get it. I’m with you.)

A photo of a person pinching the screen on an iPad Pro.
As ever, the biggest problem with the iPad is iPadOS.

The other type of iPad user does all those things but also has an iPad-specific feature or two that really matters to them. Musicians love it for turning pages of sheet music; students for handwriting notes; filmmakers for quickly reviewing footage; designers for showing interactive renders to clients. When Apple talks about how “versatile” the iPad is, I think this is what the company means. The iPad is not all things to all people, but it should have something for everyone. By putting ever more power into the device, Apple is trying to expand the number of those features that might appeal to you.

New features this year come mostly in the form of the Pencil Pro. It has a nifty new squeeze gesture that makes it quicker to bring up menus and commonly used tools. Apple’s also letting developers customize what happens when you squeeze in their apps, so expect some cool and deeply weird integrations soon. The new Barrel Roll feature is also going to be a big win for artists of all sorts, now that you can rotate your virtual brush or pen just by twisting the Pencil as you draw. (It works really well, though honestly, I’m woefully unqualified to review anything from an artist’s perspective. We’ll have more on that front soon.)

Same goes for the new Magic Keyboard, which is my personal favorite upgrade of the whole lot this year. When you dock the iPad in the attachment, it adds a full keyboard and a trackpad, floating the iPad above it — it’s the most laptop-like way to use an iPad. The new model is sturdier than the last, though it does still wobble a bit when you touch the iPad’s screen. The keyboard feels wonderful, right in step with a MacBook’s keys or the traditional Magic Keyboard. Now that there’s a row of function keys and a bigger trackpad, I can use the device for hours without ever picking my hands up. Best of all, it’s about 50 grams lighter than before (658g on the new model, according to my kitchen scale, compared to 710g on the last), which contributes to the overall smaller footprint of the new Pro.

An overhead photo of the iPad Pro’s Magic Keyboard.
The row of function keys makes the Pro a very functional laptop replacement.

In my own use, my iPad hardly ever leaves the keyboard case. I use the Magic Keyboard for journaling, emailing, and just as a stand while I’m cooking and watching shows. Having a better keyboard in a smaller package matters a lot to me. But it won’t matter to a lot of people, especially at $299 and up. With both of its accessories, Apple is making the Pro more appealing to the people who might already have a Pro and not doing much to win over those who don’t.

There is, I should at least note, the possibility that AI could change the whole equation. Maybe generative AI will make Photos so much better that everybody suddenly wants a big, beautiful screen. Maybe Siri will get so good that the iPad will become a smart home controller. Maybe the camera software will be so spectacular that you’ll use a tablet for all your video calls forever. Maybe, maybe, maybe. WWDC is in a few weeks, and I expect Apple to aggressively try to convince you that advances in AI make the iPad Pro more than just an iPad. If it can make the argument that a super-powerful, super-portable, jack-of-all-trades device is what you need in the future, I’ll probably be running to buy an iPad Pro.

For now, it’s just an iPad. The best iPad ever, I think — maybe even the best iPad you could reasonably ask for. But the story of the iPad — the “magic pane of glass,” as Apple is so fond of calling it — is actually all about software. The iPad’s software has let its hardware down for years. Apple has led us to believe that’s about to change, that this year’s WWDC will be the great turning point for AI and iPads and everything. We’ll see. Until then, the iPad Pro is almost too good for its own good.

The new Apple iPad Air is great — but it’s not the one to get

A photo of the iPad Air in a cafe setting.
The iPad Air is an excellent iPad — and that’s all. | Photo: David Pierce / The Verge

The iPad Pro is a beast. The two-year-old iPad is more compelling than ever. So what is the Air even for anymore?

The new iPad Air is very good. If you buy one, you’ll almost certainly like it. That’s it, that’s the review.

But is this the iPad you should buy? That’s a more interesting question. The iPad Air is a study in tradeoffs, even more so than before. Starting at $599, it’s not the cheapest iPad you can buy, nor is it the most impressive. It doesn’t support all the accessories, but it does support some of the accessories. It’s fast but not the fastest, thin but not the thinnest, powerful but not the powerful-est. It is Apple’s attempt to find the Goldilocks middle ground — the features that matter most to the most users and nothing else.

Outside of a couple of specific scenarios, I don’t think I’d tell you to buy this year’s iPad Air. Not because it’s not great — it is great! It’s just that for $250 less, you can get the base iPad, which is just about as good at every common iPad activity. The 10th-generation iPad is a couple of years old at this point, but it’s still an excellent device, especially after Apple lowered its price from $449 to $349. The iPad, not the iPad Air, is the right iPad for most people.

The new Air is pretty much last year’s iPad Pro in the body of last year’s iPad Air. The two sizes of the new Air are identical other than the screen: the 13-inch model is obviously larger in every dimension and about a third of a pound heavier than the 11-inch Air, which is exactly the same size and weight as the last-gen Air.

Both new Airs run the same M2 chip as the old Pro and, in my testing, run it practically identically — it’s a fast and reliable chip, though the new M4 processor in this year’s iPad Pro runs laps around it in benchmark tests. The screen is the same as last year’s Air, the battery life is the same, and the rear camera is the same — it’s just a spec bump on the same thing.

In my testing, there’s really only one change from the old Air that I’ve noticed: Apple moved the front-facing camera to the middle of the landscape edge, which means I can use it for video calls without looking like I’m always staring up and away from the screen. This is a great change, and one Apple should have made a long time ago. If you do want to buy an Air, I’d recommend this one over the previous generation just to get the camera in the right place.

Next to this year’s Pro, on the other hand, the Air definitely feels like a lesser model. The Pro has a much better OLED screen, that ultra-powerful M4 chip, full Thunderbolt support on the USB-C connector, more speakers, more storage in every price tier, and is lighter and smaller at both screen sizes. You pay handsomely for those upgrades, but they’re real upgrades.

A side photo of the new iPad Air.
The Air is thin, but it’s not that thin. | Photo: David Pierce / The Verge

But honestly? If you’re just looking for a way to send emails, browse the web, play games, and maybe make an iMovie or two, none of that will really change the way you use your iPad. An iPad is an iPad is an iPad, and until Apple either fixes a bunch of things or opens up the operating system — and I wouldn’t hold my breath on either one — you just aren’t going to get enough out of all that extra power to make it a must-have upgrade. You can do lots of things on an iPad, which is great! But the list is pretty much the same no matter which tablet you’re holding. The iPad Pro is the best iPad, no question about it, but it’s also a very expensive iPad. And it’s still an iPad.

There are only two Pro features that I truly missed in everyday use after switching to the iPad Air. The first is Face ID: the Air uses Touch ID in the top button to log you in to your device, which works well enough, but Face ID on the Pro makes it feel like you never have to log in at all. The second is the row of function keys on the Magic Keyboard attachment. On the 13-inch Air in particular, the Magic Keyboard is big and roomy and lovely to type on — which means I’ve missed having quick access to playback, brightness, and more.

A photo from overhead of the iPad Air’s Magic Keyboard.
Love the Magic Keyboard. Miss the function keys. | Photo: David Pierce / The Verge

In real use, the Air is much closer to the base iPad than the Pro, which puts it in an awkward tweener position. You do get the M2 chip instead of the A14 Bionic, and as Apple continues to push into on-device AI features, it’s possible that having a bonkers amount of processing power will become very useful. The M2 is certainly the more future-proof option, but the A14 Bionic is fully capable of handling a typical iPad workload.

Otherwise, the base iPad and the Air have the same cameras and camera placement, the same Touch ID system, and the same battery life. The iPad is a bit larger than the Air, but we’re talking hundredths of inches and pounds. Neither has a headphone jack, which remains dumb and bad. The Air’s screen is definitely better — it’s probably the most important spec upgrade over the regular iPad. But the regular iPad is good enough — just don’t look at them side by side. Ignorance is bliss; it’ll be fine.

The Air gets points for supporting the Pencil Pro, which the regular iPad doesn’t. The iPad gets points for having a function row on its Magic Keyboard Folio but loses some because it doesn’t feel as sturdy as the larger accessory. (Can I just say, by the way, that it makes exactly no sense which keyboards get which features on which iPads? No sense at all.) The iPad also comes in much nicer colors, though I love the look of the white Magic Keyboard, and that only comes with the Air.

A 10th-gen iPad in an Apple Magic Keyboard Folio.
The 10th-gen iPad is still a terrific (and newly cheap) tablet. | Photo by Dan Seifert / The Verge

Ultimately, I think I can answer the Air vs. iPad debate in two questions. Do you want a big screen? Do you use the crap out of your Apple Pencil? If so, buy the Air. The 13-inch model is the cheapest big screen in Apple’s lineup — a whopping $500 less than the comparable iPad Pro — and the 11-inch model is the least expensive way to get access to the Pencil Pro. Done and done.

Otherwise, buy the plain ol’ iPad, which is an already terrific tablet at a newly terrific price. There’s even a better way to upgrade: I’d urge you to spend $150 upgrading the base iPad to the cellular model rather than $250 upgrading to the Air. Having an iPad that is just always connected, without having to think about it, is a game-changer for tablet life.

My standard buying advice is to buy the best stuff you can afford and then keep it as long as possible. But I’m confident that even a two-year-old 10th-generation iPad is capable enough to do most things really well for a long time. So is the Air, obviously! But the bad news for Apple, and the good news for you, is that every iPad is a great iPad — including the cheapest one.

Rabbit R1 review: nothing to see here

Artificial intelligence might someday make technology easier to use and even do things on your behalf. All the Rabbit R1 does right now is make me tear my hair out.

“You’re holding a taco.”

My Rabbit R1 told me that the other day. I was sitting at a table at a cafe on the National Mall in Washington, DC, and I had just picked up this $199 orange rectangle of a gadget and pointed it at the food in my hand. With unwavering, absolute confidence, the R1 told me it was a taco.

It was a Dorito. A Cool Ranch Dorito, to be specific. I’d shown the R1 the bag just a few seconds earlier and asked about the calories. (The R1 got that bit right.) I moved the chip around and tried again — still taco. I could not convince this AI-powered gadget, theoretically at the cutting edge of a technological revolution, that I was holding a chip.

Over and over in my testing of the R1, I’ve run into moments like the taco encounter, where the whole thing just feels broken. It misidentified a red dog toy as a stress ball, then as a tomato, then as a red bell pepper that it assured me was totally safe to eat. I’d start playing a song on the R1, and then the device would stop responding but keep playing so that I couldn’t even pause it or turn the volume down.

For a while, the R1 couldn’t even tell the time or the weather. Rabbit finally fixed that with a software update on Tuesday, and the company promised many more updates to come — though now, instead of the weather being wrong by thousands of miles, it gives me the weather from about 15 miles away. I guess that counts for something.

Ever since the R1 debuted at CES, with a keynote filled with big promises and impressive demos, this device has been sold as a super-clever, ultra-helpful AI assistant. Rather than just answer ChatGPT-style questions, it was supposed to do just about everything your phone can do, only faster. A few months later, this device on my desk bears no resemblance to the one we were told about, that more than 100,000 people preordered based on promises and demos.

After reviewing the Humane AI Pin and finding it woefully unable to execute on its ambitions, I was excited about the R1. It’s cheaper, more whimsical, and less ambitious. After using the R1, I feel like Humane at least deserves credit for trying. The R1 is underwhelming, underpowered, and undercooked. It can’t do much of anything. It doesn’t even know what a taco looks like.

On the LAM

The most intriguing tech in the R1 is what Rabbit calls the “Large Action Model,” or LAM. Where a large language model, or LLM, is all about analyzing and creating text, the LAM is supposed to be about doing stuff. The model learns how an app works in order to be able to navigate it on your behalf. In a LAM-powered world, you’d use Photoshop just by saying “remove that lady from the background” or make a spreadsheet by telling your device to pull the last six quarters of earnings from the investor website.

There is basically no evidence of a LAM at work in the R1. The device currently connects to just four apps: Uber, DoorDash, Midjourney, and Spotify. You connect to them by opening up Rabbit’s web app, called Rabbithole, and logging in to each service individually. When you do, Rabbit opens a virtual browser inside the app and has you log in directly — you’re not authorizing Rabbit through some official DoorDash integration but literally typing your credentials into DoorDash’s website while Rabbit watches the process. Rabbit says it protects your credentials, but the whole thing just feels icky and insecure.

I logged in to them all anyway, for journalism. Except for Midjourney, which I never managed to get into because I couldn’t get past the CAPTCHA systems that obviously thought I was a bot. The connection doesn’t do much anyway: the R1 won’t show you the images or even send them to you. It’s just typing an image prompt and pressing enter.

A photo of the Rabbit R1.
The R1’s camera can see things... it’s just not great at knowing what they are.

I’d love to tell you how Uber and DoorDash work better once you’re logged in, but I never got either one to successfully do anything. Every time I pressed that side button on the R1 — which activates the microphone — and asked it to order food, it spat back a warning about how “DoorDash may take a while to load on RabbitOS” and then, a second later, told me there was an issue and to try again. (If you have to include that disclaimer, you probably haven’t finished your product.) Same thing for Uber — though I was occasionally able to at least get to the point where I said my starting and ending addresses loudly and in full before it failed. So far, Rabbit has gotten me zero rides and zero meals.

Spotify was the integration I was most interested in. I’ve used Spotify forever and was eager to try a dedicated device for listening to music and podcasts. I connected my Bluetooth headphones and dove in, but the Spotify connection is so hilariously inept that I gave up almost immediately. If I ask for specific songs or to just play songs by an artist, it mostly succeeds — though I do often get lullaby instrumental versions, covers, or other weirdness. When I say, “Play my Discover Weekly playlist,” it plays “Can You Discover?” by Discovery, which is apparently a song and band that exists but is definitely not what I’m looking for. When I ask for the Armchair Expert podcast, it plays “How Far I’ll Go” from the Moana soundtrack. Sometimes it plays a song called “Armchair Expert,” by the artist Voltorb.

Not only is this wrong — it’s actually dumber than I expected. If you go to Spotify and search “Discover Weekly” or “Armchair Expert,” the correct results show up first. So even if all Rabbit was doing was searching the app and clicking play for me — which is totally possible without AI and works great through the off-the-shelf automation software Rabbit is using for part of the process — it should still land on the right thing. The R1 mostly whiffs.

About a third of the time, I’ll ask the R1 to play something, it’ll pop up with a cheery confirmation — “Getting the music going now!” — and then nothing will happen. This happened in my testing across all of the R1’s features and reminded me a lot of the Humane AI Pin. You say something, and it thinks, thinks, thinks, and fails. No reason given. No noise letting you know. Just back to the bouncing logo homescreen as if everything’s A-okay.

The long and short of it is this: all the coolest, most ambitious, most interesting, and differentiating things about the R1 don’t work. They mostly don’t even exist. When I first got a demo of the device at CES, founder and CEO Jesse Lyu blamed the Wi-Fi for the fact that his R1 couldn’t do most of the things he’d just said it could do. Now I think the Wi-Fi might have been fine.

A photo of the Rabbit R1.
The R1 connects to Spotify but doesn’t do it very well.

Hot mic

Without the LAM, what you’re left with in the R1 is a voice assistant in a box. The smartest thing Rabbit did with the R1 was work with Perplexity, the AI search engine, so that the R1 can deliver more or less real-time information about news, sports scores, and more. If you view the R1 as a dedicated Perplexity machine, it’s not bad! Though Perplexity is still wrong a lot. When I asked whether the Celtics were playing one night, the R1 said no, the next game isn’t until April 29th — which was true, except that it was already the evening of April 29th and the game was well underway. Like with Humane, Rabbit is making a bet on AI systems all the way down, and until all those systems get better, none of them will work very well.

For basic things, the kinds of trivia and information you’d ask ChatGPT, the R1 does as well as anything else — which is to say, not that well. Sometimes it’s right, and sometimes it’s wrong. Sometimes it’s fast — at its best, it’s noticeably faster than the AI Pin — but sometimes it’s slow, or it just fails entirely. It’s helpful that the R1 has both a speaker and a screen, so you can listen to some responses and see others, and I liked being able to say “save that as a note” after a particularly long diatribe and have the whole thing dumped into the Rabbithole. There’s a handy note-taking and research device somewhere inside the R1, I suspect.

To that point, actually: my single favorite feature of the R1 is its voice recorder. You just press the button and say, “Start the voice recorder,” and it records your audio, summarizes it with AI, and dumps it into the Rabbithole. $200 is pretty steep for a voice recorder, but the R1’s mic is great, and I’ve been using it a bunch to record to-do lists, diary entries, and the like.

The most enjoyable time I spent with the R1 was running around the National Mall in Washington, DC, pointing the R1’s camera at a bunch of landmarks and asking it for information via the Vision feature. It did pretty well knowing which large president was which, when memorials were built, that sort of thing. You could almost use it as an AI tour guide. But if you’re pointing the camera at anything other than a globally known, constantly photographed structure, the results are all over the place. Sometimes, I would hold up a can of beer, and it would tell me it was Bud Light; other times, it would tell me it was just a colorful can. If I held up a can of shaving cream, it identified it correctly; if I covered the Barbasol logo, it identified it as deodorant or “sensitive skin spray,” whatever that is. It could never tell me how much things cost, whether they had good reviews, or help me buy them. Sometimes, it became really, really convinced my Dorito was a taco.

For the first few days of my testing, the battery life was truly disastrous. I’d kill the thing in an hour of use, and it would go from full to dead in six hours of sitting untouched on my desk. This week’s update improved the standby battery life substantially, but I can still basically watch the numbers tick down as I play music or ask questions. This’ll die way before your phone does.

A photo of the Rabbit R1.
AI gadgets are coming — but they’re not great. | Photo: David Pierce / The Verge

A vision in orange

Just for fun, let’s ratchet the R1’s ambitions all the way down. Past “The Future of Computing,” past “Cool Device for ChatGPT,” and even past “Useful For Any Purpose At All.” It’s not even a gadget anymore, just a $200 desk ornament slash fidget toy. In that light, there is something decidedly different — and almost delightful — about the R1. A rectangle three inches tall and wide by a half-inch deep, its plastic body feels smooth and nice in my hand. The orange color is loud and bold and stands out in the sea of black and white gadgets. The plasticky case picks up fingerprints easily, but I really like the way it looks.

I also like the combination of features here. The press-to-talk button is a good thing, giving you a physical way to know when it’s listening. The screen / speaker combo is the right one because sometimes I want to hear the temperature and, other times, I want to see the forecast. I even like that the R1 has a scroll wheel, which is utterly superfluous but fun to mess around with.

As I’ve been testing the R1, I’ve been trying to decide whether Humane’s approach or Rabbit’s has a better chance as AI improves. (Right now, it’s easy: don’t buy either one.) In the near term, I’d probably bet on Rabbit — Humane’s wearable and screen-free approach is so much more ambitious, and solving its thermal issues and interface challenges will be tricky. Rabbit is so much simpler an idea that it ought to be simpler to improve.

But where Humane is trying to build an entirely new category and is building enough features to maybe actually one day be a primary device, Rabbit is on an inevitable collision course with your smartphone. You know, the other handheld device in your pocket that is practically guaranteed to get a giant infusion of AI this year? The AI Pin is a wearable trying to keep your hands out of your pockets and your eyes off a screen. The R1 is just a worse and less functional version of your smartphone — as some folks have discovered, the device is basically just an Android phone with a custom launcher and only one app, and there’s nothing about the device itself that makes it worth grabbing over your phone.

Lyu and the Rabbit team have been saying since the beginning that this is only the very beginning of the Rabbit journey and that they know there’s a lot of work left to do both for the R1 and for the AI industry as a whole. They’ve also been saying that the only way for things to get better is for people to use the products, which makes the R1 sound like an intentional bait-and-switch to get thousands of people to pay money to beta-test a product. That feels cruel. And $199 for this thing feels like a waste of money.

AI is moving fast, so maybe in six months, all these gadgets will be great and I’ll tell you to go buy them. But I’m quickly running out of hope for that and for the whole idea of dedicated AI hardware. I suspect we’re likely to see a slew of new ideas about how to interact with the AI on your phone, whether it’s headphones with better microphones or smartwatches that can show you the readout from ChatGPT. The Meta Smart Glasses are doing a really good job of extending your smartphone’s capabilities with new inputs and outputs, and I hope we see more devices like that. But until the hardware, software, and AI all get better and more differentiated, I just don’t think we’re getting better than smartphones. The AI gadget revolution might not stand a chance. The Rabbit R1 sure doesn’t.

Photography by David Pierce / The Verge

The internet really is a series of tubes

An image of a cable repair ship, on top of The Vergecast logo.
Photo by Go Takayama for The Verge

Hundreds of cables. Hundreds of thousands of miles. The internet runs, in vastly more ways than most of us realize, through a series of garden-hose-sized tubes on the ocean floor. Without those tubes, the modern world sort of collapses. And the folks responsible for keeping them functioning have bigger, harder, stranger jobs than you might think.

On this episode of The Vergecast, we talk to The Verge’s Josh Dzieza, who has been reporting on the undersea cable world and just published a feature about some of the folks who keep it running. It’s a story worthy of a high-seas action movie, and it’s all about cables.

Then, we chat with The Verge’s Tom Warren and Joanna Nelius about the new generation of PCs that Microsoft and others seem to think are going to be huge improvements over anything we’ve seen before. Can Qualcomm finally make the PC chip we’ve been waiting for? Is this really, actually, finally the year of Windows on Arm? What the heck is an AI PC? We cover all of that and more.

Lastly, Alex Cranz joins to help us answer a hotline question about e-readers. Because it’s always interesting times in e-readers.

If you want to know more about everything we discuss in this episode, here are a few links to get you started, beginning with Josh’s story on undersea cables:

And on AI PCs:

And on e-readers:

Humane AI Pin review: not even close

For $699 and $24 a month, this wearable computer promises to free you from your smartphone. There’s only one problem: it just doesn’t work.

The idea behind the Humane AI Pin is a simple one: it’s a phone without a screen. Instead of asking you to open apps and tap on a keyboard, this little wearable abstracts everything away behind an AI assistant and an operating system Humane calls CosmOS. Want to make a phone call, send a text message, calculate the tip, write something down, or learn the population of Copenhagen? Just ask the AI Pin. It uses a cellular connection (only through T-Mobile and, annoyingly, not connected to your existing number) to be online all the time and a network of AI models to try to answer your questions and execute your commands. It’s not just an app; it’s all the apps.

Humane has spent the last year making the case that the AI Pin is the beginning of a post-smartphone future in which we spend less time with our heads and minds buried in the screens of our phones and more time back in the real world. How that might work, whether that’s something we want, and whether it’s even possible feel like fundamental questions for the future of our relationship with technology.

I came into this review with two big questions about the AI Pin. The first is the big-picture one: is this thing… anything? In just shy of two weeks of testing, I’ve come to realize that there are, in fact, a lot of things for which my phone actually sucks. Often, all I want to do is check the time or write something down or text my wife, and I end up sucked in by TikTok or my email or whatever unwanted notification is sitting there on my screen. Plus, have you ever thought about how often your hands are occupied with groceries / clothes / leashes / children / steering wheels, and how annoying / unsafe it is to try to balance your phone at the same time? I’ve learned I do lots of things on my phone that I might like to do somewhere else. So, yeah, this is something. Maybe something big. AI models aren’t good enough to handle everything yet, but I’ve seen enough glimmers of what’s coming that I’m optimistic about the future.

That raises the second question: should you buy this thing? That one’s easy. Nope. Nuh-uh. No way. The AI Pin is an interesting idea that is so thoroughly unfinished and so totally broken in so many unacceptable ways that I can’t think of anyone to whom I’d recommend spending the $699 for the device and the $24 monthly subscription.

“AI Pin and its AI OS, Cosmos, are about beginning the story of ambient computing,” Humane’s co-founders, Imran Chaudhri and Bethany Bongiorno, told me in a statement after I described some of the issues I’ve had with the AI Pin. “Today marks not the first chapter, but the first page. We have an ambitious roadmap with software refinements, new features, additional partnerships, and our SDK. All of this will enable your AI Pin to become smarter and more powerful over time. Our vision is for Cosmos to eventually exist in many different devices and form factors, to unlock new ways to interact with all of your devices.”

As the overall state of AI improves, the AI Pin will probably get better, and I’m bullish on AI’s long-term ability to do a lot of fiddly things on our behalf. But there are too many basic things it can’t do, too many things it doesn’t do well enough, and too many things it does well but only sometimes that I’m hard-pressed to name a single thing it’s genuinely good at. None of this — not the hardware, not the software, not even GPT-4 — is ready yet.

Front and center

As a piece of gear, the AI Pin is actually pretty impressive. It’s smaller than you might think: roughly the size of four quarters laid in a square, or half the size of a pack of Orbit gum. It’s not heavy (about 55 grams, according to my scale — roughly the same as two AA batteries or the key fob to my car), but it’s definitely solid, made of aluminum and designed to survive falls or even the occasional trip through the washing machine. My review unit is white, but the AI Pin also comes in black. Both look and feel much better than your average first-gen hardware product.

A photo of a person tapping on a Humane AI Pin.
The AI Pin’s intended spot is high on your chest, where either hand can reach it.

The bar here is high, though, because of how you’re meant to use the AI Pin. In all of Humane’s demos and marketing, the AI Pin sits in the same place: on the right or left side of your chest, right below your collarbone, attached via a magnet that also acts as a “battery booster.” It’s a pin on a lapel. (It’s a little fiddly to get situated, but the magnet does hold through all but the thickest of clothes.) You don’t have to use it this way — you can hold it in your hand or even talk to it while it’s in its desk charger — but the AI Pin’s built-in microphones are designed to hear you best from that angle; the slightly downward-facing camera sees best from there, and the upward-firing speakers work best in that spot.

The AI Pin is also just incredibly unsubtle. When you stand in front of a building, tapping your chest and nattering away to yourself, people will notice. And everything gets in the way, too. My backpack straps rubbed against it, and my messenger bag went right over it. Both my son and my dog have accidentally set the AI Pin off while climbing on top of me. If you buy this thing, I recommend also buying the $50 clip that makes it easier to attach to a waistband or a bag strap, where I actually prefer to keep it.

A photo of the Humane AI Pin with several accessories.
Humane makes a bunch of accessories for the AI Pin — that black clip is particularly handy.

The upside of sticking it on your chest is that you can reach it with either hand (I call the moves “The Pledge of Allegiance” and “The Backpack Strap Grab”), and even a spare pinkie is enough to wake it up. Anytime you want to talk to the AI Pin, you press and hold on its front touchpad — it’s not listening for a wake word — and speak your questions or commands. Practically anything the AI Pin can do, you can ask for. It can answer basic ChatGPT-style questions, make phone calls, snap photos, send text messages, tell you what’s nearby, and more. You can also do a few things just by tapping the touchpad, like keyboard shortcuts on a computer: double-tap with two fingers to take a photo; double-tap and hold with two fingers to take a video.

Having the thing right there did make me use it more, sometimes for things I wouldn’t have bothered to pull out my phone to do. It feels a little like the early days of Alexa and Siri a decade ago, when you discovered that saying “set a timer for 10 minutes” beats opening your phone’s Clock app by a mile — and you can do it with sticky fingers, too.

Except, oh wait, the AI Pin can’t set an alarm or a timer. It can’t add things to your calendar, either, or tell you what’s already there. You can create notes and lists — these show up in the Humane Center web app, which is also where you connect the device to your contacts and review your uploaded photos — but if you try to add something to a list later, it’ll almost always fail for some reason. The problem with so many voice assistants is that they can’t do much — and the AI Pin can do even less.

Humane has said it’s working on a lot of this functionality, and it’s surely true that a lot of this will get better over time as AI models and interfaces get better. Bongiorno tells me there’s a huge software update coming this summer that will add timers, calendar access, more ways to use the touchpad, and much more. But at The Verge, our longstanding rule is that we review what’s in the box, never the promise of future updates, and right now, it’s inexcusable that this stuff doesn’t work on a device that costs as much as the AI Pin does.

Every time the AI Pin tries to do seemingly anything, it has to process your query through Humane’s servers, which is at best quite slow and at worst a total failure. Asking the AI Pin to write down that the library book sale is next week: handy! Waiting for 10 seconds while it processes, processes, and then throws a generic “couldn’t add that” error message: less handy. I’d estimate that half the time I tried to call someone, it simply didn’t call. Half the time someone called me, the AI Pin would kick it straight to voicemail without even ringing. After many days of testing, the one and only thing I can truly rely on the AI Pin to do is tell me the time.

The more I tested the AI Pin, the more it felt like the device was trying to do an awful lot and the hardware simply couldn’t keep up. For one, it’s pretty much constantly warm. In my testing, it never got truly painfully hot, but after even a few minutes of using it, I could feel the battery like a hand warmer against my skin. Bongiorno says the warmth can come from overuse or when you have a bad signal and that the device is aggressive about shutting down when it gets too hot. I’ve noticed: I use the AI Pin for more than a couple of minutes, and I get notified that it has overheated and needs to cool down. This happened a lot in my testing (including on a spring weekend in DC and in 40-degree New York City, where it was the only warm thing in sight).

The battery life is similarly rough. The AI Pin ships with two battery boosters, a charging case, and a desk charger, and you’ll make heavy use of all of it. I went through both boosters and the AI Pin’s smaller internal battery in the course of just a few hours of heavy testing. At one point, the AI Pin and a booster went from fully charged to completely dead in five hours, all while sitting untouched in my backpack. This thing is trying to do an awful lot, and it just doesn’t seem able to keep up.

In fairness, you’re not meant to use this device a lot. The whole point of the AI Pin is to get in, get out, and go back to living your life without technology. On my lightest days of testing — which typically consisted of a couple of calls, a few texts, a half-dozen queries about the number of teaspoons in a tablespoon and whether it’s safe for dogs to eat grapes, and maybe a half-hour of music — I didn’t have many overheating issues, though the battery did still die well before the day ended. As long as you don’t use the projector too much, the AI Pin can muddle through. But if I’m going to pay this price and stick this thing so prominently on my body, it needs to do more than muddle.

An image of a hand with green text projected onto it.
The AI Pin’s projector is the closest thing it has to a screen.

Look but don’t touch

The closest thing the AI Pin has to a screen is its “Laser Ink” projector. You summon it by tapping once on the touchpad or by asking it to “show me” something. If the AI Pin is speaking something to you aloud, you can also pick up your hand, and it will switch to projecting the text instead. The projector is also how you access settings, unlock your device, and more.

Whenever it wants to project, the AI Pin first sends a green dot looking for your hand. (It will only project on a hand, so my dream of projecting all my texts onto the sides of buildings is sadly dead.) After a few minutes, I memorized the sweet spot: about ribcage-high and a few inches away from my body. The projector’s 720p resolution is crap, and it only projects green light, but it does a good-enough job of projecting text onto your hand unless you’re in bright light, and then it’s just about invisible.

A Humane AI Pin projecting onto a hand.
I figured out the hand placement pretty fast — but not the actual interface.

The projector’s user interface is — how can I put this nicely? — bananas. To unlock your device, which you have to do every time you magnetically reattach the AI Pin, you move your hand forward and backward through a series of numbers and then pinch your thumb and forefinger together to select a number. It feels a bit like sliding a tiny trombone. Once you’re unlocked, you see a homescreen of sorts, where you can see if you’ve gotten any recent texts or calls and tap your fingers through a menu of the time, the date, and the weather. To scroll, you tilt your hand forward and backward very slightly. To get to settings, you move your hand away from your body — but not too far, or the projector loses you — until a new radial menu comes up. To navigate that menu, you’re supposed to roll your hand around like there’s a marble in your palm. I swear to you, I never once managed to select the correct icon the first time. It’s way too many interaction systems to memorize, especially when none of them work very well.

It feels like Humane decided early on that the AI Pin couldn’t have a screen no matter what and did a bunch of product and interface gymnastics when a tiny touchscreen would have handled all of these things much better. Kudos to Humane for swinging big, but if you’re going to try to do phone things, just make a phone.

An image of a person tapping on the Humane Pin.
Using the AI Pin means constantly asking a question and hoping for the best. Way too often, you get nothing.

Asked and unanswered

The single coolest thing I’ve been able to do with the AI Pin is something I’ve done a few times now. I stand in front of a store or restaurant, press and hold on the touchpad, and say, “Look at this restaurant and tell me if it has good reviews.” The AI Pin snaps a photo with its camera, pings some image recognition models, figures out what I’m looking at, scours the web for reviews, and reads back what it finds. Tacombi has great reviews, it might say. People really like the tacos and the friendly staff.

That’s the best-case scenario. And I have experienced it a few times! It’s very neat, and it’s the sort of thing that would take much longer and many more steps on a smartphone. But far more often, I’ll stand in front of a restaurant, ask the AI Pin about it, and wait for what feels like forever only for it to fail entirely. It can’t find the restaurant; the servers are not responding; it can’t figure out what restaurant it is despite the gigantic “Joe & The Juice” sign four feet in front of me and the GPS chip in the device. Bongiorno says these issues can come from model hallucinations, server issues, and more, and that they’ll get better over time.

In general, I would say that for every successful interaction with the AI Pin, I’ve had three or four unsuccessful ones. I’ll ask the weather in New York and get the right answer; then, I’ll ask the weather in Dubai, and the AI Pin tells me that “the current weather in Dubai is not available for the provided user location in New York.” I’ll ask about “the thing with the presidents in South Dakota,” and it’ll correctly tell me I mean Mount Rushmore, but then it will confidently misidentify the Brooklyn Bridge as the Triborough Bridge. And half the time — seriously, at least half — I don’t even get an answer. The system just waits, and waits, and fails.

When I first started testing the AI Pin, I was excited to try it as a music player. I dream of going on walks or runs while leaving my phone at home, and the always-connected AI Pin seemed like a possible answer. It’s not. For one thing, it only connects with Tidal, which means most people are immediately ruled out and also means no podcast support. For another, that connection is as broken as anything else on the AI Pin: I ask to play Beyoncé’s new album or “songs by The 1975,” and the AI Pin either can’t connect to Tidal at all or can’t play the song I’m looking for. Sometimes it works fine! Way more often, I have interactions like this one:

  • Me: “Play ‘Texas Hold ’Em’ by Beyoncé.”
  • The AI Pin: “Songs not found for request: Play Texas Hold ’Em by Beyonc\u00e9. Try again using your actions find a relevant track, album, artist, or playlist; Create a new PlayMusic action with at least one of the slots filled in. If you find a relevant track or album play it, avoid asking for clarification or what they want to hear.”

That’s a real exchange I had, multiple times, over multiple days with the AI Pin. Bongiorno says this particular bug has been fixed, but I still can’t get Tidal to play Cowboy Carter consistently. It’s just broken.

A photo of the Humane Ai Pin’s camera and speaker.
You can talk to the AI Pin all you want — but there’s no telling what you’ll get back.

It’s all made worse by the AI Pin’s desire to be as clever as possible. Translation is one of its most hyped features, along with the fact that it supposedly automatically discerns which languages to translate. When you land in Spain, boom, it switches to Spanish. Super cool and futuristic, in theory. In reality, I spent an hour in our studio trying desperately to get the AI Pin to translate to Japanese or Korean, while The Verge’s Victoria Song — who speaks both — sat there talking to it in those languages to absolutely no avail. Rather than translate things, it would just say them back to her, in a horrible and occasionally almost mocking accent.

The language issues are indicative of the bigger problem facing the AI Pin, ChatGPT, and frankly, every other AI product out there: you can’t see how it works, so it’s impossible to figure out how to use it. AI boosters say that’s the point, that the tech just works and you shouldn’t have to know how to use it, but oh boy, is that not the world we live in. Meanwhile, our phones are constant feedback machines — colored buttons telling us what to tap, instant activity every time we touch or pinch or scroll. You can see your options and what happens when you pick one. With AI, you don’t get any of that. Using the AI Pin feels like wishing on a star: you just close your eyes and hope for the best. Most of the time, nothing happens.

Still, even after all this frustration, after spending hours standing in front of restaurants tapping my chest and whispering questions that go unanswered, I find I want what Humane is selling even more than I expected. A one-tap way to say, “Text Anna and tell her I’ll be home in a half-hour,” or “Remember to call Mike tomorrow afternoon,” or “Take a picture of this and add it to my shopping list” would be amazing. I hadn’t realized how much of my phone usage consists of these one-step things, all of which would be easier and faster without the friction and distraction of my phone.

But the AI Pin doesn’t work. I don’t know how else to say it.

I hope Humane keeps going. I hope it builds in this basic functionality and figures out how to do more of it locally on the device without killing the battery. I hope it gets faster and more reliable. I hope Humane decides to make a watch, or smart glasses, or something more deliberately designed to be held in your hand. I hope it partners with more music services, more productivity apps, and more sources of knowledge about the internet and the world. I hope the price goes down.

But until all of that happens, and until the whole AI universe gets better, faster, and more functional, the AI Pin isn’t going to feel remotely close to being done. It’s a beta test, a prototype, a proof of concept that maybe someday there might be a killer device that does all of these things. I know with absolute certainty that the AI Pin is not that device. It’s not worth $700, or $24 a month, or all the time and energy and frustration that using it requires. It’s an exciting idea and an infuriating product.

AI gadgets might one day be great. But this isn’t that day, and the AI Pin isn’t that product. I’ll take my phone back now, thanks.

The Ember Tumbler is a cool, high-tech travel mug — but it can’t handle the heat

A photo of the new Ember Tumbler on a table.
It’s a good-looking desk accessory — but is it a good mug? | Image: Ember

My coffee routine is needlessly fussy but fairly predictable. I wander bleary-eyed into the kitchen, add some water to the kettle, and start heating it to 200 degrees Fahrenheit, grind some coffee beans, get out the Chemex, add and rinse a new filter, put the whole thing on a scale, add 40 grams of beans, slowly pour in 600 milliliters of hot water, pour it into my trusty silver Yeti Rambler, and pad into the living room to either play with my son or look at TikTok on my phone. It’s a life, you know?

For the last few weeks, the Ember Tumbler has wormed its way into that routine. The $199 Tumbler is Ember’s newest product but carries the same promise as the company’s popular mugs: it can keep your drink at the exact temperature you want for hours at a time. Its internal heating system will, by default, hold your drinks at 135 degrees Fahrenheit, but if you pair it via Bluetooth to your phone, you can use the Ember app to dial it exactly the way you like it.
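Ember hasn’t published how its temperature control actually works, but the basic idea (hold a liquid at a setpoint with a small heater) is a classic thermostat problem. Here’s a toy sketch in Python; the function, numbers, and hysteresis band are all my own invention, purely for illustration:

```python
# Toy model of a setpoint-holding mug: a simple bang-bang thermostat.
# This is NOT Ember's actual controller -- just an illustration of the idea.

def step(temp_f, setpoint_f=135.0, heater_on=False, hysteresis=1.0):
    """Decide the heater state for the next tick from the current reading."""
    if temp_f < setpoint_f - hysteresis:
        return True    # too cold: turn the heater on
    if temp_f > setpoint_f + hysteresis:
        return False   # too hot: let the drink cool passively
    return heater_on   # inside the band: keep doing whatever we were doing

print(step(130.0))  # True  -- below the band, so heat
print(step(140.0))  # False -- above the band, so coast
```

The hysteresis band is the important trick: without it, the heater would chatter on and off every time the reading wobbled a fraction of a degree around 135.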

The Ember Mug was a fixture in my life for a long time, even though two things about it drove me nuts. It could only hold 10 ounces of liquid — objectively not enough, though thankfully there’s a 14-ounce model now — and it was a standard mug with a handle, so I couldn’t take it with me on the train or in the car. (Ember’s first-ever product was actually a travel mug, but it also only held 10 ounces. What are we doing here, people?)

A photo of a black smart mug on a wooden table. | Image: Ember
The Ember Mug was a staple on my desk for years.

The Tumbler, in theory, solves all my problems! It holds 16 ounces (about 473ml) of liquid, which is a much better number. It also comes with two lids: one with a sliding piece that you can click open and drink out of and one with a handle that closes the mug off entirely. Its battery lasts three hours, which should be plenty of time to finish your coffee or tea. On paper, the Tumbler is everything I’ve wanted it to be.

In reality, it’s a little more complicated than that. For my specific use case, which now involves working from home instead of commuting to an office, it is in fact everything I wanted. The Tumbler is a delightful thing to drink coffee out of every morning. It’s heavy in a good way, like you’re holding a luxury object instead of just a mug; it’s nice to drink out of with the lid both on and off. In the mornings, I bring the Tumbler down to my basement office, set it on the included round charging pad, and my coffee’s hot for hours. Lovely!

As a travel mug, though — like, for people who actually go outside from time to time — the Tumbler is disappointing. I noticed a few times that when I wasn’t keeping it on the charging pad, my coffee seemed to get cold even faster than with my usual travel mug. So I did some testing, comparing the battery-charged Tumbler to my batteryless but well-insulated Yeti Rambler. (There are a lot of good insulated travel mugs out there; this one’s just my go-to.)

First, I poured boiling-ish water into both mugs, and about 60 seconds later (the time it took to find a thermometer), I measured the temperature in both. This was our control number.

  • Rambler start temperature: 205.9 degrees Fahrenheit
  • Tumbler start temperature: 204.6 degrees Fahrenheit

Then I put the lids on both mugs, closed them, and left them alone for an hour. This is the “I made coffee and then something came up” test. Then I measured their temperatures again.

  • Rambler after one hour: 180.7 degrees Fahrenheit
  • Tumbler after one hour: 149.4 degrees Fahrenheit

Pros and cons here. The Rambler does a better job of holding its heat, but the Tumbler’s job is different: it lets your drink cool to its ideal drinking temperature, then uses its internal heaters to hold it there. You can set whatever temperature you want in the Ember app for your phone, but it defaults to 135 degrees Fahrenheit, which science (Ember’s app) apparently says is the ideal coffee drinking temperature. So the rapid decline isn’t necessarily the sign of a problem yet.

Next, I slid open each lid’s mouthpiece and left them out for another hour. This is the “early morning meeting” test.

  • Rambler after two hours: 162.7 degrees Fahrenheit
  • Tumbler after two hours: 139.2 degrees Fahrenheit

Same deal! The double-insulated Rambler continues to hold its heat, but the Tumbler is at a perfect drinking temperature after two hours. Everybody wins.

For the third hour, I upped the ante. With lids on and mouthpieces open, I stuck both mugs out in the windy, 40-degree-Fahrenheit elements of my morning. This is the “warm drink on a cold day” test.

  • Rambler after three hours: 144.1 degrees Fahrenheit
  • Tumbler after three hours: 135.1 degrees Fahrenheit

Once again, I’m calling this a double victory. Both are losing temperature — these are tough conditions! — but are still doing their jobs, and the Tumbler is still in its sweet spot. After three hours, both my beverages are still very drinkable.

I brought both mugs inside, slid their mouthpieces closed, and left them for another hour. This is the “maybe this just became my afternoon coffee instead” test.

  • Rambler after four hours: 135.0 degrees Fahrenheit
  • Tumbler after four hours: 113.7 degrees Fahrenheit

For the first time, we have a too-cold beverage, and it’s the Tumbler. The battery died right at the end of hour three, as advertised, and now, with no heater and not much insulation, it’s starting to lose temperature quickly. Meanwhile, the Rambler is still hanging on.

My last test was to leave them overnight, which is not exactly a real-world use case, but hey, I’d already poured all the water. I call this the “I left it out overnight” test.

  • Rambler after 12 hours: 99.0 degrees Fahrenheit
  • Tumbler after 12 hours: 76.5 degrees Fahrenheit

Obviously, both are undrinkable here, but once again, the Rambler is doing a noticeably better job of keeping my drink warmer. The Tumbler is, for all intents and purposes, at room temperature.
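If you tally the hour-over-hour drops from the readings above, the pattern jumps out. This quick Python sketch (the numbers are copied straight from my measurements; the tallying code is just my own) shows the powered Tumbler barely moving between hours one and three, then falling off a cliff once the battery dies, while the Rambler just cools steadily:

```python
# Temperature readings (degrees F) from the side-by-side test, keyed by hour.
readings = {
    "Rambler": {0: 205.9, 1: 180.7, 2: 162.7, 3: 144.1, 4: 135.0, 12: 99.0},
    "Tumbler": {0: 204.6, 1: 149.4, 2: 139.2, 3: 135.1, 4: 113.7, 12: 76.5},
}

for mug, temps in readings.items():
    hours = sorted(temps)
    # Drop between consecutive measurements, in degrees F.
    drops = [round(temps[a] - temps[b], 1) for a, b in zip(hours, hours[1:])]
    print(mug, drops)
# Rambler [25.2, 18.0, 18.6, 9.1, 36.0]
# Tumbler [55.2, 10.2, 4.1, 21.4, 37.2]
```

Those middle numbers (10.2 and 4.1 degrees lost while the heater was running, versus 21.4 the hour after it died) are the whole Tumbler story in miniature.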

These tests confirmed two things to me. First: the Tumbler does a good job of getting your drink to the right temperature quickly and keeping it there for a few hours. Score one for the fancy mug! But second: if my goal is to keep my coffee hot and drinkable (sometimes too hot!) as long as possible, Yeti’s mug is clearly the better choice.

The comedian Brian Regan has a great bit where he’s walking around an appliance store shopping for a refrigerator. “This keeps all your food cold for $600,” the salesman says to him. “You’ve got this refrigerator, keeps all your food cold for $800.” Then the salesman walks to the next one, drapes his arm over it, and starts the sales pitch: “Check this out: $1,400, keeps all your food cold.”

That’s how I feel about the Ember Tumbler. It does the job, which in this case, is to keep all my drinks warm. But Yeti’s 16-ounce Rambler costs $30, compared to $200 for the Tumbler, and it actually keeps my coffee above the too-cold level for longer. And if I need it to cool down faster, I can just take off the lid or throw in an ice cube. I’m much more interested in keeping my drink hot than in cooling it down faster.

If you spend most of your time like I do, shuffling from kitchen to basement with the charging pad never more than a room away, the Tumbler works great — when I leave it on the pad, my coffee is at or close to that magic 135-degree-Fahrenheit mark all day. It’s a great mug, albeit a ludicrously expensive one given that my microwave is just upstairs. But when I leave the house, I’m taking the Rambler. Thirty bucks, keeps all my drinks warm.

Meta Quest 3 review: almost the one we’ve been waiting for

Meta’s new headset is better than its predecessors in almost every way. But until there’s more to do in mixed reality, this won’t be the headset that gets everyone wearing headsets.

The Meta Quest 3 is much better than the Quest 2. It’s more comfortable, more powerful, easier to figure out, more pleasant to use for long stretches, and just flat-out better. If that’s all you’ve been wondering about Meta’s latest headset, there’s your answer. The passthrough improvements alone — the fact that I can now easily find my coffee or safely walk around the room without taking my headset off — make this a worthwhile upgrade, even if you picked up a Quest 2 just a couple of years ago and perhaps haven’t used it as much as you thought you might.

But that’s about the only thing I can say with total confidence about the Quest 3. Because when I really think about it, I’m not entirely sure what the Quest 3 even is. If it’s a VR headset, a direct successor to the Quest 2 from 2020, it’s certainly better but also nearly twice the price. If it’s a state-of-the-art mixed reality headset meant to usher in a future where the digital and real worlds are blended seamlessly together, it has some serious flaws and not nearly enough content. If it’s just a super-immersive game console, it’s great, but its library can’t hang with Sony and Microsoft.

Meta keeps calling the Quest 3 “the first mainstream mixed reality headset.” Strictly speaking, that’s true: at $499.99 for the model with 128GB of storage and $649.99 for 512GB, it’s a steep climb from the $299.99 Quest 2 starting price but still on the right side of the too-expensive line, especially compared to Apple’s forthcoming $3,500 Vision Pro. And unlike the mixed reality devices we’ve seen from Magic Leap, Microsoft, and so many others, an individual consumer can actually buy this one. But what Meta really wants is for this to be more than just the best reasonably priced headset. It wants the Quest 3 to be the one that makes people care about, use, and develop for mixed reality in a big way.

So here’s the real question, I think. Is the Meta Quest 3 a very good VR headset? Or is it, as Meta would have you believe, the first in a new line of a new kind of device?

I’ve used the Quest 3 enough to convince me that mixed reality could be awesome. It probably will be, eventually, once these devices are lighter and more socially acceptable and there’s a whole lot more MR content available for them. But that’s probably a ways off. For now, the Quest 3 is just a very good VR headset.

All smart, no glasses

Let me just get this bit quickly out of the way: I’m mostly going to be talking about the Quest 2 as a comparison in this review. The Vision Pro isn’t shipping, and there really are no other straightforward competitors to the Quest 3. The Quest Pro, Meta’s other mixed reality device, has some interesting tech but costs $1,000 and is really not worth considering. The question here, really, is whether the Quest 3 is worth the extra money over its predecessor.

The fit and finish of the Quest 3 is about what you’d expect for a second- to third-gen upgrade. Meta’s long-term plan for headsets is to make them look like a typical pair of sunglasses, and the Quest 3 is very much not that. But in the realm of “big, blocky plastic doodads on your face,” it does a lot of things better than its predecessor.

The headset itself is significantly smaller than the Quest 2, though the padded black face mask that attaches to it is much larger, so the overall footprint is about the same size. The whole package is about 160mm across and 98mm tall, compared to 142mm and 102mm on the 2. (You absolutely will not notice the small differences there.) The three vertical, pill-shaped cutouts on the front give the Quest 3 more personality than the bland face of the Quest 2. I’m not sure that’s a good thing — the Quest 3 looks like a character from WALL-E that was rejected because nobody could tell if it was good or evil — but it doesn’t really make a difference. You’ve got a giant headset on your face; people will point and laugh if you wear it in public. Let’s worry about the aesthetic details when we get a little closer to smart glasses.

The Quest 3 (left) is smaller, a little heavier, and a lot more comfortable than the Quest 2.

The Quest 3 is actually a bit heavier than the Quest 2 (515 grams compared to 503), but it wears its weight much better. The Quest 2’s heaviest bits stick out from your face, so it always feels like it’s pulling down toward your nose. The Quest 3 is comparatively more balanced. It’s still a blocky thing on my head, but where my Quest 2 always feels tight somewhere — the top of my head, the back of my head, or most often right on my forehead — I found a comfortable Quest 3 setup almost immediately, and it stays in place even when I’m bouncing around during a workout. Adding the $70 Elite Strap makes it better still since it moves some of the weight to the back of your head and sits a little more rigidly. But unlike the Quest 2, I don’t think you absolutely need one.

Speaking of that setup: one small but welcome hardware change in the Quest 3 is that it brings back the little wheel underneath the headset that you can use to control the distance between the lenses. (The original Rift had a slider, while the Quest 2 just made you move the lenses, which is awkward and bad.) Everyone’s interpupillary distance is a little different, and it’s an important adjustment to get right — when you first turn on the Quest 3, it instructs you to turn the wheel to see what looks good. Even if you’re just going to set it once and forget it, it’s still a better system than the Quest 2. And if you share the device with co-workers or family members, it’s far easier to get dialed in.

The new Touch Plus controllers look and feel just like the old controllers, minus a large tracking ring at the top. They’re lighter and smaller as a result, but other than smacking them together a little less than I used to, I haven’t noticed much difference in actual use. And while losing the rings hasn’t made the Quest 3 worse at tracking the controllers, it also hasn’t made it better: the headset still struggles to follow the Touch Plus controllers when they’re even slightly out of your field of view. They look like a non-camera-studded version of the Quest Pro’s Touch Pro controllers, which you can, in theory, buy to replace the Touch Plus, but I don’t think those are worth the $299 upgrade, in part because the Touch Plus’ battery life is much closer to that of the Quest 2’s controllers than the Touch Pro’s: I’ve been using the heck out of this thing for over a week and haven’t killed the AAs yet.

The Quest 3’s Touch Plus controllers are small but will be familiar to Quest users.

You’re going to want to keep those controllers handy, by the way, because the Quest 3’s hand tracking is pretty rough. In theory, you can do most navigational things just by waving your arms around; move the round cursor over what you want to click on, tap your thumb and index finger together, and you’re off. But because the Quest 3 doesn’t do inward-facing eye tracking and only uses its external cameras to follow your hands, it’s imprecise and frequently wrong — you have to very carefully move your hand a millimeter at a time to get the cursor in the right place. (Eye tracking might be the only thing about the Quest Pro I wish the Quest 3 had copied.) You can also just reach out and touch stuff, which works a little better, but the Quest 3’s depth sensing also misses a lot: you go to grab the Home menu to move it toward you, and your hand just flies through it. After testing hand tracking, I’ve stopped using it altogether.

I can see clearly now

The Quest 3’s two most important upgrades become immediately obvious as soon as you stick your head in the headset. It puts a 2064 x 2208 LCD in front of each eye, which is the best screen in any Quest ever. You can tell: everything from on-screen text to high-res games looks significantly crisper and better, like you’ve upgraded from a standard-def TV to a high-def set. It’s not quite as sharp or as dynamic as what we’ve seen from the Vision Pro’s dual 4K micro-OLED displays, but it’s enough that I can comfortably read small text in the headset for the first time. I could never shake that nagging feeling in the Quest 2 that everything was just a hair out of focus, and the Quest 3 hardly ever feels like that.

The field of view in the Quest 3 is a bit larger than before, too, which is nice, but it still has that “I’m looking through binoculars” rounded black shape around your periphery. The sharpness is the real win here. The screens are so much clearer, in fact, that they show just how low-res some games are: playing NFL Pro Era on the Quest 3 was like playing an N64 game on an HDTV, where I could see every pixel and every stutter with new clarity. But games like Red Matter 2 and the updated Pistol Whip, which are ready for the resolution bump, generally look fantastic. I’ve never had as much fun just wandering around in VR as I have with the Quest 3.

Not every game is updated to look good in the Quest 3, but the ones that are look great.

Upgrading the display even opens up a bunch of new uses for a device like the Quest 3. It’s a pretty useful entertainment system, both for VR and non-VR content — apps like PlutoTV and Peacock work really well. All those educational apps for seeing art and far-off places are much more immersive now, too. Apps like Virtual Desktop actually work for streaming your computer to your headset without hurting your eyes, though the display isn’t quite high-res enough for me to actually want to work like that for very long. (Meta’s whole “you’ll do your job in VR!” thing is still a ways away, and let’s not even talk about how bad Horizon Workrooms still is.)

The other big upgrade is the speakers. The Quest 3’s audio still pours out into whatever room you’re in, which is a bummer, but it’s noticeably better than before. This thing gets loud if you want it to, and the spatial audio does a nice job of anchoring sound in place. You’re still going to get the best experience with a pair of headphones — my over-ear Bose cans fit around the headset fairly comfortably, but I prefer a pair of wireless earbuds just to keep some weight off my head.

Thanks to the Qualcomm Snapdragon XR2 Gen 2 processor and 8GB of RAM in the Quest 3, it’s also noticeably snappier than the Quest 2. The headset boots faster; games load more quickly. I was able to play Dungeons of Eternity at high settings at about 80 frames per second, which isn’t up to gaming PC standards but is plenty for most purposes. I’ve hardly noticed any lag in head movements or any of the other stuttering that can make VR unpleasant. The only consistent performance issue I’ve had is with scrolling the Quest’s menus, which still wobble and lag like the screen’s refresh rate isn’t quite high enough. In general, though, the Quest 3 is as fast as I need it to be and can stand up even to the platform’s most demanding games like Red Matter 2.

And by the way, there’s now a lot to do in the headset. I’ve been impressed with the growth of the Quest’s ecosystem over the last couple of years, and there’s now a solid stable of games, ranging from casual puzzlers to ultra-intense shooters and practically everything in between. I used to warn VR buyers that you might eventually run out of content in there — I don’t worry about that anymore. And, of course, through Quest Link, you can plug your headset into your computer and play a library of PC VR games as well.

In my testing so far, I’ve gotten a hair over two hours of battery life from the Quest 3, no matter how I’m using it. Two-ish hours of movies, two-ish hours of games — it seems that as long as the thing is on, it drains about the same. That’s less life than I’d like, but two hours is a pretty long session in VR, and thanks to Meta’s new charging dock (sold separately for $129.99), I can just drop my headset in between sessions, and it seems to always be charged. The dock is a really terrific accessory, though it pushes the headset even higher in price.

The feeling of an alien actually crashing through your ceiling never gets old.

VR meets IRL

Up to this point, I’ve been talking about the Quest 3 as a direct successor to the Quest 2. In that sense, it’s a lot of little upgrades and one huge one (the screens) that make it a much better VR headset. Considerably more expensive! But much better. If you want a VR headset to play games, watch movies and TV, and do other VR things in, this is the one.

But let’s talk about the other bit. The thing really enabled by those pill-shaped cameras and sensors on the front of the Quest 3, the thing that has Meta believing the Quest 3 isn’t just “the third Quest” but the first of something else entirely. The Quest 3’s mixed reality features are simultaneously the most impressive and most frustrating part of this headset: they’ve convinced me that there’s some seriously cool and fun tech at work here and also that we’re really not particularly close to mainstream MR.

The first and most practical thing the cameras do is provide better passthrough, the view that lets you see your real-world space through your headset. On the Quest 2, that was a grainy black-and-white mess. Now it’s in full color and dramatically higher resolution. Not high resolution, mind you — just higher. Good enough that you can see your cup of coffee; not good enough to see if it’s coffee or tea. Good enough to see the time on your watch; not good enough to read the text of your notification.

The Quest 3’s passthrough lets it automatically scan your room, which is much better than creating boundaries yourself.

The improved passthrough makes a lot of things about Quest Life easier. The frame rate is smooth enough to stay comfortable as I walk around with the headset on, which makes it easier to wear for long stretches. It can automatically set your boundaries in your room, so you don’t have to scan the floor anymore. You can double-tap on the side of your headset at any time to jump into passthrough mode in case you need to look at something or see which dog / chair / family member you just whacked while playing Supernatural.

But the reason the passthrough really matters is because it’s what makes mixed reality possible. The Quest can take those camera feeds and superimpose content over them in real time. The headset first has you walk around to scan your surroundings — in my case, my messy basement — and then lets you play in them.

Technically speaking, the mixed reality on the Quest 3 is… fine. It struggles badly in low light, turning everything grainy and low-res, but if you’re in a well-lit space, it’s mostly accurate. There’s a bit of warping around the edges, so it can seem a little like the floor is moving or you’re on a light dose of some hallucinogenic drug. It also warps and distorts around your hands as they move through space. But for a first generation of mixed reality, it’s a solid start.

The problem is, there’s almost nothing compelling to do in mixed reality on the Quest 3. The single most fun MR experience I’ve had so far is First Encounters, a mini-game in which tiny Koosh ball-looking aliens blow holes in your room and try to attack you while you try to capture them. It’s fun, silly, and really does make it feel like an alien craft has crashed into your house. It’s much more fun to play First Encounters in my basement than it would be in a purely VR space. But First Encounters is the demo experience that teaches you how to use mixed reality! It’s a bad sign that that’s the best thing on the platform. Practically everything else I’ve tried is fun but simple — like Cubism, a puzzle game — or still basically a tech demo.

Passthrough looks pretty good when the room’s well lit — and pretty grainy in low light.

In the long run, while I think VR is perfectly suited to immersive gaming, MR is likely to be much more of a real-world tech. That’s why the form factor matters so much: walking around the world with a cool heads-up display is only really going to take off if that display doesn’t look stupid or gadget-y. MR will be cool for navigation, education, making Pokemon Go even more fun. The Quest 3 is mostly focused on MR for business and gaming, but both are better experiences in VR right now. For MR to really take off, we’re going to need more than just VR games reworked for passthrough; we’re going to need an entirely new class of apps and ideas. There’s not much of that in the Quest 3 yet.

That might help explain why even some of the MR games that do exist would be better off in VR. Drop Dead: The Cabin has an MR mode called “Home Invasion,” but its MR features work so badly that the game’s basically unplayable in that mode. Figmin XR is a fun game for building stuff, but it seemed to have no idea that my coffee table is a hard surface that objects shouldn’t just fall through. Many of these games need to be updated for the Quest 3’s new depth sensor and passthrough abilities; others need to rethink their whole strategies. Very few things I tried actually interacted with my physical space in the way true mixed reality should.

I’m sure that will change eventually. The Quest 3 and Vision Pro are the first compelling reasons for developers to care about mixed reality, so I’m hopeful that over the next year or so, we’ll get a lot of good MR content and games. But right now, it’s pretty bleak out there. Even the exciting new games coming to the Quest 3, like Assassin’s Creed: Nexus and Roblox and the all-important Powerwash Simulator, are still VR games.

That’s because, for all it’s technically capable of, the Quest 3 is still a VR headset. A very good one, to be clear; my favorite one yet, even. But even great VR headsets are far from mainstream products right now. If you believe mixed reality could change that and could entice even people who don’t care about VR headsets and VR worlds to strap something to their face — and I do believe that — the Quest 3 just doesn’t quite deliver.

Maybe this is the headset before the headset, the one that helps entice developers to make the cool stuff that turns into killer apps for the Quest 4. Heck, maybe none of this matters until the device itself is less “headset” and more “glasses” and until we’ve had a series of societal debates about whether you should make fun of people who wear these things in public. It’s going to take a lot of technical and social change to make mixed reality mainstream, and it’s probably going to take a few years.

Until then, the Quest 3 will remain what it is: an excellent VR headset and nothing else.

Photography by David Pierce / The Verge
