
Project Aura smart glasses for Android XR are teased by Xreal

Android XR: A Key Vehicle for Google and Gemini, from Project Astra to Gentle Monster and Warby Parker


When I demoed the operating system, Google laid out a plan to work with different partners on products that would appeal to a wide range of people. That demo also made it abundantly clear that Google views XR devices as a key vehicle for Gemini. So far, everything we know about Project Aura aligns with that strategy. In other words, Google's approach to this next era of smart glasses mirrors how it first tackled Wear OS: Google provides the platform, while third parties handle the hardware. At least, that is, until Google feels ready to jump into the action itself. That makes sense, given Google's history with smart glasses hardware. But with the momentum we've seen through Project Astra, and now Android XR making it into the main Google I/O keynote, smart glasses are back on the menu.

The partnership shows that Google is taking style seriously this time around. Warby Parker, the direct-to-consumer eyewear brand, is known for trendy glasses at relatively affordable prices. Meanwhile, Gentle Monster is currently one of the buzziest eyewear brands that isn't owned by EssilorLuxottica. The Korean brand is popular among Gen Z, thanks in part to its edgy silhouettes and the fact that it's favored by fashion-forward celebrities like Kendrick Lamar, Beyoncé, Rihanna, Gigi Hadid, and Billie Eilish. Partnering with both brands seems to hint that Android XR glasses will span versatile, everyday frames as well as bolder, trendsetting options.

As for what these XR glasses will be able to do, Google was keen to emphasize that they’re a great vehicle for using Gemini. So far, Google’s prototype glasses have had cameras, microphones, and speakers so that its AI assistant can help you interpret the world around you. That included demos of taking photos, getting turn-by-turn directions, and live language translation. That pretty much lines up with what I saw at my Android XR hands-on in December, but Google has slowly been rolling out these demos more publicly over the past few months.

One thing the Ray-Ban Meta glasses have argued in favor of is that smart glasses need to look cool in order to go mainstream. Not only do Meta's glasses look like an ordinary pair of Ray-Bans, but Ray-Ban itself is an iconic brand known for its Wayfarer shape. They're glasses the average person wouldn't feel put off wearing, and Meta has put out a few limited edition versions since it launched the second-gen smart glasses. The smart glasses Meta is reportedly releasing for athletes could be branded by that same eyewear company as well.

Project Aura: From Xreal's Headsets to a Consumer Product Launch

That suggests a hardware evolution compared to Xreal's current devices. We don't yet know which chipset Project Aura will use. The idea is for developers to build apps and use cases now, so they're ready for the consumer product launch. According to the press release, apps built for Android XR headsets can be brought over to Project Aura.

Details are sparse, though Xreal spokesperson Ralph Jodice told me we'll learn a bit more at Augmented World Expo next month. But we know it'll have Gemini built in, as well as a large field of view. In the product render, you can see cameras in the hinges and nose bridge, as well as microphones and buttons in the temples.

If Chrome detects that your password’s been compromised, Google says the browser will soon be able to “generate a strong replacement” and automatically update it on supported websites. The feature will be launched later this year, and it will always ask for permission before changing passwords.

A new shopping feature Google is testing lets you upload a full-length photo of yourself to see how an item of clothing will look on you. It uses an AI model that understands the human body.

Gmail's smart reply feature, which uses AI to suggest replies to your emails, will now draw on information from your inbox and Google Drive to prewrite responses that sound more like you. It will also take the tone of a conversation into account, so it can suggest more formal responses to, say, your boss.

Google Meet can now translate your speech into your conversation partner's preferred language in near real time. For now, the feature supports English and Spanish, and it's rolling out in beta to Google AI Pro and Ultra subscribers.

The 15 biggest announcements at Google I/O 2025: AI tools including Stitch, Search Live, Imagen 4, and Veo 3

Stitch is a new AI tool that can generate user interfaces based on selected themes. You can also provide your own designs, which Stitch can use to guide its output. The experiment is currently available in Google Labs.

The screen-sharing feature is now rolling out for free to all mobile devices, including iPhone and iPad users.

The search giant is also about to launch Search Live, a feature that works with its AI assistant. If you tap the new Live icon, you'll be able to show Search what your camera sees while chatting with it.

The new subscription, called AI Ultra, gives access to the company's most advanced models and higher usage limits across its apps. It also includes access to Project Mariner, which can complete up to 10 tasks at once.

The latest version of its image generator, Imagen 4, is better at generating text and can produce images in more formats, like landscape and square, according to the company. Its next-gen AI video generator, Veo 3, will let you generate video and sound together, while Veo 2 now comes with tools like camera controls and object removal.

The new Deep Think mode is aimed at complex math and coding queries. It's capable of considering "multiple hypotheses before responding" and will initially be available only to trusted testers.

Source: The 15 biggest announcements at Google I/O 2025

Artificial Intelligence Announcements at I/O 2025: Highlights from Google's AI Mode and Updates to Project Starline

Gemini's cost-efficient model is getting an update, and everyone will be able to access it in the Gemini app.

Project Astra could already use your phone’s camera to “see” the objects around you, but the latest prototype will let it complete tasks on your behalf, even if you don’t explicitly ask it to. The model can point out a mistake on your homework, for example, based on what it sees.

Project Starline, which began as a 3D video chat booth, is taking a big step forward. It will soon be available in an HP device with a light field display and six cameras, which together create a 3D image of the person you're talking to on a video call.

Google just wrapped up its big keynote at I/O 2025. There were a lot of announcements revolving around artificial intelligence, ranging from new features for Search and Gmail to updates for image and video generation models.

There was a new app and an update to Project Starline. If you didn’t catch the event live, you can check out everything you missed in the roundup below.

Google has announced that it’s rolling out AI Mode, a new tab that lets you search the web using the company’s Gemini AI chatbot, to all users in the US starting this week.