Meta will soon use AI chats for ad targeting because of course it will

Meta will start scraping conversations with AI chatbots to gather data for the purpose of ad targeting. The company says this data will be used to “personalize the content and ads” that people see across apps like Facebook and Instagram.

The “feature” goes into effect on December 16 and Meta will start sending out in-product notifications and emails about the move on October 7. The company says this change is coming to “most regions” throughout the world, but the launch won’t impact the EU and South Korea at first.


Meta gives an example of a user talking with an AI chatbot about hiking and then seeing ads about, you guessed it, hiking. “As a result, you might start seeing recommendations for hiking groups, posts from friends about trails or ads for hiking boots,” it wrote in a blog post.

“People’s interactions simply are going to be another piece of the input that will inform the personalization of feeds and ads,” Christy Harris, privacy policy manager at Meta, told Reuters.

This is the same type of ad targeting that has followed us around the internet for ages, but one-on-one conversations have typically been excluded from this sort of thing. This is just another reminder that AI chatbots are not your friends.

There will be no way to opt out of this, according to reporting by The Wall Street Journal. If you talk to a Meta chatbot, it’ll be scraping. The company notes that the chatbots will not scrape data pertaining to “topics such as their religious views, sexual orientation, political views, health, racial or ethnic origin, philosophical beliefs or trade union membership.” I’d recommend not discussing those things with an AI chatbot no matter what Meta says.

This article originally appeared on Engadget at https://www.engadget.com/ai/meta-will-soon-use-ai-chats-for-ad-targeting-because-of-course-it-will-153319626.html?src=rss 

Where Is Amanda Knox Now? Her Life Today After Prison & Wrongful Conviction

Knox’s imprisonment and wrongful conviction became one of the world’s most controversial stories. Find out what she’s up to now and learn about her case.


Unistellar’s smart binoculars can tell you which mountain you’re looking at

It’s not every day I get to try out an entirely new type of tech product. Telescope company Unistellar recently gave me the chance to do just that with Envision, the first smart binoculars that can identify mountains and stars. The only things like it on the market are Swarovski’s smart binoculars, but those are triple the price and strictly for birds and wildlife.

At an event near Marseilles, I tried an Envision prototype with the design and most of the functionality of the final product (like several other Unistellar products, the company launched it on Kickstarter, raising $2.7 million). Some features were a bit rough and it took practice to use the binoculars smoothly. But it’s an interesting amalgam of analog and digital tech that’s bound to be a hit with astronomers and travelers.

The Envision initially came out of a conversation between Unistellar engineers wondering why there were no binoculars with an AR-like digital overlay. They soon found out: It was a huge engineering challenge. Combining all the data into an overlay and getting it to line up with the optical view was particularly vexing. Reducing latency was another problem, so that the digital display wouldn’t lag behind the optical view.

The company eventually came up with a solution it borrowed from AR tech. Envision combines premium lenses with an augmented reality projection system that beams contextual info into the optical path via a bright, high-contrast microdisplay. That overlay only appears in one eye, but your brain transforms it into a complete image.

The Envision binoculars take data from inertial sensors and a compass, using custom software to “guarantee precise positioning and low-drift orientation” of the digital display. It then pulls topographic and cartographic info from a large database and merges it into the AR overlay based on your location and viewing direction. The data arrives via your phone’s internet connection, but the binoculars can also be used offline if you load specific regions in advance.
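Unistellar hasn’t published how its overlay math works, but the core geometry behind placing a peak label is simple: given the observer’s position and a peak’s coordinates from the database, compute the bearing and elevation angle at which the label should appear. Here’s a minimal sketch using a flat-earth approximation, which is adequate at the tens-of-miles range binoculars deal with (all names are illustrative, not Unistellar’s actual code):

```python
import math

def bearing_and_elevation(obs_lat, obs_lon, obs_alt_m,
                          peak_lat, peak_lon, peak_alt_m):
    """Return (bearing_deg, elevation_deg) from an observer to a peak.

    Flat-earth approximation; degrees and meters in, degrees out.
    """
    R = 6_371_000  # mean Earth radius, meters
    dlat = math.radians(peak_lat - obs_lat)
    dlon = math.radians(peak_lon - obs_lon)
    mean_lat = math.radians((obs_lat + peak_lat) / 2)

    north_m = dlat * R                      # meters toward true north
    east_m = dlon * R * math.cos(mean_lat)  # meters toward east
    dist_m = math.hypot(north_m, east_m)

    bearing = math.degrees(math.atan2(east_m, north_m)) % 360
    elevation = math.degrees(math.atan2(peak_alt_m - obs_alt_m, dist_m))
    return bearing, elevation

# A peak 1,000 m above the observer, roughly 10 km due north:
print(bearing_and_elevation(44.0, 6.0, 500.0, 44.09, 6.0, 1500.0))
```

The overlay renderer would then compare these angles against the compass heading and pitch reported by the inertial sensors to decide where, if anywhere, the label lands in the field of view.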

Steve Dent for Engadget

I tested a hand-built prototype that lacked the quality control that will come with full manufacturing. However, the materials, optics and electronics were nearly complete. For daytime testing, I went to the Citadelle de Forcalquier, which offers a panoramic view of mountain ranges in the region. While it was a bit overcast and rainy, distant peaks up to 30 miles away were still visible.

Though a bit heavier than regular binoculars, the Envision was comfortable to hold and use over a period of an hour thanks to the rubberized coating and high-quality plastics. To use the Envision, you set them up as you would any pair of binoculars. They have a diopter adjustment for your specific vision and you can retract the eyecups for use with glasses. There’s a width adjustment to match your eyes and a focusing wheel to sharpen the view.

With all of that set, there’s a rocker control on the left side that enables the AR overlay, which consists of monochrome red graphics like an old-school arcade game. The previous/next buttons let you switch between targets, which you can then select by hitting the “validate” button.

The last button, “target lock,” does two things. Clicking it once does exactly that, locking onto the target. Then, if you pass the binoculars to someone else, they’ll be guided by arrows to the same object. And to correct any drift that inevitably occurs, you press and hold the target lock button and move the binoculars until both the overlay and optical view align. Lastly, release the button and everything is re-synced.
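The press-and-hold re-sync behaves like a tiny state machine: while the button is held, your motion adjusts the overlay rather than the view, and releasing it folds that motion into a standing correction. This is a hypothetical illustration of the interaction as described, not Unistellar’s firmware:

```python
class OverlayAligner:
    """Sketch of the target-lock drift correction.

    While the button is held, heading changes from the inertial
    sensors are accumulated as an alignment correction; on release,
    the accumulated motion becomes a standing overlay offset.
    """
    def __init__(self):
        self.offset_deg = 0.0  # standing correction applied to the overlay
        self._held = False
        self._accum = 0.0

    def press(self):
        self._held = True
        self._accum = 0.0

    def move(self, delta_deg):
        # Called with each heading change the sensors report.
        if self._held:
            self._accum += delta_deg

    def release(self):
        self._held = False
        self.offset_deg += self._accum

    def overlay_heading(self, sensor_heading_deg):
        # Where the overlay should be drawn for a given sensed heading.
        return sensor_heading_deg + self.offset_deg
```

The nice property of this design is that normal panning never disturbs the calibration; only deliberate, button-held motion does.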

As regular binoculars, they gave me a clear view of distant objects. I switched on the AR and waited a few seconds for my eyes to adjust. When looking at a mountainous horizon, the Envisions overlaid a red outline matching the topography, with the names of peaks and ranges displayed at the bottom center of the screen, along with their elevations and distance from the viewer. It was a half-inch or so off the real-world view, so I used the target lock control to align them perfectly.

The latency wasn’t bad, but if I moved the binoculars too quickly it took a second or so for the overlay to catch up. After scanning across the horizon a few times, the overlay would drift out of sync again, so I needed to use the target lock to realign the views once more. Both the latency and misalignment should improve with the final production version, Unistellar told me.

For now, the Envision can only identify mountain peaks, valleys and ranges. In the production version and via future updates, however, it will identify things like water springs, shelters, hiking paths, rivers and lakes. A companion app will provide the updates, and the software also lets the user select points of interest, access the geographical database and receive guided tours. Sadly, none of those features were available in the prototype I used.

The next test was star spotting using Envision’s Night mode. Fortunately, I didn’t need to go far (the hotel pool) as the clouds covering the sky for most of the day serendipitously broke apart to give us a crystal-clear starscape.

For stargazing, the Envisions were transformational. With the binocular optics set up as before, switching on the AR view instantly displayed the names of individual stars, linked together into their constellations by lines. For example, it pointed out Lynx, a constellation that’s faint to the naked eye, along with its fourth-brightest star, Alsciaukat (31 Lyncis). The final version of the binoculars will also display nebulae, galaxies, planets, moons, comets, asteroids and even human-made points of interest like the International Space Station (ISS) and the Apollo landing sites.

This could make the Envision an outstanding educational tool. You can lock onto a star, then give the binoculars to someone else and they can quickly locate the same body by following the arrows. They’ll also see whatever constellation it’s part of. It would only take a few nights of stargazing for someone to learn a lot about the night sky.

At the same time, it’s a great way for aspiring astronomers to survey interesting targets to study with a more powerful telescope. I did just that, using the Envision to home in on a star cluster. With the name clearly displayed, I punched it into Unistellar’s Odyssey Pro smart telescope and quickly saw it with a larger, clearer view. Conversely, you’ll be able to enter a star name into Unistellar’s app and be guided to it by Envision, in the final production version.

The Envision does have some issues. If you’re someone who already has trouble seeing through binoculars, these may not be for you. The AR display can be hard to read at times, and adjusting the brightness (especially for night viewing) can be a challenge. One missing feature is a built-in camera like the one on Swarovski’s binoculars. That was a bit disappointing, as you can’t easily share your experience on social media. The only way to do so is to snap images with your smartphone through the eyepiece, which effectively requires mounting the binoculars on a tripod and, well, defeats the purpose of binoculars.

With that said, I think Unistellar’s first crack at smart binoculars was a success, even in their unfinished form. They add an informational element to a true optical view and finally bring binoculars, which have been around for hundreds of years, into the information age.

Steve Dent for Engadget

Like any early product (I’m thinking of Pebble’s smartwatch), it’s bound to improve significantly in future versions. Yes, there are smartphone apps that can identify stars and geographical features. But there’s something about looking through a lens and seeing a true image that can’t be beat. And with Envision, you’ll finally know exactly what you’re seeing.

Unistellar is opening pre-orders for its Envision smart binoculars starting today at $999, a fairly steep discount from the final $1,499 retail price, with deliveries set for October 2026. That’s a long way off, but if you’re willing to wait, Unistellar has a perfect track record with its smart telescope deliveries. Retail availability is even farther away, set for 2027.

This article originally appeared on Engadget at https://www.engadget.com/ar-vr/unistellars-smart-binoculars-can-tell-you-which-mountain-youre-looking-at-140007104.html?src=rss 

Microsoft jacks the price of Game Pass Ultimate up to $30 a month

Microsoft has announced some major changes for Game Pass. It’s rebranding some of the tiers, which should make it a little easier to keep tabs on what games and features are available on each.

However, there are some painful price increases here. The high-end plan, Game Pass Ultimate, now costs $30 per month. That’s a 50 percent increase from the previous $20 per month, and there’s no annual or quarterly option available to lessen the sting.

That means the price of a Game Pass Ultimate membership has nearly doubled in 15 months. Microsoft previously raised the price from $17 to $20 in July 2024. The latest change means that, at $360 per year, Game Pass Ultimate is more than twice as expensive as PlayStation Plus Premium, which currently costs $160 on an annual plan.
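The comparisons are easy to verify from the figures in the article:

```python
# Figures from the article: Ultimate went $17 -> $20 (July 2024) -> $30 now.
ultimate_monthly = 30
ultimate_yearly = ultimate_monthly * 12         # Microsoft offers no annual discount
ps_plus_premium_yearly = 160                    # PlayStation Plus Premium, annual plan

print(ultimate_yearly)                          # 360
print(ultimate_yearly / ps_plus_premium_yearly) # 2.25, more than twice the price
print((30 - 20) / 20)                           # 0.5, a 50 percent jump
print(round(30 / 17, 2))                        # 1.76, nearly double in 15 months
```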

Microsoft recently announced a price increase for its Xbox Series X/S consoles as well. The systems will be more expensive to buy in the US starting this Friday. Also, pre-orders for the ROG Xbox Ally handheld just went live, with Microsoft confirming that the higher-end model would cost $1,000. It’s getting really expensive to be an Xbox fan, folks.


This article originally appeared on Engadget at https://www.engadget.com/microsoft-jacks-the-price-of-game-pass-ultimate-up-to-30-a-month-142441307.html?src=rss 

Ray-Ban Meta (2nd Gen) review: Smart glasses are finally getting useful

In a lot of ways, Meta hasn’t changed much with its second-gen Ray-Ban glasses. The latest model has the same design and largely the same specs as the original, with two important upgrades: longer battery life and improved video quality.

At the same time, the Ray-Ban Meta glasses have a lot of features that didn’t exist when I first reviewed them two years ago, largely thanks to AI. And with the release of its second-generation frames, there’s still a lot to look forward to, like new camera features and AI-powered audio. The good news is that Meta isn’t limiting these updates to its newest frames, so if you have an older pair you’ll still see the new features. But if you’ve been on the fence about getting a pair, there’s never been a better time to jump in.

Same look, (slightly) better specs

Meta and EssilorLuxottica haven’t strayed too far from the playbook they’ve used for the last two years. The second-generation Ray-Ban Meta glasses come in a handful of frame styles with a number of color and lens variations that start at $379. I tried out a pair of Wayfarer frames in the new “shiny cosmic blue” color with clear transition lenses. 

I personally prefer the look of the slightly narrower Headliner frames, but the second-gen glasses still look very much like traditional Wayfarer glasses. I’ve never been a fan of transition lenses for my own prescription eyewear, but I’m starting to come around on them for smart glasses. As Meta has improved its cameras and made its AI assistant more useful, I’ve found more reasons to wear the glasses indoors.

The second-generation Ray-Ban Meta glasses come with clear lenses standard, with polarized and transition lenses available as an upgrade.

Karissa Bell for Engadget

Also, if you’re going to be paying $300 or more for a pair, you might as well be able to use them wherever you are. It also helps that the transition lenses on the second-gen Ray-Ban Meta glasses get a bit darker than my first-gen Wayfarers with transition lenses. Upgrading from the standard clear lenses will cost you, though. Frames with polarized lenses start at $409, transitions start at $459 and prescription lenses can run significantly more. 

As with the recent Oakley Meta HSTN glasses, the second-gen Ray-Bans come with longer battery life and a better camera. Meta says the battery can last up to eight hours on a single charge with “typical use.” I was able to squeeze out a little more than five and a half hours of continuous music playback. That’s a noticeable step up from the battery on my original pair, which, after two years, is starting to show its age. The glasses also now support higher-resolution 3K video recording, but the 12MP wide-angle lens shoots the same 3,024 x 4,032 pixel portrait photos as earlier models.

The second-gen glasses have the same design as the first-gen, with a capture button on the right side of the frames. The charging case provides an additional 48 hours of battery life.

Karissa Bell for Engadget

For videos, there’s a noticeable quality boost, though it’s probably not necessary if you’re primarily sharing your clips on social media. It does make the glasses more appealing for creators, and judging by the number of them in attendance at Connect, I suspect Meta sees creators as a significant part of its user base. I’m also looking forward to Meta adding the ability to record Hyperlapse and slow-motion videos, as these may be more interesting than standard POV footage for everyday activities.

Meta AI + what’s coming

Two years ago, I was fairly skeptical of Meta’s AI assistant. But since then, Meta has steadily added new capabilities. Of those, the glasses’ translation abilities have been my favorite. On a recent trip to Argentina, I used live translation to follow along with a walking tour of the famous Recoleta cemetery. It wasn’t perfect — the feature is meant more for back-and-forth conversations rather than extended monologues — but it allowed me to participate in a tour I would have otherwise had to skip. (A word of warning: using the live translation for an extended period of time is a major battery killer.)

Meta AI can provide context and translations in other scenarios, too. I spent some time in Germany while testing the second-gen Ray-Ban glasses and found myself repeatedly asking Meta to translate signs and notices. For example, here’s how Meta AI summarized one collection of signs.

Meta AI was able to translate these signs (left) when I asked it “what do these signs say?”

Karissa Bell for Engadget

As I wrote in my review of the Oakley Meta HSTN glasses, I still haven’t found much use for Live AI, which lets you interact with the assistant in real time and ask questions about your surroundings. It still feels like more of a novelty, but it makes for a fun demo to show off to friends who have never tried “AI glasses.” There are also some very interesting accessibility use cases that take advantage of the glasses’ cameras and AI capabilities. Features like “detailed responses” and support for “Be My Eyes” show how smart glasses can be particularly impactful for people who are blind or have low vision.

One AI-powered feature I haven’t tried out yet is Conversation Focus, which can adjust the volume of the person you’re speaking to while dampening the background noise. Meta teased the feature at Connect, but hasn’t said exactly when it will be available. But if it works as intended, I could see it being useful in a lot of scenarios.

I’m also particularly intrigued by Meta’s Connect announcement that it will finally allow third-party developers to create their own integrations for its smart glasses. There are already a handful of partners, like Twitch and Disney, which are finding ways to take advantage of the glasses’ camera and AI features. Up to now, Meta AI’s multimodal tools have shown some promise, but I haven’t really been able to find many ways to use the capabilities in my day-to-day life. 

Allowing app makers onto the platform could change that. Disney has previewed a smart glasses integration for use inside its parks that would give visitors real-time info about rides, attractions and other amenities as they walk around. Golf app 18Birdies has shown off an app that delivers stats and other info while you’re on the course.

Should you buy these? And what about privacy?

When the Ray-Ban Meta glasses came out two years ago, this was a pretty straightforward question to answer. If the idea of smart glasses with a good camera and open-ear speakers appealed to you, then buying a pair was a no-brainer. 

Now, it’s a bit more complicated. Meta is still updating its first-gen Ray-Ban glasses with significant new features, like Conversation Focus, new camera modes and third-party app integrations. So if you already have a pair, you won’t be missing out on a ton if you don’t upgrade. (And with a starting price of $299, the first-gen glasses are still solid if you want a more budget-friendly option.)

There are also other options to consider. The upcoming Oakley Meta Vanguard glasses come with more substantial hardware upgrades and other unique features that will appeal to athletes and anyone who spends a lot of time outdoors. And on the higher end, there are the $799 Meta Ray-Ban Display glasses that blend AR elements with its existing features in an intriguing way. 

Meta has already previewed several new features, like new camera modes and Conversation Focus.

Karissa Bell for Engadget

I also have many of the same concerns about privacy as I did when I reviewed Meta’s first Ray-Ban branded glasses back in 2021. I’m well aware Meta already collects an extraordinary amount of data about us through its apps, but glasses just feel like they provide much more personal, and potentially invasive, access to our lives.

Meta has also made some notable changes to the privacy policy for its glasses in recent months. It no longer allows users in the United States to opt out of storing voice recordings in its cloud, though it’s still possible to manually delete recordings in the Meta AI app. 

The company says it won’t use the contents of the photos and videos you capture to train its AI models or serve ads. However, images of your surroundings processed for the glasses’ multimodal features, like Live AI, can be used for training purposes (these images aren’t saved to your device’s camera roll). Meta’s privacy policy also states that it uses audio captured via voice commands for training. And it should go without saying, but anyone using Meta’s glasses should be very careful about sharing their interactions in its AI app, as a bunch of users have already, seemingly inadvertently, shared a ton of highly personal interactions with the world.

If any of that makes you uncomfortable, I’m not here to convince you otherwise! We’re still grappling with the long-term privacy implications of generative AI, much less generative AI on camera-enabled wearables. At the same time, as someone who has been wearing Meta’s smart glasses on and off for more than four years, I can say that Meta has been able to turn something that once felt gimmicky into a genuinely useful accessory. 

This article originally appeared on Engadget at https://www.engadget.com/wearables/ray-ban-meta-2nd-gen-review-smart-glasses-are-finally-getting-useful-124720393.html?src=rss 

Amazon Luna will offer controller-free party games in an attempt to woo Prime subscribers

After a few years of mostly humming along in the background, Amazon’s game streaming service is receiving an update. Amazon Luna will still act as a game streaming service with a rotating library of free games for Prime users, but now, Amazon also plans to offer “GameNight,” a collection of social party games that you can play with your friends with just a smartphone.

Luna’s GameNight collection includes over 25 multiplayer games, some that are reinterpretations of classic games like Angry Birds, Exploding Kittens or Ticket to Ride, and others that are entirely original and developed by Amazon, like Courtroom Chaos: Starring Snoop Dogg. If you’ve played any of Jackbox’s various multiplayer games, GameNight seems to use a similar setup. You load up the game in Luna, whoever’s playing scans an onscreen QR code with their phone and then they can join the game using their device as a controller.
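Amazon hasn’t documented GameNight’s protocol, but the Jackbox-style flow described above (the host screen shows a QR code, phones scan it and join as controllers) boils down to a session code embedded in a join URL. Here’s a hypothetical sketch of that flow; the domain, function names and data layout are invented for illustration:

```python
import secrets

# In a real service this would be server-side state; here it's just a dict.
sessions: dict[str, dict] = {}

def create_session(game: str) -> str:
    """Host (the Luna app on the TV) creates a room and gets a short code."""
    code = secrets.token_hex(3).upper()  # e.g. 'A1B2C3'
    sessions[code] = {"game": game, "players": []}
    return code

def qr_payload(code: str) -> str:
    # The QR code simply encodes a join URL; opening it in the phone's
    # browser turns the phone into the gamepad.
    return f"https://play.example.com/join/{code}"

def join(code: str, player: str) -> bool:
    """A phone that scanned the QR registers itself as a controller."""
    room = sessions.get(code)
    if room is None:
        return False
    room["players"].append(player)
    return True
```

From there, each phone would relay button presses over a persistent connection (typically a WebSocket), which is why no dedicated controller is needed.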

Amazon hopes these smartphone-controlled games will lower the barrier to entry for anyone intimidated by a controller, or who hasn’t already taken advantage of Luna as part of their Prime subscription. For everyone else, though, the company says the service is getting a collection of new high-profile games in the near future, including Indiana Jones and the Great Circle, Kingdom Come: Deliverance II and Dave the Diver. As before, if you’re willing to pay for one of Amazon’s add-on subscriptions you can add even more games to your library, too. Unlike GameNight games, though, all of these titles will require a controller to play, whether it’s Amazon’s Luna Controller or a Bluetooth controller connected to the Luna app.

While this isn’t a radical reinterpretation of Luna, Amazon is at least trying to differentiate the service from something like Xbox Cloud Gaming or Google’s failed Stadia service. It’s not clear if game streaming is as important to Amazon as it is to Microsoft, but if the company is willing to keep paying for titles, offering more games and more ways to play them seems like a good move.

This article originally appeared on Engadget at https://www.engadget.com/gaming/amazon-luna-will-offer-controller-free-party-games-in-an-attempt-to-woo-prime-subscribers-130004416.html?src=rss 

The Google Home Speaker is getting a Gemini-driven refresh

Google announced a wave of hardware updates today, including giving some love to the Google Home Speaker. We saw a teaser for the revamped smart speaker last month, so the announcement isn’t a surprise, but it does provide some specifics about what’s coming to the company’s smart home efforts.

This new Google Home Speaker puts the Gemini AI assistant front and center, as is the case with so much Google hardware these days. The light ring will also flash different colors to show when the AI model is listening, processing or responding. If you have a Google Home Premium subscription, you’ll also be able to use the speaker to access Gemini Live. The blog post promises “more natural conversations” on the device, which Google says has custom processing to support the demands of running an AI assistant.

Google is also bringing 360-degree audio to the Home Speaker. You’ll be able to connect a pair of them to the Google TV Streamer, allowing for a surround-sound home theater setup. The Home Speaker will still be able to connect to other Google Nest speakers as well. And for the privacy-minded, there’s a physical button to toggle the microphone off.

The new speaker won’t be available until spring 2026 and will retail for $99. It has four color options: porcelain, hazel, jade and berry. The Google Home Speaker will be available in the US, Canada, UK, Ireland, France, Germany, Spain, Italy, Netherlands, Denmark, Norway, Sweden, Finland, Belgium, Switzerland, Austria, Japan, Australia and New Zealand.

The announcement follows hot on the heels of Amazon’s fall hardware event, which also had some big updates for smart speakers centered on its own Alexa+ AI assistant, including a brand new form factor called the Echo Dot Max.

This article originally appeared on Engadget at https://www.engadget.com/home/smart-home/the-google-home-speaker-is-getting-a-gemini-driven-refresh-130004673.html?src=rss 

Google’s new Nest Doorbell and Nest Cams have 2K video and new AI chops

A day after Amazon updated its security cameras, Google followed suit with its competing suite. A trio of new Nest security cams is available starting today. The latest Nest Doorbell and Nest Cams have higher-resolution (2K HDR) video and a wider field of view. That not only makes for better images, but it also opens the door to new (paid) AI features.

Google’s new additions include the Nest Cam Indoor (3rd gen), Nest Cam Outdoor (2nd gen) and Nest Doorbell (3rd gen). The company says the devices were designed to “provide the rich, detailed data our multimodal AI uses to understand.” The results, according to the company, are “better alerts” and the ability to “find important moments, faster.”

Google says DxOMARK rated all three first in their class for image quality. The Nest Cams each offer 2,560 x 1,440 resolution with a 152-degree diagonal field of view (FOV). The Nest Doorbell uses a 2,048 x 2,048 sensor with a 166-degree FOV. All three support up to 6x digital zoom.

Alerts can digitally zoom in on subjects.

Google

The company says the new hardware boosts the cameras’ ability to capture video in low-light conditions. Specifically, Google claims they offer 120 percent more light sensitivity than their predecessors. “This means the cameras can now stay in full-color mode much longer at dawn and dusk than before,” the company wrote.

The sharp resolution also allows you to digitally zoom in on a specific area in the Home app, cropping out the rest. Google says the feature could be handy for hot spots like a garden bed or walkway. Similarly, your alerts will include animated previews that zoom in on the subject. This could make it easier to tell at a glance who or what triggered the notification.
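Conceptually, both the in-app zoom and the zoomed alert previews are a crop of the full-resolution frame, followed by upscaling for display (omitted here). A minimal sketch of the cropping step, not Google’s implementation:

```python
def crop_zoom(frame, cx, cy, zoom):
    """Return the zoom-by-`zoom` window centered near (cx, cy).

    `frame` is a 2D list of pixels (rows of columns). The window is
    clamped so it never extends past the frame edges.
    """
    h, w = len(frame), len(frame[0])
    nw, nh = int(w / zoom), int(h / zoom)  # cropped window size
    x0 = min(max(cx - nw // 2, 0), w - nw)
    y0 = min(max(cy - nh // 2, 0), h - nh)
    return [row[x0:x0 + nw] for row in frame[y0:y0 + nh]]
```

At 2K resolution there are enough pixels that a 2x or 3x crop like this still yields a usable preview, which is what makes the higher-resolution sensors matter for the zoomed alerts.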

Ask Home (left) and Home Brief

Google

The upgraded Gemini AI chops include a new chatbot feature called Ask Home. It lets you do things like ask what ate your plants. (In Google’s example, the chatbot explains that it was rabbits, producing photo evidence.) It also lets you perform smart home tasks or create automations using natural language. There’s another new AI feature called Home Brief that gives you an AI-generated summary of the day’s activities. Both of the new AI features require a Google Home Premium subscription.

All three cameras are available beginning today at the Google Store and with retail partners. The Nest Cam Indoor costs $100. The Nest Cam Outdoor will set you back $150. And the Nest Doorbell costs $180.

This article originally appeared on Engadget at https://www.engadget.com/home/smart-home/googles-new-nest-doorbell-and-nest-cams-have-2k-video-and-new-ai-chops-130006878.html?src=rss 

Google has overhauled its smart home app to feature Gemini

As part of Google’s smart home announcements today, the company unveiled a new look for the Google Home app, which begins rolling out globally today. The heart of the redesign is Gemini for Home, which will replace Google Assistant on smart devices and promises a more conversational way to interact with and direct the company’s AI. The Google Home app is also now where customers will control their Nest devices.

There’s now a Home tab with a consolidated view of the system, an Activity tab that collects the notifications from all connected devices and an Automations tab for managing the hands-off side of the smart home hardware. The app also now has a persistent AI-powered “Ask Home” option in the header that Google describes as a “natural language command center for your entire home.” The company promises that it will be able to execute naturally written commands, such as searching for specific moments in a camera clip or creating more open-ended automations. However, some of those features will require a Google Home Premium subscription to access. 

In addition to the new Gemini features, the Google Home app has been rebuilt for increased reliability and performance. The software loads “significantly faster,” reportedly more than 70 percent faster on some Android devices. Camera views in the app should load 30 percent faster, and playback failures should be down 40 percent in the new version. Google is also boasting an almost 80 percent reduction in app crashes, and says it is working to reduce battery drain as well.

This article originally appeared on Engadget at https://www.engadget.com/home/smart-home/google-has-overhauled-its-smart-home-app-to-feature-gemini-130041497.html?src=rss 

Gemini for Home is the official replacement for Google Assistant on smart devices

Google is finally ready to explain how Gemini will replace Google Assistant in your smart home. The company’s original voice assistant will be replaced with the aptly named Gemini for Home starting this month, ushering in what might be an easier-to-use and more conversational smart home era in the process.

As Google teased at CES 2025, the biggest change Gemini for Home will introduce for Google Assistant devotees is an end to rigid commands. While you’ll still need to use a “Hey Google” wake word, the days of having to be precise are over. Google claims Gemini grasps context well enough not only to remember what your last request was, but also to understand that if you say “Hey Google, I’m about to watch a movie, turn off the lights,” you specifically mean the lights in your living room. You’ll also be able to string multiple requests together in the same sentence, and create automations just by describing them, without having to whip out the Google Home app. And when you want to ditch wake words entirely, you can start a Gemini Live chat and have a smooth back-and-forth with Gemini about whatever you choose.

AI-based improvements will also extend to any cameras in your smart home. Google says Gemini can create more useful notifications when a camera detects motion or films a notable event around your home, thanks to its semantic understanding of visuals. You can also pull up a specific piece of footage with natural language requests and even receive answers based on things your smart home recorded via a new feature called “Ask Home.” Like Ask Photos in Google Photos, Ask Home understands the context and meaning of footage you’ve captured to answer questions like “Did I leave the car door open?” And for a larger overview of what’s going on at home, “Home Brief” can identify important events you’ve filmed and “summarizes hours of footage into a quick, digestible summary you can read to catch up on what happened while you were away,” Google says.


Google says Gemini for Home will be available on all of its smart home devices released in the last decade, including new Gemini for Home-compatible doorbells and cameras created by Walmart. Unfortunately, if you’re interested in features like Gemini Live, AI-powered notifications, Ask Home and Home Brief, you’ll have to pay for a $10-per-month Google Home Premium subscription to use them. The subscription also unlocks an additional 30 days of cloud storage for any videos your smart home captures and comes included with Google’s AI Pro and Ultra subscriptions at no additional cost.

To try out Gemini for Home as soon as possible, you can sign up for early access in the Google Home app. Google says the update will roll out throughout the month of October, and come to smart speakers and smart displays “toward the end of the month.”

This article originally appeared on Engadget at https://www.engadget.com/home/smart-home/gemini-for-home-is-the-official-replacement-for-google-assistant-on-smart-devices-130041482.html?src=rss 
