Google’s Project Gameface hands-free ‘mouse’ launches on Android

At last year’s Google I/O developer conference, the company introduced Project Gameface, a hands-free gaming “mouse” that allows users to control a computer’s cursor with movements of their head and facial gestures. This year, Google has announced that it has open-sourced more code for Project Gameface, allowing developers to build Android applications that can use the technology. 

The tool relies on the phone’s front camera to track facial expressions and head movements, which can be used to control a virtual cursor. A user could smile to “select” items onscreen, for instance, or raise their left eyebrow to go back to the home screen on an Android phone. In addition, users can set thresholds or gesture sizes for each expression, so they can control how pronounced an expression must be to trigger a specific mouse action.
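To make that threshold idea concrete, here is a minimal, hypothetical Python sketch (not Google’s released code) of how per-gesture thresholds might map expression scores to cursor actions. In the open-source project, scores like these come from face landmark blendshapes with values between 0 and 1; the gesture names, threshold values and actions below are assumptions for illustration.

```python
# Illustrative sketch only (not Google's released code): mapping per-frame
# facial-gesture scores to cursor actions using user-tunable thresholds,
# in the spirit of Project Gameface. The gesture names, thresholds and
# actions here are assumptions for illustration.

# How pronounced each expression must be before it triggers.
GESTURE_THRESHOLDS = {
    "mouthSmileLeft": 0.6,   # smile -> "select"
    "browOuterUpLeft": 0.5,  # raised left eyebrow -> "go home"
}

GESTURE_ACTIONS = {
    "mouthSmileLeft": "select",
    "browOuterUpLeft": "go_home",
}

def actions_for_frame(scores: dict) -> list:
    """Return the cursor actions triggered by one frame of gesture scores."""
    return [
        GESTURE_ACTIONS[gesture]
        for gesture, threshold in GESTURE_THRESHOLDS.items()
        if scores.get(gesture, 0.0) >= threshold
    ]

# A strong smile fires "select"; the faint brow raise stays below threshold.
print(actions_for_frame({"mouthSmileLeft": 0.8, "browOuterUpLeft": 0.2}))
# -> ['select']
```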

The company developed Project Gameface with gaming streamer Lance Carr, who has muscular dystrophy, a condition that weakens the muscles. Carr used a head-tracking mouse to game until a fire destroyed his home, along with his expensive equipment. The early version of Project Gameface focused on gaming and used a webcam to detect facial expressions, though Google knew from the start that the technology had a lot of other potential uses.

For the tool’s Android launch, Google teamed up with an Indian organization called Incluzza that supports people with disabilities. The partnership gave the company the chance to learn how Project Gameface can help people with disabilities further their studies, communicate with friends and family more easily and find jobs online. Google has released the project’s open source code on GitHub and is hoping that more developers decide to “leverage it to build new experiences.”

Catch up on all the news from Google I/O 2024 right here!

This article originally appeared on Engadget at https://www.engadget.com/googles-project-gameface-hands-free-mouse-launches-on-android-123029158.html?src=rss 

The Morning After: The biggest news from Google’s I/O keynote

Google boss Sundar Pichai wrapped up the company’s I/O developer conference by noting its almost-two-hour presentation had mentioned AI 121 times. It was everywhere.

Google’s newest AI model, Gemini 1.5 Flash, is built for speed and efficiency. The company said it created Flash because developers wanted a lighter, less expensive model than Gemini Pro to build AI-powered apps and services.

Google says it’ll double Gemini’s context window to two million tokens, enough to process two hours of video, 22 hours of audio, more than 60,000 lines of code or 1.4 million-plus words at the same time.

But the bigger news is how the company is sewing AI into all the things you’re already using. With Search, it’ll be able to answer your complex questions (a la Copilot in Bing), but for now, you’ll have to sign up for the company’s Search Labs to try that out. AI-generated answers will also appear alongside typical search results, just in case the AI knows better.

Google Photos was already pretty smart at searching for specific images or videos, but with AI, Google is taking things to the next level. If you’re a Google One subscriber in the US, you will be able to ask Google Photos a complex question, like “show me the best photo from each national park I’ve visited.” You can also ask Google Photos to generate captions for you.

And if you have an Android phone, Gemini is being integrated directly into the device. Gemini will know the app, image or video you’re running, and you’ll be able to pull it up as an overlay and ask it context-specific questions, like how to change settings or maybe even who’s displayed on screen.

While these were the bigger beats, there was an awful lot to chew over. Check out all the headlines right here.

— Mat Smith

The biggest stories you might have missed

Google wants you to relax and have a natural chat with Gemini Live

Google Pixel 8a review

Google unveils Veo and Imagen 3, its latest AI media creation models

You can get these reports delivered daily direct to your inbox. Subscribe right here!

Google reveals its visual AI assistant, Project Astra

Full of potential.


One of Google’s bigger projects is its visual multimodal AI assistant, currently called Project Astra. It taps into your smartphone (or smart glasses) camera and can contextually analyze and answer questions on the things it sees. Project Astra can offer silly wordplay suggestions, as well as identify and define the things it sees. A video demo shows Project Astra identifying the tweeter part of a speaker. It’s equal parts impressive and, well, familiar. We tested it out, right here.

Continue reading.

X now treats the term cisgender as a slur

Elon Musk continues to add policy after baffling policy.

The increasingly unhinged world of X (Twitter) now considers the term ‘cisgender’ a slur. Owner Elon Musk posted last June, to the delight of his unhingiest users, that “‘cis’ or ‘cisgender’ are considered slurs on this platform.” On Tuesday, X reportedly began posting an official warning. A quick reminder: It’s not a slur.

Continue reading.

OpenAI co-founder Ilya Sutskever is leaving the company

He’s moving to a new project.

Ilya Sutskever announced on X, formerly Twitter, that he’s leaving OpenAI almost a decade after he co-founded the company. He’s confident OpenAI “will build [artificial general intelligence] that is both safe and beneficial” under the leadership of CEO Sam Altman, President Greg Brockman and CTO Mira Murati. While Sutskever and Altman praised each other in their farewell messages, the two were embroiled in the company’s biggest scandal last year. Sutskever, who was a board member at the time, was involved in both of their dismissals.

Continue reading.

This article originally appeared on Engadget at https://www.engadget.com/the-morning-after-the-biggest-news-from-googles-io-keynote-111531702.html?src=rss 

OpenAI co-founder and Chief Scientist Ilya Sutskever is leaving the company

Ilya Sutskever has announced on X, formerly known as Twitter, that he’s leaving OpenAI almost a decade after he co-founded the company. He said he’s confident that OpenAI “will build [artificial general intelligence] that is both safe and beneficial” under the leadership of CEO Sam Altman, President Greg Brockman and CTO Mira Murati. In his own post about Sutskever’s departure, Altman called him “one of the greatest minds of our generation” and credited him for his work with the company. Jakub Pachocki, OpenAI’s former Director of Research who headed the development of GPT-4 and OpenAI Five, has taken Sutskever’s role as Chief Scientist.

After almost a decade, I have made the decision to leave OpenAI. The company’s trajectory has been nothing short of miraculous, and I’m confident that OpenAI will build AGI that is both safe and beneficial under the leadership of @sama, @gdb, @miramurati and now, under the…

— Ilya Sutskever (@ilyasut) May 14, 2024

While Sutskever and Altman praised each other in their farewell messages, the two were embroiled in the company’s biggest scandal last year. In November, OpenAI’s board of directors suddenly fired Altman and company President Greg Brockman. “[T]he board no longer has confidence in [Altman’s] ability to continue leading OpenAI,” the ChatGPT maker announced at the time. Sutskever, who was a board member, was involved in their dismissal and was the one who called Altman and Brockman into the separate meetings where they were informed that they were being fired. According to reports that came out at the time, Altman and Sutskever had been butting heads over how quickly OpenAI was developing and commercializing its generative AI technology.

Both Altman and Brockman were reinstated just five days after they were fired, and the original board was disbanded and replaced with a new one. Shortly before that happened, Sutskever posted on X that he “deeply regre[tted his] participation in the board’s actions” and that he would do everything he could “to reunite the company.” He then stepped down from his role as a board member, and while he remained Chief Scientist, The New York Times says he never really returned to work.

Sutskever shared that he’s moving on to a new project that’s “very personally meaningful” to him, though he has yet to share details about it. As for OpenAI, it recently unveiled GPT-4o, which it claims can recognize emotion and can process and generate output in text, audio and images.

Ilya and OpenAI are going to part ways. This is very sad to me; Ilya is easily one of the greatest minds of our generation, a guiding light of our field, and a dear friend. His brilliance and vision are well known; his warmth and compassion are less well known but no less…

— Sam Altman (@sama) May 14, 2024

This article originally appeared on Engadget at https://www.engadget.com/openai-co-founder-and-chief-scientist-ilya-sutskever-is-leaving-the-company-054650964.html?src=rss 

Google Project Astra hands-on: Full of potential, but it’s going to be a while

At I/O 2024, Google’s teaser for Project Astra gave us a glimpse at where AI assistants are going in the future. It’s a multimodal feature that combines the smarts of Gemini with the kind of image recognition abilities you get in Google Lens, as well as powerful natural language responses. However, while the promo video was slick, after getting to try it out in person, it’s clear there’s a long way to go before something like Astra lands on your phone. Here are some takeaways from our first experience with Google’s next-gen AI.

Sam’s take:

Currently, most people interact with digital assistants using their voice, so right away Astra’s multimodality (i.e., using sight and sound in addition to text and speech to communicate with an AI) is relatively novel. In theory, it allows computer-based entities to work and behave more like a real assistant or agent – which was one of Google’s big buzzwords for the show – instead of something more robotic that simply responds to spoken commands.

Photo by Sam Rutherford/Engadget

In our demo, we had the option of asking Astra to tell a story based on some objects we placed in front of the camera, after which it told us a lovely tale about a dinosaur and its trusty baguette trying to escape an ominous red light. It was fun, the tale was cute, and the AI worked about as well as you would expect. But at the same time, it was far from the seemingly all-knowing assistant we saw in Google’s teaser. And aside from maybe entertaining a child with an original bedtime story, it didn’t feel like Astra was doing as much with the info as you might want.

Then my colleague Karissa drew a bucolic scene on a touchscreen, at which point Astra correctly identified the flower and sun she painted. But the most engaging demo was when we circled back for a second go with Astra running on a Pixel 8 Pro. This allowed us to point its cameras at a collection of objects while it tracked and remembered each one’s location. It was even smart enough to recognize my clothing and where I had stashed my sunglasses even though these objects were not originally part of the demo.

In some ways, our experience highlighted the potential highs and lows of AI. Just the ability for a digital assistant to tell you where you might have left your keys or how many apples were in your fruit bowl before you left for the grocery store could help you save some real time. But after talking to some of the researchers behind Astra, there are still a lot of hurdles to overcome.

Photo by Sam Rutherford/Engadget

Unlike a lot of Google’s recent AI features, Astra (which Google describes as a “research preview”) still needs help from the cloud instead of being able to run on-device. And while it does support some level of object permanence, those “memories” only last for a single session, which currently spans only a few minutes. Even if Astra could remember things for longer, there are factors like storage and latency to consider: for every object Astra recalls, you risk slowing down the AI, resulting in a more stilted experience. So while it’s clear Astra has a lot of potential, my excitement was weighed down with the knowledge that it will be some time before we can get more full-featured functionality.
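To illustrate that tradeoff, here is a toy Python sketch of a session-scoped object memory (our own illustration, not Google’s implementation) in which remembered sightings expire after a few minutes and the store is capped so recall stays fast; the TTL and cap values are made-up assumptions.

```python
# Toy illustration (our own sketch, not Google's implementation): a
# session-scoped object memory in which remembered sightings expire after
# a few minutes and the store is capped. TTL and cap are assumptions.
import time
from collections import OrderedDict

SESSION_TTL_S = 180  # assumed "few-minute" session window
MAX_OBJECTS = 64     # assumed cap to bound storage and lookup latency

class SessionObjectMemory:
    def __init__(self):
        self._seen = OrderedDict()  # object name -> (location, timestamp)

    def observe(self, name, location):
        """Record (or refresh) where an object was last seen."""
        self._seen.pop(name, None)
        self._seen[name] = (location, time.time())
        while len(self._seen) > MAX_OBJECTS:
            self._seen.popitem(last=False)  # evict the oldest sighting

    def recall(self, name):
        """Return the last known location, or None once the session lapses."""
        entry = self._seen.get(name)
        if entry is None:
            return None
        location, seen_at = entry
        if time.time() - seen_at > SESSION_TTL_S:
            del self._seen[name]  # expired with the session window
            return None
        return location

memory = SessionObjectMemory()
memory.observe("sunglasses", "on the desk, next to the speaker")
print(memory.recall("sunglasses"))  # -> 'on the desk, next to the speaker'
```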

Karissa’s take:

Of all the generative AI advancements, multimodal AI has been the one I’m most intrigued by. As powerful as the latest models are, I have a hard time getting excited about iterative updates to text-based chatbots. But the idea of AI that can recognize and respond to queries about your surroundings in real time feels like something out of a sci-fi movie. It also gives a much clearer sense of how the latest wave of AI advancements will find their way into new devices like smart glasses.

Google offered a hint of that with Project Astra, which may one day have a glasses component, but for now is mostly experimental (the glasses shown in the demo video during the I/O keynote were apparently a “research prototype”). In person, though, Project Astra didn’t exactly feel like something out of a sci-fi flick.

Photo by Sam Rutherford/Engadget

It was able to accurately recognize objects that had been placed around the room and respond to nuanced questions about them, like “which of these toys should a 2-year-old play with?” It could recognize what was in my doodle and make up stories about different toys we showed it.

But most of Astra’s capabilities seemed on-par with what Meta has already made available with its smart glasses. Meta’s multimodal AI can also recognize your surroundings and do a bit of creative writing on your behalf. And while Meta also bills the features as experimental, they are at least broadly available.

The Astra feature that may set Google’s approach apart is its built-in “memory.” After scanning a bunch of objects, it could still “remember” where specific items were placed. For now, it seems Astra’s memory is limited to a relatively short window of time, but members of the research team told us that it could theoretically be expanded. That would obviously open up even more possibilities for the tech, making Astra seem more like an actual assistant. I don’t need to know where I left my glasses 30 seconds ago, but if it could remember where I left them last night, that would actually feel like sci-fi come to life.

But, like so much of generative AI, the most exciting possibilities are the ones that haven’t quite happened yet. Astra might get there eventually, but right now it feels like Google still has a lot of work to do.

Catch up on all the news from Google I/O 2024 right here!

This article originally appeared on Engadget at https://www.engadget.com/google-project-astra-hands-on-full-of-potential-but-its-going-to-be-a-while-235607743.html?src=rss 

Engadget Podcast: The good, the bad and the AI of Google I/O 2024

We just wrapped up coverage of Google’s I/O 2024 keynote, and we’re already so tired of hearing about AI. In this bonus episode, Cherlynn and Devindra dive into the biggest I/O news: Google’s intriguing Project Astra AI assistant; new models for creating video and images; and some improvements to Gemini AI. While some of the announcements seem potentially useful, it’s still tough to tell if the move toward AI will actually help consumers, or if Google is just fighting to stay ahead of OpenAI.

Listen below or subscribe on your podcast app of choice. If you’ve got suggestions or topics you’d like covered on the show, be sure to email us or drop a note in the comments! And be sure to check out our other podcast, Engadget News!

Subscribe!

iTunes

Spotify

Pocket Casts

Stitcher

Google Podcasts

Livestream

Credits 

Hosts: Cherlynn Low and Devindra Hardawar
Music: Dale North

This article originally appeared on Engadget at https://www.engadget.com/engadget-podcast-the-good-the-bad-and-the-ai-of-google-io-2024-221741082.html?src=rss 

Everything announced at Google I/O 2024 including Gemini AI, Project Astra, Android 15 and more

At the end of I/O, Google’s annual developer conference at the Shoreline Amphitheater in Mountain View, Google CEO Sundar Pichai revealed that the company had said “AI” 121 times. That, essentially, was the crux of Google’s two-hour keynote — stuffing AI into every Google app and service used by more than two billion people around the world. Here are all the major updates that Google announced at the event.

Gemini 1.5 Flash and updates to Gemini 1.5 Pro


Google announced a brand-new AI model called Gemini 1.5 Flash, which it says is optimized for speed and efficiency. Flash sits between Gemini 1.5 Pro and Gemini 1.5 Nano, which is the company’s smallest model and runs locally on device. Google said it created Flash because developers wanted a lighter and less expensive model than Gemini Pro for building AI-powered apps and services, while keeping features like the one-million-token context window that differentiates Gemini Pro from competing models. Later this year, Google will double Gemini’s context window to two million tokens, which means it will be able to process two hours of video, 22 hours of audio, more than 60,000 lines of code or more than 1.4 million words at the same time.
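As a rough illustration of what those context windows mean in practice, here is a short, hedged Python sketch using Google’s google-generativeai SDK (as it existed around this announcement) to count a document’s tokens against Gemini 1.5 Flash’s one-million-token window before sending it; the API key and file name are placeholders.

```python
# A rough, hedged sketch: checking whether a long document fits in
# Gemini 1.5 Flash's context window before sending it, using the
# google-generativeai Python SDK (pip install google-generativeai).
# The API key and file name are placeholders.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key
model = genai.GenerativeModel("gemini-1.5-flash")

with open("long_transcript.txt") as f:  # hypothetical large input
    document = f.read()

# Gemini 1.5 launched with a one-million-token window; Google says
# two million tokens are coming later this year.
tokens = model.count_tokens(document).total_tokens
print(f"Document is {tokens} tokens")

if tokens <= 1_000_000:
    response = model.generate_content(
        ["Summarize the key points of this document:", document]
    )
    print(response.text)
else:
    print("Too long even for the 1M-token window; split it or wait for 2M.")
```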

Project Astra


Google showed off Project Astra, an early version of a universal AI assistant that Google DeepMind CEO Demis Hassabis said was Google’s version of an AI agent “that can be helpful in everyday life.”

In a video that Google says was shot in a single take, an Astra user moves around Google’s London office holding up their phone and pointing the camera at various things — a speaker, some code on a whiteboard, the view out a window — and has a natural conversation with the app about what it sees. In one of the video’s most impressive moments, the assistant correctly tells the user where she left her glasses, without her ever having brought them up.

The video ends with a twist — when the user finds and wears the missing glasses, we learn that they have an onboard camera system and are capable of using Project Astra to seamlessly carry on a conversation with the user, perhaps indicating that Google might be working on a competitor to Meta’s Ray-Ban smart glasses.

Ask Google Photos


Google Photos was already intelligent when it came to searching for specific images or videos, but with AI, Google is taking things to the next level. If you’re a Google One subscriber in the US, you will be able to ask Google Photos a complex question like “show me the best photo from each national park I’ve visited” when the feature rolls out over the next few months. Google Photos will use GPS information, as well as its own judgment of what is “best,” to present you with options. You can also ask Google Photos to generate captions for posting the photos to social media.

Veo and Imagen 3


Google’s new AI-powered media creation engines are called Veo and Imagen 3. Veo is Google’s answer to OpenAI’s Sora. It can produce “high-quality” 1080p videos that can last “beyond a minute,” Google said, and can understand cinematic concepts like timelapses.

Imagen 3, meanwhile, is a text-to-image generator that Google claims handles text better than its predecessor, Imagen 2. The result is the company’s “highest quality” text-to-image model, with an “incredible level of detail” for “photorealistic, lifelike images” and fewer artifacts — essentially pitting it against OpenAI’s DALL-E 3.

Big updates to Google Search


Google is making big changes to how Search fundamentally works. Most of the updates announced today, like the ability to ask really complex questions (“Find the best yoga or pilates studios in Boston and show details on their intro offers and walking time from Beacon Hill.”) and the ability to use Search to plan meals and vacations, won’t be available unless you opt in to Search Labs, the company’s platform that lets people try out experimental features.

But a big new feature, which Google calls AI Overviews and has been testing for a year now, is finally rolling out to millions of people in the US. Google Search will now present AI-generated answers on top of the results by default, and the company says it will bring the feature to more than a billion users around the world by the end of the year.

Gemini on Android


Google is integrating Gemini directly into Android. When Android 15 arrives later this year, Gemini will be aware of the app, image or video you’re running, and you’ll be able to pull it up as an overlay and ask it context-specific questions. Where does that leave Google Assistant, which already does this? Who knows! Google didn’t bring it up at all during today’s keynote.

There were a bunch of other updates too. Google said it would add digital watermarks to AI-generated video and text, make Gemini accessible in the side panel in Gmail and Docs, power a virtual AI teammate in Workspace, and listen in on phone calls to detect if you’re being scammed in real time, among other things.

Catch up on all the news from Google I/O 2024 right here!

This article originally appeared on Engadget at https://www.engadget.com/everything-announced-at-google-io-2024-including-gemini-ai-project-astra-android-15-and-more-210414580.html?src=rss 

X now treats the term cisgender as a slur

The increasingly discriminatory X (Twitter) now considers the term “cisgender” a slur. Owner Elon Musk posted last June, to the delight of his bigoted brigade of blue-check sycophants, that “‘cis’ or ‘cisgender’ are considered slurs on this platform.” On Tuesday, X made good on the regressive provocateur’s stance and reportedly began posting an official warning that the LGBTQ-inclusive terms could result in a ban from the platform. Not that you’d miss much.

TechCrunch reported on Tuesday that trying to publish a post using the terms “cisgender” or “cis” in the X mobile app will pop up a full-screen warning reading, “This post contains language that may be considered a slur by X and could be used in a harmful manner in violation of our rules.” It then gives you the choice of continuing to publish the post or conforming to the backward views of the worst of us and deleting it.

Of course, neither form of the term cisgender is a slur.

As the historically marginalized transgender community finally began finding at least a sliver of widespread and long-overdue social acceptance in the 21st century, the term became more commonly used in the mainstream lexicon to describe people whose gender identity matches the sex they were assigned at birth. Organizations including the American Psychological Association, the World Health Organization, the American Medical Association and the American Psychiatric Association recognize the term.

But some people have a hard time accepting and respecting that some humans are different from others. Those fantasizing (against all evidence and scientific consensus) that the heteronormative ideals they grew up with are absolute gospel sometimes take great offense at being asked to adjust their vocabulary to communicate respect for a community that has spent centuries forced to live in the shadows or risk their safety due to the widespread pathologization of their identities.

Musk seems to consider those the good ol’ days.

This isn’t the billionaire’s first ride on the Transphobe Train. After his backward tweet last June (on the first day of Pride Month, no less), the edgelord’s platform ran a timeline takeover ad from a right-wing nonprofit, plugging a transphobic propaganda film. In case you’re wondering if the group may have anything of value to say, TechCrunch notes that the same organization also doubts climate change and downplays the dehumanizing atrocities of slavery.

X also reversed course on a policy, implemented long before Musk’s takeover, that banned the deadnaming or misgendering of transgender people.

This article originally appeared on Engadget at https://www.engadget.com/x-now-treats-the-term-cisgender-as-a-slur-211117779.html?src=rss 

Animal Well speedrunners are already beating the game in under five minutes

Animal Well is one of the hottest games around. It quickly shot to the top of Steam’s top-seller chart after it was released to glowing reviews last Thursday. 

While most players complete the main story in four to six hours, it hasn’t taken long for speedrunners to figure out how to blaze through solo developer Billy Basso’s eerie labyrinth. YouTubers are already posting runs of under five minutes, and the any% record (i.e., the best recorded time without any restrictions) is being smashed over and over.

Within a couple of hours of Hubert0987 claiming the world record with a 4:44 run on Thursday, The DemonSlayer6669 appeared to snag bragging rights with one that was 18 seconds faster and perhaps the first recorded sub-4:30 time. (Don’t watch the video just yet if you haven’t beaten the game and would like to avoid spoilers.)

Animal Well hasn’t even been out for a week, so you can expect records to keep tumbling as runners optimize routes to the game’s final plunger. It’s cool to already see a speedrunning community form around a new game as skilled players duke it out, perhaps for the chance to show off their skills at the next big Games Done Quick event.

This article originally appeared on Engadget at https://www.engadget.com/animal-well-speedrunners-are-already-beating-the-game-in-under-five-minutes-195259598.html?src=rss 

Gemini will be accessible in the side panel on Google apps like Gmail and Docs

Google is adding Gemini-powered AI automation to more tasks in Workspace. In its Tuesday Google I/O keynote, the company said its advanced Gemini 1.5 Pro will soon be available in the Workspace side panel as “the connective tissue across multiple applications with AI-powered workflows,” as AI grows more intelligent, learns more about you and automates more of your workflow.

Gemini’s job in Workspace is to save you the time and effort of digging through files, emails and other data from multiple apps. “Workspace in the Gemini era will continue to unlock new ways of getting things done,” Google Workspace VP Aparna Pappu said at the event.

The refreshed Workspace side panel, coming first to Gmail, Docs, Sheets, Slides and Drive, will let you chat with Gemini about your content. Its longer context window (essentially, its memory) allows it to organize, understand and contextualize your data from different apps without leaving the one you’re in. This includes things like comparing receipt attachments, summarizing (and answering back-and-forth questions about) long email threads, or highlighting key points from meeting recordings.


Another example Google provided was planning a family reunion when your grandmother asks for hotel information. With the Workspace side panel, you can ask Gemini to find the Google Doc with the booking information by using the prompt, “What is the hotel name and sales manager email listed in @Family Reunion 2024?” Google says it will find the document and give you a quick answer, allowing you to insert it into your reply as you save time by faking human authenticity for poor Grandma.

The email-based changes are coming to the Gmail mobile app, too. “Gemini will soon be able to analyze email threads and provide a summarized view with the key highlights directly in the Gmail app, just as you can in the side panel,” the company said.

Summarizing in the Gmail app is coming to Workspace Labs this month. Meanwhile, the upgraded Workspace side panel will arrive starting Tuesday for Workspace Labs and Gemini for Workspace Alpha users. Google says all the features will arrive for the rest of Workspace customers and Google One AI Premium users next month.

Catch up on all the news from Google I/O 2024 right here!

This article originally appeared on Engadget at https://www.engadget.com/gemini-will-be-accessible-in-the-side-panel-on-google-apps-like-gmail-and-docs-185406695.html?src=rss 

Google Search will now show AI-generated answers to millions by default

Google is shaking up Search. On Tuesday, the company announced big new AI-powered changes to the world’s dominant search engine at I/O, Google’s annual conference for developers. With the new features, Google is positioning Search as more than a way to simply find websites. Instead, the company wants people to use its search engine to directly get answers and help them with planning events and brainstorming ideas.

“[With] generative AI, Search can do more than you ever imagined,” wrote Liz Reid, vice president and head of Google Search, in a blog post. “So you can ask whatever’s on your mind or whatever you need to get done — from researching to planning to brainstorming — and Google will take care of the legwork.”

Google’s changes to Search, the primary way the company makes money, are a response to the explosion of generative AI since OpenAI’s ChatGPT was released at the end of 2022. Since then, a handful of AI-powered apps and services, including ChatGPT, Anthropic’s Claude, Perplexity and Microsoft’s Bing, which is powered by OpenAI’s GPT-4, have challenged Google’s flagship service by directly providing answers to questions instead of simply presenting people with a list of links. This is the gap that Google is racing to bridge with its new features in Search.

Starting today, Google will show complete AI-generated answers in response to most search queries at the top of the results page in the US. Google first unveiled the feature a year ago at Google I/O in 2023, but until now, anyone who wanted to use it had to sign up for the company’s Search Labs platform, which lets people try out upcoming features ahead of their general release. Google is now making AI Overviews available to hundreds of millions of Americans, and says it expects the feature to reach more than a billion people in additional countries by the end of the year. Reid wrote that people who opted to try the feature through Search Labs have used it “billions of times” so far, and said that any links included as part of the AI-generated answers get more clicks than they would if the page had appeared as a traditional web listing, something publishers have been concerned about. “As we expand this experience, we’ll continue to focus on sending valuable traffic to publishers and creators,” Reid wrote.

In addition to AI Overviews, searching for certain queries around dining and recipes, and later movies, music, books, hotels, shopping and more in English in the US, will show a new search page where results are organized using AI. “[When] you’re looking for ideas, Search will use generative AI to brainstorm with you and create an AI-organized results page that makes it easy to explore,” Reid said in the blog post.


If you opt in to Search Labs, you’ll be able to access even more features powered by generative AI in Google Search. You’ll be able to get AI Overviews to simplify their language or break down a complex topic in more detail. Here’s an example: a query asking Google to explain the connection between lightning and thunder.


Search Labs testers will also be able to ask Google really complex questions in a single query to get answers on a single page instead of having to do multiple searches. The example that Google’s blog post gives: “Find the best yoga or pilates studios in Boston and show details on their intro offers and walking time from Beacon Hill.” In response, Google shows the highest-rated yoga and pilates studios near Boston’s Beacon Hill neighborhood and even puts them on a map for easy navigation.


Google also wants to become a meal and vacation planner by letting people who sign up for Search Labs ask queries like “create a 3 day meal plan for a group that’s easy to prepare” and letting you swap out individual results in its AI-generated plan with something else (swapping a meat-based dish in a meal plan for a vegetarian one, for instance).


Finally, Google will eventually let anyone who signs up for Search Labs use a video as a search query instead of text or images. “Maybe you bought a record player at a thrift shop, but it’s not working when you turn it on and the metal piece with the needle is drifting unexpectedly,” wrote Reid in Google’s blog post. “Searching with video saves you the time and trouble of finding the right words to describe this issue, and you’ll get an AI Overview with steps and resources to troubleshoot.”

Google said that all these new capabilities are powered by a brand new Gemini model customized for Search that combines Gemini’s advanced multi-step reasoning and multimodal abilities with Google’s traditional search systems.

Catch up on all the news from Google I/O 2024 right here!

This article originally appeared on Engadget at https://www.engadget.com/google-search-will-now-show-ai-generated-answers-to-millions-by-default-174512845.html?src=rss 
