Overwatch 2 will test 6v6 role queue matches starting December 17

A new season starts for Overwatch 2 next week, but one of the special modes we’ll be getting in the near future is a throwback to the past. For a limited time during season 14, there will be a 6v6 role queue mode, bringing back the original game’s composition of two tanks, two damage and two support. This mode will be available from December 17 through January 6.

The switch to five players on a team when Overwatch 2 launched was one of the more controversial choices at the time, and we heard rumblings recently that Blizzard might walk back the decision. The current season included a full-on nostalgia trip mode where you could play 6v6 with only the original heroes as they were designed at launch. Yes, back in the days of self-healing Bastion mowing down everybody and Mercy undoing it all with full-team rez. The season 14 approach to 6v6 will be for the heroes as they exist now, with the current balance design in full effect. Blizzard said in October that it would explore how the community felt about the increased team size and consider whether six-player teams should have more of a presence in the live game based on the player reactions.

It’s the right time to experiment with different tank playstyles, because that’s the role for the newest hero joining the game in season 14. Hazard is a spiky punk who deals a lot of damage at close range and can crowd control opponents by summoning a thorny wall. Think of him as a cross between Doomfist and Mei, with a Scottish accent.

The world of Avatar: The Last Airbender is coming to Overwatch 2! ⬇️✨

Join the fun when our latest collaboration arrives in-game on Dec 17 🤩 pic.twitter.com/Z0HvK17NXv

— Overwatch (@PlayOverwatch) December 5, 2024

The Overwatch X account also teased that the coming season will have another anime crossover. After collaborations with Cowboy Bebop and My Hero Academia, the next season will be channeling the elements with skins themed on Avatar: The Last Airbender. Omnic monk Zenyatta is clearly going to be reimagined as Aang, but the full lineup of cosmetics will also be unveiled on December 17.

And in a final piece of good Overwatch news, Blizzard shared that sales of the Pink Mercy charity skins earlier this year raised $12.3 million for the Breast Cancer Research Foundation. Well played, people.

This article originally appeared on Engadget at https://www.engadget.com/gaming/overwatch-2-will-test-6v6-role-queue-matches-starting-december-17-194524335.html?src=rss 

Google DeepMind’s Genie 2 can generate interactive 3D worlds

World models — AI algorithms capable of generating a simulated environment in real-time — represent one of the more impressive applications of machine learning. In the last year, there’s been a lot of movement in the field, and to that end, Google DeepMind announced Genie 2 on Wednesday. Where its predecessor was limited to generating 2D worlds, the new model can create 3D ones and sustain them for significantly longer.

Genie 2 isn’t a game engine; instead, it’s a diffusion model that generates images as the player (either a human being or another AI agent) moves through the world the software is simulating. As it generates frames, Genie 2 can infer ideas about the environment, giving it the capability to model water, smoke and physics effects — though some of those interactions can be very gamey. The model is also not limited to rendering scenes from a third-person perspective; it can also handle first-person and isometric viewpoints. All it needs to start is a single image prompt, provided either by Google’s own Imagen 3 model or a picture of something from the real world.

Introducing Genie 2: our AI model that can create an endless variety of playable 3D worlds – all from a single image. 🖼️

These types of large-scale foundation world models could enable future agents to be trained and evaluated in an endless number of virtual environments. →… pic.twitter.com/qHCT6jqb1W

— Google DeepMind (@GoogleDeepMind) December 4, 2024

Notably, Genie 2 can remember parts of a simulated scene even after they leave the player’s field of view and can accurately reconstruct those elements once they become visible again. That’s in contrast to other world models like Oasis, which, at least in the version Decart showed to the public in October, had trouble remembering the layout of the Minecraft levels it was generating in real time.

However, there are limits to what Genie 2 can do in this regard. DeepMind says the model can generate “consistent” worlds for up to 60 seconds, with the majority of the examples the company shared on Wednesday running for significantly less time; most of the videos are about 10 to 20 seconds long. Moreover, artifacts creep in and image quality softens the longer Genie 2 needs to maintain the illusion of a consistent world.

DeepMind didn’t detail how it trained Genie 2 other than to state it relied “on a large-scale video dataset.” Don’t expect DeepMind to release Genie 2 to the public anytime soon, either. For the moment, the company primarily sees the model as a tool for training and evaluating other AI agents, including its own SIMA algorithm, and something artists and designers could use to prototype and try out ideas rapidly. In the future, DeepMind suggests world models like Genie 2 are likely to play an important part on the road to artificial general intelligence.

“Training more general embodied agents has been traditionally bottlenecked by the availability of sufficiently rich and diverse training environments,” DeepMind said. “As we show, Genie 2 could enable future agents to be trained and evaluated in a limitless curriculum of novel worlds.”

This article originally appeared on Engadget at https://www.engadget.com/ai/google-deepminds-genie-2-can-generate-interactive-3d-worlds-200708207.html?src=rss 

Threads is testing post analytics

Threads’ latest test could help creators and others understand more about how their posts are performing on the platform. The company is testing an expanded version of its analytics feature, which will show users stats for specific posts, Adam Mosseri said in an update.

Up to now, Threads has had an “insights” feature, but it showed aggregated stats for all posts, so it was hard to discern which posts were performing well. Now, insights will be able to surface detailed metrics around specific posts, including views and interactions. It will also break down performance among followers and non-followers.

“Now that your posts will be shown to more people who follow you, it’s especially important to understand what’s resonating with your existing audience,” Mosseri wrote. Threads recently updated its highly criticized “for you” algorithm to surface more posts from accounts you follow, rather than random unconnected accounts.

The change could also address criticism from creators on Threads, who have said they often don’t understand how the app’s algorithm works. More detailed analytics could also help Meta entice more brands to the app as the company reportedly is gearing up to begin running ads on the service as soon as next month.

This article originally appeared on Engadget at https://www.engadget.com/social-media/threads-is-testing-post-analytics-203548697.html?src=rss 

Foamstars’ next season will be its last

The fizzy, hot pink writing appears to be on the wall for Foamstars. Square Enix said on Thursday that the next season of the 4v4 “party shooter” will be its last. To be fair, all the game’s online services will remain available after the final season’s conclusion, and there will be events for those who hang around. But with development winding down (after switching to a free-to-play model in October), it’s hard to imagine the Splatoon-meets-Fortnite shooter will be long for this world.

The final season of Foamstars, the loudly capitalized “PARTY GOES ON!”, will run from December 13 to January 17. You’ll be able to customize each character’s shots in the “concluding update” (never an encouraging phrase). After the final season is a wrap, Square Enix will bring back all season passes for you to switch between at any time. This will let you obtain all seasons’ items and rack up the full collection.

Foamstars launched this past February on PlayStation Plus. The game has unique mechanics like spraying bright foam to build terrain, sliding on top of it and… dancing on a duck’s head to push it toward a finish line (as one does). However, after today’s announcement, the bright, loud and full-of-attitude shooter appears to be sliding toward a finish line of its own, with its development team presumably moving to other projects.

You can read more about the final season and changes coming after that on Square Enix’s update page.

This article originally appeared on Engadget at https://www.engadget.com/gaming/playstation/foamstars-next-season-will-be-its-last-185127904.html?src=rss 

OpenAI wants $200 a month for its most advanced features

OpenAI kicked off its “12 Days of OpenAI” series of livestreams with the announcement of a new, more expensive tier for its flagship chatbot. Starting today, ChatGPT users can pay $200 per month for ChatGPT Pro. Included in the package is unlimited access to the company’s latest model, o1, which, following a limited preview earlier in the year, is now faster and 34 percent less likely to produce a major error when answering difficult real-world questions.

ChatGPT Pro also comes with access to GPT-4o, o1-mini and the company’s Advanced Voice mode, but the reason most power users are likely to splurge is the addition of an o1 “pro mode” that gives the chatbot additional compute power to reason through the most complex problems. “In evaluations from external expert testers, o1 pro mode produces more reliably accurate and comprehensive responses, especially in areas like data science, programming, and case law analysis,” OpenAI says of the feature.

OpenAI o1 is more concise in its thinking, resulting in faster response times than o1-preview.

Our testing shows that o1 outperforms o1-preview, reducing major errors on difficult real-world questions by 34%.

— OpenAI (@OpenAI) December 5, 2024

In the future, OpenAI says it will add more “powerful, compute-intensive productivity features” to ChatGPT Pro, with some of those enhancements arriving as early as later this week and into next week as the company continues to show off what it’s been working on over the last 11 months. More broadly, ChatGPT users can expect support for web browsing and file uploads to arrive in the future, though during the company’s livestream, OpenAI CEO Sam Altman didn’t definitively say when those features would arrive.

For the rest of us, OpenAI will continue to offer its existing ChatGPT Plus subscription, which still costs $20 per month and includes early access to new features.

This article originally appeared on Engadget at https://www.engadget.com/ai/openai-wants-200-a-month-for-its-most-advanced-features-191054506.html?src=rss 

Forza Motorsport on PC is getting an enhanced lighting upgrade

Gearheads, rejoice! Forza Motorsport for PC is getting a graphics update on Monday that adds a new realistic lighting system to the game. Nvidia announced that Ray-Traced Global Illumination (RTGI) will be part of the PC-only upgrade for Turn 10’s driving simulator.

RTGI simulates how light interacts with surfaces in a virtual environment to create more realistic-looking images. The upgrade for Forza Motorsport on PC will be able to create “more accurate indirect lighting and occlusion across tracks and cars in real-time, amping up visual fidelity and realism,” according to Nvidia’s post.

The new RTGI lighting will be applied across all modes, cinematics and features in Forza Motorsport for PC. You’ll be able to take super sleek photos of your McLaren 720S Spider executing a perfect power slide in Photo Mode, marvel at your favorite cars in your Homespace and even watch cinematics rendered with the new lighting system.

Once the update lands, RTGI must be turned on manually in the settings. Nvidia recommends setting “Raytracing Quality” to “Full Reflections + RTGI” and choosing a quality level under “RTGI Quality” to enable it.

This article originally appeared on Engadget at https://www.engadget.com/gaming/pc/forza-motorsport-on-pc-is-getting-an-enhanced-lighting-upgrade-191017416.html?src=rss 

EA just made a whole bunch of accessibility patents open-source

EA has just made 23 accessibility patents open-source, as reported by Game Developer. This means that other developers throughout the industry can use the technology at no cost. The news comes after EA made a pledge back in 2021 not to sue rival companies for co-opting these types of tools.

As of today, third parties can openly use a whole lot of patented tech to improve accessibility for users. This includes new speech recognition tools, simplified speech tech in games and the ability to create personalized speech detection algorithms. EA says other devs can use this technology to “make it possible for those players’ speech to be more effectively recognized and reflected in-game in a way that is representative of their age, emotion, language and speaking style.”

There’s also an internal plugin for Unreal Engine 5 that went into the open-source pile. This one incorporates EA’s photosensitivity analysis tech, called IRIS, and should allow developers to quickly catch potential problems that could impact players with certain health issues related to vision or the nervous system.

Kerry Hopkins, EA’s SVP of global affairs, says this new group of open-source patents “encourages the industry to work together to make video games more inclusive by removing unintended barriers to access.” The company also says that this is just the beginning of its efforts to improve accessibility across the industry, as it’s going to start running accessible design workshops and expanding its testing capabilities.

This isn’t the first time EA has made some of its proprietary accessibility technology free for competitors. It has done so for the ping system originally found in the battle royale hit Apex Legends, which gives players a way to discuss in-game strategy without having to rely on voice chat. It also makes it easier to relay location data to teammates. The tech has popped up in other games like Call of Duty: Warzone and Fortnite.

This article originally appeared on Engadget at https://www.engadget.com/gaming/ea-just-made-a-whole-bunch-of-accessibility-patents-open-source-181131893.html?src=rss 

Rivian is now letting other EVs charge at its stations

Over the last year or so, electric vehicle makers have been a little friendlier to each other, at least when it comes to their charging networks. Many automakers are now supporting Tesla’s North American Charging System (NACS), which is fast becoming the industry standard. Now Rivian’s opening its doors to drivers of other brands’ EVs.

For the first time, drivers of non-Rivian EVs will be able to top up their batteries at the company’s charging locations. This pertains to next-gen Rivian Adventure Network charging locations. The first of these opens today at Joshua Tree Charging Outpost in California. Before the year is out, Rivian plans to open more charging locations in Texas, Colorado, Illinois, Montana, Pennsylvania, Michigan and New York.

The stations offer rapid charging at up to 900 volts and have CCS connectors, which work with NACS vehicles via an adapter. Rivian says support for native NACS connectors will come later.

This isn’t entirely an altruistic step, of course. Rivian sees it as a way to generate revenue from EV drivers who perhaps happen to be closer to one of its charging stations than any other. The chargers have a tap to pay option and the Rivian app isn’t required.

Rivian plans to have more than 3,500 DC fast chargers in its Adventure Network. According to Ars Technica, the automaker has 91 Adventure Network sites in the US, with plans for 12 more. However, Rivian drivers can use Tesla Superchargers as well.

This article originally appeared on Engadget at https://www.engadget.com/transportation/evs/rivian-is-now-letting-other-evs-charge-at-its-stations-182702444.html?src=rss 

Samsung’s One UI 7 is out in beta and it’s chock full of security features

Samsung’s One UI 7 operating system is finally out in the wild, albeit in a beta format. The updated UI focuses a lot of its improvements on security and privacy, which is never a bad thing in today’s world.

There’s further integration with the company’s Knox Matrix security protocol, which began popping up in Samsung devices back in 2023. Knox Matrix continually monitors devices via a “secure private blockchain” and shows all connected gadgets on a dashboard.

This dashboard lets users instantly see the security status of various Samsung smart devices, including other Galaxy handsets, tablets, TVs and appliances. If a device shows as green in the dashboard, that means that it’s “up to date and no risks are detected.” If something is at risk, Knox Matrix will provide actionable recommendations. All Samsung devices will soon fall under the One UI umbrella, which should make this integration more seamless.


There are also new security measures put in place for recovering data from the cloud. Enhanced Data Protection makes sure that all connected devices are synchronized and secure and helps users make a backup plan in the event of data loss. To that end, One UI 7 lets users sign in to a new device by verifying the credentials of their previous device.

One UI 7 lets people create and use passkeys to log into a Samsung account and gives users more control over network connections. To the latter point, folks can block 2G service, which has well-documented security weaknesses, and make it so phones won’t automatically connect to unknown networks.

Text messages and photos have even gotten a bit of tough love to improve security. Users can remove location data from photos and block hyperlinks from text messages. Shared photo albums can also be blocked, as can automatic attachment downloads. Users can block USB connections for an added security boost. The port will still work for charging, but not for anything else.


Finally, there’s a new theft protection tool. Samsung devices could already be locked remotely, but now there’s a feature called Identity Check. This opt-in software forces users to prove they are who they say they are if a PIN becomes compromised.

The new UI also brings a simplified design, broader availability of AI tools and a redesigned camera app. The full release will also include something called the Now Bar, a new notification system that Samsung promises will “transform the lock screen experience.” It sounds a lot like Apple’s Dynamic Island and Live Activities features. Samsung’s Now Bar isn’t part of the beta, so we have to wait a bit longer to get our hands on it.

The One UI 7 beta program is available now for Galaxy S24 series devices in a bunch of different countries, including the US, Germany, India, South Korea and Poland. Users have to apply via the Samsung Members program. The full version of One UI 7 drops sometime in the first quarter of 2025.

This article originally appeared on Engadget at https://www.engadget.com/mobile/samsungs-one-ui-7-is-out-in-beta-and-its-chock-full-of-security-features-163820698.html?src=rss 

Android’s latest round of AI features improve accessibility, file sharing and more

If you’re an Android user, today is your lucky day; Google has announced a swath of new AI features for the entire ecosystem. Broadly speaking, the features make Android devices more accessible, but there’s something here for everyone.

For instance, one of the new enhancements, Expressive Captions, automatically generates subtitles that attempt to capture the emotion and intensity of what’s being said. So, let’s say you’re video chatting with a friend who groans after you make a lame dad joke. The feature will not only transcribe what they said, but it will also include “[groaning]” in the transcription. This works for other subtleties of human speech, too, such as when someone gasps or whispers something, and it works across apps on Android, including streaming and social media apps. Per Google, Expressive Captions are available on Pixel 6 and newer Pixel phones, as well as “other compatible” Android devices.

Separately, Google has enhanced Android’s Image Q&A in Lookout feature. The latest version of the tool makes use of the company’s Gemini 1.5 Pro model to provide more helpful image descriptions. Image Q&A is primarily designed to assist blind and low-vision users, but in reality, anyone can use the feature to get Android to describe a picture in a natural-sounding voice.

Speaking of Gemini, Google is supercharging the AI agent with new extensions that provide better integration with some of the most popular Android apps. For instance, a new Spotify plugin allows Gemini to play your favorite songs for you and find playlists that suit your current mood. In the future, the company is promising tighter integration with Google Maps and even smart home devices that are linked to your Google account.

Additionally, Gemini now features the capability to remember things about you so that it can provide more personalized responses. For example, you can tell Gemini you’re a vegetarian, and the agent will remember that about you the next time you ask it to recommend a new dinner recipe. Google notes it has made it easy to view, edit and delete any personal information you’ve told Gemini to remember.


Another more practical update comes in the form of a Google Drive feature called auto-enhancements. The next time you upload a scanned document to the service, it will automatically edit the image to optimize the contrast and adjust the white balance, as well as remove any shadows and blurring.

And if you want to share a file with someone, Google has made that easier, too, with an improvement to Android’s Quick Share functionality. There’s a new feature that allows you to transfer pictures, videos and documents by displaying a QR code on your phone. Using this tool, you don’t need to add the recipient as a contact or fiddle with your Quick Share settings.

Lastly, if you’re a Pixel user, you can look forward to all of the above features and more. Most notably, there are improvements to the Pixel Screenshots app. For one, now you can save things you find with Circle to Search directly to the software. Google suggests this feature will be handy for holiday gift ideas. Pixel Screenshots will now also automatically categorize your snaps for you and provide suggestions, such as calendar invites and directions, based on the information you saved. 

As with most Android updates, it can take some time for Google to roll out all the new features it announced today to every user, so be patient if you don’t see them on your device right away.

This article originally appeared on Engadget at https://www.engadget.com/mobile/smartphones/androids-latest-round-of-ai-features-improve-accessibility-file-sharing-and-more-170020518.html?src=rss 
