Meta wants to become the Android of robotics

Assuming it can turn its Project Orion augmented reality glasses into a real product people can buy, Meta apparently wants to get into robots next. That’s according to Sources’ Alex Heath, who spoke to Meta CTO Andrew Bosworth and reports that, much like Apple, Google and Tesla, Meta is researching robotics.

Unlike those other companies, though, Meta apparently isn’t all that focused on competing in hardware. It has a “Metabot” in the works, but its real goal is to create software that other companies can license, much like Google does with Android. “Software is the bottleneck,” according to Bosworth, and the hope is that the combined powers of Meta’s robotics team — led by Marc Whitten, the former CEO of Cruise — and its highly publicized Superintelligence Labs can produce a solution.

That work apparently starts with the development of a “world model” that can help a robot “do the software simulation required to animate a dexterous hand,” but will presumably extend to more complicated movements and tasks down the road. In February 2025, Meta was reportedly looking at building a robot that could handle household chores like cleaning or folding laundry. Given how early everything sounds, that’s likely a long way off.

Meta isn’t alone in pursuing robotics. Apple is reportedly working on its own home robots, starting with a table-mounted arm with a display. Tesla has regularly demoed versions of its Optimus robot to the public, though often in highly-controlled scenarios. Meta has yet to realize its goal of usurping the smartphone with AR glasses. Whether or not it does, it sounds like robots will be the thing it burns money on next.

This article originally appeared on Engadget at https://www.engadget.com/big-tech/meta-wants-to-become-the-android-of-robotics-220701800.html?src=rss 

DJI loses lawsuit over Pentagon’s ‘Chinese military company’ list

It’s been nearly a year since DJI sued the Department of Defense over its designation as a “Chinese military company.” On Friday, a judge ruled against the drone maker. US District Judge Paul Friedman said the DoD presented enough evidence that DJI contributes to the Chinese military.

“Indeed, DJI acknowledges that its technology can and is used in military conflict but asserts that its policies prohibit such use,” Friedman wrote in his opinion. “Whether or not DJI’s policies prohibit military use is irrelevant. That does not change the fact that DJI’s technology has both substantial theoretical and actual military application.”

DJI challenged the designation in October 2024. It told the court it is “neither owned nor controlled by the Chinese military.” The company claimed in its filing that it suffered “ongoing financial and reputational harm” as a result of the inclusion. The designation can prevent companies from accessing grants, contracts, loans and other programs.

The drone maker has a contentious history with the US government. The Department of Commerce added it and 77 other companies to its Entity List in 2020, effectively blocking US businesses from dealing with them. A year later, the Treasury Department included DJI on its “Chinese military-industrial complex companies” list. That designation was for its alleged involvement in the surveillance of Uyghur Muslim people in China. Last year, US customs began holding up DJI’s consumer drones at the border.

The company now faces a potential import ban in the US by the end of this year. The ban was initially scheduled for 2024. But a clause in the $895 billion US Defense Bill gave it a year to prove that its products don’t pose a national security risk. In March, DJI pleaded with five national security agencies (DHS, DoD, FBI, NSA, and ODNI) to begin evaluating its products “right away.”

This article originally appeared on Engadget at https://www.engadget.com/big-tech/dji-loses-lawsuit-over-pentagons-chinese-military-company-list-204804617.html?src=rss 

YouTube Premium adds high-quality audio and 4x playback for iOS, Android and desktop

Google is expanding access to YouTube Premium features like faster playback speeds and high-quality audio to more types of devices. Most people subscribe to YouTube Premium to remove ads from YouTube and get access to YouTube Music, but Google also includes a variety of “power-user” features that give subscribers more granular control over their viewing or listening experience. Now those features will be available in more places.

YouTube Premium’s faster playback speeds (in 0.5x increments from 1x to 4x speed) are now available across Android, iOS and the web, after initially only being available in the mobile YouTube app. The ability to have YouTube automatically download Shorts to view offline, or to watch Shorts in a picture-in-picture window, is now also available on both iOS and Android, after originally launching on Android. Google says Premium’s Jump Ahead feature for skipping to “key moments” of a video is now also available on smart TVs and game consoles.

On the music side of the house, the big change has to do with audio quality. When you’re watching a music video, Google says you’ll now be able to select “High” from the audio settings and listen at a 256kbps bitrate. This change applies to “Art Tracks” as well, which are videos of songs available on the wider YouTube platform that don’t have an official music video. The “High” quality option was originally only available in the YouTube Music app, but now Google says you can access it across the Android and iOS versions of both YouTube Music and YouTube.
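For a sense of what that 256kbps bitrate means in practice, here is a back-of-the-envelope sketch of the data used per minute of streaming. It assumes a constant bitrate, which real audio streams only approximate, and the helper name is ours, not anything from YouTube's apps.

```python
def mb_per_minute(kbps: int) -> float:
    """Rough data usage of a constant-bitrate audio stream, in megabytes.

    kbps is kilobits per second; 60 seconds per minute, 8 bits per byte,
    1,000,000 bytes per (decimal) megabyte.
    """
    bits_per_minute = kbps * 1000 * 60
    return bits_per_minute / 8 / 1_000_000


# At YouTube Premium's "High" setting: roughly 1.92 MB per minute of audio.
print(mb_per_minute(256))
```

So an hour of listening at the “High” setting works out to a bit over 100 MB, versus roughly half that at a typical 128kbps default.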

None of these updates change what the main benefit of a $13.99-per-month YouTube Premium subscription is, of course, but for the price, it’s good Google is trying to unify the experience across devices.

This article originally appeared on Engadget at https://www.engadget.com/entertainment/youtube/youtube-premium-adds-high-quality-audio-and-4x-playback-for-ios-android-and-desktop-212214797.html?src=rss 

Apple reportedly made a ChatGPT-clone to test Siri’s new capabilities

In the pursuit of actually releasing the updated version of Siri the company promised way back at WWDC 2024, Apple is taking a page out of OpenAI’s book. According to Bloomberg, the company has created a ChatGPT-inspired app to test Siri’s new capabilities ahead of the release of the improved voice assistant next year.

This new app, called “Veritas” internally, will likely never make its way to the public in its current form, but offers Apple employees a faster way to test Siri’s new skills. That includes letting users search through personal data stored on their phone, like their emails and messages, or taking action in apps, like editing photos. The new app is apparently also a way for Apple to “gather feedback on whether the chatbot format has value,” Bloomberg writes.

While an internal app doesn’t make it any clearer how useful Apple’s updated Siri will be, it does suggest the project is in a more advanced stage than before. Given the difficulty the company’s faced actually releasing its various AI products — including publicly delaying the Siri update back in March 2025 — that’s meaningful.

Apple’s original promise for Apple Intelligence was that it could offer a curated selection of AI-powered features with a level of privacy and polish that its competitors couldn’t muster. The reality is that Apple shipped a collection of so-so features that worked, but couldn’t pull off its truly impressive demo: a Siri informed on the context of your life and with the ability to actually do things on your phone.

Apple is only realizing that vision in 2026, Bloomberg reports, through a combination of its own AI models and at least one third-party model from its competitors. In June, the company was reportedly considering using a model from either OpenAI or Anthropic, but as of August, the company is now apparently circling a partnership with Google.

This article originally appeared on Engadget at https://www.engadget.com/ai/apple-reportedly-made-a-chatgpt-clone-to-test-siris-new-capabilities-194902560.html?src=rss 

What Is Sinclair Broadcast Group? How Many ABC Stations it Owns, Its Influence on TV & More

Sinclair, an ABC affiliate, still temporarily suspended ‘Jimmy Kimmel Live!’ from its stations despite the company’s confirmation of his return.

The Social Network 2 is coming next fall and stars Jeremy Strong as Mark Zuckerberg

The long-awaited sequel to The Social Network will hit theaters next fall, according to a report by Deadline. The official release date is set for October 9, 2026, which is just about 16 years after the first film dropped.

We also have plenty of other information, including the full cast and the movie’s actual name. The official title is The Social Reckoning, a fitting choice: the film follows the recent events in which Facebook landed in legal and political trouble after a whistleblower alleged that the company knew the platform was harming society but did nothing about it.

The cast is being led by Jeremy Strong from Succession, who takes over Zuckerberg duties from actor Jesse Eisenberg. Mikey Madison is playing the aforementioned whistleblower Frances Haugen, and The Bear’s Jeremy Allen White portrays Wall Street Journal reporter Jeff Horwitz.

Bill Burr is also appearing in this flick, though we don’t know in what capacity. The Hollywood Reporter has suggested he will play a fictional character invented for the film that will be an amalgamation of several people. Aaron Sorkin is both writing and directing this one. He wrote the first movie, but David Fincher directed it.

This article originally appeared on Engadget at https://www.engadget.com/entertainment/tv-movies/the-social-network-2-is-coming-next-fall-and-stars-jeremy-strong-as-mark-zuckerberg-191021848.html?src=rss 

‘One Battle After Another’: Leonardo DiCaprio’s Dystopian Movie Release Date, Plot, Cast & More

The Oscar winner leads the action thriller film as a former revolutionary who must find and rescue his daughter. Here’s everything you need to know about the film set in an authoritarian version of America.

YouTube Music is testing AI hosts that present relevant stories, trivia and commentary

YouTube just announced YouTube Labs, which is being described as a “new way for users to take our cutting edge AI experiments for a test drive.” This looks like a YouTube-centric version of the pre-existing Google Labs, which is another place for folks to test out experimental AI tools.

There’s already something new to play with here. YouTube Labs is testing AI hosts for its Music app. These hosts are designed to deepen the listening experience by providing “relevant stories, fan trivia and fun commentary about your favorite music.” YouTube Music is just the latest music-streaming platform to introduce AI hosts; Spotify introduced an AI DJ earlier this year.

YouTube Labs is only available for Premium members. Sign-ups are open right now, but just for a “limited number of US-based participants.” We don’t have any data as to how many people will get accepted to join the AI tomfoolery.

Regular YouTube users have probably noticed the proliferation of AI slop on the platform these past several months. It’s becoming a whole thing. While the prospect of virtual music hosts is rather innocuous, it will likely lead to even more AI being forced on the platform.

YouTube recently added a boatload of AI tools for creators, including the ability to turn spoken dialogue into a slop-filled song. It’s also handing over age verification to AI and is testing its own version of Google’s famous (or infamous) AI Overviews.

This article originally appeared on Engadget at https://www.engadget.com/entertainment/youtube/youtube-music-is-testing-ai-hosts-that-present-relevant-stories-trivia-and-commentary-174042191.html?src=rss 

Microsoft’s fix for PC shader compilation stutter could take years to fully implement

Microsoft has a fix for long shader compilation wait times. The system is called Advanced Shader Delivery, and it’s being introduced first on ASUS ROG Xbox Ally handhelds, for games listed in the Xbox app.

Just about every PC gamer knows the feeling of booting up a highly anticipated new AAA title, excited to explore its sprawling environments or open world, only to be hit with “compiling shaders” and a progress bar that seems to move at a snail’s pace. Depending on what specs you’re rocking and what game you’ve just installed, the wait could be as much as one to two hours for those with slower CPUs and older systems.

While it seems increasingly common for huge games to show these shader compilation screens before even getting to the main menu (looking at you, Hogwarts Legacy), games that choose not to use them still need to load and compile shaders. If that isn’t done ahead of time, it must be done during gameplay, which can lead to the in-game stuttering many gamers are also familiar with.

Advanced Shader Delivery would preempt this by doing the entire compilation process ahead of time and storing those compiled shaders in the cloud. The catch is that shader compilation is hardware-specific, and since there are myriad GPU and driver combos, it would take a few dozen sets of compiled shaders to cover all the most common setups, and that’s per game. Extrapolate that out even just to all the AAA titles released yearly, and you’ve got yourself a massive database.
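The hardware-specificity described above is the crux of why the cloud database balloons: a compiled shader blob is only valid for one exact GPU and driver pairing. This is not Microsoft's actual API, but a toy sketch of the cache-keying idea, with all names (game IDs, GPU and driver strings) made up for illustration:

```python
import hashlib


def shader_cache_key(game_id: str, game_version: str,
                     gpu_model: str, driver_version: str) -> str:
    """Build a deterministic key for a precompiled shader blob.

    Compiled shaders aren't portable across hardware, so the key must
    include the GPU and driver; changing either invalidates the blob
    and requires a separate precompiled set in the database.
    """
    raw = "|".join([game_id, game_version, gpu_model, driver_version])
    return hashlib.sha256(raw.encode()).hexdigest()


# Two machines with identical GPU and driver map to the same blob...
a = shader_cache_key("example-game", "1.0.3", "RDNA3-Z1E", "25.1.1")
b = shader_cache_key("example-game", "1.0.3", "RDNA3-Z1E", "25.1.1")
# ...while a mere driver update produces a different key, so every
# GPU/driver combo needs its own entry, per game, per game version.
c = shader_cache_key("example-game", "1.0.3", "RDNA3-Z1E", "25.2.0")
```

Multiply a few dozen such entries per game across every AAA release in a year and the scale of the database Microsoft would need to maintain becomes clear.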

This is similar to how shader compilation works on consoles, but there you’re talking about at most two or three versions per console, or even fewer in the case of the Nintendo Switch. In fact, that’s precisely why Microsoft is starting with the ASUS ROG Xbox Ally handhelds, which comprise only two hardware configurations.

Microsoft’s Agility SDK for game developers now supports Advanced Shader Delivery, meaning devs could start building it into new games already. In practice, it can take years to fully capitalize on new technologies like this.

That’s exactly what we’ve seen with DirectStorage, another Microsoft technology meant to reduce asset load times. Three years after its release, we still see only a handful of big titles incorporating DirectStorage. It might be a long time before we see Advanced Shader Delivery incorporated into most popular games and available on different storefronts like Steam.

This article originally appeared on Engadget at https://www.engadget.com/gaming/pc/microsofts-fix-for-pc-shader-compilation-stutter-could-take-years-to-fully-implement-183904449.html?src=rss 
