Samsung’s Galaxy XR doesn’t give me much hope for Android XR

So Samsung made a “Vision Pro Lite.” That was my immediate takeaway after this week’s debut of the Galaxy XR, the first Android XR device to hit the market. While Samsung deserves credit for offering something close to the Vision Pro for nearly half the price, an $1,800 headset still won’t get mainstream consumers rushing out the door to experience the wonders of mixed reality. And with the limited amount of content in Android XR at the moment, the Galaxy XR is in the same position as the Vision Pro: It’s just a well-polished developer kit. 

The only logical reason to buy a Galaxy XR would be to test out apps for Android XR. If you just want to experience VR and dabble in a bit of augmented reality, you’re better off spending that money on a gaming laptop and the excellent $500 Meta Quest 3. (The Meta Quest Pro, the company’s first high-end mixed reality device, was unceremoniously killed after launching at an eye-watering $1,500.) 

But even for developers, the Galaxy XR feels like it’s lacking, well, vision. Samsung has done an admirable job of copying almost every aspect of the Vision Pro: The sleek ski goggle design, dual micro-OLED displays and hand gesture interaction powered by a slew of cameras and sensors. But while Apple positioned the Vision Pro as its first stab at spatial computing, an exciting new platform where we can use interactive apps in virtual space, Samsung and Google are basically just gunning to put Android on your face. 

There aren’t many custom-built XR apps, aside from Google’s offerings like Maps and Photos. (Something that also reminds me of the dearth of real tablet apps on Android.) And the ability to view 360-degree videos on YouTube has been a staple of every VR headset for the last decade — it’s not exactly notable on something that costs $1,800. Samsung and Google also haven’t said much about how they plan to elevate XR content. At least Apple is attempting to push the industry forward with its 8K Immersive Videos, which look sharper and more realistic than low-res 360-degree content.

For the most part, it seems as if Google is treating Android XR as another way to force its Gemini AI on users. In its press release for the Galaxy XR, Samsung notes that it’s “introducing a new category of AI-native devices designed to deliver immersive experiences in a form factor optimized for multimodal AI.” 

…What? 

In addition to being a crime against the English language, what the company is actually pitching is fairly simple: It’s just launching a headset that can access AI features via camera and voice inputs. 

Who knows, maybe Gemini will make Android XR devices more capable down the line. But at the moment, all I’m seeing in the Galaxy XR is another Samsung device that’s shamelessly aping Apple, from the virtual avatars to specific pinch gestures. And Google’s history in VR and interactive content doesn’t inspire much hope for Android XR. Don’t forget how it completely abandoned Google Cardboard, the short-lived Daydream project and its hyped-up Stadia cloud service. Stadia’s death was particularly galling, since Google initially pitched it as a way to revolutionize gaming, only to let it fall on its face.

There’s no doubt that Samsung, Apple and Meta have a ton of work left ahead in the world of XR. Samsung is at least closer to delivering something under $1,000, and Meta also recently launched the $800 Ray-Ban Display. But price is only one part of the problem. Purpose is another issue entirely. After living with the Vision Pro since its debut, I can tell that Apple is at least thinking a bit more deeply about what it’s like to wear a computer on your face. Just look at the upgrades it’s made around ultra-wide Mac mirroring, or the way Spatial Personas make it feel as if you’re working alongside other people. With Android XR, Google seems to just be making a more open Vision Pro.

Honestly, it’s unclear if normal users will ever want to use any sort of XR headset regularly, no matter how cheap they get. The experience of making these headsets could help Google, Apple and Meta develop future AR glasses, or eyewear that offers some sort of XR experience (Samsung already has something in the works with Warby Parker and Gentle Monster). But while Apple and Meta have broken new ground in XR, Google and Samsung just seem to be following in their footsteps.

This article originally appeared on Engadget at https://www.engadget.com/ar-vr/samsungs-galaxy-xr-doesnt-give-me-much-hope-for-android-xr-110000129.html?src=rss 

Fujifilm’s X-T30 III adds a film simulation dial and 6K video

When Fujifilm launched the X-T50 last year, no one was sure what would happen with its aging X-T30 lineup. The company just answered that question with the launch of the X-T30 III, which boosts speed and improves autofocus over the last model while adding the film simulation dial seen on other recent models. It’s very light for travel or street photography, but has some powerful features like 6.2K video and subject-detection autofocus, all at a reasonable price.

The original X-T30 first arrived in 2019 and was replaced in 2022 by the X-T30 II that was more of a mild update than an all-new camera. However, the X-T30 III has a number of key updates that bring it in line with other recent models like the X-M5 and X-T50. It does have the same 26.1MP X-Trans sensor as before (with a 1.5x crop compared to a full-frame camera), but now uses Fujifilm’s latest image processor that doubles image processing speed and significantly improves video capabilities. 


The X-T30 III is meant to be taken on adventures, so it’s still very light at just 378 grams or 13.33 ounces, a touch less than the previous model. Control-wise, the biggest addition is a film simulation dial just like the one on the X-M5 and X-T50, replacing the mode dial from the X-T30 II. It’s designed to make it easy to switch between film simulations like Reala Ace and Nostalgic Neg, while offering three customizable positions to let users save “recipes” of their own making. 

Otherwise, the X-T30 III has a generous complement of dials and buttons, something that allows for precise control but may intimidate newbies. The rear display tilts up but doesn’t flip out, and the 2.36-million-dot electronic viewfinder is on the low end for resolution. The main feature missing on the X-T30 III is in-body stabilization, so you’ll need either a stabilized (OIS) lens or electronic stabilization for video.


Burst shooting speeds are the same as before at 8 fps with the mechanical shutter and 20 fps in electronic mode. However, more of your shots are likely to be sharp thanks to the updated, faster autofocus. Along with the extra speed, Fujifilm introduced new AI subject detection modes including Auto-Tracking, Animals, Birds and Vehicles. 

Video also gets a big upgrade. The X-T30 III can now shoot 6.2K 30 fps video using the entire sensor (up from 4K 30p before), or 4K at 60 fps with a mild 1.18x crop. All of those resolutions are available with 10-bit modes to boost dynamic range. However, the X-T30 III lacks in-body stabilization, has a weird 2.5mm microphone input and a display that only tilts and doesn’t flip out. That makes it fine as a hybrid camera, but if you mostly shoot video, a model like the X-S20 may be a better choice.


Other key features include a microHDMI port for RAW video output, a single SD memory card slot (of the slower UHS-I variety, unfortunately) and improved battery life with up to 425 shots per charge. Fujifilm also introduced a new lens, the Fujinon XC13-33mmF3.5-6.3 OIS, which offers an interesting ultrawide full-frame-equivalent zoom range of around 20-50mm.
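If you’re wondering where that figure comes from, it’s just the sensor’s 1.5x crop factor applied to the lens’s native range: 13mm × 1.5 ≈ 20mm at the wide end, and 33mm × 1.5 ≈ 50mm at the long end.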

The X-T30 III is now on pre-order for $999 in multiple colors (black, charcoal silver and silver), with shipping set to start in November 2025. The Fujinon XC13-33mmF3.5-6.3 OIS will also ship around the same time for $399.

This article originally appeared on Engadget at https://www.engadget.com/cameras/fujifilms-x-t30-iii-adds-a-film-simulation-dial-and-6k-video-072148245.html?src=rss 

Amazon’s smart glasses with AI will help its drivers deliver packages faster

Amazon has revealed that it’s currently working on smart glasses designed for delivery drivers, confirming previous reports about the project. The company said the glasses use AI-powered sensing capabilities and computer vision to detect what their cameras are seeing. Drivers then get guidance through the glasses’ heads-up display (HUD) embedded right into the lens. Based on Amazon’s announcement, it’s been working on the glasses for a while, and hundreds of delivery drivers have already tested early versions to provide the company with feedback.

The glasses automatically activate after the driver parks their vehicle. They then show users the right packages to deliver based on their location. Users will see the list of packages they have to take out on the HUD, and the glasses can even tell them whether they’ve pulled the right package from the pile. When they get out of the vehicle, the glasses will display turn-by-turn navigation to the delivery address, flag hazards along the way and help them navigate complex locations like apartment buildings. Simply put, the device allows them to find delivery addresses and drop off packages without having to use their phones. Drivers will even be able to capture proof of delivery with the wearable.

Amazon’s glasses will be paired with a vest that’s fitted with a controller and a dedicated emergency button drivers can press to call emergency services along their routes. The device comes with a swappable battery to ensure all-day use and can be fitted with prescription and transitional lenses if the drivers need them. Amazon expects future versions of the glasses to be able to notify drivers if they’re dropping a package at the wrong address and to be able to detect and notify them about more hazardous elements, like if there’s a pet in the yard. 

At the annual event where the company announced the device, Amazon transportation vice president Beryl Tomay said it “reduces the need to manage a phone and a package” and helps drivers “stay at attention, which enhances their safety.” She also said that among the testers, Amazon had seen time savings of 30 minutes for a given shift.

The company didn’t say anything about developing smart glasses for consumers, but The Information’s previous report said that it’s also working on a model for the general public slated to be released in late 2026 or early 2027. 

This article originally appeared on Engadget at https://www.engadget.com/wearables/amazons-smart-glasses-with-ai-will-help-its-drivers-deliver-packages-faster-041009681.html?src=rss 

Reddit sues Perplexity and three other companies for allegedly using its content without paying

Reddit is suing SerpApi, Oxylabs, AWMProxy and Perplexity for allegedly scraping its data from search results and using it without a license, The New York Times reports. The new lawsuit follows legal action against AI startup Anthropic, which allegedly used Reddit content to train its Claude chatbot.

As of 2023, Reddit charges companies for access to posts and other content, in the hopes of making money on data that could be used for AI training. The company has also signed licensing deals with companies like Google and OpenAI, and even built an AI answer machine of its own to leverage the knowledge in users’ posts. Scraping search results for Reddit content avoids those payments, which is why the company is seeking financial damages and a permanent injunction that prevents the companies from selling previously scraped Reddit material.

Some of the companies Reddit is focused on, like SerpApi, Oxylabs and AWMProxy, are not exactly household names, but they’ve all made collecting data from search results and selling it a key part of their business. Perplexity’s inclusion in the lawsuit might be more obvious. The AI company needs data to train its models, and it has already been caught seemingly copying and regurgitating material it hasn’t paid to license. That reportedly includes ignoring the robots.txt protocol, a way for websites to communicate that they don’t want their material scraped.
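For a sense of what honoring robots.txt looks like in practice, here’s a minimal sketch using Python’s standard-library parser. The robots.txt URL is Reddit’s real, public location, but the bot name and target page are illustrative assumptions:

```python
# A minimal sketch of the compliance check robots.txt enables, using
# Python's standard-library parser. "ExampleBot" is a made-up name.
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://www.reddit.com/robots.txt")
rp.read()  # fetch and parse the site's crawling rules

# A well-behaved crawler asks before fetching any page; ignoring this
# answer (or scraping search results instead) is what the suit alleges.
allowed = rp.can_fetch("ExampleBot", "https://www.reddit.com/r/technology/")
print("fetch allowed:", allowed)
```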

Per a copy of the lawsuit provided to Engadget, Reddit had already sent a cease-and-desist to Perplexity asking it to stop scraping posts without a license. The company claimed it didn’t use Reddit data, but it also continued to cite the platform in answers from its chatbot. Reddit says it was able to prove Perplexity was using scraped Reddit content by creating a “test post” that “could only be crawled by Google’s search engine and was not otherwise accessible anywhere on the internet.” Within a few hours, queries made to Perplexity’s answer engine were able to reproduce the content of the post.

“The only way that Perplexity could have obtained that Reddit content and then used it in its ‘answer engine’ is if it and/or its co-defendants scraped Google [search results] for that Reddit content and Perplexity then quickly incorporated that data into its answer engine,” the lawsuit claims.
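The logic of that test is essentially a canary token: plant a unique marker that only one channel can see, then check whether it surfaces downstream. As a hedged sketch of the idea (everything here is hypothetical scaffolding; Reddit hasn’t published its actual tooling):

```python
# A sketch of the "canary post" inference described above. The helper
# names and workflow steps are hypothetical, not Reddit's actual test.
import secrets

def make_canary() -> str:
    """Generate a unique, unguessable token to embed in a test post."""
    return f"canary-{secrets.token_hex(8)}"

def leaked(token: str, engine_answer: str) -> bool:
    """True if an answer engine reproduced content only one crawler could see."""
    return token in engine_answer

token = make_canary()
# 1. Publish a post containing `token`, indexable only by Google's crawler.
# 2. Hours later, query the answer engine about the post and capture its reply.
# 3. If leaked(token, reply) is True, the content must have reached the
#    engine through scraped search results, since no other channel had it.
```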

When asked to comment, Perplexity provided the following statement:

Perplexity has not yet received the lawsuit, but we will always fight vigorously for users’ rights to freely and fairly access public knowledge. Our approach remains principled and responsible as we provide factual answers with accurate AI, and we will not tolerate threats against openness and the public interest.

This new lawsuit fits with the aggressive stance Reddit has taken towards protecting its data, including rate-limiting unknown bots and web crawlers in 2024, and even limiting what access the Internet Archive’s Wayback Machine has to its site in August 2025. The company has also sought to define new terms around how websites are crawled by adopting the Really Simple Licensing standard, which adds licensing terms to robots.txt.
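As a rough illustration of the pattern RSL describes, a participating site’s robots.txt might gain a pointer to machine-readable license terms. The directive below follows the spec’s public description, but treat the exact syntax and URL as assumptions rather than a verified example:

```
# Hypothetical robots.txt adopting the RSL pattern
User-agent: *
Disallow: /private/

# RSL's addition: a machine-readable pointer to licensing terms
License: https://www.example.com/license.xml
```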

This article originally appeared on Engadget at https://www.engadget.com/social-media/reddit-sues-perplexity-and-three-other-companies-for-allegedly-using-its-content-without-paying-205136436.html?src=rss 

YouTube is adding a timer to Shorts so you don’t scroll the day away

YouTube is adding a timer to Shorts to help curb all of that incessant doomscrolling, according to a report by TechCrunch. This feature is rolling out to all users after being spotted in an Android APK file earlier this year, which was originally reported on by Android Authority.

Here’s how it works. Users set a daily time limit for Shorts via the app’s settings. Once it’s reached, they’ll see a pop-up reminding them to take a break. This pop-up is easily dismissed with a tap, but it’s the thought that counts. At the very least, it’ll remind people of just how long they’ve been lying in bed and watching random Curb Your Enthusiasm clips.
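To make the mechanics concrete, here’s a toy sketch of that daily-limit pattern in Python. YouTube’s actual implementation isn’t public, so every name and number here is made up:

```python
# A toy daily watch-time limit: accumulate seconds, reset at midnight,
# and fire a (dismissible) reminder once the user's budget is spent.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ShortsTimer:
    daily_limit_s: int          # the user's chosen daily budget, in seconds
    watched_s: int = 0
    day: date = field(default_factory=date.today)

    def record(self, seconds: int) -> bool:
        """Add watch time; return True once the break reminder should fire."""
        if date.today() != self.day:      # a new day resets the counter
            self.day, self.watched_s = date.today(), 0
        self.watched_s += seconds
        return self.watched_s >= self.daily_limit_s

timer = ShortsTimer(daily_limit_s=30 * 60)   # a 30-minute budget
if timer.record(31 * 60):                    # 31 minutes of Shorts later...
    print("Take a break?")                   # dismissible pop-up in the app
```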

This doesn’t currently integrate with parental controls, but that’s coming next year. At that point, parents or guardians will be able to set specific time limits on how long kids can scroll the Shorts feed. That pop-up will not be dismissible by children.

This isn’t the first move by YouTube to help improve digital well-being. There’s a way to set “take a break” reminders at various increments, and the same goes for a pop-up at bedtime.

Why the renewed focus on limiting user engagement? Well, there are nearly 2,000 lawsuits floating around right now directed toward social media companies, according to a report by Bloomberg Law. Many of these suits accuse the companies of intentionally designing their platforms to be addictive.

This article originally appeared on Engadget at https://www.engadget.com/apps/youtube-is-adding-a-timer-to-shorts-so-you-dont-scroll-the-day-away-185204383.html?src=rss 

Apple dumps dating apps Tea and TeaOnHer from the App Store over privacy and moderation issues

Apple has removed dating apps Tea and TeaOnHer from the App Store for violating rules related to content moderation and user privacy. The company told TechCrunch that it pulled the apps as they broke several of its rules, including one mandating that apps can’t share or otherwise use an individual’s personal info without getting their permission first. 

Apple said the apps also violated a rule concerning user-generated content, which stipulates that apps need to offer a way to report offensive or concerning material, an option to block abusive users and the ability to filter “objectionable material from being posted.” In addition, Apple claimed the apps broke rules related to user reviews. It told TechCrunch they had an “excessive” volume of negative reviews and complaints from users, including ones related to minors’ personal details being shared. The company noted that it raised these issues with the apps’ developers, but they were not resolved.

As it stands, both apps are still available on Android through the Google Play Store. Tea (which is formally called Tea Dating Advice) enables women to post details about men they’ve met or dated. It allows them to post and comment on photos, look up public records on individuals, carry out reverse image searches, share their experiences and rate or review men. Users can, for instance, say whether they’d give a man a “green flag” or a “red flag.”

TeaOnHer flips that format on its head, with men sharing info about women. Both are pitched as dating safety apps, with Tea telling users they can “ask our anonymous community of women to make sure your date is safe, not a catfish and not in a relationship.”

Tea first emerged in 2023 and it went viral this year. In July, hackers breached the app and leaked tens of thousands of images, including around 3,000 selfies and photo IDs that users submitted to verify their accounts. The other images included posts, comments and private messages. A second hack exposed more than a million private messages.

Days after TeaOnHer went live in August (ripping off text from Tea’s App Store description in the process), it emerged that the app had security issues of its own. It was possible to view photo IDs and selfies that users had submitted for account verification, as well as their email addresses.

This article originally appeared on Engadget at https://www.engadget.com/apps/apple-dumps-dating-apps-tea-and-teaonher-from-the-app-store-over-privacy-and-moderation-issues-191305457.html?src=rss 

Google Gemini will arrive in GM cars starting next year

Google Gemini is coming to GM vehicles in 2026. The company will be integrating a conversational AI assistant powered by Google’s platform into many of its cars, trucks and SUVs.

GM says this assistant will be able to access vehicle data to suss out maintenance concerns, alerting the driver when necessary. The company also promises it’ll be able to help plan routes and explain various features of the car. It should also be able to do stuff like turn on the heat or air conditioning, even before the driver enters the vehicle.

This will replace the “Google built-in” operating system that already exists in many GM vehicles. This OS already offers access to stuff like Google Maps, Google Assistant and related apps. The upcoming Gemini-based chat assistant will do the same type of things, but it should perform better.

“One of the challenges with current voice assistants is that, if you’ve used them, you’ve probably also been frustrated by them because they’re trained on certain code words or they don’t understand accents very well or if you don’t say it quite right, you don’t get the right response,” GM VP Dave Richardson told TechCrunch. “What’s great about large language models is they don’t seem to be affected by that.”

One brand-new feature that Gemini will bring to the table is web integration. This will let drivers ask the chatbot questions pertaining to geographic location and the like. GM gives an example of someone asking about the history of a bridge they are passing over.

The Gemini assistant will be available via the Play Store after launch as an over-the-air upgrade to OnStar-equipped vehicles. It won’t be limited to newer releases, as GM says it’ll work with vehicles from model year 2015 and later. The company also says it’s working on its own AI chatbot that has been “custom-built for your vehicle.” There’s no timetable on that one.

GM landed in hot water recently when it was found to have been selling some customer information sourced from its OnStar Smart Driver program to insurance companies without user consent. This led to the FTC banning the company from selling any driver data for five years. Richardson says the Gemini integration will be privacy-focused, and the software will let drivers control what information it can access and use.


The company made these announcements at the GM Forward media event, where it also discussed other forthcoming initiatives. It has scheduled a rollout of its self-driving platform for 2028. It’s also developing its own computing platform, also launching in 2028. This does mean that GM will be sunsetting integration with Apple CarPlay and Android Auto. This software will be phased out over the next few years. 

This article originally appeared on Engadget at https://www.engadget.com/transportation/google-gemini-will-arrive-in-gm-cars-starting-next-year-181249237.html?src=rss 

Google says it made a breakthrough toward practical quantum computing

Enabled by the introduction of its Willow quantum chip last year, Google today claims it’s conducted breakthrough research that confirms it can create real-world applications for quantum computers. The company’s Quantum Echoes algorithm, detailed in a paper published in Nature, is a demonstration of “the first-ever verifiable quantum advantage running the out-of-time-order correlator (OTOC) algorithm.”

A core belief in quantum computing is that developing computer systems with qubits — which can represent multiple states at once, as opposed to binary ones and zeroes — could lead to greater understanding of the quantum systems surrounding us. Google believes its new algorithm is further proof of that assumption. The Quantum Echoes algorithm is able to illustrate how different parts of a quantum system interact with each other, in a way that’s repeatable by other quantum computers and that “runs 13,000 times faster on Willow than the best classical algorithm on one of the world’s fastest supercomputers.”

The “echo” in Quantum Echoes comes from how Google’s algorithm interacts with a quantum system, in this case the Willow chip. “We send a carefully crafted signal into our quantum system (qubits on Willow chip), perturb one qubit, then precisely reverse the signal’s evolution to listen for the ‘echo’ that comes back,” the company explained in its announcement blog. That echo is magnified by the “constructive interference” of quantum waves, making the measurement Google is able to take extremely sensitive.
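For readers curious what an OTOC actually measures, here’s a minimal numerical sketch for a toy three-qubit system in Python (numpy/scipy). The Hamiltonian, operators and parameters are illustrative assumptions chosen for demonstration; this is emphatically not Google’s algorithm, and it needs no quantum hardware:

```python
# A toy out-of-time-order correlator (OTOC): evolve forward, perturb one
# qubit, evolve backward, and see how much the "echo" has decayed.
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def op_on(op, site, n):
    """Embed a single-qubit operator at position `site` in an n-qubit system."""
    out = np.eye(1, dtype=complex)
    for i in range(n):
        out = np.kron(out, op if i == site else I2)
    return out

n = 3
# A small transverse-field Ising model, chosen because it scrambles
# local perturbations across the chain as time evolves.
H = sum(op_on(Z, i, n) @ op_on(Z, i + 1, n) for i in range(n - 1))
H = H + 0.9 * sum(op_on(X, i, n) for i in range(n))

V = op_on(Z, 0, n)        # operator measured on the first qubit
W = op_on(X, n - 1, n)    # "butterfly" perturbation on the last qubit
psi = np.zeros(2 ** n, dtype=complex)
psi[0] = 1.0              # start in |000>

for t in (0.0, 0.5, 1.0, 2.0):
    U = expm(-1j * H * t)
    Wt = U.conj().T @ W @ U  # forward evolution, perturbation, reversal
    otoc = psi.conj() @ (Wt.conj().T @ V.conj().T @ Wt @ V @ psi)
    print(f"t={t:.1f}  Re OTOC = {otoc.real:+.3f}")  # +1 at t=0, then decays
```

At t=0 the two operators act on different qubits and commute, so the correlator is exactly 1; as the system evolves, the perturbation spreads and the echo fades, which is the sensitivity the blog post describes.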

That sensitivity suggests quantum computers could be an important tool in modeling things like the interaction of particles or the structure of molecules. In a separate experiment with the University of California, Berkeley, Google tried to prove that by running the Quantum Echoes algorithm to study two different molecules, and comparing it to the Nuclear Magnetic Resonance (NMR) method currently used by scientists to understand chemical structure. The results from both systems matched, and Google says Quantum Echoes even “revealed information not usually available from NMR.”

In the long term, a full-scale quantum computer could be used for everything from drug discovery to the development of new battery components. For now, though, Google believes its Quantum Echoes research means real-world quantum computer applications could arrive within the next five years.

This article originally appeared on Engadget at https://www.engadget.com/science/google-says-it-made-a-breakthrough-toward-practical-quantum-computing-183502245.html?src=rss 

Daniel Naroditsky’s Parents: Meet His Father Vladimir & Mother Lena

Amid the shock of the former child chess prodigy’s passing at age 29, learn more about his parents, Vladimir and Lena, and the role they played in his remarkable career.

No Man’s Sky’s latest update lets you explore a space wreck

One of the greatest comeback stories in the history of video games just keeps writing itself new chapters. If you’ve checked in on the wildly ambitious space sim, No Man’s Sky, in the last few years then you’ll know just how much UK-based developer Hello Games has turned things around for a game that was once the subject of a false advertising investigation. And its latest update introduces fully explorable space wrecks.

The follow-up to August’s substantial Voyagers expansion, the new Breach update adds floating wrecks that you’re invited to salvage to your heart’s content. The materials you find can be used to unlock new ship-building parts, and after Voyagers — which finally allowed players to build their own fully crewed spaceships — was such a hit with the No Man’s Sky audience, Hello Games says it has also improved and expanded its workshop feature in the latest update.

Breach also introduces a new expedition, one of No Man’s Sky’s time-limited, story-driven events. In this one, players will “traverse a desolate and abandoned universe to discover what happened to a mysterious abandoned wreck.” Those able to successfully voyage to the edge of space and ransack the remains of the ship will be rewarded with rare parts to use on their own creations, including “glowing Atlas-themed wings.” The expedition looks suitably spooky for a Halloween playthrough with your pals.

No Man’s Sky celebrates its 10th anniversary in 2026, and the Breach update seemingly rounds off what Hello Games founder Sean Murray calls a “crazy year” for his team. More updates seem likely once 2026 rolls around, given it will be a landmark year for the game, but Murray also appeared to tease the studio’s next project, the open-world survival game Light No Fire, when announcing the arrival of the Voyagers expansion back in August. In a message, he seemed to suggest that the ship construction tech introduced in the expansion will also be utilized in Light No Fire, which is currently without a release date.

This article originally appeared on Engadget at https://www.engadget.com/gaming/no-man-skys-latest-update-let-you-explore-a-space-wreck-164042151.html?src=rss 
