DJI Mavic 4 Pro review: A new standard for consumer drones

DJI’s much-awaited Mavic 4 Pro drone has launched, but there’s bad news. Due to Trump’s tariffs and other market uncertainties (like a potential DJI ban), it’s not yet on sale in America and there’s no word on when it will be, or for what price. That will disappoint US buyers, because the Mavic 4 Pro is the most technologically advanced consumer drone ever.

With a triple-camera system housed in a wild-looking round gimbal, it can shoot up to 6K at 60 fps with 16 stops of dynamic range, beating any drone and even most cameras. DJI also boosted top speed, battery life and range, while improving obstacle avoidance in low light via a new LiDAR system. Plus, the company introduced an all-new RC Pro 2 controller with a useful new foldable design.
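For context on that headline figure: each photographic stop doubles the range of light a sensor can capture, so 16 stops works out to a contrast ratio of roughly 65,000:1 between the brightest and darkest recoverable detail. A quick sketch:

```python
# Each stop of dynamic range doubles the captured light range,
# so total contrast ratio is 2 raised to the number of stops.
stops = 16
contrast_ratio = 2 ** stops
print(f"{contrast_ratio:,}:1")  # 65,536:1
```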

I’ve had the Mavic 4 Pro for a couple of weeks now and it performs even better than the specs would suggest. Though a bit expensive, this drone is so polished and capable that nothing else remotely compares to it.

Design

The trademark feature, as we’ve seen from leaks and a teaser last week, is the big round triple-camera “Infinity Gimbal” module at the front, designed to rotate freely for improved stabilization and more creative framing. It boosts the tilt-up angle from 60 to 70 degrees and allows off-kilter “Dutch” angles for more interesting shots. And of course, it can rotate 90 degrees to deliver full-resolution vertical video for social media.

With that new housing and slightly larger batteries, the 2.3-pound Mavic 4 Pro is 0.2 pounds heavier than the Mavic 3 Pro. The propeller arms have been updated so they can be folded and unfolded in any order, unlike on the previous model. The body is also more aerodynamic to boost efficiency and speed.

Tucked in the left propeller arm is a forward-facing LiDAR sensor that greatly aids nighttime obstacle detection. The drone also has six omnidirectional fisheye sensors that work in as little as 0.1 lux, which is equivalent to a night sky lit by a full moon.

Where the Mavic 3 Pro had just 8GB of internal memory, the standard Mavic 4 Pro is equipped with 64GB of storage (42GB usable), and the Creator Combo version comes with a generous 512GB of high-speed storage (460GB usable). Both models have a microSD card slot as well.

DJI’s new RC Pro 2 screen controller folds up and flips sideways.

Steve Dent for Engadget

The Mavic 4 Pro ships with the RC2 screen controller (first seen with the Air 3) in the basic and Fly More kits. However, DJI also introduced the RC Pro 2, a controller unlike any I’ve seen before. The bright 7-inch HDR display folds up to provide a multi-angle view and tilts 90 degrees (automatically flipping the Mavic 4 Pro’s camera 90 degrees) for vertical video.

There’s no longer a need to stow the joysticks (though you can), as they automatically collapse into the body when the screen is folded back. It has a full complement of controls for flying and camera operations, including a button that flips the camera 90 degrees. The RC Pro 2 also has HDMI and USB-C ports, along with Wi-Fi 6 support for high-speed data transfers.

The new 95Wh batteries are rated to offer up to 51 minutes of flight time, or around 40-45 minutes in typical use. That’s a significant boost from the 30-35 minute real-world battery life on the Mavic 3 Pro. Better still, they charge faster than ever at 51 minutes for one battery (via USB-C) or 90 minutes for three with the Fly More kit charger.

Performance and features

The Mavic 4 Pro can now hit 56 mph (90 km/h) in sport mode without obstacle detection, up from 47 mph before, which will be a big help for filming motor vehicles. Top speed in normal mode with tracking and obstacle detection is also faster at 40 mph. It’s still a big SUV of a drone, though, so it lacks the agility of DJI’s Mini 4 Pro. And with no prop guards, it’s not advisable to maneuver it around people or in tight spaces the way you might with the Avata 2.

Noise from the Mavic 4 Pro is unchanged from the Mavic 3 Pro at 83 dB. However, the frequency is lower and less bothersome, especially when it’s flying close to people — so, er, props to DJI for that improvement.

With the new O4+ transmission system, the Mavic 4 Pro now offers up to 18.6 miles (30 km) of video transmission range, double that of the Mavic 3 Pro. Though most pilots won’t venture that far away, I found the Mavic 4 Pro less susceptible to interference and dropouts than before. It also supports bright 10-bit HDR 1080p live video transmission for a higher-quality backup capture and better visibility in sunlight.


Subject tracking is available via the ActiveTrack 360 function (first seen on the Mini 4 Pro), which allows you to manually control the camera position while keeping your subject in frame. It’s also designed to keep subjects in focus even if they’re partially obscured.

I tested it with a car, a mountain bike and on foot to see how it worked at various speeds. The Mavic 4 Pro stayed locked on a car driving at up to 30 mph with full obstacle avoidance. When filming the bike rider, the Mavic 4 Pro chose interesting and random routes around trees that often yielded cinematic greatness. That behavior also caused a crash into a small tree branch, but luckily there was no visible or functional damage. The drone also worked well as a vlogging tool, following me on a preset ActiveTrack 360 path.

Later, I took the Mavic 4 Pro out at night to test the LiDAR and low-light sensors, flying it up around trees and next to buildings. That would have been risky with past models, but it successfully detected and avoided all obstacles in my testing. The low-light capability will also help you bring the drone home safely at night, as long as you remember that the LiDAR only works in the forward direction. Return-to-home now functions without GPS in sufficient light, as the Mavic 4 Pro can memorize flight paths.

Video quality


With its new camera system, the Mavic 4 Pro has the best video quality I’ve seen on any consumer drone. The main Hasselblad-branded 28mm wide camera has a 100-megapixel dual ISO 4/3 sensor that supports 6K 60 fps or 4K 120 fps capture, along with DJI’s professional D-Log and D-LogM modes to max out dynamic range. It also delivers up to 100MP photos. The Mavic 4 Pro is one of the few drones with a variable aperture (f/2.0 to f/11) for better depth of field control and more usability in sunlight.

The Infinity Gimbal also houses a 70mm medium telephoto lens with a 1/1.3-inch sensor like the one on the Mini 4 Pro. Plus, there’s a longer 168mm camera with a 1/1.5-inch sensor that should be ideal for things like wildlife tracking. Both feature an f/2.8 aperture, 4K 60p video and D-Log/D-LogM and HDR, along with subject-tracking AF and dual native ISO.

6K and 4K video quality on the Hasselblad camera is incredibly sharp and color-accurate, while providing good dynamic range, particularly with sky and cloud details. Switching over to D-Log mode further boosts dynamic range, but makes color correction a bit trickier. I liked using D-LogM to get a good balance between ease of adjustment and dynamic range.

The base Mavic 4 Pro captures H.265 video up to 180 Mbps, but the Creator Combo version — with its faster internal storage — also supports 1,200 Mbps H.264 All-I for easier editing. However, DJI dropped the ProRes 4:2:2 HQ option that was available on the Cine version of the Mavic 3 Pro.
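For a sense of scale, a quick back-of-the-envelope calculation (assuming decimal gigabytes and ignoring filesystem overhead) shows how fast 1,200 Mbps All-I footage fills the Creator Combo's 460GB of usable internal storage:

```python
# How long can the Creator Combo record All-I footage internally?
# Assumes decimal units (1 GB = 8,000 megabits), no overhead.
usable_gb = 460        # usable internal storage, Creator Combo
bitrate_mbps = 1_200   # H.264 All-I bitrate

recording_seconds = usable_gb * 8_000 / bitrate_mbps
print(f"{recording_seconds / 60:.0f} minutes")  # about 51 minutes
```

In other words, the internal storage holds roughly one full battery's worth of flying at the maximum bitrate.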


Where the Mavic 4 Pro really beats other drones is in low light, thanks to the native dual ISO capability. When shooting a cityscape at dusk at ISO 6400 and up, grain was easy to tamp down with a bit of noise reduction. And if shooting at dawn or dusk, the D-Log mode provides enough dynamic range to balance light sky and dark ground areas.

Photo quality is also outstanding in the 100MP mode with so much detail that I could zoom in and clearly see tiny objects. For low-light shooting, though, it’s best to stick with 25MP to reduce noise. DJI’s RAW DNG capture makes it easy to fine-tune photos shot in contrasty lighting conditions.

The 70mm (3x) 1/1.3-inch telephoto camera is great for taking portraits or getting in tighter when shooting landscapes. With support for 4K at up to 120 fps and D-Log or D-LogM, it can easily be mixed and matched with footage from the main camera, though detail and low-light capability aren’t as good. The 168mm 1/1.5-inch camera produces mediocre video quality, but it’s great for capturing wildlife.

Wrap-up


The Mavic 4 Pro is a great example of how DJI stays far ahead of rivals by consistently updating and perfecting its products. It’s an improvement over the Mavic 3 Pro in nearly every area, and as mentioned, it doesn’t really have any competition in the consumer space. The closest alternative in price and capability is Autel’s Evo II Pro 6K, but that drone has a single-camera system with a smaller sensor, less endurance and shorter range.

The lack of US availability is a major issue that’s bound to cause a furor with drone enthusiasts in the States. And there’s still a great risk that sales of DJI drones will be completely banned in America by the end of the year. The company insists that its drones pose no national security risk and says it welcomes any scrutiny.

The Mavic 4 Pro is now on sale in most regions except the United States, starting at €2,099 or £1,879 (about $2,360) with the RC2 controller. You can also get it in the Fly More kit with the RC2 controller, a bag, three batteries and a charger for £2,459 or €2,699 ($3,040). The Creator Combo, which includes everything in the Fly More kit plus the RC Pro 2 controller, 512GB of storage and All-I video capture, is €3,539 (£3,209) or about $3,980. The RC Pro 2 controller by itself is €999 or £879 (about $1,125).

This article originally appeared on Engadget at https://www.engadget.com/cameras/dji-mavic-4-pro-review-a-new-standard-for-consumer-drones-120006235.html?src=rss 

Apple is bringing accessibility labels to the App Store later this year

Each year, during the same week as Global Accessibility Awareness Day, the accessibility team at Apple shares a slew of upcoming assistive features ahead of their public release. This time around, the company has a huge number of updates as it commemorates “40 years of accessibility innovation at Apple,” according to a press release. This year’s enhancements cover all of its platforms and a wide variety of disabilities, and one of them is a new initiative that should make more apps more inclusive.

Later this year, the App Store will get “Accessibility Nutrition Labels,” a new section on app pages. These will give a quick preview of the accessibility features each listing offers, including support for the VoiceOver screen reader, Larger Text, Sufficient Contrast, Captions, Voice Control, Reduced Motion and more. Tapping on each preview will bring up a page with more details on the other accessibility features that are available, with explanations of each.

The labels will be available worldwide, and Apple will make more guidance available to developers on the criteria their apps should meet before they display the relevant accessibility information on their pages. 

With these labels, people can find out if apps will meet their needs without having to first download them. If you are colorblind, for example, you can see if a matching game offers “Differentiate without color alone” before installing it and going into its settings to verify. 

By bringing these labels to the App Store, Apple is delivering what the gaming industry is seeking to do with the Accessible Games Initiative (AGI) that was announced in March. While the AGI is a broader effort with promised participation from companies like Microsoft, Nintendo and Electronic Arts, it has yet to share a firm timeline for the implementation of the system. The Entertainment Software Association said in March that timing would depend on each company. 

Now, we’ll have to wait and see exactly how much later in the year Apple will start displaying these labels, but the company will be sharing more guidance with developers at its Worldwide Developer Conference (WWDC) in June, so it might take till the second half of 2025 for the changes to appear. 

Apple also shared plenty more on new assistive features coming to iPhones, iPads, Macs, Apple Watches and the Vision Pro, including a new Magnifier for Mac, an Accessibility Reader and updated Braille Access.  

This article originally appeared on Engadget at https://www.engadget.com/apps/apple-is-bringing-accessibility-labels-to-the-app-store-later-this-year-120020185.html?src=rss 

Apple brings Magnifier to Macs and introduces a new Accessibility Reader mode

This Thursday is Global Accessibility Awareness Day (GAAD), and as has been its custom for the last few years, Apple‘s accessibility team is taking this time to share some new assistive features that will be coming to its ecosystem of products. In addition to bringing “Accessibility Nutrition Labels” to the App Store, it’s announcing the new Magnifier for Mac, an Accessibility Reader, enhanced Braille Access as well as a veritable cornucopia of other updates to existing tools. 

According to the company’s press release, this year in particular marks “40 years of accessibility innovation at Apple.” It’s also 20 years since the company first launched its screen reader, and a significant amount of this year’s updates are designed to help those with vision impairments. 

Magnifier for Mac

One of the most noteworthy is the arrival of Magnifier on Macs. The camera-based assistive feature has been available on iPhones and iPads since 2016, letting people point their phones at things around them to get auditory readouts of what’s in the scene. Magnifier can also make hard-to-read things easier to see by giving you the option to increase brightness, zoom in, add color filters and adjust the perspective.

With Magnifier for Mac, you can use any USB-connected camera or your iPhone (via Continuity Camera) to get feedback on things around you. In a video, Apple showed how a student in a large lecture hall was able to use their iPhone, attached to the top of their MacBook, to make out what was written on a distant whiteboard. Magnifier for Mac also works with Desk View, so you can use it to more easily read documents in front of you. Multiple live session windows will be available, so you can keep up with a presentation through your webcam while using Desk View to, say, read a textbook at the same time. 

Accessibility Reader

Magnifier for Mac also works with another new tool Apple is unveiling today — Accessibility Reader. It’s a “new systemwide reading mode designed to make text easier to read for users with a wide range of disabilities, such as dyslexia or low vision.” Accessibility Reader will be available on iPhones, iPads, Macs and the Apple Vision Pro, and it’s pretty much the part of Magnifier that lets you customize your text, with “extensive options for font, color and spacing.” It can help minimize distractions by getting rid of clutter, for instance.

Accessibility Reader also supports Spoken Content, and as it’s built into the Magnifier app, can be used to make real-world text like signs or menus easier to read as well. You can also launch it from any app, as it’s a mode available at the OS level. 


Braille Access

For people who are most comfortable writing in Braille, Apple has supported Braille input for years, and more recently started working with Braille displays. This year, the company is bringing Braille Access to iPhones, iPads, Macs and Vision Pros, and it’s designed to make taking notes in Braille easier. It will come with a dedicated app launcher that allows people to “open any app by typing with Braille Screen Input or a connected braille device.” Braille Access also enables users to take notes in braille format and use Nemeth code for their math and science calculations. Braille Access can open files in the Braille Ready Format (BRF), so you can return to your existing documents from other devices. Finally, “an integrated form of Live Captions allows users to transcribe conversations in real time directly on braille displays.”

Apple Watch gets Live Captions; Vision Pro gets Live Recognition

Wrapping up the vision-related updates is an expansion of such accessibility features in visionOS. The Zoom function, for instance, is getting enhanced to allow wearers to magnify what they see in both virtual reality and, well, actual reality. This uses the Vision Pro’s cameras to see what’s in your surroundings, and Apple will make a new API available that will “enable approved apps to access the main camera to provide live, person-to-person assistance for visual interpretation in apps like Be My Eyes.” Finally, Live Recognition is coming to VoiceOver in the Vision Pro, using on-device machine learning to identify and describe things in your surroundings. It can also read flyers or invitations, for example, and tell you what’s on them.

For those who have hearing loss, the Live Listen feature that’s already on iPhones will be complemented by controls on the Apple Watch, plus some bonus features. When you start a Live Listen session on your iPhone, which would stream what its microphone picks up to your connected AirPods, Beats headphones or compatible hearing aids, you’ll soon be able to see Live Captions on your paired Apple Watch. You’ll also get controls on your wrist, so you can start, stop or rewind a session. This means you can stay on your couch and start Live Listen sessions without having to go all the way over to the kitchen to pick up your iPhone and hear what your partner might be saying while they’re cooking. Live Listen also works with the hearing health and hearing aid features introduced on the AirPods Pro 2. 

Background Sounds, Personal Voice, Vehicle Motion Cues and Eye Tracking get updates

While we’re on the topic of sound, Apple is updating its Background Sounds feature that can help those with tinnitus by playing white noise (or other types of audio) to combat symptoms. Later this year, Background Sounds will offer automatic timers to stop after a set amount of time, automation actions in Shortcuts and a new EQ settings option to personalize the sounds.

Personal Voice, which helps those who are at risk of losing their voice preserve their vocal identity, is also getting a major improvement. When I tested the feature to write a tutorial on how to create your personal voice on your iPhone, I was shocked that it required the user to read out 150 phrases. Not only that, the system needed to percolate overnight to create the personal voice. With the upcoming update, Personal Voices can be generated in under a minute, with only 10 phrases needing to be recorded. The resulting voice also sounds smoother, with less clipping and fewer artifacts. Apple is also adding Spanish language support for the US and Mexico.

Last year, Apple introduced eye-tracking built into iPhones and iPads, as well as vehicle motion cues to alleviate car sickness. This year, it continues to improve those features by bringing the motion cues to Macs, as well as adding new ways to customize the onscreen dots. Meanwhile, eye-tracking is getting an option to allow users to dwell or use a switch to confirm selections, among other keyboard typing updates. 

More across Apple TV, CarPlay, Head Tracking and Settings

Apple’s ecosystem is so vast that it’s almost impossible to list all the individual accessibility-related changes coming to all the products. I’ll quickly shout out Head Tracking, which Apple says will enable people to more easily control their iPhones and iPads by moving their heads “similar to Eye Tracking.” Not much else was shared about this, though currently head-tracking on iPhones and iPads is supported through connected devices. The idea that it would be “similar to Eye Tracking” seems to imply integrated support, but we don’t know if that is true yet. I’ve asked Apple for more info and will update this piece with what I find out.

Speaking of connected devices, Apple is also adding a new protocol to Switch Control that would enable support for Brain Computer Interfaces (BCIs). Theoretically, that would mean brainwave-based control of your devices, and Apple lists iOS, iPadOS and visionOS as those on deck to support this new protocol. Again, it’s uncertain whether we can go as far as to say brainwave-based control is coming, and I’ve also asked Apple for more information on this.

For those who use Apple TV, Assistive Access is getting a new custom Apple TV app featuring a “simplified media player,” while Music Haptics on the iPhone will offer the option to turn on haptics for an entire track or just the vocals, as well as general settings to fine-tune the intensity of taps, textures and vibrations.

The Sound Recognition feature that alerts those who are deaf or hard of hearing to concerning sounds (like alarms or crying babies) will add Name Recognition to let users know when they are being called. Sound Recognition for CarPlay, in particular, will inform users when it identifies crying children (in addition to the existing support for external noises like horns and sirens). CarPlay will also get support for large text, which should make getting glanceable information easier.

Other updates include greater language support in Live Captions and Voice Control, as well as the ability to share accessibility settings quickly and temporarily across iPads and iPhones so you can use a friend’s device without having to painstakingly customize it to your needs.

There are plenty more accessibility rollouts from Apple across its retail locations, Music playlists, Books, Podcasts, TV, News, Fitness+ and the App Store, mostly around greater representation and inclusion. There isn’t much by way of an exact release window for most of the new features and updates I’ve covered here, though they have usually shown up in the next release of iOS, iPadOS, macOS and visionOS.

We’ll probably have to wait until the public rollout of iOS 19, iPadOS 19 and more to try these on our own, but for now, most of these seem potentially very helpful. And as always, it’s good to see companies design inclusively and consider a wider range of needs.

This article originally appeared on Engadget at https://www.engadget.com/computing/accessories/apple-brings-magnifier-to-macs-and-introduces-a-new-accessibility-reader-mode-120054992.html?src=rss 

Google is reportedly planning to unveil a Pinterest alternative at I/O 2025

Google is set to debut a new feature that The Information describes as “Pinterest-like” at its annual I/O developer conference next week. It reportedly shows users image results, based on their queries, that can give them ideas for fashion and interior design. Users can then save the images in different folders if they want to keep them separated by theme. While The Information has likened it to Pinterest, it could be more similar to Cosmos, a pared-down take on the same idea. Cosmos lets users curate anything they save from the web into clusters, which they can then share with other people.

As the publication notes, Google might be debuting a Pinterest competitor in order to protect its ad revenue from commercial queries. Google has been losing homework and math searches to ChatGPT, a company executive told the court during a hearing related to a previous decision that found the company maintains an illegal monopoly in search. While those queries don’t typically generate ad revenue, Google knows it’s inevitable that the company will lose ad earnings from commercial queries as well. Giving users a more interesting way to get search results that an AI couldn’t provide, through a new feature or a new product, could help Google retain revenue from advertisements.

In addition to the Pinterest-like competitor, Google could also introduce a “software development lifecycle agent” that could help software engineers identify bugs or flag security vulnerabilities while they’re developing programs. It could also demonstrate the voice-powered integration of the Gemini AI chatbot into its Android XR glasses and headset. Previous clues point to Google launching the integration of Gemini Live inside the Chrome desktop browser, as well. 

This article originally appeared on Engadget at https://www.engadget.com/big-tech/google-is-reportedly-planning-to-unveil-a-pinterest-alternative-at-io-2025-123033887.html?src=rss 

The Morning After: Samsung’s Galaxy S25 Edge is $1,100 and thin

Samsung’s long-teased Galaxy S25 Edge has arrived, way ahead of the rumored iPhone Air. It’s a very S25-looking device, but the company is pitching it as a design-centric addition to its, let’s admit, bulging S25 family. The S25 Edge’s body is 5.8 millimeters (0.22 inches) thick if we ignore the camera bump like everyone else does. Granted, it’s not a huge bump.

Samsung says it engineered the lenses to be substantially thinner than those on the S25 Ultra while keeping the same 200-megapixel camera sensor. And there are only two cameras on the back this time. Gasp! Unfortunately, Samsung has gone for an ultrawide secondary shooter rather than a telephoto, likely due to the handset’s size constraints.

Image by Mat Smith for Engadget

This makes the S25 Edge the latest addition to the trend of fewer cameras, joining the Pixel 9a, but for a very different $1,100. You can check out my first impressions and all the crucial specs in my hands-on. Are you willing to handle possible battery life decreases and less zoom on your smartphone camera?

— Mat Smith

Get Engadget’s newsletter delivered direct to your inbox. Subscribe right here!

Even more Switch 2 stuff

Ticketmaster proudly announces it will follow the law and show prices up-front

Jamie Lee Curtis publicly shamed Mark Zuckerberg into removing a deepfaked ad

How to pre-order the Samsung Galaxy S25 Edge

Philips Fixables will let you 3D print replacement parts for your electric razors and trimmers

iOS 18.5 arrives with a new wallpaper for Pride Month

And not much else.

Apple pushed iOS 18.5 to devices on Monday, and the biggest visual change is a new rainbow-shaded wallpaper in honor of Pride Month. I’m honored. Otherwise, it’s a few minor tweaks and bug fixes.

Continue reading.

You can actually turn lead into gold

All you need is a Large Hadron Collider.


Scientists with the European Organization for Nuclear Research, better known as CERN, have converted lead into gold using the Large Hadron Collider (LHC). Unlike the examples of transmutation we see in anime and pop culture, scientists smashed subatomic particles together at ridiculously high speeds to manipulate lead’s physical properties and, briefly, turn it into gold. Lead atoms have only three more protons than gold atoms. The LHC causes lead atoms to shed just enough protons to become gold atoms for a fraction of a second — before they immediately fragment into a bunch of particles.
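The proton arithmetic here is easy to verify against the periodic table, since atomic number is just proton count:

```python
# Atomic numbers (proton counts) for lead and gold.
protons = {"Pb": 82, "Au": 79}

# Lead really does have only three more protons than gold.
diff = protons["Pb"] - protons["Au"]
print(diff)  # 3
```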

Continue reading.

The only thing I want from Apple’s big 2025 redesign is a

That’s a, not α.


This is where Deputy Editor Nathan Ingraham decries one of Apple’s latest design quirks. For over 600 words. Apple’s decision to use α instead of a in its Notes app has got him mad.

We’ve reached out to check if he’s OK.

Continue reading.

This article originally appeared on Engadget at https://www.engadget.com/general/the-morning-after-engadget-newsletter-111526456.html?src=rss 

Samsung Galaxy S25 Edge hands-on: Less smartphone, more compromises

After teasing us for months, Samsung has formally revealed the Galaxy S25 Edge. At just 5.8 millimeters (0.22 inches) thick, it’s the slimmest member of the S25 family — and Samsung’s slimmest smartphone ever. It’s available to preorder now and launches on May 30, starting at $1,100.

Samsung said the S25 Edge “unlocks a new era of growth for the mobile industry” — and it’s easy (too easy!) to cynically see it as a way of hawking another Samsung phone: a thinner yet similar slice of hardware, with familiar cameras, technical specs and AI software. This isn’t an Ultra, nor is it a new foldable. However, it could be a new direction for Samsung’s flagship S series.

What’s the difference between the S25 and S25 Edge?

Image by Mat Smith for Engadget

Surprise! The Edge is thinner. But while the S25 Edge is slimmer than the 6.4mm (0.25 inches) base S25, it weighs almost the same. In fact, there’s only a gram’s difference between the two, despite the S25 Edge packing a much bigger 6.7-inch screen than the S25’s 6.2-inch one.

And boy, can you tell the difference. The most contemporary comparison I can make is when Apple switched to a titanium frame for the iPhone 15 Pro. Spec sheets and numbers be damned: I could sense how much lighter the device was. Despite having a much bigger screen than my iPhone 16 Pro, the thinner S25 Edge felt light yet premium. I easily slipped it in and out of my pants pockets, because, well, how else am I going to assess the biggest selling point for Samsung’s latest phone?


The thing is: Device thickness isn’t an issue I have with flagship smartphones — it’s the screen size. The base Galaxy S25 (or the Pixel 9 Pro) hits the sweet spot for my hands. While this new S25 Edge may be easier to hold than similar-sized phones, a 6.7-inch screen isn’t for everyone.

Like most phones (the Pixel 9a is a curious outlier), the S25 Edge still has a substantial camera unit derailing otherwise clean hardware lines. The dual-camera setup protrudes a good 4.5mm (0.17 inches), although it features the same 200-megapixel sensor packed into the pricier S25 Ultra.

Samsung says it re-engineered the camera unit to ensure it could fit on the Edge, but it still sticks out — a lot. Before I got to handle the phone, I thought this would lead to the new phone being oddly unbalanced and top-heavy. But whatever Samsung has done to arrange the component furniture inside the S25 Edge, it worked. The phone doesn’t feel lopsided or fragile at all, but like any other premium flagship smartphone.

The Galaxy S25 Edge’s cameras


Alongside the primary 200MP sensor, Samsung included a 12MP ultrawide camera with autofocus and macro photography support. And… that’s it. There’s no dedicated telephoto system, which typically takes up more space however you position the camera. The two lenses are stacked in a vertical arrangement, which I think looks cleaner than the camera cornucopias found on other devices — but many folks are going to miss the versatility of a true telephoto. Sure, you can digitally zoom by cropping in on that huge 200MP sensor, but it’s not the same.
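As a rough sketch of why that 200MP sensor helps with crop zoom: the usable zoom factor scales with the square root of the megapixel ratio. The 12.5MP output size below is an illustrative assumption (roughly a 4K-class frame), not a Samsung spec:

```python
# Rough crop-zoom math: cropping the center of a high-resolution frame
# trades pixels for reach. Linear zoom factor is the square root of the
# megapixel ratio between full sensor and cropped output.
sensor_mp = 200    # S25 Edge primary sensor
target_mp = 12.5   # assumed output resolution, roughly 4K-class

zoom_factor = (sensor_mp / target_mp) ** 0.5
print(f"{zoom_factor:.1f}x")  # 4.0x
```

That's why cropping can stand in for modest zoom, though as noted, it's not the same as true telephoto optics.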

The S25 Edge can also capture up to 8K video and packs all the other photography tricks you’d expect in an S-series phone, like night photography and lossless zoom. Of course, it also handles post-capture AI tools like generative editing (removing photobombers and unwanted objects from your photos) and Audio Eraser for cleaning up video in loud environments.

And when it comes to AI — or Galaxy AI — you’re getting the same array of features we saw in the base S25 and S25 Ultra, powered by a custom Snapdragon 8 Elite chip and 12GB of RAM. Those AI tools include the Now Brief and Now Bar, which take contextual cues from your apps and smartphone to lay out a plan for your day, remind you of the weather and more. Samsung’s integration of Google’s Gemini now includes Gemini Live, so you can tap into your camera feed to ask questions about your photos and things in your surroundings.

It’ll likely pick up any future Gemini and Android upgrades, too: The Galaxy S25 Edge will receive seven generations of OS updates and seven years of security updates.

Image by Mat Smith for Engadget

At a media briefing, Samsung also outlined how it’s trying to ensure the S25 Edge runs cool despite all the packed-in hardware, using a new Thermal Interface Material (TIM) for better heat dissipation within that limited space. I didn’t really get the time to push the device to its limits during the briefing, so we’ll wait for a review to assess whether it works well enough.

The company did have to make compromises to fit all the S25 Edge’s features into this svelte profile. It has a 3,900mAh battery, which is small for a phone with a 6.7-inch display that costs more than $1,000. In comparison, the base S25 has a 4,000mAh cell — that’s a bigger battery on a cheaper phone with a smaller screen. Then, there’s the S25+, which has a 4,900mAh battery with the same screen size as the S25 Edge.

In its defense, Samsung has made considerable progress on the battery life of its devices (particularly with this year’s crop of Galaxy S phones). The company claims the S25 Edge can run video for up to 24 hours. However, with a bigger screen inside a thinner device, battery life may be the biggest compromise — and it’s something we will have to test when we review the Edge properly.

The Galaxy S25 Edge is priced at $1,100 (£1,100) with 256GB of storage. It will launch on May 30 in three colors: Silver, Jet Black and Icy Blue. It’s hard to draw any concrete conclusions on whether the S25 Edge’s compromises for a smaller device footprint are worth it, but expect our review in the next few weeks.

This article originally appeared on Engadget at https://www.engadget.com/mobile/smartphones/samsung-galaxy-s25-edge-hands-on-release-date-price-000022902.html?src=rss 

How to pre-order the Samsung Galaxy S25 Edge

The super slim Galaxy S25 Edge, which Samsung teased at the tail end of its January Unpacked event, has been officially revealed. During tonight’s Unpacked, we got full specs, pricing and shipping dates for Samsung’s latest phone, as well as a chance to get our hands on the new handset. 

Pricing is set at $1,100 for 256GB of storage or $1,220 for the 512GB model and the phones ship May 30. Pre-orders are open at Amazon, Best Buy and directly from Samsung, which is offering a $50 pre-order store credit and a no-charge upgrade to the 512GB model. 

The Edge’s headline features include its thin build and AI-supported photography chops — both of which we’d suspected from various leaks, only to have the rumors confirmed by Samsung’s own press release last week. And indeed, when Engadget’s Mat Smith got his hands on an Edge, he immediately felt how much lighter the device was — yet, he noted, it still felt premium. 

We now know for sure that the Galaxy S25 Edge has a 6.7-inch screen but measures just 5.8mm (0.22 inches) thick. That’s slightly less surface area than the 6.9-inch Ultra, but a full 2.4mm thinner. It weighs 163 grams, nearly the same as the standard Galaxy S25, but the Edge is more than 10mm taller and 5mm wider than its base-model sibling.  

To shave off the grams and trim the width, Samsung developed a broader yet slimmer vapor chamber and a new Thermal Interface Material (TIM) for better heat dissipation. It uses Corning Gorilla Glass Ceramic 2 on the front display. It’s the first phone to use the material, which Corning says offers “enhanced drop performance on rough surfaces,” allowing for a thinner glass layer. The frame is made from titanium, like the Ultra model, a material many phone manufacturers have put in higher-end models for its lightweight strength. 

As for camera power, the Edge has the same 200MP sensor as the S25 Ultra, which Samsung claims captures 40 percent brighter images in low light situations compared with the standard S25. There’s also a 12MP ultra-wide lens and a 12MP selfie cam up front. Those two rear cameras do protrude noticeably from the thin frame of the phone, but Mat was impressed by how well-balanced the phone felt — it’s not lopsided at all. 

Of course, the phone also packs plenty of AI-powered tricks, including Pro Scaler, Audio Eraser and Drawing Assist. To support all the AI, Samsung is using the same Qualcomm Snapdragon 8 Elite chip as the other S25 phones, paired with 12GB of memory and either 256GB or 512GB of storage. It packs a 3,900mAh battery, which is smaller than both the 4,000mAh one found in the standard S25 and the 5,000mAh power supply in the Ultra, though Samsung claims the Edge can run for 24 hours on a charge. 

You can get the Samsung Galaxy S25 Edge in three color options: Titanium Silver, Titanium Jet Black and Titanium Icy Blue. Samsung is promising seven years of security and software updates. We’ve only spent a short time with the phone so far, and our full review will be out shortly.  

This article originally appeared on Engadget at https://www.engadget.com/mobile/smartphones/how-to-pre-order-the-samsung-galaxy-s25-edge-000029138.html?src=rss 

Jamie Lee Curtis publicly shamed Mark Zuckerberg to remove a deepfaked ad

Jamie Lee Curtis is the latest celebrity to call attention to scam ads on Facebook and Instagram that use AI-manipulated video to hawk sketchy products. Curtis also appears to have encountered another issue familiar to many Facebook users: struggling to get the company’s attention.

In posts on Facebook and Instagram, the actress asked Mark Zuckerberg to intervene to stop the spread of a “totally AI fake commercial” of her. “My name is Jamie Lee Curtis and I have gone through every proper channel to ask you and your team to take down this totally AI fake commercial for some bullshit that I didn’t authorize, agree to or endorse,” she wrote. The post also included screenshots of the Meta CEO’s Instagram — Zuckerberg apparently doesn’t follow Curtis — and a screenshot from the scam ad.

“If I have a brand besides being an actor and author it is that I am known for telling the truth and saying it like it is and for having integrity and this use of my images … with new, fake words put in my mouth, diminishes my opportunities to actually speak my truth,” she wrote. “I’ve been told that if I ask you directly, maybe you will encourage your team to police it and remove it.”

It’s not clear what the video, which seemed to rely on manipulated footage from an interview Curtis did with MSNBC, was intended to promote. Curtis shared a screen grab with text that said “I’d want everyone suffering from.” But Curtis is far from the first celebrity to get caught up in such a scam.

Earlier this year, Engadget reported that dozens of Facebook pages were using AI tech to manipulate videos of Elon Musk and other celebrities in order to promote fake cures for diabetes. Many of those clips used similar phrasing, such as “If I were to die tomorrow, I’d want every diabetic, including you, to know this.”

The rise of cheap and readily available AI tools has made it relatively easy for scammers to impersonate celebrities to sell sketchy products or promote other schemes. Last year, Tom Hanks warned his followers about ads “promoting miracle cures and wonder drugs” using his name and voice. He said the ads were made “fraudulently” with the help of AI.

Johnny Depp also warned his fans about AI-enabled impersonators. “Today, AI can create the illusion of my face and voice,” he wrote. “Scammers may look and sound just like the real me.”

A spokesperson for Meta said the company was removing the video flagged by Curtis for violating its policies but declined to comment further. The company said last year it was cracking down on “celeb bait” scams, but hasn’t disclosed how many celebrities or public figures are participating in the program, which relies on facial recognition technology.

In a comment on her Instagram post, Curtis confirmed that she did eventually get Meta’s attention. “IT WORKED! YAY INTERNET! SHAME HAS IT’S VALUE! THANKS ALL WHO CHIMED IN AND HELPED RECTIFY!”

This article originally appeared on Engadget at https://www.engadget.com/social-media/jamie-lee-curtis-publicly-shamed-mark-zuckerberg-to-remove-a-deepfaked-ad-225448916.html?src=rss 

Philips Fixables will let you 3D print replacement parts for your electric razors and trimmers

Philips is launching a new program called Fixables, where it will make plans available so that customers can 3D print replacement parts for the company’s personal care products. The video introducing the initiative touts it as a simpler and easier way to extend the lifetimes of functional items rather than throwing them out because a single part or attachment is broken. Philips has partnered with Prusa Research and LePub on this endeavor, and Printables is hosting the plans.

There are some caveats and limitations to this concept. The quality of the replacement part will depend on the materials used to create it, and not every customer has a 3D printer at home. (Although some public library systems, universities and local maker communities may have equipment that can be used or rented on site.)

It’s also still a project in its early stages. Fixables is initially launching in the Czech Republic. On the website for the Fixables program (which is in Czech), Philips explains that it’s starting in Prusa Research’s home country and reaching out to the 3D printing company’s existing maker community for this project. Another sign that the initiative is still in the early stages is that there’s only one part plan available: a comb attachment for the OneBlade trimmer. But per Google Translate, two more plans are labeled as “We are working on it” and there are three different icons with no descriptions that point to additional plans. The Fixables website also has an option for customers to submit a request for parts they want to be able to 3D print.

So while Fixables is a long way from making a real dent in waste from personal care products, it’s exciting to see a major brand making a serious effort to explore the potential of 3D printing for better sustainability.

This article originally appeared on Engadget at https://www.engadget.com/home/philips-fixables-will-let-you-3d-print-replacement-parts-for-your-electric-razors-and-trimmers-233025245.html?src=rss 

G is for gradient: Google has redesigned its app logo

Over the past few days, eagle-eyed Google users may have noticed that in some instances, the capital G logo for the company now sports a gradient softening the transitions between the four solid-color sections. The branding has been changed for the Google app on both Android and iOS devices as of this writing. However, there are still several places that continue to sport the classic color block look, including browser favicons. It’s also not included in Google’s official collection of images for press; the classic version is still being used as the entry for the Google app logo. 

None of the logos for other Google smartphone apps appear to have adopted a new gradient look. But perhaps notably, the branding for Google’s Gemini AI assistant does have a slight gradient on its star symbol. Maybe AI is leading the way for aesthetics as well as for technical choices at Google? Or maybe this is a trial run to gauge reactions before rolling out a full brand redesign? 

Whatever the reason, the biggest surprise isn’t that Google may be rolling out a logo refresh, but that the change seems to be happening with zero fanfare. When the company last redesigned its branding in 2015, there was a whole campaign explaining every last detail of the new look. Branding is a big deal for a corporation as big as Google. Even changes that seem minor would go through many iterations and committees and vetting before they go live. And any marketing exec knows that consistency is key, so it’s especially strange that, if this is a permanent change, it’s happening in a piecemeal approach. 

We’ve reached out to the company for more information about whether gradients will be the hot style trend for all Google products in 2025.

This article originally appeared on Engadget at https://www.engadget.com/big-tech/g-is-for-gradient-google-has-redesigned-its-app-logo-220437771.html?src=rss 
