Insta360 X4 Air is a lightweight 8K 360-degree camera

Insta360 has launched a new camera that sits between the X4 and its current flagship, the X5. The company says the new Insta360 X4 Air is the lightest-ever 8K 360-degree camera at just 165 grams. That makes it lighter than both the 200-gram X5 and the 203-gram X4, but the company says it still packs flagship-level features. The camera has 1/1.8-inch sensors and captures footage with 134 percent more pixel area per frame than the X4. Its lenses can be replaced in seconds, and an optical coating doubles their drop resistance compared to its predecessor. The camera is also waterproof down to 49 feet.

The company explained that while the “X5 is engineered for precision, X4 Air is made for freedom.” It’s compatible with most X5 accessories, but since the camera itself is lighter, the whole setup is easier to handle. The X4 Air also has a built-in wind guard like the X5 and captures footage from every angle with its dual fisheye lenses. You can later reframe your footage in the Insta360 app to give your viewers various perspectives of what you shot. It shares gesture controls and other intelligent tools with the X5, and its Invisible Selfie Stick enables drone-like shots, as on other Insta360 cameras. Being lighter, however, means a lower-capacity battery: It lasts around 88 minutes while shooting at 8K 30fps, whereas the X5 can last up to 100 minutes.

Insta360 X4 Air comes in black or white and is now available in certain regions on the company’s website and on Amazon. It’s coming “soon” to the US and Canada. The Standard bundle will set you back $400, while the Starter bundle, which adds a 114cm invisible selfie stick, a lens cap and an extra battery, will cost you $440. Whichever you choose, you’re getting a free one-year subscription to the Insta360+ cloud service with 200GB of storage with your purchase.

This article originally appeared on Engadget at https://www.engadget.com/cameras/insta360-x4-air-is-a-lightweight-8k-360-degree-camera-120017733.html?src=rss 

Adobe’s Firefly can now use AI to generate soundtracks, speech and video

Adobe has released the latest version of Firefly that now leans heavily on AI for nearly every facet of video and image post-production. The updated app can now use AI to generate narration, music, images and video clips, while even helping you to brainstorm ideas and piece together clips. Many creators may find it distasteful to lean on AI for nearly every aspect of production, but Adobe calls it “a tool for, not a replacement of, human creativity.”

Firefly has mostly been a content generation tool until now, but Adobe has now introduced the Firefly video editor in private beta. It’s a web-based multitrack timeline editor, not unlike Adobe Premiere Pro, that lets you generate, organize, trim and arrange clips, with tools to add voiceovers, soundtracks and titles. You can organize existing Firefly content or generate new material inside the editor (with presets like claymation, anime and 2D), and combine that with captured media. All that can be edited with “frame-by-frame precision or through a built-in transcript,” Adobe said.

On top of video, Firefly eliminates the need for humans to make voiceovers and music, too. Adobe’s new Generate Soundtrack (public beta) is an AI music generator powered by the Firefly Audio Model that lets you select a style, or have it come up with one to match any clip you upload. It then syncs the music precisely with that footage.

Generate Speech, meanwhile, does the same thing for voiceovers. It gives you a choice between Firefly’s Speech Model and one from ElevenLabs, letting you generate “lifelike voices in multiple languages, and fine-tune emotion, pacing and emphasis for natural, expressive delivery.”

Adobe

Adobe is also expanding access to its Firefly Creative Production tool directly in the Firefly app, as a private beta to start. It’s a complete AI-powered batch image editing system that lets creators piece together clips, automatically replace backgrounds, apply uniform color grading and crop in via a prompt-driven, no-code interface.

Then there’s Firefly Boards, an “AI-powered ideation surface” to brainstorm new concepts. A feature called “Rotate Object” helps you convert 2D images into 3D so you can position objects and people in different poses and rotate them to new perspectives. Two others, PDF exporting and bulk image downloading, speed up the process of sharing visual concepts across projects.

Finally, Prompt to Edit (available now on Firefly) is a conversational editing interface that allows you to use everyday language to describe the edits you want to make to an image, much as you’d use text-to-image tools like Midjourney to create new images. It’s available with Adobe’s latest Firefly Image Model 5 AI, along with partner models from Black Forest Labs, Google and OpenAI.

With Firefly’s AI now able to handle every aspect of production, you may be wondering if this will result in a wave of unwatchable AI “slop” appearing on YouTube and elsewhere. The answer is “probably,” but it won’t necessarily be cheap. Standalone Firefly subscriptions are $10/month for the basic plan (20 five-second videos), $20/month for the Pro plan (40 five-second videos) and $199 for the Premium plan (unlimited videos). However, Adobe is throwing in free image and video generation (with some restrictions) for all Firefly and Creative Cloud Pro customers until December 1st. All the new tools are now available either as part of the update, in public beta or in private beta as mentioned above.

This article originally appeared on Engadget at https://www.engadget.com/apps/adobes-firefly-can-now-use-ai-to-generate-soundtracks-speech-and-video-120018593.html?src=rss 

Adobe’s new Photoshop AI Assistant can automate repetitive tasks

Among the usual slew of AI enhancements to its Creative Cloud apps, Adobe has introduced a new Photoshop AI Assistant to help automate repetitive chores and provide personalized recommendations. At Adobe Max 2025, the company also introduced new tools for Photoshop, Premiere and Lightroom, while launching a new AI generative model and bringing in new third-party models from Topaz and others.

A key new feature in Photoshop and Express (Adobe’s all-in-one design, photo and video tool) is the AI Assistant, which you can chat with conversationally to gain “more control, power and potential time-savings,” according to Adobe. With that, you can tell it to take on a series of creative tasks like color correction or resizing. You can easily switch between prompts to the agent and manual tools like sliders to adjust brightness and contrast. It can also provide personalized recommendations and offer tutorials on how to accomplish complex tasks.

In a brief demo, Adobe showed that when you switch to Photoshop’s “agentic” mode in those apps, it minimizes the usual complex interface and leaves you with a simple prompt-based UI. You can then type in the task you want to accomplish, and the agent will perform those steps automatically. You can then jump back into the full interface to fine-tune the result by changing things like brightness or levels.

Along with the AI Assistant, Adobe introduced a few other AI tools for Photoshop. Chief among those are new partner models for generative fill that let you easily remove unwanted objects and fill in the hole left behind. Those include Google’s Gemini 2.5 Flash, Black Forest Labs’ FLUX.1 Kontext and Adobe’s latest Firefly Image Models. It also introduced Firefly Image Model 5, Adobe’s most advanced image generation model yet.

Photoshop also gains a new Generative Upscale option that uses Topaz Labs’ AI to upscale small, cropped and other low-resolution images to 4K with “realistic detail,” Adobe says. Another feature, Harmonize, lets you place objects or people into different environments in a realistic manner, eliminating much of the work necessary for such compositing. Harmonize also matches the light, color and tone of foreground objects and people to the background.

Adobe

Premiere, meanwhile, gains a similar feature called AI Object Mask, which automatically identifies and isolates people and objects in video so they can be edited and tracked without any manual rotoscoping. The app also gains new rectangle, ellipse and pen masking to make targeted adjustments, along with a fast vector mask for quicker tracking.

Finally, Lightroom is getting a new feature called Assisted Culling. It lets you quickly and easily identify the best images in a large photo collection, with the ability to filter for things like focus level, angles and degrees of sharpness.

Photoshop’s Generative Fill with Partner Models, Generative Upscale and Harmonize are available to customers today. Premiere’s AI Object Mask, rectangle, ellipse and pen masking and fast vector mask, along with Lightroom’s AI Assisted Culling, launch today in beta. Adobe’s Photoshop AI Assistant, meanwhile, will be available through a private beta waitlist.

This article originally appeared on Engadget at https://www.engadget.com/apps/adobes-new-photoshop-ai-assistant-can-automate-repetitive-tasks-120032017.html?src=rss 

The Morning After: Rivian spinoff Also made a modular e-bike with a virtual drivetrain

The TM-B, from Rivian spinoff Also, is the company’s attempt at a do-it-all e-bike. Also is pitching it as flexible enough for commuting, trail riding or kid- and cargo-hauling because its modular frame can swap in bench seats or cargo racks. But the frame only comes in a single size. Still, Also (hate that name) says the standard battery is good for 60 miles of riding and can be charged via USB-C.

Engadget

Besides the modularity, another unique feature is its drive system, called DreamRide. Instead of a mechanical connection between the bike’s rear wheel and the pedals, the TM-B uses “software-defined pedaling,” so pedaling transfers to the generator (and the battery) instead of simply pushing you forward. It’s a different take on e-bike riding, and I’m not entirely sold on it.

From the people who brought you Rivian vehicles, there are plenty more tech touches, including a 5-inch touchscreen display and a built-in security system that automatically locks the frame and rear wheel when you walk away, a la Cowboy and VanMoof.

The first model to ship will be the $4,500 TM-B Limited Launch Edition, which has a range of up to 100 miles. There’s also a $4,500 TM-B Performance model, with a slightly different color scheme, available in the “first half” of 2026. Finally, there’s a base-level TM-B model with a range of up to 60 miles, which only comes with standard ride modes. Also hasn’t announced an exact price but says it will cost less than $4,000 when it ships “later in 2026.”

— Mat Smith

Get Engadget’s newsletter delivered direct to your inbox. Subscribe right here!

The news you might have missed

Federal investigators are looking into Tesla’s Mad Max mode, which reportedly defies speed limits

Microsoft apparently ordered its Xbox division to boost profits to an unrealistic level

How to improve your smartphone photography

Cinemark is adding more 70mm IMAX screens ahead of Christopher Nolan’s The Odyssey

How to cancel your Peacock subscription

Google’s AI health coach will soon be available to some Fitbit Premium users

You’ll chat with a bot.

Google

A preview version of Google’s long-awaited AI health coach launches tomorrow for some Fitbit Premium users in the US. Google says it’ll incorporate user feedback to “add, change or improve features and capabilities.” The company warns users that “initially, there will be some gaps” as it effectively beta tests the coach. The coach can be a sounding board for personal health, fitness and sleep goals and also acts as a personal trainer. Google says it can check progress, create workouts, give advice on trends and review and adjust fitness plans.

Continue reading.

US Customs and Border Protection will photograph visitors for facial recognition database

Welcome to America.

The US Customs and Border Protection (CBP) submitted a new measure that allows it to photograph any non-US citizen who enters or exits the country for facial recognition. CBP and the Department of Homeland Security want to crack down on threats of terrorism, fraudulent use of travel documents and anyone who exceeds their authorized stay, according to a filing with the government’s Federal Register. The agency can already request photos and fingerprints from anyone entering the country, but this rule change would allow it to gather photos of anyone exiting as well.

Continue reading.

This article originally appeared on Engadget at https://www.engadget.com/general/the-morning-after-engadget-newsletter-111545206.html?src=rss 

X’s Grokipedia is online after it briefly crashed out

Grokipedia, the encyclopedia powered by xAI’s assistant Grok, briefly went online Monday before it promptly crashed. At the time of this writing, the website appears to be working and contains more than 885,000 articles, according to a counter on its homepage.

Musk, who has previously railed against Wikipedia, has described the project as “a necessary step towards the xAI goal of understanding the Universe.” Musk and his allies have long claimed that Wikipedia is biased. Wikipedia founder Jimmy Wales has called Musk’s claims about the crowd-sourced encyclopedia “factually incorrect.”

We are building Grokipedia @xAI.

Will be a massive improvement over Wikipedia.

Frankly, it is a necessary step towards the xAI goal of understanding the Universe. https://t.co/xvSeWkpALy

— Elon Musk (@elonmusk) September 30, 2025

Musk said last week that Grokipedia’s launch had been delayed in order “to do more work to purge out the propaganda.” Notably, some articles are nearly identical to their entries in Wikipedia, though Grokipedia doesn’t contain in-line links to sources in the same format. Such entries do have a small disclaimer that “the content is adapted from Wikipedia, licensed under Creative Commons Attribution-ShareAlike 4.0 License.”

In other cases, social media users have already spotted instances where Musk’s worldview is more obvious in the “AI-powered encyclopedia.” Here’s an excerpt from the entry for “university,” as captured by Bluesky user Jeremy Cohen:

Bluesky screenshot of a Grokipedia entry for “university.”

Bluesky

And here’s a screenshot of Grokipedia’s entry for Musk, which was captured by Bluesky user Miles Lee:

Grokipedia entry for Elon Musk.

Bluesky

X didn’t immediately respond to a request for comment.

This article originally appeared on Engadget at https://www.engadget.com/social-media/xs-grokipedia-is-online-after-it-briefly-crashed-out-231108836.html?src=rss 

How Did Cody ‘Beef’ Franke Die? Updates on His ‘Unexpected’ Death

The golf influencer died suddenly while attending a wedding in the Dominican Republic, the ‘Fore Play’ podcast revealed days later. Here’s what we know so far about Franke’s devastating death.

Cinemark is adding more 70mm IMAX screens ahead of Christopher Nolan’s The Odyssey

The movie industry has been in a tailspin for years, with many people forgoing the theatrical experience in favor of watching films at home. I get it. Going to the movies can be expensive and, let’s face it, dealing with other people can be annoying (it’s been 10 years and I’m still mad about those teenagers who would not stop giggling all the way through my first viewing of The Witch). But there’s nothing quite like going to a theater and getting lost in a great film for a couple of hours. In addition, large-scale formats are growing in popularity and theater chains are trying to accommodate moviegoers.

Cinemark is installing more IMAX screens, including ones that support 70mm film projection. The company is adding such screens to its locations in Woodridge, Illinois (a suburb of Chicago); Colorado Springs, Colorado; and Rochester, New York. It’s also adding four IMAX with Laser systems — a 4K laser offering — to other locations in the US in the coming months. It’s upgrading its other 12 IMAX screens across the Americas with that tech too.

According to Variety, Cinemark plans to have the new IMAX 70mm film screens set up by July 17, 2026. That’s the release date for Christopher Nolan’s next film, The Odyssey, which is the first theatrical release to be shot entirely in IMAX. As it stands, only 30 movie theaters on the planet can screen films in IMAX 70mm, which is Nolan’s preferred format.

IMAX is proving popular with cinemagoers who are looking for a large-format experience that would be impossible at home (at least without an obscenely large private screen). Indeed, many IMAX 70mm screenings of The Odyssey sold out a year in advance.

IMAX 70mm isn’t the only format with limited availability that’s drawing audiences to theaters. Paul Thomas Anderson’s One Battle After Another is the first movie in 60 years to be projected in the VistaVision format, but only at a few locations. The film is currently being screened in IMAX 70mm in some cinemas too.

This article originally appeared on Engadget at https://www.engadget.com/entertainment/tv-movies/cinemark-is-adding-more-70mm-imax-screens-ahead-of-christopher-nolans-the-odyssey-194155824.html?src=rss 

Cody ‘Beef’ Franke’s Health: What Happened to the Barstool Sports Influencer Before His Death at 31?

The golf community is mourning the loss of Franke, who died from a ‘sudden medical issue.’ Here’s what we know about his health.

Battlefield 6’s free battle royale mode arrives on October 28

Battlefield 6 is getting a free battle royale mode on October 28. This follows numerous leaks that have been popping up ever since the mainline game hit store shelves on October 10.

It’s called Battlefield: RedSec, though we don’t have too many gameplay details. It’s free for everyone and you don’t need the full-priced game to play. That much we do know. This puts it in direct competition with Call of Duty’s own free-to-play battle royale, Warzone.

Eyes up.
Plates on. #REDSEC arrives tomorrow at 8:00 PT / 15:00 UTC 🔴

🔔 set reminder: https://t.co/xuRd1LETVr pic.twitter.com/Lpi7sufuay

— Battlefield (@Battlefield) October 27, 2025

We can assume some gameplay elements, as it’s a battle royale. It’ll likely feature players heading to a large map and battling one another until the last person is left standing. You know the drill. EA dropped a short teaser that seems to show four different classes of soldier to choose from.

We don’t know how or if this battle royale will interact with the main game. Warzone typically includes a story that ties into whatever’s going on with Call of Duty’s seasonal content drops. To that end, Season 1 of Battlefield 6 also releases on October 28. This update includes new maps, modes, vehicles, guns, attachments and cosmetic items.

Battlefield: RedSec will be available for download on various gaming platforms at 11AM ET. This could end up being a pretty big hit for EA, as the main game managed to sell 7 million copies in three days.

This article originally appeared on Engadget at https://www.engadget.com/gaming/battlefield-6s-free-battle-royale-mode-arrives-on-october-28-174419086.html?src=rss 

Google is bringing Beam, its 3D video conferencing tech, to deployed service members

Google has teamed up with the United Service Organizations (USO) to help deployed service members stay in touch with their families in a different way. As part of a pilot program, the company is bringing Google Beam, its 3D video communication tech, to USO service centers in the US and other countries starting in 2026.

Google suggests that Beam can help military families who are separated by many miles feel like they are in the same room. While family members can keep in touch with deployed loved ones through group chats and video calls, chatting via Beam could help them feel closer together, if the tech works as well as promised.

We got our first look at Beam — then known as Project Starline — in 2021. The holographic teleconferencing system uses 3D imaging, spatial audio and adaptive lighting to make video chats more immersive. Beam is primarily intended for enterprise clients (the first such device costs $25,000), but it’s interesting to see Google exploring other applications for the tech.

This article originally appeared on Engadget at https://www.engadget.com/ar-vr/google-is-bringing-beam-its-3d-video-conferencing-tech-to-deployed-service-members-174500517.html?src=rss 
