YouTube lets creators add multi-language voice tracks to their videos

YouTube viewers from around the world might start finding more videos with audio in their native language. The video-hosting website has launched a new feature that gives creators the capability to add voice tracks to their new and existing content in multiple languages. YouTube has been testing multi-language dubs with a handful of creators over the past year, but it’s now expanding the feature’s reach and making it accessible to thousands more. 

The website presents the new feature as a tool creators can use to grow their audiences around the world. Early testers apparently uploaded 3,500 videos in over 40 languages last month, and viewers watched over 2 million hours of dubbed video every day in January. The creators who tested the feature also found that around 15 percent of their watch time came from viewers playing their videos in another language. 

One of the most notable creators who tested YouTube’s multi-language tool was MrBeast, who has over 130 million subscribers worldwide. MrBeast runs multiple channels in 11 different languages, but in an interview, he said that it would be much easier to maintain just one. It’s also probably a plus that anybody clicking on a link shared by someone speaking another language will be able to understand it simply by changing the dubbed audio. 

Once viewers switch to their preferred language for the first time, the website will default to it whenever they watch videos with dubs. Viewers will also be able to search for content dubbed in their language, even if the video’s primary tongue is different, through translated titles and descriptions. YouTube didn’t say how it chose the thousands of creators getting access to the feature today, but we’ve asked the website for an idea of how it plans to roll out multi-language dubs before making them available to everyone. 

 

‘Star Trek: Picard,’ cargo cults and the perils of success

The following contains spoilers for Star Trek: Picard, Season Three, Episode Two: “Disengage.”

Star Trek II: The Wrath of Khan is a 1982 movie that arguably saved Star Trek as a going concern. It was a cheap movie, but writer-director Nicholas Meyer made thriftiness a virtue, building a paranoid submarine thriller out of steely glances and jousting phone calls. Despite having no love of Trek, Meyer painted a broad sweep of an older Jim Kirk, his life, death and rebirth with the help of a son he never knew he had. It’s a sumptuous movie, full of smart dialogue and characterisation, with a drum-tight plot and great acting; it’s not just a great Star Trek film, but a great film, period. And sometimes, I feel that its critical and commercial success was so big that it’s been to Star Trek’s overall detriment.

Whenever the creative well runs dry, Trek runs back to old comforts, and the Next Generation movies were perpetually looking for their own Khan. First Contact flipped the Moby Dick narrative, making Picard the Ahab against the Borg’s white whale. Insurrection borrowed the setting of Khan’s climactic finale, while Nemesis borrowed its plot beats: a wounded ship only saved by the heroic sacrifice of each series’ Tin Man character. Into Darkness then winkingly inverted those same plot beats, with Kirk nobly “dying” in place of his best friend.

Picard’s been telegraphing its intentions from the get-go, dropping every nod to fans about where we’d wind up. The Bennett-era movie callbacks remain en vogue here, and to my memory this is the first use of the Blaster Beam, or a soundalike, in a streaming era soundtrack. Much like all of the other nods, we’re watching a cargo cult being assembled in real time, boldly serving us up something we’ve only seen, oooh, four or five times at this point. So: Wounded hero ship facing off against a more powerful enemy? Check. Inside a nebula that’s disrupting normal starship functions? Check. With our lead suddenly presented with the news he has a son he never knew about? Check check check.

This week, Picard and Riker make it to the Helios to find Beverly in her stasis pod, guarded by her son, Jack. He’s a rakish Englishman who has already spoken two whole words in French while negotiating with a corrupt Fenris Ranger. After being rescued by the Titan, Riker starts hinting about the younger Crusher’s parentage, as if being the world’s most English Frenchman is a genetic trait. It isn’t long before Crusher is outed as an intergalactic con man and fugitive, and Shaw has him sent to the brig. Shaw also, after several hours of letting her stay on the bridge giving orders, dismisses Seven for indulging two people we keep being told are “legends” and “heroes.” 

There are plenty of furrowed brows as Picard initially refuses to consider that he might have a son, and at no point does anyone suggest running a paternity test. You might expect it would be easy enough to whip out a tricorder or hypospray, or even check the transporter records, and find the truth. But, you know, that would be too efficient, so we’re left with Picard and Jack facing off in the brig. Now, credit where it’s due: Patrick Stewart and Ed Speleers sell the hell out of this scene, the first that feels in any way real so far.

All the while, the Titan is menaced by Amanda Plummer’s villain, who we know is evil because she’s smoking on the bridge of her ship, the Shrike. Indoors! I wonder if this, too, is another nod to those older films, given that Plummer’s father faced off against Kirk in The Undiscovered Country. Maybe this is why I’m so out of step with so much of the (positive) critical consensus around this run. I find this raiding of Star Trek’s own text and paratext insular and repetitive, more interested in placating disaffected fanboys than in telling a story with a point of view. If you want strange new worlds, new life forms and new civilizations, you’ll need to watch the show set 142 years earlier.

Then there’s Raffi. Last week, she uncovered that some nefarious type had stolen some deep tech from Aperture Science Starfleet. At the end of that episode, a Starfleet recruitment building big enough to fill the donut hole in Apple Park gets Portal-ed into dust, killing (just) 117 people. Now, looking to make amends for her, uh, failure? She’s digging into local crims in order to find out who exactly was responsible for the seemingly unwarranted attack.

Now, this is the plot beat I alluded to in my preview, when Raffi, who is in recovery, is forced to do drugs in order to prove she’s not an undercover agent. The portentous music and Michelle Hurd’s acting sell the notion that this isn’t a great idea, but Raffi’s committed to the cause. But while she’s incapacitated, her handler comes in to rescue everyone with some good, old-fashioned Mek’leth carnage. I couldn’t help but punch the air when Worf popped up in all his glory, but the tonal jump doesn’t sit well with me. 

You might be wondering why the Federation Ambassador to the Klingon Empire is doing covert intelligence work. But, by the end of the Next Generation movies, it was clear that Worf would just show up for a visit whenever the plot required. Even I’m not going to harp on about this too much, because it is never a chore to watch Michael Dorn do his work. As EW’s Darren Franich said in his definitive Star Trek essay series, “Michael Dorn knew Worf only got cooler when the show made him look goofy.” As goofy as he is here, he’s still Worf, and you just wish that Paramount had greenlit a Worf show three years ago instead.

I had hoped this episode, for all its laggy table-laying, might be looking for a way to attack a well-worn but fundamentally strong Star Trek trope: whether it’s right and proper to hand over a potentially innocent man to frontier justice, and if not, why not. There are plenty of angles for the argument, given the many shades of gray that most people can now comprehend. After all, the Titan is outside Federation space, so you can’t, or shouldn’t, impose your values on those beyond your worldview. That can be countered by someone saying that natural justice is, or should be, a universal virtue. And these debates must sit side-by-side with the notion that the needs of the many (the 500-plus souls on the USS Titan) outweigh the needs of the few, or the (Jack Crusher) one. You could even have the supposedly “right” argument, the one aping Spock’s famous aphorism, espoused by the character most seen as an asshole. But no.

Unfortunately, Picard remains bad for all of the same reasons that pretty much every other Khan copy is bad: It has almost nothing to say. In fact, this episode seems to hinge on every person in the narrative suddenly becoming incapable of doing even the basic parts of their jobs. Since when would a security officer not search a prisoner for hidden technology before putting them in the brig? Since when would a ship at Red Alert be taken by surprise when a hostile vessel in front of them starts attacking? And why did nobody have the presence of mind to run a paternity test, which surely at this point in history could be done with the ship’s internal sensors? Not to mention, why didn’t Jack just tell the security guard he’d like to hand himself over rather than knocking him out? Maybe so we could have a few more moments of tension before the Titan chooses to make a break for the nebula and we roll the credits.

You may think I’m banging on unnecessarily about The Wrath of Khan, but I think it’s justified here. If the production team weren’t looking to invite comparisons to a vastly superior project, then they were unwise to take so many of its plot beats as their own. I mean, in Wrath of Khan, Kirk has sixty seconds to find a way to even things up between the wounded Enterprise and the Reliant. And he does so with a little bit of theatrics, some ingenuity, and by showing that he was a little cleverer than anybody gave him credit for being. When this version of Picard is placed in the same situation but given a whole hour to come up with something, what does he do? He marks time on the bridge while the younger actors and their plausible-looking stunt performers do the now-obligatory punch fight, so the audience at home doesn’t start getting bored.

 

Panasonic S5 II review: The full-frame vlogging camera you’ve been waiting for

While popular with vloggers, Panasonic’s mirrorless cameras have been held back from true greatness by the lack of a phase-detect autofocus system. Finally, the company has rectified that problem with the launch of the S5 II. It has a new 24-megapixel sensor with phase-detect pixels that should get rid of the wobble and hunting that have plagued the contrast-detect AF on Panasonic cameras over the years.

To make it even better for content creators, Panasonic also brought over its new, more powerful stabilization system from the GH6. And you still get the powerful video features you’d expect on Panasonic cameras, like video up to 6K, monitoring tools and advanced audio features. The S5 II is also attractively priced at $2,000 – that’s $500 less than the Sony A7 IV and Canon EOS R6 II, its main competitors.

This is Panasonic’s first hybrid phase-detect autofocus system, so I was very curious to see how it stacks up against those cameras. I also wanted to see if it would let you leave your gimbal at home, as the company suggests in its ads. To find out, I took it around Paris and my hometown of Gien, France. 

Body and Handling

The S5 II’s body and control layout are identical to the S5’s, and that’s generally a good thing. At 740 grams, it is a bit heavier than its main rivals. However, it’s still a reasonably lightweight video camera that’s comfortable enough to shoot with for a full day. 

It has a big comfortable grip, along with lots of manual controls that let you change settings without the need to dip into menus. It has all the controls you’d hope for, like a joystick, dedicated AF control and more. The record button is placed on top so it’s easy to find when vlogging, but it would be nice to have a record button on the front like the GH6’s.

Should you need to use the menus, Panasonic has nailed that part, with well-organized categories that make important adjustments fairly easy to find. It’s also quite easy to customize things, so as with any camera, I’d recommend doing that for your own workstyle. 

The 3-inch, 1.8 million dot rear display is the same one as before and is bright and sharp for video work. It of course fully articulates, so you can flip it around for vlogging, hold it high or low and more. The only drawback is that it can get tangled up with any cables, particularly the headphone jack.  

Luckily, Panasonic boosted the OLED electronic viewfinder (EVF) resolution to 3.68 million dots from 2.34 million dots on the original S5. It’s now decently sharp and clear, addressing one of my biggest complaints of the last model. 

Steve Dent/Engadget

Another welcome update is two fast UHS-II card slots, rather than the S5’s one UHS-II and one UHS-I slot. That allows for faster transfer speeds and more reliable backups, if you like to shoot video to two cards at once. It also now uses a full-sized HDMI jack, rather than the micro HDMI of before, making it far more reliable when using an external recorder. 

There are of course headphone and mic jacks, but the S5 II now offers 4-channel recording via the DMW-XLR1 hotshoe audio adapter, just like the GH6. It also borrows the latter’s audio interface that gives you a central hub for all audio settings. It doesn’t have a dedicated button like the GH6, but you can assign any function button.

Finally, the batteries are borrowed from the original S5, and deliver up to 470 shots on a charge or a solid two hours of 4K recording.

Video

Panasonic’s mirrorless cameras are primarily designed for video shooters, so let’s get into that first. The centerpiece of this camera is that new phase-detect autofocus, designed to eliminate the wobble or hunting that happened with past Panasonic models that had contrast-detect only autofocus. So how does it work for content creators?


As with other recent models, the S5 II’s system includes regular continuous AF modes along with subject tracking for both humans and animals. However, it’s not as sophisticated as recent rival cameras like the A7 IV and EOS R6 II, as it can’t track things like cars and airplanes, and doesn’t distinguish between birds and other animals.

Luckily, the capabilities it does have are on par with those models. It smoothly tracks subjects and has very little lag if they move toward the camera, for example. Face and eye detection is good, though it struggles a bit if the subject turns, and can’t track their eyes if they’re not reasonably close to the camera. It’s also not quite as sticky as rival models. 

Still, it generally tracks focus reliably for interviews, vlogging and other situations. More importantly, the pulsing, hunting and wobbling are completely gone, so you can now rely on the S5 II’s autofocus in most situations.

There is one caveat that may be important to some users. As YouTuber CameraOfChoice notes, the phase-detect AF works great at all 4K and 6K resolutions, and 1080 25p. However, the camera switches to contrast-detect AF at 10-bit 1080p 60 fps and 120 fps resolutions, along with 3.3K 422/10L 25p. I’ve reached out to Panasonic for more information, but if you use those resolutions frequently, you may need to look at a different model. 


With autofocus issues mostly gone, the S5 II is a far more attractive vlogging and video camera thanks to its other powerful features.

You can shoot 5.9K video at 30p using the full width of the sensor, or full-width supersampled 4K at up to 30 fps. 60p 4K video is possible as well, but requires an APS-C crop and some loss of sharpness. The S5 II can also handle 4:3 anamorphic video at up to 6K using the full sensor width, or 3:2 “open gate” video that makes it easier to crop or deliver in social media formats.

There are few temperature-related time restrictions in any of these modes, thanks to the inclusion of a clever fan that only kicks in when you need it. Namely, there are no time restrictions on any video at 4K and below, including 1080p 120, while 6K is limited to 30 minutes. Panasonic is the only manufacturer to test its cameras at up to about 105 degrees Fahrenheit, so most users will likely never experience any problems. 

As with most Panasonic cameras, you can shoot 10-bit video with V-Log to boost dynamic range. And it’s easier than ever to monitor V-Log: you can not only choose a standard Rec.709 output, but display your own custom look-up table (LUT), too. You can even record with those LUTs applied to your final video output, giving you unlimited “looks” and potentially saving time in post.


The main video drawback is the lowish data rates (200 Mbps and below) and the lack of any ProRes or All-I internal recording modes. There’s also no external RAW capture, though you’ll be able to add that later for a $200 fee. You can, however, capture other ProRes codecs to an Atomos Ninja V/V+ or BlackMagic Video Assist recorder. 

And that brings up Panasonic’s upcoming S5 IIx, announced at the same time as the S5 II. It’s priced at $2,200 and is mostly identical in terms of features. However, the extra $200 gets you not only RAW external video included but also ProRes capture to an external SSD via the USB-C port. With a small price difference to get such a useful feature, a lot of people might want to wait for this model.

Another terrific new capability is the updated in-body stabilization borrowed from the GH6. It’s now much better at smoothing out vertical step motion than the S5, though there’s still some side-to-side sway. It also has a “Boost IS” for handheld video where you don’t need to move, keeping shots locked off like the camera’s on a tripod. Can it replace your gimbal? In some cases, yes, but you’ll have to work carefully as it still can’t match a gimbal’s smoothness.


Video quality is excellent, with extremely sharp 4K 30p and 4K 60p that’s just a touch less so. Colors are accurate and pleasing straight out of the camera, with natural-looking flesh tones. The 10-bit V-log video delivers a very solid 14+ stops of dynamic range, just slightly below Nikon and Sony models. That gave me plenty of room for extra creativity or to correct over- and underexposed shots. 

The S5 II is also good in low light, thanks to Panasonic’s Dual Native ISO system. Don’t expect Sony A7S III-level performance, but the dual ISO system really keeps noise down at ISOs as high as 12,800 or even 25,600. You’ll of course see noise when you boost shadows at those ISOs, but the grain looks quite natural. Anything below ISO 6400 has very little visible noise. One quirk is that it’s best to use ISO 4000 instead of ISO 3200, as the two native ISO values are 640 and 4000. 

As for rolling shutter, the S5 II is middling in this regard. It’s most noticeable in 6K or supersampled 4K modes, but not bad at all with an APS-C crop. I’d rate it as better than the higher-resolution A7 IV and about the same as Canon’s R6 II.

Finally, Panasonic offers a lot of ways to monitor video not seen on rival cameras, including waveforms and vectorscopes. Those features are very useful to video pros, helping them nail exposure and color accuracy. And as mentioned, audio is very easy to work with thanks to a dedicated hub to adjust settings, along with both line and mic inputs. 

Photography

Most people likely won’t buy the S5 II for photography, but it’s not bad at all in this department. It can handle bursts at up to 7 fps with the mechanical shutter or 30 fps in electronic mode. The buffer is quite impressive, as it allows for 200 shots in RAW before throttling – a full six seconds of uninterrupted 30 fps burst shooting.

At those speeds the autofocus largely keeps up, though it’s not quite as fast or accurate as the R6 II and A7 IV AF systems. As with video, the photo autofocus isn’t quite as smart or tenacious with subjects as Sony’s A7 IV. Still, it’s much better than the contrast-detect AF of the last model and up there with recent Nikon and Fujifilm AF systems.

Despite the fast electronic burst speeds, the S5 II has limited usage as a sports camera. The rolling shutter would impact shots with fast moving subjects, unless you use it in APS-C mode. That’s a feasible option, but it reduces the resolution by half.  

Given how well it handles video, photos are a piece of cake for the image stabilization system. It locks things down so well that I was able to shoot at shutter speeds down to a quarter-second or slower and still get sharp images.

Despite the shift to a sensor with phase-detect pixels, image quality hasn’t suffered, with dynamic range just slightly below Sony and Nikon models. JPEGs offer a good balance between noise reduction and sharpness, while delivering natural colors and pleasing skin tones. If you want more control, the RAW photos dial up the dynamic range so you can claw back highlights or dig into shadows. 

As with video, it also excels in low light, with very little noise up to ISO 6400, nothing too objectionable at ISO 12,800 and usable images at ISO 25,600 if you don’t try to lift the shadows too much. Beyond that, the color grain in particular can get too harsh.

Wrap-up


With the autofocus finally keeping up with rival cameras, Panasonic’s S5 II is an awesome full-frame vlogging and video camera option. Priced at $2,000, it’s also a very strong value proposition, particularly for video shooters.

Its primary competition is the Sony A7 IV and Canon EOS R6 II. Both of those cameras are better for photography, but the S5 II is much better for video and particularly vlogging, thanks to the built-in monitoring tools and superior stabilization. If you want a better match of photography and video tools, Fujifilm’s 40-megapixel $2,000 X-H2 is the best option – if you don’t mind stepping down to an APS-C sensor.

In fact, the S5 II’s greatest rival might be the upcoming S5 IIx. I’d argue that many people paying $2,000 wouldn’t hesitate to spend an extra $200 to get some pretty valuable features like ProRes SSD recording. Either way, it’s Panasonic’s best vlogging camera since the original GH5 and should rise to the top of many content creators’ shopping lists.

 

Snapchat now suggests soundtracks for your videos

You might not hem and haw the next time you’re choosing a soundtrack for a Snapchat video. Snap has introduced automatic Sounds features that help you produce clips faster. Sounds Recommendations, for instance, suggests music relevant to the augmented reality Lens you’re using. Try a bread Lens and you’ll see plenty of toast-related songs alongside the most popular overall tracks.

Sounds Sync, meanwhile, creates montage videos in sync with the beat of tunes in the Sounds collection. You’ll need between four and 20 photos or videos, but this could help you summarize a vacation or social outing without stressing about suitably timed songs.

Both features are available now for iOS users in the US, and are rolling out worldwide. Android users can also use Sounds Recommendations right away, but they’ll have to wait until March to try Sounds Sync.

Snap isn’t shy about its goals. The easier it is to create videos, the more likely you are to post on Snapchat. This is also as much about helping artists as it is users — Snap music strategy lead Manny Adler claims this is a “unique opportunity” for musicians to reach listeners who’ll (hopefully) play full songs after hearing them in someone’s video.

The introductions come at a good time for Snap. The company’s audience is growing after a turbulent 2022, having reached 750 million monthly active Snapchat users despite laying off roughly 1,300 workers last summer. While it’s still much smaller than rivals like Instagram, which had two billion monthly active users as of last fall, it’s enduring competition from rivals that frequently mimic its features. Small additions like Sounds Recommendations and Sync may help Snap maintain that growth.

 

Samsung’s Galaxy S23+ is already $140 off

Samsung’s latest flagship smartphones haven’t even been out for a week, but you can already score a solid discount on one model. The Samsung Galaxy S23+ with 256GB of storage has dropped by $140 to $860. That makes it the same price as the standard Galaxy S23 with the same storage capacity. It’s worth noting that the discount only applies to the Phantom Black colorway.

At 6.6 inches, the S23+ has a larger screen than the 6.1-inch S23. It has a bigger battery too. The specs are otherwise the same, save for ultrawideband support on the S23+. The phone runs on a Snapdragon 8 Gen 2 for Galaxy with a 3.36GHz octa-core CPU and Adreno 740 GPU. There’s 8GB of RAM, WiFi 6e and Bluetooth 5.3. The S23 lineup runs on Android 13 too.

The S23+ doesn’t boast the same 200MP camera as the Galaxy S23 Ultra, but it’s a worthy phone in its own right. It does have 50MP wide, 12MP ultra-wide and 10MP telephoto cameras. Although it’s more of an evolution from the S22 than a revolution, the S23+ is especially worth considering if you’ve been hanging onto the same phone for a few years or you’re looking to make the switch from iOS to Android.

Follow @EngadgetDeals on Twitter and subscribe to the Engadget Deals newsletter for the latest tech deals and buying advice.

 

FTX co-founder Sam Bankman-Fried faces four new criminal charges

FTX co-creator Sam Bankman-Fried (aka SBF) is now dealing with four new charges over the collapse of his crypto exchange. A newly unsealed indictment in a New York federal court accuses SBF of fraudulent activity through both FTX and a linked hedge fund. The co-founder also allegedly violated federal campaign finance laws by making secret political donations using the names of two executives.

The expanded charges now include 12 counts. A source speaking to CNBC claims the additional allegations could mean another 40 years in prison if SBF is convicted.

Developing…

 

‘Ant-Man and the Wasp: Quantumania’ broke me

Early on in Ant-Man and the Wasp: Quantumania, our hero Scott Lang (Paul Rudd) and his daughter Cassie (Kathryn Newton) are warped into a quantum-level universe. It’s filled with alien biology and vistas that wouldn’t be out of place on distant planets. But while that sounds like the perfect setup for a fun sci-fi romp, I never bought it. And, unfortunately, the actors didn’t appear to buy it either. The backgrounds looked like psychedelic screensavers, and, similar to the Star Wars prequels, there was an uncanny disconnect between the live humans and their mostly digital surroundings.

I found the aesthetic so viscerally ugly, it made me fear for the future of the Marvel Cinematic Universe, and for anything else made with ILM’s StageCraft technology (AKA “the volume”). That realization surprised me, since I’ve mostly enjoyed how that tech helped make The Mandalorian’s unique worlds come alive. The volume is a series of enormous LED walls that can display real time footage. Together with interactive lighting, it makes actors seem like they’re actually walking around artificial environments. Another plus? It also helps the lighting look far more realistic, something that was particularly noticeable on Mando’s polished armor.

So what the hell happened to Quantumania? Its artificiality seems partially intentional, as it’s trying to evoke pulp fantasy and even a bit of Star Wars. But somewhere along the line, director Peyton Reed forgot to ground its fantastical visuals with anything resembling human emotion. When Ant-Man, his daughter, or their tiny-tech compatriots, Hank Pym (Michael Douglas) and Janet Van Dyne (Michelle Pfeiffer), enter the Quantum Realm, there’s little room for awe and wonder. Sure, they occasionally quip about something weird: buildings that move! An alien intrigued by body holes! But we quickly move onto a rote sci-fi tale of rebellion against an evil conqueror (in this case, Kang, played by Jonathan Majors).

Vulture’s Bilge Ebiri, who calls the film “a cry for help,” succinctly describes why Quantumania falls flat: “The action is tired, the universe unconvincing, and nobody on screen looks like they want to be there. They don’t even look like they know where there is.”

Marvel

Clearly, we can’t blame “the volume” for all of the film’s faults; it’s just another tool in a director’s kit. In an interview with Collider, Reed said that he wasn’t sure if the technology would work out for Quantumania, but eventually he found it to be “great for certain environments, but not necessarily right for other ones.” He later added, “There are limitations to it [the volume], and we push that system to its limit on this movie… What works so well in Mandalorian is they have a lot of lead time, because they’re doing a whole series, to invest and create these environments, and on the schedule we were on, it’s not always right for that situation.”

Several anonymous VFX workers told Vulture that Quantumania’s hectic production schedule was one reason its computer-generated worlds fall so flat. The higher-profile Black Panther sequel, Wakanda Forever, was a bigger priority for Marvel when it came to VFX work (no surprise, when the first movie made over $1.3 billion globally). And there were apparently late-stage changes to Quantumania that led to some rushed work – though it’s worth noting that isn’t unusual for a major Marvel film.

“Making big pivots late in the game has consequences, and there is a constant scramble from the VFX houses to keep up,” a former VFX worker told Engadget. (They requested anonymity due to confidentiality agreements around their work.) “And near the end, it’s almost always a disaster. Lots of miracles. Lots of clever solutions, not based on heightening the art, but just being able to do a week’s worth of work in 24 hours.”

While watching Quantumania, I couldn’t help but compare it to Avatar: The Way of Water, another big-budget science fiction epic that brings us to another alien, almost completely computer-generated world. That film goes even further than Ant-Man, since almost every scene involves actors playing CG Na’vi characters, one or two humans and elaborate sets. But I never once doubted the reality of The Way of Water.

You could tell that director James Cameron has actually been thinking about the world of Pandora for over a decade, so he has a strong vision of how the Na’vi are supposed to interact with their animal companions, or how a soulless corporation may view a pristine planet as a way to make more revenue. With Quantumania, there’s no clear sense of why that sub-atomic universe is special, or why Kang may want to rule it. We might as well be watching a lesser Star Wars movie.


Perhaps that’s why the volume rubbed me the wrong way this time around. When you have a stronger grasp of character and story, as The Mandalorian (mostly) demonstrated, it can help to make the entire experience feel more epic. But if your narrative is dull and unfocused, the volume can easily heighten its flaws. There’s room to do something truly special with the idea of a sub-atomic universe, the sort of thing screenwriter Jeff Loveness frequently did on Rick and Morty.

In the end, though, Quantumania feels like an episode of that show stretched out to two hours, and molded to fit the plot machinations of the MCU. Any enjoyment I had while watching it was instantly warped to the quantum realm when it was over.

 

Apple’s Mac Mini M2 and M2 Pro models get their first Amazon discounts

Mac Mini computers with M2 and M2 Pro chips are the cheapest way to get Apple’s latest processors, and now Amazon is sweetening the deal a bit more. The entry-level 256GB Mini M2 is on sale at $580 for a savings of $19 over the regular price, while the 512GB Mini M2 is $770, or $29 off. And if it’s the 512GB Mini M2 Pro model you’re seeking, it can be found at $1,250, netting you a $49 discount. These appear to be Amazon’s new normal prices, but they’re less than we’re seeing at Apple’s store.

Shop Mac Mini M2 and M2 Pro on Amazon

The Mac Mini is tiny but mighty, with the M2 model easily powerful enough for productivity chores and multitasking. The M2 Pro, meanwhile, is a low-key content creation demon, beating the M1 Max version of the Mac Studio and holding its own against the 14-inch MacBook Pro with M2 Max.

On top of that, you get killer connectivity: two Thunderbolt 4 USB-C connections, an HDMI port (supporting up to 4K 240Hz and 8K 60Hz output on the M2 Pro model), two USB-A ports, a headphone jack and gigabit Ethernet (upgradeable to 10 gigabit). The M2 Pro model adds two additional Thunderbolt 4 USB-C ports, making it even more useful for creatives with a ton of accessories.

The Mac Mini M2 won’t replace your gaming machine, but it can handle nearly everything else you throw at it. We also wouldn’t bother with the overpriced storage or RAM upgrades, as the M2 is much more efficient with RAM than typical PCs. If you’re looking for a cheap but powerful Mac, this is the way to go.

 

The Morning After: Apple is reportedly closer to adding no-prick glucose monitoring tech to its Watch

Bloomberg sources claim Apple’s quest for no-prick blood glucose monitoring is now at a “proof-of-concept stage” and good enough that it could come to market once it’s smaller. The technology, which uses lasers to gauge glucose concentration under the skin, was previously tabletop-sized but has reportedly advanced nearer to an iPhone-sized prototype.

It’s been in the works for a long time. In 2010, when Steve Jobs headed up Apple, the company bought blood glucose monitoring startup RareLight. But no-prick monitors are a challenge. In 2018, Alphabet’s health subsidiary, Verily, scrapped plans for a smart contact lens that tried to track glucose using tears.

– Mat Smith

The Morning After isn’t just a newsletter – it’s also a daily podcast. Get our daily audio briefings, Monday through Friday, by subscribing right here.

The biggest stories you might have missed

Amazon officially becomes a health care provider after closing purchase of One Medical

Microsoft expands the Xbox Game Pass family plan to six more countries

‘No Man’s Sky’ Fractal update overhauls VR gameplay in time for its PS VR2 release

Bose portable speakers are up to 30 percent off right now

Apple is convinced my dog is stalking me

The best Bluetooth trackers for 2023

Spotify’s new AI DJ will talk you through its recommendations

The DJ uses OpenAI to tell you about the songs it chooses for you.

Generative AI is absolutely everywhere right now, and that includes Spotify. Its latest feature, simply called DJ, serves up a personalized selection of music that combines Spotify’s well-known personalization tools, like Discover Weekly and the content that populates your home screen, with some AI tricks. The feature rolls out today for Spotify Premium subscribers in the US and Canada.

Continue reading.

Notion’s AI editor is now available to anyone who wants writing help

The company only began testing Notion AI late last year.

Last November, Notion, the popular note-taking app, began testing a built-in generative machine learning algorithm dubbed Notion AI. Now it’s ready for launch. Notion said anyone, including free users, can start using its AI-powered writing assistant. More than two million people signed up for the waitlist for the alpha version and, according to the company, most testers weren’t asking it to write blog posts and marketing emails from scratch. Instead, they were using it to refine their own writing. As a result, the company decided to “completely redesign” Notion AI to make it more “iterative and conversational.” The new version of the tool will generate follow-up prompts until you’re satisfied with its results.

Continue reading.

Twitter’s 2FA paywall is a good opportunity to upgrade your security practices

The platform could become less secure – but that doesn’t mean you have to be.

Twitter announced plans last week to pull a popular method of two-factor authentication for non-paying customers. Starting March 20th, if you don’t want to pay $8 to $11 per month for a Twitter Blue subscription (hi, that’s me!), you’ll no longer be able to use text message authentication to get into your account. There are still some options to keep your account secure. Software-based authentication apps like Duo, Authy, Google Authenticator and the 2FA authenticator built into iPhones can generate a token (or, in some cases, send you a push notification) to complete your login, or you can use hardware-based security keys that plug into your devices. We walk you through the options if you want to stick around on Twitter.

Continue reading.
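For the curious: the short-lived codes those authenticator apps produce typically follow the TOTP standard (RFC 6238), which hashes a shared secret together with the current time window. Here’s a minimal illustrative sketch in Python; the function name and parameters are ours, not from any particular app.

```python
# Minimal TOTP sketch (RFC 6238): a base32 shared secret plus the current
# 30-second time window yields a six-digit, short-lived login code.
# Illustrative only -- not a drop-in replacement for a real authenticator.
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32, at_time=None, digits=6, step=30):
    """Derive a time-based one-time password from a base32-encoded secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    # Number of time steps elapsed since the Unix epoch.
    counter = int((time.time() if at_time is None else at_time) // step)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes based on the last nibble.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because the code depends only on the secret and the clock, the server can compute the same value independently, which is what makes these apps work offline, unlike SMS codes.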

Uber puts a cute lil ride tracker on the iPhone lock screen

The app now supports iOS 16’s Live Activities feature.

Uber

Uber has rolled out an update for its iPhone app that shows whether it’s time to head out the door and meet the ride you ordered. You even get a cute car icon moving along to illustrate it. The company has launched support for Live Activities, an iOS 16 feature that puts real-time events from compatible apps on top of the lock screen and, when your device is unlocked, in the Dynamic Island on iPhone 14 Pro models.

Continue reading.

 

The creator of PlayStation’s iconic logo sound has died

You may not know the name Tohru Okada, but if you’ve ever owned a PlayStation console, you’ll be familiar with one of his most iconic creations: the sound that plays every time the PS logo appears. According to Japanese-language sources (via GameSpot), Okada passed away on the 14th due to heart failure. He was 73 years old. Okada was reportedly hospitalized early last year due to a compression fracture and had been undergoing rehabilitation in hopes of performing at a music festival in April.

In addition to the PlayStation logo sound, Okada also composed the music for a series of Crash Bandicoot advertisements that aired in the ’90s, as well as for some anime titles like Mobile Suit SD Gundam. For long-time fans of Japanese rock, though, he was more than just a game and anime composer. He was the keyboardist for a rock band called Moonriders, where he played with Keiichi Suzuki, who made music for Nintendo’s Mother series that’s also known as EarthBound outside Japan.

While the PS “bing” sound is short and unobtrusive, Sony has been using it for over 25 years. You can watch the video below to hear what it sounded like for various PS ads and loading screens from the very first PlayStation. 

 
