DJI Mini 4 Pro review: The best lightweight drone gains more power and smarts

Last year, DJI showed what was possible in a small drone with the Mini 3 Pro by fitting tons of technology and a high-quality camera into a sub-250 gram drone. Following that up was never going to be easy, but now (after numerous leaks) it’s unveiled the Mini 4 Pro with a long list of new features.

Aside from a new slow-motion mode, the camera is largely the same. However, the drone has new omnidirectional obstacle sensors that eliminate the blind spots of the Mini 3 Pro. It also gains a feature called ActiveTrack 360 that lets you program camera moves while tracking a subject.

Small drones are the best way to track fast-paced action, as they’re maneuverable and less prone to damage when crashing. With all the improvements, the Mini 4 Pro is better and safer at that than its predecessor — at least on paper. Now, let’s find out if it lives up to that in the real world.

Design

The Mini 4 Pro is still under 250 grams so it can be flown without a license or registration in many regions, but it has subtle design changes all around. The four forward- and rear-facing sensors are now placed so that they can see to the sides as well, and the body is a bit more streamlined. It has larger cooling vents, slightly smaller rear arms and new landing feet at the front. The camera/gimbal shield is smaller and easier to put on, and it has a new guard that protects the propellers when it’s stored.

As before, the gimbal tilts up 60 degrees and down 90, and the camera flips 90 degrees to give you full vertical resolution for social media. At the rear is a microSD card slot, and the Mini 4 Pro has 2GB of internal storage for emergencies.

The Mini 4 Pro comes with one of two controllers: the basic RC-N2, which requires a smartphone, or the RC2 with a built-in screen. Since it uses DJI’s new Ocusync 4 (O4) transmission, first introduced with the Air 3, it only works with the new controllers and not the older models — for now, anyway.

You can buy it with a $55 ND filter set for sunny days, which I’d recommend if you can afford it. DJI also offers a wide-angle 18mm equivalent lens attachment ($40), but it has significant barrel distortion and can cause focus issues.

The drone also supports DJI’s Lightcut, an editing app that lets you generate quick videos for social media. As DJI says, it allows “one tap generation of captivating videos by merging ActiveTrack, MasterShots, and QuickShots footage,” while automating sound effects and more. It also works wirelessly, so there’s no need to download footage to your smartphone.

Performance

Given its small size and maneuverability (and the same sensor as DJI’s Action 4), you can think of the Mini 4 Pro as a flying action camera. The light weight (and low price compared to, say, a Mavic 3 Pro) also makes crashes less consequential.

Maximum speed is a decently fast 35 MPH in sport mode, or 26 MPH in regular operation. It can handle winds up to 24 MPH, an impressive figure for a sub-250-gram drone. In operation, it can look like it’s being buffeted fairly hard by the wind, but you wouldn’t know it from the footage thanks to DJI’s gimbal and stabilization technology.

Steve Dent for Engadget

The Mini 3 Pro was effectively blind on its sides, but the Mini 4 Pro offers protection all around, like the Mavic 3 Pro and Air 3, thanks to four new omnidirectional sensors on top and two on the bottom (along with a time-of-flight sensor). It also uses DJI’s APAS system, which offers automatic braking and obstacle bypass for extra security.

If you’re spending $759 or more on a drone, you may not want to test the limits of its obstacle detection. That’s my job, though, so I had it follow me while I walked and biked among trees and other potential snags. I did have a few crashes, but here’s what I learned about avoiding them.

The sensors are visual, so they don’t work in dim light. Dense forest with fine branches is a no-go — the omnidirectional sensors can miss those, but the propellers won’t. Finally, the Mini 4 Pro detects obstacles best when traveling forward, less so when going sideways and worst of all when flying backward.

The system did work around well-spaced trees with thick branches and plenty of leaves, near buildings and generally around well-defined obstacles. The drone was able to maneuver around those, choose decent routes and reacquire subjects if they disappeared. That helped me capture some nice action footage, though you should always remain wary of accidents.

Steve Dent for Engadget

ActiveTrack 360 adds camera moves to the usual subject tracking to create dramatic shots. It looks confusing at first, but the idea is pretty simple: you use the so-called steering wheel to “draw” a route on concentric circles, and the drone will follow it, dodging any obstacles it encounters.

You can change parameters including the inner and outer radius, inner and outer height, camera speed and ground proximity. That makes it possible to get a wide variety of shots. The tricky part was figuring out which side the drone considered to be forward and backward — DJI should work on this to make things clearer.

If you plan carefully you can get some gorgeous, swooping ActiveTrack shots. The usual obstacle caveats apply, though, and it also adds complexity — because you have to figure out where the drone is going to be when you arrive at your end point. With all that, it’s best to practice in an open area before trying it in a complex environment.

Of course, the Mini 4 Pro still has DJI’s automatic modes aimed at social media users, like MasterShots, QuickShots and Panorama along with Hyperlapse. It even includes the Waypoint feature from the Mavic 3 Pro, which lets you pre-program complex drone moves and repeat them — a sophisticated feature for a small drone.

Steve Dent for Engadget

For the latter feature, you launch the drone, select the Waypoint function, fly to a spot of interest and set the camera angle you want. Once there, you tap “+” on the screen (or hit the C1 button on the RC2 remote) to program a waypoint. Repeat that process through all your points of interest, and once you’re done, you can play back the sequence. The drone will smoothly fly to each point the same way every time, so you can use it to show a scene during the day and then again at night, for instance.

The Mini 4 Pro uses the Ocusync 4 (O4) transmission first seen on the Air 3, which sends 1080/60p video up to 20 km, compared to 1080/30p over 12 km for Ocusync 3. Those distances are lower here in Europe because of transmitter power rules.

In use, it provides a noticeable improvement in connectivity. The change is obvious here in Europe, with smoother video, much greater range and dropouts now very rare — even when the drone goes behind obstacles. It should be even better in the US, where you could send the Mini 4 Pro on a pretty long trip.

Steve Dent for Engadget

The standard 2,590 mAh Intelligent Flight Battery has a bit more capacity than the Mini 3 Pro’s 2,453 mAh cell, but the rated flight time remains the same at 34 minutes. In real-world flying, I saw about 25 minutes before hearing the return-to-home warning, so plan accordingly.

If you have a Mini 3 Pro, its batteries appear to be compatible with the new drone — good news if you already own that model. In the US, you can also get the Plus batteries that provide up to 45 minutes of flight time, but they push the drone over 250 grams, so local rules block their use in Europe.

The RC2, first seen on the Air 3, is DJI’s third screen controller after the RC and the RC Pro. It’s significantly better than the RC, with a brighter screen, better feel and more precise controls. The other option is the screen-free RC-N2 (which requires a smartphone), similar to the old RC-N1 but with O4 compatibility.

Camera

Steve Dent for Engadget

The Mini 4 Pro’s camera has the same 1/1.3-inch dual native ISO sensor as its predecessor, behind an identical 24mm-equivalent lens with a fixed f/1.7 aperture. That’s a pretty sizable sensor for such a small drone, just a bit smaller than the 1-inch sensor on the Air 2S.

The difference is that it now supports 4K slow-mo at up to 100 fps, or 1080p at 200 fps. The slow-motion effect is baked in, with footage delivered at 30 fps, but it’s still a nice feature for wildlife, crashing waves and more. That’s on top of 4K at up to 60 fps and 1080p at 120 fps. There’s also a two-times digital zoom at 4K and four times at 1080p, with a slight loss in sharpness.

There’s also support for DJI’s D-LogM, which boosts dynamic range and gives you more flexibility in post. DJI provides a LUT that makes it easy to convert to regular video, though some editing is required for best results. You can also shoot in DJI’s HLG mode, which again boosts dynamic range: you can see the results right away on an HDR TV, but you’ll need an HLG-to-Rec.709 color-space transform to use the footage in a regular video project. Both modes support 10-bit 4:2:0 capture for improved fidelity and reduced banding.

Quality is about the same as a really good smartphone’s — but not on par with a mirrorless camera or DJI’s pro-level Mavic 3. Video is sharp with accurate colors. The automatic mode delivers nice results, though it sometimes over- or underexposes on sunny or dark days, and exposure compensation is the only setting you can change. Luckily, a fully manual pro mode is available for better control of white balance, LOG, HLG, shutter speed, ISO and more.

The Mini 4 Pro can shoot sharp 48-megapixel images or combine four pixels into one for 12-megapixel images with improved night sensitivity. You can easily fix over- or under-exposed photos if you use the RAW DNG format.

Low-light sensitivity is good but not great — better than, say, a GoPro Hero 12. Shooting at twilight, video was less clear than a similar scene shot with the Mavic 3 Pro. The drone also offers a “night” mode that effectively boosts dynamic range, making dimly lit scenes pop.

In all, image quality isn’t perfect, but remember that this is a $1,000 lightweight drone. It beats all other models in that category, and it’s better than many heavier drones, too.

Wrap-up

Steve Dent for Engadget

Once again, DJI’s Mini 4 Pro sets a benchmark for small drones. It has multiple new useful features, including updated obstacle detection, ActiveTrack 360, O4 transmission and Waypoints. All of those make it a solid budget choice for action sports, events, aerial photography, industrial applications and more.

Its main competition is the Autel Evo Nano Plus, currently on sale for $580. That model has a similar 50-megapixel 1/1.27-inch camera sensor, three-way obstacle avoidance, subject tracking, and more. However, it’s limited to 4K 30p and doesn’t offer a remote with a screen. If you have a bit more to spend, DJI’s Air 3 offers more stability and an extra tele camera.

All that said, the Mini 4 Pro isn’t cheap for a budget drone. It’s priced at $759 for the drone with a battery and RC-N2 controller, $959 with the RC2 controller and $1,099 for the Fly More kit with three batteries and a charger, the RC2, a carrying case and extra props. Still, if you’re in the market for a drone in that price range, nothing else can really touch it.

This article originally appeared on Engadget at https://www.engadget.com/dji-mini-4-pro-review-the-best-lightweight-drone-gains-more-power-and-smarts-130012755.html?src=rss 

The Morning After: Tinder’s $500 a month tier is now open to everyone who can afford it

Hey big spender. Tinder Select, the dating app’s most exclusive tier, is rolling out now. It will cost love seekers $500 per month (or $6,000 annually — no bulk discounts) for features like exclusive search and matching.

The company has only offered Tinder Select to the less than one percent of users it considers “extremely active” — does anyone want that label? Tinder told Bloomberg it’ll open applications for Tinder Select on a rolling basis, but it didn’t say exactly when. Tinder’s exclusive membership was originally hinted at all the way back in 2019.

Tinder’s owner, Match Group, has dabbled in exclusive dating apps before — like The League, which it bought in 2022 — so it’s not too much of a shock to see Tinder also get reframed for the lonely rich. Is this worse than paying for verification when you have less than 1,000 followers on other social media networks? Yes. Yes, it is.

— Mat Smith

The biggest stories you might have missed

What the Elon Musk biography revealed about his tumultuous Twitter takeover

Drop BMR1 PC speaker review: Not bad, but not amazing

The best October Amazon Prime Day early access deals for 2023

Hitting the Books: Beware the Tech Bro who comes bearing gifts

The Morning After: Microsoft’s bad week, and Alexa gets an attitude

Last week’s biggest news meets Engadget’s lens.

Engadget

Our short-but-sweet YouTube edition of this week’s news covers Microsoft’s rough, rough week, a sassier Alexa from Amazon and whether the iPhone 15 Pro is worth the extra bucks. Also: viewers take umbrage at my ‘fake’ glasses. Which are not fake.

Watch here.

Sony ZV-E1 camera review

The best vlogging camera, by a big margin.

Engadget

I’ve been waiting for this. Sony fully embraced amateur and semi-pro content creators back in 2020 with the launch of the ZV-1 camera. It has since added no fewer than four models to its ZV lineup, and this is the latest: the 12-megapixel full-frame ZV-E1. It uses the same sensor as the $3,500 A7S III, a video-focused camera and low-light marvel — but the ZV-E1 costs $1,300 less. While Sony has cut some minor corners, it combines outstanding video features and AI tricks, and I might have to start saving for one.

Check out the full review.

Samsung leaks its next family of smartphones, earbuds and tablets

Don’t get too excited. It’s the Fan Edition ones.

Samsung

Eagle-eyed visitors to Samsung’s Argentinian website — I visit it weekly — have spotted something a little unexpected: a product page for new Galaxy Buds FE earbuds, along with images of a Galaxy S23 FE smartphone and Galaxy Tab S9 FE tablet. Samsung’s Fan Edition devices have proven popular, packing in solid features for a more reasonable price than Samsung’s flagship models.

The company hasn’t let slip any specs for the phone and tablet yet. However, there are some details on the Galaxy Buds FE, Samsung’s first Fan Edition earbuds. They’re slated to have a single 12mm driver, three microphones in each earbud to bolster active noise cancellation and a three-way speaker.

Continue reading.

The best foldable phones for 2023

Are flip phones back?

Foldables have come a long way since the original Galaxy Fold went on sale back in 2019. They’re smaller, they’re tougher and, while they still aren’t a great option for people on a budget, they’re now more affordable too. (Kind of?) We walk through the crucial specs, durability concerns and our favorite picks.

Continue reading.

The Engadget Podcast

iPhone 15 Pro reviews, and Microsoft picks AI over Surface.

This week, Cherlynn chats about her experience reviewing the iPhone 15 Pro and Apple Watch Series 9. Does a 5X camera zoom make much of a difference? Meanwhile, Microsoft is basically consolidating all of the Copilot products it’s already announced for Edge, MS 365 and Windows, but maybe this will be less confusing in the long run?

Continue reading.

This article originally appeared on Engadget at https://www.engadget.com/the-morning-after-tinders-500-a-month-tier-is-now-open-to-everyone-who-can-afford-it-111517880.html?src=rss 

Amazon’s bet on Anthropic’s AI smarts could total more than $4 billion

Amazon is investing up to $4 billion in OpenAI rival Anthropic as a way to provide advanced deep learning and other services to its Amazon Web Services (AWS) customers, the company wrote in a press release. In return, AWS becomes Anthropic’s “primary cloud provider” for training and deploying its future foundation models. It’s the second large investment in the company, which was founded by former OpenAI executives, following Google’s $400 million partnership with the firm.

The e-commerce company will start with a $1.25 billion investment to gain a minority stake in Anthropic, with an option to boost that to a total of $4 billion. Along with Google and Amazon, Anthropic also counts Salesforce, Zoom, Spark Capital and others as backers. Notably, Anthropic’s deal with Google didn’t require it to buy cloud services from the search giant. 

Anthropic recently unveiled its first consumer-facing chatbot, Claude 2, accessible by subscription much like OpenAI’s ChatGPT. The Claude “Constitutional AI” system is guided by 10 “foundational” principles of fairness and autonomy, and is supposed to be harder to trick than other AIs. Anthropic is also working on a chatbot it calls “Claude-Next” that’s supposed to be ten times more powerful than any current AI, according to TechCrunch.

The startup touts itself as an advocate for responsible AI deployment, and recently formed an AI safety group with Google, Microsoft and OpenAI. It has been an AWS customer since 2021. “Claude excels at a wide range of tasks, from sophisticated dialogue and creative content generation to complex reasoning and detailed instruction, while maintaining a high degree of reliability and predictability,” according to Amazon.

Instead of training their own models, AWS customers will be able to use Anthropic’s AI models via Amazon Bedrock, a service designed specifically for AI development. AWS also offers its own AI applications and, with the new partnership, hopes to position itself as a key player in the field.

Microsoft-backed OpenAI is widely considered the leader in AI and chatbot tech, thanks to its ultra-popular ChatGPT chatbot and DALL-E image generation service. Use of AI in business continues to grow rapidly, despite concerns over the legality and ethics of AI-appropriated content — it was a major sticking point in the WGA writers’ strike, for example.

This article originally appeared on Engadget at https://www.engadget.com/amazons-bet-on-anthropics-ai-smarts-could-total-more-than-4-billion-095321462.html?src=rss 

NASA’s OSIRIS-REx successfully delivers asteroid samples back to Earth

NASA’s OSIRIS-REx seven-year mission to collect rocks and dust from a near-Earth asteroid is complete. The capsule containing the final samples returned to Earth on the morning of September 24th, touching down in the desert at the Department of Defense’s Utah Test and Training Range at 10:52 am ET.

The device collected around 250 grams of material from a carbon-rich asteroid dubbed “Bennu,” which NASA says hosts some of the oldest rocks in our solar system. The sample gives scientists more information about the building blocks of planets and what the solar system’s makeup looked like 4.5 billion years ago.

Because asteroids are considered natural “time capsules” — they change very little over time — they can offer researchers a window into the chemical composition of our early solar system and help determine whether Bennu carried the organic molecules found in life. Now that the samples are in the hands of NASA scientists, the agency says its researchers will catalog the collection and conduct in-depth analysis over the next two years.

NASA

NASA’s mission began all the way back in September 2016, launching from Cape Canaveral in Florida. The spacecraft took just over a year to perform its flyby of Earth before arriving at Bennu 15 months later, in December 2018. On October 20, 2020, the explorer successfully captured samples from Bennu, and it began its journey back to Earth on May 10, 2021. Upon its touchdown on September 24th, the Origins, Spectral Interpretation, Resource Identification, Security-Regolith Explorer (OSIRIS-REx’s full name) had journeyed 3.9 billion miles.

While NASA’s OSIRIS-REx is not the first space-agency mission to deliver an asteroid sample to Earth, it returned the largest sample yet: an estimated half a pound of rocky material from the asteroid’s surface. In a similar vein, the Japan Aerospace Exploration Agency’s (JAXA) Hayabusa mission delivered specks from an asteroid called Itokawa, and a follow-up mission brought back about 5 grams from another asteroid, Ryugu, in 2021. Japan’s agency shared 10 percent of its samples with NASA at the time, and NASA is expected to share a small percentage of its Bennu samples with JAXA.

While the sample capsule came down to Earth, NASA’s OSIRIS-REx spacecraft remained in space. It has now set off on a new mission to explore another near-Earth asteroid, Apophis, which NASA says is roughly 1,200 feet (370 meters) in diameter and will come within 20,000 miles of Earth in 2029.

The new project, dubbed OSIRIS-APophis EXplorer (OSIRIS-APEX), will study changes in the asteroid, which experts in 2004 believed had a 2.7 percent chance of hitting Earth. The spacecraft’s gas thrusters will attempt to “dislodge dust and small rocks on and below Apophis’ surface,” giving experts data on how the asteroid’s proximity to Earth affects its orbit, spin rate and surface composition.

This article originally appeared on Engadget at https://www.engadget.com/nasas-osiris-rex-successfully-delivers-asteroid-samples-back-to-earth-091107901.html?src=rss 

The Hollywood writers strike may soon end after tentative deal is struck

Following marathon negotiations over the last five days, the Writers Guild of America (WGA) and major studios have reached a tentative deal to end a 146-day strike that has shut down much of the industry, Variety has reported. “We can say, with great pride, that this deal is exceptional – with meaningful gains and protections for writers in every sector of the membership,” the WGA wrote in an email to members.

Picketing has been suspended as of Sunday night, but the strike remains in force until the deal is ratified and approved by members. “To be clear, no one is to return to work until specifically authorized to by the Guild. We are still on strike until then,” the email stated.

One of the last sticking points was reportedly around the use of generative AI in content production. Other details of the contract have yet to be released, including around streaming residuals, staffing levels for shows and more. “Though we are eager to share the details of what has been achieved with you, we cannot do that until the last ‘i’ is dotted,” wrote the WGA.

Things were looking bleak for the industry in mid-September, but some high-profile WGA members reportedly pressured leadership to restart negotiations. In addition, four key AMPTP executives (Disney’s Bob Iger, NBCUniversal’s Donna Langley, Netflix’s Ted Sarandos and Warner Bros. Discovery’s David Zaslav) participated in negotiations for three days. Bargaining resumed on September 20, and the deal was reached five days later.

Considering the strike length and WGA leadership’s high level of praise for the deal, a positive vote from membership seems probable. The guild credited membership’s solidarity and its willingness to “endure the pain and uncertainty of the past 146 days” as key to clinching the deal. “It is the leverage generated by your strike, in concert with the extraordinary support of our union siblings, that finally brought the companies back to the table to make a deal,” it stated in the message.

The labor strife isn’t finished yet, though. The SAG-AFTRA actors’ guild is still on strike after hitting picket lines on July 14 over issues like likeness rights. “While we look forward to reviewing the WGA and AMPTP’s tentative agreement, we remain committed to achieving the necessary terms for our members,” the union wrote in a statement.

Even after the actors reach their own deal, it will take time for TV series, films, talk shows and other productions to get back up to speed — so expect delays in your favorite shows coming back. The AMPTP has yet to comment on the WGA deal. 

This article originally appeared on Engadget at https://www.engadget.com/the-hollywood-writers-strike-may-soon-end-after-tentative-deal-is-struck-082357469.html?src=rss 

Hitting the Books: Beware the Tech Bro who comes bearing gifts

American entrepreneurs have long fixated on extracting the maximum economic value out of, well, really any resource they can get their hands on — from Henry Ford’s assembly line to Tony Hsieh’s Zappos Happiness Experience Form. The same is true in the public sector, where overambitious streamlining of Texas’ power grid contributed to the state’s massive 2021 winter power crisis, which killed more than 700 people. In her riveting new book, Optimal Illusions: The False Promise of Optimization, UC Berkeley applied mathematician and author Coco Krumme explores our historical fascination with optimization and how that pursuit has often led to unexpected and unwanted consequences for the systems we streamline.

In the excerpt below, Krumme examines the recent resurgence of interest in Universal Basic (or Guaranteed) Income and the contrasting approaches of tech evangelists like Sam Altman and Andrew Yang and social workers like Aisha Nyandoro, founder of the Magnolia Mother’s Trust, to the difficult questions of who should receive the financial support, and how much.

Riverhead Books

Excerpted from Optimal Illusions: The False Promise of Optimization by Coco Krumme. Published by Riverhead Books. Copyright © 2023 by Coco Krumme. All rights reserved.

False Gods

California, they say, is where the highway ends and dreams come home to roost. When they say these things, their eyes ignite: startup riches, infinity pools, the Hollywood hills. The last thing on their minds, of course, is the town of Stockton.

Drive east from San Francisco and, if traffic cooperates, you’ll be there in an hour and a half or two, over the long span of slate‑colored bay, past the hulking loaders at Oakland’s port, skirting rich suburbs and sweltering orchards and the government labs in Livermore, the military depot in Tracy, all the way to where brackish bay waters meet the San Joaquin River, where the east‑west highways connect with Interstate 5, in a tangled web of introductions that ultimately pitches you either north toward Seattle or south to LA.

Or you might decide to stay in Stockton, spend the night. There’s a slew of motels along the interstate: La Quinta, Days Inn, Motel 6. Breakfast at Denny’s or IHOP. Stockton once had its place in the limelight as a booming gold‑rush supply point. In 2012, the city filed for bankruptcy, the largest US city until then to do so (Detroit soon bested it in 2013). First light reveals a town that’s neither particularly rich nor desperately poor, hitched taut between cosmopolitan San Francisco on one side and the agricultural central valley on the other, in the middle, indistinct, suburban, and a little sad.

This isn’t how the story was supposed to go. Optimization was supposed to be the recipe for a more perfect society. When John Stuart Mill aimed for the greater good, when Allen Gilmer struck out to map new pockets of oil, when Stan Ulam harnessed a supercomputer to tally possibilities: it was in service of doing more, and better, with less. Greater efficiency was meant to be an equilibrating force. We weren’t supposed to have big winners and even bigger losers. We weren’t supposed to have a whole sprawl of suburbs stuck in the declining middle.

We saw how overwrought optimizations can suddenly fail, and the breakdown of optimization as the default way of seeing the world can come about equally fast. What we face now is a disconnect between the continued promises of efficiency, the idea that we can optimize into perpetuity, and the reality all around: the imperfect world, the overbooked schedules, the delayed flights, the institutions in decline. And we confront the question: How can we square what optimization promised with what it’s delivered?

Sam Altman has the answer. In his mid-thirties, with the wiry, frenetic look of a college student, he’s a young man with many answers. Sam’s biography reads like a leaderboard of Silicon Valley tropes and accolades: an entrepreneur, upper‑middle‑class upbringing, prep school, Stanford Computer Science student, Stanford Computer Science dropout, where dropping out is one of the Valley’s top status symbols. In 2015, Sam was named a Forbes magazine top investor under age thirty. (That anyone bothers to make a list of investors in their teens and twenties says as much about Silicon Valley as about the nominees. Tech thrives on stories of overnight riches and the mythos of the boy genius.)

Sam is the CEO and cofounder, along with electric‑car‑and‑rocket‑ship‑magnate Elon Musk, of OpenAI, a company whose mission is “to ensure that artificial general intelligence benefits all of humanity.” He is the former president of the Valley’s top startup incubator, Y Combinator, was interim CEO of Reddit, and is currently chairman of the board of two nuclear‑energy companies, Helion and Oklo. His latest venture, Worldcoin, aims to scan people’s eyeballs in exchange for cryptocurrency. As of 2022, the company had raised $125 million of funding from Silicon Valley investors.

But Sam doesn’t rest on, or even mention, his laurels. In conversation, he is smart, curious, and kind, and you can easily tell, through his veneer of demure agreeableness, that he’s driven as hell. By way of introduction to what he’s passionate about, Sam describes how he used a spreadsheet to determine the seven or so domains in which he could make the greatest impact, based on weighing factors such as his own skills and resources against the world’s needs. Sam readily admits he can’t read emotions well, treats most conversations as logic puzzles, and not only wants to save the world but believes the world’s salvation is well within reach.

A 2016 profile in The New Yorker sums up Sam like this: “His great weakness is his utter lack of interest in ineffective people.”

Sam has, however, taken an interest in Stockton, California.

Stockton is the site of one of the most publicized experiments in Universal Basic Income (UBI), a policy proposal that grants recipients a fixed stipend, with no qualifications and no strings attached. The promise of UBI is to give cash to those who need it most and to minimize the red tape and special interests that can muck up more complex redistribution schemes. On Sam’s spreadsheet of areas where he’d have impact, UBI made the cut, and he dedicated funding for a group of analysts to study its effects in six cities around the country. While he’s not directly involved in Stockton, he’s watching closely. The Stockton Economic Empowerment Demonstration was initially championed by another tech wunderkind, Facebook cofounder Chris Hughes. The project gave 125 families $500 per month for twenty‑four months. A slew of metrics was collected in order to establish a causal relationship between the money and better outcomes.

UBI is nothing new. The concept of a guaranteed stipend has been suggested by leaders from Napoleon to Martin Luther King Jr. The contemporary American conception of UBI, however, has been around just a handful of years, marrying a utilitarian notion of societal perfectibility with a modern‑day faith in technology and experimental economics.

Indeed, economists were among the first to suggest the idea of a fixed stipend, first in the context of the developing world and now in America. Esther Duflo, a creative star in the field and Nobel Prize winner, is known for her experiments with microloans in poorer nations. She’s also unromantic about her discipline, embracing the concept of “economist as plumber.” Duflo argues that the purpose of economics is not grand theories so much as on‑the‑ground empiricism. Following her lead, the contemporary argument for UBI owes less to a framework of virtue and charity and much more to the cold language of an econ textbook. Its benefits are described in terms of optimizing resources, reducing inequality, and thereby maximizing societal payoff.

The UBI experiments under way in several cities, a handful of them funded by Sam’s organization, have data‑collection methods primed for a top‑tier academic publication. Like any good empiricist, Sam spells out his own research questions to me, and the data he’s collecting to test and analyze those hypotheses.

Several thousand miles from Sam’s Bay Area office, a different kind of program is in the works. When we speak by phone, Aisha Nyandoro bucks a little at my naive characterization of her work as UBI. “We don’t call it universal basic income,” she says. “We call it guaranteed income. It’s targeted. Invested intentionally in those discriminated against.” Aisha is the powerhouse founder of the Magnolia Mother’s Trust, a program that gives a monthly stipend to single Black mothers in Jackson, Mississippi. The project grew out of her seeing the welfare system fail miserably for the very people it purported to help. “The social safety net is designed to keep families from rising up. Keep them teetering on edge. It’s punitive paternalism. The ‘safety net’ that strangles.”

Bureaucracy is dehumanizing, Aisha says, because it asks a person to “prove you’re enough” to receive even the most basic of assistance. Magnolia Mother’s Trust is unique in that it is targeted at a specific population. Aisha reels off facts. The majority of low‑income women in Jackson are also mothers. In the state of Mississippi, one in four children live in poverty, and women of color earn 61 percent of what white men make. Those inequalities affect the community as a whole. In 2021, the trust gave $1,000 per month to one hundred women. While she’s happy her program is gaining exposure as more people pay attention to UBI, Aisha doesn’t mince words. “I have to be very explicit in naming race as an issue,” she says.

Aisha’s goal is to grow the program and provide cash, without qualifications, to more mothers in Jackson. Magnolia Mother’s Trust was started around the same time as the Stockton project, and the nomenclature of guaranteed income has gained traction. One mother in the program writes in an article in Ms. magazine, “Now everyone is talking about guaranteed income, and it started here in Jackson.” Whether or not it all traces back to Jackson, whether the money is guaranteed and targeted or more broadly distributed, what’s undeniable is that everyone seems to be talking about UBI.

Influential figures, primarily in tech and politics, have piled on to the idea. Jack Dorsey, the billionaire founder of Twitter, with his droopy meditation eyes and guru beard, wants in. In 2020, he donated $15 million to experimental efforts in thirty US cities.

And perhaps the loudest bullhorn for the idea has been wielded by Andrew Yang, another product of Silicon Valley and a 2020 US presidential candidate. Yang is an earnest guy, unabashedly dorky. Numbers drive his straight‑talking policy. Blue baseball caps for his campaign are emblazoned with one short word: MATH.

UBI’s proponents see the potential to simplify the currently convoluted American welfare system, to equilibrate an uneven playing field. By decoupling basic income from employment, it could free some people up to pursue work that is meaningful.

And yet the concept, despite its many proponents, has managed to draw ire from both ends of the political spectrum. Critics on the right see UBI as an extension of the welfare state, as further interference into free markets. Left‑leaning critics bemoan its “inefficient” distribution of resources: Why should high earners get as much as those below the poverty line? Why should struggling individuals get only just enough to keep them, and the capitalist system, afloat?

Detractors on both left and right default to the same language in their critiques: that of efficiency and maximizing resources. Indeed, the language of UBI’s critics is all too similar to the language of its proponents, with its randomized control trials and its view of society as a closed economic system. In the face of a disconnect between what optimization promised and what it delivered, the proposed solution involves more optimizing.

Why is this? What if we were to evaluate something like UBI outside the language of efficiency? We might ask a few questions differently. What if we relaxed the suggestion that dollars can be transformed by some or another equation into individual or societal utility? What if we went further than that and relaxed the suggestion of measuring at all, as a means of determining the “best” policy? What if we put down our calculators for a moment and let go of the idea that politics is meant to engineer an optimal society in the first place? Would total anarchy ensue?

Such questions are difficult to ask because they don’t sound like they’re getting us anywhere. It’s much easier, and more common, to tackle the problem head‑on. Electric‑vehicle networks such as Tesla’s, billed as an alternative to the centralized oil economy, seek to optimize where charging stations are placed, how batteries are created, how software updates are sent out — and by extension, how environmental outcomes take shape. Vitamins fill the place of nutrients leached out of foods by agriculture’s maximization of yields; these vitamins promise to optimize health. Vertical urban farming also purports to solve the problems of industrial agriculture, by introducing new optimizations in how light and fertilizers are delivered to greenhouse plants, run on technology platforms developed by giants such as SAP. A breathless Forbes article explains that the result of hydroponics is that “more people can be fed, less precious natural resources are used, and the produce is healthier and more flavorful.” The article nods only briefly to downsides, such as high energy, labor, and transportation costs. It doesn’t mention that many grains don’t lend themselves easily to indoor farming, nor the limitations of synthetic fertilizers in place of natural regeneration of soil.

In working to counteract the shortcomings of optimization, have we only embedded ourselves deeper? For all the talk of decentralized digital currencies and local‑maker economies, are we in fact more connected and centralized than ever? And less free, insofar as we’re tied into platforms such as Amazon and Airbnb and Etsy? Does our lack of freedom run deeper still, by dint of the fact that fewer and fewer of us know exactly what the algorithms driving these technologies do, as more and more of us depend on them? Do these attempts to deoptimize in fact entrench the idea of optimization further?

A 1952 novel by Kurt Vonnegut highlights the temptation, and also the threat, of de-optimizing. Player Piano describes a mechanized society in which the need for human labor has mostly been eliminated. The remaining workers are those engineers and managers whose purpose is to keep the machines online. The core drama takes place at a factory hub called Ilium Works, where “Efficiency, Economy, and Quality” reign supreme. The book is prescient in anticipating some of our current angst — and powerlessness — about optimization’s reach.

Paul Proteus is the thirty‑five‑year‑old factory manager of the Ilium Works. His father served in the same capacity, and like him, Paul is one day expected to take over as leader of the National Manufacturing Council. Each role at Ilium is identified by a number, such as R‑127 or EC‑002. Paul’s job is to oversee the machines.

At the time of the book’s publication, Vonnegut was a young author disillusioned by his experiences in World War II and disheartened as an engineering manager at General Electric. Ilium Works is a not‑so‑thinly‑veiled version of GE. As the novel wears on, Paul tries to free himself, to protest that “the main business of humanity is to do a good job of being human beings . . . not to serve as appendages to machines, institutions, and systems.” He seeks out the elusive Ghost Shirt Society with its conspiracies to break automation, he attempts to restore an old homestead with his wife. He tries, in other words, to organize a way out of the mechanized world.

His attempts prove to be in vain. Paul fails and ends up mired in dissatisfaction. The machines take over, riots ensue, everything is destroyed. And yet, humans’ love of mechanization runs deep: once the machines are destroyed, the janitors and technicians — a class on the fringes of society — quickly scramble to build things up again. Player Piano depicts the outcome of optimization as societal collapse and the collapse of meaning, followed by the flimsy rebuilding of the automated world we know.

This article originally appeared on Engadget at https://www.engadget.com/hitting-the-books-optimal-illusions-coco-krumme-riverhead-books-143012184.html?src=rss 

How to use StandBy mode on your lock screen in iOS 17

Now that iOS 17 is out in the wild, consumers are getting hands-on time with many just-released iPhone features. One of the neater inclusions is the brand-new StandBy mode. This toolset transforms your lock screen into a hub of useful widgets, like alarm clocks, picture frames and more.

What is StandBy?

StandBy is a new feature that shipped with iOS 17. It lets you change up your lock screen to access a number of widgets. This can be highly useful when the phone’s tethered to a charging dock, or when you just want a quick glance at something without having to unlock your sparkly iPhone. There are a number of available widgets for this mode, including alarm clocks, picture frames, Siri, windows for incoming calls and large notification boxes. Third-party apps have been quick to offer support for StandBy, so expect a host of new options soon.

How to use StandBy

Getting started with StandBy is extremely simple. Connect your iPhone to a charger and set it down on its side, as the widgets are designed to take advantage of this orientation. Keep the phone stationary and press the side button to activate StandBy. Once activated, swipe left and right to switch between the various widgets, photos, clocks and other display options. Once you choose your favorite, swipe up or down to access adjustment options. For instance, swiping up when the alarm clock is on screen will change its design.


If your phone has an always-on display, your StandBy widgets will run without interruption. For older phones, you’ll have to tap the screen when you want to see what’s going on. The iPhone 14 Pro, iPhone 14 Pro Max, iPhone 15 Pro and iPhone 15 Pro Max all boast an always-on screen. If you’re worried about the bright screen interrupting your sleep, just turn on Night Mode and the display will automatically adjust to low ambient light, covering everything in a non-intrusive red tint.

How to turn off StandBy

Done staring lovingly at an alarm clock? Turn StandBy off by heading to Settings, finding the StandBy entry and toggling it off, just as you would Bluetooth or Wi-Fi.

How to customize available widgets

The default widget when you first launch StandBy is the alarm clock, and there are several more first-party options available by swiping left and right. However, there’s a simple way to customize the available widgets, allowing you to delete some from the stack and add others.


To start this process, just long press on any widget while StandBy mode is activated. Once the phone unlocks via Face ID, you’ll see the entire stack of widgets in the center of the screen in a jiggle mode reminiscent of deleting apps. Look for the “+” icon in the top left of the screen to add widgets. Each widget also has a “-” attached to its thumbnail icon; tap that to delete the widget from your stack.

This article originally appeared on Engadget at https://www.engadget.com/how-to-use-standby-mode-on-your-lock-screen-in-ios-17-130031058.html?src=rss 

How to use NameDrop in iOS 17

If you want to easily share contact information with someone, Apple’s NameDrop is an efficient tool. With the recent launch of iOS 17, however, some consumers worry that the new tool has a steep learning curve. That’s not true at all, as it’s quite simple to get started with NameDrop. Here’s our guide on how to share contact information like a true boss.

What is NameDrop?

NameDrop is a feature that comes with iOS 17. It allows you to instantaneously send contact information to other people just by placing your iPhone near their iPhone. This is similar to the pre-existing Tap to Share toolset, but with a specialized emphasis on contact information. Plenty of third-party apps do this sort of thing, but this is Apple’s first-party solution.

How to use NameDrop to share contact information

NameDrop is extremely simple. Just hold the top of your iPhone near the top of someone else’s iPhone. That’s it. You’ll see a faint glow emerge from the top of both devices to indicate a successful connection, and NameDrop will appear on both screens.


Once connected, you’ll be able to adjust exactly what contact information gets shared between the two devices. You can receive the other person’s information, send your information or do both at once. If you want to cancel, just move the phone away before the system finishes its dark magic.

This only works for new contacts, though, and cannot be used to update pre-existing contact information. You can get around this limitation by deleting the contact before going in for the NameDrop.

How to use contacts on iPhone to share information

NameDrop requires that both phones are updated to iOS 17, and that’s not always a realistic possibility. You have another choice for sending out contact information. Just head into the Contacts app and select Share Contact. Select the specific data you want to share and tap Done. Finally, select the delivery method. You can choose between Messages, Mail and several other options. This isn’t as easy as moving one phone close to another phone, but it should still take just a few seconds.

This article originally appeared on Engadget at https://www.engadget.com/how-to-use-namedrop-in-ios-17-130020397.html?src=rss 

An NYPD security robot will be patrolling the Times Square subway station

The New York Police Department (NYPD) is implementing a new security measure at the Times Square subway station. It’s deploying a security robot to patrol the premises, which authorities say is meant to “keep you safe.” We’re not talking about a RoboCop-like machine or any human-like biped robot — the K5, which was made by California-based company Knightscope, looks like a massive version of R2-D2. Albert Fox Cahn, the executive director of privacy rights group Surveillance Technology Oversight Project, has a less flattering description for it, though, and told The New York Times that it’s like a “trash can on wheels.”

K5 weighs 420 pounds and is equipped with four cameras that can record video but not audio. The machine also doesn’t come with arms, so it couldn’t reciprocate Mayor Eric Adams’ attempt at making a heart with it. The robot will patrol the station from midnight until 6AM throughout its two-month trial run. But K5 won’t be doing full patrols for a while, since it’s spending its first two weeks mapping out the station and roaming only the main areas, not the platforms.

It’s not quite clear if NYPD’s machine will be livestreaming its camera footage, and if law enforcement will be keeping an eye on what it captures. Adams said during the event introducing the robot that it will “record video that can be reviewed in case of an emergency or a crime.” It apparently won’t be using facial recognition, though Cahn is concerned that the technology could eventually be incorporated into the machine. Obviously, K5 doesn’t have the capability to respond to actual emergencies in the station and can’t physically or verbally apprehend suspects. The only real-time help it can provide people is to connect them to a live person to report an incident or to ask questions, provided they’re able to press a button on the robot. 

NYC is leasing K5 for around $9 an hour for the next two months. The mayor sounds convinced the robot is worth the cost even though, as The Times notes, he recently ordered several agencies to reduce spending by 15 percent. “This is below minimum wage,” he said. “No bathroom breaks, no meal breaks.” Adams has a history of supporting the use of machines as police tools. Earlier this year, the mayor also announced that the NYPD will acquire two Digidog robots for $750,000 for use in hostage and other critical situations. That’s quite a reversal from the NYPD’s decision in 2021 to cancel its lease on what was then known as Boston Dynamics’ Spot after facing backlash over its use.
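As a rough sanity check on those figures — assuming the $9-per-hour lease is billed around the clock and that “two months” means about 61 days, neither of which the article states — the total comes out to a bit over $13,000, and the hourly rate does indeed sit below NYC’s $15 minimum wage:

```python
# Back-of-envelope cost of the K5 lease described above.
# Assumptions (not stated in the article): billing runs around the clock,
# "two months" is roughly 61 days, and NYC minimum wage is $15/hour.
HOURLY_RATE = 9       # dollars per hour, per the article
NYC_MIN_WAGE = 15     # dollars per hour, assumed for comparison
DAYS = 61             # assumed length of "two months"

total_cost = HOURLY_RATE * 24 * DAYS
print(f"Estimated two-month lease cost: ${total_cost:,}")
print(f"Below minimum wage: {HOURLY_RATE < NYC_MIN_WAGE}")
```

Even under the most generous continuous-billing assumption, the lease is cheap by staffing standards — which is presumably the mayor’s point.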

This article originally appeared on Engadget at https://www.engadget.com/an-nypd-security-robot-will-be-patrolling-the-times-square-subway-station-130029937.html?src=rss 

iPhone 15 stuck on the Apple logo during setup? Here’s how to fix it

If you’re setting up a new iPhone 15 today, you might run into some problems. As first reported by 9to5Mac, the new models (including standard and pro variants) can get stuck in a boot loop where they may freeze on the Apple logo when transferring apps and data to the new model. Although Apple says the setup process should prompt you to install iOS 17.0.2, which fixes the problem, some users (including one Engadget staff member) have reported that it failed to do that. Here’s what to do.

First, if your iPhone 15 setup prompts you to install iOS 17.0.2 before reaching the data-transfer step, you’re good to go: That means Apple’s hotfix worked as planned, and you don’t need to worry about any special instructions. Accept the update, wait for it to install and complete the process. But you’ll need to hop on a computer if it doesn’t prompt you to update.

Computer workaround

Start by plugging your iPhone into a Mac or Windows PC using its supplied (or any compatible) USB-C cable. Then, put the phone in recovery mode using the following button combinations: While it’s still plugged in, quickly press the iPhone’s volume up button, then the volume down button. Immediately after, press and hold the phone’s side (power / sleep) button until your handset displays the recovery mode image of a computer and cable. (If you don’t see it, try the button combinations again without pausing.)


Next, Mac users can open Finder and select their iPhone from the sidebar. Windows users will need to open iTunes. (If you don’t already have it, you can download it from here.)

After opening Finder (Mac) or iTunes (Windows), it will ask if you want to restore or update your phone. Choose “Restore,” and it will install the new software. (Apple notes that if your iPhone restarts while your Mac or PC downloads the update, you’ll need to wait for the download to complete before repeating the recovery mode button combination described above.)

After your Mac or PC completes the software restore, you should be able to unplug your iPhone and follow the prompts on its screen to set it up and transfer your data as usual.

Workaround without a computer

If you’re on the go or otherwise don’t have access to a computer, there’s an alternate method that may take a little longer. After powering up the phone, select the option to set it up as a new iPhone instead of transferring apps and data from your old model or iCloud. Then, after it takes you to a clean Home Screen for the first time, navigate to Settings > General > Software Update, and install the iOS 17.0.2 update.

After the update completes, head to Settings > General > Transfer or Reset iPhone, and choose “Erase All Content and Settings” at the bottom of the screen. After it completes the factory reset, the setup process should allow you to transfer your existing content from iCloud or your old handset.

Once you’ve set up your new phone, you can check out Engadget’s iPhone 15 Pro / Pro Max review and iOS 17 preview to brush up on all your new features.

This article originally appeared on Engadget at https://www.engadget.com/iphone-15-stuck-on-the-apple-logo-during-setup-heres-how-to-fix-it-210049112.html?src=rss 
