LinkedIn wants you to apply for fewer jobs

If you know anyone in the job market right now, then you’ve probably heard stories about just how tough it can be to even land an interview. Part of the problem, according to LinkedIn, is that too many people are applying for jobs they aren’t actually qualified for, which makes it harder for good candidates to stand out.

The company is hoping its new AI-powered “Job Match” feature can help address some of that disconnect. The feature, which is beginning to roll out today, uses AI to provide detailed summaries alongside job listings that let users know how qualified they actually are for a given role.

LinkedIn product manager Rohan Rajiv says that the AI-powered feature goes beyond the kind of simple keyword matching that job hunters may already rely on. Instead, it attempts to understand the breadth of your experience and how it aligns with the qualifications outlined in the job description.

The goal, Rajiv tells Engadget, is to help surface the jobs a person is most qualified for and discourage people from applying to roles they aren’t. “When you’re qualified, we’ll be able to help you, but also, when you’re not qualified, we can hopefully find you other places where you are qualified,” he says.

While “Job Match” will be available to all LinkedIn users, there are some added benefits for subscribers to LinkedIn Premium, including more granular information about their job match level. Eventually, Rajiv says, LinkedIn will also be able to surface more qualified applicants on the recruiter side as well, to make it less likely for good candidates to be overlooked.

Whether any of this will actually ease the pain of would-be job seekers is less clear. The tech industry lost tens of thousands of jobs to layoffs in 2024. So did the video game industry. Media and entertainment hasn’t fared much better, either.

All that would seemingly create even more competition for the same job openings — a dynamic AI seems ill-equipped to fully address. “I think there’s a portion of this that will always be labor market dynamics, but I would argue that there’s a significant portion of this that is just pure lack of transparency,” Rajiv says. He notes that early tests of the feature have suggested that a “non-trivial chunk” of the problem is “more solvable than we think.”

For their part, recruiters seem to endorse LinkedIn’s latest advice about applying for fewer jobs. The company’s blog post features testimonials from recruiters practically begging unqualified applicants to stop flooding their inboxes.

This article originally appeared on Engadget at https://www.engadget.com/social-media/linkedin-wants-you-to-apply-for-fewer-jobs-140049139.html?src=rss 

NVIDIA’s AI NPCs are a nightmare

The rise of AI NPCs has felt like a looming threat for years, as if developers couldn’t wait to dump human writers and offload NPC conversations to generative AI models. At CES 2025, NVIDIA made it plainly clear the technology was right around the corner. PUBG developer Krafton, for instance, plans to use NVIDIA’s ACE (Avatar Cloud Engine) to power AI companions, which will assist and banter with you during matches. Krafton isn’t just stopping there — it’s also using ACE in its life simulation title InZOI to make characters smarter and generate objects.

While the use of generative AI in games seems almost inevitable, as the medium has always toyed with new methods for making enemies and NPCs seem smarter and more realistic, seeing several NVIDIA ACE demos back-to-back made me genuinely sick to my stomach. This wasn’t just slightly smarter enemy AI — ACE can craft entire conversations out of thin air, simulate voices and try to give NPCs a sense of personality. It’s also doing that work locally on your PC, powered by NVIDIA’s RTX GPUs. But while all of that might sound cool on paper, I hated almost every second I saw the AI NPCs in action.

TiGames’ ZooPunk is a prime example: It relies on NVIDIA ACE to generate dialog, a virtual voice and lip syncing for an NPC named Buck. But as you can see in the video above, Buck sounds like a stilted robot with a slight country accent. If he’s supposed to have some sort of relationship with the main character, you couldn’t tell from the performance.

I think my visceral aversion to NVIDIA’s ACE-powered AI comes down to this: There’s simply nothing compelling about it. No joy, no warmth, no humanity. Every ACE AI character feels like a developer cutting corners in the worst way possible, as if you’re seeing their contempt for the audience manifested as a boring NPC. I’d much rather scroll through some on-screen text; at least then I wouldn’t have to have conversations with uncanny robot voices.

During NVIDIA’s Editor’s Day at CES, a gathering for media to learn more about the new RTX 5000-series GPUs and their related technology, I was also underwhelmed by a demo of PUBG’s AI Ally. Its responses were akin to what you’d hear from a pre-recorded phone tree. The Ally also failed to find a gun when the player asked, which could have been a deadly mistake in a crowded map. At one point, the PUBG companion also spent around 15 seconds attacking enemies while the demo player was shouting for it to get into a car. What good is an AI helper if it plays like a noob?

Poke around NVIDIA’s YouTube channel and you’ll find other disappointing ACE examples, like the basic speaking animations in the MMO World of Jade Dynasty (above) and Alien: Rogue Incursion. I’m sure many devs would love to skip the chore of developing decent lip syncing technology, or adopting someone else’s, but for these games leaning on AI just looks awful.

To be clear, I don’t think NVIDIA’s AI efforts are all pointless. I’ve loved seeing DLSS get steadily better over the years, and I’m intrigued to see how DLSS 4’s multi-frame generation could improve 4K and ray-tracing performance for demanding games. The company’s neural shader technology also seems compelling, in particular its ability to apply a realistic sheen to material like silk, or evoke the slight transparency you’d see from skin. These aren’t enormous visual leaps, to be clear, but they could help deliver a better sense of immersion.

Now I’m sure some AI boosters will say that the technology will get better from here, and at some undefinable point in the future, it could approach the quality of human ingenuity. Maybe. But I’m personally tired of being sold on AI fantasies, when we know the key to great writing and performances is to give human talent the time and resources to refine their craft. And on a certain level, I think I’ll always feel like the director Hayao Miyazaki, who described an early example of an AI CG creature as, “an affront to life itself.”

AI, like any new technology, is a tool that could be deployed in many ways. For things like graphics and gameplay (like the intelligent enemies in F.E.A.R. and The Last of Us), it makes sense. But when it comes to communicating with NPCs, writing their dialog and crafting their performances, I’ve grown to appreciate human effort more than anything else. Replacing that with lifeless AI doesn’t seem like a step forward in any way.

This article originally appeared on Engadget at https://www.engadget.com/gaming/pc/nvidias-ai-npcs-are-a-nightmare-140313701.html?src=rss 

DJI will no longer block US users from flying drones in restricted areas

DJI has lifted the geofencing that prevented users in the US from flying over restricted areas like nuclear power plants, airports and wildfires, the company wrote in a blog post on Monday. As of January 13th, areas previously called “restricted zones” or no-fly zones will be shown as “enhanced warning zones” that correspond to designated Federal Aviation Administration (FAA) areas. DJI’s Fly app will display a warning about those areas but will no longer stop users from flying inside them, the company said.

In the blog post, DJI wrote that the “in-app alerts will notify operators flying near FAA designated controlled airspace, placing control back in the hands of the drone operators, in line with regulatory principles of the operator bearing final responsibility.” Technologies like Remote ID, which was introduced after DJI implemented geofencing, give authorities “the tools needed to enforce existing rules,” DJI’s global policy chief Adam Welsh told The Verge.

Still, the update is an odd one, given that DJI is already on shaky ground in the US and could be banned from selling its products stateside as early as next year. DJI’s former head of policy, Brendan Schulman, criticized the move in a series of posts on Twitter. “There was substantial evidence over the years that automatic drone geofencing, implemented using a risk-based approach, contributed significantly to aviation safety,” he wrote.

This is a remarkable shift in drone safety strategy with a potentially enormous impact, especially among drone pilots who are less aware of airspace restrictions and high-risk areas. https://t.co/YJOpe2gcZe

— Brendan Schulman (@dronelaws) January 14, 2025

There’s also an issue with drones weighing less than 250 grams. Those models were previously geofenced via GEO to prevent inadvertent flight into restricted locations. The update removes that geofencing, and Remote ID can be switched off on those lightweight drones.

In fact, that’s exactly what happened last week, when a sub-250-gram DJI model damaged the wing of a Canadair Super Scooper airplane fighting the Los Angeles wildfires, putting it temporarily out of commission. That drone may not have been transmitting a Remote ID signal, so the FBI said it will need to use “investigative means” instead to find the pilot.

DJI first implemented the geofence (called GEO) around airports in 2013, and added new zones in 2015 and 2016 after a drone crash-landed on the White House lawn. It did this voluntarily, as the FAA only requires that operators be warned about restricted areas where flying is banned. Now, though, the onus will be entirely on the operator to keep out of no-fly zones.

“DJI reminds pilots to always ensure flights are conducted safely and in accordance with all local laws and regulations. For flights conducted in Enhanced Warning Zones, drone operators must obtain airspace authorization directly from the FAA and consult the FAA’s No Drone Zone resource for further information,” it wrote. 

This article originally appeared on Engadget at https://www.engadget.com/cameras/dji-will-no-longer-block-us-users-from-flying-drones-in-restricted-areas-130051778.html?src=rss 

North Korea stole $659 million in crypto assets last year, the US says

The United States, Japan and South Korea have issued a warning against North Korean threat actors, who are actively and aggressively targeting the cryptocurrency industry. In their joint advisory, the countries said threat actor groups affiliated with the Democratic People’s Republic of Korea (DPRK) continue to stage numerous cybercrime campaigns to steal cryptocurrency. Those bad actors — including the Lazarus hacking group, which the US believes has been deploying cyber attacks all over the world since 2009 — target “exchanges, digital asset custodians and individual users.” And apparently, they stole $659 million in crypto assets in 2024 alone. 

North Korean hackers have been using “well-disguised social engineering attacks” to infiltrate their targets’ systems, the countries said. They also warned that the actors could get access to systems owned by the private sector by posing as freelance IT workers. Back in 2022, the US issued guidelines on how to identify potential workers from North Korea, such as how they’d typically log in from multiple IP addresses, transfer money to accounts based in the People’s Republic of China, ask for crypto payments, have inconsistencies with their background information and be unreachable at times during their supposed business hours. 

Once the bad actors are in, they then usually deploy malware, such as keyloggers and remote access tools, to be able to steal login credentials and, ultimately, virtual currency they can control and sell. As for where the stolen funds go: The UN issued a report in 2022, revealing its investigators’ discovery that North Korea uses money stolen by affiliated threat actors for its missile programs. “Our three governments strive together to prevent thefts, including from private industry, by the DPRK and to recover stolen funds with the ultimate goal of denying the DPRK illicit revenue for its unlawful weapons of mass destruction and ballistic missile programs,” the US, Japan and South Korea said.

This article originally appeared on Engadget at https://www.engadget.com/cybersecurity/north-korea-stole-659-million-in-crypto-assets-last-year-the-us-says-133029741.html?src=rss 

SEC lawsuit claims Musk gained over $150 million by delaying Twitter stake disclosure

After a more than two-year investigation, the Securities and Exchange Commission has sued Elon Musk over his delayed disclosure of the Twitter stock he amassed before announcing his intention to acquire the company in 2022.

In a court filing, the SEC says that Musk filed paperwork with the SEC disclosing his purchase of Twitter shares 11 days after an SEC-mandated deadline to do so. This, according to the regulator, allowed him to buy up even more Twitter stock at a time when other investors were unaware of his involvement with the company.

From the lawsuit:

During the period that Musk was required to publicly disclose his beneficial ownership but had failed to do so, he spent more than $500 million purchasing additional shares of Twitter common stock. Because Musk failed to timely disclose his beneficial ownership, he was able to make these purchases from the unsuspecting public at artificially low prices, which did not yet reflect the undisclosed material information of Musk’s beneficial ownership of more than five percent of Twitter common stock and investment purpose. In total, Musk underpaid Twitter investors by more than $150 million for his purchases of Twitter common stock during this period. Investors who sold Twitter common stock during this period did so at artificially low prices and thus suffered substantial economic harm.

The regulator has been investigating Musk for years, and has long been at odds with the owner of X. At one point, the SEC accused Musk of attempting to stall and use “gamesmanship” to delay its investigation into his investment in Twitter. Last month, Musk shared a copy of a letter addressed to SEC Chair Gary Gensler in which Musk’s lawyer accused the regulator of “six years of harassment” targeting Musk. The letter indicated that Musk refused a settlement offer from the SEC related to its Twitter investigation.

X didn’t immediately respond to a request for comment.

Developing…

This article originally appeared on Engadget at https://www.engadget.com/big-tech/sec-lawsuit-claims-musk-gained-over-150-million-by-delaying-twitter-stake-disclosure-002627091.html?src=rss 

Sonos’ chief product officer is also leaving the company

Sonos is continuing to clean house as the company recovers from the hits it took following a disastrous mobile app redesign last year. Just a day after CEO Patrick Spence departed the company, chief product officer Maxime Bouvat-Merlin is also leaving. He will act as an advisor to interim CEO Tom Conrad during the leadership transition before fully exiting Sonos.

Conrad informed Sonos employees about the latest leadership change in a company-wide email today. The CPO role is being made redundant, with Sonos’ product team reporting directly to Conrad for the time being.

Sonos has been in a tailspin since releasing a mobile app update in May that contained many bugs and was missing key features. The company’s financial results took a dive, and it laid off about 100 employees in August. Sonos has made several efforts to keep customers aware of its plans to recover from the app launch, and the decision to replace top leadership seems like the latest move to win back public trust in the business.

This article originally appeared on Engadget at https://www.engadget.com/audio/speakers/sonos-chief-product-officer-is-also-leaving-the-company-223256031.html?src=rss 

Pixelfed, Instagram’s decentralized competitor, is now on iOS and Android

Pixelfed is now available as a mobile app for both iOS and Android. The open source, decentralized platform offers image sharing similar to Instagram. However, Pixelfed has no advertisements and does not share user data with third parties. The platform launched in 2018, but was only available on the web or through third-party app clients. The Android app debuted on January 9, and the iOS app was released today.

Creator Daniel Supernault posted on Mastodon Monday evening that the platform had 11,000 users join over the preceding 24 hours and that more than 78,000 posts have been shared to Pixelfed to date. The platform runs on ActivityPub, the same protocol that powers several other decentralized social networks in the fediverse, such as Mastodon and Flipboard.

Many Instagram users have been seeking out alternatives to the Meta-owned platform after the company said it would eliminate third-party fact checking and revised its “Hateful Content” policy to allow denigrating comments against women and trans people, among other changes. Meta also blocked some links to Pixelfed on Facebook, treating them as spam and deleting those posts. A representative from the company said this was an error and that the posts would be reinstated.

This article originally appeared on Engadget at https://www.engadget.com/social-media/pixelfed-instagrams-decentralized-competitor-is-now-on-ios-and-android-205059236.html?src=rss 

The new Witcher animated film finally has a legit trailer

We’ve been hearing about the latest animated movie based on The Witcher franchise for a while now, but we’ve only ever gotten a short teaser and an equally short clip. Now, finally, there’s a legit full-fledged trailer. This is opportune timing, as The Witcher: Sirens of the Deep hits Netflix on February 11.

The big hook here? Geralt is voiced by Doug Cockle, reprising his role from the video games. Anya Chalotra and Joey Batey (Yennefer of Vengerberg and Jaskier in the live action show) are also reprising their roles. It’s set in the universe of the TV show, around events that occurred during the first season, but is based on a short story by franchise creator Andrzej Sapkowski.

The original story, called “A Little Sacrifice,” involves Geralt investigating a series of attacks in a seaside village, leading to a conflict between humans and merpeople. It’s generally considered one of the better short stories in the canon. There’s an underwater city, which is always a good time.

The movie is directed by Kang Hei Chul. He was a storyboard artist for The Witcher: Nightmare of the Wolf, which was a prequel that followed Geralt’s mentor Vesemir. Studio Mir is on animation duties, the same South Korean studio that worked on Nightmare of the Wolf.

This company has an absolutely amazing pedigree. It animated the hit cartoon X-Men ‘97, but also stuff like The Legend of Korra, Kipo and the Age of Wonderbeasts, My Adventures with Superman and Voltron: Legendary Defender, among many others. The studio is currently finishing up the upcoming Devil May Cry anime, also for Netflix.

A fourth game in The Witcher franchise is coming, but this one stars Ciri instead of Geralt. The fifth and final season of The Witcher TV show is expected to premiere on Netflix in 2026.

This article originally appeared on Engadget at https://www.engadget.com/entertainment/streaming/the-new-witcher-animated-film-finally-has-a-legit-trailer-195457579.html?src=rss 

DoJ remotely cleaned thousands of computers infected with Chinese malware

The Department of Justice and the FBI shared today that they have completed a project to remove malware used by Chinese hackers from computers in the US. The effort was essentially a court-approved counter-hack that remotely deleted malware known as PlugX from more than 4,200 computers. The agencies will notify the US owners of those impacted machines about the operation through their internet service providers.

According to the DOJ press release, hacker groups known as Mustang Panda and Twill Typhoon received backing from the Chinese government to use PlugX to infect, control and gather information from computers outside China. The action to delete the PlugX malware from US computers began in August 2024. It was conducted in cooperation with French law enforcement and with Sekoia.io, a France-based private cybersecurity company. Sekoia.io has found PlugX malware in more than 170 countries.

The Mustang Panda group has been conducting infiltration efforts around the world since at least 2014. For instance, cybersecurity firm ESET found that Mustang Panda gained access to cargo shipping companies’ computers in Norway, Greece and the Netherlands in March. And the group was one of several China-linked hacking organizations identified as compromising telecommunications systems across the Asia-Pacific region in reports last summer.

This article originally appeared on Engadget at https://www.engadget.com/cybersecurity/doj-remotely-cleaned-thousands-of-computers-infected-with-chinese-malware-191837967.html?src=rss 

How to talk to ChatGPT on your phone

ChatGPT has had support for voice conversations since the end of 2023, but if you’re new to OpenAI’s chatbot, figuring out how to converse with it can be tricky since there are a couple of ways to go about it. In this guide, I’ll explain the main differences between ChatGPT’s two voice modes and how to use both of them. 

What is ChatGPT Advanced Voice Mode?

As of this writing, OpenAI offers two different ways of interacting with ChatGPT using your voice: “Standard” and “Advanced.” The former is available to all users, with usage counting against one’s message limit. “Advanced,” meanwhile, has more granular restrictions.

For free users, OpenAI offers a monthly preview of the tool. Subscribing to the company’s $20 per month Plus plan allows you to use Advanced Voice daily. OpenAI notes daily limits may change, but either way, the company promises to notify users 15 minutes before they’re about to hit their usage cap. Pro users can use Advanced Voice as much as they want, provided they do so in a way that’s “reasonable” and complies with the company’s policies.

Outside of those limits, the primary difference between the two modes is the sophistication of the underlying models powering them. Per OpenAI, Advanced Voice is natively multi-modal, meaning it can process more than just text input. In practice, this translates to Advanced Voice offering capabilities that aren’t possible with Standard Mode, which is limited to reading a transcript of what you say to your phone. In addition to “hearing” your voice, ChatGPT in Advanced Voice can simultaneously process video and images. It’s even possible to screen share with the chatbot, making it possible for it to guide you through using an app on your phone.

Additionally, OpenAI says Advanced Voice’s multi-modality means ChatGPT can produce a more natural voice and is better able to pick up non-verbal cues.

How do you start a voice conversation with ChatGPT?


As mentioned above, OpenAI offers two different ways of interacting with ChatGPT using your voice. The steps below will show you how to access them on your iPhone or Android device.

First, you need to download the ChatGPT app (Android, iOS). The features detailed in this guide aren’t available through the ChatGPT integration offered by Apple through its AI suite.

To use Standard mode, tap the microphone icon to the right of the Message bar. If you need to grant the ChatGPT app access to your phone’s microphone, you can do so through its settings menu.

After ChatGPT begins recording your prompt, tap the checkmark icon for the chatbot to start processing your question. If you want it to discard what you said, press the x icon.

To use Advanced Mode, tap the waveform icon to the right of the Message bar. Tap the microphone icon if you want to mute your phone’s mic. You can exit Advanced Mode at any time by pressing the x icon.

How do you share a photo of your screen with ChatGPT while having a voice conversation?

If you want to screen share with ChatGPT or share a photo or video with it, tap the three-dots icon and select Share Screen, Upload Photo or Take Photo.

If you don’t see those options, update to the latest version of the ChatGPT app. You may also live in a country where OpenAI isn’t offering those features yet. See below for more information.

How many voice options are available?

The first time you use Advanced Voice Mode, you’ll be prompted to select a tone of voice for ChatGPT. Right now, OpenAI offers nine different options. I’ve listed them below, along with the company’s descriptions of each one.

Arbor — Easygoing and versatile

Breeze — Animated and earnest

Cove — Composed and direct

Ember — Confident and optimistic

Juniper — Open and upbeat

Maple — Cheerful and candid

Sol — Savvy and relaxed

Spruce — Calm and affirming

Vale — Bright and inquisitive

If you want to change ChatGPT’s voice, follow these steps:

Bring up the sidebar by tapping the two-dash icon.

Open the Settings menu by tapping the three-dots icon.

Scroll down and tap Voice.

Select the voice you want ChatGPT to use.

You can also change ChatGPT’s voice directly from the Advanced Voice interface by tapping the slider icon.

Are there regional restrictions on Advanced Voice Mode?

Some Advanced Voice Mode features aren’t available in every country where OpenAI offers ChatGPT. Specifically, the video, screen share and image upload capabilities OpenAI introduced in December aren’t available in the European Union, Switzerland, Iceland, Norway and Liechtenstein.

This article originally appeared on Engadget at https://www.engadget.com/ai/how-to-talk-to-chatgpt-on-your-phone-193633643.html?src=rss 
