Netflix and TED are hopping on the daily word game bandwagon

Netflix announced the next addition to its gaming roster, and it’s a collaboration with the TED nonprofit. TED Tumblewords is a daily puzzle game where you slide rows of letters around to make words. There will be three puzzles available each day, and you can play rounds against friends, other online players or the TED bot. In addition to the daily word challenges, which are designed to improve critical thinking and vocabulary, players will see interesting facts from the TED library. The game will be available to play on Netflix and TED.com on November 19.

Since it began offering mobile games, Netflix has amassed a lot of high-quality titles in its lineup. The collection is a mix of licensed indie game projects, such as Hades and Kentucky Route Zero, alongside in-house creations centered on its popular shows, like the retro-styled Stranger Things game. However, the streaming service just today shut down its in-house AAA game studio before the team ever released or even announced a single project. While we wait for TED Tumblewords to arrive, here are some other excellent choices for your daily online gaming fix.

This article originally appeared on Engadget at https://www.engadget.com/gaming/netflix-and-ted-are-hopping-on-the-daily-word-game-bandwagon-230014184.html?src=rss 

Huawei appears to still be using TSMC chips despite US sanctions

A Canadian research firm called TechInsights took a deep dive into one of Huawei’s artificial intelligence accelerators and found a chip manufactured by Taiwan Semiconductor Manufacturing Company (TSMC). Bloomberg spoke with several people familiar with the investigation who asked to remain anonymous since TechInsights’ report hasn’t been released to the public.

According to the anonymous sources, TechInsights’ teardown found a TSMC-made chip inside Huawei’s Ascend 910B AI accelerator. TechInsights declined to comment.

The US Commerce Department has implemented trade restrictions against Huawei that bar the electronics company from obtaining chips made by foreign firms. Earlier this year, the US government tightened those restrictions even further by revoking the licenses that allowed Intel and Qualcomm to supply chips for Huawei’s devices.

In a statement provided to the Commerce Department, TSMC denied having had a working relationship with Huawei since mid-September 2020. TSMC also told Bloomberg that it hasn’t produced any chips for Huawei since the amended restrictions took effect. Huawei, for its part, denied that it had ever “launched the 910B chip.”

This isn’t the first time Huawei has been caught trying to subvert US sanctions and trade restrictions. In May, Bloomberg revealed that Huawei had funded secret research at US universities, including Harvard, by funneling the money through Optica, a Washington-based scientific research foundation. The foundation said it decided to return the money in June, and executives Elizabeth Rogen and Chad Stark stepped down that August.

This article originally appeared on Engadget at https://www.engadget.com/big-tech/huawei-appears-to-still-be-using-tsmc-chips-despite-us-sanctions-222617636.html?src=rss 

Qualcomm and Google team up to help carmakers create AI voice systems

Car manufacturers will be able to develop new AI voice assistants for their vehicles thanks to a partnership between Qualcomm and Google. Qualcomm announced earlier today that it’s working with Google on a new AI development system for carmakers.

The new system is built on Android Automotive OS (AAOS), Google’s infotainment platform for cars. Qualcomm is pairing its Snapdragon Digital Chassis with Google Cloud and AAOS so carmakers can build AI-powered digital cockpits. Qualcomm also unveiled two new chips for powering driving systems: the Snapdragon Cockpit Elite for dashboards and the Snapdragon Ride Elite for self-driving features.

The new interface will allow car drivers and passengers to interact with custom voice assistants, immersive maps and real-time driving updates. Carmakers can use the new system to create their own unique and marketable AI voice assistants that don’t require a connection to a smartphone.

Other carmakers have already taken steps to integrate AI systems into their vehicles. At CES 2024, Volkswagen announced plans to integrate ChatGPT into its cars’ voice assistants across a range of newer models. After a slow start, AAOS now underpins vehicles from several manufacturers, including Chevrolet, Honda, Volvo and Rivian.

This article originally appeared on Engadget at https://www.engadget.com/ai/qualcomm-and-google-team-up-to-help-carmakers-create-ai-voice-systems-211510693.html?src=rss 

NASA’s newest telescope can detect gravitational waves from colliding black holes

NASA showed off a telescope prototype for a new gravitational wave detection mission in space. The telescope is part of the Laser Interferometer Space Antenna (LISA) mission, led by the European Space Agency (ESA) in partnership with NASA.

The goal of the LISA mission is to position three spacecraft in a triangular formation with sides measuring nearly 1.6 million miles, trailing Earth in its orbit around the Sun. Each spacecraft will carry two telescopes to track its siblings using infrared laser beams, which can measure changes in distance down to a trillionth of a meter.
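The two figures quoted above imply just how sensitive the instrument is. As a back-of-the-envelope check (assuming the arm length converts to roughly 2.5 billion meters; the exact design value may differ):

```python
# Rough sensitivity estimate from the figures quoted in the article.
arm_length_m = 2.5e9   # ~1.6 million miles per triangle side, in meters
precision_m = 1e-12    # "a trillionth of a meter" laser ranging precision

# Strain is the fractional change in arm length the instrument can resolve.
strain = precision_m / arm_length_m
print(f"detectable strain ~ {strain:.0e}")  # prints: detectable strain ~ 4e-22
```

In other words, LISA would register a distortion of about four parts in ten billion trillion, which is the scale of spacetime ripple a distant black hole merger produces.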

Gravitational waves are created during cataclysmic events such as collisions between black holes. They were first theorized by Albert Einstein in 1916 and detected almost a century later by the Laser Interferometer Gravitational-Wave Observatory (LIGO) Scientific Collaboration, backed by the National Science Foundation, Caltech and MIT. LISA will register a gravitational wave when a passing wave minutely changes the distances between the three spacecraft, shifting them out of their characteristic triangular pattern.

The LISA mission is scheduled to launch in the mid-2030s. The detection of gravitational waves holds “enormous potential” to improve our understanding of the universe, including phenomena like black holes and the Big Bang that are difficult to study through other means, according to the official mission website.

This article originally appeared on Engadget at https://www.engadget.com/science/space/nasas-newest-telescope-can-detect-gravitational-waves-from-colliding-black-holes-194527272.html?src=rss 

Ecobee smart home users can now unlock Yale and August smart locks from its app

Ecobee is integrating smart locks into its app. The company doesn’t make smart locks of its own, but you can now control Wi-Fi-enabled ones from Yale and August using the Ecobee app. The feature could save you from switching apps to let in someone who rings your smart doorbell. However, it’s locked behind a subscription, so user convenience isn’t the only motive here.

The integration adds an “unlock” button to the Ecobee app’s live view. So you can let visitors in from the same screen where you confirm it’s someone you want coming inside. (Handy!) The Ecobee app also lets you lock your doors automatically when you arm your security system. (Also handy!)

Less handy: You’ll need to pay up to enjoy these perks because the feature is locked (ahem) behind Ecobee’s Smart Security system. The premium service costs $5 monthly or $50 annually. And as The Verge notes, it won’t let you unlock your August or Yale devices from Ecobee’s smart thermostats.

This could be a convenient perk if you’re already paying for Ecobee’s subscription service. If not, you’ll have to ask yourself if it’s worth a premium to avoid the oh-so-grueling task of pulling up your phone’s app switcher to jump to another smart-home app.

This article originally appeared on Engadget at https://www.engadget.com/home/smart-home/ecobee-smart-home-users-can-now-unlock-yale-and-august-smart-locks-from-its-app-201700926.html?src=rss 

Google Messages adds enhanced scam detection tools

Google just announced a spate of safety features coming to Messages. There’s enhanced scam detection centered around texts that could lead to fraud. The company says the update provides “improved analysis of scammy texts.” For now, this tool will prioritize scams involving package deliveries and job offers.

When Google Messages suspects a scam, it’ll move the message to the spam folder or issue a warning. The app uses on-device machine learning models to detect these scams, meaning that conversations will remain private. This enhancement is rolling out now to beta users who have spam protection enabled.

Google’s also set to broadly roll out intelligent warnings, a feature that’s been in the pilot stage for a while. This tool warns users when they get a link from an unknown sender and automatically “blocks messages with links from suspicious senders.” The updated safety tools also include new sensitive content warnings that automatically blur images that may contain nudity. This is an opt-in feature and also keeps everything on the device. It’ll show up in the next few months.

Finally, there’s a forthcoming tool that’ll let people turn off messages from unknown international senders, thus cutting the scam spigot off at the source. This will automatically hide messages from international senders who aren’t already in the contacts list. This feature is entering a pilot program in Singapore later this year before expanding to more countries.

In addition to the above tools, Google says it’s currently working on a contact verifying feature for Android. This should help put the kibosh on scammers trying to impersonate one of your contacts. The company has stated that this feature will be available sometime next year.

This article originally appeared on Engadget at https://www.engadget.com/cybersecurity/google-messages-adds-enhanced-scam-detection-tools-190009890.html?src=rss 

A federal ban on fake online reviews is now in effect

Be warned, online merchants who see no issue in publishing phony reviews from made-up customers: that practice is no longer allowed. A federal ban on fake online reviews has taken effect.

The Federal Trade Commission issued a final rule on the purchase and sale of online reviews back in August and it came into force 60 days after it was published in the Federal Register. The agency’s commissioners voted unanimously in favor of the regulation.

The rule bans businesses from creating, buying or selling reviews and testimonials attributed to people who don’t exist, including those that are AI generated. False celebrity endorsements aren’t allowed, and companies can’t pay or otherwise incentivize genuine customers to leave positive or negative reviews.

Reviews and testimonials written by people with close ties to a company are also a no-no unless that relationship is disclosed. There are restrictions on soliciting reviews from the close relatives of employees, too.

The rule includes limitations on the suppression of negative reviews from customers. It also prohibits people from knowingly selling or buying fake followers and views to inflate the influence or importance of social media accounts for commercial purposes.

Fines for violating these measures could prove extremely costly. The maximum civil penalty for each infraction is currently $51,744.

“Fake reviews not only waste people’s time and money, but also pollute the marketplace and divert business away from honest competitors,” FTC Chair Lina Khan said when the rule was finalized. “By strengthening the FTC’s toolkit to fight deceptive advertising, the final rule will protect Americans from getting cheated, put businesses that unlawfully game the system on notice, and promote markets that are fair, honest and competitive.”

The rule is a positive move for consumers, with the idea that reviews should be more trustworthy in the future. In a separate victory for consumer rights, the FTC recently issued a final rule to make it as easy for people to cancel a subscription as it is to sign up for one.

This article originally appeared on Engadget at https://www.engadget.com/big-tech/a-federal-ban-on-fake-online-reviews-is-now-in-effect-191746690.html?src=rss 

OpenAI and Microsoft are funding $10 million in grants for AI-powered journalism

OpenAI and Microsoft are funding projects to bring more AI tools into the newsroom. The duo will give grants of up to $10 million to Chicago Public Media, the Minnesota Star Tribune, Newsday (in Long Island, NY), The Philadelphia Inquirer and The Seattle Times. Each of the publications will hire a two-year AI fellow to develop projects for implementing the technology and improving business sustainability. Three more outlets are expected to receive fellowship grants in a second round.

OpenAI and Microsoft are each contributing $2.5 million in direct funding as well as $2.5 million in software and enterprise credits. The Lenfest Institute of Journalism is collaborating with OpenAI and Microsoft on the project, and announced the news today.

To date, the ties between journalism and AI have mostly ranged from suspicious to litigious. OpenAI and Microsoft have been sued by the Center for Investigative Reporting, The New York Times, The Intercept, Raw Story and AlterNet. Some publications accused ChatGPT of plagiarizing their articles, and other suits centered on scraping web content for AI model training without permission or compensation. Other media outlets have opted to negotiate; Condé Nast was one of the latest to ink a deal with OpenAI for rights to its content.

In a separate development, OpenAI has hired Aaron Chatterji as its first chief economist. Chatterji is a professor at Duke University’s Fuqua School of Business, and he also served on President Barack Obama’s Council of Economic Advisers as well as in President Joe Biden’s Commerce Department.

This article originally appeared on Engadget at https://www.engadget.com/ai/openai-and-microsoft-are-funding-10-million-in-grants-for-ai-powered-journalism-193042213.html?src=rss 

More than 10,500 artists sign open letter protesting unlicensed AI training

Some of the biggest names in Hollywood, literature and music have issued a warning to the artificial intelligence industry. The Washington Post reports that more than 10,500 artists have signed an open protest letter objecting to AI developers’ “unlicensed use” of artists’ work to train their models.

“The unlicensed use of creative works for training generative AI is a major, unjust threat to the livelihoods of the people behind those works, and must not be permitted,” the one-sentence letter reads.

The letter has support from some huge names across the film, television, music and publishing industries. Some of the more famous signatories include actors Julianne Moore, Rosario Dawson, Kevin Bacon and F. Murray Abraham, as well as former Saturday Night Live star Kate McKinnon, author James Patterson and Radiohead frontman Thom Yorke.

The unauthorized use of their work to train AI models has been an area of major concern among creatives. The SAG-AFTRA union and Writers Guild of America recently held industry-wide strikes demanding better protections for their work and livelihood against the use of AI in studio projects.

There are also several lawsuits currently in the courts accusing AI developers of using copyrighted content without permission or proper compensation. On Monday, The Wall Street Journal and The New York Post sued Perplexity AI for violating their copyright protections. Music labels like Universal, Warner and Sony sued the makers of the Suno and Udio AI music generators back in June for violating their copyrights on a “massive scale.”

This article originally appeared on Engadget at https://www.engadget.com/ai/more-than-10500-artists-sign-open-letter-protesting-unlicensed-ai-training-174544491.html?src=rss 

Redact-A-Chat is an old-style chatroom that censors words after one use

If you’re a word and game lover like me, then prepare to join me in excitement (and eventual frustration): there’s a new daily word puzzle of sorts. New York-based art collective MSCHF has introduced an AOL-style chatroom called Redact-A-Chat that censors a word each time someone uses it. Josh Wardle, creator of Wordle, worked at MSCHF for a few years.

So, how does it work? There’s a main chatroom where you can write anything, but once a word has been used, it’s covered with a blurry blue line and unavailable for the rest of the day. I got to try it out early, and duplicated words within a single message also get the second mention blurred out. All words become fair game again at midnight. Announcements about newly censored words and the daily reset come from three one-eyed safety pin characters reminiscent of Clippy, Microsoft Word’s old paper clip assistant.
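The mechanic described above boils down to a word-tracking filter. Here’s a minimal sketch of that idea (a hypothetical reconstruction, not MSCHF’s actual code; it ignores punctuation handling and renders redactions as block characters rather than a blue blur):

```python
from datetime import date

class RedactingRoom:
    """Sketch of the censoring rule: a word is free the first time it's
    used each day; every later use is redacted until midnight."""

    def __init__(self):
        self.used = set()       # words already spoken today (lowercased)
        self.day = date.today()

    def _maybe_reset(self):
        # At midnight, all words become fair game again.
        if date.today() != self.day:
            self.used.clear()
            self.day = date.today()

    def post(self, message):
        self._maybe_reset()
        out = []
        for word in message.split():
            key = word.lower()
            if key in self.used:
                out.append("█" * len(word))  # redact repeat uses
            else:
                self.used.add(key)
                out.append(word)
        return " ".join(out)
```

For example, `room.post("hello world")` comes back untouched, but a follow-up `room.post("hello again")` returns `"█████ again"`, and a repeated word inside a single message gets its second mention redacted too.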

In a statement, MSCHF said Redact-A-Chat “forces creative communication. You must constantly keep ahead of the censor in order to continue your conversation. On the other hand, you can be that a**hole who starts working their way through the dictionary to deprive everyone else of language.”

If you’re unsure about participating in the main room, you can start a chat just for your friends. You just click the create a chat room button, give it a name and it will appear. You can then invite other people to the group with a unique code. 

This article originally appeared on Engadget at https://www.engadget.com/ai/redact-a-chat-is-an-old-style-chatroom-that-censors-words-after-one-use-180014370.html?src=rss 
