Pre-orders for Ayaneo’s Kun gaming handheld start September 5

The Kun, Ayaneo’s latest addition to its ever-growing range of handheld gaming PCs, will be available sooner rather than later. It’s expected to go up for pre-order on September 5, with the Indiegogo campaign beginning at 8AM ET.

Ayaneo has released multiple gaming handhelds, but the Kun’s specs should take it to the top of the pile when compared to similar devices. While it has effectively the same processor as the ROG Ally, its 8.4-inch display sets it apart and makes it even bigger than the already-large Steam Deck. The Kun’s display has a 1600p resolution, compared to the Steam Deck’s 720p screen and the ROG Ally’s 1080p display. The Kun’s panel is also much brighter at 500 nits versus the Steam Deck’s 400 nits. This large, bright display may offer a great visual experience, but it could take a toll on the Kun’s 75Wh (19,500mAh) battery.


Based on Ayaneo’s internal tests, the Kun should be able to run for around 49 minutes on extreme settings, but the company doesn’t recommend this for daily use. At lower settings, you can expect the Kun to run for around three hours on a single charge. That’s in the same ballpark as the Steam Deck, but it remains to be seen how well the Kun fares in real-world usage.

As for the other specs, the Ayaneo Kun has an AMD Ryzen 7 7840U processor with integrated Radeon 780M graphics. Ayaneo has also packed in a new “KUNPeng” heat dissipation system to keep the device from overheating when running games at the highest settings. There are also some smaller but still significant improvements, like grips optimized for larger-screened handhelds, a new floating eight-directional D-pad, face recognition and a new customizable back button. Additionally, the Kun’s built-in folding stand is a nice touch.

The Kun will be available in three colors: Silver Wing, Black Feather and White Silk. For the base model with 16GB of RAM and 512GB of storage, prices start at $999 for Early Bird buyers, rise to $1,129 at Indiegogo’s retail price and top out at $1,209 official retail. The max configuration with 64GB of RAM and 4TB of storage will run you $1,699 Early Bird, $1,809 at Indiegogo’s retail price and $1,949 at official retail. Global shipping is expected to start in mid-October.

This article originally appeared on Engadget at https://www.engadget.com/pre-orders-for-ayaneos-kun-gaming-handheld-start-september-5-204517180.html?src=rss 

Meta’s avatars finally grow some legs

It’s been nearly a year since Meta announced at Connect 2022 that it would give its weird Casper the Friendly Ghost-esque metaverse avatars some legs to make them appear slightly more human. The day of reckoning is almost upon us, as Quest Home avatars now sport extra limbs in the latest beta version of the Quest software.

You won’t see legs on your avatar when you look down, as UploadVR points out. They’ll only be visible in third-person or when you’re looking at a virtual mirror (much like in many first-person shooter games). This makes sense, as there’s no leg tracking option on any current consumer virtual reality system. It means Meta doesn’t have to worry too much about having accurate leg animations instead of, I don’t know, wacky QWOP-style physics?

In addition, it seems your avatar’s legs won’t crouch in third-person view when you bend your knees or sit down. That could make things a little awkward when you’re trying to maintain eye contact (as much as that’s possible in VR spaces) with another user.

Meta Quest v57 PTC finally adds legs to your Meta avatar 😀 pic.twitter.com/3dzuuppp6e

— Luna (@Lunayian) August 29, 2023

The legs are not in the VR version of Horizon Worlds as yet, though you should see them in the mobile and web versions if you’re one of the folks testing those. Curiously, Meta said last year that “legs will roll out to Worlds first” before making their way to other avatar-friendly experiences. UploadVR also notes that Meta hasn’t publicly updated its software development kit for avatars, so external developers using that toolset can’t play around with legs in the company’s virtual spaces yet either.

This could all come to a head next month when this year’s Meta Connect takes place. Perhaps the company will have more to say about its virtual legs then. One thing we know for sure about the event is that Meta will reveal much more about the Quest 3.

This article originally appeared on Engadget at https://www.engadget.com/metas-avatars-finally-grow-some-legs-211016742.html?src=rss 

The NBA, NFL and UFC want instantaneous DMCA takedowns

Three major American sports leagues want to speed up Digital Millennium Copyright Act (DMCA) takedowns. In a letter posted and reported by TorrentFreak (via The Verge), the UFC, NBA and NFL urged the US Patent and Trademark Office (USPTO) to make the removal process for illegal livestreams nearly instantaneous. The organizations say the global sports industry is losing up to $28 billion from fans watching pirated live feeds instead of paid ones.

“The rampant piracy of live sports events causes tremendous harm to our companies,” legal representatives for the UFC, NBA and NFL wrote in the letter. The leagues say online service providers often take “hours or even days” to take down infringing content — leaving pirated streams plenty of time to run through an entire event before removal. “This is particularly damaging to our companies given the unique time-sensitivity of live sports content.”

At the heart of the complaint is the Digital Millennium Copyright Act’s Section 512, which states that infringing content must be removed “expeditiously.” The UFC, NBA and NFL want the wording changed to “instantaneously or near-instantaneously” to help stem their revenue losses. “This would be a relatively modest and non-controversial update to the DMCA that could be included in the broader reforms being considered by Congress or could be addressed separately,” the posted letter reads.

The letter didn’t address sports fans’ distaste for regional blackouts, which many viewers likely use the pirated feeds to bypass.

The leagues also ask the USPTO to consider more stringent requirements for online service providers to verify users posting livestreams. They ask for “particular verification measures,” including blocking the ability to stream from newly created accounts or those with few subscribers. “Certain [online service providers] already impose measures like these, demonstrating that the measures are feasible, practical and important tools to reduce livestream piracy,” the letter reads.

Sending a letter is the first step in communicating intent, but the UFC, NBA and NFL will likely have a long road ahead if they want to change the DMCA. The law, signed into law by Bill Clinton in 1998, has faced numerous calls for change in the following decades — both from media companies wanting stricter measures and users who believe it gives copyright holders too much power. Changing it would require Congress to pass a law revising it, which is never a quick or easy process.

This article originally appeared on Engadget at https://www.engadget.com/the-nba-nfl-and-ufc-want-instantaneous-dmca-takedowns-200047711.html?src=rss 

The Air Force wants $6 billion to build a fleet of AI-controlled drones

The F-22 and F-35 are two of the most cutting-edge and capable war machines in America’s arsenal. They also cost $143 million and $75 million a pop, respectively. Facing increasing pressure from China, which has accelerated its conventional weapons procurement in recent months, the Pentagon on Monday announced a program designed to build out America’s drone production base. As part of that effort, the United States Air Force has requested nearly $6 billion in federal funding over the next five years to construct a fleet of XQ-58A Valkyrie uncrewed aircraft, each of which will cost a (comparatively) paltry $3 million.

The Valkyrie comes from Kratos Defense & Security Solutions as part of the USAF’s Low Cost Attritable Strike Demonstrator (LCASD) program. The 30-foot uncrewed aircraft weighs 2,500 pounds unfueled and can carry up to 1,200 pounds of ordnance. The XQ-58 is built as a stealthy escort aircraft to fly in support of the F-22 and F-35 during combat missions, though the USAF sees the aircraft filling a variety of roles by tailoring its instruments and weapons to each mission. Those could include surveillance and resupply actions, in addition to swarming enemy aircraft in active combat.

Earlier this month, Kratos successfully operated the XQ-58 during a three-hour demonstration at Eglin Air Force Base. “AACO [the Autonomous Air Combat Operations team] has taken a multi-pronged approach to uncrewed flight testing of machine learning Artificial Intelligence and has met operational experimentation objectives by using a combination of high-performance computing, modeling and simulation, and hardware in the loop testing to train an AI agent to safely fly the XQ-58 uncrewed aircraft,” Dr. Terry Wilson, AACO program manager, said in a press statement at the time.

“It’s a very strange feeling,” USAF test pilot Major Ross Elder told the New York Times. “I’m flying off the wing of something that’s making its own decisions. And it’s not a human brain.” The USAF has been quick to point out that the drones are to remain firmly under the command of human pilots and commanders. 

The Air Force took heat in June when Colonel Tucker “Cinco” Hamilton “misspoke” at a press conference and suggested that an AI could potentially be induced to turn on its operator, though the DoD dismissed that scenario as a “hypothetical thought exercise” rather than an actual simulation.

“Any Air Force drone [will be] designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force,” a Pentagon spokeswoman told the NYT. Congress will need to pass the DoD’s budget for the next fiscal year before construction efforts can begin. The XQ-58 program will require an initial outlay of $3.3 billion in 2024 if approved.

This article originally appeared on Engadget at https://www.engadget.com/the-air-force-wants-6-billion-to-build-a-fleet-of-ai-controlled-drones-204548974.html?src=rss 

Some OnePlus smartphones are nearly 20 percent off, hitting record low prices

A pair of popular OnePlus smartphones just went on sale, hitting record low prices for both. The company’s flagship OnePlus 11 5G went down from $700 to $600, a savings of nearly 20 percent. The budget-friendly OnePlus Nord N30 5G got even, well, friendlier with a $50 discount, dropping the cost to $250 from $300. If you’re shopping for a smartphone, this is a good time to take the plunge.

We praised the OnePlus 11 as a “back-to-basics flagship smartphone,” noting its gorgeous 120Hz 6.6-inch OLED display, fantastic battery life, 100W quick-charging and improved camera system when compared to its predecessor. In other words, the 11 was already a bargain at $700, as modern iPhones and Samsung phones cost upwards of $1,000. Today’s sale makes the bargain even harder to resist.

The OnePlus Nord N30 takes a more modest approach, as this is absolutely a low-priced smartphone rather than a flagship. However, it’s one of the best budget-friendly phones around and a great choice for anyone looking for a no-frills device that gets the job done. The specs are fantastic for the price, with a Snapdragon 695 processor, 8GB of RAM, 128GB of storage and a crisp 120Hz IPS display. Not many cheap phones can match this set of features.

These phones aren’t perfect, as the N30 lacks waterproofing and the 11 isn’t the most exciting flagship model in the world, but the list of pros far outweighs any list of cons. OnePlus devices aren’t widely available at retail outlets, so this sale is limited to Amazon and Best Buy.

Follow @EngadgetDeals on Twitter and subscribe to the Engadget Deals newsletter for the latest tech deals and buying advice.

This article originally appeared on Engadget at https://www.engadget.com/some-oneplus-smartphones-are-nearly-20-percent-off-hitting-record-low-prices-184540056.html?src=rss 

X opens the floodgates on political ads

The company previously known as Twitter is fully reversing a longtime ban on political advertising after it first loosened its rules in January. X said in an update it would once again open its doors to political advertisers of all stripes.

“Building on our commitment to free expression, we are also going to allow political advertising,” the company wrote. It added that it will “apply specific policies to paid-for promoted political posts,” including rules barring “the promotion of false or misleading content” as well as content “intended to undermine public confidence in an election.” X also said it’s planning to create a “global advertising transparency center” so that users can track political ads on the platform.

Twitter first banned political ads in 2019, with then-CEO Jack Dorsey saying that “political message reach should be earned, not bought.” That began to change earlier this year when the company eased restrictions for “cause-based” ads, citing the importance of “public conversation around important topics.”

Now, it’s unclear if there is any kind of political ad that would be off-limits on X so long as it adheres to the company’s rules. Of note, X has yet to update support pages outlining its political ad rules, though it said in a blog post it was updating its civic integrity policy “to make sure we strike the right balance between tackling the most harmful types of content … and not censoring political debate.” X didn’t respond to a request for comment.

The policy changes could have significant implications for the upcoming 2024 elections. X also said that it was in the process of staffing up its teams overseeing safety and elections policies, “to focus on combating manipulation, surfacing inauthentic accounts and closely monitoring the platform for emerging threats.”

Opening to political ads could also be a major boon to X’s ad business, which has dropped 50 percent since Elon Musk’s takeover last year. Though conventional advertisers have increasingly shied away from the platform, political campaigns may have a harder time staying away ahead of a major election.

This article originally appeared on Engadget at https://www.engadget.com/x-opens-the-floodgates-on-political-ads-191931318.html?src=rss 

YouTubers can take training courses to remove warnings from their permanent record

YouTube is updating its enforcement policies to give creators who break its rules a chance to wipe the slate clean. Starting today, those who receive a warning for violating the community guidelines will be able to take a training course designed to help them better understand how to steer clear of uploading videos that run afoul of YouTube’s regulations. As long as they complete the course and don’t violate the same policy within a 90-day period, YouTube will remove the warning from their account. In other words, they can go to detention to help avoid a suspension.

If they violate the policy for which they received the warning a second time in that roughly three-month window, YouTube will remove the video in question and slap the creator with a dreaded strike (which can jeopardize their chances of making a living from the platform). A creator who finishes a course and has the warning lifted from their account after 90 days but then violates the same policy again will be back at square one — YouTube will nix the offending video and give them another warning. They can go through another training program to have the new warning wiped from their account.

Another major change is that, until now, YouTube has given creators who cross the line a single, blanket lifetime warning. From now on, warnings will be applied to rule-breaking creators’ accounts based on the specific policy they violate. So, they can have multiple warnings on their account and the option to take a training course for each one to have them wiped away.

YouTube started dishing out one-time warnings in 2019 for a first rule break, which it says offered “creators the chance to review what went wrong before facing more penalties” (i.e. strikes). The service points out that over 80 percent of creators who received a warning haven’t broken the rules since. Nonetheless, YouTube says creators told the team “they want more resources to better understand how we draw our policy lines” and this new approach is geared toward that greater transparency.

It’s worth bearing in mind that the three-strike policy is still in place. If a creator receives three strikes within 90 days, it’s still likely that YouTube will punt them off the platform. Extreme policy violations are still subject to strikes and channel termination, even if a creator has gone through these training courses. There aren’t any changes to the community guidelines here either.

“Looking ahead, we’ll keep working to make our policies easier for creators to understand,” YouTube said. “We ultimately want creators to have the clarity they need to stay strike free on our platform — while maintaining a healthy experience for YouTube’s entire community.”

Offering YouTubers a chance to learn and grow from their mistakes is a net positive even if some bad actors might try to abuse the system by deliberately uploading a few videos that cross the line each year. Meanwhile, Xbox recently adopted an eight-strike enforcement policy, under which its users can have strikes removed from their accounts after six months.

This article originally appeared on Engadget at https://www.engadget.com/youtubers-can-take-training-courses-to-remove-warnings-from-their-permanent-record-181432261.html?src=rss 

A Google-powered chatbot is handling GM’s non-emergency OnStar calls

General Motors is taking Google’s AI chatbot on the road. The automaker announced today that it’s using Google Cloud’s Dialogflow to automate some non-emergency OnStar features like navigation and call routing. Crucially, the automaker claims the bot can pinpoint keywords indicating an emergency situation and “quickly route the call” to trained humans when needed. GM says the system frees up OnStar Advisors to spend more time with customers requiring a live human.

According to GM, the OnStar Interactive Virtual Assistant (IVA) has used Google Cloud’s Dialogflow under the hood since IVA’s 2022 launch. The virtual voice assistant can handle common customer questions and help with routing and navigation, including turn-by-turn directions. The companies see the collaboration as expanding down the road. “The successful deployment of Google Cloud’s AI in GM’s OnStar service has now opened the door to future generative AI deployments being jointly piloted by General Motors and Google Cloud,” the companies wrote in a joint press release.

The automaker says Google Cloud’s AI has allowed OnStar to better understand customer requests on the first try. In addition, it says customers have reacted positively to avoiding hold times as they can quickly begin chatting with an AI-powered bot with a “modern, natural sounding voice.” GM says the virtual assistant now handles over one million customer inquiries per month in the US and Canada. OnStar IVA is available in most GM vehicles, 2015 and newer, with OnStar connections.

GM has also reportedly worked on developing a ChatGPT-powered assistant for its vehicles, although it isn’t yet clear if that project is still on the table.

“Generative AI has the potential to revolutionize the buying, ownership, and interaction experience inside the vehicle and beyond, enabling more opportunities to deliver new features and services,” Mike Abbott, GM’s executive vice president of software and services, wrote in the press release. “Our software-led approach has accelerated the creation of compelling services for our customers while driving increased efficiency across the GM enterprise. The work with Google Cloud is another example of our efforts to transform how customers engage with our products and services.”

The companies also announced today that Google’s Dialogflow tech is behind chatbots on the GM website, similar to the slew of OpenAI-powered assistants that began popping up since the launch of the ChatGPT API earlier this year. GM’s web bots can “conversationally help answer customer questions about GM vehicles and product features based on the technical information from GM’s extensive vehicle data repositories,” according to the automaker.

“General Motors is at the forefront of deploying AI in practical and effective ways that ultimately create better customer experiences,” Thomas Kurian, Google Cloud CEO, wrote today. “We’re looking forward to a deepened relationship and more collaboration with GM as we explore how the company uses generative AI in transformational ways.”

This article originally appeared on Engadget at https://www.engadget.com/a-google-powered-chatbot-is-handling-gms-non-emergency-onstar-calls-183040938.html?src=rss 

Google wants an invisible digital watermark to bring transparency to AI art

Google took a step towards transparency in AI-generated images today. Google DeepMind announced SynthID, a watermarking / identification tool for generative art. The company says the technology embeds a digital watermark, invisible to the human eye, directly onto an image’s pixels. SynthID is rolling out first to “a limited number” of customers using Imagen, Google’s art generator available on its suite of cloud-based AI tools.

One of the many issues with generative art — apart from the ethical implications of training on artists’ work — is the potential for creating deepfakes. For example, the pope’s hot new hip-hop attire (an AI image created with MidJourney) going viral on social media was an early example of what could become more commonplace as generative tools evolve. It doesn’t take much imagination to see how something like political ads using AI-generated art could do much more damage than a funny image circulating on Twitter. “Watermarking audio and visual content to help make it clear that content is AI-generated” was one of the voluntary commitments that seven AI companies agreed to develop after a July meeting at the White House. Google is the first of the companies to launch such a system.

Google doesn’t go too far into the weeds about SynthID’s technical implementation (likely to prevent workarounds), but it says the watermark can’t be easily removed through simple editing techniques. “Finding the right balance between imperceptibility and robustness to image manipulations is difficult,” the company wrote in a DeepMind blog post published today. “We designed SynthID so it doesn’t compromise image quality, and allows the watermark to remain detectable, even after modifications like adding filters, changing colours, and saving with various lossy compression schemes — most commonly used for JPEGs,” DeepMind’s SynthID project leaders Sven Gowal and Pushmeet Kohli wrote.


The identification part of SynthID rates the image based on three digital watermark confidence levels: detected, not detected and possibly detected. Since the tool is embedded into the image’s pixels, Google says its system can work alongside metadata-based approaches, like the one Adobe uses with its Photoshop generative features, currently available in an open beta.
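Google hasn’t published how the detector’s output maps to those three levels, but conceptually it amounts to thresholding a confidence score. Here is a purely illustrative sketch of that idea; the function name, score scale and threshold values are all invented for this example and don’t reflect SynthID’s actual implementation:

```python
def classify_watermark(score: float) -> str:
    """Map a hypothetical detector confidence score in [0.0, 1.0]
    to the three levels SynthID reports. Thresholds are made up
    for illustration only."""
    if score >= 0.9:
        return "detected"
    if score >= 0.5:
        return "possibly detected"
    return "not detected"

print(classify_watermark(0.95))  # detected
```

The middle “possibly detected” band is what distinguishes this from a simple yes/no check: it lets the tool flag images where the watermark may have been degraded by edits without overclaiming either way.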

SynthID includes a pair of deep learning models: one for watermarking and another for identifying. Google says the two were trained together on a diverse set of images, culminating in a combined ML model. “The combined model is optimised on a range of objectives, including correctly identifying watermarked content and improving imperceptibility by visually aligning the watermark to the original content,” Gowal and Kohli wrote.

Google acknowledged that it isn’t a perfect solution, adding that it “isn’t foolproof against extreme image manipulations.” But it describes the watermark as “a promising technical approach for empowering people and organisations to work with AI-generated content responsibly.” The company says the tool could expand to other AI models, including those tasked with generating text (like ChatGPT), video and audio. 

Although watermarks could help with deepfakes, it’s easy to imagine digital watermarking turning into an arms race with hackers, with services that adopt SynthID requiring continual updating. In addition, the open-source nature of Stable Diffusion, one of the leading generative tools, could make industry-wide adoption of SynthID or any similar solution a tall order: It already has countless custom builds that can run on local PCs out in the wild. Regardless, Google hopes to make SynthID available to third parties “in the near future” to at least improve AI transparency industry-wide. 

This article originally appeared on Engadget at https://www.engadget.com/google-wants-an-invisible-digital-watermark-to-bring-transparency-to-ai-art-164551794.html?src=rss 

WhatsApp’s new Mac app supports group video calls for up to eight people

Several months after WhatsApp released a Windows desktop client, Mac users are getting to join the party with their own dedicated app for the service. The formal arrival of the client (which had been in beta since January) on Apple’s desktops and laptops means users can take part in WhatsApp group calls on their Mac for the first time. 

WhatsApp for Mac supports up to eight people in video calls and as many as 32 in audio-only chinwags. You can hop into a group call after it’s already started, view your call (and chat) history and opt to receive notifications about incoming calls even if you don’t have the WhatsApp client open. Sharing files should be a cinch too, as you’ll be able to simply drag and drop them into a conversation.

The WhatsApp team has spent quite some time making sure that the service supports end-to-end encryption (E2EE) across multiple devices with cross-platform syncing. So, it’s not super surprising that WhatsApp for Mac includes E2EE protection for your chats and calls. The app is available from the WhatsApp website and it’ll hit the Mac App Store soon.

This article originally appeared on Engadget at https://www.engadget.com/whatsapps-new-mac-app-supports-group-video-calls-for-up-to-eight-people-170053104.html?src=rss 
