Amazon will start charging for formerly free Alexa Guard smoke and security alerts

Amazon is paywalling more formerly free features on its smart home devices. Several months after it moved some basic Ring alarm system features behind a subscription plan, Amazon is doing something similar for several Alexa Guard functions.

Alexa Guard is a free security feature that has come standard on Echo devices, listening for things like alarms and intruders when you aren’t at home. Now, however, the company is shutting down Alexa Guard, as The Verge reports.

Some Alexa Guard features will remain available to everyone at no extra cost as part of the core Alexa experience. These include the Home and Away modes (for arming and disarming a Ring Alarm system) and Away Lighting, which turns on smart lights to make it seem like you’re home.

However, you’ll need to pay for the new Emergency Assist service to keep using several features. Amazon is paywalling Alexa Guard’s smoke and CO alarm detection functions. You’ll also soon have to pony up for a subscription if you want Alexa to keep an ear out for the sound of breaking glass, signifying a possible intruder.

There’s at least some good news for Ring Protect Pro members who linked their Ring and Alexa accounts as of September 20. Those folks will get an Alexa Emergency Assist membership at no extra cost until October 31 next year. Guard Plus, which added some extra features to Alexa Guard for a monthly or annual fee, is no longer available for purchase. It was included with a Ring Protect Pro plan.

Alexa Emergency Assist currently costs $6 per month or $59 per year. However, that’s listed as an introductory price that will only remain valid for everyone until January 8. After that time, non-Prime subscribers will have to pay extra for Emergency Assist. Much like Guard Plus, Alexa Emergency Assist enables users to call emergency services via the voice assistant on an Echo device.

This article originally appeared on Engadget at https://www.engadget.com/amazon-will-start-charging-for-formerly-free-alexa-guard-smoke-and-security-alerts-184602106.html?src=rss 

OSIRIS-REx used a Tesla-esque navigation system to capture 4.5 billion-year-old regolith

NASA’s pioneering OSIRIS-REx mission has successfully returned from its journey to the asteroid Bennu. The robotic spacecraft briefly set down on the celestial body in a first-of-its-kind attempt (by an American space agency) to collect pristine rock samples before lifting off again and heading back to Earth, a three-year roundtrip. The sample capsule landed safely on Sunday in the desert at the DoD’s Utah Test and Training Range and Dugway Proving Ground.

Even more impressive, the spacecraft performed its Touch-and-Go (TAG) sampling maneuver, carried out by the Touch-and-Go Sample Acquisition Mechanism (TAGSAM), autonomously via the craft’s onboard Natural Feature Tracking (NFT) visual navigation system, another first! Engadget recently sat down with Dr. Ryan Olds, guidance, navigation and control manager at Lockheed Martin, who helped develop the NFT system, to discuss how the groundbreaking AI was built and where in the galaxy it might be heading next.

OSIRIS-REx (Origins, Spectral Interpretation, Resource Identification and Security – Regolith Explorer) is America’s first attempt at retrieving physical samples from an asteroid (Japan has already done it twice). Bennu, roughly 70 million miles from Earth when OSIRIS-REx first intercepted it, presented far more of a landing challenge than previous, larger targets like the moon or Mars, neither of which is particularly easy to reach either.

“There’s so many different factors,” in matching the myriad velocities and trajectories involved in these landing maneuvers, Olds told Engadget. “So many little details. A lot of what we’re doing is based on models and, if you have little error sources in your model that aren’t being taken into account, then those can lead to big mistakes. So it’s really, really important to make sure you’re modeling everything accurately.”

In fact, after OSIRIS-REx rendezvoused with Bennu in 2018, the spacecraft spent more than 500 days circling the asteroid and capturing detailed images of its surface, from which the ground control team generated digital terrain models. “It takes a lot of research to make sure you’ve got all the effects understood,” Olds said. “We did a lot of that with our work on Natural Feature Tracking to make sure we understood the gravity field around the asteroid. Even little things like the spacecraft’s heaters turning on and off — even that produces a very, very tiny propulsive effect because you’re radiating heat, and on really small bodies like Bennu, those little things matter.”

Since the asteroid rotates on its axis every four hours or so, its surface cycling from sunlit to dark and back again, the OSIRIS team had to “design all of our TAG trajectories so that we were flying over the lit portion of the asteroid,” Olds said. “We didn’t want the spacecraft to ever miss the maneuver and accidentally drift back into the eclipse behind the asteroid.” The NFT system, much like a Tesla, relies primarily on an array of visual spectrum cameras to know where it is in space, with a LiDAR system operating as backup.

LiDAR was initially going to be the primary method of navigating, given the team’s belief during the planning phase that the surface of Bennu resembled a sandy, beach-like environment. “We weren’t expecting to have any hazards like big boulders,” Olds said. “So the navigation system was really only designed to make sure we would land within about a 25-meter area, and LiDAR was the system of choice for that. But quickly once we got to Bennu, we were really surprised by what it looked like, just boulders everywhere, hazards everywhere.”

The team had difficulty spotting any potential landing site with a radius larger than eight meters, which meant the LiDAR system would not be precise enough for the task. So they pivoted to the NFT system, which can estimate the spacecraft’s orbital state in all three dimensions, making it possible to tell, for example, whether a boulder sits in the lander’s descent path. The spacecraft ultimately touched down within just 72cm of its target.

“We did have some ground-based models from radar imagery,” Olds said. “But that really only gave us a very kind of bulk shape — it didn’t give us the detail.” OSIRIS-REx’s 17 months of flyovers provided that missing granularity in the form of thousands of high-resolution images. Those images were transmitted back to Earth, where members of the OSIRIS-REx Altimetry Working Group (AltWG) processed, analyzed and reassembled them into a catalog of more than 300 terrain reference maps, along with a detailed 3D shape model of the terrain. The NFT system relied on these assets during its TAG maneuver to adjust its heading and trajectory.
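The actual NFT flight software is proprietary, but the core idea of recognizing catalogued surface features in a live camera frame resembles classic template matching. Here is a minimal, purely illustrative sketch (every name is hypothetical, not Lockheed Martin’s implementation) using normalized cross-correlation:

```python
import numpy as np

def normalized_cross_correlation(frame: np.ndarray, template: np.ndarray) -> float:
    """Score how well a same-sized template matches an image patch (1.0 = identical)."""
    f = (frame - frame.mean()) / (frame.std() + 1e-9)
    t = (template - template.mean()) / (template.std() + 1e-9)
    return float((f * t).mean())

def best_match(frame: np.ndarray, reference_maps: list) -> int:
    """Return the index of the reference map that best matches the current frame."""
    scores = [normalized_cross_correlation(frame, m) for m in reference_maps]
    return int(np.argmax(scores))

# Toy example: three random "terrain maps," with the camera seeing a noisy view of the third
rng = np.random.default_rng(0)
maps = [rng.random((16, 16)) for _ in range(3)]
camera_frame = maps[2] + rng.normal(0, 0.05, (16, 16))
print(best_match(camera_frame, maps))
```

A real system works with hundreds of maps, estimates full position and velocity rather than an index, and must cope with changing lighting and perspective, but the matching principle is the same.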

That full maneuver was a four-part process starting from the “safe-home terminator orbit” around Bennu. First, the spacecraft moved onto the daylight side of the asteroid, to a position about 125m above the surface dubbed Checkpoint. The third maneuver shifted OSIRIS-REx to Matchpoint, 55m above the surface, so that by the time it finished descending and came into contact with the asteroid, it would be traveling at just 10 cm/s. At that point the craft switched from its visual cameras (which were less useful amid kicked-up asteroid dust) to its onboard accelerometer and the delta-v update (DVU) algorithm to accurately estimate its relative position. In the fourth and final maneuver, the craft, along with its approximately eight-ounce (250g) cargo, gently backed away from the 4.5 billion-year-old space rock.
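The four-phase descent above can be summarized as a simple sequence. The sketch below is purely illustrative (the phase names and helper function are invented); only the altitudes and touchdown speed come from the mission profile as described:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Phase:
    name: str
    altitude_m: Optional[float]  # approximate height above Bennu, None if not fixed
    note: str

# Approximate Touch-and-Go (TAG) profile, as described for OSIRIS-REx
TAG_SEQUENCE = [
    Phase("orbit_departure", None, "leave the safe-home terminator orbit"),
    Phase("checkpoint", 125.0, "hold over the sunlit side, ~125 m up"),
    Phase("matchpoint", 55.0, "match the asteroid's rotation at ~55 m"),
    Phase("touch_and_go", 0.0, "contact at ~0.1 m/s, sample, then back away"),
]

def next_phase(current: str) -> Optional[str]:
    """Return the name of the phase that follows `current`, or None at the end."""
    names = [p.name for p in TAG_SEQUENCE]
    i = names.index(current)
    return names[i + 1] if i + 1 < len(names) else None
```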

Sunday’s touchdown was not the end of the NFT’s spacefaring career. An updated and upgraded version of the navigation system will potentially be aboard the next OSIRIS mission, OSIRIS-APEX. “Next year, we’re going to start hitting the whiteboard about what we want this updated system to do. We learned a lot of lessons from the primary mission.”

Olds notes that the asteroid’s small stature made navigation a challenge, “because of all those little tiny forces I was telling you about. That caused a lot of irritation on the ground … so we’re definitely wanting to improve the system to be even more autonomous so that future ground crews don’t have to be so involved.” The OSIRIS spacecraft is already en route to its APEX target, the 1,000-foot-wide Apophis asteroid, which is scheduled to pass within just 20,000 miles of Earth in 2029. NASA plans to put OSIRIS into orbit around the asteroid to study how that close encounter with Earth affects the body’s orbit, spin rate and surface features.

This article originally appeared on Engadget at https://www.engadget.com/osiris-rex-used-a-tesla-esque-navigation-system-to-capture-45-billion-year-old-regolith-192132417.html?src=rss 

ChatGPT now supports voice chats and image-based queries

ChatGPT is getting some significant updates that will enable the chatbot to deal with voice commands and image-based queries. Users will be able to have a voice conversation with ChatGPT on Android and iOS and to feed images into it on all platforms. OpenAI is rolling out the features now. They’ll be available to Plus and Enterprise users at first, with other folks gaining access to the image-based features later.

You’ll need to opt in to voice conversations in the ChatGPT app (go to Settings then New Features) if you’d like to try them out. By tapping the microphone button, you’ll be able to choose from five different voices.

OpenAI says the back-and-forth voice conversations are powered by a new text-to-speech model that can generate “human-like audio from just text and a few seconds of sample speech.” It created the five voices with the help of professional actors. Going the other way, the company’s Whisper speech recognition system converts a user’s spoken words into text.

The image-based functions are intriguing too. OpenAI says you can, for instance, show the chatbot a photo of your grill and ask why it won’t start, get it to help plan a meal based on a snap of what’s in your fridge or prompt it to solve a math problem you take a picture of. As it happens, Microsoft highlighted the Copilot AI’s ability to solve math problems in Windows during its Surface event last week.

OpenAI is using GPT-3.5 and GPT-4 to power the image recognition features. To use ChatGPT’s image-based functions, tap the photo button (you’ll need to tap the plus button first on iOS or Android) to take a snap or choose an existing image on your device. You can ask ChatGPT about multiple photos and use a drawing tool to focus on a specific part of the image.

In a blog post announcing the updates, OpenAI noted the potential for harm. It’s possible for bad actors to mimic the voices of public figures (and everyday folks) and perhaps commit fraud. That’s why OpenAI says it’s limiting this voice technology to ChatGPT conversations for now and working with select partners on other narrow use cases (more on that in a moment).

As for images, OpenAI worked with Be My Eyes, a free app that blind and low-vision people can use to help them better understand their surroundings thanks to volunteers who hop into video calls with them. “Users have told us they find it valuable to have general conversations about images that happen to contain people in the background, like if someone appears on TV while you’re trying to figure out your remote control settings,” OpenAI said. The company noted that it has also limited how ChatGPT can analyze and make direct statements about people that appear in images, “since ChatGPT is not always accurate and these systems should respect individuals’ privacy.” It has published a paper on the safety properties of the image-based functionality, which it calls GPT-4 with vision.

ChatGPT is more effective at understanding English text in images than other languages. OpenAI says the chatbot “performs poorly” in other languages for the time being, particularly when it comes to those that use non-Roman scripts. As such, it suggests that non-English users avoid using ChatGPT to deal with text in images for now.

Meanwhile, Spotify has teamed up with OpenAI to use the voice-based technology for an interesting purpose. The former has announced a pilot of a tool called Voice Translation for podcasters. This can translate podcasts into different languages using the voices of the folks who appear on the show. Spotify says the tool can retain the speech characteristics of the original speaker after converting their voice into other languages.

To start with, Spotify is converting select English-based shows into a few languages. Spanish versions of some Armchair Expert and The Diary of a CEO with Steven Bartlett episodes are available now, with French and German variants to follow.

This article originally appeared on Engadget at https://www.engadget.com/chatgpt-now-supports-voice-chats-and-image-based-queries-144718179.html?src=rss 

New PS5 owners can grab a free game thanks to Sony’s latest offer

It’s always nice to have some options when it comes to playing games on a fresh console, so for the next month Sony is giving away a free title to anyone who purchases and activates a new PS5.

Dubbed the Upgrader Program, Sony’s latest initiative to entice potential PS5 buyers is refreshingly straightforward. In order to get a free game, users will need to purchase and activate their console before 11:59PM PT on October 20th. Once that’s done, you can just go to the PlayStation Store and redeem a specific title by tapping on a banner for this offer. That said, if you don’t already have an existing PSN account, you will need to make one as the free games come in the form of a digital download. 

The other nice thing is the selection of free titles includes a number of high-profile releases from the past few years. Here’s the full list of currently redeemable games:

Marvel’s Spider-Man: Miles Morales

Marvel’s Spider-Man: Remastered

God of War Ragnarök

Horizon Forbidden West

Ghost of Tsushima Director’s Cut

Ratchet & Clank: Rift Apart

Demon’s Souls

The Last of Us Part I

Sackboy: A Big Adventure

Returnal

Uncharted: Legacy of Thieves Collection

Death Stranding: Director’s Cut

Unfortunately, the Upgrader Program is only valid for owners in the US, and it seems that if you purchased and activated a PS5 prior to September 23, you may not be eligible for the new offer. But for those who are able to take part, this is a great way to kick off a PS5 game collection with basically no strings attached.

This article originally appeared on Engadget at https://www.engadget.com/new-ps5-owners-can-grab-a-free-game-thanks-to-sonys-latest-offer-151459735.html?src=rss 

PlatinumGames co-founder Hideki Kamiya is leaving the studio

Bayonetta director Hideki Kamiya is leaving PlatinumGames, after helping to found the company back in 2006 when it was called Seeds Inc. Kamiya was recently promoted to VP, so this move comes as a slight surprise. He recently said on social media that it was “by no means an easy decision to make.”

Kamiya still has a couple of weeks left in his post; he officially exits the company on October 12. As for the why of it all, he wrote that the move “came after a lot of consideration based on my own beliefs,” but didn’t offer further explanation. He says he’s still going to make games in his “Hideki Kamiya way,” but hasn’t announced whether he’s heading to another company, starting his own or just planning to tinker away in a garage somewhere. He’s only 52, so a complete retirement is highly unlikely.

Kamiya’s mark on gaming is massive. Most recently, he was the supervising director of the critically acclaimed Bayonetta 3. During his more than 15 years at PlatinumGames, Kamiya worked on classics like the original Bayonetta, the Wii U and Switch cult hit The Wonderful 101 and the action-heavy Astral Chain, among others. He worked at Capcom and its spin-off studio Clover before founding PlatinumGames, helming Resident Evil 2, Viewtiful Joe and Ōkami.

Kamiya has been hard at work these past few years on a superhero title internally referred to as Project GG. He was lead director on the “heroic” game, and the company has marketed it as the conclusion to his superhero trilogy, joining Viewtiful Joe and The Wonderful 101. PlatinumGames hasn’t announced if the game’s still coming or if it will poof into vaporware with Kamiya’s departure.

This article originally appeared on Engadget at https://www.engadget.com/platinumgames-co-founder-hideki-kamiya-is-leaving-the-studio-154529517.html?src=rss 

The iPhone 15 Pro version of Resident Evil Village lands on October 30

Resident Evil Village is a haunting horror romp starring a very tall and elegant vampire lady (and some other monsters, sure), and it’s heading to iPhone 15 Pro and Pro Max on October 30. It’ll hit the M1 and M2 models of the iPad Pro and iPad Air on the same day. The base game will cost $40 and its Winters’ Expansion DLC will be an additional $20.

Resident Evil Village originally came to PC, PlayStation 4, PS5, Xbox One and Xbox Series X/S in 2021, and it became a cultural touchstone for its monster-mashing storyline. The game includes werewolf creatures, a mutant fish man, a murderous cult leader and festering, zombie-like enemies, though its breakout star was Countess Alcina Dimitrescu. She’s an exceptionally tall, undead, razor-fingered villain who leads a trio of vampiric daughters, and she’s simply fantastic.

Village landed on Mac in 2022. Apple revealed the iPhone and iPad versions during its annual iPhone event on September 12, 2023, but it didn’t share a release date at the time. Capcom provided the date on its site this week. The Resident Evil 4 remake, which landed on PC and consoles this year, is also due to hit Apple’s mobile devices in 2023, but no date has been confirmed just yet.

Other games coming to the iPhone 15 Pro — thanks to the new A17 Pro chipset — include Death Stranding and Assassin’s Creed Mirage. Death Stranding is due out this year, while Mirage is scheduled to hit in early 2024.

This article originally appeared on Engadget at https://www.engadget.com/the-iphone-15-pro-version-of-resident-evil-village-lands-on-october-30-153334740.html?src=rss 

iPhone 15 Pro Max teardown reveals a mixed bag for repairability

Repairability website iFixit has published its teardown of the iPhone 15 Pro Max, and the results are a mixed bag. Local repair shops still have to deal with the company’s software-restricted “parts pairing” requirement, which means they need to order official components directly from Apple and get on the phone with a company employee before iOS will accept individual part replacements.

On the positive side, iFixit praised Apple for returning to a “dual-entry” removable glass back cover with the iPhone 15 Pro models — a feature that debuted with the standard iPhone 14 line last year. “This is a win for consumers as back glass repairs have been outrageously expensive on the high-end models until now, costing as much as $550,” iFixit said in its teardown video.

iFixit also examined the phone’s titanium frame and came away less than impressed. While noting that titanium is dirtier to produce than stainless steel and aluminum (mocking Apple’s “Mother Nature” skit in its launch event), the site also said the material scratches easily. “Unfortunately for the cool factor, we found that the color on the titanium shell scratches easily, a process that is only satisfying under the magnificent magnification of the microscope,” the teardown video said. “I could scratch this thing up all day.”

Elsewhere, iFixit found that the iPhone 15 Pro Max’s logic board appears to be the same as the one in the iPhone 15 Pro, and you have to remove the speaker and Taptic Engine to access the battery-removal tabs. Interestingly, the website also noted that the main and wide camera sensors on the iPhone 15 Pro Max appear identical to those on the iPhone 14 Pro Max, suggesting the “Tetraprism” periscope lens, which enables 5x optical zoom, is the only hardware-based camera update this year. “Any improvement in image quality has more to do with a new A17 SoC than the camera hardware itself,” iFixit said.

Dinging Apple for parts pairing appears primed to be a primary focus of iFixit’s Apple teardowns from now on. The repair advocacy website considers it a significant enough problem that it retroactively lowered the iPhone 14’s repairability score from 7 out of 10 to 4 out of 10 nearly a year after launch. “And as we’ve now come to expect, each year brings new parts pairing issues and bugs,” the video said. “This year’s edition is the LiDAR sensor, which now crashes if the sensor is swapped out. Calibration issue or not, these bugs need to be fixed, or else they might as well be paired with the logic board with a tiny Apple warning saying, ‘Hey, this phone is property of Apple.’”

Due to the parts pairing requirement, iFixit gave the iPhone 15 Pro Max a mere 4 out of 10 repairability score. “This phone won’t accept salvaged parts, it complicates at-home repair, and it won’t be any fun for your local repair tech,” the website said.

This article originally appeared on Engadget at https://www.engadget.com/iphone-15-pro-max-teardown-reveals-a-mixed-bag-for-repairability-164720796.html?src=rss 

California governor vetoes bill for obligatory human operators in autonomous trucks

California Gov. Gavin Newsom has blocked a bill that would have required autonomous trucks weighing more than 10,000 pounds (4,536kg) to have human safety drivers on board while operating on public roads. The governor said in a statement that the legislation, which California Senate members passed in a 36-2 vote, was unnecessary. Newsom believes existing laws are sufficient to ensure there’s an “appropriate regulatory framework.”

The governor noted that, under a 2012 law, the state’s Department of Motor Vehicles collaborates with the National Highway Traffic Safety Administration, California Highway Patrol and other relevant bodies “to determine the regulations necessary for the safe operation of autonomous vehicles on public roads.” Newsom added that the DMV is committed to making sure rules keep up with the pace of evolving autonomous vehicle tech. “DMV continuously monitors the testing and operations of autonomous vehicles on California roads and has the authority to suspend or revoke permits as necessary to protect the public’s safety,” his veto message reads.

Newsom, who has a reputation for being friendly to the tech industry, reportedly faced pressure within his administration not to sign the bill. The state’s Office of Business and Economic Development warned that the proposed law would prompt companies working on self-driving tech to move out of California.

On the other hand, as the Associated Press notes, California Labor Federation head Lorena Gonzalez Fletcher estimates that not requiring human drivers in trucks would cost around 250,000 jobs. “We will not sit by as bureaucrats side with tech companies, trading our safety and jobs for increased corporate profits,” Fletcher, who called autonomous trucks dangerous, said in a statement. “We will continue to fight to make sure that robots do not replace human drivers and that technology is not used to destroy good jobs.”

This article originally appeared on Engadget at https://www.engadget.com/california-governor-vetoes-bill-for-obligatory-human-operators-in-autonomous-trucks-170051289.html?src=rss 

Meta’s plan to attract young users hinges on cringe-worthy AI chatbots

Meta’s planning on unleashing a swarm of personality-driven AI chatbots to attract young users to its various platforms, as originally reported by The Wall Street Journal. The first of these bots could launch as early as this week, with rumors persisting that one will get announced during Meta’s Connect conference on Wednesday.

It looks like these bots won’t be tied to one particular platform under Meta’s umbrella and should launch across a variety of its services, including Instagram, Facebook and WhatsApp. WSJ says that Meta employees have been testing the generative bots for a while. The bots are primarily being released to boost chat engagement, though some may offer productivity features for tasks like coding.

These AI chatbots are stuffed with personality to keep the young (and young at heart) entertained. Specifics remain vague, but WSJ got a look at some internal documents that detail an AI called “Bob the Robot” that’s loosely based on Bender from Futurama. This bot is a self-described “sassmaster general” with the internal documents referring to it as a “sassy robot that taps into the type of farcical humor that is resonating with young people.” As a note, Futurama premiered almost 25 years ago, long before many of those farcical humor-loving young people were even born.

There’s also a bot called “Alvin the Alien” that reportedly pries users for personal information in its quest to understand humans. “Your species holds fascination for me,” an internal report has it saying. “Share your experiences, thoughts and emotions! I hunger for understanding.” One employee noted in the memo that users “might fear this character” as it seems like it’s “purposefully designed to collect personal information.” The company has been famously squeaky-clean regarding privacy violations in the past, so this should cause no concern.

Meta’s been trying to court younger users for a while now, particularly since the meteoric rise of TikTok. That app has overtaken Instagram in recent years, and CEO Mark Zuckerberg wants that market share back, telling investors during a conference call in 2021 that the company would retool its “teams to make serving young adults their North Star rather than optimizing for the larger number of older people.” So it looks like there won’t be a chatbot that complains about participation trophies or Bud Light or whatever.

WSJ suggests that dozens of these chatbots are on the way, referred to internally as Gen AI Personas. They’ll also pop up in metaverse applications in addition to standard social media services. Reports also indicate that Meta’s prepping a toolset for celebrities to allow them to create their own AI chatbots to interact with fans.

Of course, Meta’s not the first social media company to court youngsters with personality-filled chatbots. Amazon’s prepping an Alexa-powered voice chat service for kids. Snap also launched the My AI service back in February and it has been used by over 150 million people since that release. Despite the success, My AI has run into some troubling issues for a product intended for children. For instance, it has chatted about alcohol and sex with users and even randomly started posting photos without consent. We’ll have to wait and see if “Bob the Robot” and his cohorts start behaving badly when they launch.

This article originally appeared on Engadget at https://www.engadget.com/metas-plan-to-attract-young-users-hinges-on-cringe-worthy-ai-chatbots-173459484.html?src=rss 

Analogue’s limited-edition transparent Pocket handhelds come in seven colors

It’s only been a few weeks since Analogue released a glow-in-the-dark Pocket console, but the manufacturer is already gearing up for yet another limited edition launch that could evoke memories of your youth. Analogue will start selling transparent Pockets, which are reminiscent of clear Game Boy Color consoles, on September 29 at 8 AM PT/11 AM ET.

You will have seven transparent colors to choose from: clear, smoke, red, blue, orange, green and purple. Each retro gaming handheld will set you back $250, $30 more than the basic versions, which are out of stock at the moment. These consoles are only available in limited quantities, though, and Analogue told us they will never be sold again. If you’re interested, you may want to be online at the exact moment the consoles go on sale, because the glow-in-the-dark edition sold out in mere minutes.

Analogue’s Pocket handheld can play Game Boy and Game Boy Advance cartridges out of the box and even supports original accessories for Nintendo’s handhelds. You can also connect it directly to a Game Boy for multiplayer gaming if you still have one. But Game Boy cartridges aren’t the only ones you can play on a Pocket: Analogue also sells Game Gear adapters, so you can relive your ’90s gaming experience. TurboGrafx, Neo Geo and Lynx adapters were also announced long ago, and will likely arrive one day… one day.

This article originally appeared on Engadget at https://www.engadget.com/analogues-limited-edition-transparent-pocket-handhelds-come-in-seven-colors-150049816.html?src=rss 
