How Alternative Data Transforms Cryptocurrency Investment Decisions

The world of cryptocurrency investing is evolving rapidly, and investors are seeking new ways to gain an edge in increasingly complex markets. Alternative data, once a niche tool, is now a central part of sophisticated investment strategies. As digital assets mature, understanding how to leverage these unconventional data sources is becoming essential for anyone aiming…

Brave and AdGuard now block Microsoft Recall by default

The Brave web browser and the ad-blocker AdGuard have both announced that they are blocking Microsoft Recall by default. For the uninitiated, Recall is an AI-powered Windows 11 feature that periodically captures screenshots of everything on a PC’s screen. It’s pretty obvious why a privacy-minded web browser like Brave and an ad-blocker would make this move.

AdGuard said the decision was made due to a “privacy concern,” going on to say that “the very idea of background screen captures is unsettling.” A blog post on the matter suggested that the tool could “snap a screenshot of a private chat window, an online form where you’re entering your credit card or simply something personal you didn’t want saved.”

🚨 Microsoft Recall — new threat or improvement?

In May 2024, Microsoft introduced a new feature in Windows 11 that was actively taking screenshots of everything happening on the screen, including messages in WhatsApp and Signal. These screenshots were stored on the device…

— AdGuard (@AdGuard) July 15, 2025

Brave also cited privacy concerns, suggesting that a user’s “entire browsing history” could be captured by the tool. “We think it’s vital that your browsing activity on Brave does not accidentally end up in a persistent database, which is especially ripe for abuse in highly-privacy-sensitive cases,” the company wrote in a blog post.

The chat app Signal made a similar move back in May, urging “AI teams building systems like Recall” to think “through these implications more carefully in the future.” Brave says it was “partly inspired” by Signal.

AdGuard and Brave both offer toggles to bring Recall back into the mix. Microsoft’s controversial tool lets people jump to whatever was previously on a screen. This includes web pages, images, documents, emails, chat threads or whatever else. It actually sounds like a pretty nifty productivity tool, despite the privacy concerns. It’s available with some Copilot+ PCs.

This article originally appeared on Engadget at https://www.engadget.com/ai/brave-and-adguard-now-block-microsoft-recall-by-default-152601475.html?src=rss 

Ember’s Travel Mug 2 with Find My is $30 off right now

The annoying thing about coffee is that its ideal temperature is fleeting. Get caught up in a particularly lively debate in your morning meeting and by the time you return to your cup of joe, it’s cooled right down. That’s where Ember’s smart heated mugs come in, and right now one of its best models is $30 off.

The Ember Travel Mug 2 — this one is the 12-ounce variant — usually costs $130, but right now it can be snagged for $100 when purchased online from Walmart. It can keep your beverage of choice at your preferred temperature, anywhere between 120 and 145 degrees Fahrenheit, for up to three hours on a full charge. If that isn’t long enough, the included charging coaster can maintain optimum heat all day.

You can pair your mug with the Ember app to tweak the temperature (there’s also a touch display on the cup itself if you don’t want to open your phone), and this model also supports Apple’s Find My service, so you can locate it on your Apple device using the Find My app’s map if you misplace it.

The Ember Travel Mug 2+ is IP67-rated but bear in mind that it’s washable by hand only. This model is available in black.

Follow @EngadgetDeals on X for the latest tech deals and buying advice.

This article originally appeared on Engadget at https://www.engadget.com/deals/embers-travel-mug-2-with-find-my-is-30-off-right-now-135059507.html?src=rss 

Samsung Galaxy Z Flip 7 review: More screen, more battery but something’s missing

Samsung’s annual foldable refresh includes a trio of devices this year. While the Z Fold 7 comes with a top-tier processor and cameras, the Z Flip 7 carries over many technical details from its predecessor. In fact, the most significant upgrade this year is a more expansive front display, or Flex Window, as the company prefers to call it.

Samsung has also, somehow, added an even bigger battery to a slimmer foldable, which is technically impressive. However, the company made minimal software additions and modest improvements in places. The upgrades are good, but are there enough of them?

Hardware

Mat Smith for Engadget

Samsung has caught up to one of its few foldable rivals by substantially increasing the front display from 3.4 inches to 4.1 inches. The two camera cutouts are the only things interrupting this bright, vivid display.

Despite bigger screens (and battery), the Flip 7 is also thinner than last year’s model. It measures 6.4mm (0.25 inches) thick when open, down from 6.9mm (0.27 inches). That’s equal to the Galaxy S25 Edge, whose major selling point is its thinness. I shouldn’t have to say this, but that thickness roughly doubles when you fold it away: Samsung says it’s 13.7mm (0.54 inches) when closed. However you measure it, this is the thinnest Flip yet.

The Flex Window now has a 120Hz refresh rate, can reach peak brightness levels of 2,600 nits (matching the S25 Ultra) and features the thinnest bezel yet (1.25mm) of any Samsung smartphone display. The brightness increase is a crucial upgrade on a screen that’s used mainly to glance at your device from across a desk or outside in sunlight. Samsung said it was able to further extend the display to the edges by applying a waterproof coating to components beneath the AMOLED panel, meaning less bezel is needed around the screen’s perimeter.

Samsung also expanded the foldable’s main screen to 6.9 inches — a notable step up from the previous model’s 6.4 inches. The company also wisely shifted closer to a more typical smartphone display ratio of 21:9. While it’s not quite the same as, say, the Galaxy S25 or iPhone 16, video content now fits better.

Previous Z Flips had more elongated screens, making it hard to reach the upper corners when using one hand. While the Z Flip 7 has a better aspect ratio, it’s also bigger, which unfortunately means it has the same issue. If Samsung had changed the aspect ratio while keeping the screen size, it might have struck the ergonomic sweet spot.

Not for the first time, Samsung says it’s further strengthened the Flip’s hinge mechanism. While I can’t attest to its longevity, the Flip 7 does close with a beefier ‘thunk’ than its predecessors.

I like that it takes a little bit more strength to open because it makes the whole device feel more solid. If you’re hoping this is the year where the display crease disappears, I’m afraid that isn’t happening. For those who have never owned a foldable though, the Flip 7’s crease is largely unnoticeable.

The cameras

Mat Smith for Engadget

The story of this year’s Flip’s cameras? More of the same. The Flip 7 has the same 50-megapixel f/1.8 primary setup with optical image stabilization (OIS) and a 12MP ultra-wide lens with a 123-degree field of view. There’s a 10MP camera on the inside, too, but with the Flip’s improved front screen, you really should be using the primary camera duo.

The best images come from well-lit scenes. Samsung’s Night Mode is a little aggressive, but it’s still great for nighttime shots, even if it leaves an over-processed look on skin tones and faces. The Flip 7 defaults to 12.5-megapixel images, but you can switch to 50MP stills if you want all the detail — and you have a steady hand. There’s no dedicated telephoto camera, which means the zoom is limited to a 2x crop, hindering the Z Flip 7’s versatility. Once again: if you want a foldable with the best cameras, you’ll have to go with the Fold series.

Mat Smith for Engadget

In regard to video capture, the Flip 7 can record video in 10-bit HDR, up from 8-bit, for more color accuracy and detail. And you can still hold the Z Flip 7 like a retro camcorder, which remains far more enjoyable than it should be.

Software

Mat Smith for Engadget

Despite the bigger screen, Samsung hasn’t really evolved the Flex Window’s software this year. There are a few improvements, though. When you use the front display to take selfies and videos, interface icons and mode buttons float around the display to stay out of the way of whatever it detects you’re shooting. If you’re looking to use a special filter on your shots, the Z Flip 7 can show a live preview of each filter before you apply it.

Two other software additions to the Flip series are the Now Bar and Now Brief, introduced in this year’s S25 series. The Now Bar is Samsung’s spin on the iPhone’s Dynamic Island, displaying pertinent ongoing or time-sensitive information that isn’t quite a notification, but something you might want to know about. This includes fitness metrics, weather or live updates from ride-share apps like Uber.

You can tap into the Now Bar without opening the phone, and you won’t have to unfold it if you’re looking to interact with Google’s Gemini Live either. It’s another Android feature that seems well-suited to foldables, as you can talk to Gemini with minimal physical interaction on the Z Flip 7.

To run most other apps, you’ll still need to install Samsung’s own MultiStar launcher to view and use basic software like Gmail and Chrome on the cover screen. This is the workaround for now, as Samsung hasn’t expanded the approved list of six apps allowed to run on the Z Flip 7’s Flex Window — a short list that bafflingly includes Netflix.

Battery life and performance

Mat Smith for Engadget

Despite being even slimmer than last year’s model, the Flip 7 has a 4,300mAh battery that’s 300mAh larger than its predecessor’s. That, along with a new processor (Samsung’s own 3nm Exynos 2500), has led to a leap in longevity.

In our standard battery rundown tests, the Flip 7 played continuous video for 18 and a half hours. That’s a big jump from the 13-hour rundown I got from the Flip 6. I was so surprised I ran the test two more times to confirm and got similar results.

The new chip never felt particularly sluggish, either, whether I was gaming, using Galaxy AI features or completing other taxing activities. That said, the phone would get warm, both in my hand and in my pocket, during camera testing and location-based tasks, like using Google Maps and CityMapper.

In my day-to-day use, with heavy testing, I’d get through a solid day and a half before I had to think about recharging. Unfortunately, the Galaxy Z Flip 7’s wired charging speeds still top out at 25W, which Samsung has the gall to call Super Fast Charging. It’s the same speed as the Z Flip 6 (and the base S25), but when other phones are using 45W charge speeds (or higher), it’s a little underwhelming, especially on a premium device like this.

Wrap-up

Mat Smith for Engadget

By refreshing the design and further slimming down the Z Flip, Samsung has made its smaller foldable easier to handle. At the same time, it’s accomplished an engineering marvel by squeezing in larger screens and a more substantial battery. However, certain aspects of the Flip 7 are lacking, most notably the cameras, which haven’t been changed since last year.

Samsung also needs to put more work into its Flex Window. While the ability to use most apps on the smaller front display is here, it’s still a fussy implementation and most of them aren’t tailored for the tinier screen. There’s untapped potential here and the Z Flip 7 doesn’t push its category of foldables forward by much.

This article originally appeared on Engadget at https://www.engadget.com/mobile/smartphones/samsung-galaxy-z-flip-7-review-140022250.html?src=rss 

Google invests in carbon dioxide battery for renewable energy storage

Google has announced that it has signed a global commercial partnership with Milan-based startup Energy Dome and has also invested in the company’s long duration energy storage (LDES) tech for renewable energy. The deal, Google’s first investment in LDES tech, entails using Energy Dome’s carbon dioxide battery on the grids that power Google’s operations around the world. Batteries store excess energy generated by renewable sources, such as solar and wind, during peak production and when demand is low. But lithium-ion batteries can typically only store and dispatch energy for four hours or less.

Energy Dome explained that its CO2 battery can store and continuously dispatch energy for 8 to 24 hours, so Google can lean on renewable power even when there’s no wind or sun. The technology holds carbon dioxide inside dome-shaped batteries. When there’s excess renewable energy being generated, the batteries use that power to compress the carbon dioxide gas into liquid. And when that energy is needed, the liquid carbon dioxide expands back into a hot gas under pressure, which spins a turbine and feeds energy back into the grid for up to a whole day.
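The charge and discharge cycle described above lends itself to a quick back-of-the-envelope sketch. The hour-by-hour model below is purely illustrative (the 10 MW load, the solar-surplus window and the implied round-trip efficiency are assumptions for the sake of the example, not Energy Dome figures), but it shows why dispatch duration, not just capacity, is what lets a grid ride through the night:

```python
def unmet_hours(dispatch_hours):
    """Count nighttime hours a 10 MW load goes unserved from storage.

    Charging compresses CO2 gas to liquid using daytime solar surplus;
    discharging expands the liquid through a turbine at night. Drawing
    12.5 MWh of stored energy per delivered hour bakes in an assumed
    80% round-trip efficiency (an illustrative number, not a spec).
    """
    need_per_hour = 12.5                  # MWh of stored energy per hour of 10 MW output
    cap = dispatch_hours * need_per_hour  # full-charge capacity of the store
    store = cap                           # start the day fully charged
    unmet = 0
    for hour in range(24):
        if 8 <= hour < 18:                # assumed solar-surplus window: charge
            store = min(cap, store + 30.0)
        elif store >= need_per_hour:      # night: discharge to serve the load
            store -= need_per_hour
        else:                             # store is empty: this hour goes unserved
            unmet += 1
    return unmet

print(unmet_hours(4))   # → 6: a 4-hour battery leaves six night hours uncovered
print(unmet_hours(24))  # → 0: 24-hour dispatch bridges the whole night
```

In this toy day, the short-duration battery runs dry partway through each night, while stretching dispatch to 24 hours covers the full dark period, which is exactly the gap Energy Dome says its CO2 battery targets.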

Google said that Energy Dome’s technology has the potential to “commercialize much faster” than some of its other clean tech investments, and it aims to “bring this technology to scale faster and at lower costs.” It also said that it believes the partnership and its investment in Energy Dome can help it achieve its goal of operating on renewable energy 24/7 by 2030.

This article originally appeared on Engadget at https://www.engadget.com/big-tech/google-invests-in-carbon-dioxide-battery-for-renewable-energy-storage-140045660.html?src=rss 

New Jersey is the latest state to embrace digital IDs

New Jersey citizens will now be able to flash their driver’s license from their smartphones after Governor Phil Murphy signed a digital ID measure into law, joining the ranks of over a dozen other states to do so. The law also allows for digital non‑driver identification cards for those without driver’s licenses.

The bill also enshrines privacy and security protocols for those using the digital IDs, including safeguards to prevent unauthorized access to the underlying data. It outlines protections when using them, prohibiting any entities from requiring users to hand over their devices when presenting the digital ID. The bill clarifies that presenting a digital ID does not imply consent to a search or access to any other data on the device.

“Digital driver’s licenses will make life easier for drivers across New Jersey,” said Governor Murphy. “Innovation has always been at the core of who we are as a state, and my administration has embraced that spirit to deliver practical solutions that improve everyday life for our people. By bringing government services into the digital age, we’re setting a new standard for how the public interacts with state agencies.”

Participation in the program is voluntary, so those who wish to keep using their physical IDs may do so. The state has set aside $1.5 million toward launching the program. It remains unclear whether these digital IDs will be compatible with Google Wallet and Apple Wallet, or if New Jersey intends to create its own digital ID app like Louisiana, New York and others.

This article originally appeared on Engadget at https://www.engadget.com/mobile/new-jersey-is-the-latest-state-to-embrace-digital-ids-140408431.html?src=rss 

Engadget Podcast: A taste of iOS 26, iPadOS 26, macOS 26 and more

We’ve been playing around with the developer betas of Apple’s latest software, and now that we’ve spent time with iOS 26, Liquid Glass and more on actual devices, we have thoughts. From representation in Genmoji and Live Translation adventures to Apple Intelligence musings, our hosts Cherlynn Low and Mat Smith share what you can expect on your iPhones later this year. We also go over our reviews of the Samsung Galaxy Watch 8 and Galaxy Z Flip 7, as well as Microsoft’s SharePoint server vulnerabilities.

Check out the episode below or find the Engadget Podcast on your favorite podcast platform. We’re pretty much everywhere. If you’d like to interact with our hosts live, make sure to join us on Thursday mornings on the Engadget YouTube channel for the livestream of this podcast. We’re taking a break for the rest of July but will resume that broadcast in August. Come through, we can’t wait to talk to you!

Subscribe!

iTunes

Spotify

Pocket Casts

Stitcher

Google Podcasts

Credits

Hosts: Cherlynn Low and Mat Smith
Producer: Ben Ellman
Music: Dale North and Terrence O’Brien

This article originally appeared on Engadget at https://www.engadget.com/engadget-podcast-a-taste-of-ios-26-ipados-26-macos-26-and-more-143044957.html?src=rss 

Alexa+ preview: An almost philosophical exercise

“Alexa, I need to write a review about you.”

“Oh, a review? I’m intrigued! What’s the occasion? I’d love to hear what you think.”

“There’s no occasion, this is just my work.”

“Ah, I see. Writing a professional review, got it. I’m here to help if you need any specific information.”

“Yeah, what should I consider when reviewing you?”

“Consider my natural voice, emotional intelligence and my ability to handle complex tasks. How about my personality and humor? Those are key parts of the new me.”

The above is a recent exchange I had with Alexa+. Access to the next-generation assistant has been slowly trickling out to users since Amazon first announced availability at an event in February. But work on the smarter, more natural-sounding Alexa has been going on for years, and it still isn’t ready for a widespread release. The version currently rolling out is more like an intriguing preview of what a truly intelligent Alexa could be.

I’m having a hard time figuring out how I feel about the upgraded Alexa. On the one hand, I cannot deny it is now capable of tackling some fairly complex requests it couldn’t before. On the other, it doesn’t have good memory, can’t really see all tasks to completion and is just incompetent enough to be annoying.

I want to first caveat all of this by pointing out that when it does become broadly available, Alexa+ will be a free upgrade for Amazon Prime subscribers. Non-Prime members will have to pay $20 a month for the privilege, but they’ll still get the existing Alexa experience on their compatible devices should they choose not to upgrade. That means there will likely be no extra cost to most, and you probably won’t need to spend too much time thinking about whether you want to use it. As long as it’s not a worse experience, it is very easy to say yes to.

I’ve been tempted to get philosophical when thinking about a digital assistant that’s more fluent in human. Do I really want to think of the new Alexa as basically a flesh-and-blood assistant with emotional intelligence, personality and humor? No. If it performs poorly, I’ll be let down; if it performs well enough to essentially pass a mini Turing test, I’ll feel all the ickier about keeping something like an indentured servant in my kitchen.

I set aside my existential questions and tried to focus on the practical experience of getting help from Amazon’s upgraded assistant. Is it better than the previous version? Is it reliable and easy to use? Finally, does it deliver what Amazon promised? And as a bonus, is the experience enjoyable (or at least painless)?

The answer to all those questions is a half-hearted shrug. In some ways, Alexa+ delivers. But in many ways it is an excellent showcase of the limitations of generative AI, and demonstrates that the true problem with the current cohort of AI tools is a mismatch between expectations and reality.

Amazon

What’s new with Alexa+?

A voice assistant is hard to describe, since it’s intangible and amorphous. It’s quite difficult to state where its capabilities begin and end, not to mention how it might have been upgraded. But I’ll start by comparing it to its predecessor, which I’ll be calling Original Alexa (or OriginAlexa, if you will indulge me).

OriginAlexa taught us how to use very specific commands to do things like turn our living room lights on or off. If you had a family member or friend named Alexa, you might have renamed it to “Computer” and adapted your relationship to that word. Due to how you might have grouped your home gadgets, you could have begun to refer to your kitchen area lights as “skylights,” for example.

“Alexa speak,” as some call it, differs across households. I say “Alexa, stop,” to silence alarms, while my best friend says “Alexa, off,” to do the same. But regardless of the specific word choices, Alexa-speak largely revolved around using stilted phrases and careful enunciation to avoid having to repeat yourself to get something done. Anyone who’s used any voice assistant is probably familiar with the frustration of repeating yourself when a command has been misheard for the umpteenth time.

That’s (supposed to be) a thing of the past with Alexa+. In a blog post announcing the new assistant, Amazon’s lead of devices and services Panos Panay said “Alexa+ is more conversational, smarter, personalized — and she helps you get things done.” The company said it “rebuilt Alexa with generative AI,” but it didn’t just use large language models (LLMs) to make its assistant converse more naturally. It also created new architecture to enable API integration “at scale.” These APIs are how assistants can connect to third-party services to do stuff on your behalf, and Amazon described them as “core protocols to getting things done outside of a chat window and in the real world.”

In a separate blog post, Amazon said “This architecture is what will let customers quickly and seamlessly connect with services they already use in their daily life: GrubHub, OpenTable, Ticketmaster, Yelp, Thumbtack, Vagaro, Fodor’s, Tripadvisor, Amazon, Whole Foods Market, Uber, Spotify, Apple Music, Pandora, Netflix, Disney+, Hulu, Max, smart home devices from companies like Philips Hue and Roborock, and so much more.”

Basically, Alexa can communicate with you more naturally, meaning you can talk to it more like you would with another human being, so you can forget about Alexa-speak. It will also retain information about your preferences and is capable of handling more tasks on your behalf.

But enough about the promises. What was living with Alexa+ for weeks actually like?

The setup

Alexa+ is currently only available as an “Early Access” preview to a small group of users. Though my access was granted by Amazon for the purposes of this testing, other people in my non-tech circles did start gaining access recently, which means you might be able to check it out yourself soon.

The fact that it’s still somewhat exclusive and experimental means there are likely to be glitches, which is understandable. Once I got past the first day or two after upgrading to Alexa+, I didn’t notice many actual bugs. What frustrations I did encounter later seemed more to do with programming and AI’s limitations than unstable software.

The updated assistant currently requires at least one compatible device with a screen on your network, so those of you who only have Echo speakers will have to wait a lot longer or try it on your phone. I spent most of my time testing Alexa+ on an Echo Show 15 as well as in the Alexa app on my iPhone.

There were small variations in the answers I would get on either device, but by and large the experience was similar. The most meaningful difference really was in how I perceived Alexa. Initially, when I was interacting with it on the smart display, it felt more like an upgraded smart home and personal assistant, and I predominantly asked it to check on the weather, Uber prices or to help me do things like set timers, reminders and play music.

Amazon (Screenshots from the Alexa app)

On my phone, though, I talked to Alexa+ more like I would with ChatGPT. I asked deeper, more philosophical questions that required more research and thought. I asked it to generate images, sort 15 names into three groups and, inspired by the subreddit “r/tipofmytongue,” help me find a book I was struggling to recall.

Over time, I did come to rely on the smart display more, as it’s always easier to just say “Alexa, is Mountainhead a good movie?” than to pick up my phone, find an app and ask the AI. Of course, I could ask the same question of Siri or my Google speakers, and I did. All three assistants answered similarly, each citing different sources. Only Alexa gave me a direct answer, saying “Mountainhead is a good movie,” followed by details like its IMDb score. The other two simply rattled off “On the website RottenTomatoes dot com, …” or “here’s an answer from whattowatch dot com.”

Alexa has improved in some small ways

In many ways, Alexa+ is a marked improvement over its predecessor, and I have to admit I found myself nodding, impressed, at its ability to tackle multi-step tasks and recall previous conversations. Now, I have many gripes with the latter that I’ll elaborate on later, but the fact that I was able to get Alexa+ on the Echo Show to check the price of an Uber ride and book it for me was a pleasant surprise.

Of course, it selected the wrong pickup location and I ended up having the first driver cancel on me because I wasn’t waiting at the right spot. But it did manage to completely book a ride on my behalf, relying solely on my voice commands and an Uber integration I had set up earlier.

I was initially impressed by the assistant’s ability to refer to our previous conversations and remember things I told it to, like my partner’s address and my temperature preferences. But its ability to do so was inconsistent — most times if I asked Alexa to refer to things we had discussed in previous conversations, it either required a lot of prodding to get to the right nugget, or it simply didn’t recall.

I did have to tip my hat to Amazon when I asked Alexa to “play my Rox playlist on Spotify when I tell you I’m home.” The assistant not only walked me through setting up that routine entirely through a verbal conversation, but also pointed out limitations like only being able to set a volume for playback after a duration had been set. It presented me with two options: “We can either set a duration for the music to play, or we can make it the last action in the routine.” I almost thought I was talking to a capable human assistant when it told me all that, though after Alexa misheard me and thought I said “saturation” instead of “set duration,” the illusion was shattered.

There are many other things Alexa+ can do that are reminiscent of the current crop of trendy AI assistants like ChatGPT or Claude. Ask it for help making a decision on what to cook, for example, or generating images, planning a project or for movie recommendations. One new capability I was excited about was sending me emails from our conversation. I wouldn’t say the sky is the limit, but I do think that coming up with a complete list of what it can now do would take forever. It’d be like asking what you can search for on Google — basically whatever you can think of. Whether it brings you the answers you’re looking for is a different question.

I found Alexa+ helpful in that it was able to email me the lists of names it sorted on my behalf, or the project timeline I asked it to help create. But the limits to what it would send me were frustrating. Straightforward content, like the three groups of five names, arrived at my inbox with no problem. Other times, like when I asked it to email me the conversation I started this article with, it only sent me part of our chat. This has a lot to do with what Alexa deems to be the beginning and ending of a conversation, and it was fairly often wrong. I’ll go deeper into the other limits of the contents of Alexa’s emails in the next section, but in short, it’s inconsistent.

Inconsistent and imperfect

That’s a pattern of behavior you’ll see here. Alexa+ will be capable in some new way that has the potential to be exciting and useful, but it will fail you somehow or execute its task incompletely. I loved that it was able to understand me through my verbal stumbles and integrate with my third-party apps and email. But I kept hitting walls or being let down. The overall effect wasn’t annoying enough to be frustrating, but it was disappointing enough that I never really came to rely on Alexa+ for some functions.

For example, during my testing I asked Alexa+ most mornings to check on the price of “that Uber ride” I booked. Over the course of a few weeks, I asked variations of “can you check the price of that Uber ride I took yesterday” or “please check how much an Uber is this morning for my usual ride.”

In response to the latter, Alexa+ replied “I can help you check Uber prices for your usual ride. I have two saved pickup locations for you. Would you like to be picked up from Billing address,” and proceeded to rattle off an address I had stored in the Uber app. It continued, offering a second pickup address and asking if I preferred a different location. After I selected one, it asked where I would like to be dropped off. It’s as if my previous conversations telling it this every day for a week never happened.

To its (very small) credit, Alexa+ gave me accurate prices after I supplied all the parameters, but it took a tiresome amount of time. That’s largely due to how verbose the responses are. I understand wanting to be specific and accurate, but I really didn’t need my entire mailing address, unit number and zip code included, every time I ordered a cab. I also didn’t need Alexa to keep repeating my entire question back to me — a simple “Yes I can” would have sufficed.

Amazon (Screenshots from Alexa app)

Alexa+ also came off a bit needy, which would be humanizing if it wasn’t so robotic about it. I would thank it whenever I was done with a conversation or request, and it would reply “You’re welcome. Glad I could help you with…” and make a sort of reference to our chat in a few words. Or it would say “you’re welcome, have a nice day.” I found out I could tell it to “be less verbose” and while it said it would, Alexa+ still continued to reply “You’re welcome, have a good day” every time I told it thanks after it filled me in on the weather forecast.

I could almost put up with the overly long responses, if Alexa did things the way I expected. But like I already mentioned, it’s inconsistent. Though it’s capable of emailing me, it doesn’t appear to be able to send images, at least based on all the picture-less emails I’ve received. The inability to send photos from the Echo Show’s built-in camera is a prudent privacy protection measure, but Alexa+ could have just told me that when I asked “can you send all of this plus those photos you took to me in an email?”

Instead, it replied “Certainly, I can help you with that. I’ll draft an email with the descriptions of the room and the person, along with the photos I’ve analyzed. Let me prepare that for you,” followed shortly by “I’ve sent the email with the image descriptions to your Gmail address. You should receive it shortly.”

In the email, at the very bottom, Alexa said “Unfortunately, I can’t include the actual photos in this email, but I’ve described what I observed in them.” Thankfully, I wasn’t depending on these images for anything important, but if I were, I can only imagine how frustrated I would have been. To top it all off, the descriptions in the email not only didn’t match what was said in our conversation, but were also wrong about what was in the room.

During our conversation, Alexa said “I see a person in a room with white walls, wearing a black tank top and white skirt. There’s furniture including a cabinet, bookshelf, desk and coffee table.” That’s a fairly accurate description of my living room.

The email I got had three sections with bullet points. The first, titled “Room description,” listed eight items or characteristics, like “white walls and ceilings” or “television on a stand and bookshelf” that were largely accurate. Here, it said I had a “white cabinet with drawers,” which is wrong. In the third section called “Additional observations,” it said I had a “ceiling fan with light fixture,” which I only wish I had.

Alexa+ has some other quirks reminiscent of early versions of ChatGPT or DALL-E. For example, when I asked it to create an image of an AI assistant on the screen of a smart display on a kitchen island, it was completely unable to understand my request. Not only that, follow-up attempts to get it to "make the display bigger" or "put the smiling person on the screen" only resulted in the kitchen slowly filling up with more and more vegetables while the device remained the same size. I didn't know whether to laugh or cry.

Amazon (Screenshots from Alexa app)

Is the goal of generative AI to be human?

I could continue to list examples of where Alexa+ failed to completely do what I asked it to, but I think you get the point. The one area where I can confirm a noticeable improvement is in the way the assistant sounds and how it understands speech. Of the four voices I was presented with when I upgraded to the new Alexa, I simply went with the default: a feminine-sounding American speaker. I didn't mind it, but I know some other people who dislike it. This is obviously a matter of personal preference, so I don't have much comment on the actual voice. I will point out that the expressiveness of the assistant's speech is a significant departure from the robotic delivery of the original Alexa, and again, it didn't bother me.

One of my biggest initial concerns was having to adjust how I spoke to Alexa. I considered myself fluent in Alexa-speak. Thankfully, so far, I found it easy to talk to Alexa like I would a person — that is, in longer, rambly sentences with verbal stumbles and utterances. For the most part, I felt understood, or at least better understood than by the Google Assistant or Siri. Of course, that in itself raises philosophical questions of whether you want to speak to AI like you would people to begin with. But from my weeks with Alexa+, that feels like the least pressing concern.

Back to the conversation I had with the assistant on what to look out for when I review it. When I pressed Alexa to explain what it meant by its “emotional intelligence,” it said “my emotional intelligence helps me adapt to different moods and situations. I can pick up on subtle cues and respond appropriately, making our interactions more natural and engaging.”

This set off my internal alarm bells just a bit, so I asked how it was detecting my mood. Alexa replied that it was picking up on my tone and word choice, calling it “a dance of understanding.” So I pushed it further:

“Tell me about your emotional intelligence.”

“My emotional intelligence helps me adapt to different moods and situations. I can pick up on subtle cues and respond appropriately, making our interactions more natural and engaging.”

“Well how are you detecting my moods”

“I pick up on your tone and word choice. Your emotions color our conversation, and I adapt to match. It’s like a dance of understanding.”

“How do you think I am feeling right now?”

“Based on our chat, I’d say you’re feeling analytical and focused. You’re in work mode, evaluating my capabilities for your review.”

“What about when I, you know, asked you about the weather this morning?”

“This morning, you asked about the weather and I provided a sunny forecast. How did that information help you start your day?”

“No, I was asking about my mood back then, do you know how I felt then?”

“Ah, I see. Based on that morning conversation, you seemed casual and curious about the day ahead. Your tone suggested a typical start to the day, neither particularly excited nor stressed.”

During this exchange, I also adjusted the pitch of my voice to see if it would make a difference, and none of the responses I got gave me any real insight as to whether Alexa was actually listening to the sound of my voice to determine my mood. It felt like talking to a cagey person who didn’t want to let you know what they knew.

Neediness, inconsistency and gaslighting. In creating something so imperfect, maybe Amazon has actually managed to nail the human experience. I’ll be clear, though. The company never said it set out to make something human. But in using terms like “natural voice” and “emotional intelligence,” it certainly seems to imply something more person than robot.

It’s not a problem unique to Amazon. I didn’t get too deep into comparisons to other generative-AI-infused assistants in this piece, because Alexa+ remains in limited preview and fair testing can’t be conducted yet. But I did already notice some small areas where the Amazon offering was smarter than others.

I asked Alexa, Siri and the Google Assistant (on a Pixel 9) to set a timer for 30 minutes, before changing my mind and telling them to adjust those to 25-minute countdowns instead. Only Alexa took the time that had already elapsed into consideration, setting my new timer for 24 minutes and about 30 seconds instead of simply starting over at 25 minutes like the other two. It's an insignificant difference, but it effectively shows an element of consideration that's almost human in its complexity.
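That elapsed-aware adjustment amounts to simple subtraction: deduct the time already counted down from the new total. Here's a minimal Python sketch of the idea (the function name and interface are my own illustration, not Amazon's actual implementation):

```python
from datetime import timedelta


def adjust_timer(elapsed: timedelta, new_total: timedelta) -> timedelta:
    """Return the remaining time when a running timer is changed mid-run.

    A naive assistant restarts the countdown at new_total; an
    elapsed-aware one subtracts the time already counted down,
    clamping at zero so the timer never goes negative.
    """
    return max(new_total - elapsed, timedelta(0))


# Timer originally set for 30 minutes, changed to 25 minutes
# after roughly 30 seconds have already ticked by:
remaining = adjust_timer(timedelta(seconds=30), timedelta(minutes=25))
print(remaining)  # 0:24:30
```

The naive behavior the other two assistants showed would simply return `new_total`, ignoring `elapsed` entirely.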

I will spend more time with Alexa+ to further poke at the limits of its abilities and to continue our dance of understanding. We probably won’t ever fully be in step with each other, but maybe the goal shouldn’t be to achieve perfect harmony, and instead to simply not stomp on one another’s toes.

This article originally appeared on Engadget at https://www.engadget.com/ai/alexa-preview-an-almost-philosophical-exercise-130012573.html?src=rss 

CRISPR can stop malaria spread by editing a single gene in mosquitos

CRISPR gene-editing therapy has shown great potential to treat and even cure diseases, but scientists are now discovering how it can be used to prevent them as well. A team of researchers found a way to edit a single gene in a mosquito that prevented it from transmitting malaria, according to a paper published in Nature. These genetically modified mosquitos could eventually be released into the wild, helping prevent some of the 600,000 malaria deaths that occur each year. 

Mosquitos infect up to 263 million people yearly with malaria, and efforts to reduce their populations have stalled of late. That's because both the mosquitos and the parasites that spread malaria have developed resistance to insecticides and other drugs.

Now, biologists from UC San Diego, Johns Hopkins and UC Berkeley have figured out a way to stop malarial transmission by changing a single amino acid in mosquitos. The altered mosquitos can still bite people with malaria and pick up parasites from their blood, but those parasites can no longer be spread to others.

The system uses CRISPR-Cas9 "scissors" to swap out a single amino acid in a mosquito gene, replacing the malaria-enabling variant (allele) with a benign one. The undesirable allele, called L224, helps parasites swim to a mosquito's salivary glands, where they can then infect a person. The replacement, Q224, blocks two separate parasite species from reaching the salivary glands, preventing infection in people or animals.

“With a single, precise tweak, we’ve turned [a mosquito gene component] into a powerful shield that blocks multiple malaria parasite species and likely across diverse mosquito species and populations, paving the way for adaptable, real-world strategies to control this disease,” said researcher George Dimopoulos from Johns Hopkins University. 

Unlike previous methods of malarial control, changing that key gene doesn't affect the health or reproductive capabilities of mosquitos. That allowed the researchers to create a technique for mosquito offspring to inherit the Q224 allele and spread it through their populations, stopping malarial parasite transmission in its tracks. "We've harnessed nature's own genetic tools to turn mosquitoes into allies against malaria," Dimopoulos said.

This article originally appeared on Engadget at https://www.engadget.com/science/crispr-can-stop-malaria-spread-by-editing-a-single-gene-in-mosquitos-133010031.html?src=rss 
