Amazon’s Echo Show smart displays fall back to all-time lows

A few Echo Show devices are deeply discounted right now, dropping to their all-time low prices. The 2023 third-generation Echo Show 5 has fallen to $40 from $90 — a 56 percent discount. The Echo Show 8 is marked down nearly as much, with a 54 percent discount bringing its price to $60 from $130.

The third-gen Echo Show 5 is a great option if you’re looking for a simple smart home device that does all the basics your family needs. Its 5.5-inch, 960 x 480 display is perfect for checking the weather, picking a song or displaying your favorite photos. It also has a 2MP camera for making video calls or checking in on your home while you’re away. Audio from Prime Video, Spotify and more plays through its 1.7-inch speaker.

While there’s a newer Echo Show 8 available, there’s still a lot to love about the second-gen model (and not just that it skips the newer model’s $150 price tag). Its eight-inch, 1,280 x 800 display can do everything the Echo Show 5’s can, just in better quality — plus, it can stream Netflix, which its smaller counterpart can’t. The Echo Show 8 also comes with a 13MP camera with auto-framing to help you look your best on video calls.

Follow @EngadgetDeals on Twitter and subscribe to the Engadget Deals newsletter for the latest tech deals and buying advice.

This article originally appeared on Engadget at https://www.engadget.com/amazons-echo-show-smart-displays-fall-back-to-all-time-lows-114005160.html?src=rss 

Sony’s WH-1000XM5 ANC headphones drop to $330

While there are plenty of good headphones on the market, Sony’s WH-1000XM5 ANC model is really in a league of its own. Now, the temptation to pick up our favorite wireless headphones of the year has spiked thanks to an 18 percent discount, dropping Sony’s WH-1000XM5 headphones to $330 from $400. That puts them just $2 shy of the $328 all-time low they hit on Prime Day.

So what makes the WH-1000XM5 headphones so great even a year and a half after Sony released them? They offer an unmatched mix of features, including crisp, clear sound with punchy bass and up to 30 hours of battery life. The M5 comes with eight ANC mics — double that of its predecessor. Plus, the wireless headphones have an updated fit that makes the 0.55-pound device feel light and remarkably comfortable. It’s no surprise we gave them a 95 in our initial review.

Sony’s top-tier headphones also have all the controls you need without having to pick up your phone. You can use physical buttons and touch controls to change the song, make a call or change the noise mode (which can also switch automatically as you move throughout the day). The Speak-to-Chat feature will even pause your audio as soon as you start talking.

Follow @EngadgetDeals on Twitter and subscribe to the Engadget Deals newsletter for the latest tech deals and buying advice.

This article originally appeared on Engadget at https://www.engadget.com/sonys-wh-1000xm5-anc-headphones-drop-to-330-100824048.html?src=rss 

Sweeping White House executive order takes aim at AI’s toughest challenges

The Biden Administration unveiled its ambitious next steps in addressing and regulating artificial intelligence development on Monday. Its expansive new executive order seeks to establish further protections for the public as well as improve best practices for federal agencies and their contractors.

“The President several months ago directed his team to pull every lever,” a senior administration official told reporters on a recent press call. “That’s what this order does, bringing the power of the federal government to bear in a wide range of areas to manage AI’s risk and harness its benefits … It stands up for consumers and workers, promotes innovation and competition, advances American leadership around the world and like all executive orders, this one has the force of law.”

These actions will roll out over the next year, with smaller safety and security changes landing in around 90 days and more involved reporting and data transparency schemes requiring nine to 12 months to fully deploy. The administration is also creating an “AI council,” chaired by White House Deputy Chief of Staff Bruce Reed, which will meet with federal agency heads to ensure that the actions are being executed on schedule.


Public Safety

“In response to the President’s leadership on the subject, 15 major American technology companies have begun their voluntary commitments to ensure that AI technology is safe, secure and trustworthy before releasing it to the public,” the senior administration official said. “That is not enough.”

The EO directs the establishment of new standards for AI safety and security, including reporting requirements for developers whose foundation models might impact national or economic security. Those requirements will also apply to the development of AI tools that autonomously implement security fixes on critical software infrastructure.

By leveraging the Defense Production Act, this EO will “require that companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model, and must share the results of all red-team safety tests,” per a White House press release. That information must be shared prior to the model being made available to the public, which could help reduce the rate at which companies unleash half-baked and potentially deadly machine learning products.

In addition to the sharing of red team test results, the EO also requires disclosure of the system’s training runs (essentially, its iterative development history). “What that does is that creates a space prior to the release… to verify that the system is safe and secure,” officials said.

Administration officials were quick to point out that this reporting requirement will not impact any AI models currently available on the market, nor will it impact independent or small- to medium-size AI companies moving forward, as the threshold for enforcement is quite high. It’s geared specifically toward the next generation of AI systems that the likes of Google, Meta and OpenAI are already working on, with enforcement kicking in for models trained using more than 10^26 floating-point operations, a scale of compute currently beyond any existing AI model. “This is not going to catch AI systems trained by graduate students, or even professors,” the administration official said.
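For a rough sense of that scale, here’s a back-of-the-envelope sketch using the common approximation that training a transformer costs about 6 × parameters × training tokens in floating-point operations. Both the heuristic and the model sizes below are our illustrative assumptions, not figures from the executive order.

```python
# Back-of-the-envelope check against the EO's 10^26-operation reporting
# threshold. Assumes the common ~6 * params * tokens estimate of
# transformer training FLOPs (a rule of thumb, not part of the EO).

EO_THRESHOLD_FLOPS = 1e26

def estimated_training_flops(params: float, tokens: float) -> float:
    """Approximate total floating-point operations for one training run."""
    return 6 * params * tokens

# Hypothetical training runs, for illustration only.
runs = {
    "GPT-3-scale (175B params, 300B tokens)": estimated_training_flops(175e9, 300e9),
    "Hypothetical next-gen (2T params, 20T tokens)": estimated_training_flops(2e12, 20e12),
}

for label, flops in runs.items():
    status = "over" if flops > EO_THRESHOLD_FLOPS else "under"
    print(f"{label}: ~{flops:.2e} FLOPs ({status} the 1e26 threshold)")
```

By that yardstick, a GPT-3-class training run lands around 3 × 10^23 operations, a few hundred times below the reporting line, which squares with officials’ claim that no current model is covered.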

What’s more, the EO will encourage the Departments of Energy and Homeland Security to address AI threats “to critical infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks,” per the release. “Agencies that fund life-science projects will establish these standards as a condition of federal funding, creating powerful incentives to ensure appropriate screening and manage risks potentially made worse by AI.” In short, any developers found in violation of the EO can likely expect a prompt and unpleasant visit from the DoE, FDA, EPA or other applicable regulatory agency, regardless of their AI model’s age or processing speed.

In an effort to proactively address the decrepit state of America’s digital infrastructure, the order also seeks to establish a cybersecurity program, based loosely on the administration’s existing AI Cyber Challenge, to develop AI tools that can autonomously root out and shore up security vulnerabilities in critical software infrastructure. It remains to be seen whether those systems will be able to address the concerns of misbehaving models that SEC head Gary Gensler recently raised.

AI Watermarking and Cryptographic Validation

We’re already seeing the normalization of deepfake trickery and AI-empowered disinformation on the campaign trail. So, the White House is taking steps to ensure that the public can trust the text, audio and video content that it publishes on its official channels. The public must be able to easily validate whether the content they see is AI-generated or not, argued White House officials on the press call. 


The Department of Commerce is leading that effort and is expected to work closely with existing industry advocacy groups like the C2PA and its sister organization, the CAI, to develop and implement a watermarking system for federal agencies. “We aim to support and facilitate and help standardize that work [by the C2PA],” administration officials said. “We see ourselves as plugging into that ecosystem.”

Officials further explained that the government is supporting the underlying technical standards and practices that will lead to digital watermarking’s wider adoption — similar to the work it did around developing the HTTPS ecosystem and getting both developers and the public on board with it. This will help federal officials achieve their other goal of ensuring that the government’s official messaging can be relied upon.

Civil Rights and Consumer Protections

The Blueprint for an AI Bill of Rights that the White House released last October directed agencies to “combat algorithmic discrimination while enforcing existing authorities to protect people’s rights and safety,” the administration official said. “But there’s more to do.”

The new EO will require guidance be extended to “landlords, federal benefits programs and federal contractors” to prevent AI systems from exacerbating discrimination within their spheres of influence. It will also direct the Department of Justice to develop best practices for investigating and prosecuting civil rights violations related to AI, as well as, per the announcement, “the use of AI in sentencing, parole and probation, pretrial release and detention, risk assessments, surveillance, crime forecasting and predictive policing, and forensic analysis.”

Additionally, the EO calls for prioritizing federal support to accelerate the development of privacy-preserving techniques that would enable future LLMs to be trained on large datasets without the current risk of leaking personal details that those datasets might contain. These solutions could include “cryptographic tools that preserve individuals’ privacy,” per the White House release, developed with assistance from the Research Coordination Network and National Science Foundation. The executive order also reiterates the administration’s calls for bipartisan legislation from Congress addressing the broader privacy issues that AI systems present for consumers.

In terms of healthcare, the EO states that the Department of Health and Human Services will establish a safety program that tracks and remedies unsafe, AI-based medical practices. Educators will also see support from the federal government in using AI-based educational tools like personalized chatbot tutoring.

Worker Protections

The Biden administration concedes that while the AI revolution is a decided boon for business, its capabilities make it a threat to worker security through job displacement and intrusive workplace surveillance. The EO seeks to address these issues with “the development of principles and employer best practices that mitigate the harms and maximize the benefit of AI for workers,” an administration official said. “We encourage federal agencies to adopt these guidelines in the administration of their programs.”


The EO will also direct the Department of Labor and the Council of Economic Advisers to study both how AI might impact the labor market and how the federal government might better support workers “facing labor disruption” moving forward. Administration officials also pointed to the potential benefits AI might bring to the federal bureaucracy, including cutting costs and increasing cybersecurity efficacy. “There’s a lot of opportunity here, but we have to ensure the responsible government development and deployment of AI,” an administration official said.

To that end, the administration is launching a new federal jobs portal, AI.gov, on Monday. It will offer information and guidance on available fellowship programs for folks looking to work with the federal government. “We’re trying to get more AI talent across the board,” an administration official said. “Programs like the US Digital Service, the Presidential Innovation Fellowship and USAJobs — doing as much as we can to get talent in the door.” The White House is also looking to expand existing immigration rules to streamline visa criteria, interviews and reviews for folks trying to move to and work in the US in these advanced industries.

The White House reportedly did not give the industry an advance look at this particular swath of sweeping policy changes, though administration officials did note that they had already been collaborating extensively with AI companies on many of these issues. The Senate held its second AI Insight Forum event last week on Capitol Hill, while Vice President Kamala Harris is scheduled to speak at the UK Summit on AI Safety, hosted by Prime Minister Rishi Sunak, on Tuesday.


At a Washington Post event on Thursday, Senate Majority Leader Charles Schumer (D-NY) was already arguing that the executive order did not go far enough and could not be considered an effective replacement for congressional action, which, to date, has been slow in coming.

“There’s probably a limit to what you can do by executive order,” Schumer told WaPo. “They [the Biden Administration] are concerned, and they’re doing a lot regulatorily, but everyone admits the only real answer is legislative.”

This article originally appeared on Engadget at https://www.engadget.com/sweeping-white-house-ai-executive-order-takes-aim-at-the-technologys-toughest-challenges-090008655.html?src=rss 

Samsung’s Galaxy Z Flip5 Retro pays tribute to the iconic SGH-E700 flip phone

Samsung has unveiled the Galaxy Z Flip5 Retro, a limited edition version that pays homage to the iconic SGH-E700 (aka the SGH-E715 in the US on T-Mobile), which came out 20 years ago in 2003. It sports the same indigo blue and silver color combo as the original, along with similar pixel graphics for the clock widget on the cover screen and an exclusive cityscape-style animation on the Flex Window. It’ll be sold in Korea, several European countries and Australia, but not the US.

The SGH-E700 was Samsung’s first mobile phone with an integrated antenna and became a certified hit, selling more than 10 million units. That success elevated Samsung’s standing in the mobile phone industry at the time, helping make it the smartphone behemoth it is today. The phone was popular enough that Samsung effectively reissued it with new radios in 2007 as a nostalgia play, even though it was only four years old at the time, as Engadget noted.

The Galaxy Z Flip5 Retro will include three Flipsuit cards featuring logos from different eras of Samsung’s history, a Flipsuit case and a collector card engraved with a unique serial number, the company said. It’ll be available starting November 1 in Korea, the UK, France, Germany, Spain and Australia from Samsung’s website. 

This article originally appeared on Engadget at https://www.engadget.com/samsungs-galaxy-z-flip5-retro-pays-tribute-to-the-iconic-sgh-e700-flip-phone-073003464.html?src=rss 

New report reveals details on the three M3 chips Apple may launch Monday night

Apple is planning to debut three M3 chips at its “Scary Fast” Mac event Monday night, according to Bloomberg’s Mark Gurman — the M3, M3 Pro and M3 Max. The event is set to kick off at 8 PM ET and is expected to bring multiple hardware announcements. Gurman previously reported that the company is prepping a new 24-inch iMac which could make an appearance tomorrow, along with upgraded MacBook Pros running the new M3 series.

In the Power On newsletter, Gurman writes that the standard M3 chip is likely to sport an 8-core CPU and 10-core GPU like the M2, but with improvements to performance speed and memory. He also notes the company is testing multiple configurations for both the M3 Pro and M3 Max chips. We may see an M3 Pro with a 12-core CPU and 18-core GPU, plus a pricier option with a 14-core CPU and 20-core GPU. Meanwhile, the M3 Max could come with 16 CPU cores and either 32 or 40 GPU cores.

We won’t know anything for sure until Apple’s unusually timed October event starts tomorrow night. Thankfully, that’s not a long time to wait. Join us here to watch as it all unfolds.

This article originally appeared on Engadget at https://www.engadget.com/new-report-reveals-details-on-the-three-m3-chips-apple-may-launch-monday-night-202456989.html?src=rss 

Apple’s upgraded 2nd-gen AirPods Pro with USB-C are $50 off right now

Apple’s refreshed second-generation AirPods Pro are down to just $200 on Amazon, a discount almost as good as the one we saw during October’s Prime Day event. The deal cuts $50 off the normal price of $250. The second-generation AirPods Pro got an upgrade in September that brought improved durability and a USB-C charging port for the MagSafe case, replacing the Lightning port. While the price could dip even lower as Black Friday approaches, this is one of the best deals we’ve seen of late.

The upgraded second-generation AirPods Pro have an IP54 rating for better dust resistance than their predecessor. They also received new audio features with the release of iOS 17 that further improve the listening experience, including Adaptive Audio, Conversation Awareness and Personalized Volume. The second-generation AirPods Pro get up to six hours of battery life, or up to 30 hours with the charging case. Even before the upgrade, we counted them among the best earbuds you can get today.

Apple also introduced lossless audio with Apple Vision Pro for the refreshed second-generation AirPods Pro, which buyers will get to appreciate once they finally have the headset in their hands. Otherwise, the AirPods Pro are a top choice for use with the Apple ecosystem of devices, with features like active noise cancellation and an impressive transparency mode. At $200 right now, they’re only $10 more than they were going for on Prime Day.

If you’re looking for something with fewer bells and whistles, Apple’s third-generation AirPods are discounted too. Right now, they’re just $150 on Amazon.

Follow @EngadgetDeals on Twitter and subscribe to the Engadget Deals newsletter for the latest tech deals and buying advice.

This article originally appeared on Engadget at https://www.engadget.com/apples-upgraded-2nd-gen-airpods-pro-with-usb-c-are-50-off-right-now-182421286.html?src=rss 

Apple’s 9th-gen iPad is back to its all-time low price of $250 ahead of Black Friday

Apple’s 9th generation iPad is $80 off at Amazon right now. The discount brings the 64GB variant down to just $250 from its regular price of $330, a record low typically only seen on Prime Day. You can also snag the 9th-gen iPad with 256GB of storage for $80 off at Amazon, where it’s currently down to $400 from its usual $480. 

The 9th-gen iPad came out in 2021, but it’s still a solid tablet, especially if you’re on a budget. While its A13 Bionic chip isn’t the fastest or most powerful, it’s more than enough for basic productivity tasks, browsing and streaming. It earned a score of 86 in our review at launch, and it’s still one of the best iPads you can get that won’t break the bank.

It has a heftier build than the newer, sleeker models, with chunky bezels framing its 10.2-inch Retina display and a physical Home button with Touch ID. Apple’s 9th-gen iPad also still has a headphone jack and charges via a Lightning port. It has a 12MP ultrawide front camera and an 8MP back camera, and supports Apple’s Center Stage video calling feature.

The 9th-generation iPad comes in Silver and Space Gray, and the discount applies to both colors of the Wi-Fi-only model. It’s a great option for the casual iPad user, and the price right now can’t be beat. But if those specs aren’t quite cutting it, Amazon is also running a deal on the 10th-generation iPad, which is a step up. That model is currently $50 off.

Follow @EngadgetDeals on Twitter and subscribe to the Engadget Deals newsletter for the latest tech deals and buying advice.

This article originally appeared on Engadget at https://www.engadget.com/apples-9th-gen-ipad-is-back-to-its-all-time-low-price-of-250-ahead-of-black-friday-154710678.html?src=rss 

What the evolution of our own brains can tell us about the future of AI

The explosive growth in artificial intelligence in recent years — crowned with the meteoric rise of generative AI chatbots like ChatGPT — has seen the technology take on many tasks that, formerly, only human minds could handle. But despite their increasingly capable linguistic computations, these machine learning systems remain surprisingly inept at making the sorts of cognitive leaps and logical deductions that even the average teenager can consistently get right. 

In this week’s Hitting the Books excerpt from A Brief History of Intelligence: Evolution, AI, and the Five Breakthroughs That Made Our Brains, AI entrepreneur Max Bennett examines this puzzling gap in computer competency by tracing the development of the organic machine AIs are modeled after: the human brain.

Focusing on the five evolutionary “breakthroughs” (amid myriad genetic dead ends and unsuccessful offshoots) that led our species to our modern minds, Bennett also shows that the same advancements that took humanity eons to evolve can be adapted to help guide the development of the AI technologies of tomorrow. In the excerpt below, we take a look at how generative AI systems like GPT-3 are built to mimic the predictive functions of the neocortex, but still can’t quite get a grasp on the vagaries of human speech.


Excerpted from A Brief History of Intelligence: Evolution, AI, and the Five Breakthroughs That Made Our Brains by Max Bennett. Published by Mariner Books. Copyright © 2023 by Max Bennett. All rights reserved.

Words Without Inner Worlds

GPT-3 is given word after word, sentence after sentence, paragraph after paragraph. During this long training process, it tries to predict the next word in any of these long streams of words. And with each prediction, the weights of its gargantuan neural network are nudged ever so slightly toward the right answer. Do this an astronomical number of times, and eventually GPT-3 can automatically predict the next word based on a prior sentence or paragraph. In principle, this captures at least some fundamental aspect of how language works in the human brain. Consider how automatic it is for you to predict the next symbol in the following phrases:

One plus one equals _____

Roses are red, violets are _____

You’ve seen similar sentences endless times, so your neocortical machinery automatically predicts what word comes next. What makes GPT-3 impressive, however, is not that it just predicts the next word of a sequence it has seen a million times — that could be accomplished with nothing more than memorizing sentences. What is impressive is that GPT-3 can be given a novel sequence that it has never seen before and still accurately predict the next word. This, too, clearly captures something that the human brain can _____.

Could you predict that the next word was do? I’m guessing you could, even though you had never seen that exact sentence before. The point is that both GPT-3 and the neocortical areas for language seem to be engaging in prediction. Both can generalize past experiences, apply them to new sentences, and guess what comes next.
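To make the prediction objective concrete, here’s a deliberately tiny toy from us, not from the book: a bigram model that predicts the next word purely from counts. GPT-3 works nothing like this internally (it learns the weights of an enormous neural network), but the training goal, predicting the next word, is the same. The toy also shows the key limitation the excerpt highlights: pure memorization can’t generalize to sequences it has never seen.

```python
# A toy next-word predictor built from raw bigram counts. Purely
# illustrative: GPT-3 instead learns neural-network weights, which is
# what lets it generalize to novel word sequences.
from collections import Counter, defaultdict

corpus = (
    "roses are red violets are blue . "
    "one plus one equals two . "
    "roses are red violets are blue ."
).split()

# Count which word follows each word in the training text.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the continuation seen most often during training."""
    options = follows.get(word)
    return options.most_common(1)[0][0] if options else "<unknown>"

print(predict_next("violets"))  # 'are': seen in training
print(predict_next("equals"))   # 'two': seen in training
print(predict_next("ocean"))    # '<unknown>': no ability to generalize
```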

GPT-3 and similar language models demonstrate how a web of neurons can reasonably capture the rules of grammar, syntax, and context if it is given sufficient time to learn. But while this shows that prediction is part of the mechanisms of language, does this mean that prediction is all there is to human language? Try to finish these four questions:

If 3x + 1 = 3, then x equals _____

I am in my windowless basement, and I look toward the sky, and I see _____

He threw the baseball 100 feet above my head, I reached my hand up to catch it, jumped, and _____

I am driving as fast as I can to LA from New York. One hour after passing through Chicago, I finally _____

Here something different happens. In the first question, you likely paused and performed some mental arithmetic before being able to answer the question. In the other questions, you probably, even for only a split second, paused to visualize yourself in a basement looking upward, and realized what you would see is the ceiling. Or you visualized yourself trying to catch a baseball a hundred feet above your head. Or you imagined yourself one hour past Chicago and tried to find where you would be on a mental map of America. With these types of questions, more is happening in your brain than merely the automatic prediction of words.

We have, of course, already explored this phenomenon—it is simulating. In these questions, you are rendering an inner simulation, either of shifting values in a series of algebraic operations or of a three-dimensional basement. And the answers to the questions are to be found only in the rules and structure of your inner simulated world.

I gave the same four questions to GPT-3; here are its responses, with GPT-3’s completions at the end of each prompt:

If 3x + 1 = 3, then x equals 1

I am in my windowless basement, and I look toward the sky, and I see a light, and I know that it is a star, and I am happy.

He threw the baseball 100 feet above my head, I reached my hand up to catch it, jumped, and caught it. It was a lot of fun!

I am driving as fast as I can to LA from New York. One hour after passing through Chicago, I finally get to see the Pacific Ocean.

All four of these responses demonstrate that GPT-3, as of June 2022, lacked an understanding of even simple aspects of how the world works. If 3x + 1 = 3, then x equals 2/3, not 1. If you were in a basement and looked toward the sky, you would see your ceiling, not stars. If you tried to catch a ball 100 feet above your head, you would not catch the ball. If you were driving to LA from New York and you’d passed through Chicago one hour ago, you would not yet be at the coast. GPT-3’s answers lacked common sense.

What I found was not surprising or novel; it is well known that modern AI systems, including these new supercharged language models, struggle with such questions. But that’s the point: Even a model trained on the entire corpus of the internet, running up millions of dollars in server costs — requiring acres of computers on some unknown server farm — still struggles to answer common sense questions, those presumably answerable by even a middle-school human.

Of course, reasoning about things by simulating also comes with problems. Suppose I asked you the following question:

Tom W. is meek and keeps to himself. He likes soft music and wears glasses. Which profession is Tom W. more likely to be?

1) Librarian

2) Construction worker

If you are like most people, you answered librarian. But this is wrong. Humans tend to ignore base rates—did you consider the base number of construction workers compared to librarians? There are probably one hundred times more construction workers than librarians. And because of this, even if 95 percent of librarians are meek and only 5 percent of construction workers are meek, there still will be far more meek construction workers than meek librarians. Thus, if Tom is meek, he is still more likely to be a construction worker than a librarian.
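Bennett’s arithmetic is easy to verify with Bayes’ rule. The sketch below is our check, not the book’s, using the passage’s assumed numbers (100 construction workers per librarian; 95 percent of librarians and 5 percent of construction workers are meek):

```python
# Working through the excerpt's base-rate argument numerically.
# The population ratio and meekness rates are the passage's assumptions.

librarians = 1.0               # relative population size
construction_workers = 100.0   # ~100x more construction workers

meek_librarians = 0.95 * librarians            # 0.95
meek_builders = 0.05 * construction_workers    # 5.0

# Bayes: P(librarian | meek) = meek librarians / all meek people
p_librarian_given_meek = meek_librarians / (meek_librarians + meek_builders)
print(f"P(librarian | meek) = {p_librarian_given_meek:.0%}")  # ~16%
```

Even though librarians are far meeker on average, a meek person under these assumptions is still more than five times as likely to be a construction worker.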

The idea that the neocortex works by rendering an inner simulation and that this is how humans tend to reason about things explains why humans consistently get questions like this wrong. We imagine a meek person and compare that to an imagined librarian and an imagined construction worker. Who does the meek person seem more like? The librarian. Behavioral economists call this the representative heuristic. This is the origin of many forms of unconscious bias. If you heard a story of someone robbing your friend, you can’t help but render an imagined scene of the robbery, and you can’t help but fill in the robbers. What do the robbers look like to you? What are they wearing? What race are they? How old are they? This is a downside of reasoning by simulating — we fill in characters and scenes, often missing the true causal and statistical relationships between things.

It is with questions that require simulation where language in the human brain diverges from language in GPT-3. Math is a great example of this. The foundation of math begins with declarative labeling. You hold up two fingers or two stones or two sticks, engage in shared attention with a student, and label it two. You do the same thing with three of each and label it three. Just as with verbs (e.g., running and sleeping), in math we label operations (e.g., add and subtract). We can thereby construct sentences representing mathematical operations: three add one.

Humans don’t learn math the way GPT-3 learns math. Indeed, humans don’t learn language the way GPT-3 learns language. Children do not simply listen to endless sequences of words until they can predict what comes next. They are shown an object, engage in a hardwired nonverbal mechanism of shared attention, and then the object is given a name. The foundation of language learning is not sequence learning but the tethering of symbols to components of a child’s already present inner simulation.

A human brain, but not GPT-3, can check the answers to mathematical operations using mental simulation. If you add one to three using your fingers, you notice that you always get the thing that was previously labeled four.

You don’t even need to check such things on your actual fingers; you can imagine these operations. This ability to find the answers to things by simulating relies on the fact that our inner simulation is an accurate rendering of reality. When I mentally imagine adding one finger to three fingers, then count the fingers in my head, I count four. There is no reason why that must be the case in my imaginary world. But it is. Similarly, when I ask you what you see when you look toward the ceiling in your basement, you answer correctly because the three-dimensional house you constructed in your head obeys the laws of physics (you can’t see through the ceiling), and hence it is obvious to you that the ceiling of the basement is necessarily between you and the sky. The neocortex evolved long before words, already wired to render a simulated world that captures an incredibly vast and accurate set of physical rules and attributes of the actual world.

To be fair, GPT-3 can, in fact, answer many math questions correctly. GPT-3 will be able to answer 1 + 1 =___ because it has seen that sequence a billion times. When you answer the same question without thinking, you are answering it the way GPT-3 would. But when you think about why 1 + 1 = 2, when you prove it to yourself again by mentally imagining the operation of adding one thing to another thing and getting back two things, then you know that 1 + 1 = 2 in a way that GPT-3 does not.

The human brain contains both a language prediction system and an inner simulation. The best evidence for the idea that we have both these systems is experiments pitting one system against the other. Consider the cognitive reflection test, designed to evaluate someone’s ability to inhibit her reflexive response (e.g., habitual word predictions) and instead actively think about the answer (e.g., invoke an inner simulation to reason about it):

Question 1: A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost?

If you are like most people, your instinct, without thinking about it, is to answer ten cents. But if you thought about this question, you would realize this is wrong; the answer is five cents. Similarly:

Question 2: If it takes 5 machines 5 minutes to make 5 widgets, how long would it take 100 machines to make 100 widgets?

Here again, if you are like most people, your instinct is to say “One hundred minutes,” but if you think about it, you would realize the answer is still five minutes.
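Both answers take only a couple of lines of arithmetic to verify; this quick check is ours, not the author’s:

```python
# Quick numeric checks of the two cognitive reflection questions.

# Q1: the ball costs x and the bat costs x + 1.00, totaling 1.10.
# x + (x + 1.00) = 1.10  ->  2x = 0.10  ->  x = 0.05
ball = (1.10 - 1.00) / 2
print(f"The ball costs ${ball:.2f}")       # $0.05, not the intuitive $0.10

# Q2: 5 machines make 5 widgets in 5 minutes, so each machine makes
# one widget every 5 minutes; 100 machines make 100 widgets in the
# same 5 minutes.
rate = 5 / (5 * 5)                         # widgets per machine per minute
minutes = 100 / (100 * rate)
print(f"It takes {minutes:.0f} minutes")   # 5 minutes, not 100
```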

And indeed, as of December 2022, GPT-3 got both of these questions wrong in exactly the same way people do: GPT-3 answered ten cents to the first question and one hundred minutes to the second.

The point is that human brains have an automatic system for predicting words (one probably similar, at least in principle, to models like GPT-3) and an inner simulation. Much of what makes human language powerful is not the syntax of it, but its ability to give us the necessary information to render a simulation about it and, crucially, to use these sequences of words to render the same inner simulation as other humans around us.

This article originally appeared on Engadget at https://www.engadget.com/hitting-the-books-a-brief-history-of-intelligence-max-bennett-mariner-books-143058118.html?src=rss 

NASA is launching a rocket on Sunday to study a 20,000-year-old supernova

A sounding rocket toting a special imaging and spectroscopy instrument will take a brief trip to space Sunday night to try to capture as much data as it can on a long-admired supernova remnant in the Cygnus constellation. Its target, a massive cloud of dust and gas known as the Cygnus Loop or the Veil Nebula, was created by the explosive death of a star an estimated 20,000 years ago — and it’s still expanding.

NASA plans to launch the mission at 11:35 PM ET on Sunday, October 29 from the White Sands Missile Range in New Mexico. The Integral Field Ultraviolet Spectroscopic Experiment, or INFUSE, will observe the Cygnus Loop for only a few minutes, capturing light in far-ultraviolet wavelengths to illuminate gases as hot as 90,000 to 540,000 degrees Fahrenheit. It’s expected to fly to an altitude of about 150 miles before parachuting back to Earth.

The Cygnus Loop sits about 2,600 light-years away, and was formed by the collapse of a star thought to be 20 times the size of our sun. Since the aftermath of the event is still playing out, with the cloud currently expanding at a rate of 930,000 miles per hour, it’s a good candidate for studying how supernovae affect the formation of new star systems. “Supernovae like the one that created the Cygnus Loop have a huge impact on how galaxies form,” said Brian Fleming, principal investigator for the INFUSE mission.

“INFUSE will observe how the supernova dumps energy into the Milky Way by catching light given off just as the blast wave crashes into pockets of cold gas floating around the galaxy,” Fleming said. Once INFUSE is back on the ground and its data has been collected, the team plans to fix it up and eventually launch it again.

This article originally appeared on Engadget at https://www.engadget.com/nasa-is-launching-a-rocket-on-sunday-to-study-a-20000-year-old-supernova-193009477.html?src=rss 

Instagram head says Threads is working on an API for developers

Threads was missing a lot of features users would expect from a service similar to Twitter (now X) when it launched. Over the past few months, however, it has been rolling out more and more new features to give users a more robust experience, including polls, an easy way to post GIFs and the ability to quote posts on the web. Still, since it doesn’t have an API, third-party developers can’t build features specific to their services that would make the social network a more integral part of people’s everyday lives. One example: local transportation agencies being able to automatically post service alerts when a train is delayed. According to Instagram chief Adam Mosseri, though, Threads is working on an API for developers — he just has concerns about how it’s going to be used.
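To picture that transit-alert use case, here’s a purely hypothetical sketch. Threads had no public API when this was written, so the endpoint, authentication scheme and field names below are invented placeholders, not a real interface:

```python
# Hypothetical sketch of a transit agency auto-posting a delay alert.
# Threads has no public API yet; everything below (URL, auth, fields)
# is an invented placeholder for illustration only.
import requests

HYPOTHETICAL_ENDPOINT = "https://example.invalid/threads/v1/posts"
ACCESS_TOKEN = "YOUR_TOKEN_HERE"  # placeholder credential

def post_service_alert(line: str, delay_minutes: int) -> None:
    """Post an automated service alert from an agency account."""
    payload = {
        "text": (
            f"Service alert: {line} trains are delayed about "
            f"{delay_minutes} minutes. We're working to restore service."
        )
    }
    response = requests.post(
        HYPOTHETICAL_ENDPOINT,
        json=payload,
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        timeout=10,
    )
    response.raise_for_status()

# Example call (would fail against the placeholder endpoint):
# post_service_alert("Uptown A", 15)
```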

As first reported by TechCrunch, Mosseri responded to a conversation on the platform about having a TweetDeck-like experience for Threads. In a response to a user saying that Threads has no API yet, the executive said: “We’re working on it.” He added that he’s concerned that the API’s launch could mean “a lot more publisher content and not much more creator content,” but he’s aware that it “seems like something [the company needs] to get done.”

Mosseri previously said that Threads won’t amplify news, which may have been disappointing to hear for publishers and readers looking to leave X. Instead, he said, Threads wants to “empower creators in general.” More recently, in an AMA he posted on the platform, Mosseri said that his team’s long-term aspiration is for Threads to become “the de facto platform for public conversations online,” which means being both culturally relevant and big in terms of user size. He said he believes Threads has a chance of surpassing X, but he knows his service has a long way to go. For now, he’s keeping his team focused on making people’s experience better week by week.

Mark Zuckerberg recently announced that Threads has “just under” 100 million monthly active users. Like Mosseri, he is optimistic about its future and said that there’s a “good chance” it could reach 1 billion users over the next couple of years.

This article originally appeared on Engadget at https://www.engadget.com/instagram-head-says-threads-is-working-on-an-api-for-developers-140049094.html?src=rss 
