The Morning After: NASA has to make a time zone for the Moon

The White House has published a policy memo asking NASA to create a new time standard for the Moon by 2026. Coordinated Lunar Time (LTC) will establish an official time reference to help guide future lunar missions. The US, China, Japan, India and Russia have space missions to the Moon planned or completed.

NASA (and the White House) aren’t the only ones trying. The European Space Agency is also trying to make a time zone outside of Earth’s… zone.

Given the Moon’s weaker gravity, time moves slightly faster there. “The same clock we have on Earth would move at a different rate on the Moon,” NASA space communications and navigation chief Kevin Coggins told Reuters.

You saw Interstellar, right? Er, just like that. Exactly like that. No further questions.

— Mat Smith

The biggest stories you might have missed

Meta’s AI image generator struggles to create images of couples of different races

Our favorite cheap smartphone is on sale for $250 right now

OnePlus rolls out its own version of Google’s Magic Eraser

How to watch (and record) the solar eclipse on April 8

​​You can get these reports delivered daily direct to your inbox. Subscribe right here!

Microsoft may have finally made quantum computing useful

The most error-free quantum solution yet, apparently.

What if we could build a machine working at the quantum level that could tackle complex calculations exponentially faster than a computer limited by classical physics? Despite all the heady dreams of quantum computing and press releases from IBM and Google, it’s still a what-if. Microsoft now says it’s developed the most error-free quantum computing system yet, with Quantinuum. It’s not a thing I can condense into a single paragraph. You… saw Interstellar, right?

Continue reading.

Stability AI’s audio generator can now create three-minute ‘songs’

Still not that good, though.

Stability AI just unveiled Stable Audio 2.0, an upgraded version of its music-generation platform. With this system, you can use your own text to create up to three minutes of audio, which is roughly the length of a song. You can hone the results by choosing a genre or even uploading audio to inspire the algo. It’s fun — try it out. Just don’t add vocals, trust me.

Continue reading.

Bloomberg says Apple is developing personal robots now

EVs schmee vees.

Apple, hunting for its next iPhone / Apple Watch / Vision Pro (maybe?), might be trying to get into robots. According to Bloomberg’s Mark Gurman, one area the company is exploring is personal robotics — and it started looking at electric vehicles too. The report says Apple has started working on a mobile robot to follow users around their home and has already developed a table-top device that uses a robot to move a screen around.

Continue reading.

Another Matrix movie is happening

Not like this.

Whoa.

Continue reading.

This article originally appeared on Engadget at https://www.engadget.com/the-morning-after-nasa-has-to-make-a-time-zone-for-the-moon-111554408.html?src=rss 

X is giving blue checks to people with more than 2,500 Premium followers

Last night, several prominent journalists and others posted (complained, in many cases) about unexpectedly regaining their verified blue checks on Elon Musk’s X platform. One of them, Peter Kafka, shared a message from X showing that the upgrade was no accident.

“As an influential member of the community on X, we’ve given you a complimentary subscription to X Premium subject to X Premium Terms by selecting this notice,” it states.

A subsequent tweet from X provided an explanation: any account with more than 2,500 verified followers (i.e., paid Premium or Premium+ blue-check subscribers) gets Premium features for free, and any with more than 5,000 gets the ad-free Premium+ tier, also gratis.
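As a quick sketch, the tier rule X described boils down to two thresholds. The function and parameter names below are ours for illustration, not part of any X API:

```python
# Illustrative sketch of X's announced rule: accounts are graded by how
# many of their followers are themselves paying (verified) subscribers.
def complimentary_tier(verified_followers: int) -> str:
    """Return the free tier an account qualifies for, per X's stated thresholds."""
    if verified_followers > 5000:
        return "Premium+"   # ad-free tier, granted free
    if verified_followers > 2500:
        return "Premium"    # Premium features, granted free
    return "none"           # below both thresholds: no complimentary tier

print(complimentary_tier(3000))  # Premium
print(complimentary_tier(6000))  # Premium+
```

Note that the thresholds count verified followers specifically, not total followers, which is why some accounts with modest audiences (journalists, largely) crossed the line.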

based on all the confused tweets i’m seeing, it looks like Twitter / X is starting to really ramp up the roll out of this now

if you suddenly have a blue checkmark even though you’re not paying for one, this is why: pic.twitter.com/T1XaBEeGgn

— Matt Binder (@MattBinder) April 3, 2024

Prior to this, the only users to get free Premium blue checks were those with large follower counts (a million at minimum), along with celebrities, corporations and media companies. The new move appears to be a way to bring influential users with lower follower counts (largely journalists) into the fold.

So what prompted this? X may have decided it needs more journalists with blue checks. In the wake of recent events (the Taiwan earthquake, Turkey’s elections, the Baltimore bridge collapse), some users complained that X is no longer the gold-standard breaking-news platform Twitter used to be.

That’s likely because journalists, who discover or amplify such news, have seen reduced prominence while X’s algorithms boost blue-check content and replies. That means a know-nothing or shitcoin promoter with 25 followers who paid eight bucks will appear atop replies, rather than an experienced journalist who can furnish useful, truthful information.

With the blue check now being a mark of shame in many cases, a fair number of the users who regained one aren’t necessarily happy about it. “Shit, I’ve been forcibly bluechecked. How do I opt out,” wrote @emptywheel. “oh no,” Katie Notopoulos tweeted. “I am become bluecheck, promoter of shibacoin.”

Shit. I’ve been forcibly bluechecked.

How do I opt out?

— emptywheel (@emptywheel) April 3, 2024

i am become bluecheck, promoter of shibacoin

— Katie Notopoulos (@katienotopoulos) April 3, 2024

This article originally appeared on Engadget at https://www.engadget.com/x-is-giving-blue-checks-to-people-with-more-than-2500-premium-followers-090922311.html?src=rss 

Apple is developing personal robots for your home, Bloomberg says

Apple is still on the hunt for the next revolutionary product to help it remain dominant in the market and to serve as new sources of revenue after abandoning its plans to develop an electric vehicle of its own. According to Bloomberg’s Mark Gurman, one of the areas the company is exploring is personal robotics. It reportedly started looking into robots and electric vehicles at the same time, with the hopes of developing a machine that doesn’t need human intervention. 

While Apple’s robotics projects are still in the very early stages, Bloomberg said the company has already started working on a mobile robot that can follow users around their home and has developed a table-top device that uses a robot to move a screen around. The idea behind the latter is a machine that can mimic head movements and lock on to a single person in a group, presumably for a better video call experience. Since these robots are supposed to move on their own, the company is also looking into navigation algorithms. Based on the report, Apple’s home devices group is in charge of their development, and at least one engineer who worked on the scrapped EV initiative has joined the team.

Robots, however, aren’t like phones; most people don’t yet see them as something they need in their lives. Apple is apparently worried about whether people would pay “top dollar” for the robots it has in mind, and executives still haven’t agreed on whether the company should keep working on these projects. Gurman previously reported that Apple’s EV might have sold for around $100,000; if true, that project had greater potential to grow the company’s revenue. But the Apple Car is now out of the picture, and the company is reportedly focusing on the Vision Pro and new products for the home, including a home hub device with a display that resembles an iPad. Of course, Apple could still scrap these projects and invest in other product categories if it decides they could bring in more money down the line.

This article originally appeared on Engadget at https://www.engadget.com/apple-is-developing-personal-robots-for-your-home-bloomberg-says-044254029.html?src=rss 

Meta’s AI image generator struggles to create images of couples of different races

Meta AI is consistently unable to generate accurate images for seemingly simple prompts like “Asian man and Caucasian friend,” or “Asian man and white wife,” The Verge reports. Instead, the company’s image generator seems to be biased toward creating images of people of the same race, even when explicitly prompted otherwise.

Engadget confirmed these results in our own testing of Meta’s web-based image generator. Prompts for “an Asian man with a white woman friend” or “an Asian man with a white wife” generated images of Asian couples. When asked for “a diverse group of people,” Meta AI generated a grid of nine white faces and one person of color. On a couple of occasions it created a single result that reflected the prompt, but most of the time it failed to depict the request accurately.

As The Verge points out, there are other more “subtle” signs of bias in Meta AI, like a tendency to make Asian men appear older while Asian women appeared younger. The image generator also sometimes added “culturally specific attire” even when that wasn’t part of the prompt.

It’s not clear why Meta AI is struggling with these types of prompts, though it’s not the first generative AI platform to come under scrutiny for its depiction of race. Google paused Gemini’s ability to create images of people after the tool overcorrected for diversity, producing bizarre results in response to prompts about historical figures. Google later explained that its internal safeguards failed to account for situations where diverse results were inappropriate.

Meta didn’t immediately respond to a request for comment. The company has previously described Meta AI as being in “beta” and thus prone to making mistakes. Meta AI has also struggled to accurately answer simple questions about current events and public figures.

This article originally appeared on Engadget at https://www.engadget.com/metas-ai-image-generator-struggles-to-create-images-of-couples-of-different-races-231424476.html?src=rss 

Prepare for more red pill memes: a fifth Matrix movie is happening

There’s another Matrix movie in the works. Warner Bros. just greenlit a fifth installment of the saga, as reported by Deadline. However, neither Lana Wachowski nor Lilly Wachowski will be handling directing duties. That honor falls to Drew Goddard, who adapted The Martian into a screenplay and directed the criminally underrated The Cabin in the Woods. He’s also writing the script.

Goddard cut his teeth writing episodes of Buffy the Vampire Slayer, Angel and Lost, among others — you could say he knows his way around genre content. Lana Wachowski will be on board as an executive producer, so there will be some input from one of the franchise’s original creators.

There’s no word as to what the film will be about, but Warner Bros. says that Goddard came to the company with a “new idea that we all believe would be an incredible way to continue the Matrix world.” Goddard added that the original films inspire him on a daily basis and that he is “beyond grateful for the chance to tell stories” in that world.

Warner Bros. is also being cagey as to which cast members, if any, will be returning. The original trilogy featured Keanu Reeves, Carrie-Anne Moss, Laurence Fishburne, Hugo Weaving and Jada Pinkett Smith. Most of these actors returned for 2021’s The Matrix Resurrections, with one story-based exception.

Speaking of The Matrix Resurrections, it received mixed reviews from both critics and audiences. We loved the film, going as far as to call it brilliant, but admitted that it wasn’t for everyone. That’s par for the course with this franchise. Every single Matrix movie beyond the first one is divisive. We’ll have to wait and see what Goddard brings to the table.

He’s also writing a film adaptation based on another novel by The Martian scribe Andy Weir. Project Hail Mary will be directed by Phil Lord and Christopher Miller and will star Ryan Gosling as an astronaut trying to save the planet from a star-eating microbe.

This article originally appeared on Engadget at https://www.engadget.com/prepare-for-more-red-pill-memes-a-fifth-matrix-movie-is-happening-184811691.html?src=rss 

The White House tells NASA to create a new time zone for the Moon

On Tuesday, The White House published a policy memo directing NASA to create a new time standard for the Moon by 2026. Coordinated Lunar Time (LTC) will establish an official time reference to help guide future lunar missions. It arrives as a 21st-century space race emerges between (at least) the US, China, Japan, India and Russia.

The memo directs NASA to work with the Departments of Commerce, Defense, State, and Transportation to plan a strategy to put LTC into practice by December 31, 2026. International cooperation will also play a role, especially with signees of the Artemis Accords. Established in 2020, they’re a set of common principles between a growing list of (currently) 37 countries that govern space exploration and operating principles. China and Russia are not part of that group.

“As NASA, private companies, and space agencies around the world launch missions to the Moon, Mars, and beyond, it’s important that we establish celestial time standards for safety and accuracy,” OSTP Deputy Director for National Security Steve Welby wrote in a White House press release. “A consistent definition of time among operators in space is critical to successful space situational awareness capabilities, navigation, and communications, all of which are foundational to enable interoperability across the U.S. government and with international partners.”

Einstein’s theories of relativity dictate that time changes relative to speed and gravity. Given the Moon’s weaker gravity (and movement differences between it and Earth), time moves slightly faster there. So an Earth-based clock on the lunar surface would appear to gain an average of 58.7 microseconds per Earth day. As the US and other countries plan Moon missions to research, explore and (eventually) build bases for permanent residence, using a single standard will help them synchronize technology and missions requiring precise timing.
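Using the 58.7-microseconds-per-day figure above, a quick back-of-the-envelope sketch in Python shows how fast the drift accumulates. This is an illustration of the cited average rate only, not NASA’s actual timekeeping math:

```python
# Back-of-the-envelope drift calculation using the average rate cited
# above: a clock on the lunar surface gains ~58.7 microseconds per
# Earth day relative to an identical clock on Earth.
MICROSECONDS_PER_DAY = 58.7

def lunar_drift_microseconds(earth_days: float) -> float:
    """Approximate total drift of a lunar clock over the given span."""
    return MICROSECONDS_PER_DAY * earth_days

# Over one Earth year the clocks diverge by roughly 21 milliseconds.
# Tiny for humans, but navigation systems that infer position from
# signal travel time need far better than millisecond agreement.
drift_year_us = lunar_drift_microseconds(365.25)
print(f"{drift_year_us:.0f} microseconds (~{drift_year_us / 1000:.1f} ms) per year")
```

At light speed, even a one-millisecond clock error corresponds to roughly 300 kilometers of ranging error, which is why a shared standard matters for lunar navigation.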

“The same clock that we have on Earth would move at a different rate on the moon,” NASA space communications and navigation chief Kevin Coggins told Reuters. “Think of the atomic clocks at the U.S. Naval Observatory (in Washington). They’re the heartbeat of the nation, synchronizing everything. You’re going to want a heartbeat on the moon.”

The White House wants LTC to coordinate with Coordinated Universal Time (UTC), the standard by which all of Earth’s time zones are measured. Its memo says it wants the new time zone to enable accurate navigation and scientific endeavors. It also wants LTC to maintain resilience if it loses contact with Earth while providing scalability for space environments “beyond the Earth-Moon system.”

NASA’s Artemis program aims to send crewed missions back to the Moon for the first time since the Apollo missions of the 1960s and 70s. The space agency said in January that Artemis 2, which will fly around the Moon with four people onboard, is now set for a September 2025 launch. Artemis 3, which plans to put humans back on the Moon’s surface, is now scheduled for 2026.

In addition to the US, China aims to put astronauts on the Moon before 2030 as the world’s two foremost global superpowers take their race to space. Although no other countries have announced crewed missions to the lunar surface, India (which put a module and rover on the Moon’s South Pole last year), Russia (its mission around the same time didn’t go so well), the United Arab Emirates, Japan, South Korea and private companies have all demonstrated lunar ambitions in recent years.

In addition to enabling further scientific exploration, technological establishment and resource mining, the Moon could serve as a critical stop on the way to Mars. It could test technologies and provide fuel and supply needs for eventual human missions to the Red Planet.

This article originally appeared on Engadget at https://www.engadget.com/the-white-house-tells-nasa-to-create-a-new-time-zone-for-the-moon-193957377.html?src=rss 

Microsoft may have finally made quantum computing useful

The dream of quantum computing has always been exciting: What if we could build a machine working at the quantum level that could tackle complex calculations exponentially faster than a computer limited by classical physics? But despite seeing IBM, Google and others announce iterative quantum computing hardware, they’re still not being used for any practical purposes. That might change with today’s announcement from Microsoft and Quantinuum, who say they’ve developed the most error-free quantum computing system yet.

While classical computers and electronics rely on binary bits as their basic unit of information (they can be either on or off), quantum computers work with qubits, which can exist in a superposition of two states at the same time. The trouble with qubits is that they’re prone to error, which is the main reason today’s quantum computers (known as Noisy Intermediate Scale Quantum [NISQ] computers) are just used for research and experimentation.

Microsoft’s solution was to group physical qubits into virtual qubits, which allows it to apply error diagnostics and correction without destroying them, and run it all over Quantinuum’s hardware. The result was an error rate that was 800 times better than relying on physical qubits alone. Microsoft claims it was able to run more than 14,000 experiments without any errors.
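Microsoft hasn’t published its qubit-virtualization code, but the intuition behind combining physical qubits into logical ones can be sketched with a classical toy model: a three-way repetition code, where a majority vote fails only if two or more copies are corrupted. To be clear, this is a simplified stand-in, not Quantinuum’s actual scheme:

```python
# Toy illustration (NOT Microsoft's actual method): a 3-copy repetition
# code. If each physical copy independently errs with probability p,
# the majority vote over three copies is wrong only when at least two
# copies flip, so the logical error rate is much smaller than p.
def logical_error_rate(p: float) -> float:
    """Probability that a 3-copy majority vote gives the wrong answer."""
    # exactly two copies wrong, plus all three copies wrong
    return 3 * p**2 * (1 - p) + p**3

p = 0.01  # assume a 1% physical error rate (illustrative number)
lp = logical_error_rate(p)
print(f"physical: {p:.4f}, logical: {lp:.6f}, improvement: {p / lp:.1f}x")
```

Real quantum error correction is far harder (qubits can’t simply be copied and measured without disturbing them), but the same principle applies: redundancy plus error diagnostics can drive the logical error rate well below the physical one, which is the kind of gain behind the 800x figure.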

According to Jason Zander, EVP of Microsoft’s Strategic Missions and Technologies division, this achievement could finally bring us to “Level 2 Resilient” quantum computing, which would be reliable enough for practical applications.

“The task at hand for the entire quantum ecosystem is to increase the fidelity of qubits and enable fault-tolerant quantum computing so that we can use a quantum machine to unlock solutions to previously intractable problems,” Zander wrote in a blog post today. “In short, we need to transition to reliable logical qubits — created by combining multiple physical qubits together into logical ones to protect against noise and sustain a long (i.e., resilient) computation. … By having high-quality hardware components and breakthrough error-handling capabilities designed for that machine, we can get better results than any individual component could give us.”

Researchers will be able to get a taste of Microsoft’s reliable quantum computing via Azure Quantum Elements in the next few months, where it will be available as a private preview. The goal is to push even further to Level 3 quantum supercomputing, which will theoretically be able to tackle incredibly complex issues like climate change and exotic drug research. It’s unclear how long it’ll take to actually reach that point, but for now, at least we’re moving one step closer towards practical quantum computing.

This article originally appeared on Engadget at https://www.engadget.com/microsoft-may-have-finally-made-quantum-computing-useful-164501302.html?src=rss 

The next Ubisoft Forward showcase is set for June 10 alongside WWDC

Ubisoft has revealed when its next major showcase will take place. The latest edition of Ubisoft Forward is set for June 10 in Los Angeles. That’s at the tail end of the main slate of Summer Game Fest festivities, and on the same day as Apple’s Worldwide Developers Conference keynote.

While Ubisoft hasn’t revealed specifically what it plans to show off at Forward, it’s promising updates and news on upcoming releases. During its most recent earnings report, Ubisoft said it would shed more light on some upcoming projects in May, but it seems Forward is now the more likely venue for that.

It’s back ✨

Join us live from Los Angeles for #UbiForward on June 10 for updates and upcoming releases! pic.twitter.com/PevpR3rfvH

— Ubisoft (@Ubisoft) April 3, 2024

At Forward, we’ll probably find out more details about what’s next for Assassin’s Creed, Ubisoft’s flagship franchise. The feudal Japan-set Assassin’s Creed Codename Red is slated to arrive within the next year, while we’ve long been awaiting more info on Assassin’s Creed Infinity, which is set to tie the series together.

It’s a safe bet that Star Wars Outlaws will get some shine at Forward, since that game is scheduled for release in 2024. With XDefiant being delayed indefinitely (it was supposed to arrive by the end of March) amid reports of a troubled development process, perhaps we’ll find out more about that game at Forward too. Mobile games The Division Resurgence and Rainbow Six Mobile were also slated to come under the spotlight in May, so we could see those at Forward as well. Just don’t expect any sea shanties this time.

This article originally appeared on Engadget at https://www.engadget.com/the-next-ubisoft-forward-showcase-is-set-for-june-10-alongside-wwdc-170210746.html?src=rss 

Hyundai’s Ioniq 5 N eN1 Cup car brings extreme EV performance to the track

The Hyundai Ioniq 5 N is one of the most extreme EVs you can buy at the moment. With over 600 horsepower delivered to all four wheels, plus a plethora of drive modes that help you do everything from circuit racing to drifting, it’s a truly wild ride.

But it’s about to get even wilder. Meet the new Hyundai Ioniq 5 N eN1 Cup car. This is a lightweight, caged, and big-winged version of Hyundai’s rocket ship, tuned to such an extreme level that it isn’t even road legal. Yes, this one’s strictly for racing, and Hyundai is launching a focused racing series for the 5 N later this year.

Ahead of that, I headed to Korea to drive it on a closed track. Inje Speedium is a tricky circuit with lots of elevation changes, and despite some inclement weather, the Ioniq 5 N eN1 proved a masterful drive. And at $100,000 for a track-ready machine, it’s surprisingly good value. Watch the video above for the full story.

This article originally appeared on Engadget at https://www.engadget.com/hyundais-ioniq-5-n-en1-cup-car-brings-extreme-ev-performance-to-the-track-160024376.html?src=rss 

Stability AI’s audio generator can now crank out 3 minute ‘songs’

Stability AI just unveiled Stable Audio 2.0, an upgraded version of its music-generation platform. This system lets users create up to three minutes of audio via text prompt. That’s around the length of an actual song, so it’ll also whip up an intro, a full chord progression and an outro.

First, the good news. Three minutes is huge. The previous version of the software maxed out at 90 seconds. Just imagine the fake birthday song you could make in the style of that one Rob Thomas/Santana track. Another boon? The tool is free and publicly available through the company’s website, so have at it.

Introducing Stable Audio 2.0 – a new model capable of producing high-quality, full tracks with coherent musical structure up to three minutes long at 44.1 kHz stereo from a single prompt.

Explore the model and start creating for free at: https://t.co/E9ZIGagmPf

Read the… pic.twitter.com/rFGb0KpdeX

— Stability AI (@StabilityAI) April 3, 2024

It primarily works via text prompt, but there’s an option to upload an audio clip. The system will analyze the clip and produce something similar. All uploaded audio must be copyright-free, so this isn’t for the purpose of mimicking something that already exists. Rather, it could be useful for, say, humming a drum part or extending a 20-second clip into something longer.

Now, the bad news. This is still AI-generated music. It’s cool as a conversation piece and as an emblem of a possible future that’s great for tinkerers and bad for musicians, but that’s about it. The songs can actually sound nifty, at first, until the seams start showing. Then things get a bit creepy.

For instance, the system loves adding vocals, but not in any known human language. I guess it’s in whatever language makes up the text in AI-generated images. The vocals sometimes sound like actual people, and other times like Gregorian chanters filtered through outer space. It’s right smack dab in the middle of the uncanny valley. The Verge called the vocals “soulless and weird,” comparing them to whale sounds. That tracks.

Stable Audio 2.0 makes the same weird little mistakes that all of these systems make, no matter the output type. Parts can vanish into thin air, replaced with something else. Sometimes melodic elements will double out of nowhere, like an audio version of those extra fingers in AI-generated images.

Created this with the new Stable Audio 2.0 from @StabilityAI! pic.twitter.com/kmN0eubJSK

— Chris McKay (@cmcky) April 3, 2024

There’s also the, well, boring-ness of it all. This is music in name only. Without a human connection, what’s the point? I listen to music to get inside the head of another person or group of people. There’s no head to get inside of here, despite constant proclamations that artificial general intelligence (AGI) is only months away.

So, this tech is an absolute gift for those making silly birthday videos or bank hold music. For everyone else? Shrug. One thing I can say from personal experience: It’s pretty fast. The system concocted an absolutely terrifying big band song about my cat in around a minute. 

This article originally appeared on Engadget at https://www.engadget.com/stability-ais-audio-generator-can-now-crank-out-3-minute-songs-160620135.html?src=rss 
