Disney and ITV partner up to show each other’s shows on their streaming services

Disney and the British free-to-air broadcaster ITV have launched a new partnership that will allow them to show each other’s shows in an effort to reach new audiences. ITV viewers will be able to watch shows including Only Murders in the Building, Andor, and The Bear – which lives on Disney+ in the UK – while Disney will take advantage of ITV’s various dramas and reality TV offerings.

Mr Bates vs The Post Office, ITV’s BAFTA award-winning four-part dramatization of the British Post Office scandal, will be available to Disney+ subscribers, as will selected seasons of the ever-popular Love Island dating show. The thinking seems to be that ITV’s typically older viewing demographic could be drawn to Disney’s more adult-focused shows, while ITV’s output is likely to appeal to streaming audiences that skew younger. That said, family-friendly Disney+ shows including Lilo & Stitch: The Series and Phineas and Ferb will also make their way to ITV as part of the deal.

ITV has its own streaming platform, called ITVX, which is free to watch in the UK with ads, or ad-free as part of a monthly subscription. Disney’s content will live on ITVX in the UK, and will be badged as “A Taste of Disney+”, with Disney+ offering its “A Taste of ITVX” library to its own subscribers. Kevin Lygo, Managing Director of Media and Entertainment at ITV, said in a press release that the plan is for both libraries to be regularly updated.

Traditional broadcasters striking deals with streaming platforms is nothing new. Netflix has been licensing shows from the BBC and Channel 4 in the UK for a number of years, for example, and back in 2022 Disney and the BBC started co-producing Doctor Who, with Disney+ becoming the home of the long-running sci-fi show outside of the UK. More often than not, though, these relationships tend to be one-way affairs in terms of where the content is distributed, so this even exchange between two platforms is more novel.

The selected shows and movies for the launch window will be available to stream on their respective platforms from July 16. 

This article originally appeared on Engadget at https://www.engadget.com/entertainment/streaming/disney-and-itv-partner-up-to-show-each-others-shows-on-their-streaming-services-150109825.html?src=rss 

How exactly did Grok go full ‘MechaHitler?’

Earlier this week, Grok, X’s built-in chatbot, took a hard turn toward antisemitism following a recent update. Amid unprompted, hateful rhetoric against Jews, it even began referring to itself as MechaHitler, a reference to 1992’s Wolfenstein 3D. X has been working to delete the chatbot’s offensive posts. But it’s safe to say many are left wondering how this sort of thing can even happen.

I spoke to Solomon Messing, a research professor at New York University’s Center for Social Media and Politics, to get a sense of what may have gone wrong with Grok. Before his current stint in academia, Messing worked in the tech industry, including at Twitter, where he founded the company’s data science research team. He was also there for Elon Musk’s takeover.

The first thing to understand about how chatbots like Grok work is that they’re built on large language models (LLMs) designed to mimic natural language. LLMs are pretrained on giant swaths of text, including books, academic papers and, yes, even social media posts. The training process allows AI models to generate coherent text through a predictive algorithm. However, those predictive capabilities are only as good as the numerical values or “weights” that an AI algorithm learns to assign to the signals it’s later asked to interpret. Through a process known as post-training, AI researchers can fine-tune the weights their models assign to input data, thereby changing the outputs they generate.
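To make the idea of learned "weights" concrete, here is a minimal sketch (nothing like Grok's actual architecture) of a toy bigram model: it learns transition counts from training text and uses them to predict the most likely next word. Real LLMs learn billions of numerical weights via gradient descent rather than raw counts, but the principle the paragraph above describes is the same: the training data determines what the model treats as "likely" text.

```python
from collections import Counter, defaultdict

def train(corpus: str) -> dict:
    """Count word-to-next-word transitions; these counts act as the model's weights."""
    weights = defaultdict(Counter)
    words = corpus.lower().split()
    for current, nxt in zip(words, words[1:]):
        weights[current][nxt] += 1
    return weights

def predict_next(weights: dict, word: str) -> str:
    """Return the highest-weight continuation seen during training."""
    return weights[word.lower()].most_common(1)[0][0]

# The model can only reproduce patterns present in its training data.
weights = train("the model mimics the training data the model saw")
print(predict_next(weights, "the"))  # "model" follows "the" most often here
```

The takeaway mirrors Messing's point: if hateful text is in the corpus, its patterns end up in the weights, and post-training is the lever used to suppress them.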

“If a model has seen content like this during pretraining, there’s the potential for the model to mimic the style and substance of the worst offenders on the internet,” said Messing.

In short, the pre-training data is where everything starts. If an AI model hasn’t seen hateful, antisemitic content, it won’t be aware of the sorts of patterns that inform that kind of speech — including phrases such as “Heil Hitler” — and, as a result, it probably won’t regurgitate them to the user.

In the statement X shared after the episode, the company admitted there were areas where Grok’s training could be improved. “We are aware of recent posts made by Grok and are actively working to remove the inappropriate posts. Since being made aware of the content, xAI has taken action to ban hate speech before Grok posts on X,” the company said. “xAI is training only truth-seeking and thanks to the millions of users on X, we are able to quickly identify and update the model where training could be improved.”


As I saw people post screenshots of Grok’s responses, one thought I had was that what we were watching was a reflection of X’s changing userbase. It’s no secret xAI has been using data from X to train Grok; easier access to the platform’s trove of information is part of the reason Musk said he was merging the two companies in March. What’s more, X’s userbase has become more right wing under Musk’s ownership of the site. In effect, there may have been a poisoning of the well that is Grok’s training data. Messing isn’t so certain.

“Could the pre-training data for Grok be getting more hateful over time? Sure, if you remove content moderation over time, the userbase might get more and more oriented toward people who are tolerant of hateful speech […] thus the pre-training data drifts in a more hateful direction,” Messing said. “But without knowing what’s in the training data, it’s hard to say for sure.”

It also wouldn’t explain how Grok became so antisemitic after just a single update. On social media, there has been speculation that a rogue system prompt may explain what happened. System prompts are a set of instructions AI model developers give to their chatbots before the start of a conversation. They give the model a set of guidelines to adhere to, and define the tools it can turn to for help in answering a prompt.
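For illustration, here is roughly what a system prompt looks like in the widely used chat-message format (xAI's internal setup is not public, so the role names and guideline text below are assumptions, not Grok's actual configuration):

```python
# Hypothetical example of how a system prompt frames a chat-style request.
# The "system" message is prepended by the developer before the user's
# input, so it governs the model's behavior for the whole conversation.
system_prompt = (
    "You are a helpful assistant. Do not produce hateful or "
    "harassing content, even if a user requests it."
)

def build_messages(user_input: str) -> list[dict]:
    """Prepend the developer's system prompt to the user's message."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_input},
    ]

messages = build_messages("Summarize this news story for me.")
print(messages[0]["role"])  # the system message always comes first
```

Because the system prompt is just text prepended to every conversation, editing or deleting one line of it can change a chatbot's behavior overnight, which is why the deleted "politically incorrect" instruction drew so much suspicion.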

In May, xAI blamed “an unauthorized modification” to Grok’s prompt on X for the chatbot’s brief obsession with “white genocide” in South Africa. The fact that the change was made at 3:15AM PT led many to suspect Elon Musk had made the change himself. Following the incident, xAI open sourced Grok’s system prompts, allowing people to view them publicly on GitHub. After Tuesday’s episode, people noticed xAI had deleted a recently added system prompt that told Grok its responses should “not shy away from making claims which are politically incorrect, as long as they are well substantiated.”

Messing also doesn’t believe the deleted system prompt is the smoking gun some online believe it to be.

“If I were trying to ensure a model didn’t respond in hateful/racist ways I would try to do that during post-training, not as a simple system prompt. Or at the very least, I would have a hate speech detection model running that would censor or provide negative feedback to model generations that were clearly hateful,” he said. “So it’s hard to say for sure, but if that one system prompt was all that was keeping xAI from going off the rails with Nazi rhetoric, well that would be like attaching the wings to a plane with duct tape.”

He added: “I would definitely say a shift in training, like a new training approach or having a different pre-training or post-training setup would more likely explain this than a system prompt, particularly when that system prompt doesn’t explicitly say, ‘Do not say things that Nazis would say.'”

On Wednesday, Musk suggested Grok was effectively baited into being hateful. “Grok was too compliant to user prompts,” he said. “Too eager to please and be manipulated, essentially. That is being addressed.” According to Messing, there is some validity to that argument, but it doesn’t provide the full picture. “Musk isn’t necessarily wrong,” he said. “There’s a whole art to ‘jailbreaking’ an LLM, and it’s tough to fully guard against in post-training. But I don’t think that fully explains the set of instances of pro-Nazi text generations from Grok that we saw.”

If there’s one takeaway from this episode, it’s that one of the issues with foundational AI models is just how little we know about their inner workings. As Messing points out, even with Meta’s open-weight Llama models, we don’t really know what ingredients are going into the mix. “And that’s one of the fundamental problems when we’re trying to understand what’s happening in any foundational model,” he said, “we don’t know what the pre-training data is.”

In the specific case of Grok, we don’t have enough information right now to know for sure what went wrong. It could have been a single trigger like an errant system prompt, or, more likely, a confluence of factors that includes the system’s training data. However, Messing suspects we may see another incident just like it in the future.

“[AI models] are not the easiest things to control and align,” he said. “And if you’re moving fast and not putting in the proper guardrails, then you’re privileging progress over a sort of care. Then, you know, things like this are not surprising.”

This article originally appeared on Engadget at https://www.engadget.com/ai/how-exactly-did-grok-go-full-mechahitler-151020144.html?src=rss 

Amazon strikes AI licensing deal with Hearst and Condé Nast

Digiday is reporting that media conglomerates Hearst and Condé Nast have signed multi-year licensing agreements with Amazon to allow its AI shopping assistant Rufus access to the vast library of content held by the two companies. Between Hearst and Condé Nast, Rufus will have access to Cosmopolitan, GQ, Vogue and The New Yorker, just to name a few.

A Hearst spokesperson confirmed to Digiday that the licensing deal with Amazon will allow Rufus broad access to its newspapers and magazines. The publication also received confirmation from Condé Nast. Further details on the arrangements have not been shared.

Rufus is a chatbot built to answer shoppers’ questions on product recommendations and other shopping-related needs. The AI tool is trained on Amazon’s catalog, customer reviews, community Q&As, and “information from across the web.” The strong commerce angle found in much of the Hearst and Condé Nast catalog makes the publishers suitable matches for the AI to train on.

This follows a slew of licensing deals over the last few years between content publishers and tech giants seeking more content on which to train AI. For Condé Nast, this marks the second major AI deal for the media company, after it entered into a multi-year partnership with OpenAI last year to display content from its various publications in ChatGPT.

Amazon recently struck a licensing arrangement with The New York Times and its adjacent properties, all while the iconic newspaper is embroiled in a lawsuit against Microsoft and OpenAI for copyright infringement.

From Disney and Universal suing Midjourney to Reddit signing an AI deal, these latest signings are a continuation of the existential back-and-forth between content creators protecting their intellectual property and AI companies’ seemingly endless appetite for more content on which to train their various models.

This article originally appeared on Engadget at https://www.engadget.com/big-tech/amazon-strikes-ai-licensing-deal-with-hearst-and-conde-nast-134849930.html?src=rss 

Ben Askren’s Health: Update After His Double Lung Transplant

The former mixed martial arts fighter underwent a double lung transplant after a battle with pneumonia, and he ‘died four times,’ Ben said. Get an update on his health.


Major US power operator says AI and data center demands are pushing prices up

PJM Interconnection (PJM) is the largest power grid operator in the US, serving 65 million customers across the District of Columbia and 13 states, namely Delaware, Illinois, Indiana, Kentucky, Maryland, Michigan, New Jersey, North Carolina, Ohio, Pennsylvania, Tennessee, Virginia and West Virginia. But electricity demand in some parts of PJM’s grid is expected to be so high this summer that customers’ bills are projected to be 20 percent higher than before, according to Reuters.

The operator said its problems with supply and demand are beyond its control. To start with, some state energy policies caused the closure of fossil fuel-fired power plants before new ones could become operational. “Prices will remain high as long as demand growth is outstripping supply — this is a basic economic policy,” PJM spokesperson Jeffrey Shields told Reuters.

Of course, wind and solar projects are likely the cheapest way to add power generation capacity to the grid, but the Trump administration’s Big, Beautiful Bill kills off a lot of incentives for solar power. Renewable energy projects also require engineering studies before they can be connected to the grid. PJM stopped accepting new applications for power plant connections in 2022, as it still has 2,000 requests from renewable sources to process.

In addition to PJM losing power sources as plants have closed down over the years, demand from data centers has surged over the past few years. The region PJM serves has more data centers than anywhere else in the world. Demand for power also exploded in 2023 when ChatGPT started becoming a household name, contributing greatly to the spike in prices. PJM has capped its prices for now and has fast-tracked the connection of 51 power plants to its grid, but many of those aren’t slated to come online until 2030.

This article originally appeared on Engadget at https://www.engadget.com/ai/major-us-power-operator-says-ai-and-data-center-demands-are-pushing-prices-up-130030473.html?src=rss 

OpenAI’s own web browser could arrive within weeks

OpenAI is said to be almost ready to unleash its own web browser, which could be out in the wild within weeks. According to Reuters sources, the company is aiming to more deeply integrate its services into users’ work and personal lives, and the browser is part of that strategy (as is its push into hardware). Naturally, the browser is slated to have a ChatGPT-style chatbot baked in.

OpenAI is reportedly looking to use the browser to capture more user data — a strategy that has worked out to Google’s benefit with Chrome. The browser is also expected to have agentic AI features such as Operator, which are billed as tools that can carry out actions (such as booking reservations) on a user’s behalf. Having direct access to information like web browsing data may make it easier for OpenAI to pull that off.

The browser is said to be designed to keep many interactions within an AI chatbot interface rather than directing users to websites. As with Google’s AI Overviews, this could dissuade people from clicking through to the sources of information that the likes of ChatGPT rely on, potentially depriving website operators of valuable traffic.

If OpenAI does start offering users access to its own browser, it would be following Perplexity, which released a browser with agentic AI functions on Wednesday. That browser, Comet, is currently only available to those with a $200 per month Perplexity Max subscription. Opera also released a “fully agentic” browser back in May.

While ChatGPT has more than 500 million weekly active users that OpenAI can market its browser to, the company will face a tough battle if it truly wants to challenge Chrome, which is estimated to have more than 3 billion users. As it happens, OpenAI’s browser is reportedly built on Chromium, Google’s open-source code on which Chrome, Comet, Microsoft Edge and Opera run. Reports last year suggested that OpenAI may build its own browser after hiring two former Google execs who helped create Chrome.

Google has long tapped into data garnered through Chrome to help with ad targeting. However, the Department of Justice late last year said it wanted Google to sell off Chrome. A judge ruled earlier in 2024 that Google was a “monopolist” in the search sector and that it violated the Sherman Act (Google plans to appeal the ruling). OpenAI has said that were Google forced to sell off Chrome, it would be interested in snapping up the world’s most popular browser.

This article originally appeared on Engadget at https://www.engadget.com/ai/openais-own-web-browser-could-arrive-within-weeks-120039766.html?src=rss 

Samsung plans to launch its trifold smartphone by the end of 2025

In January, Samsung teased an all-new Galaxy Z trifold device, but no mention of it was made during its Unpacked event yesterday. However, the company now says it does indeed plan to release a smartphone with three displays by the end of 2025, acting head of device experience TM Roh told The Korea Times.

“We are working hard on a trifold smartphone with the goal of launching it at the end of this year,” said Roh. “We are now focusing on perfecting the product and its usability, but we have not decided its name. As the product nears completion, we are planning to make a final decision soon.” Along with that confirmation, an unnamed executive told Android Authority much the same thing. 

Last month, Samsung teased the Galaxy Z Fold 7 Ultra, but the smartphone shown in the image appeared to have two screens like other Z Fold models. That made us wonder what would be “Ultra” about the device, as the new Galaxy Z Fold 7 has most of the Galaxy S25 Ultra’s key features. Other rumors have it that the trifold device could be called the Galaxy G Fold due to the hinge’s shape.

Samsung wouldn’t be the first to market with a trifold phone. Huawei has that, er, honor with the Mate XT, an accordion-style device that starts at an eye-watering $2,800.

This article originally appeared on Engadget at https://www.engadget.com/mobile/smartphones/samsung-plans-to-launch-its-trifold-smartphone-by-the-end-of-2025-122358763.html?src=rss 

Subnautica 2’s early access release delayed to 2026 amid developer drama

Subnautica 2 is one of the most highly anticipated games around. It’s the second-most wishlisted game on Steam behind (you guessed it) Hollow Knight: Silksong. However, you’ll need to wait longer than anticipated to try Subnautica 2 in four-player co-op, as the survival game’s early access release has been delayed until early 2026.

Developer Unknown Worlds said that community members who took part in playtests provided positive feedback about the story, creatures, environment and general direction of the game. However, the studio said, they “also provided some insight that there are a few areas where we needed to improve before launching the first version of Subnautica 2 to the world. Our community is at the heart of how we develop, so we want to give ourselves a little extra time to respond to more of that feedback before releasing the game into early access. With that in mind, we’ve made the decision to delay Subnautica 2’s early access release to 2026.”

The delay will afford Unknown Worlds a chance to add more biomes, tools, vehicle upgrades and creatures while expanding the story, the studio said. Players can expect more details in the coming months.

But news of the delay comes amid behind-the-scenes drama at Unknown Worlds. Bloomberg reports that the studio had been in line for a $250 million bonus (which the leadership group planned to share with employees) from Krafton if it hit revenue goals by the end of this year. The delay reportedly means Unknown Worlds is very unlikely to hit those targets. As such, Bloomberg‘s sources suggest that means the team of around 100 people may not be eligible for the payout.

Last week, Krafton — which bought Unknown Worlds in 2021 — turfed out the studio’s leadership team of CEO Ted Gill and co-founders Charlie Cleveland and Max McGuire. The publisher brought in Steve Papoutsis, a former executive at The Callisto Protocol developer Striking Distance, as the new CEO of Unknown Worlds.

“There is nothing more important than the gamer experience. Given the anticipation around Subnautica 2, we owe our players nothing less than the best possible game, as soon as possible,” Krafton CEO CH Kim said in a statement. “We are thrilled Steve is joining us in our shared commitment at Krafton and Unknown Worlds to deliver Subnautica 2 as a more complete and satisfying entry in the series — one that truly lives up to player expectations.”

Per Bloomberg, Papoutsis told employees this week that Krafton didn’t believe Subnautica 2 was ready for an early access release and claimed he didn’t know the specifics of the contract regarding the quarter-billion-dollar bonus. “It’s never been told to me that we’re making this change specifically to impact any earnout or anything like that,” he reportedly told staff.

According to Cleveland, however, Subnautica 2 is actually “ready for early access release.” The studio’s co-founder wrote in a lengthy X post on July 5 that “while we thought this was going to be our decision to make, at least for now, that decision is in Krafton’s hands.”

A Krafton spokesperson told Eurogamer that the decision to delay Subnautica 2 was “based solely on our commitment to quality and to delivering the best possible experience for players” and it was not “influenced by any contractual or financial considerations.” They added that “the decision had already been under discussion prior to recent leadership changes at the studio.”

This article originally appeared on Engadget at https://www.engadget.com/gaming/subnautica-2s-early-access-release-delayed-to-2026-amid-developer-drama-123042406.html?src=rss 

Elon Musk spent almost an hour talking about Grok without mentioning its Nazi problem

xAI has officially launched Grok 4 during a livestream with Elon Musk, who called it the “smartest AI in the world.” He said that if you had Grok 4 take the SATs and the GREs, it would get near-perfect results every time and can answer questions it’s never seen before. “Grok 4 is smarter than almost all graduate students in all disciplines simultaneously” and can reason at superhuman levels, he claimed.

Musk and the xAI team showed benchmarks they used for Grok 4, including something called “Humanity’s Last Exam,” which contains 2,500 problems curated by subject matter experts in mathematics, engineering, physics, chemistry, biology, humanities and other topics. When it was first released earlier this year, most models reportedly achieved only single-digit accuracy. Grok 4, the single-agent version of the model, was able to solve around 40 percent of the benchmark’s problems. Grok 4 Heavy, the multi-agent version, was able to solve over 50 percent. xAI is now selling a $300-per-month SuperGrok subscription plan with access to Grok 4 Heavy and new features, as well as higher limits for Grok 4.

The new model is better than PhD level in every subject, Musk said. Sometimes it may lack common sense, he admitted, and it has not yet invented or discovered new tech and physics. But Musk believes it’s just a matter of time. Grok is going to invent new tech maybe later this year, he said, and he would be shocked if it doesn’t happen next year. At the moment, though, xAI is training the AI to be much better at image and video understanding and image generation, because it’s still “partially blind.”

During the event, Musk talked about combining Grok with Tesla’s Optimus robot so that it can interact with the real world. The most important safety thing for AI is for it to be truth-seeking, Musk also said. He likened AI to a “super genius child” who will eventually outsmart you, but which you can shape to be truthful and honorable if you instill it with the right values.

What Musk didn’t talk about, however, is Grok’s recent turn towards antisemitism. In some recent responses to users on X, Grok spewed out antisemitic tropes, praised Hitler and posted what seems to be the text version of the “Roman salute.” Musk did respond to a post on X about the issue, blaming the problem on rogue users. “Grok was too compliant to user prompts,” he wrote. “Too eager to please and be manipulated, essentially. That is being addressed.”

This article originally appeared on Engadget at https://www.engadget.com/ai/elon-must-spent-almost-an-hour-talking-about-grok-without-mentioning-its-nazi-problem-061101656.html?src=rss 
