Gypsy Rose Blanchard Will Reportedly Be Released From Prison 3 Years Early

The woman who inspired Hulu’s ‘The Act’ is set to be released from prison three years early, at the end of 2023.

Chris Harrison’s Future After Sudden ‘Bachelor’ Nation Exit: What’s He Doing Now

One of the most dramatic moments in the history of ‘The Bachelor’ is Chris Harrison’s controversial exit. Here’s what Chris is doing now amid the premiere of ‘The Golden Bachelor.’

Apple asks Supreme Court to reverse App Store ruling in Epic case

As expected, Apple is making a last-ditch effort to get the Supreme Court to reverse a ruling that would force it to open up its App Store to third-party payments. The iPhone maker filed a petition with the Court Thursday, arguing that the lower court injunction was “breathtakingly broad” and “unconstitutional.”

It’s the latest beat in a long-simmering feud between Cupertino and the Fortnite developer that’s seen both sides ask the Supreme Court to reverse parts of a lower court ruling. But Apple’s latest petition could have far-reaching consequences for all developers, should the Supreme Court decide to take up the case.

That’s because Apple is asking the Supreme Court to reverse an injunction that would require the company to allow app developers to offer payments that circumvent its App Store, and the fees associated with it. Such a move would be a major blow to the App Store’s business, which has used the rule to maintain strict control over in-app payments.

The rule, often referred to as an “anti-steering” policy, has long been controversial and a major gripe for developers. It not only prohibits app makers from providing links to web-based payments, it bars them from even telling their customers that a cheaper rate was available somewhere else.

Fortnite developer Epic made the issue a central part of its antitrust lawsuit against Apple in 2020, and the judge in the case ruled in Epic’s favor on the issue in 2021. Apple has spent the last two years fighting that part of the ruling.

Separately, Epic has also asked the Supreme Court to reconsider part of the lower court’s ruling in its bid to keep its antitrust claims against Apple alive.

This article originally appeared on Engadget at https://www.engadget.com/apple-asks-supreme-court-to-reverse-app-store-ruling-in-epic-case-221126323.html?src=rss 

Kylie Jenner Channels Marilyn Monroe in Sequined Gown for Paris Fashion Week: Video

The TV personality glistened in a skin-tight sequined gown while at the Schiaparelli show amid Paris Fashion Week on September 28.

Theresa Nist: 5 Things to Know About ‘The Golden Bachelor’ Contestant

Theresa Nist is one of the lucky ladies hoping to find love with ‘The Golden Bachelor’ star Gerry Turner. Get to know the contestant as she searches for love after loss.

Meta’s metaverse is getting an AI makeover

Meta’s Connect keynote felt different this year, and not just because it marked the return of an in-person event. It’s been nearly two years since Mark Zuckerberg used Connect to announce that Facebook was changing its name to Meta and reorienting the entire company around the metaverse.

But at this year’s event, it felt almost as if Zuckerberg was trying to avoid saying the word “metaverse.” While he did utter the word a couple of times, he spent much more time talking up Meta’s new AI features, many of which will be available on Instagram and Facebook and other non-metaverse apps. Horizon Worlds, the company’s signature metaverse experience that was highlighted at last year’s Connect, was barely mentioned.

That may not be particularly surprising if you’ve been following the company’s metaverse journey lately. Meta has lost so much money on the metaverse that its own investors have questioned the strategy. And Zuckerberg has been mercilessly mocked for trying to hype seemingly minor metaverse features like low-res graphics or avatars with legs.

AI, on the other hand, is much more exciting. The rise of large language models has fueled a huge amount of interest from investors and consumers alike. Services like OpenAI’s ChatGPT, Snap’s My AI and Midjourney have made the technology accessible — and understandable — to millions.

Given all that, it’s not surprising that Zuckerberg and Meta used much of Connect — once known solely as a virtual reality conference — to talk about the company’s new generative AI tools. And there was a lot to talk about: the company introduced Meta AI, a generative AI assistant, which can answer questions and take on the personality of dozens of characters; AI-powered image editing for Instagram; and tools that will enable developers, creators and businesses to make their own AI-powered bots. AI will even play a prominent role in the company’s new hardware, the Meta Quest 3 and the Ray-Ban Meta smart glasses, both of which will ship with the Meta AI assistant.

But that doesn’t mean the company is giving up on the metaverse. Zuckerberg has said the two are very much linked, and has previously tried to dispel the notion that Meta’s current focus on AI has somehow supplanted its metaverse investments. “A narrative has developed that we’re moving away from focusing on the metaverse vision,” Zuckerberg said in April. “We’ve been focusing on both AI and the metaverse for years now, and we will continue to focus on both.”

But at Connect he offered a somewhat different pitch for the metaverse than he has in the past. Over the last two years, Zuckerberg spent a lot of time emphasizing socializing and working in VR environments, and the importance of avatars. This year, he pitched an AI-centric metaverse.

“Pretty soon, I think we’re going to be at a point where you’re going to be there physically with some of your friends, and others will be there digitally as avatars or holograms, and they’ll feel just as present as everyone else. Or, you know, you’ll walk into a meeting and you’ll sit down at a table, and there will be people who are there physically and people who are there digitally as holograms. But also sitting around the table with you are gonna be a bunch of AIs who are embodied as holograms, who are helping you get different stuff done too. So I mean, this is just a quick glimpse of the future and how these ideas of the physical and digital come together into this idea that we call the metaverse.”

Notably, the addition of AI assistants could also make “the metaverse” a lot more useful. One of the more intriguing features previewed during Connect was Meta AI-powered search in the Ray-Ban Meta smart glasses. The Google Lens-like feature would enable wearers to “show” the AI things they are seeing through the glasses and ask questions about them, like asking Meta AI to identify a monument or translate text.

It’s not hard to imagine users coming up with their own use cases for AI assistants in Meta’s virtual worlds, either. Angela Fan, a research scientist with Meta AI, says generative AI will change the type of experiences people have in the metaverse. “It’s almost like a new angle on it,” Fan tells Engadget. “When you’re hanging out with friends, for example, you might also have an AI looped in to help you with tasks. It’s the same kind of foundation, but brought to life with the AIs that will do things in addition to some of the friends that you hang out with in the metaverse.”

For now, it’s not entirely clear just how long it will be before these new AI experiences reach the metaverse. The company said the new “multi-modal” search capabilities would be arriving on its smart glasses sometime next year. And it didn’t give a timeframe for when the new “embodied” AI assistants could be available for metaverse hangouts.

It’s also not yet clear if the new wave of AI assistants will be popular enough to fuel a renewed interest in the metaverse to begin with. Meta previously tried to make (non-AI) chatbots a thing in 2016 and the effort fell flat. And even though generative AI makes the latest generation of bots much more powerful, the company has plenty of competition in the space. But by putting its AI into its other apps now, Meta has a much better chance at reaching its billions of users. And that could lay important groundwork for its vision for an AI-centric metaverse.

This article originally appeared on Engadget at https://www.engadget.com/metas-metaverse-is-getting-an-ai-makeover-194004996.html?src=rss 

Google opens its AI-generated search experience to teens

Google is opening its AI-powered search experience to teens. In addition, the company’s Search Generative Experience (SGE) is adding new context pages to shed light on generated responses and individual web links within answers.

The company is opening its search-based AI tool to US teenagers between 13 and 17. Google says it received “particularly positive feedback” from 18- to 24-year-olds who tested SGE, which influenced its decision. (Younger people being more open to AI isn’t exactly a shock, given older adults’ tendency to be more suspicious of new technologies.) SGE has been available as part of Google Search Labs since late May.

Google says it has added safeguards to prevent inappropriate or harmful content based on its research with experts in teen development. “For example, we’ve put stronger guardrails in place for outputs related to illegal or age-gated substances or bullying, among other issues,” the company wrote on Thursday. Google says it will continue to gather feedback and work with specialists to fine-tune SGE for teens.

Starting today, the company is also adding an “About this result” tool to SGE responses, helping users understand how the AI settled on its answers. Soon, it will also produce “About this result” responses for individual URLs within AI-generated answers “so people can understand more about the web pages that back up the information in AI-powered overviews.”

To help newcomers understand generative AI, Google has published an AI Literacy Guide, serving as a welcome manual to SGE and other AI projects like Bard. It includes tips, FAQs and discussions about its capabilities and limitations.

Finally, Google says it’s making “targeted improvements” to AI-powered results that are false or offensive. It’s rolling out an update to train the AI model to better detect “hallucinations” or inappropriate content. (Chatbots spreading misinformation has been an issue from the get-go.) The company is also working on using large language models to “critique” their first draft responses and rewrite them with quality and safety in mind.

“Generative AI can help younger people ask questions they couldn’t typically get answered by a search engine and pose follow-up questions to help them dig deeper,” the company wrote. “As we introduce this new technology to teens, we want to strike the right balance in creating opportunities for them to benefit from all it has to offer, while also prioritizing safety and meeting their developmental needs.”

This article originally appeared on Engadget at https://www.engadget.com/google-opens-its-ai-generated-search-experience-to-teens-201357386.html?src=rss 

Google will let publishers hide their content from its insatiable AI

Google has announced a new robots.txt control that lets publishers decide whether their content will “help improve Bard and Vertex AI generative APIs, including future generations of models that power those products.” The control is a crawler token called Google-Extended, and publishers can add rules for it to their site’s robots.txt file to tell Google not to use their content for those two products. In its announcement, the company’s vice president of Trust, Danielle Romain, said Google has “heard from web publishers that they want greater choice and control over how their content is used for emerging generative AI use cases.”

Romain added that Google-Extended “is an important step in providing transparency and control that we believe all providers of AI models should make available.” As generative AI chatbots grow in prevalence and become more deeply integrated into search results, the way content is digested by things like Bard and Bing AI has been of concern to publishers. 

While those systems may cite their sources, they do aggregate information that originates from different websites and present it to the users within the conversation. This might drastically reduce the amount of traffic going to individual outlets, which would then significantly impact things like ad revenue and entire business models.

Google said that when it comes to training AI models, the opt-outs will apply to the next generation of models for Bard and Vertex AI. Publishers looking to keep their content out of products like Search Generative Experience (SGE) should continue to use existing controls, such as disallowing the Googlebot user agent in robots.txt or applying the noindex robots meta tag to their pages.
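For illustration, here is a minimal sketch of what such a robots.txt might look like (the blocked path is hypothetical; the Google-Extended token itself is the one Google announced):

```
# Opt this site's content out of use for Bard and Vertex AI
User-agent: Google-Extended
Disallow: /

# Separately, keep a section out of Google Search (and thus SGE)
User-agent: Googlebot
Disallow: /members-only/
```

Per Google, the Google-Extended token only governs generative AI use; it does not affect whether pages are crawled for, or ranked in, regular Search results.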

Romain points out that “as AI applications expand, web publishers will face the increasing complexity of managing different uses at scale.” This year has seen an explosion in the development of tools based on generative AI, and with search being such a huge way people discover content, the state of the internet looks set to undergo a huge shift. Google’s addition of this control is not only timely, but indicates it’s thinking about the way its products will impact the web.

Update, September 28 at 5:36pm ET: This article was updated to add more information about how publishers can keep their content out of Google’s search and AI results and training.

This article originally appeared on Engadget at https://www.engadget.com/google-will-let-publishers-hide-their-content-from-its-insatiable-ai-202015557.html?src=rss 

Looks like NVIDIA got raided by French antitrust authorities

At dawn on Wednesday, French antitrust authorities conducted a surprise raid on a company in the country that specializes in graphics cards — and according to The Wall Street Journal and Challenges business magazine, that company was NVIDIA. We reached out to NVIDIA for clarification and a spokesperson declined to comment. Here’s what we know for sure:

The French Competition Authority conducted a surprise raid early Wednesday morning on “a company suspected of having implemented anticompetitive practices in the graphics cards sector,” according to a brief press release from the regulator. The raid was tied to a larger investigation into the health of the cloud computing market, with a focus on identifying whether new companies were being unfairly squeezed out by larger, existing ones. The results of that investigation were published in June and they centered on three “hyperscalers,” Amazon Web Services, Google Cloud and Microsoft Azure. 

The results read, in part, “The likelihood of a new operator being able to gain market share rapidly appears limited, excluding companies who are already powerful in other digital markets.” NVIDIA is not mentioned in the original cloud investigation.

NVIDIA has seen significant financial success this year amid the AI boom. NVIDIA’s AI chips and data centers are in high demand, and the company crushed its most recent earnings expectations, pulling in $13.51 billion in revenue in the second quarter of 2023, compared with $6.7 billion in the same quarter a year earlier.

As the French Competition Authority noted, a raid does not mean the targeted company is guilty of anticompetitive practices — but it’s a confident step from the regulatory body.

This article originally appeared on Engadget at https://www.engadget.com/looks-like-nvidia-got-raided-by-french-antitrust-authorities-205809329.html?src=rss 
