OpenAI will let adults use ChatGPT for erotica starting in December

OpenAI plans to open the floodgates to more adult uses of ChatGPT starting in December, according to a new post from CEO Sam Altman. The company announced in September that it would add parental controls and automatic age detection features, and it seems one benefit of sorting children from adults is the ability to offer adults more freedom in what ChatGPT can show them.

“In December, as we roll out age-gating more fully and as part of our ‘treat adult users like adults’ principle, we will allow even more, like erotica for verified adults,” Altman says. Some avid ChatGPT users already regularly manipulate the chatbot into NSFW conversations, but Altman’s announcement sounds more like tacit approval from OpenAI that those use cases are okay.

We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues. We realize this made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right.

Now that we have…

— Sam Altman (@sama) October 14, 2025

The company signaled something similar during its DevDay 2025 announcements, when its new guidelines for developers creating apps for ChatGPT noted that “support for mature (18+) experiences will arrive once appropriate age verification and controls are in place.” After December, it sounds like adult interactions with ChatGPT or apps the chatbot can access are fair game.

All of these changes are being made in the shadow of disturbing stories of the seemingly negative influence ChatGPT can have on users, including the death of 16-year-old Adam Raine, who allegedly used ChatGPT to plan his own suicide.

Reducing the chatbot’s sycophantic qualities with the release of GPT-5 was one of the ways OpenAI tried to address the mental health impacts of ChatGPT, along with built-in notifications to remind users to take breaks. It’s hard to definitively say whether these tweaks have made a difference, but combined with age-gating, it’s clear OpenAI feels comfortable giving its chatbot a longer leash.

This article originally appeared on Engadget at https://www.engadget.com/ai/openai-will-let-adults-use-chatgpt-for-erotica-starting-in-december-182417583.html?src=rss 

OpenAI forms advisory council on wellbeing and AI

OpenAI announced today that it is creating an advisory council centered on its users’ mental and emotional wellness. The Expert Council on Well-being and AI comprises eight researchers and experts on the intersection of technology and mental health. Some of the members were experts that OpenAI consulted as it developed parental controls. Safety and the protection of younger users have become talking points for all artificial intelligence companies, including OpenAI, after lawsuits questioned their complicity in multiple cases where teenagers died by suicide after sharing their plans with AI chatbots.

This move sounds like a wise addition, but the effectiveness of any advisor hinges on listening to their insights. We’ve seen other tech companies establish and then utterly ignore their advisory councils; Meta is one of the notable recent examples. And the announcement from OpenAI even acknowledges that its new council has no real power to guide its operations: “We remain responsible for the decisions we make, but we’ll continue learning from this council, the Global Physician Network, policymakers, and more, as we build advanced AI systems in ways that support people’s well-being.” How seriously OpenAI is taking this effort may only become clear the first time it disagrees with the council; that will reveal whether the company is genuinely committed to mitigating the serious risks of AI or whether this is a smoke-and-mirrors attempt to paper over its issues.

This article originally appeared on Engadget at https://www.engadget.com/openai-forms-advisory-council-on-wellbeing-and-ai-183815365.html?src=rss 

SteelSeries’ updated Nova 7 gaming headset offers much better battery life

SteelSeries just released a refresh of its popular Arctis Nova 7 midrange gaming headset. The Nova 7 Gen 2 offers significantly improved battery life, with an increase of around 40 percent when compared to the original version. This translates to 54 hours of use per charge, which is a mighty fine metric. There’s also quick-charging that can provide six more hours of use from a 15-minute charge.

It charges via USB-C and can simultaneously play audio from both a 2.4GHz wireless connection and Bluetooth. This means that users can mix and match audio sources, which comes in handy when gaming. The headset connects wirelessly to all major gaming consoles, in addition to PCs and phones.

The Arctis Nova 7 Gen 2 integrates with the company’s mobile app. This offers more than 200 presets for specific games, all of which adjust the EQ to match what’s being played. On PC, the presets will automatically adjust depending on the game.

These are headphones intended for gaming, so they also include a noise-cancelling microphone. This little boom mic can be hidden within the headset and disabled when using a standalone microphone.

As for aesthetics, the Nova 7 looks a lot like the company’s high-end headsets. It’s available in three colors: black, white and magenta. The headset is available to purchase right now and costs $200. It does lack a few features included with its higher-priced cousins, like the swappable battery system and active noise cancellation.

This article originally appeared on Engadget at https://www.engadget.com/audio/headphones/steelseries-updated-nova-7-gaming-headset-offers-much-better-battery-life-171435527.html?src=rss 

X experiments with showing more information about profiles to fight inauthentic engagement

X has long been a hotbed for fake accounts, bots and other scammy behavior. Many of those dynamics have been exacerbated by the rise of paid verification, which boosts the visibility of anyone who pays for a subscription. Now, the company is running a small experiment that could help users better identify potentially suspicious accounts.

The service is starting to test a new “about this account” feature that will provide details about when an account joined the platform, where the person running it is based, how many times the username has been changed and how the account is connected to X. The feature is a lot like the “page transparency” information on Facebook, which provides similar details about when a given page was created and where the people running it are based. 

“When you read content on X, you should be able to verify its authenticity,” X’s head of product, Nikita Bier, shared in a post about the change. “This is critical to getting a pulse on important issues happening in the world.” 

If fully rolled out, this type of feature could help people on X spot many common scams and other deceptive behavior on the platform. For example, scammers often change the handle of a recently compromised account in order to trick the account’s existing followers. And knowing the location of an account could help users root out people lying about their identity.

However, it sounds like it could be some time before the feature is implemented in a way that could be broadly useful. Bier said that initially X will show this info on “a handful of profiles of X team members” — most of whom already have an official “X” badge on their profiles — in order to get feedback on the change.

This article originally appeared on Engadget at https://www.engadget.com/social-media/x-experiments-with-showing-more-information-about-profiles-to-fight-inauthentic-engagement-172500501.html?src=rss 

Spotify’s managed accounts will help keep your kids from wrecking your music taste profile

Spotify has introduced a new “managed accounts” feature aimed at younger listeners. Initially piloted last year, it launches today in seven new markets, including the US, and allows parents and guardians with a Spotify Premium Family plan to give their children a dedicated profile with its own personalized recommendations and custom playlists.

The idea is that the adults can filter out explicit content, limit the playback of certain artists and hide video playback features, including Canvas, should they want to. Users on this profile can’t use interactivity features like Messages either. 

Perhaps most importantly for some, a managed account also ensures that your personal Wrapped results at the end of the year aren’t dominated by whatever TikTok-viral songs the kids have been obsessively playing on repeat for months — and they won’t mess with your Discover Weekly algorithm either. Spotify’s ‘Exclude from your Taste Profile’ feature already offers a way of keeping the nonsense your kids might be listening to away from your own recommended content, but this feels like a cleaner option for families.

Standard Spotify Premium features like daylist and the aforementioned Discover Weekly remain available to someone using a managed account, making it a better option for kids becoming interested in music (they may even have gotten hooked on a band you’ve been listening to in the car) than the Spotify Kids app, which is very much designed for the ‘Baby Shark’ devotees. It’s probably helpful to think of a managed account as a bridge between that and an unrestricted Premium account where all the music in the world is at your fingertips.

To set up a managed account, the plan owner has to go into their account settings within the Spotify app and select “Add a Member,” followed by selecting “Add a listener aged under 13.” The app will provide further instructions from there. As a reminder, a Spotify Premium Family plan is required to set up a managed account. This currently costs $20 per month.

This article originally appeared on Engadget at https://www.engadget.com/entertainment/music/spotifys-managed-accounts-will-help-keep-your-kids-from-wrecking-your-music-taste-profile-154406843.html?src=rss 

SpaceX is preparing the next-gen Starship after a successful flight test

SpaceX’s second-generation Starship vehicle has just made a graceful exit. The company achieved every major objective it set for the super-heavy lift vehicle’s 11th flight test, the second-gen Starship’s final flight, which launched from Starbase in Texas on October 13. It followed another successful test in August, which saw Starship deploy its payload for the first time ever. Before those two most recent flights, SpaceX suffered a series of failures: Starship exploded during its ascent stage in the company’s seventh and eighth tests, and it failed to deploy its payload during its ninth test. Another Starship vehicle blew up on the ground during a routine test while SpaceX was preparing for its 10th flight. 

All of the vehicle’s 33 Raptor engines ignited upon launch, and the stage separation and first-stage ascent went smoothly. The Super Heavy booster splashed down into the ocean as planned, while Starship was able to deploy all its Starlink simulators before re-entering the atmosphere. During its reentry burn, SpaceX intentionally stressed the vehicle to determine the capabilities of its heatshield. And with just a few minutes left in the flight, the vehicle executed a banking maneuver to “mimic the trajectory that future missions returning to Starbase will fly.”

The company says it will now focus on developing the next generation of Starship and Super Heavy. It has multiple versions of the vehicle and the booster being prepared for tests at the moment, and it expects them to be used for the first Starship orbital flights and operational payload missions. 

Watch Starship’s eleventh flight test → https://t.co/YmvmGZTV8o
https://t.co/zIRMX5mh9K

— SpaceX (@SpaceX) September 29, 2025

This article originally appeared on Engadget at https://www.engadget.com/science/space/spacex-is-preparing-the-next-gen-starship-after-a-successful-flight-test-130027382.html?src=rss 

Instagram makes ‘teen accounts’ more restrictive

Instagram is tightening the settings on its “teen accounts” to add new limits on what kids on the platform are able to see. Older teens will also no longer be able to opt out of the default stricter settings without parental approval. 

Meta first introduced teen accounts for Instagram a year ago, when it began automatically moving teens into the more locked-down accounts that come with stricter privacy settings and parental controls. The company recently rolled out the accounts for teens on Facebook and Messenger too, and has used AI tools to detect teens that are lying about their age. 

While teen accounts are meant to address long-running criticism about Meta’s handling of teen safety on its apps, the measures have been widely criticized as not going far enough to protect the company’s most vulnerable users. A recent report from safety advocates at Heat Initiative found that “young teen users today continue to be recommended or exposed to unsafe content and unwanted messages at alarmingly high rates while using Instagram Teen Accounts.” (Meta called the report “deeply subjective.”) 

Now, Meta is locking down teen accounts even more. With the latest changes, teens will no longer be able to follow or see content from accounts that “regularly share age-inappropriate content” or that seem “age-inappropriate” based on their bio or username. Meta says it will also block these accounts from appearing in teens’ recommendations or in search results in the app. 

Instagram will also block “a wider range of mature search terms” for teens, including words like “alcohol” and “gore,” as well as intentional misspellings of those words, a common tactic for evading Instagram’s filters. And if an account a teen already follows shares a post that goes against these rules, teens should be prevented from seeing it, even if it’s sent to their DMs.

Instagram will block teens from searching for more terms associated with inappropriate content. (Image: Meta)

While these changes may seem like Meta once again filling somewhat obvious gaps in its safety features, the company says the revamp is meant to make the content teens encounter on Instagram more like a PG-13 movie. “Just like you might see some suggestive content or hear some strong language in a PG-13 movie, teens may occasionally see something like that on Instagram – but we’re going to keep doing all we can to keep those instances as rare as possible,” the company explained in a blog post.

That’s a somewhat confusing analogy, as there’s a fairly wide spectrum of what might appear in a PG-13 movie. Meta also says that some of its rules for teens are more restrictive than the PG-13 standard. For example, the app aims to prevent teens from seeing any kind of “sexually suggestive” content or images of “near nudity,” even though that type of content might appear in movies rated for 13-year-olds.

For parents who want even tighter restrictions, Instagram is also adding a new “limited content” setting that filters “even more” content from teens’ view (Meta didn’t explain what exactly would be restricted). The setting also prevents teens from accessing any comments on the platform, whether on their own posts or other users’. Finally, Meta is testing a new reporting feature for parents who use Instagram’s parental control settings to monitor their teens’ use of the app. With the feature, parents can flag specific posts they feel are inappropriate to trigger a review by Meta.

Meta says the latest changes will be rolling out “gradually” to teen accounts in the US, UK, Canada and Australia to start and that it will eventually “add additional age-appropriate content protections for teens on Facebook.” 

This article originally appeared on Engadget at https://www.engadget.com/social-media/instagram-makes-teen-accounts-more-restrictive-120000653.html?src=rss 
