VICTORY! Apple Extends Key Safety Features to 13- to 17-Year-Olds
Jordan. Gavin. Zach. Mason. Walker. Carson. These are names of teens who have died because of harms they encountered online. And there are many, many more.
Whether it be dying by suicide after experiencing sextortion, not surviving a dangerous Internet challenge, or being sold deadly drugs on online platforms … whatever the details of their story, what they all have in common is that they are no longer with us.
But there is another thing these teens have in common. They were deemed “too old” for many of the protections Big Tech offers children.
The 1998 Children’s Online Privacy Protection Act (COPPA) has been abused by Big Tech for years, used as an excuse to treat minors aged 13 and older as “digital adults.” This is maddening, especially since research suggests this age group is often the most at risk of being harmed online.
This is why NCOSE and Protect Young Eyes have long advocated for Apple and Google to enable default tech safety features for ALL minors. And with your support, Apple has finally turned on web content filters and automatic blurring of nude images in messages for 13- to 17-year-olds!
Ending Sexploitation Podcast: Pornography's Influence on Desire
Haley McNamara (SVP at NCOSE) and Dani Pinter (SVP and Director of the Law Center at NCOSE) take a closer look at the impact pornography use has on our sexual arousal template. Specifically, they discuss studies demonstrating the rise in violent sexual behaviors like choking or strangulation. Experiments have been able to condition men to be aroused by shapes and inanimate objects, which suggests that a person exposed to pornography can likewise be conditioned to be aroused by the harmful behaviors it depicts.
Listen to Haley and Dani discuss this issue on the Ending Sexploitation Podcast, available on YouTube, Apple Podcasts, Spotify, or any of your favorite podcast platforms.
X’s Track Record of Perpetuating Sexual Exploitation Disqualifies It from Creating a Child-Focused App
In a post on X last week, Elon Musk said his artificial intelligence startup, xAI, will make a "kid-friendly" AI chatbot called "Baby Grok." Musk did not provide any further details, but given that the X platform has been particularly reluctant to implement safety changes to protect children, NCOSE vehemently opposes this idea.
xAI rolled out its chatbot, Grok, earlier this month; it features sexually explicit themes despite being accessible to children and rated 12+ on the Apple App Store.
“Fresh off X’s launch of a ‘NSFW’ AI chatbot that children can access is a ridiculous suggestion that X will launch a supposedly child-friendly app. X has no track record whatsoever of prioritizing child safety and should halt any plans to court children,” said Haley McNamara, Senior Vice President of Strategic Initiatives and Programs, National Center on Sexual Exploitation.
“X allows pornography on its platform, which provides a foundation for sexually exploitative content, child sexual abuse material, sex trafficking, and other nonconsensual content to flourish. Is X even currently taking action to prevent children from accessing pornography on its site? Unless and until X prioritizes child safety on its current platform by removing the ‘NSFW’ AI chatbot and by changing policies to forbid pornography, X cannot be trusted to provide a safe app for children,” McNamara said.
NCOSE and 170+ Organizations Call for AI Safety
This week, NCOSE and 170+ organizations released a joint letter opposing efforts to revive a moratorium on state AI laws. The letter was featured in Politico and was sent to key leaders on Capitol Hill.
Shortly after the letter’s publication, the White House AI action plan was released and did NOT contain a moratorium on state AI laws! However, the action plan did include vague language about restricting AI-related federal funding for states if their laws are "unduly restrictive to innovation."
The fight for AI safety is far from over! We continue to call for bipartisan, commonsense safeguards in the AI and emerging technology space that balance innovation with safety and the prevention of sexual exploitation.