When built with the right values, nonprofits are proving AI can heal rather than harm.
View in browser ([link removed] )
June 3rd, 2025 // Did someone forward you this newsletter? Sign up to receive your own copy here. ([link removed] )
Meet the AI Trailblazers Transforming Youth Wellbeing
When it comes to the frontier of AI development, Big Tech companies like OpenAI, Anthropic, and Google get all of the attention.
But when it comes to responsible AI, nonprofits are at the forefront.
Those seeking best practices and pioneering examples of how organizations are blending cutting-edge technology with human values and principles around data privacy and online safety need look no further than the growing movement of AI-powered nonprofits. These organizations aren’t just leading the conversation; they’re applying innovative technologies to solve real-world problems.
In this newsletter, we focus on an emerging trend and uncover bright spots: how small but fast-growing nonprofits are leveraging AI to build a safer internet.
// The Rise of AI Nonprofits
Fast Forward ([link removed] ) , a member of the Project Liberty Alliance, is a nonprofit that provides resources and funding to build the tech and AI-powered nonprofit sector. Fast Forward hosts a nonprofit accelerator, which has supported over 100 mission-driven organizations that have improved the lives of more than 262 million people. In the past year, Fast Forward has observed a remarkable shift in the landscape of AI-powered nonprofits ([link removed] ) .
In a series of articles in the Stanford Social Innovation Review ([link removed] ) last year, Fast Forward reported a 600% increase in AI-powered nonprofit applicants to its program compared to the year prior, and identified four major themes in how AI is deployed:
- Structuring Data: Supercharging what humans can do with data.
- Advising: Scaling human-to-human, high-touch experiences.
- Translating: Translating language and decoding information.
- Platforms: AI infrastructure that equips nonprofits to deploy their own AI tools.
Kevin Barenblat ([link removed] ) , Fast Forward’s Co-Founder and President, said: “On average, nonprofits feel like they get the benefits of many new technologies last. But with AI, it's actually the nonprofits that are at the forefront of thinking about AI responsibly—from the way they think about their data and privacy and bias to the kinds of structures and guardrails they create.”
Kevin Barenblat, Fast Forward Co-Founder & President
Project Liberty teamed up with Fast Forward to interview the leaders of two AI-powered nonprofits in the youth mental health field. Below we highlight their stories and their models, exploring how ethical AI use can improve youth mental health.
Lenny Learning
// Personal Story
Ting Gao, Co-founder and CEO of Lenny Learning ([link removed] ) , said that her entire life story has culminated in what Lenny does. Growing up, her brother had an undiagnosed and untreated mental illness. Despite being raised in the same home, they went on different paths, in part because he never got the right mental health resources at the right time. This idea of the right resources at the right time stuck with her.
Ting Gao, Co-Founder & CEO of Lenny Learning
After university, Gao was working on a startup with business partner Bryce Bjork. In 2020, Bjork lost his brother to suicide. Gao and Bjork started incubating a project ([link removed] ) focused on mental health literacy—called Brain Health Bootcamp—through Bjork's family foundation. After two years of validating, piloting, and building, they went full-time and evolved the model into what would become Lenny Learning.
// The Problem
The teachers, counselors, and administrators in a school system are on the front lines of supporting the mental health of young people. The problem is that they’re often under-equipped and overwhelmed. Gao said that the average counselor-to-student ratio in the United States is 1:385 (one counselor serves an average of 385 students). Meanwhile, nine in ten teachers don’t feel equipped to address mental health issues and respond effectively. These adults interact with students every day, but they often lack the right resources at the right time.
// The Solution
Lenny is an AI-enabled behavioral health platform. It provides a toolkit of lessons, interventions, family engagement, and analytics that are aligned with proven research and best practices.
Teachers and counselors can log in to the platform and explain to the AI-powered system the issue or challenge they’re facing. Trained on a library of evidence-based methodologies, Lenny creates a set of personalized resources and tools for the educator. It delivers evidence-based lesson plans, creates targeted interventions, and assesses student needs.
// The Model & Traction
Lenny is a nonprofit, but it sells access to its platform to school districts nationwide. It raises philanthropic funding to provide the platform to under-resourced districts at no cost. It also offers free access for individual educators, allowing anyone to sign in ([link removed] ) and try it at no charge.
Lenny grew by 10x in 2024 alone. It is now in 700 schools across every state, reaching more than 300,000 students. Gao and Bjork's vision is to be in every school in the country by 2030.
Koko
// Personal Story
Dr. Rob Morris, the founder of Koko ([link removed] ) , has always been fascinated by psychology and the study of the mind. While pursuing his PhD at MIT focused on digital mental health, he found himself struggling with his own mental health. So he started to build a tool just for himself. “It was bespoke to me. If I could help myself, maybe I could build something that would help others, too,” he said. That was the genesis of Koko. Today, Koko has reached more than four million people by working with digital platforms to provide free, evidence-based mental health interventions.
// The Problem
A 2023 CDC survey ([link removed] ) highlighted the depth of the youth mental health crisis in the United States:
- One in three students experiences poor mental health most of the time.
- One in three students has felt persistent sadness or hopelessness for two weeks or more during a 12-month period.
- One in five students has seriously considered attempting suicide during a 12-month period.
- One in ten students has attempted suicide during a 12-month period.
Meanwhile, American teens ages 13-18 spend an average of 5.6 hours per day ([link removed] ) on their phones.
// The Solution
The philosophy behind Koko is to be pragmatic. “I don't think it's realistic that we're going to shut these platforms down entirely,” Dr. Morris said. “You have to go where the millions of eyeballs already are.”
This means that Koko’s tech adapts to the specific architecture of each social platform. If a user is searching for content on a social media platform that suggests they might need help (such as searching for pro-anorexia content), Koko uses AI to work with the platform to suppress that content and redirect the user to evidence-based resources.
- On Discord, the model is different. Over 24,000 Discord servers have installed Koko’s AI bot, allowing Discord users on that server to chat with a Koko chatbot and get support.
- On Instagram, users might come across a Koko video in their feed. When they click on the video, it opens a WhatsApp chat where users can interact with Koko resources.
- On TikTok, Koko uses hundreds of thousands of dollars of ad credits provided by the platform to direct young people to mental health services.
// The Future
Today, a young person completes an intervention via Koko every 90 seconds (an intervention is an instance in which a user completes a course, receives peer support, or is connected to a crisis line). Dr. Morris's goal is to scale to a point where someone completes an intervention every second of every day. So far, users have completed 738,000 interventions. To scale, Koko is adding new platforms and expanding its work with university partners (it has completed multiple randomized controlled trials at MIT ([link removed] ) ).
// Building a Responsible Tech Ecosystem
One hallmark of a responsible AI organization (nonprofit or otherwise) is its relationship to data—both what it collects and what it doesn’t. In our conversations with Lenny Learning and Koko, the data they chose not to collect was indicative of their values around data privacy.
- Lenny intentionally doesn’t collect any student data on its platform; it interacts only with adult educators.
- Koko’s data collection is minimal: The only data collected automatically is the social platform the user came from. Koko doesn’t know the IP address, username, or search history. Data privacy is just one of their ethical commitments ([link removed] ) .
Lenny and Koko are two of dozens of AI nonprofits in Fast Forward’s portfolio ([link removed] ) and in the Project Liberty Alliance ([link removed] ) that are building a safer, better internet.
We’d love to hear from you. What organizations are at the forefront of responsible AI development? Who is responsibly using AI to build The People’s Internet?
Project Liberty updates
// Project Liberty Institute is announcing a strategic partnership with VentureESG and ImpactVC at SuperVenture 2025 in Berlin this week.
Together, these networks of over 1,200 leading VCs and LPs are committed to integrating ESG principles and advancing responsible investment in data and AI. The first joint initiative: a sector-wide survey to benchmark current practices and chart a path toward more accountable, resilient, and future-ready investment models. Learn more here ([link removed] ) .
Other notable headlines
// 📱 A new law in Texas requires Apple and Google to verify ages for app downloads, giving parents more control over the apps that minors use, according to an article in the New York Times ([link removed] ) . (Paywall).
// 🖥 The AI browser wars are about to begin. Artificial intelligence is already writing an obituary for the internet as we know it. An article in Platformer ([link removed] ) asks: Why is everyone building new web browsers? (Free).
// 🤖 An article in The New Yorker ([link removed] ) explored the two paths forward for AI. The technology is complicated, but our choices are simple: we can remain passive, or assert control. (Paywall).
// ⛪ An article in Noema Magazine ([link removed] ) considered how Pope Leo can address distributive justice in the age of AI. (Free).
// 🧠 According to a new study, more than half of the top 100 mental health TikToks contain misinformation. An investigation by The Guardian ([link removed] ) reveals the promotion of dubious advice, questionable supplements, and quick-fix healing methods. (Free).
// 🇧🇷 In a world first, Brazilians will soon be able to sell their digital data. According to an article in Rest of World ([link removed] ) , Brazil is piloting dWallet, a project that lets citizens earn money from their data. (Free).
Partner news & opportunities
// Negotiating a Future with AI and Us
June 9 | 6:30-9:00pm ET | Betaworks, New York, NY
Join thought leaders for an immersive evening at Betaworks examining how AI is reshaping our perceptions, work, and decision-making. This interactive event hosted by All Tech Is Human ([link removed] ) and Andus Labs invites participants to reflect, re-orient, and envision a more human-centered technological future. Space is limited—register here ([link removed] ) .
// Vana Academy: Accelerator for Data-Driven Startups
Applications close June 4 (tomorrow!)
Vana ([link removed] ) is accepting final applications for its nine-week accelerator focused on building businesses powered by human data. Participants will learn to create DataDAOs—decentralized data organizations—with real user contributions, token models, and go-to-market strategies. Teams can earn up to $5,000 and pitch to top investors. Apply here ([link removed] ) .
// Humanity Hardwired: A New Voice from an Early Ally
Omidyar Network ([link removed] ) , one of the Project Liberty Alliance’s first partners, has launched Humanity Hardwired, a new monthly newsletter exploring the culture, governance, and business of tech with a human-first lens. This month’s edition covers the proposed federal ban on state AI laws, recent antitrust developments, and tips for working with employees to harness AI. Subscribe here ([link removed] ) .
What did you think of today's newsletter?
We'd love to hear your feedback and ideas. Reply to this email.
/ Project Liberty builds solutions that help people take back control of their lives in the digital age by reclaiming a voice, choice, and stake in a better internet.
Thank you for reading.
Facebook ([link removed] )
LinkedIn ([link removed] )
Instagram ([link removed] )
10 Hudson Yards, Fl 37,
New York, New York, 10001
Unsubscribe ([link removed] ) Manage Preferences ([link removed] )
© 2025 Project Liberty LLC