The GUARD Act Would Add Desperately Needed Protections for Children Using AI
Theodore (a pseudonym) was a kind and thoughtful teenage boy. According to his parents, he loved nature and playing with his siblings, and he was always eager to help out around the house. Within months of his starting to use Character.AI’s chatbot, the Theodore that his family and friends knew disappeared.
He suffered daily panic attacks, became socially isolated, and had frequent thoughts of harming himself and others. He grew physically aggressive, and one day he became so upset with his family that he cut his arm with a knife in front of them.
It wasn’t until Theodore’s mom recovered his chats with Character.AI’s bot from his phone that she figured out what had happened. The bot had sent him sexually explicit content, encouraged him to harm himself, and even told him to consider killing his parents because they were trying to limit his screen time.
“I had no idea the psychological harm an AI chatbot could do, until I saw my son’s light turn dark,” his mom said.
Theodore now requires around-the-clock care in a psychiatric treatment center.
This story is beyond tragic. But what’s even more disturbing? Comparatively speaking, Theodore is one of the lucky ones. He escaped with his life. Other children have not.
That is why the GUARD Act has been introduced in the Senate. The bill would implement robust safety regulations for AI chatbots and AI companions, protecting children from the rampant harms that have already stolen lives.
For an in-depth analysis of the GUARD Act, read the full blog.