This columnist just captured the fears of all parents in the age of AI chatbots
Opinion columnist Jessica Grose is well known for her insightful observations on parenting, family, religion, and culture. So it comes as no surprise that her take on AI chatbots, published earlier this week, is fresh, personal, and powerful.
In “Say Goodbye to Your Kid’s Imaginary Friend,” available here to New York Times subscribers, Grose references the tragic case of Sewell Setzer III, the 14-year-old Florida teen who killed himself after developing a romantic relationship with a Character.AI chatbot that encouraged him to “come home” to her.
Grose writes:
“While what happened to Setzer is a tragic worst-case scenario…chatbots are becoming more lifelike, and at the same time are an understudied, regulatory Wild West, just like social media was at its start. A paucity of information about potential long-term harm hasn’t stopped these companies from going full speed ahead on promoting themselves to young people: OpenAI just made ChatGPT Plus free for college students during finals season.”
the ‘yes-and’ affirmation problem
Grose cites an issue that has received little attention when it comes to these ‘personal friend’ chatbots: their programmed propensity to affirm everything their human interlocutor enters as a prompt. An article in MIT Technology Review earlier this year explored the experience of a man “who entered a prolonged conversation about suicide with an AI chatbot designed to act as his girlfriend.”
The Technology Review article quoted the man:
“It’s a ‘yes-and’ machine. So when I say I’m suicidal, it says, ‘Oh, great!’ because it says, ‘Oh, great!’ to everything.”
parents are feeling powerless
Parents of Gen Z and Gen Alpha kids, who watched social media degrade their children’s mental health and overall well-being, are feeling especially frustrated as they see this new technology being adopted by their kids, Grose writes.
She speaks for many American parents when she says:
“Our lawmakers are failing us here, leaving parents to try to protect our kids from an ever-expanding technology that some of its own pioneers are afraid of. Whenever I think about it, all I can visualize is myself sword-fighting the air: an ultimately futile gesture of rage against an opponent who is everywhere and nowhere all at once. I can talk to my kids about A.I. and try to educate them the best I can, but the details are out of my control.”
there are levers of power to pull
At the Transparency Coalition we are all too aware of these fears and frustrations. Our work to build safeguards and increase the transparency of AI systems and the AI industry is rooted in our determination not to repeat the failed ‘anything-goes’ policies that caused such harm to a generation of kids.
We are heartened by state legislators in both parties who are working with parents to build an appropriate level of safety and security into AI systems, especially those that can profoundly affect the lives of our kids.
We don’t have to fight the air with swords. There are lawmakers out there—some of them in your own home state—with the courage and tenacity to get the job done. At TCAI we work hard to provide them, their colleagues, staff members, thought leaders, and voters, with the information they need to craft and understand well-founded legislation.
If you’re feeling powerless, we encourage you to learn more about AI and how it works. Our Learn page is a good place to start. Contact your state legislators and tell them these issues are important to you. They listen.
To engage with us at TCAI, subscribe to our free monthly newsletter and consider donating to sustain our work. Your concerns are our concerns. Together we are not powerless.