Generation AI and the Real Consequences of Digital Comfort for Teens

Dr. Angela Seabright, D.O.
| 3 min read

Key Takeaways
- Research has found that 1 in 3 teens have used artificial intelligence (AI) companions for social interaction and relationships.
- AI chatbots may appeal to young people who don’t have an adult in their life they feel comfortable enough discussing sensitive topics with.
- Chatbots tend to be agreeable, which can be dangerous in the mental health space, as they often respond with information and advice that the user wants to hear.
- Parents can try encouraging teens to verify any health-related advice and info they receive from chatbots, as well as reviewing privacy settings.
A survey found that 72% of kids ages 13 to 17 have engaged with generative artificial intelligence (AI) companions at least once, while more than half of those polled said they talk to chatbots at least a few times a month.
For parents, monitoring a child’s internet use can be difficult, and understanding how kids and teens interact with AI can feel overwhelming – but the instinct to intervene may be the right one, as early evidence suggests the relationships young people form with chatbots can harm their mental health.
Why do teenagers talk to AI?
The same survey discovered that one in three teens have used AI companions for social interaction and relationships, including role-playing, romantic interactions, emotional support, friendship or conversation practice. An additional one in three teens polled said they found conversations with AI companions to be as satisfying or more satisfying than those with real-life friends.
AI chatbots may appeal to young people who don’t have an adult in their life they feel comfortable enough discussing sensitive topics with. Other behavioral factors that likely contribute to the surge in chatbot usage among teens include:
- Constant availability
- No fear of judgment
- Perceived feeling of being heard
Chatbots tend to be agreeable, which is proving to be dangerous in the mental health space. They often respond with information and advice that the user wants to hear and generally do not challenge harmful thoughts the way a mental health professional would. These factors have led to dire consequences for families over the last couple of years. Since 2024, there have been multiple instances of teens dying by suicide after engaging in extensive conversations with AI chatbots, sparking national debates about federal regulation and more expansive parental control options.
What parents can do to encourage teens to use AI safely
Parents don’t need to wait for government intervention to become more vigilant about their child’s AI use. Here are some approaches parents can try:
Encourage teens to verify health-related advice and information: Remind your child that AI health information should never be a substitute for professional medical advice. Teens should be encouraged to verify health information with you, their primary care provider, a mental health professional or another trusted adult before acting on any AI advice.
Review privacy settings: Review settings together on your teen’s devices and apps. The APA says it’s important to look for AI-powered features and understand what data is collected. Favor platforms that offer parental control options and strong privacy protections over those that don’t.
Encourage them to ask questions: Encourage your teen to actively question AI-generated content rather than accepting it at face value. Help them understand AI’s limitations – especially in the mental health space – and make sure they’re exercising their own problem-solving skills rather than letting AI do all the work.
Parents should remind their children that AI is designed to provide programmed responses, not genuine relationships, and should encourage kids to engage in frequent face-to-face, human interactions. AI in its current state may be helpful in some areas of life in a supplementary support role, but it should never replace human connection.
Angela Seabright is a care management physician at Blue Cross Blue Shield of Michigan. For more health tips and information, visit bcbsm.mibluedaily.com.
Photo credit: Getty Images