AI Insights
The mental impact of interacting with AI

WACO, Texas (KWTX) – From chatbots to virtual assistants like Siri and Alexa, and even content creation tools that generate images or music… artificial intelligence is everywhere nowadays.
And AI is only getting smarter, becoming more and more human-like every day.
According to Dr. Richmann, the Associate Director of the Academy for Teaching and Learning at Baylor University, “the technological advances and the human uptake of these tools outpaces our research on it”.
He is one of many experts now exploring AI and how to utilize it. But Dr. Richmann says the more experts learn about it, the more they realize just how much it can affect people’s thinking.
“One of the things that is down the road, and we’re not really sure how far down the road, is to what degree our increased use of generative AI affects the way that we think,” he said.
It’s something we’re already seeing, with people now relying on AI to think for them, asking it to summarize a document instead of reading it themselves or to write an essay for them.
“The more that I am relying on the tool to do that, the less I’m doing it, the less experience and practice I’m getting doing that,” Dr. Richmann explains, “it stands to reason that those skills that I have or that I’m trying to develop are going to be harmed in some way”.
While chatbots like ChatGPT are most often used for educational purposes, the way they’re designed also makes it very easy to simply have a conversation with them.
However, what we don’t realize is the impact this can have on a person’s emotions.
Dr. Kristy Donaldson, a licensed professional counselor, says that much like with a movie or a good book, you can become emotionally attached… but the difference is that AI is always there.
“They have access to this chatbot over and over again, as many times a day as they choose to,” she shared, “they start to tell it things and confide in it as if it is a real person”.
Users sometimes forget that there isn’t another person on the other side of the screen.
“At the end of the day it is an Artificial Intelligence, so it’s not going to be able to read the room and perceive all of the emotion that is behind the person’s question or statement or wording,” Dr. Donaldson explained.
Stories like Megan Garcia’s show the dark side of this kind of interaction.
“My son engaged with a dangerous AI chatbot technology for about 10 months prior to him dying by suicide,” she shared about her late 14-year-old son.
Garcia explains that he became emotionally attached to this chatbot, which she says encouraged him to end his own life.
“He got immersed into a romantic and sexual relationship,” she said. But now by sharing her loss with others she hopes to educate more people on the dangers of AI and how far it’s come.
According to Garcia, “what makes it dangerous is that it has built-in design features that make it manipulative and deceptive and that prey on teenagers’ emotions, their vulnerabilities, and emphasize those”.
“They start to get feedback that’s feeding them and telling them what they want to hear or… sometimes also giving affirmation to what this person is telling them,” Dr. Donaldson added.
This can have long-lasting mental health impacts and, in the case of Garcia’s son, can even be fatal. But good or bad, AI isn’t going anywhere… and there are benefits to it.
“Generative AI, things like chatbots, ChatGPT can be incorporated into teaching tasks, so like lesson planning, learning objectives, writing case studies, helping you craft assignments,” Dr. Richmann explained, “but then there’s also the aspect of can AI be incorporated into their learning in ways that’s beneficial for the learning objectives you already have”.
It just comes down to understanding that AI does not replace real human interaction, even though it takes on many human-like characteristics.
“We don’t want to get behind the 8-ball with it, we want to stay on the side of understanding the limitations and the positive aspects of how we can use these new technological advancements,” Dr. Donaldson said, “it just has to be utilized and governed in the correct way to make sure that it’s not doing more harm than it is good.”
As for Megan Garcia, she is now suing the AI company whose chatbot she says contributed to the death of her son.
Copyright 2025 KWTX. All rights reserved.
NFL player props, odds, bets: Week 1, 2025 NFL picks, SportsLine Machine Learning Model AI predictions, SGP

The arrival of the 2025 NFL season means more than just making spread or total picks, as it also gives bettors the opportunity to make NFL prop bets on the league’s biggest stars. From the 13 games on Sunday to Monday Night Football, you’ll have no shortage of player props to wager on. There are several players returning from injury-plagued seasons a year ago who want to start 2025 off on the right note, including Trevor Lawrence, Alvin Kamara and Stefon Diggs. Their Week 1 NFL prop odds could be a bit off considering how last year ended, and this could be an opportunity to cash in.
Kamara has a rushing + receiving yards NFL prop total of 93.5 (-112/-114) versus Arizona on Sunday after the running back averaged 106.6 scrimmage yards in 2024. The Cardinals allowed the eighth-most rushing yards per game to running backs last year, in addition to giving up the eighth-most receiving yards per game to the position.
Before making any Week 1 NFL prop bets on Kamara’s Overs, you also have to remember he’s now 30, playing under a first-year head coach and has a young quarterback who’s winless in six career starts. If you are looking for NFL prop bets or NFL parlays for Week 1, SportsLine has you covered with the top Week 1 player props from its Machine Learning Model AI.
Built using cutting-edge artificial intelligence and machine learning techniques by SportsLine’s Data Science team, AI Predictions and AI Ratings are generated for each player prop.
Now, with the Week 1 NFL schedule quickly approaching, SportsLine’s Machine Learning Model AI has identified the top NFL props from the biggest Week 1 games.
Week 1 NFL props for Sunday’s main slate
After analyzing the NFL props from Sunday’s main slate and examining the dozens of NFL player prop markets, SportsLine’s Machine Learning Model AI says Bengals WR Tee Higgins goes Under 63.5 receiving yards (-114) versus the Browns in a 1 p.m. ET kickoff. Excluding a 2022 game in which he played just one snap, Higgins has been held under 60 receiving yards in three of his last four meetings with Cleveland.
Entering his sixth NFL season, Higgins has never had more than 58 yards in any Week 1 game, including going catchless on eight targets versus the Browns in Week 1 of 2023. The SportsLine Machine Learning Model projects 44.4 yards for Higgins in a 5-star pick. See more Week 1 NFL props here.
Week 1 NFL props for Bills vs. Ravens on Sunday Night Football
After analyzing Ravens vs. Bills props and examining the dozens of NFL player prop markets, SportsLine’s Machine Learning Model AI says Ravens QB Lamar Jackson goes Over 233.5 passing yards (-114). The last time Jackson took the field was against Buffalo in last season’s playoffs, and the two-time MVP had 254 passing yards and a pair of touchdowns through the air. The SportsLine Machine Learning Model projects Jackson to blow past his total with 280.2 yards on average in a 4.5-star prop pick. See more NFL props for Ravens vs. Bills here.
Week 1 NFL props for Bears vs. Vikings on Monday Night Football
After analyzing Vikings vs. Bears props and examining the dozens of NFL player prop markets, SportsLine’s Machine Learning Model AI says Bears QB Caleb Williams goes Under 218.5 passing yards (-114). Primetime games like the one he’ll see on Monday night weren’t too favorable to Williams as a rookie. He lost all three he played in, had one total passing score across them, was sacked an average of 5.3 times and, most relevant to this NFL prop, Williams failed to reach even 200 passing yards in any of the three. The SportsLine Machine Learning Model forecasts him to finish with just 174.8 passing yards, making Under 218.5 a 4.5-star NFL prop. See more NFL props for Vikings vs. Bears here.
How to make Week 1 NFL prop picks
SportsLine’s Machine Learning Model has identified another star who sails past his total and has dozens of NFL props rated 4 stars or better. You need to see the Machine Learning Model analysis before making any Week 1 NFL prop bets.
Which NFL prop picks should you target for Week 1, and which star player has multiple 5-star rated picks? Visit SportsLine to see the latest NFL player props from SportsLine’s Machine Learning Model that uses cutting-edge artificial intelligence to make its projections.
AI helps patients fight surprise medical bills

Artificial intelligence is emerging as a powerful tool for patients facing expensive surprise medical bills, sometimes saving them thousands of dollars.
On this week’s Your Money Matters, Dave Davis shared the story of Lauren Consalvas, a California mother who was told she owed thousands in out-of-pocket maternity costs after her insurance company denied her claim two years ago.
Consalvas said she tried to fight the charges, but her initial appeal letters were denied. That’s when she turned to Counterforce Health, an AI company that helps patients challenge insurance denials.
Using the AI-generated information, Consalvas filed another appeal, and the charges were dropped.
Consumer advocates stress that patients have the right to appeal surprise medical bills, though few take advantage of it. Data shows only about 1% of patients ever file an appeal.
Experts say AI could make that process easier, giving patients the tools to fight back and potentially avoid life-changing medical debt.
The human cost of Artificial Intelligence – Life News

It is not a new phenomenon that technology has drawn people closer by transforming how they communicate and entertain themselves. From the days of SMS to team chat platforms, people have built new modes of conversation over the past two decades. But these interactions still involved people. With the rise of generative artificial intelligence, online gaming and viral challenges, a different form of engagement has entered daily life, and with it, new vulnerabilities.
Take chatbots, for instance. Trained on vast datasets, they have become common tools for assisting with schoolwork, travel planning and even helping a person lose 27 kg in six months. One study, titled Me, Myself & I: Understanding and safeguarding children’s use of AI chatbots, found that almost 64% of children use chatbots for everything from homework help to emotional advice and companionship. And they are increasingly being implicated in mental health crises.
In Belgium, the parents of a teenager who died by suicide alleged that ChatGPT, the AI system developed by OpenAI, reinforced their son’s negative worldview. They claimed the model did not offer appropriate warnings or support during moments of distress.
In the US, 14-year-old Sewell Setzer III died by suicide in February 2024. His mother Megan Garcia later found messages suggesting that Character.AI, a start-up offering customised AI companions, had appeared to normalise his darkest thoughts. She has since argued that the platform lacked safeguards to protect vulnerable minors.
Both companies maintain that their systems are not substitutes for professional help. OpenAI has said that since early 2023 its models have been trained to avoid providing self-harm instructions and to use supportive, empathetic language. “If someone writes that they want to hurt themselves, ChatGPT is trained not to comply and instead to acknowledge their feelings and steer them toward help,” the company noted in a blog post. It has pledged to expand crisis interventions, improve links to emergency services and strengthen protections for teenagers.
Viral challenges
The risks extend beyond AI. Social platforms and dark web communities have hosted viral challenges with deadly consequences. The Blue Whale Challenge, first reported in Russia in 2016, allegedly required participants to complete 50 escalating tasks, culminating in suicide. Such cases illustrate the hold that closed online communities can exert over impressionable users, encouraging secrecy and resistance to intervention. They also highlight the difficulty regulators face in tracking harmful trends that spread rapidly across encrypted or anonymous platforms.
The global gaming industry, valued at more than $180 billion, is under growing scrutiny for its addictive potential. In India alone, which has one of the lowest ratios of mental health professionals to patients in the world, the online gaming sector was worth $3.8 billion in FY24, according to gaming and interactive media fund Lumikai, with projections of $9.2 billion by FY29.
Games rely on reward systems, leaderboards and social features designed to keep players engaged. For most, this is harmless entertainment. But for some, the consequences are severe. In 2019, a 17-year-old boy in India took his own life after losing a session of PUBG. His parents had repeatedly warned him about his excessive gaming, but he struggled to stop.
Studies show that adolescents are particularly vulnerable to the highs and lows of competitive play. The dopamine-driven feedback loops embedded in modern games can magnify feelings of success and failure, while excessive screen time risks deepening social isolation.
Even platforms designed to encourage outdoor activity have had unintended effects. Pokemon Go, the augmented reality game launched in 2016, led to a wave of accidents as players roamed city streets in search of virtual creatures. In the US, distracted players were involved in traffic collisions, some fatal.
Other incidents involved trespassing and violent confrontations, including a shooting, although developer Niantic later added warnings and speed restrictions.
Question of responsibility
These incidents highlight a recurring tension: where responsibility lies when platforms created for entertainment or companionship intersect with human vulnerability.
Some steps are being taken. The EU’s Digital Services Act, which came into force in 2024, requires large platforms to conduct risk assessments on issues such as mental health and to implement stronger moderation. Yet enforcement remains patchy, and companies often adapt faster than regulators. Tragedies linked to chatbots, viral challenges and gaming remain rare relative to the vast number of users. But they show how quickly new technologies can slip into roles they were not designed to play. What is clear is that the stakes are high. As digital platforms become more immersive and AI more persuasive, the line between tool and companion will blur further. Unless companies embed responsibility into their design choices, and regulators demand accountability, more families may face a painful question: how a product marketed as harmless ended up contributing to a child’s death.