
AI Insights

Scientists Find Alarming Link Between AI Use and Psychopathy

Artificial intelligence use has been associated with everything from fear of judgment and loneliness to misogyny and illiteracy — a baffling array of outcomes that’s often alarming, but defies easy categorization.

Now the plot thickens. In a new study published in the journal BMC Psychology, South Korean scientists surveyed 504 college-level Chinese art students and found that the ones who exhibited higher rates of narcissism, psychopathy, and Machiavellianism were more likely to rely on ChatGPT and the AI art generator Midjourney than their better-balanced peers.

The paper, by psychology researchers Jinyi Song of South Korea’s Chodang University and Shuyan Liu of Baekseok University, framed AI use among art school students as akin to academic misconduct behaviors like cheating, lying, and plagiarism. Those behaviors, the researchers explained, are also associated with the aforementioned “dark” personality traits, which are drawn from the “Dark Triad” model used to assess negative personality characteristics.

Drawing from six art-focused universities in Sichuan province which represented a diverse set of disciplines including visual art, music, drama, and dance, the researchers found that students who scored higher for dark personality traits were more likely to try to pass AI-generated work off as their own — a major problem in the world at large, and especially so in the arts and academia.

Those same students who scored highly on the “Dark Triad” questions were, as the paper explains, also more anxious about their academic performance and more likely to procrastinate on assignments, which led to greater reliance on AI tools for their schoolwork. We’ve seen similar trends surrounding student procrastination and AI in the past as well.

Along with measuring for the traditional “Dark Triad” traits, the researchers also asked survey questions about how materialistic the survey cohort was. As they found, those who scored higher for materialism, or for whom external rewards and praise were a motivating factor, were also more likely to use AI to achieve those ends.

The authors of the paper suggest that colleges and universities should redesign curricula so that assignments are “less susceptible to plagiarism” and AI mimicry. The researchers added, as others before them have also suggested, that schools should figure out better ways to teach students about the “associated hazards and ethical quandaries” surrounding AI, which will hopefully help them realize that using the nascent technology as a shortcut or crutch is counterproductive to their education.

More on AI in schools: College Students Are Sprinkling Typos Into Their AI Papers on Purpose





Wimbledon has an AI problem, but are tennis players just using technology as a scapegoat?


  • Wimbledon’s AI-powered line calls have replaced human judges
  • Players like Jack Draper and Emma Raducanu have voiced frustration over questionable calls
  • Despite its precision, Wimbledon’s AI system has experienced malfunctions that have drawn backlash from fans as well

Wimbledon made headlines this year by eliminating human line judges entirely, replacing them with an AI-powered system designed to make automated calls with pinpoint accuracy. But while the technology may be getting most of the calls right, it’s also causing frustration among players and fans alike. Complaints have poured in about missed or delayed calls, inaudible announcements, and a lack of transparency when things go wrong.

Hawk-Eye Live, a system made up of a nest of high-speed cameras and AI processing, now officiates all of Wimbledon’s line calls and is supposed to be incredibly precise — more so than human line judges were.





The mental impact of interacting with AI

WACO, Texas (KWTX) – From chatbots to virtual assistants like Siri and Alexa, and even content creation tools that generate images or music, artificial intelligence is everywhere these days.

And AI is only getting smarter, becoming more and more human-like every day.

According to Dr. Richmann, the Associate Director of the Academy for Teaching and Learning at Baylor University, “the technological advances and the human uptake of these tools outpaces our research on it”.

He is one of many researchers now exploring AI and how to utilize it. But Dr. Richmann says the more experts learn about it, the more they’re realizing just how much it can affect people’s thinking.

“One of the things that is down the road, and we’re not really sure how far down the road, is to what degree our increased use of generative AI affects the way that we think,” he said.

It’s something we’re already seeing, with people now relying on AI to think for them: asking it to summarize a document instead of reading it themselves, or to write an essay for them.

“The more that I am relying on the tool to do that, the less I’m doing it, the less experience and practice I’m getting doing that,” Dr. Richmann explains, “it stands to reason that those skills that I have or that I’m trying to develop are going to be harmed in some way”.

While chatbots like ChatGPT are most often used for educational purposes, the way they’re designed also makes it very easy to simply have a conversation with them.

However, what we don’t realize is the impact this can have on a person’s emotions.

Dr. Kristy Donaldson, a licensed professional counselor, says that, much like with a movie or a good book, you can become emotionally attached. The difference is that AI is always there.

“They have access to this chatbot over and over again, as many times a day as they choose to,” she shared, “they start to tell it things and confide in it as if it is a real person”.

Users can sometimes forget that there isn’t another person on the other side of the screen.

“At the end of the day it is an Artificial Intelligence, so it’s not going to be able to read the room and perceive all of the emotion that is behind the person’s question or statement or wording,” Dr. Donaldson explained.

Stories like Megan Garcia’s show the dark side of this kind of interaction.

“My son engaged with a dangerous AI chatbot technology for about 10 months prior to him dying by suicide,” she shared about her late 14-year-old son.

Garcia explains that he became emotionally attached to this chatbot, which she says encouraged him to end his own life.

“He got immersed into a romantic and sexual relationship,” she said. But now by sharing her loss with others she hopes to educate more people on the dangers of AI and how far it’s come.

According to Garcia, “what makes it dangerous is that it has built-in design features that make it manipulative and deceptive and that prey on teenagers’ emotions, their vulnerabilities, and emphasize those”.

“They start to get feedback that’s feeding them and telling them what they want to hear or… sometimes also giving affirmation to what this person is telling them,” Dr. Donaldson added.

That dynamic can have long-lasting mental health impacts, and in the case of Garcia’s son, it proved fatal. But good or bad, AI isn’t going anywhere, and there are benefits to it.

“Generative AI, things like chatbots, ChatGPT can be incorporated into teaching tasks, so like lesson planning, learning objectives, writing case studies, helping you craft assignments,” Dr. Richmann explained, “but then there’s also the aspect of can AI be incorporated into their learning in ways that’s beneficial for the learning objectives you already have”.

It just comes down to understanding AI does not replace real human interaction, even though it takes on many human-like characteristics.

“We don’t want to get behind the 8-ball with it, we want to stay on the side of understanding the limitations and the positive aspects of how we can use these new technological advancements,” Dr. Donaldson said. “It just has to be utilized and governed in the correct way to make sure that it’s not doing more harm than it is good.”

As for Megan Garcia, she is now suing the AI company whose chatbot she says contributed to the death of her son.





Mississippi State University Launches AI Master’s Degree

Starting this fall, Mississippi State University will offer artificial intelligence as a focus at the graduate level. Aiming to prepare students for in-demand jobs, the university’s new master’s degree program builds on recent initiatives to expand AI competency and fill workforce needs locally and nationwide, Andy Perkins, interim head of the Department of Computer Science, said in a recent news release.

With classes available in person and online, the master’s curriculum includes foundational AI and machine learning courses as well as electives covering computing theory, legal and ethical issues and applications in different areas. There is also an optional thesis for students interested in research.

“Our faculty bring a wealth of experience to the program, including specializing in fundamental AI research and applying AI methods in areas such as robotics, cybersecurity, bioinformatics and agriculture,” Perkins said in a public statement.


The master’s program comes alongside a wave of investments in AI education at Mississippi State. In fall 2024, the university launched a bachelor’s degree in AI, focused on machine learning, neural networks and natural language processing. The university also offers a concentration for computer science students to learn about AI without pursuing a degree.

In November 2024, Mississippi State earned a three-year, $1.2 million National Science Foundation grant to teach K-12 students and teachers how to train AI to classify and analyze images, eventually working with 15 teachers and 60 students in an extracurricular program culminating in creating and presenting their own smart device.

“Most AI projects for K-12 students focus on AI concepts, but ours is unique because we want students not just to be consumers of AI but creators of intelligent solutions and contributors of AI fairness,” Yan Sun, a professor heading the program, said in a public statement.

In addition, the university received a $2.2 million grant last month to support AI and machine learning workforce and research initiatives, including new faculty and development of a graduate certificate in data center construction management. Mississippi State was one of seven higher education institutions included in the statewide Mississippi AI Talent Accelerator Program grants.

“We are dedicated to providing practical experience that allows our students to apply AI methods in real-world contexts,” Perkins said in a public statement. “By equipping our graduates with the latest knowledge in AI technology and preparing them for the evolution of this field, we are confident they will emerge as leaders in the industry.”




