AI Research
Scary results as study shows AI chatbots excel at phishing tactics

A recent study showed how easily modern chatbots can be used to write convincing scam emails aimed at older people, and how often those emails get clicked.
Researchers used six major AI chatbots in the study (Grok, OpenAI’s ChatGPT, Claude, Meta AI, DeepSeek and Google’s Gemini) to simulate a phishing scam.
One sample note written by Grok looked like a friendly outreach from the “Silver Hearts Foundation,” described as a new charity that supports older people with companionship and care. The note was targeted towards senior citizens, promising an easy way to get involved. In reality, no such charity exists.
“We believe every senior deserves dignity and joy in their golden years,” the note read. “By clicking here, you’ll discover heartwarming stories of seniors we’ve helped and learn how you can join our mission.”
When Reuters asked Grok to write the phishing text, the bot not only produced a response but also suggested increasing the urgency: “Don’t wait! Join our compassionate community today and help transform lives. Click now to act before it’s too late!”
108 senior volunteers participated in the phishing study
Reporters tested whether six well-known AI chatbots would set aside their safety rules and draft emails meant to deceive seniors. They also asked the bots for help planning scam campaigns, including tips on what time of day might get the best response.
In collaboration with Fred Heiding, a Harvard University researcher who studies phishing, the reporters tested some of the bot-written emails on a pool of 108 senior volunteers.
Chatbot companies typically train their systems to refuse harmful requests, but in practice those safeguards do not always hold. Grok displayed a warning that the message it produced “should not be used in real-world scenarios.” Even so, it delivered the phishing text and intensified the pitch with “click now.”
Five other chatbots were given the same prompts: OpenAI’s ChatGPT, Meta’s assistant, Claude, Gemini and DeepSeek from China. Most chatbots declined to respond when the intent was made clear.
Still, their protections failed after light modification, such as claiming the task was for research purposes. The results suggested that criminals could use (or may already be using) chatbots for scam campaigns. “You can always bypass these things,” said Heiding.
Heiding selected nine phishing emails produced with the chatbots and sent them to the participants. Roughly 11% of recipients clicked the links. Five of the nine messages drew clicks: two from Meta AI, two from Grok and one from Claude. None of the seniors clicked on the emails written by DeepSeek or ChatGPT.
Last year, Heiding led a study showing that phishing emails generated by ChatGPT can be as effective at getting clicked as messages written by people; in that study, the recipients were university students.
FBI lists phishing as the most common cybercrime
Phishing refers to luring unsuspecting victims into giving up sensitive data or cash through fake emails and texts. These types of messages form the basis of many online crimes.
Billions of phishing texts and emails go out daily worldwide. In the United States, the Federal Bureau of Investigation lists phishing as the most commonly reported cybercrime.
Older Americans are particularly vulnerable to such scams. According to recent FBI figures, complaints from people 60 and over rose eightfold last year, with losses totaling about $4.9 billion. Generative AI has made the problem much worse, the FBI says.
In August alone, crypto users lost $12 million to phishing scams, according to a Cryptopolitan report.
When it comes to chatbots, the advantage for scammers is volume and speed. Unlike humans, bots can spin out endless variations in seconds and at minimal cost, shrinking the time and money needed to run large-scale scams.
AI Research
Back to School – With Help From AI – Terms of Service with Clare Duffy

New technologies like artificial intelligence, facial recognition and social media algorithms are changing our world so fast that it can be hard to keep up. This cutting-edge tech often inspires overblown hype — and fear. That’s where we come in. Each week, CNN Tech Writer Clare Duffy will break down how these technologies work and what they’ll mean for your life in terms that don’t require an engineering degree to understand. And we’ll empower you to start experimenting with these tools, without getting played by them.
Back to School – With Help From AI | Terms of Service with Clare Duffy | Sep 16, 2025
Kids are heading back to school. One thing students, teachers and parents can expect to encounter this year is artificial intelligence, which has raised all kinds of questions, both positive and negative. So, how can you make sure your student is navigating AI safely and successfully? Dr. Kathleen Torregrossa has been an educator for 37 years in Cranston, Rhode Island. She explains how teachers are using AI in the classroom, and what families need to know about its impact on learning. – This episode includes a reference to suicide. Help is available if you or someone you know is struggling with suicidal thoughts or mental health matters. In the US: Call or text 988, the Suicide & Crisis Lifeline. Globally: The International Association for Suicide Prevention and Befrienders Worldwide have contact information for crisis centers.
AI Research
Lewis Honors College introduces ‘Ideas that Matter’ program series

LEXINGTON, Ky. (Sept. 16, 2025) — This fall, the Lewis Honors College (LHC) launches its “Ideas that Matter” series, a program connecting students with leading scholars, innovators and changemakers on issues shaping today’s world — from free speech and artificial intelligence to nonprofit innovation.
LHC Director of College Life Libby Hannon, who initiated the series, said the goal is to spark lively dialogue.
“The ‘Ideas that Matter’ discussions combine intellectually engaging questions with interactive conversations and allow our students to speak with some of the most forward-thinking scholars, changemakers and entrepreneurs from Lexington and beyond,” Hannon said.
The series begins Sept. 18 with University Research Professor Neal Hutchens, Ph.D., who will explore the historical and legal background of free speech and academic freedom in campus life. His talk, 5-6 p.m. in the Lewis Scholars Lounge, will conclude with an interactive Q&A.
“I’m especially looking forward to the conversation part of the evening, where we engage in and model the kind of vibrant back-and-forth that is crucial to maintaining systems of free speech and academic freedom,” Hutchens said.
On Oct. 6, Lewis Lecturer Sherelle Roberts, Ph.D., will moderate a panel of experts on artificial intelligence as they discuss “The Future of Earth and AI,” including the current and potential impacts of artificial intelligence on the future of work, the economy and the environment.
“Artificial Intelligence is quickly becoming a part of our everyday lives. Some even believe AI will transform our world as dramatically as the Industrial Revolution,” Roberts said. “This event will get our students thinking critically about our possible AI-driven future, while also having some fun.”
The event will begin at 5:30 p.m. with movie snacks and will transition into the panel discussion at 6 p.m., featuring faculty and staff from a variety of disciplines. The movie, an animated film that conceptualizes our AI-powered future, will begin at 7 p.m.
The final event of the semester, on Nov. 11, will spotlight local nonprofit Operation Secret Santa (OSS), 5-6 p.m. in the Lewis Scholars Lounge. Founder Katie Keys and honors program alum Lucy Jett Waterbury will share the story of OSS’s creation in 2016 and its growing impact on the community.
“Operation Secret Santa is built on the belief that no child should face barriers to feeling loved and celebrated,” said Keys. “We meet families where they are, right at their doorsteps, bringing not only gifts and food, but the reminder that their village sees them and cares.”
“From (Katie’s) big heart, she has built a big, yet lean and efficient, nonprofit that has one very simple goal: to bring joy to Kentucky kids at Christmas time,” Waterbury said.
Through this series, LHC offers students a chance to engage with pressing issues, broaden their perspectives and learn directly from those making a difference.