OpenAI Report Identifies Malicious Use of AI in Cloud-Based Cyber Threats

A report from OpenAI identifies the misuse of artificial intelligence in cybercrime, social engineering, and influence operations, particularly those targeting or operating through cloud infrastructure. In “Disrupting Malicious Uses of AI: June 2025,” the company outlines how threat actors are weaponizing large language models for malicious ends — and how OpenAI is pushing back.

The report highlights a growing reliance on AI by adversaries to scale scams, automate phishing and deploy tailored misinformation across platforms like Telegram, TikTok and Facebook. OpenAI says it is countering these threats using its own AI systems alongside human analysts, while coordinating with cloud providers and global security partners to take action against offenders.

In the three months since its previous update, the company says it has detected and disrupted activity including:

  • Cyber operations targeting cloud-based infrastructure and software.
  • Social engineering and scams scaling through AI-assisted content creation.
  • Influence operations attempting to manipulate public discourse using AI-generated posts on platforms like X, TikTok, Telegram and Facebook.

The report details 10 case studies where OpenAI banned user accounts and shared findings with industry partners and authorities to strengthen collective defenses.

Here’s how the company detailed the tactics, techniques, and procedures (TTPs) in one representative case, a North Korea-linked job scam operation that used ChatGPT to generate fake résumés and spoof interviews:

  • Automating the systematic fabrication of detailed résumés aligned to various tech job descriptions, personas, and industry norms, using looping scripts to generate consistent work histories, educational backgrounds, and references. (ATT&CK category: LLM Supported Social Engineering)
  • Using the model to answer likely employment-application questions, coding assignments, and real-time interview questions based on particular uploaded résumés. (ATT&CK category: LLM Supported Social Engineering)
  • Seeking guidance on remotely configuring corporate-issued laptops to appear domestically located, including advice on geolocation masking and endpoint security evasion methods. (ATT&CK category: LLM-Enhanced Anomaly Detection Evasion)
  • Coding, with LLM assistance, tools that move the mouse automatically or keep a computer awake remotely, possibly to support remote-working infrastructure setups. (ATT&CK category: LLM Aided Development)

Beyond the employment scam case, OpenAI’s report outlines multiple campaigns involving threat actors abusing AI in cloud-centric and infrastructure-based attacks.

Cloud-Centric Threat Activity

Many of the campaigns OpenAI disrupted either targeted cloud environments or used cloud-based platforms to scale their impact:

  • A Russian-speaking group (Operation ScopeCreep) used ChatGPT to assist in the iterative development of sophisticated Windows malware, distributed via a trojanized gaming tool. The campaign leveraged cloud-based GitHub repositories for malware distribution and used Telegram-based C2 channels.
  • Chinese-linked groups (KEYHOLE PANDA and VIXEN PANDA) used ChatGPT to support AI-driven penetration testing, credential harvesting, network reconnaissance, and automation of social media influence. Their targets included US federal defense industry networks and government communications systems.
  • An operation dubbed Uncle Spam, also linked to China, generated polarizing US political content using AI and pushed it via social media profiles on X and Bluesky.
  • Wrong Number, likely based in Cambodia, used AI-generated multilingual content to run task scams via SMS, WhatsApp, and Telegram, luring victims into cloud-based crypto payment schemes.

    [Image: SMS randomly sent to an OpenAI investigator, generated using ChatGPT. Source: OpenAI.]

Defensive AI in Action

OpenAI says it is using AI as a “force multiplier” for its investigative teams, enabling it to detect abusive activity at scale. The report also highlights how using AI models can paradoxically expose malicious actors by providing visibility into their workflows.

“AI investigations are an evolving discipline,” the report notes. “Every operation we disrupt gives us a better understanding of how threat actors are trying to abuse our models, and enables us to refine our defenses.”

The company calls for continued collaboration across the industry to strengthen defenses, noting that AI is only one part of the broader internet security ecosystem.

For cloud architects, platform engineers and security professionals, the report is a useful read. It illustrates not only how attackers are using AI to speed up traditional tactics, but also how cloud-based services are central both to their targets and to the infrastructure of modern threat campaigns.

The full report is available on the OpenAI site.

About the Author



David Ramel is an editor and writer at Converge 360.







Trump admin illegally froze Harvard funds, Judge says : NPR



[Image: Students walk up the steps of the Harry Elkins Widener Memorial Library on the campus of Harvard University. Photo: Elissa Nadworny/NPR]

A federal judge in Boston handed Harvard University a legal victory on Wednesday. It’s the latest in a high-profile legal fight over whether the Trump administration acted illegally when it froze more than $2.2 billion in Harvard research funding in response to allegations of campus antisemitism.

In her ruling, Judge Allison D. Burroughs said the administration’s funding freeze was issued without considering any of the steps Harvard had already taken to address the issue.

Burroughs said she found it “difficult to conclude anything other than that [the Trump administration] used antisemitism as a smokescreen for a targeted, ideologically-motivated assault on this country’s premier universities, and did so in a way that runs afoul of [federal law].”

White House spokesperson Liz Huston said after the ruling: “We will immediately move to appeal this egregious decision, and we are confident we will ultimately prevail in our efforts to hold Harvard accountable.”

The more than $2 billion in federal funding that the administration had frozen supported more than 900 research projects at Harvard and its affiliates. That includes research into the treatment and/or prevention of Alzheimer’s, various cancers, heart disease, Lou Gehrig’s disease and autism. Burroughs also highlighted a program through the Department of Veterans Affairs “to help V.A. emergency room physicians decide whether suicidal veterans should be hospitalized.”

The case has been the subject of intense focus as Harvard has stood largely alone in pushing back against the Trump administration’s efforts to use funding cuts as leverage to win vast ideological and financial concessions from other elite institutions, including Columbia and Brown University.

In a July hearing, a lawyer for the Trump administration said Harvard’s funding had been frozen because the school had violated Title VI of the Civil Rights Act, which prohibits discrimination based on race, color and national origin, by failing to address antisemitism on campus.

But Burroughs ruled that it was the administration that had run afoul of Title VI by quickly freezing funding without first following a process clearly laid out in law.

Harvard’s attorneys had argued that the cuts imposed by the Trump Administration threatened vital research in medicine, science and technology.

Burroughs wrote in her decision that, “research that has been frozen could save lives, money, or the environment, to name a few. And the research was frozen without any sort of investigation into whether particular labs were engaging in antisemitic behavior, were employing Jews, were run by Jewish scientists, or were investigating issues or diseases particularly pertinent to Jews (such as, for example, Tay-Sachs disease), meaning that the funding freezes could and likely will harm the very people Defendants professed to be protecting.”

Burroughs underlined that antisemitism is intolerable, and criticized Harvard, saying it “has been plagued by antisemitism in recent years and could (and should) have done a better job of dealing with the issue.” But, the judge concluded, “there is, in reality, little connection between the research affected by the grant terminations and antisemitism.”

President Trump has previously been outspoken in his criticism of Burroughs, writing on Truth Social earlier this year that she is a “Trump-hating Judge,” and “a TOTAL DISASTER.”

Following Wednesday’s ruling, White House spokesperson Liz Huston again criticized Burroughs and said “It is clear that Harvard University failed to protect their students from harassment and allowed discrimination to plague their campus for years. Harvard does not have a constitutional right to taxpayer dollars and remains ineligible for grants in the future.”

“This ruling is huge. It is a big, decisive victory for academic freedom,” said Harvard history professor Kirsten Weld, who is also president of the Harvard chapter of the American Association of University Professors, which was a plaintiff in the lawsuit.

Even though the White House plans to appeal, Weld says she hopes this ruling sends the message “that you cannot break universities in this fashion and that it is worth standing up and fighting back.”





Google Advances AI Image Generation with Multi-Modal Capabilities

Google has introduced Gemini 2.5 Flash Image, marking a significant advancement in artificial intelligence systems that can understand and manipulate visual content through natural language processing.

The AI model represents progress in multi-modal machine learning, combining text comprehension with image generation and editing capabilities. Unlike previous systems focused primarily on creating images from text descriptions, Gemini 2.5 Flash Image can analyze existing images and perform precise modifications based on conversational instructions.
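
To make that capability concrete, here is a minimal sketch of how a developer might send an existing image plus a conversational editing instruction to the model through Google's google-genai Python SDK. This is an illustration under stated assumptions, not code from Google's announcement: the model ID, API key placeholder, file names, and prompt are all hypothetical and should be checked against Google's documentation.

    # Minimal sketch: conversational image editing with Gemini 2.5 Flash Image
    # via the google-genai Python SDK (pip install google-genai pillow).
    # Model ID, file names, and prompt are illustrative assumptions.
    from google import genai
    from PIL import Image

    client = genai.Client(api_key="YOUR_API_KEY")  # assumes a Google AI Studio API key

    source = Image.open("portrait.png")  # hypothetical input image to edit

    response = client.models.generate_content(
        model="gemini-2.5-flash-image",  # assumed model ID; confirm in Google's docs
        contents=[
            "Keep this character exactly as they appear, but place them on a "
            "rainy city street at night.",
            source,
        ],
    )

    # Responses can interleave text and image parts; save any returned image bytes.
    for part in response.candidates[0].content.parts:
        if getattr(part, "inline_data", None) is not None:
            with open("edited.png", "wb") as out:
                out.write(part.inline_data.data)
        elif getattr(part, "text", None):
            print(part.text)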

Technical improvements include enhanced character consistency across multiple image generations, a persistent challenge in AI image synthesis. The system can maintain the appearance of specific subjects while placing them in different environments or contexts, indicating advances in computer vision and generative modeling.

The model leverages Google’s large language model knowledge base, allowing it to incorporate real-world understanding into visual tasks. This integration demonstrates progress toward more sophisticated AI agents capable of reasoning across different data types.

Google implemented safety measures, including automated content filtering and mandatory digital watermarking through its SynthID technology. The watermarking addresses growing concerns about the identification of AI-generated content as synthetic media becomes more prevalent.

The launch intensifies competition in generative AI, where companies including OpenAI, Adobe, and Midjourney are developing similar multimodal capabilities. Industry analysts view image generation as a key battleground for AI companies seeking to expand beyond text-based applications.

Gemini 2.5 Flash Image is priced at $30 per million output tokens. For more information, visit the Google site.

About the Author



John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI and future tech. He has been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he has written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at [email protected].








AI in schools: Pros and cons of artificial intelligence in education



SYOSSET, New York (WABC) — Days before school returns, hundreds of teachers on Long Island listened and learned.

“We’re excited to be here to share some of the initial work that we were able to do with AI at the time of this pilot,” teacher Tyler Gentilcore said.

Gentilcore was among dozens of educators with the Syosset School District sharing their approach to teaching artificial intelligence in the classroom.

“It feels pretty cool to be on the forefront of something new like this,” he said.

Gentilcore teaches first grade at Robbins Lane Elementary School.

“They’re little so the pilot was really an opportunity for teachers to engage with different AI programs,” he explained.

Programs like Google’s Gemini are now being used by teachers in the classroom, including Syosset High School English teacher Caroline Polatsidis.

“It was just scary because I was worried that students wouldn’t be learning anymore, that they would be letting AI do the work for them, but now I see that we need to harness this great power,” Polatsidis said.

What about cheating? A recent study by the Pew Research Center found that a quarter of teenagers nationwide have used the app ChatGPT for schoolwork.

Most felt it was wrong to use the advanced AI to write essays and solve math problems.

“I actually think people here in this high school use AI to help them with their assignments, but in ways that our teachers actually condone,” Nikhil Shah, a Syosset High School senior, said.

“We don’t have any other choice but to do it now. AI is moving at a pace. The world is moving at a pace faster frankly than we can educate our kids,” Syosset Schools Assistant Superintendent David Steinberg said.

It’s not just the teachers who are embracing using AI in the classroom. Many students are too.

“I really started to understand AI in high school as some of my teachers introduced it to me and kind of started to guide us on how to use AI,” Shah explained.

Shah said AI was introduced at his school last year, in his Spanish class.

“We would record speaking in Spanish. In order to improve the way we spoke, we would submit it to AI. It would analyze it and show us where we made mistakes, where we could improve,” he said.

Some students are skeptical.

“Personally, I never really was a fan of AI just because of the environmental costs it has,” senior Janice Opal Kang said.

According to the United Nations, the growing number of data centers that house AI servers use massive amounts of electricity, spurring the emission of global warming greenhouse gases.

Back in the classroom, the shift to AI is not limited to schools on Long Island. Teachers at St. Benedict’s Prep Catholic School in Newark, New Jersey, are navigating the new world, too.

“It’s really forcing us to reevaluate what it is that we’re teaching and how we’re assessing what kids have learned. It’s really a pretty transformational thing,” teacher Trevor Shaw said.
