Education
Cloud Security Alliance Offers Playbook for Red Teaming Agentic AI Systems — Campus Technology
Cloud Security Alliance Offers Playbook for Red Teaming Agentic AI Systems
The Cloud Security Alliance (CSA) has introduced a guide for red teaming Agentic AI systems, targeting the security and testing challenges posed by increasingly autonomous artificial intelligence.
The Red Teaming Testing Guide for Agentic AI Systems outlines practical, scenario-based testing methods designed for security professionals, researchers, and AI engineers.
Agentic AI, unlike traditional generative models, can independently plan, reason, and execute actions in real-world or virtual environments. These capabilities make red teaming — the simulation of adversarial threats — a critical component in ensuring system safety and resilience.
Shift from Generative to Agentic AI
The report highlights how Agentic AI introduces new attack surfaces, including orchestration logic, memory manipulation, and autonomous decision loops. It builds on previous work such as CSA’s MAESTRO framework and OWASP’s AI Exchange, expanding them into operational red team scenarios.
Twelve Agentic Threat Categories
The guide outlines 12 high-risk threat categories, including:
- Authorization & control hijacking: exploiting gaps between permissioning layers and autonomous agents.
- Checker-out-of-the-loop: bypassing safety checkers or human oversight during sensitive actions.
- Goal manipulation: using adversarial input to redirect agent behavior.
- Knowledge base poisoning: corrupting long-term memory or shared knowledge spaces.
- Multi-agent exploitation: spoofing, collusion, or orchestration-level attacks.
- Untraceability: masking the source of agent actions to avoid audit trails or accountability.
Each threat area includes defined test setups, red team goals, metrics for evaluation, and suggested mitigation strategies.
Tools and Next Steps
Red teamers are encouraged to use or extend agent-specific security tools such as MAESTRO, Promptfoo’s LLM Security DB, and SplxAI’s Agentic Radar. The guide also references experimental tools such as Salesforce’s FuzzAI and Microsoft Foundry’s red teaming agents.
“This guide isn’t theoretical,” said CSA researchers. “We focused on practical red teaming techniques that apply to real-world agent deployments in finance, healthcare, and industrial automation.”
Continuous Testing as Security Baseline
Unlike static threat modeling, the CSA’s guidance emphasizes continuous validation through simulation-based testing, scenario walkthroughs, and portfolio-wide assessments. It urges enterprises to treat red teaming as part of the development lifecycle for AI systems that operate independently or in critical environments.
The full guide can be found on the Cloud Security Alliance site here.
About the Author
John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI and future tech. He’s been writing about cutting-edge technologies and culture of Silicon Valley for more than two decades, and he’s written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at [email protected].
Education
Cambridge Judge Business School Executive Education launches four-month Cambridge AI Leadership Programme — EdTech Innovation Hub
Launched in collaboration with Emeritus, a provider of short courses, degree programmes, professional certificates, and senior executive programs, the Cambridge Judge Business School Executive Education course is now available for a September 2025 start.
The Cambridge AI Leadership Programme aims to help participants navigate the complexities of AI adoptions, identify scalable opportunities and build a strategic roadmap for successful implementation.
Using a blend of in-person and online learning, the course covers AI concepts, applications, and best practice to improve decision-making skills. It also covers digital transformation and ethical AI governance.
The program is aimed at senior leaders looking to lead their organizations through transformations and integrate AI technologies.
“AI is a transformative force reshaping business strategy, decision-making and leadership. Senior executives must not only understand AI but also use it to drive business goals, efficiency and new revenue opportunities,” explains Professor David Stillwell, Co-Academic Programme Director.
“The Cambridge AI Leadership Programme offers a strategic road map, equipping leaders with the skills and mindset to integrate AI into their organisations and lead in an AI-driven world.”
“The Cambridge AI Leadership Programme empowers decision-makers to harness AI in ways that align with their organisation’s goals and prepare for the future,” says Vesselin Popov, Co-Academic Programme Director.
“Through a comprehensive learning experience, participants gain strategic insights and practical knowledge to drive transformation, strengthen decision-making and navigate technological shifts with confidence.”
RTIH AI in Retail Awards
Our sister title, RTIH, organiser of the industry leading RTIH Innovation Awards, proudly brings you the first edition of the RTIH AI in Retail Awards, which is now open for entries.
As we witness a digital transformation revolution across all channels, AI tools are reshaping the omnichannel game, from personalising customer experiences to optimising inventory, uncovering insights into consumer behaviour, and enhancing the human element of retailers’ businesses.
With 2025 set to be the year when AI and especially gen AI shake off the ‘heavily hyped’ tag and become embedded in retail business processes, our newly launched awards celebrate global technology innovation in a fast moving omnichannel world and the resulting benefits for retailers, shoppers and employees.
Our 2025 winners will be those companies who not only recognise the potential of AI, but also make it usable in everyday work – resulting in more efficiency and innovation in all areas.
Winners will be announced at an evening event at The Barbican in Central London on Wednesday, 3rd September.
Education
AI, Irreality and the Liberal Educational Project (opinion)
I work at Marquette University. As a Roman Catholic, Jesuit university, we’re called to be an academic community that, as Pope John Paul II wrote, “scrutinize[s] reality with the methods proper to each academic discipline.” That’s a tall order, and I remain in the academy, for all its problems, because I find that job description to be the best one on offer, particularly as we have the honor of practicing this scrutinizing along with ever-renewing groups of students.
This bedrock assumption of what a university is continues to give me hope for the liberal educational project despite the ongoing neoliberalization of higher education and some administrators’ and educators’ willingness to either look the other way regarding or uncritically celebrate the generative software (commonly referred to as “generative artificial intelligence”) explosion over the last two years.
In the time since my last essay in Inside Higher Ed, and as Marquette’s director of academic integrity, I’ve had plenty of time to think about this and to observe praxis. In contrast to the earlier essay, which was more philosophical, let’s get more practical here about how access to generative software is impacting higher education and our students and what we might do differently.
At the academic integrity office, we recently had a case in which a student “found an academic article” by prompting ChatGPT to find one for them. The chat bot obeyed, as mechanisms do, and generated a couple pages of text with a title. This was not from any actual example of academic writing but instead was a statistically probable string of text having no basis in the real world of knowledge and experience. The student made a short summary of that text and submitted it. They were, in the end, not found in violation of Marquette’s honor code, since what they submitted was not plagiarized. It was a complex situation to analyze and interpret, done by thoughtful people who care about the integrity of our academic community: The system works.
In some ways, though, such activity is more concerning than plagiarism, for, at least when students plagiarize, they tend to know the ways they are contravening social and professional codes of conduct—the formalizations of our principles of working together honestly. In this case, the student didn’t see the difference between a peer-reviewed essay published by an academic journal and a string of probabilistically generated text in a chat bot’s dialogue box. To not see the difference between these two things—or to not care about that difference—is more disconcerting and concerning to me than straightforward breaches of an honor code, however harmful and sad such breaches are.
I already hear folks saying: “That’s why we need AI literacy!” We do need to educate our students (and our colleagues) on what generative software is and is not. But that’s not enough. Because one also needs to want to understand and, as is central to the Ignatian Pedagogical Paradigm that we draw upon at Marquette, one must understand in context.
Another case this spring term involved a student whom I had spent several months last fall teaching in a writing course that took “critical AI” as its subject matter. Yet this spring term the student still used a chat bot to “find a quote in a YouTube video” for an assignment and then commented briefly on that quote. The problem was that the quote used in the assignment does not appear in the selected video. It was a simulacrum of a quote; it was a string of probabilistically generated text, which is all generative software can produce. It did not accurately reflect reality, and the student did not cite the chat bot they’d copied and pasted from, so they were found in violation of the honor code.
Another student last term in the Critical AI class prompted Microsoft Copilot to give them quotations from an essay, which it mechanically and probabilistically did. They proceeded to base their three-page argument on these quotations, none of which said anything like what the author in question actually said (not even the same topic); their argument was based in irreality. We cannot scrutinize reality together if we cannot see reality. And many of our students (and colleagues) are, at least at times, not seeing reality right now. They’re seeing probabilistic text as “good enough” as, or conflated with, reality.
Let me point more precisely to the problem I’m trying to put my finger on. The student who had a chat bot “find” a quote from a video sent an email to me, which I take to be completely in earnest and much of which I appreciated. They ended the email by letting me know that they still think that “AI” is a really powerful and helpful tool, especially as it “continues to improve.” The cognitive dissonance between the situation and the student’s assertion took me aback.
Again: the problem with the “We just need AI literacy” argument. People tend not to learn what they do not want to learn. If our students (and people generally) do not particularly want to do work, and they have been conditioned by the use of computing and their society’s habits to see computing as an intrinsic good, “AI” must be a powerful and helpful tool. It must be able to do all the things that all the rich and powerful people say it does. It must not need discipline or critical acumen to employ, because it will “supercharge” your productivity or give you “10x efficiency” (whatever that actually means). And if that’s the case, all these educators telling you not to offload your cognition must be behind the curve, or reactionaries. At the moment, we can teach at least some people all about “AI literacy” and it will not matter, because such knowledge refuses to jibe with the mythology concerning digital technology so pervasive in our society right now.
If we still believe in the value of humanistic, liberal education, we cannot be quiet about these larger social systems and problems that shape our pupils, our selves and our institutions. We cannot be quiet about these limits of vision and questioning. Because not only do universities exist for the scrutinizing of reality with the various methods of the disciplines as noted at the outset of this essay, but liberal education also assumes a view of the human person that does not see education as instrumental but as formative.
The long tradition of liberal education, for all its complicity in social stratification down the centuries, assumes that our highest calling is not to make money, to live in comfort, to be entertained. (All three are all right in their place, though we must be aware of how our moneymaking, comfort and entertainment derive from the exploitation of the most vulnerable humans and the other creatures with whom we share the earth, and how they impact our own spiritual health.)
We are called to growth and wisdom, to caring for the common good of the societies in which we live—which at this juncture certainly involves caring for our common home, the Earth, and the other creatures living with us on it. As Antiqua et nova, the note released from the Vatican’s Dicastery for Culture and Education earlier this year (cited commendingly by secular ed-tech critics like Audrey Watters) reiterates, education plays its role in this by contributing “to the person’s holistic formation in its various aspects (intellectual, cultural, spiritual, etc.) … in keeping with the nature and dignity of the human person.”
These objectives of education are not being served by students using generative software to satisfy their instructors’ prompts. And no amount of “literacy” is going to ameliorate the situation on its own. People have to want to change, or to see through the neoliberal, machine-obsessed myth, for literacy to matter.
I do believe that the students I’ve referred to are generally striving for the good as they know how. On a practical level, I am confident they’ll go on to lead modestly successful lives as our society defines that term with regard to material well-being. I assume their motivation is not to cause harm or dupe their instructors; they’re taking part in “hustle” culture, “doing school” and possibly overwhelmed by all their commitments. Even if all this is indeed the case, liberal education calls us to more, and it’s the role of instructors and administrators to invite our students into that larger vision again and again.
If we refuse to give up on humanistic, liberal education, then what do we do? The answer is becoming clearer by the day, with plenty of folks all over the internet weighing in, though it is one many of us do not really want to hear. Because at least one major part of the answer is that we need to make an education genuinely oriented toward our students. A human-scale education, not an industrial-scale education (let’s recall over and over that computers are industrial technology). The grand irony of the generative software moment for education in neoliberal, late-capitalist society is that it is revealing so many of the limits we’ve been putting on education in the first place.
If we can’t “AI literacy” our educational problems away, we have to change our pedagogy. We have to change the ways we interact with our students inside the classroom and out: to cultivate personal relationships with them whenever possible, to model the intellectual life as something that is indeed lived out with the whole person in a many-partied dialogue stretching over millennia, decidedly not as the mere ability to move information around. This is not a time for dismay or defeat but an incitement to do the experimenting, questioning, joyful intellectual work many of us have likely wanted to do all along but have not had a reason to go off script for.
This probably means getting creative. Part of getting creative in our day probably means de-computing (as Dan McQuillan at the University of London labels it). To de-compute is to ask ourselves—given our ambient maximalist computing habits of the last couple decades—what is of value in this situation? What is important here? And then: Does a computer add value to this that it is not detracting from in some other way? Computers may help educators collect assignments neatly and read them clearly, but if that convenience is outweighed by constantly having to wonder if a student has simply copied and pasted or patch-written text with generative software, is the value of the convenience worth the problems?
Likewise, getting creative in our day probably means looking at the forms of our assessments. If the highly structured student essay makes it easier for instructors to assess because of its regularity and predictability, yet that very regularity and predictability make it a form that chat bots can produce fairly readily, well: 1) the value for assessing may not be worth the problems of teeing up chat bot–ifiable assignments and 2) maybe that wasn’t the best form for inviting genuinely insightful and exciting intellectual engagement with our disciplines’ materials in the first place.
I’ve experimented with research journals rather than papers, with oral exams as structured conversations, with essays that focus intently on one detail of a text and do not need introductions and conclusions and that privilege the student’s own voice, and other in-person, handmade, leaving-the-classroom kinds of assessments over the last academic year. Not everything succeeded the way I wanted, but it was a lively, interactive year. A convivial year. A year in which mostly I did not have to worry about whether students were automating their educations.
We have a chance as educators to rethink everything in light of what we want for our societies and for our students; let’s not miss it because it’s hard to redesign assignments and courses. (And it is hard.) Let’s experiment, for our own sakes and for our students’ sakes. Let’s experiment for the sakes of our institutions that, though they are often scoffed at in our popular discourse, I hope we believe in as vibrant communities in which we have the immense privilege of scrutinizing reality together.
Education
Shanklea primary school stays shut after solar panel fire
A primary school will remain closed until Thursday following a fire which started in solar panels on the roof.
Northumberland Fire and Rescue Service (NFRS) said the blaze began just before 14:00 BST at Shanklea Primary School in Cramlington on Saturday.
No-one was injured and Northumberland County Council said the damage was “not as significant as first thought”.
The local authority said the school would remain closed on Tuesday and Wednesday to allow remedial works and additional health and safety checks.
NFRS said five crews were sent to the scene where the solar panels on the west side of the building were ablaze.
A council spokesperson said: “School staff have worked hard alongside structural and electrical engineers to understand the extent of the damage caused by the fire.”
They added parents and carers would be informed of the next steps.
-
Funding & Business1 week ago
Kayak and Expedia race to build AI travel agents that turn social posts into itineraries
-
Jobs & Careers1 week ago
Mumbai-based Perplexity Alternative Has 60k+ Users Without Funding
-
Mergers & Acquisitions1 week ago
Donald Trump suggests US government review subsidies to Elon Musk’s companies
-
Funding & Business7 days ago
Rethinking Venture Capital’s Talent Pipeline
-
Jobs & Careers7 days ago
Why Agentic AI Isn’t Pure Hype (And What Skeptics Aren’t Seeing Yet)
-
Funding & Business4 days ago
Sakana AI’s TreeQuest: Deploy multi-model teams that outperform individual LLMs by 30%
-
Jobs & Careers7 days ago
Astrophel Aerospace Raises ₹6.84 Crore to Build Reusable Launch Vehicle
-
Jobs & Careers7 days ago
Telangana Launches TGDeX—India’s First State‑Led AI Public Infrastructure
-
Funding & Business1 week ago
From chatbots to collaborators: How AI agents are reshaping enterprise work
-
Jobs & Careers5 days ago
Ilya Sutskever Takes Over as CEO of Safe Superintelligence After Daniel Gross’s Exit