Education
IBM Introduces Agentic AI Governance and Security Platform — Campus Technology
IBM has launched a new software stack for enterprise IT teams tasked with managing the complex governance and security challenges posed by autonomous AI systems.
The company’s new offering aims to unify its watsonx.governance and Guardium AI Security platforms to provide centralized oversight of agentic AI, a category of generative AI that performs tasks autonomously without direct human prompts.
The combined platform aims to give IT departments the ability to create and enforce lifecycle governance policies, conduct automated red teaming, detect what IBM calls “shadow agents,” and assess models for security and compliance across 12 global regulatory frameworks. An embedded catalog of tools and a new integration with AllTrue.ai will help IT teams identify AI agents running in unsanctioned environments, including across multi-cloud deployments and developer repositories.
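Conceptually, “shadow agent” detection comes down to reconciling the agents actually observed across environments against a sanctioned catalog. The sketch below is purely illustrative — the function name, data shapes, and example records are hypothetical and do not reflect IBM’s actual API:

```python
# Illustrative sketch: flag AI agents observed in any environment
# that are absent from the sanctioned catalog ("shadow agents").
# All names and data are hypothetical; this is not IBM's platform API.

def find_shadow_agents(discovered, catalog):
    """Return discovered agents whose id is not in the sanctioned catalog."""
    sanctioned = {agent["id"] for agent in catalog}
    return [agent for agent in discovered if agent["id"] not in sanctioned]

catalog = [
    {"id": "support-bot", "owner": "it"},
    {"id": "doc-summarizer", "owner": "legal"},
]
discovered = [
    {"id": "support-bot", "env": "prod"},        # sanctioned
    {"id": "scraper-agent", "env": "dev-repo"},  # not in catalog -> shadow agent
]

shadow = find_shadow_agents(discovered, catalog)
print([agent["id"] for agent in shadow])  # ['scraper-agent']
```

A production system would correlate far more signals (network traffic, repository scans, cloud inventories), but the underlying reconciliation step is this set difference.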
“One of the biggest challenges for security teams is translating incidents and compliance violations into quantifiable business risk,” said Jennifer Glenn, Research Director for the IDC Security and Trust Group. “The rapid adoption of AI and agentic AI amplifies this issue. Unifying AI governance with AI security gives organizations the necessary context to find and prioritize risks, as well as the information to clearly communicate the consequences of not addressing them.”
Some features are available immediately, including the compliance accelerators and basic policy management. Additional capabilities, such as agent audit trails, third-party tool integration, and automated risk scoring, will roll out later this year, with a major release slated for June 27.
IBM Consulting is also offering deployment services to help IT organizations integrate the platform with their existing security operations and compliance programs. Per the company’s announcement:
“To help clients scale AI responsibly, IBM Consulting Cybersecurity Services is introducing a new set of services that brings together data security platforms, like IBM Guardium AI Security, with deep AI technology and domain consulting. The new services will support organizations through their AI transformation journey: from discovering AI deployments and potential vulnerabilities, to implementing secure-by-design practices across AI layers, to governance guidance for a constantly evolving regulatory landscape. The new services build on IBM Consulting’s experience helping hundreds of clients worldwide on AI strategy and governance, including Nationwide Building Society and e&.”
For enterprise IT teams responsible for AI deployment governance, the new IBM offering provides a centralized architecture for policy enforcement, agent monitoring and compliance management. Its emphasis on interoperability and lifecycle tracking is aimed at helping IT organizations regain visibility and control as generative AI systems become more autonomous and harder to trace.
For more information, visit the IBM site.
How AI Interviews Are Changing Job Hunting Forever
The job search landscape is evolving at lightning speed, and nowhere is this more evident than in the rise of AI-powered interviews. In 2025, more companies are turning to artificial intelligence to screen, assess, and even interview candidates—often before a human ever gets involved. As someone who recently went through a fully automated, remote AI interview, I can say firsthand: the future of job hunting is here, and it’s changing everything.
What Are AI Interviews?
AI interviews use artificial intelligence to conduct, analyze, and score job interviews. Instead of speaking with a human recruiter, candidates interact with a computer program—often via video, chat, or even voice calls. The AI evaluates responses based on keywords, tone, facial expressions, and more, providing employers with data-driven insights into each applicant.
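One ingredient of that evaluation — keyword matching — can be imagined as scoring a transcript against a weighted vocabulary. The toy sketch below is purely illustrative (the weights and function are invented for this example); real platforms combine many richer signals such as tone and facial analysis:

```python
# Toy illustration of keyword-based interview scoring.
# Weights and function are invented for illustration only;
# real AI interview systems use far more than keyword counts.

KEYWORD_WEIGHTS = {"deadline": 2, "team": 1, "result": 3, "problem": 2}

def score_answer(transcript):
    """Sum the weights of recognized keywords in a transcript."""
    words = transcript.lower().split()
    return sum(KEYWORD_WEIGHTS.get(w.strip(".,"), 0) for w in words)

answer = ("I broke the problem down, worked with my team, "
          "and delivered the result before the deadline.")
print(score_answer(answer))  # problem(2) + team(1) + result(3) + deadline(2) = 8
```

This also hints at why candidates are advised to use specific, concrete vocabulary: a vague answer that never touches the expected terms scores poorly regardless of its substance.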
My Experience: The Fully Automated AI Interview
When I applied for a remote marketing position at a global tech company, I was invited to complete an “AI-powered video interview.” Here’s how it worked:
1. The Setup
I received a link to a secure interview portal. The instructions were clear: I’d be asked a series of questions, have 30 seconds to prepare each answer, and 2 minutes to respond. The entire process would be recorded and analyzed by the company’s AI system.
2. The Interview
The AI interviewer greeted me with a friendly, pre-recorded message. Then, questions appeared on the screen, such as:
“Describe a time you solved a difficult problem at work.”
“How do you handle tight deadlines?”
“What motivates you in a remote work environment?”
I recorded my answers, trying to maintain eye contact with the webcam and speak clearly. The AI tracked my facial expressions, voice tone, and even the speed of my responses.
3. The Analysis
After I finished, the AI instantly analyzed my performance. I received a summary report highlighting my communication skills, confidence, and emotional intelligence. The system also flagged areas for improvement, such as using more specific examples or varying my tone.
4. The Follow-Up
Within days, I received an email from a human recruiter, inviting me to a live video interview. The AI interview had served as the first screening step, saving time for both me and the company.
How AI Interviews Are Transforming Job Hunting
1. Faster, More Efficient Screening
AI interviews allow companies to screen hundreds of candidates quickly, without scheduling conflicts or time zone issues. This means faster feedback for job seekers and less waiting.
2. Reduced Human Bias
AI can help minimize unconscious bias by focusing on objective data rather than first impressions or personal preferences. However, it’s important to note that AI is only as unbiased as the data it’s trained on.
3. Consistency and Fairness
Every candidate gets the same questions, time limits, and evaluation criteria, making the process more consistent and transparent.
4. Remote and Accessible
AI interviews can be completed from anywhere, making job opportunities more accessible to people regardless of location or mobility.
Tips for Succeeding in an AI Interview
Practice with AI interview simulators (many are available online).
Speak clearly and confidently; avoid monotone delivery.
Maintain eye contact with the camera, as the AI may track engagement.
Use specific examples and structure your answers (e.g., STAR method: Situation, Task, Action, Result).
Check your tech setup—good lighting, a quiet space, and a stable internet connection are essential.
Frequently Asked Questions
Q: Are AI interviews replacing human recruiters?
A: Not entirely. AI interviews are usually the first step, helping to shortlist candidates. Human interviews still play a crucial role in the final hiring decision.
Q: Can AI interviews be biased?
A: While AI aims to reduce bias, it can inherit biases from the data it’s trained on. Companies are working to make AI systems more fair and transparent.
Q: What if I’m not comfortable on camera?
A: Practice helps! Many platforms offer practice questions. Focus on being yourself and answering clearly.
Q: How can I prepare for an AI interview?
A: Research common questions, practice your responses, and get comfortable with the technology.
Conclusion
AI interviews are revolutionizing the job search process, making it faster, more efficient, and potentially fairer. My experience with a fully automated, remote AI recruiter was both challenging and enlightening. While it felt strange at first to “talk” to a computer, I appreciated the instant feedback and the convenience of interviewing from home.
As AI technology continues to evolve, job seekers should embrace these changes, prepare accordingly, and view AI interviews as an opportunity to showcase their skills in a new way. The future of job hunting is here—are you ready to meet your next recruiter, even if it’s a robot?
Register Now for Tech Tactics in Education: Overcoming Roadblocks to Innovation — Campus Technology
Tech Tactics in Education will return on Sept. 25 with the conference theme “Overcoming Roadblocks to Innovation.” Registration for the fully virtual event, brought to you by the producers of Campus Technology and THE Journal, is now open.
Offering hands-on learning and interactive discussions on the most critical technology issues and practices across K–12 and higher education, the conference will cover key topics such as:
- Tapping into the potential of AI in education;
- Navigating cybersecurity and data privacy concerns;
- Leadership and change management;
- Evaluating emerging ed tech choices;
- Foundational infrastructure for technology innovation;
- And more.
A full agenda will be announced in the coming weeks.
Call for Speakers Still Open
Tech Tactics in Education seeks higher education and K–12 IT leaders and practitioners, independent consultants, association or nonprofit organization leaders, and others in the field of technology in education to share their expertise and experience at the event. Session proposals are due by Friday, July 11.
For more information, visit TechTacticsInEducation.com.
About the Author
Rhea Kelly is editor in chief for Campus Technology, THE Journal, and Spaces4Learning. She can be reached at [email protected].
9 AI Ethics Scenarios (and What School Librarians Would Do)
A common refrain about artificial intelligence in education is that it’s a research tool, and as such, some school librarians are acquiring firsthand experience with its uses and controversies.
Leading a presentation last week at the ISTELive 25 + ASCD annual conference in San Antonio, a trio of librarians parsed appropriate and inappropriate uses of AI in a series of hypothetical scenarios. They broadly recommended that schools have, and clearly articulate, official policies governing AI use and be cautious about inputting copyrighted or private information.
Amanda Hunt, a librarian at Oak Run Middle School in Texas, said their presentation would focus on scenarios because librarians are experiencing so many.
“The reason we did it this way is because these scenarios are coming up,” she said. “Every day I’m hearing some other type of question in regards to AI and how we’re using it in the classroom or in the library.”
- Scenario 1: A class encourages students to use generative AI for brainstorming, outlining and summarizing articles.
Elissa Malespina, a teacher librarian at Science Park High School in New Jersey, said she felt this was a valid use, as she has found AI to be helpful for high schoolers who are prone to get overwhelmed by research projects.
Ashley Cooksey, an assistant professor and school library program director at Arkansas Tech University, disagreed slightly: While she appreciates AI’s ability to outline and brainstorm, she said, she would discourage her students from using it to synthesize summaries.
“Point one on that is that you’re not using your synthesis and digging deep and reading the article for yourself to pull out the information pertinent to you,” she said. “Point No. 2 — I publish, I write. If you’re in higher ed, you do that. I don’t want someone to put my work into a piece of generative AI and an [LLM] that is then going to use work I worked very, very hard on to train its language learning model.”
- Scenario 2: A school district buys an AI tool that generates student book reviews for a library website, which saves time and promotes titles but misses key themes or introduces unintended bias.
All three speakers said this use of AI could certainly be helpful to librarians, but if the reviews are labeled in a way that makes it sound like they were written by students when they weren’t, that wouldn’t be ethical.
- Scenario 3: An administrator asks a librarian to use AI to generate new curriculum materials and library signage. Do the outputs violate copyright or proper attribution rules?
Hunt said the answer depends on local and district regulations, but she recommended using Adobe Express because it doesn’t pull from the Internet.
- Scenario 4: An ed-tech vendor pitches a school library on an AI tool that analyzes circulation data and automatically recommends titles to purchase. It learns from the school’s preferences but often excludes lesser-known topics or authors of certain backgrounds.
Hunt, Malespina and Cooksey agreed that this would be problematic, especially because entering circulation data could include personally identifiable information, which should never be entered into an AI.
- Scenario 5: At a school that doesn’t have a clear AI policy, a student uses AI to summarize a research article and gets accused of plagiarism. Who is responsible, and what is the librarian’s role?
The speakers as well as polled audience members tended to agree the school district would be responsible in this scenario. Without a policy in place, the school will have a harder time establishing whether a student’s behavior constitutes plagiarism.
Cooksey emphasized the need for ongoing professional development, and Hunt said any districts that don’t have an official AI policy need steady pressure until they draft one.
“I am the squeaky wheel right now in my district, and I’m going to continue to be annoying about it, but I feel like we need to have something in place,” Hunt said.
- Scenario 6: Attempting to cause trouble, a student creates a deepfake of a teacher acting inappropriately. Administrators struggle to respond, they have no specific policy in place, and trust is shaken.
Again, the speakers said this is one more example to illustrate the importance of AI policies as well as AI literacy.
“We’re getting to this point where we need to be questioning so much of what we see, hear and read,” Hunt said.
- Scenario 7: A pilot program uses AI to provide instant feedback on student essays, but English language learners consistently get lower scores, leading teachers to worry the AI system can’t recognize code-switching or cultural context.
In response to this situation, Hunt said it’s important to know whether the parent has given their permission to enter student essays into an AI, and the teacher or librarian should still be reading the essays themselves.
Malespina and Cooksey both cautioned against relying on AI plagiarism detection tools.
“None of these tools can do a good enough job, and they are biased toward [English language learners],” Malespina said.
- Scenario 8: A school-approved AI system flags students who haven’t checked out any books recently, tracks their reading speed and completion patterns, and recommends interventions.
Malespina said she doesn’t want an AI tool tracking students in that much detail, and Cooksey pointed out that reading speed and completion patterns aren’t reliably indicative of anything that teachers need to know about students.
- Scenario 9: An AI tool translates texts, reads books aloud and simplifies complex texts for students with individualized education programs, but it doesn’t always translate nuance or tone.
Hunt said she sees benefit in this kind of application for students who need extra support, but she said the loss of tone could be an issue, and it raises questions about infringing on audiobook copyright laws.
Cooksey expounded upon that.
“Additionally, copyright goes beyond the printed work. … That copyright owner also owns the presentation rights, the audio rights and anything like that,” she said. “So if they’re putting something into a generative AI tool that reads the PDF, that is technically a violation of copyright in that moment, because there are available tools for audio versions of books for this reason, and they’re widely available. Sora is great, and it’s free for educators. … But when you’re talking about taking something that belongs to someone else and generating a brand-new copied product of that, that’s not fair use.”