Tools & Platforms
NIST Proposes New Cybersecurity Guidelines for AI Systems
The National Institute of Standards and Technology (NIST) has announced plans to issue a new set of cybersecurity guidelines aimed at safeguarding artificial intelligence (AI) systems, citing rising concerns over risks tied to generative models, predictive analytics, and autonomous agents.
The concept paper outlines a framework called Control Overlays for Securing AI Systems (COSAIS), which adapts existing federal cybersecurity standards (SP 800-53) to address unique vulnerabilities in AI. NIST said the overlays will provide practical, implementation-focused security measures for organizations deploying AI technologies, from large language models to predictive decision-making systems.
“AI systems introduce risks that are distinct from traditional software, particularly around model integrity, training data security, and potential misuse,” according to the concept paper. “By leveraging familiar SP 800-53 controls, COSAIS offers a technical foundation that organizations can adapt to AI-specific threats.”
The initial overlays will cover five categories of use: generative AI applications such as chatbots and image generators; predictive AI systems used in business and finance; single-agent AI systems designed for automation; multi-agent AI systems; and secure software development practices for AI developers. Each overlay will address risks to model training, deployment, and outputs, with a focus on protecting data confidentiality, integrity, and availability.
The effort builds on NIST’s existing AI Risk Management Framework and related guidelines on adversarial machine learning and dual-use foundation models. COSAIS will also complement the agency’s work on a Cybersecurity Framework Profile for AI, ensuring consistency across risk management approaches.
NIST is inviting feedback from AI developers, cybersecurity professionals, and industry groups on the draft, including whether the proposed use cases capture real-world adoption patterns and how the overlays should be prioritized. The agency plans to release a public draft of the first overlay in fiscal year 2026, alongside a stakeholder workshop.
Interested parties can share feedback via e-mail or through a Slack channel dedicated to the project.
For more information, visit the NIST site.
About the Author
John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI and future tech. He’s been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he’s written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at [email protected].
Tools & Platforms
Greater collaboration in AI high on agenda

The Shanghai Cooperation Organization remains committed to deepening pragmatic cooperation in artificial intelligence, and China’s rapid development in the field is drawing growing attention from other SCO countries, officials and experts said.
AI cooperation is among the fastest-growing areas within the SCO. In recent years, a series of important multilateral agreements have been concluded and member states have adopted a plan for cooperation on AI development, said SCO Deputy Secretary-General Oleg Kopylov.
“Within the SCO framework, we will promote the interconnection of AI and digital infrastructure, improve the AI ecosystem, foster coordinated development across national industries, and at the same time strengthen academic exchanges and cooperation on talent cultivation,” Kopylov said.
China and other SCO countries are continuously deepening exchanges and cooperation in AI, with a number of enterprises and projects actively participating and achieving notable results, said Huang Ru, an official with the National Development and Reform Commission.
Huang said “AI-plus agriculture” is transforming the face of the industry, and is also a microcosm of how China is providing the world with various AI-powered products.
In May, usually a month of bumper soybean harvest, Ji Jiangtao, a technician from Tianjin-based agricultural machinery manufacturer Nongxin Technology, was in Ussuriysk in Russia’s Far East training local farmers to use automatic navigation for agricultural machinery.
Ji said the machines execute operations precisely along preset routes through positioning technology coupled with AI algorithms. The system employs an adaptive path-tracking algorithm and can navigate in straight lines as well as curved, circular and automatic U-turn modes, effectively enhancing operational efficiency.
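To make that mechanism concrete, below is a minimal, illustrative Python sketch of pure-pursuit path tracking, one common form of the adaptive path-tracking approach described above. It is not Nongxin Technology’s actual system; the function names, look-ahead distance, and wheelbase are assumptions chosen only for illustration.

```python
# A minimal pure-pursuit sketch (illustrative only, not the vendor's system):
# the vehicle steers toward a look-ahead point on a preset route.
import math

def pure_pursuit_steering(pose, path, lookahead=3.0, wheelbase=2.5):
    """Return a steering angle (radians) that drives the vehicle toward the
    first waypoint at least `lookahead` metres ahead of its current pose."""
    x, y, heading = pose
    # Pick the first waypoint beyond the look-ahead distance (fall back to the last one).
    target = path[-1]
    for px, py in path:
        if math.hypot(px - x, py - y) >= lookahead:
            target = (px, py)
            break
    # Transform the target into the vehicle's local frame.
    dx, dy = target[0] - x, target[1] - y
    local_y = -math.sin(heading) * dx + math.cos(heading) * dy
    # Pure-pursuit curvature converted to a front-wheel steering angle.
    curvature = 2.0 * local_y / (lookahead ** 2)
    return math.atan(wheelbase * curvature)

if __name__ == "__main__":
    straight_row = [(float(i), 0.5) for i in range(20)]  # preset route in metres
    print(pure_pursuit_steering((0.0, 0.0, 0.0), straight_row))
```

The same steering loop can be re-run against curved or U-turn waypoint lists, which is roughly how the straight-line, curved, circular and U-turn modes described above differ in practice.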
“We are continuously intensifying research in smart agriculture, and have already sold dozens of sets of agricultural machinery automatic navigation systems to Russia,” said Yan Bingxin, a senior engineer at Nongxin Technology.
In another case, a pool-cleaning robot from China, remotely operated via a mobile app, is gaining popularity in Kazakhstan as it leverages an integrated infrared-ultrasonic sensor suite and AI-driven path planning to methodically clean every part of the pool.
“Users can monitor both the route and the process even when they are away,” noted Yu Guoxing, a manager at Deepinfar Ocean Technology. He said that a Kazakh distributor has placed a single order for 40 units, while the appeal of underwater intelligent devices is also drawing interest from users in Russia and Tajikistan.
Industry observers say these discrete pilots are not isolated. Together, they sketch an emerging regional latticework of innovation. Teng Bingsheng, professor of strategic management and associate dean for strategic research at Cheung Kong Graduate School of Business, said China’s advances in AI applications can help other participating countries achieve leapfrog development and narrow the digital technology divide.
The SCO encompasses 42 percent of the world’s population, offering abundant application scenarios and vast data resources for AI, Teng said.
Such regional cooperation helps build a more open and inclusive AI ecosystem, contributes an “SCO approach” to global AI governance, and promotes better use of AI in serving regional development and improving people’s livelihoods, Teng added.
Beyond firm-led pilot projects, governments of the SCO countries are articulating national AI adoption pathways. Kyrgyzstan, for instance, hopes to study in-depth and draw on the technological achievements and practical experience of China and other member states in the field of AI.
“China serves as a model for us in developing AI. The Chinese government has continuously increased its efforts in AI technology and resource investment, and has introduced a series of supportive policies that have produced remarkable results,” said Azat Ibraimov, director for management and monitoring of the implementation of decisions of the Presidential Administration of Kyrgyzstan.
Ibraimov said China has introduced many advanced AI models and platforms whose open, shared technological resources provide useful references for other countries.
With China’s experience, Kyrgyzstan aims to develop AI technologies suited to its own national conditions and gradually narrow the gap with more technologically advanced nations, he said.
Another SCO member state, Tajikistan, is one of the earliest adopters of AI among the five Central Asian countries and has designated 2025-30 as its Years of Digital Economy and Innovation Development.
Azizjon Azimi, chairman of the AI Council under Tajikistan’s Ministry of Industry and New Technologies, said the digital economy can only flourish with AI as an enabling force and that AI has strongly propelled the country’s economic development.
“We are amazed by the pace and scale of China’s AI development. China commands strong research and development strengths. Meanwhile, Tajikistan, as a leading nation in green hydropower, can furnish training resources to support China’s frontier large models, helping more innovations like DeepSeek to arise and unlocking greater growth potential,” Azimi said.
Tools & Platforms
Why our business is going AI-in-the-loop instead of human-in-the-loop

True story: I had to threaten Replit AI’s brain that I would report its clever but dumb suggestions to the AI police for lying.
I also told ChatGPT’s image creation department how deeply disappointed I was that it could not, after 24 hours of iterations, render the same high-quality image twice without changing an item in the image or misspelling something. All learnings, and part of the journey.
We need to remain flexible and open to new tools and approaches, and simultaneously be laser-focused. It’s a contradiction, but once you start down this road, you will understand. Experimentation is a must. But it’s also important to ignore the noise and constant hype and CAPS.
How our business’ tech stack evolves
A few years ago, we started with ChatGPT and a few spreadsheets. Today, our technology arsenal spans fifteen AI platforms, from Claude and Perplexity to specialised tools like RollHQ for project management and Synthesia for AI video materials. Yet the most important lesson we’ve learned isn’t about the technology itself. It’s about the critical space between human judgment and machine capability.
The data tells a compelling story about where business stands today: McKinsey reports that 72 percent of organisations have adopted AI for at least one business function, yet only one percent believe they’ve reached maturity in their implementation. Meanwhile, 90 percent of professionals using AI report working faster, with 80 percent saying it improves their work quality.
This gap between widespread adoption and true excellence defines the challenge facing every service organisation today, including our own.
Our journey began like many others, experimenting with generative AI for document drafting and research. We quickly discovered that quality was low and simply adding tools wasn’t enough. What mattered was creating a framework that put human expertise at the center while leveraging AI’s processing power. This led us to develop what we call our “human creating the loop” approach, an evolution beyond the traditional human-in-the-loop model. It has become more about AI-in-the-loop for us than the other way round.
The distinction matters.
Human-in-the-loop suggests people checking machine outputs. Human creating the loop means professionals actively designing how AI integrates into workflows, setting boundaries, and maintaining creative control. Every client deliverable, every strategic recommendation, every customer interaction flows through experienced consultants who understand context, nuance, and the subtleties that define quality service delivery.
Our evolving tech stack
Our technology portfolio has grown strategically, with each tool selected for specific capabilities.
Each undergoes regular evaluation against key metrics, with fact-checking accuracy being paramount. We’ve found that combining multiple tools for fact checking and verification, especially Perplexity’s cited sources with Claude’s analytical capabilities, dramatically improves reliability.
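As an illustration of that pairing, here is a minimal sketch of a two-step verification pass: a search model returns cited evidence, and a second model is asked to judge a claim against that evidence only. This is not our production stack; the model names, prompts, and environment variables are assumptions.

```python
# A minimal two-tool fact-checking sketch (illustrative assumptions throughout).
import os
from openai import OpenAI          # Perplexity exposes an OpenAI-compatible API
from anthropic import Anthropic

perplexity = OpenAI(api_key=os.environ["PPLX_API_KEY"],
                    base_url="https://api.perplexity.ai")
claude = Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])

def fact_check(claim: str) -> str:
    # 1. Ask the search model for evidence with citations.
    search = perplexity.chat.completions.create(
        model="sonar",  # assumed model name
        messages=[{"role": "user",
                   "content": f"Find sources that confirm or refute: {claim}"}],
    )
    evidence = search.choices[0].message.content
    # 2. Ask the analytical model to judge the claim against that evidence only.
    review = claude.messages.create(
        model="claude-3-5-sonnet-latest",  # assumed model name
        max_tokens=500,
        messages=[{"role": "user",
                   "content": (f"Claim: {claim}\n\nEvidence:\n{evidence}\n\n"
                               "Does the evidence support the claim? "
                               "Flag anything unverified.")}],
    )
    return review.content[0].text

if __name__ == "__main__":
    print(fact_check("72 percent of organisations have adopted AI "
                     "for at least one business function."))
```

The point of the pattern is separation of duties: one tool gathers sources, another reasons over them, and a human still signs off on the result.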
The professional services landscape particularly demonstrates why human judgment remains irreplaceable. AI can analyse patterns, generate reports, and flag potential issues instantly. But deciding whether a client concern requires immediate attention or strategic patience, or whether to propose bold changes or incremental improvements, requires wisdom that comes from experience, not algorithms.
That’s also leaving aside the constant habit of AI generalising, making things up and often blatantly lying.
For organisations beginning their AI journey, start with clear boundaries rather than broad adoption.
Investment in training will be crucial.
Research shows that 70 percent of AI implementation obstacles are people- and process-related, not technical. Create internal champions who understand both the technology and your industry’s unique requirements.
Document what works and what doesn’t. Share learnings across teams. Address resistance directly by demonstrating how AI enhances rather than replaces human expertise.
The data supports this approach. Organisations with high AI maturity report three times the return on investment of those just beginning. But maturity doesn’t mean maximum automation. It means thoughtful integration that amplifies human capabilities.
Looking ahead, organisations that thrive will be those that view AI as an opportunity to elevate human creativity rather than replace it.
Alexander PR’s AI policy framework
Our approach to AI centres on human-led service delivery, as outlined in our core policy pillars:
- Oversight: Human-Led PR
We use AI selectively to improve efficiency, accuracy, and impact. Every output is reviewed, adjusted, and approved by experienced APR consultants; our approach centres on AI-in-the-loop assurance and adherence to APR’s professional standards.
- Confidentiality
We treat client confidentiality and data security as paramount. No sensitive client information is ever entered into public or third-party AI platforms without explicit permission.
- Transparency
We are upfront with clients and stakeholders about when, how, and why we use AI to support our human-led services. Where appropriate, this includes clearly disclosing the role AI plays in research, content development, and our range of communications outputs.
- Objectivity
We regularly audit AI use to guard against bias and uphold fair, inclusive, and accurate communication. Outputs are verified against trusted sources to ensure factual integrity.
- Compliance
We adhere to all applicable privacy laws, industry ethical standards, and our own company values. Our approach to AI governance is continuously updated as technology and regulation evolve.
- Education
Our team stays up to date on emerging AI tools and risks. An internal working group regularly reviews best practices and ensures responsible and optimal use of evolving technologies.
This framework is a living document that adapts as technology and regulations evolve. The six pillars provide structure while allowing flexibility for innovation. We’ve learned that transparency builds trust. Clients appreciate knowing when AI assists in their projects, understanding that it means more human time for strategic thinking.
Most importantly, we’ve recognised our policy must balance innovation with responsibility. As new tools emerge and capabilities expand, we evaluate them against our core principle: does this enhance our ability to deliver exceptional service while maintaining the trust our clients place in us?
The answer guides every decision, ensuring our AI adoption serves our mission rather than defining it.
For more on our approach and regular updates on all things AI reputation, head to Alexander PR’s website or subscribe to the AI Rep Brief newsletter.
Tools & Platforms
Markets with Bertie: Is AI driving real productivity or just market valuations?

No conversation, especially in finance circles, is complete these days without the mention of artificial intelligence (AI). It is either about how AI is the only thing driving the economy and markets in the US—and by extension all the ‘AI plays’ around the world—or everyone is busy asking each other how they use AI in their daily workflows, searching for that elusive silver bullet of technology-aided productivity.
From his experience with past technology cycles, Bertie has calibrated himself as a 76th-percentile adopter, which is to say that 24 out of 100 people are likely to adopt a new technology faster or better than Bertie.
Which is why Bertie is always on the lookout for these twenty-four to learn from them and improve his rank. Once you have appeared for a few competitive exams in India, some habits die hard.
The elusive promise of AI
The reality is that the early and/or prolific adopters haven’t been able to teach Bertie anything meaningful of late. After the initial burst of excitement about the World Wide Web becoming searchable—with search results delivered to you in a format of your choosing—Bertie’s workflow hasn’t improved much. Yes, he has ‘Ghibli-ed’ a few of his pictures and changed the background of a cousin’s photo from a Ganpati pandal to a nightclub in order to enjoy the ensuing uproar in the family WhatsApp group, but a big productivity breakthrough has been elusive. A year ago, Bertie thought that AI would fulfil the Keynesian promise of a 15-hour work week by 2030, but the dream of lounging in a hammock while AI rakes in the cash for him still seems distant.
This realisation brought Bertie back to the other hot topic of all AI discussions—of how it is powering the US economy and markets. The story there is that the hyper-scalers—which is tech-speak for big spenders on AI infrastructure, primarily data centres—are shelling out unprecedented amounts of money in what feels like an arms race. The prime beneficiary of this trend is, of course, the AI chip maker Nvidia, along with every company downstream that supplies the components and equipment needed to fuel the growth of these data centres. The world hangs on to every word of Nvidia’s talismanic CEO Jensen Huang, and like a line of chicks following the mother hen, the stock prices of all the downstream beneficiaries from hardware makers to electricity grid suppliers follow Nvidia’s lead.
Now Bertie isn’t a technology clairvoyant but is smart enough to recognise the inherent inconsistency between these two realities. On one hand, the marginal utility of AI seems to have gone down, but on the other, the amount of money being spent on its advancement has reached new heights. Something, Bertie’s intuition tells him, has to give—either a mother of all breakthroughs that takes us closer to the 15-hour work week utopia is around the corner, or we are living through history’s largest wasteful capital spending binge. And knowingly or not, willingly or otherwise, if you are an investor, you are making a bet on the outcome.
Bertie is a Mumbai-based fund manager whose compliance department wishes him to cough twice before speaking and then decide not to say it after all.