How StanChart balances AI-powered innovation with security


In the world of global banking, the roles of driving innovation and enforcing control can seem at odds. One pushes for speed and disruption, the other for stability and security. For Alvaro Garrido, these competing priorities are a daily reality.

As Standard Chartered’s chief operating officer for technology and operations, and CIO for information security and data, Garrido is tasked with harnessing technologies such as artificial intelligence (AI) to drive efficiency and innovation, while addressing the operational and regulatory requirements of a global financial institution.

But it’s not a matter of choosing one over the other. The key is integrating them from the start. “I’ve never seen an area where we’ve compromised innovation for control or the other way around,” he said. “It comes very naturally, and we are also getting better along the way.”

Garrido said this balance can be achieved by having a deep understanding of market dynamics, the regulatory environment and the bank’s core strategy. “You need to elevate yourself to the ethos of the bank and the company. It’s an application of the systematic regime under which you operate, but also common sense.”

Nowhere is this balancing act more evident than in the adoption of AI. Pointing to the debate between innovating with AI to secure a first-mover advantage versus taking a wait-and-see approach, Garrido said Standard Chartered (StanChart) is firmly in the innovation camp, but is supported by a “very well-orchestrated non-financial risk engine” to ensure it proceeds safely.

That includes employing a defence-in-depth strategy, underpinned by threat-led scenario risk assessments, where the bank analyses assets against specific threats to determine gross risk, then overlays existing controls to calculate the residual risk. This not only ensures security resources are applied where they are most needed, but also allows for multi-layered defences.
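In rough terms, the gross-versus-residual arithmetic works like this. The sketch below is a minimal illustrative model, not the bank's actual risk engine; the likelihood, impact and control-effectiveness figures are all invented:

```python
# Illustrative threat-led risk model: gross risk is likelihood x impact,
# and each layered control reduces the remaining risk by its effectiveness.
# All numbers are invented for illustration.

def gross_risk(likelihood: float, impact: float) -> float:
    """Risk before any controls, on a 0-1 likelihood and 1-10 impact scale."""
    return likelihood * impact

def residual_risk(gross: float, control_effectiveness: list[float]) -> float:
    """Overlay defence-in-depth controls; each layer cuts the residual further."""
    residual = gross
    for eff in control_effectiveness:
        residual *= (1.0 - eff)
    return residual

# A vulnerable device: high likelihood of exploitation, severe impact.
gross = gross_risk(likelihood=0.6, impact=9.0)   # 5.4

# Option A: patch only (assume 90% effective).
patched = residual_risk(gross, [0.9])            # 0.54

# Option B: segment behind a DPI firewall only (assume 70% effective).
segmented = residual_risk(gross, [0.7])          # 1.62

# Option C: do both, the multi-layered defence.
both = residual_risk(gross, [0.9, 0.7])          # roughly 0.16
```

The toy model captures the trade-off Garrido describes: either control alone lowers the risk, and layering both drives the residual well below what any single control achieves.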

For example, the decision to patch a vulnerable device could depend on whether it is switched off or ringfenced by a deep packet inspection firewall. “Sometimes, it might be more beneficial to patch it, but other times it might be better to segment it or do both at the same time. Of course, there’s economics and synergies to consider, but we have multiple controls at our disposal,” said Garrido.

The multi-layered approach extends to securing the bank’s employees. While security awareness training is provided, Garrido noted the importance of protecting employees with sophisticated inbound and outbound security tools. The process of detecting and responding to phishing attacks, for example, is now highly automated, moving from a manual, ticket-based system to one where countermeasures are deployed in near real-time.

Securing data in a balkanised world

With AI models dealing with vast amounts of data, ensuring data security and integrity is key, with security built in from the outset rather than as an afterthought. “The fundamental rule is to try not to install the seat belt at the end,” said Garrido. “Retrofitting the seat belt at the end is expensive and probably going to kill you.”

This principle is applied across the software development lifecycle. The bank is shifting security left, embedding controls directly into its continuous integration and continuous deployment (CI/CD) pipeline to intercept and analyse code in real time, ensuring safer code from the very beginning.
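A shift-left control of this kind can be as simple as a gate that scans code for obvious secrets before it merges. The following is a toy sketch, not the bank's tooling; real pipelines use dedicated SAST and secret-scanning tools, and the patterns and `gate` helper here are invented for illustration:

```python
# Toy shift-left gate: scan source files for secret-like strings before
# merge. The patterns are deliberately minimal and illustrative only.
import re

SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                 # AWS access key ID shape
    re.compile(r"(?i)password\s*=\s*['\"][^'\"]+"),  # hard-coded password
    re.compile(r"-----BEGIN (?:RSA )?PRIVATE KEY-----"),
]

def scan_source(text: str) -> list[str]:
    """Return the secret-like strings found in a source blob."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(m.group(0) for m in pattern.finditer(text))
    return hits

def gate(files: dict[str, str]) -> bool:
    """CI gate: True lets the change proceed, False blocks the merge."""
    for name, text in files.items():
        findings = scan_source(text)
        if findings:
            print(f"BLOCKED {name}: {len(findings)} finding(s)")
            return False
    return True

clean = gate({"app.py": "def handler(event):\n    return 'ok'\n"})
dirty = gate({"config.py": "password = 'hunter2'\n"})
```

Run as a pipeline step, a gate like this fails the build before risky code reaches a shared branch, which is the essence of intercepting code "from the very beginning".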


Critically, the bank’s security posture is defined not by the system, but by the data it contains. If sensitive production data must be used in a test environment for a specific reason, that environment is immediately elevated to production-level security.

“To me, the definition of critical doesn’t come from what you think the system is – it’s defined by the data you have in it,” Garrido explained. “If the data is PII [personally identifiable information] or financial data, it will need additional controls.”

The bank also adopts a holistic data protection strategy that includes having an inventory and taxonomy of critical data entities, understanding where data is at any point in time, ensuring data persistence and quality, as well as detecting data anomalies using machine learning models.
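At its simplest, anomaly detection over a data feed means flagging values far from the norm. The sketch below uses a robust z-score (median and median absolute deviation) rather than a trained model; it is far simpler than anything in production, and the data is invented:

```python
# Minimal anomaly detector using a robust z-score (median and MAD),
# which a single extreme value cannot mask. Production systems would use
# trained ML models; this only illustrates the idea.
from statistics import median

def find_anomalies(values: list[float], threshold: float = 3.5) -> list[int]:
    """Return indices whose robust z-score exceeds the threshold."""
    med = median(values)
    mad = median(abs(v - med) for v in values)
    if mad == 0:
        return []
    return [i for i, v in enumerate(values)
            if 0.6745 * abs(v - med) / mad > threshold]

# Daily transaction volumes with one obvious spike (invented data).
volumes = [100.0, 98.0, 103.0, 101.0, 99.0, 102.0, 500.0, 97.0, 100.0]
spikes = find_anomalies(volumes)
print(spikes)  # flags index 6, the 500.0 spike
```

The median-based score is used here because a plain mean-and-standard-deviation test can be masked by the very outlier it is meant to catch.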

All of that work can become complex for a global bank operating in over 70 markets, each with its own data sovereignty laws. “There is a trend right now for more data balkanisation, with governments becoming more protective of data,” Garrido noted.

To address this, the bank is moving towards a set of global data platforms built on the principles of federation and orchestration. The goal is not to create one single monolithic data lake, but an intelligent integration layer that can enforce rules and cater to different sovereignty requirements.
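The federation-and-orchestration idea can be sketched as an integration layer that leaves data in per-jurisdiction stores and applies each jurisdiction's rules at query time, rather than copying everything into one lake. Everything below (store contents, jurisdictions, residency rules) is hypothetical:

```python
# Hypothetical federated integration layer: data stays in per-jurisdiction
# stores, and the layer enforces each jurisdiction's residency rules at
# query time. All rules and records are invented for illustration.

STORES = {
    "SG": [{"id": 1, "name": "Alice", "balance": 1200}],
    "EU": [{"id": 2, "name": "Bob", "balance": 3400}],
}

# Per-jurisdiction rules: fields that may never leave the local store.
RESIDENCY_RULES = {
    "SG": {"restricted_fields": {"name"}},
    "EU": {"restricted_fields": {"name", "balance"}},
}

def federated_query(jurisdiction: str, requested_fields: list[str]) -> list[dict]:
    """Query one jurisdiction's store, masking fields its rules restrict."""
    restricted = RESIDENCY_RULES[jurisdiction]["restricted_fields"]
    results = []
    for record in STORES[jurisdiction]:
        results.append({
            f: (record[f] if f not in restricted else "<masked>")
            for f in requested_fields
        })
    return results

sg = federated_query("SG", ["id", "name", "balance"])
eu = federated_query("EU", ["id", "name", "balance"])
```

The point of the design is that the same global query interface yields different results per jurisdiction, so sovereignty rules are enforced by the layer instead of by copying or restructuring the underlying data.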

Hybrid roles

The convergence of technologies such as AI, data management and cyber security is not only forcing CIOs and chief information security officers (CISOs) to rethink how their teams are structured, but is also giving rise to a rare breed of hybrid tech workers who are expected to be well-versed in multiple domains.

“In a way, it’s like finding the unicorn,” said Garrido. “You want a good data scientist who is also an expert in cyber security. Those people don’t exist, so you need to find the best way to cross-train people.”

Fortunately, the bank has seen an appetite for reskilling among its employees. “The level of interest is unbelievable. Everyone is training and retraining,” said Garrido. “With AI, we’re actually making room for more innovation. The shape of the organisation is going to be different, and it’s happening almost organically.”

Moving forward, Garrido sees his teams becoming more modular and agile to respond to the fast-changing macro environment. He expects to see tighter integration between business and technology teams, enabling highly focused, boutique capabilities to be developed while maintaining the rigour, predictability and uniformity of a global bank.

“Banks usually build for scale, but with more data balkanisation and data sovereignty, scale is no longer the paradigm,” said Garrido. “You need to build for modularity to cater to different regulatory requirements and jurisdictions and find that sweet spot.”






AI Week in Review 25.09.06


Figure 1. It’s another robot party, with Google Androidify inviting people to create their own Android bot characters.

Switzerland’s EPFL launched the open multilingual Apertus LLM that comes in two sizes, 8B and 70B, and targets massively multilingual coverage, with over 1,800 languages supported. The models were trained on 15 trillion tokens across more than 1,000 languages; 40% of the data is non-English. Apertus is fully open, sharing weights on HuggingFace as well as data, training script details and a Technical Report.

Nous Research released Hermes 4 14B, a hybrid-reasoning LLM positioned as the compact sibling to its 70B and 405B variants. The 14B model can run locally (available on HuggingFace) while supporting dual “think/non-think” modes and function calling. Nous Research published a technical report sharing training and benchmark details.

Nous also announced the Husky Hold’em Bench, a poker-themed benchmark designed to test long-horizon reasoning and strategy under uncertainty. The suite provides a consistent evaluation scaffold for agentic AI models.

Tencent announced the open-source release of Hunyuan-MT-7B, a multilingual translation model that supports 33 languages, boasts lightning-fast high-performance inference for translation, and can be deployed on edge devices. They also released an ensemble “Chimera” variant:

But that’s not all. We’re also open-sourcing Hunyuan-MT-Chimera-7B, the industry’s first open-source integrated translation model. It intelligently refines translations from multiple models to deliver more accurate and professional results for specialized use cases.

Both Hunyuan-MT-7B and Hunyuan-MT-Chimera-7B are available on HuggingFace.

Alibaba’s Tongyi Lab released WebWatcher, an open research agent that can browse and complete web tasks end-to-end. The HuggingFace demo and WebWatcher code and technical paper are provided for reproducibility. It is positioned as a reference for building web-capable agents.

Figure 2. WebWatcher benchmark metrics show its excellent performance on visual understanding benchmarks.

Google introduced EmbeddingGemma, a small 308M-parameter multilingual embedding model for on-device RAG, search and similarity workloads. EmbeddingGemma documentation emphasizes small memory footprint with strong MMTEB performance relative to its size.

Google launched Androidify, a playful generative tool for custom Android avatars. Powered by Gemini 2.5 Flash and Imagen, Androidify lets users generate Android-style characters, remix them into sticker packs, and share to social apps and Messages. Google positions it as an “AI sandbox” for creative expression.

GitHub shipped Actions for AI Labeler and AI Content Moderator, powered by GitHub Models. The Actions let repos auto-apply labels or flag content in workflows, reducing toil in triage and moderation. GitHub positions them as drop-in CI/CD components with configurable prompts and thresholds.

Mistral added enterprise connectors and a Memories capability to Le Chat to integrate data sources and persist knowledge for better context. The update expands Le Chat beyond chat to more agentic workflows. Mistral’s post outlines new features and supported integrations.

OpenAI launched Projects in ChatGPT, a feature similar to Claude Workspaces that adds structured workspaces, larger file uploads, and more granular memory controls. They also added a “Branching” feature to ChatGPT to fork a conversation into alternative directions without losing the original thread.

Apple’s FastVLM family (0.5B/1.5B/7B) is now available on Hugging Face, providing faster vision encoding via the FastViTHD encoder. The FastVLM model was presented in the FastVLM paper published last December.

Luxury membership platform Vivrelle launched Ella, an AI personal styling tool, in partnership with fashion retailers Revolve and FWRD. Ella offers personalized outfit and styling recommendations across all platforms, covering rental, retail, and pre-owned options.

Google’s latest video-generation model, Veo 3, is coming to Google Photos. This new model allows U.S. users to turn still images into higher-quality four-second video clips (without audio) via the mobile app’s Create tab.

OpenAI published a research explainer on why language models hallucinate. The team argues current training and evaluation incentives reward “confident guessing” over calibrated uncertainty; they advocate benchmarks that penalize wrong answers more than abstentions. The paper ties hallucinations to the statistical nature of next-token prediction and calls for uncertainty-aware evaluation.

Google DeepMind revealed how Deep Loop Shaping improves gravitational wave observations, helping scientists understand the universe better. In experiments at LIGO Livingston, the Deep Loop Shaping reinforcement-learning method reduced noise in a critical feedback loop by 30–100 times, enabling detection of many more gravitational-wave events. DeepMind says the technique could be applied to vibration and noise suppression in aerospace and robotics.


Anthropic raised $13B at a $183B post-money valuation, citing $5B run-rate revenue and 300k business customers. The company says Claude Code passed $500M run-rate within months of full launch; funds will support enterprise demand, safety research, and global expansion.

Anthropic agreed to a landmark $1.5B settlement with authors over training data and opt-out rights. The deal, awaiting court approval, creates a claims process, an opt-out mechanism for authors, and a remuneration program for future training use. Over 500,000 writers are eligible for payouts. This record payout is for illegally pirating books to train its Claude AI model, not for the act of training on copyrighted material itself. It’s the most sweeping U.S. copyright settlement related to AI training to date.

Microsoft announced an agreement with the Federal Government to scale AI adoption, including Microsoft Copilot and Azure OpenAI. Microsoft AI services have achieved security and compliance authorizations, including FedRAMP High and DoD provisional authorization for Microsoft 365 Copilot, to support rapid AI rollout across Federal agencies.

Broadcom’s quarterly results show accelerating AI chip revenue and a blockbuster new customer order that could be OpenAI. The company said a new customer placed over $10 billion in AI infrastructure orders, fueling speculation about OpenAI ties. OpenAI is reportedly partnering with Broadcom to launch its first in-house AI chip in 2026, as it aims to diversify compute away from off-the-shelf GPUs.

OpenAI has had a busy week of acquisitions, deals, reorganization, and launches:

CoreWeave agreed to acquire OpenPipe, a startup known for reinforcement-learning tooling for agents. The companies announced a definitive agreement. This helps build out CoreWeave’s technology stack to support AI use cases.

New York Fed survey finds AI adoption up sharply, but job impacts limited, for now. Roughly 40% of services firms and 26% of manufacturers in the New York region report using AI, with retraining more common than layoffs; firms expect both hiring and some future reductions as AI adoption deepens.

Warner Bros. is suing AI startup Midjourney for copyright infringement, alleging it allows users to generate copyrighted characters like Superman and Batman without permission. The lawsuit claims Midjourney knowingly lifted content restrictions, profiting from “piracy and copyright infringement.”

California and Delaware Attorneys General are investigating OpenAI following reports of tragic deaths potentially linked to ChatGPT interactions, demanding improved safety measures for young users. In related news, both OpenAI’s ChatGPT and Google’s Gemini AI products have raised significant child safety concerns, with Common Sense Media rating Gemini as “High Risk” due to the potential for inappropriate content and unsafe advice.

Concerns about inappropriate interactions and lack of safety features in AI for children could drive further regulatory action. The FTC is making inquiries into how AI chatbots affect children’s mental health, according to WSJ reporting. The agency may seek documents from OpenAI, Meta and Character.AI.

Tesla shareholders will vote on investing in Elon Musk’s xAI to bolster Tesla’s AI, robotics, and energy ambitions. Proposed by a shareholder, the move aims to secure Tesla’s stake in advanced AI capabilities and drive value.

AI agent startup Sierra, co-founded by Bret Taylor, raised $350 million in funding, valuing the company at $10 billion. Sierra helps enterprises build customer service AI agents.

Harish Abbott’s Augment, featuring its AI assistant “Augie” for logistics, secured an $85 million Series A round led by Redpoint. Augie automates repetitive tasks like gathering bids, tracking packages, and managing invoices for freight companies.

Isotopes AI came out of stealth on Thursday with a $20 million seed round. Its AI agent, Aidnn, enables business managers to query complex data in natural language, drafting planning documents from various sources like ERP and CRM.

OpenAI lays out its case for AI’s role in expanding economic opportunity. This company essay by Applications CEO Fidji Simo is a strategic positioning piece ahead of the launch of OpenAI’s Jobs Platform, explaining the company’s push for AI literacy through certifications. She argues that AI broadens access to income-generating tools, but that AI’s disruption requires reskilling at scale.

The premise for AI reskilling is simple:

Studies show that AI-savvy workers are more valuable, more productive, and paid more than workers without AI skills.

Yes, AI literacy is vital to being successful in the AI era, and soon most American workers will interface with AI at work. The AI technology revolution is so fast that even OpenAI’s stated goal of 10 million certifications by 2030 seems insufficient.

Our motivation for writing AI Changes Everything is similar; we want to help you stay on top of AI. If there is a topic you want me to dive further into for your own AI upskilling, leave a comment and I’ll dig into it.


Microlearning Offers A Flexible Approach to Gen AI Education


Microlearning has emerged as a dynamic approach to corporate education, breaking down complex topics into concise, focused lessons that are easier to digest and apply. For corporations striving to remain competitive in the age of generative artificial intelligence (Gen AI), this strategy offers a powerful way to upskill employees without disrupting daily operations.

By delivering bite-sized, actionable content tailored to specific roles, microlearning empowers employees to absorb information at their own pace, practice what they’ve learned, and quickly apply new skills. For businesses navigating the complexities of digital transformation, this approach provides the agility needed to stay ahead of the curve.

Why corporations need microlearning for Gen AI education  

In today’s fast-paced business environment, corporate leaders face the challenge of equipping employees with the skills required to harness the power of technologies like Gen AI. The vast potential of Gen AI for streamlining processes, enhancing decision-making, and driving innovation makes it an essential area of focus. Yet traditional training programs, which often demand significant time and resources, are no longer practical for many companies.

Microlearning offers a solution by making education flexible, personalized, and accessible. Lessons typically last 10–15 minutes and are delivered through formats that cater to different learning styles, such as videos, interactive exercises, and quizzes. This format is ideal for employees juggling demanding workloads, as it allows them to integrate learning into their schedules seamlessly.

Furthermore, microlearning ensures relevance by offering tailored learning paths. For example, a marketing team can focus on modules that explore Gen AI-powered audience segmentation, while a customer service team might learn about automated response systems and predictive analytics. This customization ensures that training is directly applicable, increasing engagement and retention.

Client Case Study in Gen AI Education: Microlearning in Action 

To illustrate how microlearning can transform corporate training, consider the case of a multinational consumer packaged goods (CPG) firm that sought to integrate Gen AI into its operations. The company recognized the potential of AI tools to enhance productivity and innovation but faced several challenges:

  1. Time Constraints: Employees were already stretched thin, managing tight deadlines and critical projects.
  2. Skill Gaps: Teams varied widely in their familiarity with AI technologies, requiring training tailored to different levels of expertise.
  3. Scalability: With offices spread across multiple time zones, delivering consistent, high-quality training to a global workforce was a major challenge.

To address these challenges, the company asked me to help it adopt a microlearning strategy.

Designing a Microlearning Program 

We began by identifying the key areas where Gen AI could make an immediate impact, including sales forecasting, product development, and customer experience management. Working with subject matter experts, we created a series of microlearning modules tailored to specific roles and objectives.

For example:

  • Sales Teams: Modules focused on using AI tools to predict customer needs, improve lead scoring, and optimize outreach strategies.
  • Product Developers: Training covered AI-driven design tools and algorithms to accelerate prototyping and refine product features.
  • Customer Support Teams: Lessons explored AI chatbots, sentiment analysis, and personalized service recommendations.

Each module was designed to be engaging and interactive, encouraging employees to apply what they learned immediately. The content was hosted on a mobile-friendly Learning Management System (LMS), ensuring accessibility for employees regardless of location or time zone.

Making Learning Flexible and Personalized 

Flexibility was a cornerstone of the program. Employees could access the modules whenever it suited them, such as during breaks, commutes, or downtime between meetings. The LMS also included progress tracking, enabling participants to monitor their development and revisit areas where they needed additional support.

To enhance engagement, we helped the company incorporate gamification elements, such as badges and leaderboards, to motivate learners and celebrate achievements. Employees could also choose their own learning paths, selecting modules that aligned with their roles and career aspirations. This personalization ensured that training was not only relevant but also empowering, as employees felt a greater sense of ownership over their learning journey.

Support and Mentorship 

From our experience with other companies, self-paced learning works best with guidance, so we helped the company pair the microlearning program with optional mentorship opportunities. Experienced AI practitioners within the organization served as mentors, hosting weekly virtual office hours where employees could ask questions and receive advice.

For instance, a sales manager might consult a mentor about integrating AI tools into an existing CRM system, while a customer support specialist could seek tips on optimizing chatbot responses for better customer satisfaction. These interactions provided valuable context and practical insights, reinforcing the concepts covered in the microlearning modules.

Results That Speak for Themselves 

After six months, the microlearning initiative delivered measurable results across multiple metrics:

  1. Increased Efficiency: Sales teams reported a 22% reduction in time spent on lead qualification, thanks to AI-enhanced processes.
  2. Improved Innovation: Product developers cut prototyping time by 18%, enabling faster iteration and delivery of new products.
  3. Enhanced Customer Experience: Customer satisfaction scores improved by 26%, as support teams used AI tools to provide quicker, more personalized service.

These results not only demonstrated the immediate impact of microlearning but also highlighted its long-term potential to drive operational excellence and competitive advantage.

Building a Culture of Continuous Learning 

Beyond the tangible outcomes, the microlearning program had a profound effect on the company’s culture. Employees became more confident and proactive in experimenting with AI tools, sharing their learnings with colleagues, and proposing new applications for the technology.

For example, a marketing team used insights from their training to develop an AI-powered campaign that outperformed previous efforts by 30%. Similarly, a regional office implemented an AI tool for inventory management, significantly reducing waste and costs. These successes reinforced a culture of continuous learning and innovation, where employees were empowered to take initiative and explore the possibilities of emerging technologies.

Microlearning is not a one-and-done solution; it is a dynamic approach that evolves with the needs of the business. As Gen AI capabilities advance, companies can expand their training libraries to cover new applications, ensuring that employees remain at the forefront of innovation.

For example, future modules might focus on advanced AI ethics, regulatory compliance, or integrating AI into sustainability initiatives, while managing risks. By continuously updating and refining their microlearning programs, corporations can maintain a skilled and adaptable workforce ready to tackle the challenges of tomorrow.

The Strategic Advantage of Gen AI Education Through Microlearning 

For corporations, microlearning offers a strategic advantage in an increasingly competitive landscape. It allows businesses to upskill employees quickly and efficiently, driving productivity and innovation while minimizing disruption. Moreover, by tailoring training to the unique needs of different teams and roles, microlearning ensures that every employee can contribute meaningfully to the company’s success. Whether it’s a sales representative using AI to close deals faster or an operations manager leveraging AI for process optimization, the benefits of this approach extend across the organization.

By embracing microlearning, corporations not only enhance their operational capabilities but also foster a culture of growth, adaptability, and forward-thinking. In an era defined by rapid technological change, this mindset is critical for long-term success. Microlearning represents the future of corporate education. Its ability to deliver focused, engaging, and personalized training makes it the ideal approach for equipping employees with the skills they need to thrive in the age of Gen AI. By adopting this strategy, corporations can ensure that their teams are not just keeping up with change but leading it, driving innovation and setting new benchmarks for success.



Copyright 2025 The CEOWORLD magazine. All rights reserved. This material (and any extract from it) must not be copied, redistributed or placed on any website, without CEOWORLD magazine’s prior written consent. For media queries, please contact: info@ceoworld.biz



