
How StanChart balances AI-powered innovation with security

In the world of global banking, the roles of driving innovation and enforcing control can seem at odds. One pushes for speed and disruption, the other for stability and security. For Alvaro Garrido, these competing priorities are a daily reality.

As Standard Chartered’s chief operating officer for technology and operations, and CIO for information security and data, Garrido is tasked with harnessing technologies such as artificial intelligence (AI) to drive efficiency and innovation, while addressing the operational and regulatory requirements of a global financial institution.

But it’s not a matter of choosing one over the other. The key is integrating them from the start. “I’ve never seen an area where we’ve compromised innovation for control or the other way around,” he said. “It comes very naturally, and we are also getting better along the way.”

Garrido said this balance can be achieved by having a deep understanding of market dynamics, the regulatory environment and the bank’s core strategy. “You need to elevate yourself to the ethos of the bank and the company. It’s an application of the systematic regime under which you operate, but also common sense.”

Nowhere is this balancing act more evident than in the adoption of AI. Pointing to the debate over innovating with AI to secure a first-mover advantage versus taking a wait-and-see approach, Garrido said Standard Chartered (StanChart) is firmly in the innovation camp, but is supported by a “very well-orchestrated non-financial risk engine” to ensure it proceeds safely.

That includes employing a defence-in-depth strategy, underpinned by threat-led scenario risk assessments, where the bank analyses assets against specific threats to determine gross risk, then overlays existing controls to calculate the residual risk. This not only ensures security resources are applied where they are most needed, but also allows for multi-layered defences.
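
The article does not spell out how these assessments are scored. Purely as a hedged illustration of the gross-versus-residual logic Garrido describes, a minimal sketch might look like the following, where the threat names, 0-to-1 scales and multiplicative control model are illustrative assumptions rather than StanChart's actual methodology:

```python
# Hypothetical sketch only: scales, weights and the multiplicative control
# model are illustrative assumptions, not StanChart's actual methodology.
from dataclasses import dataclass

@dataclass
class Threat:
    name: str
    likelihood: float  # 0.0-1.0: estimated chance the threat materialises
    impact: float      # 0.0-1.0: severity if the asset is compromised

@dataclass
class Control:
    name: str
    effectiveness: float  # 0.0-1.0: fraction of remaining risk it removes

def gross_risk(threat: Threat) -> float:
    """Risk to the asset before any controls are overlaid."""
    return threat.likelihood * threat.impact

def residual_risk(threat: Threat, controls: list[Control]) -> float:
    """Overlay layered controls: each layer cuts the risk that remains."""
    risk = gross_risk(threat)
    for control in controls:
        risk *= 1.0 - control.effectiveness
    return risk

# Example: one threat against one asset, defended in depth by two layers.
phishing = Threat("credential phishing", likelihood=0.6, impact=0.8)
layers = [Control("inbound email filtering", 0.7), Control("MFA", 0.8)]
print(f"gross={gross_risk(phishing):.2f}, residual={residual_risk(phishing, layers):.2f}")
# gross=0.48, residual=0.03
```

Because each layer in this sketch multiplies down the risk that remains rather than replacing earlier layers, it also captures why combining controls, such as patching plus segmentation, can beat any single control on its own.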

For example, the decision to patch a vulnerable device could depend on whether it is switched off or ringfenced by a deep packet inspection firewall. “Sometimes, it might be more beneficial to patch it, but other times it might be better to segment it or do both at the same time. Of course, there’s economics and synergies to consider, but we have multiple controls at our disposal,” said Garrido.

The multi-layered approach extends to securing the bank’s employees. While security awareness training is provided, Garrido noted the importance of protecting employees with sophisticated inbound and outbound security tools. The process of detecting and responding to phishing attacks, for example, is now highly automated, moving from a manual, ticket-based system to one where countermeasures are deployed in near real-time.
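
As a rough sketch of what near-real-time automation can replace in a ticket queue, the toy pipeline below scores an inbound email and picks a countermeasure automatically. The heuristics, thresholds, domain names and response actions are invented for illustration; a production system would use trained classifiers and far richer telemetry:

```python
# Toy pipeline: the heuristics, thresholds and countermeasures below are
# invented for illustration; real systems use trained classifiers.
TRUSTED_DOMAINS = {"example-bank.com"}

def phishing_score(email: dict) -> float:
    """Assign a crude 0.0-1.0 suspicion score to one inbound email."""
    score = 0.0
    if email.get("sender_domain") not in TRUSTED_DOMAINS:
        score += 0.4
    body = email.get("body", "").lower()
    if any(kw in body for kw in ("verify your account", "urgent action")):
        score += 0.4
    if email.get("has_suspicious_link", False):
        score += 0.2
    return score

def respond(email: dict) -> str:
    """Pick a countermeasure immediately instead of raising a ticket."""
    score = phishing_score(email)
    if score >= 0.8:
        return "quarantine message fleet-wide and block sender"
    if score >= 0.4:
        return "strip links and warn the recipient"
    return "deliver normally"

suspect = {"sender_domain": "evil.example", "has_suspicious_link": True,
           "body": "URGENT ACTION: verify your account now"}
print(respond(suspect))  # quarantine message fleet-wide and block sender
```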

Securing data in a balkanised world

With AI models dealing with vast amounts of data, ensuring data security and integrity is key, with security built in from the outset, not as an afterthought. “The fundamental rule is to try not to install the seat belt at the end,” said Garrido. “Retrofitting the seat belt at the end is expensive and probably going to kill you.”

This principle is applied across the software development lifecycle, where the bank is shifting security left and embedding controls directly into its continuous integration and continuous deployment pipeline to intercept and analyse code in real-time, ensuring safer code from the very beginning.
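
A concrete, if simplified, example of such a pipeline control is a pre-merge secrets scan. The sketch below assumes a generic CI step that receives changed file paths as arguments and fails the build on any finding; the patterns are illustrative, not an exhaustive or bank-specific rule set:

```python
# Sketch of a pre-merge secrets scan; patterns are illustrative, not a
# complete rule set. Exits non-zero so the CI job fails on any finding.
import re
import sys

SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                      # AWS access key ID shape
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    re.compile(r"(?i)password\s*=\s*['\"][^'\"]+['\"]"),  # hard-coded password
]

def scan_file(path: str) -> list[str]:
    """Return one finding string per matched line in the given file."""
    findings = []
    with open(path, encoding="utf-8", errors="ignore") as handle:
        for lineno, line in enumerate(handle, start=1):
            for pattern in SECRET_PATTERNS:
                if pattern.search(line):
                    findings.append(f"{path}:{lineno}: matched {pattern.pattern}")
    return findings

if __name__ == "__main__":
    # A real pipeline would pass only the files changed in the merge request.
    findings = [f for path in sys.argv[1:] for f in scan_file(path)]
    for finding in findings:
        print(finding)
    sys.exit(1 if findings else 0)
```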


Critically, the bank’s security posture is defined not by the system, but by the data it contains. If sensitive production data must be used in a test environment for a specific reason, that environment is immediately elevated to production-level security.

“To me, the definition of critical doesn’t come from what you think the system is – it’s defined by the data you have in it,” Garrido explained. “If the data is PII [personally identifiable information] or financial data, it will need additional controls.”
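
In code, that data-driven posture amounts to deriving an environment's control tier from the tags on the data it holds, not from its nominal label. A minimal sketch, using an invented taxonomy:

```python
# Invented taxonomy for illustration; the point is that the control tier
# follows the data, not the environment's nominal label.
SENSITIVE_TAGS = {"pii", "financial"}

def required_tier(data_tags: set[str]) -> str:
    """Derive an environment's control tier from the data it holds."""
    return "production-level controls" if data_tags & SENSITIVE_TAGS else "baseline controls"

# A 'test' environment loaded with production PII is treated as production.
print(required_tier({"pii", "synthetic"}))  # production-level controls
print(required_tier({"synthetic"}))         # baseline controls
```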

The bank also adopts a holistic data protection strategy that includes having an inventory and taxonomy of critical data entities, understanding where data is at any point in time, ensuring data persistence and quality, as well as detecting data anomalies using machine learning models.
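
For the anomaly detection piece specifically, one common off-the-shelf approach is an isolation forest over simple per-record features. The sketch below runs scikit-learn on synthetic data; the features and contamination rate are assumptions for demonstration, not the bank's models:

```python
# Demonstration on synthetic data; features and contamination rate are
# assumptions, not the bank's models. Requires numpy and scikit-learn.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)
# Toy per-record features: (field completeness, transaction magnitude).
normal = rng.normal(loc=[0.95, 100.0], scale=[0.02, 10.0], size=(500, 2))
odd = np.array([[0.30, 100.0],    # suspiciously incomplete record
                [0.95, 900.0]])   # order-of-magnitude value outlier
records = np.vstack([normal, odd])

model = IsolationForest(contamination=0.01, random_state=0).fit(records)
labels = model.predict(records)  # +1 = looks normal, -1 = anomaly
print("flagged rows:", np.where(labels == -1)[0])  # should include rows 500 and 501
```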

All of that work can become complex for a global bank operating in over 70 markets, each with its own data sovereignty laws. “There is a trend right now for more data balkanisation, with governments becoming more protective of data,” Garrido noted.

To address this, the bank is moving towards a set of global data platforms built on the principles of federation and orchestration. The goal is not to create one single monolithic data lake, but an intelligent integration layer that can enforce rules and cater to different sovereignty requirements.
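
One way to picture federation and orchestration is a thin routing layer that sends queries to where the data is allowed to live, rather than copying data into one lake. The jurisdictions and policies below are invented for illustration:

```python
# Jurisdictions and policies invented for illustration: the orchestration
# layer pushes queries to the data instead of copying data across borders.
RESIDENCY_RULES = {
    "SG": {"stores_in": "SG", "export_allowed": False},
    "UK": {"stores_in": "UK", "export_allowed": True},
}

def route_query(dataset_jurisdiction: str, requester_region: str) -> str:
    """Decide where a federated query runs under sovereignty constraints."""
    rule = RESIDENCY_RULES[dataset_jurisdiction]
    if requester_region != rule["stores_in"] and not rule["export_allowed"]:
        # Run the query inside the jurisdiction; return only derived results.
        return f"execute remotely in {rule['stores_in']}, return aggregates"
    return f"query data directly in {rule['stores_in']}"

print(route_query("SG", requester_region="UK"))  # execute remotely in SG, return aggregates
print(route_query("UK", requester_region="SG"))  # query data directly in UK
```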

Hybrid roles

The convergence of technologies such as AI, data management and cyber security is not only forcing CIOs and chief information security officers (CISOs) to rethink how their teams are structured, but is also giving rise to a rare breed of hybrid tech workers who are expected to be well-versed in multiple domains.

“In a way, it’s like finding the unicorn,” said Garrido. “You want a good data scientist who is also an expert in cyber security. Those people don’t exist, so you need to find the best way to cross-train people.”

Fortunately, the bank has seen an appetite for reskilling among its employees. “The level of interest is unbelievable. Everyone is training and retraining,” said Garrido. “With AI, we’re actually making room for more innovation. The shape of the organisation is going to be different, and it’s happening almost organically.”

Moving forward, Garrido sees his teams becoming more modular and agile to respond to the fast-changing macro environment. He expects to see tighter integration between business and technology teams, enabling highly focused, boutique capabilities to be developed while maintaining the rigour, predictability and uniformity of a global bank.

“Banks usually build for scale, but with more data balkanisation and data sovereignty, scale is no longer the paradigm,” said Garrido. “You need to build for modularity to cater to different regulatory requirements and jurisdictions and find that sweet spot.”





Anthropic settles, Apple sued: Tech giant faces lawsuit over AI copyright dispute

Apple has been drawn into the growing legal battle over the use of copyrighted works in artificial intelligence training, after two authors filed a lawsuit in the United States accusing the technology giant of misusing their books.

Claims of pirated dataset use

The proposed class action, lodged on Friday in federal court in Northern California, alleges that Apple copied protected works without permission, acknowledgement or compensation. Authors Grady Hendrix and Jennifer Roberson claim their books were included in a dataset of pirated material that Apple allegedly used to train its “OpenELM” large language models.

The filing argues that Apple has failed to seek consent or provide remuneration, despite the commercial potential of its AI systems. Both the company and the authors’ legal representatives declined to comment when approached.

Part of a wider copyright battle?

The case adds to a mounting wave of litigation targeting technology firms over intellectual property in the AI age. Earlier this week, AI start-up Anthropic disclosed it had reached a $1.5 billion settlement with a group of authors who accused the company of using their books to develop its Claude chatbot without authorisation. The payout, which Anthropic agreed to without admitting liability, has been described by lawyers as the largest publicly reported copyright settlement to date.

Lawyers who represented the authors against Anthropic described the accord as unprecedented. “This settlement sends a powerful message to AI companies and creators alike that taking copyrighted works from pirate websites is wrong,” said Justin Nelson of Susman Godfrey.

The settlement is among the first to be reached in a wave of copyright lawsuits filed against AI firms, including Microsoft, OpenAI, Meta and Midjourney, over their use of proprietary online content. Some competitors have pre-emptively struck licensing deals with publishers to avoid litigation; Anthropic has not disclosed any such agreements.

(With inputs from Reuters)





‘Your Work Now Shapes Your Life Decades Ahead’: Anna Gagarina, Career Expert, on Using AI to Land the Right Professional Path


By the end of 2025, the AI market in HR is set to reach nearly $7 billion—almost a billion more than in 2024. Global corporations are ready to invest heavily in technology to attract top talent. But amid the surge of AI adoption, recruitment is going through turbulent times. What was once a predominantly manual process has transformed into a high-tech operation—both for candidates and employers—so much so that the market sometimes finds AI competing not with humans, but with other AI systems.

Understanding these new trends has become crucial for career consultants and job seekers alike. How to avoid missing out on promising positions in an algorithm-driven world, where to find your place in the evolving tech landscape, and how to turn AI to your advantage—all of this is explained by Anna Gagarina, a career development expert and the founder of Job Mentor, an AI platform for career guidance.

Anna, before you began guiding others in effective job searching, you went through an extensive personal journey exploring different countries and careers. When and how did that journey begin?

I often joke that I’m the classic millennial from the memes—the one who had no clue what she wanted to be when she grew up. My career choice was entirely spontaneous: I didn’t go into business or technology; I studied history. I dedicated seven years of my life to it, winning competitions, publishing academic papers, and presenting at conferences—but even during university, I realised that teaching probably wasn’t for me. Then I wondered: who needs all this?

This personal crisis coincided with my first encounter with the business world. I had no prior experience—neither personal nor family-related. Everything started from scratch: I studied the profession, explored case studies, and observed people and companies. I took short courses in sales and marketing, and my head was bursting with information so different from the academic world I was used to. But it was during an internship at an educational company that I truly discovered a new world. I think that was my first real breakthrough in mindset—a step onto the career path I’ve been following for over 11 years.

The relocations were truly pivotal moments for me. In 2020, I was invited to work in Ireland, where I first met families who had been running their businesses for generations. That experience gave me a key insight: “It’s possible to build a business for the long term, creating a legacy and a community around it.”

Another breakthrough came to me after moving to the U.S. Here, I saw what real competition for talent looks like, how to plan a career strategically, and how to consciously and methodically build a path to success, brick by brick. In Russia, decisions are often made with a “just don’t miss out” mindset; here, people follow the principle: “Your career today is an investment in your wealth 50 years from now.”

And so much depends on your ability to build relationships—and how you do it. You can be extremely talented, but if you can’t forge connections, your talent alone won’t take you far.

At what point did you decide to focus on AI when it comes to attracting top talent?

Honestly, I’m not an early adopter or a tech evangelist.

By nature—both personally and professionally—I’ve always been a bit of a conservative. The real turning point came when I started working with companies from Silicon Valley. At first, I didn’t even handle ChatGPT very well—but I quickly learned, and then it hit me: “If even a total tech novice like me can master this, it’s clear this technology is going to change the world.”

From there, client demand pushed me further. I began to see exactly which talents the market needed, where investments were flowing, and what new opportunities and roles were emerging as these technologies advanced. That’s when I realised: as a career consultant, I simply had to move toward AI and emerging tech—because they are shaping the near future of careers.

My first large-scale experiments with AI tools started in the corporate space. As a recruiter, for example, AI-driven sourcing lets me identify more than 100 candidates a day for leading startups and draft over 50 personalized professional emails—even in a language that isn’t my own. This proactive approach helps companies hire high-quality, high-potential talent, where the importance of balancing time and quality is impossible to overstate: every day without a qualified employee can cost a company tens of thousands of dollars. AI also delivers another critical edge—speed. With it, I can create extensive training programs, learning materials, and simulations in just hours instead of months.

You’ve developed over 40 corporate programs and advised more than 1,000 HR professionals. Which project was a true breakthrough for you?

I’d say it was a project tied to an employee career management course, where I worked with HR specialists from large companies. A single 20-minute consultation with me could evolve into a full-scale project that was later implemented across companies with 10,000 or even 20,000 employees. This fundamentally changed my mindset: as an individual consultant, you work one-on-one with a client. But when your idea scales within a company of thousands, you’re genuinely influencing the system.

One outcome of this realization was the creation of my project, Job Mentor. The idea stemmed from a very personal challenge: I ran into the classic consulting problem—my resources were limited by my time and expertise. Gradually, I began automating processes, starting with reports, content, and analytics.

Over the past two years, I’ve guided more than 200 career consultations, integrating AI into every step—from defining career paths to refining résumés and identifying the right opportunities. What started as an experiment has grown into a structured system: I begin by introducing clients to AI tools, then provide customized agents that help automate job searches and self-reflection. The result is tangible: I save hours of work, while clients gain something even more valuable—time they can spend with family instead of navigating endless applications. And this was only the first stage of the transition.

At some point, I asked myself: “Could I replace myself entirely?” That’s how the idea was born for a service that doesn’t require my calendar or 15 hours of individual work. Instead, users get a ready-made solution after just 30 minutes of interacting with the system. This drastically lowers the cost of the service and makes it accessible to a much wider audience.

Traditional career coaching doesn’t come cheap. In the U.S., an hour with a consultant averages around $400, while full-service packages often range from $2,000 to $3,000. By contrast, an AI-powered consultation costs about $100. Of course, no algorithm can fully replace the human connection—but what if you need urgent career support and can’t afford traditional fees? For some, a job is a matter of survival; for others, it’s the chance to unlock potential and achieve a breakthrough in their field. Ultimately, expanding access to career guidance means creating a labor market that is fairer and more transparent for everyone.

How exactly do you replace your involvement? Which AI technologies and tools do you integrate into your work with clients and companies?

We’re a fully AI-based agency, so you could say that almost all of our core technologies fall under the AI umbrella. This includes agents that help automate routine tasks, notetakers that analyse and organise information, as well as tools like Perplexity for deep research and handling large volumes of data.

Such automation brings measurable business results. For example, by reducing the need for manual data processing and admin tasks, we save around $8,000–10,000 per month in operational costs (which would otherwise require 1–2 full-time specialists). It also significantly reduces classic risks associated with consulting and recruiting businesses, such as knowledge gaps when team members leave, or over-dependency on individual experts.

Additionally, AI allows us to continuously collect and structure career data from 100+ client interactions. Most of this is unstructured information — recorded consultations that often include tens of thousands of words with low repetition and very few clear patterns. Instead of requiring consultants to manually revisit and decode these conversations, our AI instantly analyzes the material, extracts actionable insights, and organizes it for further use.

Thanks to this capability, our consultant can conduct 20–30% more career sessions per month, raising both the speed and the depth of expertise at each stage of the client journey.

Can you share an example of when AI completely transformed recruitment outcomes?

Absolutely—but like any story, there are two sides to the coin: impressive wins and some unexpected headaches.

On the positive side, today’s notetakers do much more than just record interviews like in the old Zoom days—they gather data, take smart notes, and, crucially, learn from company-specific information. This supercharges the recruitment process: from crafting job posts and writing emails to analysing candidates and enhancing communication. For example, emails and text content can now be generated automatically, slashing the recruiter’s time on routine tasks. Natural language sourcing tools allow you to describe your ideal candidate and instantly get relevant profiles—something that used to require complex Boolean searches.

But there’s a flip side. AI has dramatically increased the number of low-quality applications, including spam, making it easy for strong candidates to get lost in the noise. Candidates are using AI to whip up instant responses and cover letters, which only amplifies the flood. Recruiters who used to handle dozens of applications now face hundreds—or even thousands—forcing them to sift, filter, and compete in a whole new landscape. In a sense, AI-driven candidates and AI-powered recruiters are now battling it out on the same field.

How will these changes—AI and automation—impact the global job market in the coming years?

You can roughly divide today’s professions into three groups. The first is non-human: jobs that are already automated or will be soon. Think heavy, dangerous, or repetitive mechanical work—tasks based on algorithmic, repeatable movements. Robots are already taking over these roles, from automated warehouse workers and bartenders to delivery drivers, taxi drivers, and even nail technicians.

The second group is human + AI copilot. These are roles where systems and platforms collect, organise, and analyse data—in medicine, logistics, sales, finance, and education—but the final decision is still made by a human.

Finally, the third group is purely human: top management roles—tasks only a person can perform, overseeing both non-human and human + AI copilot teams.

Technological changes are reshaping all three groups. Essentially, a profession is becoming a platform reflecting your education and professional expertise—the foundation on which everything else, including technology, is built. For example, being an engineer is a profession, but acquiring a more specialised skill set and integrating certain technologies can turn you into, say, a machine learning operations (MLOps) engineer.

Training and learning methods are bound to change. We need to find ways to quickly acquire in-demand skills, specialise, and understand the realities of the modern workplace. AI copilots already help accelerate the junior phase, enabling professionals to move faster into human + AI copilot roles—and eventually reach purely human-level positions.

Yes, lately, we’ve seen waves of layoffs, especially among junior employees, as AI takes over. How can companies balance automation with keeping jobs?

It’s a tricky question. In practice, companies will automate wherever it makes economic sense. If robots or AI can produce a product cheaper than a human, businesses will naturally go for the cost cut. Without regulations or mandatory limits, there’s very little to slow automation down.

Yet the reality is more nuanced. Jobs stick around as long as automating a role costs more than keeping a human. Take giants like Amazon—they pour billions into warehouse and logistics automation, yet remain among the biggest employers in the sector because some tasks are simply cheaper to do by hand.

The future of work is all about reskilling and the shift to gig and project-based careers. Lifelong employment at a single company is disappearing—and the traditional idea of permanent work is fading too. More people will become small independent “businesses,” managing careers like a portfolio of projects and tasks. It’s a world of opportunity, but it demands flexibility, constant upskilling, and the ability to pivot quickly.

What’s the number-one piece of advice you’d give to someone worried that AI will take over their job?

Every time a person frees up time, it gets filled with something new. That’s how new sectors, fields of knowledge, and professions emerge. My advice would be to ask yourself: where do you want to go next? In a year, two, five, ten—and strategically, throughout your life? Where will your mind, creativity, skills, and resources create value and help solve real problems for people?

Which skills will define “new literacy” in the next 10 years?

Even sooner than a decade from now, being literate will mean knowing how to work effectively with AI as a tool. Everyone will need to grasp how these technologies function, what they can—and can’t—do, and how to craft the right prompts—a skill surprisingly few people master today.

Equally crucial will be the ability to design systems and automate routine tasks, orchestrating different agents and tools to maximise efficiency. Basic programming will no longer be just for coders—it will become a must-have for anyone building AI-powered products and services.

The ability to structure information visually and create clear, compelling designs will also be essential, helping ideas and results cut through the noise.

And, of course, critical thinking and the capacity to filter massive streams of information will be indispensable. We’ll need to digest enormous amounts of data and make smart decisions in a constantly shifting landscape. Curiosity, experimentation, and adaptability—being ready to try new approaches and pivot quickly—will become the hallmarks of both professional and personal growth.





Duke University pilot project examining pros and cons of using artificial intelligence in college


DURHAM, N.C. — As generative artificial intelligence tools like ChatGPT have become increasingly prevalent in academic settings, faculty and students have been forced to adapt.

The debut of OpenAI’s ChatGPT in 2022 spread uncertainty across the higher education landscape. Many educators scrambled to create new guidelines to prevent academic dishonesty from becoming the norm in academia, while some emphasized the strengths of AI as a learning aid.

As part of a new pilot with OpenAI, all Duke undergraduate students, as well as staff, faculty, and students across the University’s professional schools, gained free, unlimited access to ChatGPT-4o beginning June 2. The University also announced DukeGPT, a University-managed AI interface that connects users to resources for learning and research and ensures “maximum privacy and robust data protection.”

On May 23, Duke launched a new Provost’s Initiative to examine the opportunities and challenges AI brings to student life. The initiative will foster campus discourse on the use of AI tools and present recommendations in a report by the end of the fall 2025 semester.

The Chronicle spoke to faculty members and students to understand how generative AI is changing the classroom.


Embraced or banned

Although some professors are embracing AI as a learning aid, others have implemented blanket bans and expressed caution regarding the implications of AI on problem-solving and critical thinking.

David Carlson, associate professor of civil and environmental engineering, took a “lenient” approach to AI usage in the classroom. In his machine learning course, the primary learning objective is to utilize these tools to understand and analyze data.

Carlson permits his students to use generative AI as long as they are transparent about their purpose for using the technology.

“You take credit for all of (ChatGPT’s) mistakes, and you can use it to support whatever you do,” Carlson said.

He added that although AI tools are “not flawless,” they can help provide useful secondary explanations of lectures and readings.

Matthew Engelhard, assistant professor of biostatistics and bioinformatics, said he also adopted “a pretty hands-off approach” by encouraging the use of AI tools in his classroom.

“My approach is not to say you can’t use these different tools,” Engelhard said. “It’s actually to encourage it, but to make sure that you’re working with these tools interactively, such that you understand the content.”

Engelhard emphasized that the use of these tools should not prevent students from learning the fundamental principles “from the ground up.” He noted that students, under pressure to perform, have incentives to rely on AI as a shortcut, but cautioned that using such tools might be “short-circuiting the learning process for yourself.” He likened generative AI tools to calculators, pointing out that relying on a calculator can keep one from ever learning how addition works.

Like Engelhard, Thomas Pfau, Alice Mary Baldwin distinguished professor of English, believes that delegating learning to generative AI means students may lose the ability to evaluate the process and validity of receiving information.

“If you want to be a good athlete, you would surely not try to have someone else do the working out for you,” Pfau said.

Pfau recognized the role of generative AI in the STEM fields, but he believes that such technologies have no place in the humanities, where “questions of interpretation … are really at stake.” When students rely on AI to complete a sentence or finish an essay for them, they risk “losing (their) voice.” He added that AI use defeats the purpose of a university education, which is predicated on cultivating one’s personhood.

Henry Pickford, professor of German studies and philosophy, said that writing in the humanities serves the dual function of fostering “self-discovery” and “self-expression” for students. But with increased access to AI tools, Pickford believes students will treat writing as “discharging a duty” rather than working through intellectual challenges.

“(Students) don’t go through any kind of self-transformation in terms of what they believe or why they believe it,” Pickford said.

Additionally, the use of ChatGPT has broadened opportunities for plagiarism in his classes, leading him to adopt a stringent AI policy.

Faculty echoed similar concerns at an Aug. 4 Academic Council meeting, including Professor of History Jocelyn Olcott, who said that students who learn to use AI without personally exploring more “humanistic questions” risk being “replaced” by the technology in the future.

How faculty are adapting to generative AI

Many of the professors The Chronicle interviewed expressed difficulty in discerning whether students have used AI on standard assignments. Some are resorting to a range of alternative assessment methods to mitigate potential AI usage.

Carlson, who shared that he has trouble detecting student AI use in written or coding assignments, has introduced oral presentations to class projects, which he described as “very hard to fake.”

Pickford has likewise incorporated oral assignments into his classes, including having students defend arguments aloud, and has added in-class exams to courses that previously relied solely on papers for grading.

“I have deemphasized the use of the kind of writing assignments that invite using ChatGPT because I don’t want to spend my time policing,” Pickford said.

However, he recognized that ChatGPT can prove useful in generating feedback throughout the writing process, such as when evaluating whether one’s outline is well-constructed.

A ‘tutor that’s next to you every single second’

Students noted that AI chatbots can serve as a supplemental tool to learning, but they also cautioned against over-relying on such technologies.

Junior Keshav Varadarajan said he uses ChatGPT to outline and structure his writing, as well as generate code and algorithms.

“It’s very helpful in that it can explain concepts that are filled with jargon in a way that you can understand very well,” Varadarajan said.

Varadarajan has found it difficult at times to internalize concepts when using ChatGPT because “you just go straight from the problem to the answer” without giving the problem much thought. He acknowledged that while AI can provide shortcuts at times, students should ultimately bear the responsibility for learning and performing critical thinking tasks.

For junior Conrad Qu, ChatGPT is like a “tutor that’s next to you every single second.” He said that generative AI has improved his productivity and helped him better understand course materials.

Both Varadarajan and Qu agreed that AI chatbots come in handy during time crunches or when trying to complete tasks with little effort. However, they said they avoid using AI when it comes to content they are genuinely interested in exploring deeper.

“If it is something I care about, I will go back and really try to understand everything (and) relearn myself,” Qu said.

The future of generative AI in the classroom

As generative AI technologies continue evolving, faculty members have yet to reach consensus on AI’s role in higher education and whether its benefits for students outweigh the costs.

“To me, it’s very clear that it’s a net positive,” Carlson said. “Students are able to do more. Students are able to get support for things like debugging … It makes a lot of things like coding and writing less frustrating.”

Pfau is less optimistic about generative AI’s development, raising concerns that the next generation of high school graduates will enter the college classroom already accustomed to chatbots. He added that many students find themselves at a “competitive disadvantage” when the majority of their peers are utilizing such tools.

Pfau placed the responsibility on students to decide whether the use of generative AI will contribute to their intellectual growth.

“My hope remains that students will have enough self-respect and enough curiosity about discovering who they are, what their gifts are, what their aptitudes are,” Pfau said. “… something we can only discover if we apply ourselves and not some AI system to the tasks that are given to us.”
___

This story was originally published by The Chronicle and distributed through a partnership with The Associated Press.


Copyright © 2025 by The Associated Press. All Rights Reserved.


