Tools & Platforms

Shift left might have failed – but AI looks set to deliver on its promise

“AI will replace QA.” It was not the first time I had heard this claim. But when someone said it to me directly, I asked them to demonstrate how, and they simply couldn’t.

That exchange occurred shortly after my co-founder Guy and I launched our second company, BlinqIO. This time, we focused our efforts on building a fully autonomous AI Test Engineer.





Ally CIO: Pace of tech change ‘weighs on me’

Since the July rollout of Ally’s proprietary artificial intelligence platform, the breadth of use is what’s surprised Sathish Muthukrishnan, the bank’s chief information, data and digital officer.

“We have people in the sales force that are using it, people in the operations side, customer care associates using it; obviously, folks in the technology side; marketing; our risk control partners, risk compliance; audit, privacy – they’re all big users of it,” said Muthukrishnan, who’s been in his role at the digital bank since 2019.

The Detroit-based lender gave its 10,000 employees access to Ally.ai two months ago, after testing it with a smaller group for more than a year. About 400,000 prompts have been submitted to the platform, and adoption is at about 50%. 

The bank wants employees to use the platform, which was built in-house, to handle tasks such as drafting emails and proofreading copy, to free up their time for other projects. 

When asked how AI might affect the company’s headcount, Muthukrishnan said it’s set to “have a meaningful impact on the business outcomes.”

Ally has “ambitious” growth plans, so for the company to generate more revenue while maintaining current spending levels, “technology and AI become critical,” Muthukrishnan said in a recent interview with Banking Dive. “That’s both driving efficiency and effectiveness. It’s not just efficiency of cost; it’s efficiency of speed.” 

Editor’s note: This interview has been edited for clarity and brevity.

BANKING DIVE: Where does Ally go from here with AI?

SATHISH MUTHUKRISHNAN: Since the launch, there is tremendous demand and a lot of use cases coming our way. Now, let’s turn the tables: how can we identify use cases that are harder to solve on the business side, and how do we bring those to the forefront?

With the pace at which technology is evolving, something that seems impossible or super hard to solve right now will be solvable in a few months. So we want to tackle those hard problems now, and we want to do it collectively across the organization.

Our CEO has asked me to educate the entire executive committee on how we are advancing in AI. We’re going to call it an executive committee AI day, and it’s purely about setting aside dedicated time to bring us all together, fully focused on AI. These are all busy people running big organizations, so there’s a little bit of pressure on making sure that I use their time efficiently. But we’re going to talk about the things that we can collectively solve for the company. We have thoughtfully rolled out AI, and there is interest across the company, but we need to bring the company along.

How has Ally’s AI governance approach evolved since implementation?

It might sound like a cliche, but we focus on doing simple things savagely well. Things that are simple – having risk controls, having data protection, having access controls – can be cast aside because you see the shinier object. 

For us, having an AI working group, then an AI governance steering council, then an enterprise-level committee, then the board – having this many levels of governance to ensure that AI is scaled safely and responsibly is super critical. We did the hard work ahead of time, we have exercised this governance muscle extremely well, and people have gotten used to it.

How do you see the role of AI agents evolving at Ally in the coming years?

Agentic AI allows you to look at complicated paths and complicated processes, and to digitize them. It’s still in an experimental stage for us.

For example, all applications in our tech ecosystem have observability. If there is an issue, we want to be the first to find out, before the customer or our business partner does. So a ton of alerts come our way. If I have to process those alerts without increasing my headcount as I increase the number of customers, I’m looking at agentic AI to do that. Our customers’ digital usage has doubled in the last four years, but the cost of serving them has gone down. That’s because of the introduction of new technology.

If you want somebody to reset your password, agentic AI could do that internally. Those are some of the experiments we are doing; nothing is in production or at scale yet.





South Korea unveils support measures for AI, deep-tech startups | MLex

(September 17, 2025, 08:42 GMT | Official Statement) — MLex Summary: South Korea’s Ministry of SMEs and Startups said Wednesday it will fully support entrepreneurs in artificial intelligence and other deep-tech fields, while announcing a 13.5 trillion won ($9.8 billion) program to help startups grow into unicorns. The program, to be run alongside the government’s 150 trillion won National Growth Fund, will give “promising companies” investment tailored to their growth stages. The ministry also said the government will build a cross-ministerial support system for startups in key technology sectors including AI, defense and climate tech. To back their overseas expansion, a “startup and venture campus” will also be set up in Silicon Valley to provide integrated services that help startups settle and grow abroad, the ministry added.
The statement, in Korean, is attached….



Women are being unfairly penalised by an imaginary AI competence gap

AI adoption rates show no sign of slowing. Organisations are rushing to adopt state-of-the-art AI coding technologies for their employees, with 78% of businesses in 2025 using AI in at least one function. However, amid the pace of transformation, a hidden stigma is preventing employees from embracing it fully in their day-to-day roles. 

Recent research speaks to this problem. A Pew Research Center survey revealed that 91% of American workers are allowed to use AI – but only 16% actually do. In some cases, this might be due to skills gaps or a lack of awareness of where AI could help. Increasingly, however, workers are hesitating to use these tools because they fear how they will be perceived if ‘caught’ using them.

Harvard Business Review’s latest findings on the ‘hidden penalty of using AI at work’ unveiled similar insights. The publication surveyed engineers at a leading tech company and found that fewer than half of them had been using the AI tools it offered. To understand why, participants evaluated code written by another engineer, either with or without AI assistance. Engineers believed to have used AI were deemed, on average, 9% less competent by their peers.

This unveils an unfortunate but unsurprising truth: the competence penalty is still rampant in the tech world. Perhaps most alarmingly, the problem is twice as severe for women: those deemed to have used AI faced a 13% reduction in perceived competence, compared with 6% for male engineers.

Women are disproportionately affected by the competence penalty, often judged as less capable simply for leveraging the tool, and this bias is now shaping how they, and other employees, use AI in the workplace. Understandably, women might hesitate to use AI for coding – if they do, it might be held up as ‘proof’ that they can’t do the work otherwise.

This is echoed by the findings: engineers who hadn’t adopted AI were the harshest critics of those who had. Specifically, male non-adopters judged AI-assisted code written by women 26% more harshly than AI-assisted code written by men.

The competence penalty and negative perceptions around AI will do more than knock women’s confidence – they’ll create larger barriers to the adoption of new technologies. Some employees may put off using the tools as a self-preservation instinct, reinforcing inequality between colleagues and impeding adoption within the business. Businesses will suffer too, as their AI investment won’t yield the expected return.

It is in an organisation’s best interest to address prejudice against AI adoption, and the organisation can play a key role in normalising AI as a tool that enhances productivity, not one that replaces competence. This includes creating learning environments where people can experiment with AI without judgment, and ensuring that employees are not penalised for using sanctioned tools.

Unfortunately, bias against women in STEM is nothing new and predates the advent of AI. A 2016 study of GitHub – nearly ten years ago – revealed a similar sentiment. Three million code submissions were analysed, comparing cases where the engineer’s gender was hidden with cases where it was visible: female-written code saw a 78.6% approval rate when gender was hidden, dropping to 62.5% when it was visible.

Business leaders have a responsibility to address this prejudice and help women embrace new technology without fear of judgment. Organisations need to find creative ways to get more women involved and to support them. On a day-to-day level, this can look like organising hackathons and workshops to get teams excited. These efforts, however, should be holistic: business leaders need to ensure that the culture around technologies is inclusive and that female employees can give feedback on their experience.

Simone Mink is the product operations lead at Mendix, a Siemens business.



