

Experian Unveils New AI Tool for Managing Credit and Risk Models



Experian Assistant for Model Risk Management is designed to help financial institutions better manage the complex credit and risk models they use to decide who gets a loan or how much credit someone should receive. The tool speeds up model validation and improves auditability and transparency, according to a Thursday (July 31) press release.

The tool accelerates the review process by automating document creation, error checking and model performance monitoring, helping organizations reduce mistakes and avoid regulatory fines. By streamlining model documentation, it can cut internal approval times by up to 70%, the release said.

It is the latest tool to be integrated into Experian’s Ascend platform, which unifies data, analytics and decision tools in one place. Ascend combines Experian’s data with clients’ data to deliver AI-powered insights across the credit lifecycle, supporting tasks such as fraud detection.

Last month, Experian added Mastercard’s identity verification and fraud prevention technology to Ascend, bolstering identity verification services for the more than 1,800 Experian customers who use the platform to prevent fraud and cybercrime.

The tool is also Experian’s latest AI initiative since the company launched its AI assistant in October. That assistant provides a deeper understanding of credit and fraud data at an accelerated pace while optimizing analytical models, compressing months of work into days and, in some cases, hours.

Experian said in the Thursday press release that the model risk management tool may reduce regulatory risk by helping companies comply with regulations in the United States and the United Kingdom, a process that normally requires extensive internal documentation, testing and review.

As financial institutions embrace generative AI, the risk management of their credit and risk models must meet regulatory guidelines such as SR 11-7 in the U.S. and SS1/23 in the U.K., the release said. Both aim to ensure models are accurate, well-documented and used responsibly.

SR 11-7 is guidance from the Federal Reserve that outlines expectations for how banks should manage the risks of using models in decision making, including model development, validation and oversight.

Similarly, SS1/23 is the U.K. Prudential Regulation Authority’s supervisory statement that sets out expectations for how U.K. banks and insurers should govern and manage model risk, especially in light of increasing use of AI and machine learning.

Experian’s model risk management tool offers customizable, pre-defined templates, centralized model repositories and transparent internal workflow approvals to help financial institutions meet regulatory requirements, per the release.

“Manual documentation, siloed validations and limited model performance monitoring can increase risk and slow down model deployment,” Vijay Mehta, executive vice president of global solutions and analytics at Experian, said in the release. With this new tool, companies can “create, review and validate documentation quickly and at scale,” giving them a strategic advantage.



Can artificial intelligence start a nuclear war?



Stanford University simulations have shown that current artificial intelligence models are prone to escalating conflicts to the point of using nuclear weapons. The study raises serious questions about the risks of automating military decisions and the role of AI in future wars.

This was reported by Politico.

The results of war games conducted by Stanford researcher Jacqueline Schneider indicate that artificial intelligence could become a dangerous factor in modern wars if it gains influence over military decision-making.

According to Schneider, the latest AI models consistently chose aggressive escalation scenarios during the simulations, including the use of nuclear weapons. She compared the algorithms’ behavior to the approach of Cold War general Curtis LeMay, who was known for his willingness to use nuclear force on the slightest pretext.

“Artificial intelligence models understand perfectly well how to escalate a conflict, but they are essentially unable to offer options for de-escalating it,” the researcher explained.

In her view, this is because most of the military literature used to train AI describes escalation scenarios rather than cases in which war was avoided.

The Pentagon insists that AI will not be given the authority to decide on launching nuclear missiles, emphasizing the preservation of “human control.” At the same time, modern warfare increasingly depends on automated systems: projects like Project Maven already rely entirely on machine-generated intelligence data, and in the future algorithms may even advise on countermeasures.

Automation has also already reached nuclear weapons. Russia has the Perimeter system, capable of delivering a strike without human intervention, and China is investing enormous resources in the development of military artificial intelligence.

Journalists also recall an incident from 1979, when Zbigniew Brzezinski, adviser to US President Jimmy Carter, received a report that 200 Soviet missiles had allegedly been launched. Only moments before the decision to retaliate was made did it emerge that the alert was a system error. The question is whether an artificial intelligence that works “reflexively” would have waited for more detailed information or pressed the “red button” automatically.

The discussion of AI’s role in the military sphere is thus becoming increasingly urgent, because not only the outcome of a battle but also the fate of all humanity may be at stake.




Varo Bank Appoints Asmau Ahmed as Chief Artificial Intelligence Officer to Drive AI Innovation




Varo Bank, the first all-digital nationally chartered bank in the U.S., has announced the hiring of Asmau Ahmed as its first Chief Artificial Intelligence and Data Officer (CAIDO). Ahmed, who brings more than 20 years of experience in innovation from Google X, Bank of America, Capital One, and Deloitte, will lead the company’s AI and machine-learning efforts, reporting directly to CEO Gavin Michael [1].

Ahmed’s appointment comes as Varo Bank continues to leverage AI in its core functions. The bank has expanded credit access by using data and advanced machine learning-driven decisioning, reinforcing its mission of advancing financial inclusion with technology. The Varo Line of Credit, launched in 2024, uses self-learning models built on proprietary algorithms to improve credit decisioning, giving some customers with reliable Varo banking histories access to loans that traditional credit score systems would have denied them [1].

Ahmed’s experience includes leading technology, portfolio, and customer-facing product teams at Bank of America and Capital One, as well as co-leading the Digital Innovation team at Deloitte. She also founded Plum Perfect, a visual search advertising technology company. Her expertise will be instrumental in guiding Varo Bank’s future advancements in AI.

“As a nationally-chartered bank, Varo is able to use data and AI in an innovative way that stands out across the finance industry,” said Ahmed. “Today we are applying machine learning for underwriting, as well as fraud prevention and detection. I am thrilled to lead the next phase of Varo’s mission-driven tech evolution and ensure AI can improve our customers’ experiences and financial lives” [1].

Varo Bank’s AI and data science efforts are designed to enhance various core functions of the company’s tech stack. The appointment of Ahmed as CAIDO underscores the bank’s commitment to leveraging AI to improve customer experiences and financial outcomes.

References

[1] https://www.businesswire.com/news/home/20250904262245/en/Varo-Bank-to-Accelerate-Responsible-and-Customer-Focused-AI-Efforts-with-New-Chief-Artificial-Intelligence-Officer-Asmau-Ahmed






Guest column: University of Tennessee “Embraces” Artificial Intelligence, Downplays Dangers



At the end of February, the University of Tennessee Board of Trustees adopted its first artificial intelligence policy.

The board produced its policy statement with little attempt to engage faculty and students in meaningful discussions about the serious problems that may arise from AI.

At UT Martin, the Faculty Senate approved the board’s policy statement in late April, also without significant input from faculty or students.

In Section V of the document, “Policy Statement and Guiding Principles,” the first subsection states: “UT Martin embraces the use of AI as a powerful tool for the purpose of enhancing human learning, creativity, analysis, and innovation within the academic context.”

The document notes potential problems such as threats to academic integrity, the compromise of intellectual property rights and risks to the security of protected university data. But it does not address what may be the most dangerous and most likely consequence of AI’s rapid growth: the limiting of human learning, creativity, analysis and innovation.

Over the past two years, faculty in the humanities have seen students increasingly turn to AI, even for low-stakes assignments. AI allows students to bypass the effort of trying to understand a reading.

If students attempt a difficult text and struggle to make sense of it, they can ask AI to explain. More often, however, students skip reading altogether and ask AI for a summary, analysis or other grade-directed answers.

In approaching a novel, a historical narrative or even the social realities of our own time, readers start with limited knowledge of the characters, events or forces at play. To understand a character’s motives, the relationship between events, or the social, economic and political interests driving them, we must construct and refine a mental image—a hypothesis—through careful reading.

This process is the heart of education. Only by grappling with a text, a formula or a method for solving a problem do we truly learn. Without that effort, students may arrive at the “right” answer, but they have not gained the tools to understand the problems they face—or to live morally and intelligently in the world.

As complex as a novel or historical narrative may be, the real world is far more complex. If we rely on AI’s interpretation instead of building our own understanding, we deprive ourselves of the skills needed to engage with that complexity.

UT Martin’s mission statement says: “The University of Tennessee at Martin educates and engages responsible citizens to lead and serve in a diverse world.” Yet we fail this mission in many ways. Most students do not follow current events and are unaware of pressing issues. Few leave the university with a love of reading, despite its importance to responsible citizenship.

With this new AI policy, the university risks compounding these failures by embracing a technology that may further erode students’ ability to think critically about the world around them.




