AI Insights
Why AI in finance often stalls—and how to fix it

Too often, AI initiatives begin with a promising tool or use case, but fail to address an immediate, measurable business problem. The most successful finance functions start small—but deliberately—with real client data and concerns. Whether it’s improving query turnaround times, accelerating dispute resolution or streamlining reconciliations, the goal is to build trust quickly through visible outcomes. These early wins build confidence—and more importantly, the momentum needed to scale.
For example, a global building materials manufacturer engaged IBM Consulting® to tackle a backlog of over 1.2 million customer queries annually. Using real operational data, we implemented a coordinated set of AI-powered agents to triage queries, assess financial risk and automate enterprise resource planning (ERP) updates. The result was a 60% improvement in query resolution efficiency, faster deliveries and measurable cash flow gains. This approach helped the client reduce days sales outstanding (DSO) within the same fiscal year.
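As a rough illustration of the pattern (not IBM's actual implementation, which is not public), a coordinated triage pipeline might be wired together as in the sketch below; all function names, categories, and thresholds are hypothetical.

```python
# Minimal sketch of a query-triage pipeline of the kind described above.
# All names, categories, and thresholds are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class CustomerQuery:
    query_id: str
    text: str
    amount_at_risk: float  # invoice value tied to the query, in USD

def triage(query: CustomerQuery) -> str:
    """Route a query to a handling lane. This toy version uses keyword
    rules; a production agent would use an LLM classifier instead."""
    text = query.text.lower()
    if "dispute" in text or "chargeback" in text:
        return "dispute_resolution"
    if "reconcil" in text:
        return "reconciliation"
    return "general_inquiry"

def assess_risk(query: CustomerQuery) -> str:
    """Flag queries whose tied-up cash justifies expedited handling."""
    return "high" if query.amount_at_risk > 50_000 else "standard"

def handle(query: CustomerQuery) -> dict:
    """Coordinate the agents and emit a structured result that an ERP
    integration layer could consume (e.g., to release an order hold)."""
    return {
        "query_id": query.query_id,
        "lane": triage(query),
        "risk": assess_risk(query),
    }

print(handle(CustomerQuery("Q-1041", "Dispute on invoice 778", 82_000.0)))
```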
AI Insights
General-purpose LLMs can be used to track true critical findings

General-purpose large language models (LLMs), such as GPT-4, can be adapted to detect and categorize multiple critical findings within individual radiology reports, using minimal data annotation, researchers have reported.
A team led by Ish Talati, MD, of Stanford University, with colleagues from the Arizona Advanced AI and Innovation (A3I) Hub and Mayo Clinic Arizona, retrospectively evaluated two “out-of-the-box” LLMs — GPT-4 and Mistral-7B — to see how well they might perform at classifying findings indicating medical emergency or requiring immediate action, among others. Their results were published on September 10 in the American Journal of Roentgenology.
Timely critical findings communication can be challenging due to the increasing complexity and volume of radiology reports, the authors noted. “Workflow pressures highlight the need for automated tools to assist in critical findings’ systematic identification and categorization,” they said.
The study demonstrated that few-shot prompting, which incorporates a small number of examples to guide the model, can help general-purpose LLMs adapt to the complex medical task of sorting findings into distinct actionable categories.
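To make the technique concrete, here is a minimal sketch of how a few-shot prompt for this task might be assembled; the report snippets and exact wording are invented for illustration and are not the study's actual prompts.

```python
# Sketch of few-shot prompt construction for critical-findings
# categorization. Example reports and phrasing are hypothetical.
EXAMPLES = [
    ("New large right pneumothorax.", "true critical finding"),
    ("Known aortic dissection, unchanged from prior CT.", "known/expected critical finding"),
    ("Opacity possibly representing early pneumonia.", "equivocal critical finding"),
]

def build_prompt(report_text: str) -> str:
    # Each in-context example pairs a report with its target category.
    shots = "\n".join(f"Report: {r}\nCategory: {c}" for r, c in EXAMPLES)
    return (
        "Classify each critical finding in the radiology report as a true, "
        "known/expected, or equivocal critical finding.\n\n"
        f"{shots}\n\nReport: {report_text}\nCategory:"
    )

print(build_prompt("Interval increase in subdural hematoma."))
```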
To that end, Talati and colleagues evaluated GPT-4 and Mistral-7B on more than 400 radiology reports selected from the MIMIC-III database of deidentified health data from patients in the intensive care unit (ICU) at Beth Israel Deaconess Medical Center from 2001 to 2012.
Analysis included 252 radiology reports spanning multiple modalities (56% CT, about 30% radiography, and 9% MRI, among others) and anatomic regions (mostly chest, pelvis, and head).
The reports were divided into a prompt engineering tuning set of 50, a holdout test set of 125, and a pool of 77 remaining reports used as examples for few-shot prompting. An external test set consisted of 180 chest x-ray reports extracted from the CheXpert Plus database.
In manual reviews conducted separately by a board-certified radiologist and by software, the reports were classified at consensus into one of three categories:
- True critical finding (new, worsening, or increasing in severity since prior imaging)
- Known/expected critical finding (a critical finding that is known and unchanged, improving, or decreasing in severity since prior imaging)
- Equivocal critical finding (an observation that is suspicious for a critical finding but that is not definitively present based on the report)
The models analyzed the submitted report and provided structured output containing multiple fields, listing model-identified critical findings within each of the three categories, according to the group. Evaluation included automated text similarity metrics (BLEU-1, ROUGE-F1, G-Eval) and manual performance metrics (precision, recall) in the three categories.
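For the manual metrics, a per-category precision/recall computation might look like the sketch below; the output schema and finding names are assumptions, and in the study the matching of model-identified findings to consensus labels was performed manually rather than by exact string comparison.

```python
# Sketch of per-category precision/recall scoring against consensus
# labels, assuming each model emits findings grouped by category.
# Field names and findings are assumptions, not the paper's schema.
CATEGORIES = ["true", "known/expected", "equivocal"]

model_output = {"true": {"pneumothorax"}, "known/expected": set(), "equivocal": set()}
reference    = {"true": {"pneumothorax"}, "known/expected": {"dissection"}, "equivocal": set()}

for cat in CATEGORIES:
    pred, ref = model_output[cat], reference[cat]
    tp = len(pred & ref)  # findings the model placed in the right category
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(ref) if ref else 0.0
    print(f"{cat}: precision={precision:.2f} recall={recall:.2f}")
```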
Precision and recall comparison for LLMs tracking critical findings

| Type of test set and classification | GPT-4 | Mistral-7B |
| --- | --- | --- |
| **Precision** | | |
| Holdout test set, true critical findings | 90.1% | 75.6% |
| Holdout test set, known/expected critical findings | 80.9% | 34.1% |
| Holdout test set, equivocal critical findings | 80.5% | 41.3% |
| External test set, true critical findings | 82.6% | 75% |
| External test set, known/expected critical findings | 76.9% | 33.3% |
| External test set, equivocal critical findings | 70.8% | 34% |
| **Recall** | | |
| Holdout test set, true critical findings | 86.9% | 77.4% |
| Holdout test set, known/expected critical findings | 85% | 70% |
| Holdout test set, equivocal critical findings | 94.3% | 74.3% |
| External test set, true critical findings | 98.3% | 93.1% |
| External test set, known/expected critical findings | 71.4% | 92.9% |
| External test set, equivocal critical findings | 85% | 80% |
“GPT-4, when optimized with just a small number of in-context examples, may offer new capabilities compared to prior approaches in terms of nuanced context-dependent classifications,” Talati and colleagues wrote. “This capability is crucial in radiology, where identification of findings warranting referring clinician alerts requires differentiation of whether the finding is new or already known.”
Though promising, the approach needs further refinement and technical development before clinical implementation and real-world use, the group noted. The authors also highlighted a role for electronic health record (EHR) integration to inform more nuanced categorization in future implementations.
AI Insights
The AI Trade Picks Up Steam After Oracle’s ‘Truly Historic’ Quarter

Key Takeaways
- Cloud computing and software provider Oracle on Tuesday reported its backlog grew to $455 billion last quarter, a 359% increase from the prior year.
- The company’s pipeline signaled artificial intelligence spending should remain strong for several years, sending Oracle shares sharply higher and boosting the majority of AI stocks.
- Wall Street analysts expressed shock on Oracle’s earnings call Tuesday, with one calling the company’s growth forecasts “truly historic.”
The artificial intelligence trade got a shot of adrenaline on Wednesday after results from database software and cloud provider Oracle suggested the AI spending bonanza has ample room to run.
Oracle (ORCL) on Tuesday reported its backlog swelled to $455 billion, a 359% year-over-year increase, after it signed four multibillion-dollar cloud deals in the first quarter of its 2026 fiscal year. Executives said the backlog is expected to surpass half a trillion dollars as Oracle inks more big deals in the coming months.
Oracle also forecast cloud revenue would grow from an estimated $18 billion this fiscal year to $144 billion in 2030, about $50 billion more than Wall Street had forecast. Oracle said most of that revenue forecast was already reflected in its backlog, giving some investors greater confidence in the numbers. Meanwhile, the Wall Street Journal reported Wednesday that Oracle had signed a five-year contract worth $300 billion with ChatGPT creator OpenAI.
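For context, a back-of-envelope calculation shows the annual growth rate that forecast implies, assuming the figure rises from $18 billion in fiscal 2026 to $144 billion in fiscal 2030, i.e., over four fiscal years:

```python
# Back-of-envelope check on the growth rate implied by Oracle's cloud
# revenue forecast; the four-year horizon is an assumption.
start, end, years = 18e9, 144e9, 4
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 68% per year
```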
Oracle’s projections completely overshadowed lackluster first-quarter results and sent its shares soaring as much as 43% on Wednesday.
The rising tide of robust AI spending was lifting plenty of boats on Wednesday. Shares of AI chip giants Nvidia (NVDA) and Broadcom (AVGO) were recently up more than 4% and 9%, respectively, while chip design company Arm Holdings (ARM) surged more than 8%. The PHLX Semiconductor Index (SOX) was up about 2%. Data center infrastructure provider Vertiv Holdings (VRT) jumped about 9%, while power generators Vistra (VST) and Constellation Energy (CEG) advanced 8% and 6%, respectively.
Oracle’s major cloud competitors were the only drag on the AI trade on Wednesday. Amazon (AMZN) declined more than 3%, while Meta Platforms (META) dropped nearly 2%. Alphabet (GOOG) and Microsoft (MSFT) ticked higher.
Wall Street Hails ‘Momentous’ Quarter
Wall Street’s ebullience over the results was first visible on Oracle’s earnings call Tuesday night.
“Even I’m sort of blown away by what this looks like going forward,” said Guggenheim analyst John DiFucci at the top of the question and answer portion of the call. “I tell my team, ‘Pay attention to this’—even those that are not working on Oracle—‘because this is a career event happening right now,’” DiFucci added.
“There’s no better evidence of a seismic shift happening in computing than these results that you just put up,” said Deutsche Bank analyst Brad Zelnick. Others called the quarter “momentous” and the backlog growth “truly historic.”
AI Demand, Investments Expected To Remain Strong
AI investments have been driven by what many have characterized as insatiable demand for training and inference, and Oracle’s results appeared to support that assessment.
On Oracle’s earnings call, co-founder and chair Larry Ellison said an unnamed company had requested all of Oracle’s available inferencing capacity. “I’d never gotten a call like that,” Ellison said.
Big Tech’s investment in capacity to meet that demand is expected to remain robust in the coming years, supported by healthy cash flows at the biggest cloud providers and supportive tax incentives.
Cloud providers like Microsoft, Alphabet and Amazon have been key drivers of the AI infrastructure trade in recent years. Hyperscalers are expected to spend a cumulative $368 billion on infrastructure this year, with much of that earmarked for data centers and the chips and servers that fill them, according to Goldman Sachs.
Oracle on Tuesday forecast capital expenditures of $35 billion in the 12 months through May 2026, about $10 billion more than the figure executives gave as a minimum last quarter.
Tax incentives written into the recently passed One Big Beautiful Bill Act should also support AI infrastructure investment. Morgan Stanley expects the bill’s immediate capital investment depreciation provisions to boost Big Tech’s free cash flows by nearly $50 billion this year. The firm expects a sizable portion of those tax savings to be spent on AI infrastructure.
AI Insights
From Language Sovereignty to Ecological Stewardship – Intercontinental Cry

Last Updated on September 10, 2025
Artificial intelligence is often framed as a frontier that belongs to Silicon Valley, Beijing, or the halls of elite universities. Yet across the globe, Indigenous peoples are shaping AI in ways that reflect their own histories, values, and aspirations. These efforts are not simply about catching up with the latest technological wave—they are about protecting languages, reclaiming data sovereignty, and aligning computation with responsibilities to land and community.
From India’s tribal regions to the Māori homelands of Aotearoa New Zealand, Indigenous-led AI initiatives are emerging as powerful acts of cultural resilience and political assertion. They remind us that intelligence—whether artificial or human—must be grounded in relationship, reciprocity, and respect.
Giving Tribal Languages a Digital Voice
Just this week, researchers at IIIT Hyderabad, alongside IIT Delhi, BITS Pilani, and IIIT Naya Raipur, launched Adi Vaani, a suite of AI-powered tools designed for tribal languages such as Santali, Mundari, and Bhili.
At the heart of the project is a simple premise that technology should serve the people who need it most. Adi Vaani offers text-to-speech, translation, and optical character recognition (OCR) systems that allow speakers of marginalized languages to access education, healthcare, and public services in their mother tongues.
One of the project’s most promising outputs is a Gondi translator app that enables real-time communication between Gondi, Hindi, and English. For the nearly three million Gondi speakers who have long been excluded from India’s digital ecosystem, this tool is nothing less than transformative.
Speaking about the value of the app, research scholar Gopesh Kumar Bharti commented, “Like many tribal languages, Gondi faces several challenges due to its lack of representation in the official schedule, which hampers its preservation and development. The aim is to preserve and restore the Gondi language so that the next generation understands its cultural and historical significance.”
Latin America’s Open-Source Revolution
In Latin America, a similar wave of innovation is underway. Earlier this year, researchers at the Chilean National Center for Artificial Intelligence (CENIA) unveiled Latam-GPT, a free and open-source large language model trained not only on Spanish and Portuguese, but also incorporating Indigenous languages such as Mapuche, Rapanui, Guaraní, Nahuatl, and Quechua.
Unlike commercial AI systems that extract and commodify, Latam-GPT was designed with sovereignty and accessibility in mind.
To be successful, Latam-GPT needs to ensure the participation of “Indigenous peoples, migrant communities, and other historically marginalized groups in the model’s validation,” said Varinka Farren, chief executive officer of Hub APTA.
But as with most good things, it’s going to take time. Rodrigo Durán, CENIA’s general manager, told Rest of World that the effort will likely take at least a decade.
Māori Data Sovereignty: “Our Language, Our Algorithms”
Half a world away, the Māori broadcasting collective Te Hiku Media has become a global leader in Indigenous AI. In 2021, the organization released an automatic speech recognition (ASR) model for Te Reo Māori with an accuracy rate of 92%—outperforming international tech giants.
Their achievement was not the result of corporate investment or vast computing power, but of decades of community-led language revitalization. By combining archival recordings with new contributions from fluent speakers, Te Hiku demonstrated that Indigenous peoples can own not only their languages but also the algorithms that process them.
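For context, ASR accuracy of 92% corresponds roughly to a word error rate (WER) of 8%. The sketch below shows the standard edit-distance WER computation such evaluations generally rely on; it is the generic metric, not Te Hiku Media's evaluation code, and the Māori phrases are placeholder examples.

```python
# Standard word error rate (WER): Levenshtein distance over words,
# normalized by reference length. Generic metric, not Te Hiku's code.
def wer(reference: str, hypothesis: str) -> float:
    r, h = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between first i ref words and first j hyp words
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[len(r)][len(h)] / len(r)

print(wer("tēnā koe e hoa", "tēnā koe hoa"))  # one deletion -> 0.25
```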
As co-director Peter-Lucas Jones explained: “In the digital world, data is like land. If we do not have control, governance, and ongoing guardianship of our data as indigenous people, we will be landless in the digital world, too.”
Indigenous Leadership at UNESCO
On the global policy front, leadership is also shifting. Earlier this year, UNESCO appointed Dr. Sonjharia Minz, an Oraon computer scientist from India’s Jharkhand state, as co-chair of the Indigenous Knowledge Research Governance and Rematriation program.
Her mandate is ambitious: to guide the development of AI-based systems that can securely store, share, and repatriate Indigenous cultural heritage. For communities who have seen their songs, rituals, and even sacred objects stolen and digitized without consent, this initiative signals a long-overdue turn toward justice.
As Dr. Minz told The Times of India, “We are on the brink of losing indigenous languages around the world. Indigenous languages are more than mere communication tools. They are repository of culture, knowledge and knowledge system. They are awaiting urgent attention for revitalization.”
AI and Environmental Co-Stewardship
Artificial intelligence is also being harnessed to care for the land and waters that sustain Indigenous peoples. In the Arctic, communities are blending traditional ecological knowledge with AI-driven satellite monitoring to guide adaptive mariculture practices—helping to ensure that changing seas still provide food for generations to come.
In the Pacific Northwest, Indigenous nations are deploying AI-powered sonar and video systems to monitor salmon runs, an effort vital not only to ecosystems but to cultural survival. Unlike conventional “black box” AI, these systems are validated by Indigenous experts, ensuring that machine predictions remain accountable to local governance and ecological ethics.
Such projects remind us that AI need not be extractive. It can be used to strengthen stewardship practices that have protected biodiversity for millennia.
The Hidden Toll of AI’s Appetite
As Indigenous communities lead the charge toward ethical and ecologically grounded AI, we must also confront the environmental realities underpinning the technology—especially the vast energy and water demands of large language models.
In Chile, the rapid proliferation of data centers—driven partly by AI demands—has sparked fierce opposition. Activists argue that facilities run by tech giants like Amazon, Google, and Microsoft exacerbate water scarcity in drought-stricken regions. As one local put it, “It’s turned into extractivism … We end up being everybody’s backyard.”
The energy hunger of LLMs compounds this strain further. According to researchers at MIT, training clusters for generative AI consume seven to eight times more energy than typical computing workloads, accelerating energy demands just as renewable capacity lags behind.
Globally, data centers consumed a staggering 460 terawatt-hours in 2022—comparable to the annual electricity use of entire countries such as France—and are projected to reach 1,050 TWh by 2026, which would place data centers among the top five global electricity users.
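A back-of-envelope calculation shows the annual growth rate those figures imply, assuming the rise from 460 TWh in 2022 to a projected 1,050 TWh in 2026 happens over four years:

```python
# Back-of-envelope: annual growth implied by the data center electricity
# figures cited above; the four-year horizon is an assumption.
start_twh, end_twh, years = 460, 1050, 4
growth = (end_twh / start_twh) ** (1 / years) - 1
print(f"Implied annual growth: {growth:.1%}")  # about 23% per year
```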
LLMs aren’t just energy-intensive; their environmental footprint extends across their whole lifecycle. New modeling shows that inference—the use of pre-trained models—now accounts for more than half of total emissions. Meanwhile, Google’s own reporting suggests that AI operations have driven its greenhouse gas emissions up by roughly 48% over five years.
Communities hosting data centers often face additional local challenges as well.
This environmental reckoning matters deeply to Indigenous-led AI initiatives—because AI should not replicate colonial patterns of extraction and dispossession. Instead, it must align with ecological reciprocity, sustainability, and respect for all forms of life.
Rethinking Intelligence
Together, these Indigenous-led initiatives compel us to rethink both what counts as intelligence and where AI should be heading. In the mainstream tech industry, intelligence is measured by processing power, speed, and predictive accuracy. But for Indigenous nations, intelligence is relational: it lives in languages that carry ancestral memory and in stories that guide communities toward balance and responsibility.
When these values shape artificial intelligence, the results look radically different from today’s extractive systems. AI becomes a tool for reciprocity instead of extraction. In other words, it becomes less about dominating the future and more about sustaining the conditions for life itself.
This vision matters because the current trajectory of AI cannot be sustained: an arms race of ever-larger models, resource-hungry data centers, and escalating ecological costs.
The challenge is no longer technical but political and ethical. Will governments, institutions, and corporations make space for Indigenous leadership to shape AI’s future? Or will they repeat the same old colonial logics of extraction and exclusion? Time will tell.