AI Research

Enterprises Confront the Real Price Tag of AI Deployment


The rush to integrate artificial intelligence (AI) into enterprise operations is colliding with a complex and sometimes underestimated reality: Deploying AI at scale can be pricey, and the true cost can extend far beyond the per-million-token rates on vendor websites.

According to recent PYMNTS Intelligence data, the cost of deploying AI is the second-biggest drawback of generative AI adoption, cited as a concern by 46.7% of respondents and trailing only integration complexity.

On paper, the cost of using today’s generative models is falling based on what AI companies are charging.

For example, OpenAI’s GPT-4 with an 8K context window cost $30 per million input tokens and $60 per million output tokens as of early 2023. GPT-4 Turbo, though more powerful, now costs 50% to 67% less: $10 per million input tokens and $30 per million output tokens.
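To see what those per-token rates mean for a single request, the cost is simply the token counts divided by one million, multiplied by the listed prices. The snippet below is an illustrative calculation using the figures quoted above; the request sizes are made up for the example, and real prices vary by vendor and change over time:

```python
# Illustrative cost comparison using the list prices quoted in the article.
# Prices are in dollars per million tokens; request sizes are hypothetical.

def request_cost(input_tokens, output_tokens, in_price, out_price):
    """Cost in dollars for one request, given per-million-token prices."""
    return (input_tokens / 1e6) * in_price + (output_tokens / 1e6) * out_price

# A request with 2,000 input tokens and 500 output tokens:
gpt4_2023 = request_cost(2_000, 500, in_price=30, out_price=60)   # $0.09
gpt4_turbo = request_cost(2_000, 500, in_price=10, out_price=30)  # $0.035
```

At these volumes the difference looks trivial, which is why per-call pricing understates total cost of ownership: the gap only becomes material at millions of requests per month.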

According to Stanford’s 2025 Artificial Intelligence Index report, as AI models become smaller and more capable, the costs of applying them to use cases (inference) “have fallen anywhere from nine to 900 times per year.”

When it comes to infrastructure, costs have declined by 30% annually, while energy efficiency has improved by 40% each year, according to the Stanford report. Moreover, open-weight models that are free to use are closing the gap with closed models in performance.

But these headline numbers tell only part of the story.

Although the cost of the models has dropped since 2022, the overall cost of ownership “has been resistant to declines,” said Muath Juady, founder of SearchQ.AI. “The real expenses lie in the hidden infrastructure, including data engineering teams, security compliance, constant model monitoring, and integration architects necessary to connect AI with existing systems.”

For every dollar spent on AI models, businesses are spending $5 to $10 to make the models “production-ready and enterprise-compliant,” Juady told PYMNTS. “The integration challenges tend to be more expensive than the technology itself and require substantial investment in change management and process redesign, which many organizations underestimate.”

Moreover, the cost of AI deployment “is not a one-time expense but an ongoing operational commitment,” Juady added.

So why is AI adoption soaring? According to Juady, “businesses that are successfully adopting AI are not waiting for costs to drop further; they are identifying specific use cases where even current costs can provide a measurable ROI.”

Read also: High Impact, Big Reward: Meet the GenAI-Focused CFO

Self-Hosting Can Lower Costs

For many enterprises, early decisions, such as whether to self-host, use the cloud or use third-party infrastructure, can dictate as much as 40% of AI expenses, said Pavel Bantsevich, project manager and solutions advisor at Pynest. Cloud-based hosting may be ideal for prototypes, but costs can spike as workloads scale.

Bantsevich said he worked with a U.S. construction company that’s been in business for a century to develop an AI predictive analytics tool, initially hosted in the cloud. Infrastructure costs came to under $200 a month. But once the tool went live and people started using it, costs soared to around $10,000 a month. Switching from the cloud to self-hosting with Meta’s open-weight Llama model lowered the cost to about $7,000 a month, and it has remained under control since.

In another case, a European retailer client of Bantsevich’s with more than 50,000 employees wanted to implement a computer vision module for self-checkout machines. But the company didn’t want to use the cloud. It self-hosted instead using a small Llama AI model that performed well. Costs came to less than $10 a month per machine. “If a cloud solution had been selected, the numbers would have gone sky high,” he said.

Bantsevich believes that costs will continue to decline because datasets are more readily available today and cloud providers also have cut rates to retain customers. “It is likely we shall see AI costs be similar to electricity bills in the near future,” he predicted.

Meanwhile, Bill Chief Financial Officer Rohini Jain advised businesses to take advantage of AI that is already embedded in the platforms they use, such as those for invoicing, payments or forecasting, rather than adding standalone tools with “uncertain” pricing. “Integrated solutions typically offer better ROI and more predictable costs, such as subscription pricing,” she said.

Fergal Glynn, CMO and AI security advocate at Mindgard, said deploying AI can cost as little as $10,000 for basic projects, while large-scale enterprise systems can run into millions of dollars. Most companies spend between $50,000 and $500,000 for practical use cases like analytics tools or chatbots; smaller firms often pay less by using off-the-shelf AI.

Nicole DiNicola, global vice president of marketing at Smartcat, told PYMNTS that adopting AI doesn’t have to be “all or nothing.”

“Many platforms, including free or low-cost options, make it easy for organizations to start small and scale their adoption over time,” DiNicola said. “Unlike legacy SaaS, which often requires lengthy onboarding, upfront costs, and full-scale deployment to show value, AI can deliver meaningful impact without being fully integrated organization-wide.”

DiNicola pointed to teams embedding AI into workflows and already gaining efficiencies and cost savings. “AI tends to compound in value, but even small-scale adoption can drive clear and measurable improvements.”

A worse outcome would be letting the cost and complexity of AI scare a business into avoiding AI deployment in the first place.

“Inaction is often the more expensive path, even if it’s less obvious upfront,” DiNicola added. “While that delay might feel safe, early adopters are already building momentum, improving processes, learning faster, and expanding their competitive advantage.”

Read more:

How to Choose Between Deploying an AI Chatbot or Agent

Small Business, Big AI: How SMBs Are Leveling the Playing Field With Enterprise Giants

AI in Accounting Services May Level Playing Field for Small Businesses




Intelligence is not artificial | The Catholic Register


On our Comment pages, Sr. Helena Burns issues a robust call for a return to “old school” means of acquiring, developing and retaining knowledge in the age of AI.

Traditionalist though she might be in many ways, however, Sr. Burns’ appeal is not simply to revive the alliterative formula of Readin’, Writin’ and Arithmetic. Rather, she urges a return to the lost arts of using libraries, taking notes, listening to wiser heads, and above all using our own brains rather than relying on the ghost in the machine to explain the world.

“We can rebuild a talking, thinking, literate, memorizing culture. But it’s a slow build. It always was, always will be, and it starts when you’re a kiddo. Children in school are now saying they don’t want to learn how to read and write because computers will do it for them. They don’t know that they’re surrendering their humanity,” she writes.


The good news is that the much-rumoured surrender seems to be much further off than predicted in the recent frenzy over ChatGPT and its cohorts purportedly being thisclose to taking over the world and doing everything from producing perfect sour grapes to writing editorials. 

In fact, recent reports, particularly in the financial press, suggest AI-mania is already plateauing, if not hitting a downward curve. That doesn’t mean it won’t still cause significant disruption in workplaces or in how we navigate the storm-tossed seas of daily life. It doesn’t mean we can simply shrug off the statistic Sr. Burns cites of a reported 47 per cent decline in neural engagement among those who relied on artificial intelligence to help complete an essay versus those who got ink under their fingernails.

But as technology journalist Asa Fitch reported last week, Meta Platforms has delayed the rollout of its next AI iteration, Llama 4 Behemoth, because of engineering failures to significantly improve on the previous model. OpenAI, meanwhile, overhyped its follow-up, ChatGPT 5, and saw it effectively flatline in the market.

Business leaders, already sceptical of security and privacy concerns with AI, have hardly been reassured by the “tendency of even the best AI models to occasionally hallucinate wrong answers,” Fitch writes.

More critically, many businesses looking at the allure of AI don’t yet know, in very practical terms, what it can do for their particular sector. We tend to forget that from the “future is now” advent of the Internet, it took the better part of a decade before society began to appreciate its ubiquitous uses.


University of California, San Diego psychology professor Cory Miller points out there are even more formidable barriers to broad AI adoption. Not the least of such obstacles are the requirements for, as Miller says, “enormous hardware, constant access to vast training data, and unsustainable amounts of electrical power” (emphasis added).

How unsustainable? A human brain, Miller writes, “runs on 20 watts of power – less than a lightbulb.”

AI by contrast?

“To match the computational power of a single human brain, a leading AI system would require the same amount of energy that powers the entire city of Dallas. Let that sink in for a second. One lightbulb versus a city of 1.3 million people,” he says. 

The comparison is arithmetically sobering. It’s also ultimately a hallelujah chorus to the glory of creation that is humankind. We exist in a culture awash – perversely and pridefully, it often seems – in self-underestimation and outright denigration. Oh, to deploy Hamlet’s immortal phrase, what a piece of work is man.

Without question, evil lurks in our darker corners and threatens to beset our best and brightest achievements. But achieve we do as we collectively engage the unique phenomenal 20-watt light bulb brains that are the universal gift from God, our Sovereign Lord and Creator.

In another column in our Comment section, Mary Marrocco illuminates the dynamic of that gift and that engagement, quoting St. Athanasius’ observation that “when we forgot to look up to God, God came down to the low place we’d fixed our gaze on.”


The outcome was the glorious rise of our Holy Mother the Church, whose cycle of liturgical years, year after year, reminds us of who we are, what we are, and to whom we truly belong.

There is not a shred of artificiality in the intelligence of the resulting library (biblio) of the Bible’s books, its Gospels, its Good News. There is only God’s Word, the most extraordinary conversation any child, any human being, could ever be invited to learn from.

A version of this story appeared in the August 31, 2025, issue of The Catholic Register with the headline “Intelligence is not artificial.”




Has artificial intelligence finally passed the Will Smith spaghetti test? – Sky News









AI as a Researcher: First Peer-Reviewed Research Paper Written Without Humans



Artificial intelligence has crossed another significant milestone that challenges our understanding of what machines can achieve independently. For the first time in scientific history, an AI system has written a complete research paper that passed peer review at an academic conference without any human assistance in the writing process. This breakthrough could mark a fundamental shift in how scientific research is conducted in the future.

Historic Achievement

A paper produced by The AI Scientist-v2 passed the peer-review process at a workshop of ICLR 2025, one of the most prestigious venues in machine learning. The paper was generated by an improved version of the original AI Scientist, called The AI Scientist-v2.

The accepted paper, titled “Compositional Regularization: Unexpected Obstacles in Enhancing Neural Network Generalization,” received impressive scores from human reviewers. Of the three papers submitted for review, one received ratings that placed it above the acceptance threshold. This breakthrough is a significant advancement as AI can now participate in the fundamental process of scientific discovery that has been exclusively human for centuries.

The research team from Sakana AI, working with collaborators from the University of British Columbia and the University of Oxford, conducted this experiment. They received institutional review board approval and worked directly with ICLR conference organizers to ensure the experiment followed proper scientific protocols.

How The AI Scientist-v2 Works

The AI Scientist-v2 has achieved this success due to several major advancements over its predecessor. Unlike its predecessor, AI Scientist-v2 eliminates the need for human-authored code templates, can work across diverse machine learning domains, and employs a tree-search methodology to explore multiple research paths simultaneously.

The system operates through an end-to-end process that mirrors how human researchers work. It begins by formulating scientific hypotheses based on the research domain it is assigned to explore. The AI then designs experiments to test these hypotheses, writes the necessary code to conduct the experiments, and executes them automatically.

What makes this system particularly advanced is its use of agentic tree search methodology. This approach allows the AI to explore multiple research directions simultaneously, much like how human researchers might consider various approaches to solving a problem. This involves running experiments via agentic tree search, analyzing results, and generating a paper draft. A dedicated experiment manager agent coordinates this entire process to ensure that the research remains focused and productive.
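As a rough illustration of the pattern described above (not Sakana AI’s actual implementation), an agentic tree search can be sketched as a best-first search that keeps a frontier of candidate research directions and always expands the most promising one. The `score` and `expand` functions below are toy stand-ins for the system’s experiment-running and proposal agents:

```python
import heapq

# Simplified best-first search over research hypotheses.
# score() and expand() are hypothetical placeholders for agents that
# run experiments and propose refinements of a hypothesis.

def score(hypothesis):
    """Stand-in for running an experiment and rating the result (higher is better)."""
    return -len(hypothesis)  # toy heuristic for illustration only

def expand(hypothesis):
    """Stand-in for an agent proposing refinements of a hypothesis."""
    return [hypothesis + " (variant A)", hypothesis + " (variant B)"]

def tree_search(root_hypotheses, budget=10):
    # Min-heap on negated score: the most promising direction pops first.
    frontier = [(-score(h), h) for h in root_hypotheses]
    heapq.heapify(frontier)
    explored = []
    while frontier and len(explored) < budget:
        _, hyp = heapq.heappop(frontier)
        explored.append(hyp)          # "run" this experiment
        for child in expand(hyp):     # propose follow-up directions
            heapq.heappush(frontier, (-score(child), child))
    return explored

paths = tree_search(["regularize composition", "augment data"], budget=4)
```

The key design point the article describes is the priority queue: computational budget flows to whichever branch currently looks most promising, rather than being spent depth-first on a single fixed research plan.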

The system also includes an enhanced AI reviewer component that uses vision-language models to provide feedback on both the content and visual presentation of research findings. This creates an iterative refinement process where the AI can improve its own work based on feedback, similar to how human researchers refine their manuscripts based on colleague input.
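That review-and-refine cycle can be pictured as a simple draft-score-revise loop that stops when the reviewer is satisfied or a revision budget runs out. The sketch below is a generic illustration of the pattern; `review` and `revise` are hypothetical stand-ins for the system’s reviewer and writing agents, not its real interfaces:

```python
def review(draft):
    """Stand-in for the AI reviewer: returns a quality score and feedback."""
    score = min(10, len(draft.split("revision")))  # toy scoring for illustration
    return score, "add another revision pass"

def revise(draft, feedback):
    """Stand-in for the writing agent applying reviewer feedback."""
    return draft + " revision"

def refine(draft, target_score=3, max_rounds=5):
    # Iteratively improve the draft until the reviewer is satisfied
    # or the revision budget is exhausted.
    for _ in range(max_rounds):
        score, feedback = review(draft)
        if score >= target_score:
            break
        draft = revise(draft, feedback)
    return draft
```

The budget cap matters: without `max_rounds`, a reviewer that never reaches the target score would loop forever, which is the same reason such pipelines bound their refinement passes in practice.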

What Made This Research Paper Special

The accepted paper focused on a challenging problem in machine learning called compositional generalization. This refers to the ability of neural networks to understand and apply learned concepts in new combinations they have never seen before. The AI Scientist-v2 investigated novel regularization methods that might improve this capability.

Interestingly, the paper also reported negative results. The AI discovered that certain approaches it hypothesized would improve neural network performance actually created unexpected obstacles. In science, negative results are valuable because they prevent other researchers from pursuing unproductive paths and contribute to our understanding of what does not work.

The research followed rigorous scientific standards throughout the process. The AI Scientist-v2 conducted multiple experimental runs to ensure statistical validity, created clear visualizations of its findings, and properly cited relevant previous work. It formatted the entire manuscript according to academic standards and wrote comprehensive discussions of its methodology and findings.

The human researchers who supervised the project conducted their own thorough review of all three generated papers. They found that while the accepted paper was of workshop quality, it contained some technical issues that would prevent acceptance at the main conference track. This honest assessment demonstrates the current limitations while acknowledging the significant progress achieved.

Technical Capabilities and Improvements

The AI Scientist-v2 demonstrates several remarkable technical capabilities that distinguish it from previous automated research systems. The system can work across diverse machine learning domains without requiring pre-written code templates. This flexibility means it can adapt to new research areas and generate original experimental approaches rather than following predetermined patterns.

The tree search methodology is a significant innovation in AI research automation. Rather than pursuing a single research direction, the system can maintain multiple hypotheses simultaneously and allocate computational resources based on the promise each direction shows. This approach mirrors how experienced human researchers often maintain several research threads while focusing most effort on the most promising avenues.

Another crucial improvement is the integration of vision-language models for reviewing and refining the visual elements of research papers. Scientific figures and visualizations are critical for communicating research findings effectively. The AI can now evaluate and improve its own data visualizations iteratively.

The system also demonstrates understanding of scientific writing conventions. It properly structures papers with appropriate sections, maintains consistent terminology throughout manuscripts, and creates logical flow between different parts of the research narrative. The AI shows awareness of how to present methodology, discuss limitations, and contextualize findings within existing literature.

Current Limitations and Challenges

Despite this historic achievement, several important limitations restrict the current capabilities of AI-generated research. Sakana AI said that none of its AI-generated studies passed its internal bar for ICLR conference-track publication standards. This indicates that while the AI can produce workshop-quality research, reaching the highest tiers of scientific publication remains challenging.

The acceptance rates provide important context for evaluating this achievement. The paper was accepted at a workshop track, which typically has less strict standards than the main conference: workshop tracks accept roughly 60-70% of submissions, versus the 20-30% typical of main conference tracks. While this does not diminish the significance of the achievement, it suggests that producing truly groundbreaking research remains beyond current AI capabilities.

The AI Scientist-v2 also demonstrated some weaknesses that human researchers identified during their review process. The system occasionally made citation errors, attributing research findings to incorrect authors or publications. It also struggled with some aspects of experimental design that human experts would have approached differently.

Perhaps most importantly, the AI-generated research focused on incremental improvements rather than paradigm-shifting discoveries. The system appears more capable of conducting thorough investigations within established research frameworks than of proposing entirely new ways of thinking about scientific problems.

The Road Ahead

The successful peer review of AI-generated research marks the beginning of a new era in scientific research. As foundation models continue improving, we can expect The AI Scientist and similar systems to produce increasingly sophisticated research that approaches and potentially exceeds human capabilities in many domains.

The research team anticipates that future versions will be capable of producing papers worthy of acceptance at top-tier conferences and journals. The logical progression suggests that AI systems may eventually contribute to breakthrough discoveries in fields ranging from medicine to physics to chemistry.

This development also raises important questions about research ethics and publication standards. The scientific community must develop new norms for handling AI-generated research, including when and how to disclose AI involvement and how to evaluate such work alongside human-generated research.

The transparency demonstrated by the research team in this experiment provides a valuable model for future AI research evaluation. By working openly with conference organizers and subjecting their AI-generated work to the same standards as human research, they have established important precedents for the responsible development of automated research capabilities.

The Bottom Line

The acceptance of an AI-written paper at a leading machine learning workshop is a significant advancement in AI capabilities. While the work is not yet at the level of a top-tier conference, it demonstrates a clear trajectory toward AI systems becoming serious contributors to scientific discovery. The challenge now lies not only in advancing technology but also in shaping the ethical and academic frameworks that will govern this new frontier of research.


