
Rushing into genAI? Prepare for budget blowouts and broken promises – Computerworld

“What you’re going to find on the input and output tokens is a dramatic difference in price,” Suda said. “The build cost is quite low to get into the game, but once you begin using it, the costs go up. Say you have 400 users to start. By year four, you may have 2,000 users.”

“So, what happens? You’re consuming more, and your costs go up four times by year four,” he said.

“GenAI is not like Google, but some organizations use it like Google — you go into it and ask a question and get an answer. That doesn’t really happen,” Suda continued. “You get an answer and often think, ‘That’s not quite what I wanted.’ And so that makes them want to ask another question. That can multiply your cost quickly.”
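To make that compounding concrete, here is a minimal back-of-the-envelope sketch of usage-based genAI spend. Only the 400-to-2,000-user growth comes from Suda’s example; the per-token prices, query volumes and follow-up multiplier are illustrative assumptions, not vendor figures.

```python
# Back-of-the-envelope genAI usage cost model. Build cost aside, per-token
# usage compounds with user growth and with repeated follow-up prompts.
# All prices and volumes below are illustrative assumptions.

def annual_usage_cost(users, queries_per_user_per_day,
                      in_tokens=500, out_tokens=700,               # assumed tokens per prompt/response
                      price_in_per_m=3.00, price_out_per_m=15.00,  # assumed $/million tokens
                      followup_factor=1.8, workdays=250):
    """Estimated yearly spend in dollars.

    followup_factor models the "that's not quite what I wanted" effect:
    each logical question becomes ~1.8 actual prompts. Output tokens are
    assumed to cost ~5x input tokens, a typical per-token price gap.
    """
    prompts = users * queries_per_user_per_day * workdays * followup_factor
    cost_in = prompts * in_tokens / 1_000_000 * price_in_per_m
    cost_out = prompts * out_tokens / 1_000_000 * price_out_per_m
    return cost_in + cost_out

# Suda's example: 400 users at launch, 2,000 by year four.
print(f"Year 1: ${annual_usage_cost(400, 10):,.0f}")   # ~ $21,600
print(f"Year 4: ${annual_usage_cost(2000, 10):,.0f}")  # ~ $108,000
```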

The hidden costs that can add up

Thermo Fisher Scientific’s Kwiecien said one cost that often isn’t considered involves testing. “Every time you ask a question and test it, that’s a cost,” she said. “I’m not just going to load that 500 times, because that will cost me every time.

“We need to test how often AI gives good answers to common questions like ‘What’s the recruiting process?’ or ‘Where’s my 401(k) info?’” she said. “But each test costs money, so we have to balance accuracy with cost and decide how many times to test to be confident in the results.”
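That trade-off lends itself to simple arithmetic. As a hedged sketch, the code below uses the standard binomial sample-size formula to estimate how many paid test prompts it takes to pin down a chatbot’s accuracy on a question within a given margin; the per-test cost is an illustrative assumption.

```python
# Cost-versus-confidence sketch for chatbot accuracy testing. The per-call
# cost is an illustrative assumption; the sample-size formula is the
# standard normal approximation for a binomial proportion.
import math

def tests_needed(margin, confidence_z=1.96, worst_case_p=0.5):
    """Test runs needed to estimate answer accuracy within +/- margin
    at ~95% confidence, assuming the worst case (accuracy near 50%)."""
    return math.ceil(confidence_z ** 2 * worst_case_p * (1 - worst_case_p) / margin ** 2)

COST_PER_TEST = 0.02  # assumed dollars per test prompt (input + output tokens)

for margin in (0.10, 0.05, 0.02):
    n = tests_needed(margin)
    print(f"+/-{margin:.0%} margin: {n:>5} tests, ~${n * COST_PER_TEST:,.2f} per question")
```

Tightening the margin from plus-or-minus 10% to 2% multiplies the number of paid test calls, and the bill, by roughly 25.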

Thermo Fisher currently uses a virtual chatbot from ServiceNow and hopes to make it more intuitive by adding a genAI layer; to that end, it’s eyeing genAI solutions from Microsoft, IBM and others.

Another cost can come with efforts to use genAI in hiring. Amy Ritter, vice president for Talent Acquisition at Thermo Fisher, said the company implemented a genAI-powered hiring app from Phenom to automate parts of its global manufacturing hiring platform. The company then had to invest in job preview videos to show candidates what it’s like to work at Thermo Fisher — covering the environment, required PPE, and key skills — since recruiters weren’t involved early in the process.

The cost of change management is also often overlooked, Ritter said. “We invested time and money visiting sites, engaging leaders, and building buy-in, which paid off with strong adoption at launch,” she said.

Injecting Phenom’s genAI into its HR hiring platform, however, netted big returns, Ritter said. It cut candidate screening time from 16 days to just 7 minutes. Along with automated interview scheduling, that saves Thermo Fisher more than 8,000 hours a year in candidate screening and 12,000 hours in scheduling, while filling roles 10% faster.

And there are infrastructure costs — the cost of building out, running and maintaining server farms, including managed services, is often underestimated, according to AWS’s Hennesey. “One insurance customer had 200 [proofs of concept] running, but couldn’t articulate the expected value — most were just experiments. Our advice: clearly define the problem, align it with organizational goals, and measure expected returns,” he said.

Moving from pilot to production can also be a soft spot for costs, as can shifting from on-prem to the cloud; the latter means new services and pricing models that need to be understood and forecast.

AWS’s Bedrock, Microsoft’s Azure AI Studio, Google Cloud’s Vertex AI, IBM’s watsonx.ai and Cohere’s Platform are all fully managed service offerings that allow AI developers to build apps using top foundation models via a single API, with no infrastructure management needed. “You pay on a per model, on a per region basis,” Hennesey said. “And then you have to think about tokens.”
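As one illustration of that single-API pattern, the sketch below calls a Bedrock-hosted foundation model through boto3’s Converse API; the region, model ID and prompt are placeholder assumptions, not details from the article, and the response’s usage block is what ultimately drives the token bill.

```python
# Minimal sketch of a "single API" call to a managed foundation model,
# here via Amazon Bedrock's Converse API (boto3). Region, model ID and
# prompt are placeholders; swapping models changes per-token pricing.
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # any Bedrock model ID
    messages=[{"role": "user", "content": [{"text": "Summarize our PTO policy."}]}],
    inferenceConfig={"maxTokens": 512},
)

print(response["output"]["message"]["content"][0]["text"])

# "Then you have to think about tokens" -- usage is what gets billed:
usage = response["usage"]
print(f"{usage['inputTokens']} input / {usage['outputTokens']} output tokens")
```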

Making a “capacity commitment” to a vendor can cut costs. Instead of buying capacity “on demand,” organizations can commit to LLM capacity for a set period – whether one month or six months – for savings of up to 60%, Hennesey said.
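The arithmetic of that trade is simple; in the hedged sketch below, the on-demand monthly spend is an illustrative assumption, and only the up-to-60% discount comes from Hennesey.

```python
# Illustrative on-demand vs. capacity-commitment comparison. The baseline
# spend is an assumption; the 60% figure is Hennesey's upper bound.
ON_DEMAND_MONTHLY = 20_000   # assumed $/month at on-demand token rates
DISCOUNT = 0.60              # "up to a 60% savings"

committed = ON_DEMAND_MONTHLY * (1 - DISCOUNT)
print(f"On demand: ${ON_DEMAND_MONTHLY:,}/month")
print(f"Committed: ${committed:,.0f}/month "
      f"(${(ON_DEMAND_MONTHLY - committed) * 6:,.0f} saved over a six-month term)")
```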

The bottom line: there’s still a lot of uncertainty around the cost of genAI projects because the technology is still in its early days — and still evolving.

“I feel like we’re not getting great answers, because people are unsure how it’s going to be used,” Kwiecien said. “And so it’s hard to understand what your usage may look like in the future, because we can’t tell how long it’s going to take people to flip to that.

“How fast are we going to get the solutions to really answer the way that we want it to answer?” she said.




As they face conflicting messages about AI, some advice for educators on how to use it responsibly

When it comes to the rapid integration of artificial intelligence into K-12 classrooms, educators are being pulled in two very different directions.

One prevailing media narrative stokes such profound fears about the emerging strengths of artificial intelligence that it could lead one to believe it will soon be “game over” for everything we know about good teaching. At the same time, a sweeping executive order from the White House and tech-forward education policymakers paint AI as “game on” for designing the educational system of the future.

I work closely with educators across the country, and as I’ve discussed AI with many of them this spring and summer, I’ve sensed a classic “approach-avoidance” dilemma — an emotional stalemate in which they’re encouraged to run toward AI’s exciting new capabilities while also made very aware of its risks.

Even as educators are optimistic about AI’s potential, they are cautious and sometimes resistant to it. These conflicting urges to approach and avoid can be paralyzing.


What should responsible educators do? As a learning scientist who has been involved in AI since the 1980s and who conducts nationally funded research on issues related to reading, math and science, I have some ideas.

First, it is essential to keep teaching students core subject matter — and to do that well. Research tells us that students cannot learn critical thinking or deep reasoning in the abstract. They have to reason and critique on the basis of deep understanding of meaningful, important content. Don’t be fooled, for example, by the notion that because AI can do math, we shouldn’t teach math anymore.

We teach students mathematics, reading, science, literature and all the core subjects not only so that they will be well equipped to get a job, but because these are among the greatest, most general and most enduring human accomplishments.

You should use AI when it deepens learning of the instructional core, but you should also ignore AI when it’s a distraction from that core.

Second, don’t limit your view of AI to a focus on either teacher productivity or student answer-getting.

Instead, focus on your school’s “portrait of a graduate” — highlighting skills like collaboration, communication and self-awareness as key attributes that we want to cultivate in students.

Much of what we know in the learning sciences can be brought to life when educators focus on those attributes, and AI holds tremendous potential to enrich those essential skills. Imagine using AI not to deliver ready-made answers, but to help students ask better, more meaningful questions — ones that are both intellectually rigorous and personally relevant.

AI can also support student teams by deepening their collaborative efforts — encouraging the active, social dimensions of learning. And rather than replacing human insight, AI can offer targeted feedback that fuels deeper problem-solving and reflection.

When used thoughtfully, AI becomes a catalyst — not a crutch — for developing the kinds of skills that matter most in today’s world.

In short, keep your focus on great teaching and learning. Ask yourself: How can AI help my students think more deeply, work together more effectively and stay more engaged in their learning?


Third, seek out AI tools and applications that are not just incremental improvements, but that let you create teaching and learning opportunities that were impossible to deliver before. At the same time, look for education technologies that are committed to managing risks around student privacy, inappropriate or inaccurate content, and data security.

Such opportunities for a “responsible breakthrough” will be a bit harder to find in the chaotic marketplace of AI in education, but they are there and worth pursuing. Here’s a hint: They don’t look like popular chatbots, and they may arise not from the largest commercial vendors but from research projects and small startups.

For instance, some educators are exploring screen-free AI tools designed to support early readers in real-time as they work through physical books of their choice. One such tool uses a hand-held pointer with a camera, a tiny computer and an audio speaker — not to provide answers, but to guide students as they sound out words, build comprehension and engage more deeply with the text.

I am reminded: Strong content remains central to learning, and AI, when thoughtfully applied, can enhance — not replace — the interactions between young readers and meaningful texts without introducing new safety concerns.

Thus, thoughtful educators should continue to prioritize core proficiencies like reading, math, science and writing — and use AI only when it helps develop the skills and abilities prioritized in their desired portrait of a graduate. By adopting ed-tech tools that are focused on novel learning experiences and committed to student safety, educators will lead us to a responsible future for AI in education.

Jeremy Roschelle is the executive director of Digital Promise, a global nonprofit working to expand opportunity for every learner.


This story about AI in the classroom was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education.





Now Artificial Intelligence (AI) for smarter prison surveillance in West Bengal – The CSR Journal


OpenAI business to burn $115 billion through 2029 – The Information

OpenAI CEO Sam Altman walks on the day of a meeting of the White House Task Force on Artificial Intelligence (AI) Education in the East Room at the White House in Washington, D.C., U.S., September 4, 2025.

Brian Snyder | Reuters

OpenAI has sharply raised its projected cash burn through 2029 to $115 billion as it ramps up spending to power the artificial intelligence behind its popular ChatGPT chatbot, The Information reported on Friday.

The new forecast is $80 billion higher than the company previously expected, the news outlet said, without citing a source for the report.

OpenAI, which has become one of the world’s biggest renters of cloud servers, projects it will burn more than $8 billion this year, some $1.5 billion higher than its projection from earlier this year, the report said.

The company did not immediately respond to Reuters’ request for comment.

To control its soaring costs, OpenAI will seek to develop its own data center server chips and facilities to power its technology, The Information said.

OpenAI is set to produce its first artificial intelligence chip next year in partnership with U.S. semiconductor giant Broadcom, the Financial Times reported on Thursday, saying OpenAI plans to use the chip internally rather than make it available to customers.

The company deepened its tie-up with Oracle in July with a planned 4.5 gigawatts of data center capacity, building on its Stargate initiative, a project of up to $500 billion and 10 gigawatts that includes Japanese technology investor SoftBank. OpenAI has also added Alphabet’s Google Cloud to its suppliers of computing capacity.

The company’s cash burn will more than double to over $17 billion next year, $10 billion higher than OpenAI’s earlier projection, with a burn of $35 billion in 2027 and $45 billion in 2028, The Information said.
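Taken together, the reported figures leave a rough remainder for 2029, assuming the $115 billion total covers 2025 through 2029. The sketch below computes that remainder as an inference, not a reported number; since the 2025 and 2026 figures are floors (“more than”), the actual 2029 figure would be somewhat lower.

```python
# Year-by-year burn figures as reported by The Information, in $ billions.
# 2029 is not reported; it is inferred here from the $115B total, on the
# assumption that the total covers 2025-2029.
reported = {2025: 8, 2026: 17, 2027: 35, 2028: 45}
TOTAL_THROUGH_2029 = 115

implied_2029 = TOTAL_THROUGH_2029 - sum(reported.values())
print(f"Implied 2029 burn: ~${implied_2029}B")  # roughly $10B, likely less
```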



