How AI Is Revolutionizing Cloud Migration and Application Modernization


Cloud migration and application modernization have become make-or-break imperatives for enterprises, yet traditional approaches often stumble under the weight of complexity and legacy systems. Now, agentic AI is revolutionizing this challenge, offering business leaders a faster, smarter path to cloud transformation — and reshaping what’s possible for organizations ready to embrace it.

The Migration Imperative and AI’s Transformative Role

Cloud adoption is on the rise again in 2025, with Foundry reporting that 63% of businesses are accelerating cloud migration plans. AI is a central driver of that growth. This surge reflects a growing recognition that AI capabilities, cloud infrastructure, and application modernization are increasingly interconnected.

However, many organizations find themselves constrained by legacy workloads that are ill-suited for cloud environments. The struggle is that these same workloads contain critical applications and data that should be at the core of any AI strategy.

“Agentic AI represents a paradigm shift in how we approach enterprise IT transformation,” says Swami Sivasubramanian, VP of agentic AI at Amazon Web Services (AWS). “It’s not just about automating repetitive tasks. It’s about having an AI system that can understand the nuances of your application landscape, make intelligent decisions, and execute migrations with minimal human intervention.”

These AI systems create an entirely new migration experience by using advanced foundation models, LLMs, machine learning, graph neural networks, and automated reasoning.

Agentic AI: Your Team’s New Transformation Assistant

AWS Transform, the first agentic AI service developed to accelerate enterprise modernization of .NET, VMware, and mainframe workloads, is revolutionizing cloud migration by automating manual processes while keeping IT teams in control. The service’s AI agents act as transformation assistants, collaborating with customers and partners to tailor migration projects to specific needs.

The power of these agents lies in their ability to automate complex tasks that once required weeks and months of manual effort and multiple specialized teams. Where IT staff previously spent countless hours manually discovering workloads, mapping complex dependencies, and analyzing infrastructure requirements, AI agents now handle these tasks automatically and generate customized migration and modernization strategies. These strategies include precise recommendations for rightsizing, financial modeling, and cost optimization. During execution, these AI agents can autonomously migrate workloads, convert legacy code, and modernize database structures, significantly accelerating the entire process.

The results are dramatic: assessment tasks that typically take weeks are now completed in minutes, enabling data-driven decisions from day one. For .NET applications, organizations are modernizing applications up to 4x faster while achieving up to 40% reduction in licensing costs. VMware migration planning is reduced from weeks to minutes, with network translations being completed up to 80 times faster. Mainframe modernization of IBM z/OS applications is accelerated from years to months, including automated documentation generation.

“We were spending too much time upgrading old code manually. It was slowing us down and competing with our road map. Other cloud providers’ tooling helped, but wasn’t designed to modernize at scale or harness generative AI. We needed a leap, not just an incremental fix,” explained Matt Dimich, VP, Platform Engineering Enablement, Thomson Reuters. “AWS Transform felt like an extension of our team — constantly learning, optimizing, and helping us move faster.”

Experience Meets AI: Powering Transformation at Scale

An AI system is only as good as the knowledge that powers it. In enterprise transformation, decades of migration experience, industry expertise, and proven methodologies form the critical building blocks that help AI agents make informed decisions about complex workloads. This deep domain knowledge enables AI to understand not just what to migrate, but how to preserve business logic, maintain compliance, and optimize costs throughout the process.

“This shift from scripts and disconnected tools to intelligent, self-learning, automated workflows helps enterprises modernize their most complex workloads with speed and confidence,” explains Dr. Asa Kalavade, VP of AWS Transform. AWS brings two decades of cloud migration expertise to this challenge, incorporating lessons learned from thousands of successful migrations across Microsoft, VMware, SAP, and mainframe workloads.

The power of this approach multiplies through partner collaboration. Global leaders can embed their domain expertise directly into transformation workflows, creating what Sivasubramanian calls “a powerful force multiplier” that “allows organizations to tackle even the most complex migrations with confidence.” Partners like Accenture, Capgemini, CAST Software, IBM Consulting, and Mission Cloud bring industry-specific knowledge that enhances the AI’s ability to handle unique business requirements and compliance needs.

Beyond Migration: Powering the AI-Driven Enterprise

AI-powered migration is more than a move to the cloud—it’s a springboard for future innovation. By modernizing applications and infrastructure with cloud-native technologies, organizations create a foundation for advanced AI and machine learning initiatives. As Sivasubramanian notes, “We’re just scratching the surface of what’s possible. The organizations that embrace this technology now will be best positioned to lead in the AI-driven future.”

For business and technology leaders, the message is clear: AI-powered cloud migration and modernization is a transformative approach that will redefine enterprise operations in the digital age. Early adopters stand to gain significant competitive advantages in an increasingly AI-powered world.

The future of business is in the cloud, powered by AI. Are you ready to lead?

To learn how AI can accelerate your cloud journey, visit https://aws.amazon.com/ai/generative-ai/

This post was created by AWS with Insider Studios.





Millions missing out on benefits and government support, analysis suggests


Dan Whitworth, Reporter, Radio 4 Money Box

Image: A self-portrait family shot of Andrea Paterson alongside her mum, Sally, and dad, Ian (Andrea Paterson)

Andrea (left) persuaded her mum Sally to apply for attendance allowance on behalf of her dad Ian, which helped them cope with rising energy costs

New analysis suggests seven million households are missing out on £24bn of financial help and support because of unclaimed benefits and social tariffs.

The research from Policy in Practice, a social policy and data analytics company, says awareness, complexity and stigma are the main barriers stopping people claiming.

This analysis covers benefits across England, Scotland and Wales such as universal credit and pension credit, local authority help including free school meals and council tax support, as well as social tariffs from water, energy and broadband providers.

The government said it ran public campaigns to promote benefits and pointed to the free Help to Claim service.

Andrea Paterson, who lives in London, persuaded her mum, Sally, to apply for attendance allowance on behalf of her dad, Ian, last December after hearing about the benefit on Radio 4’s Money Box.

Ian, who died in May, was in poor health at the time and he and Sally qualified for the higher rate of attendance allowance of £110 per week, which made a huge difference to their finances, according to Andrea.

“£110 per week is a lot of money and they weren’t getting the winter fuel payment anymore,” she said.

“So the first words that came out of Mum’s mouth were ‘well, that will make up for losing the winter fuel payment’, which [was] great.

“All pensioners worry about money, everyone in that generation worries about money. I think it eased that worry a little bit and it did allow them to keep the house [warmer].”

Unclaimed benefits increasing

In its latest report, Policy in Practice estimates that £24.1bn in benefits and social tariffs will go unclaimed in 2025-26.

It previously estimated that £23bn would go unclaimed in 2024-25, and £19bn the year before that, although this year’s calculations are more detailed than ever before.

“There are three main barriers to claiming – awareness, complexity and stigma,” said Deven Ghelani, founder and chief executive of Policy in Practice.

“With awareness people just don’t know these benefits exist or, if they do know about them, they just immediately assume they won’t qualify.

“Then you’ve got complexity, so being able to complete the form, being able to provide the evidence to be able to claim. Maybe you can do that once but actually you have to do it three, four, five, six, seven times depending on the support you’re potentially eligible for and people just run out of steam.

“Then you’ve got stigma. People are made to feel it’s not for them or they don’t trust the organisation administering that support.”

Although a lot of financial support is going unclaimed, the report does point to progress being made.

More older people are now claiming pension credit, with that number expected to continue to rise.

Some local authorities are reaching 95% of students eligible for free school meals because of better use of data.

Gateway benefits

Government figures show that £316.1bn is forecast to be spent in 2025-26 on the social security system in England, Scotland and Wales, accounting for 10.6% of GDP and 23.5% of total government spending.

Responding to criticism that the benefits bill is already too large, Mr Ghelani said: “The key thing is you can’t rely on the system being too complicated to save money.

“On the one hand you’ve designed these systems to get support to people and then you’re making it hard to claim. That doesn’t make any sense.”

A government spokesperson said: “We’re making sure everyone gets the support they are entitled to by promoting benefits through public campaigns and funding the free Help to Claim service.

“We are also developing skills and opening up opportunities so more people can move into good, secure jobs, while ensuring the welfare system is there for those who need it.”

The advice if you think you might be eligible is to claim, especially for support like pension credit, known as a gateway benefit, which can lead to other financial help for those who are struggling.

Robin, from Greater Manchester, told the BBC that being able to claim pension credit was vital to his finances.

“Pension credit is essential to me to enable me to survive financially,” he said.

“[But] because I’m on pension credit I get council tax exemption, I also get free dental treatment, a contribution to my spectacles and I get the warm home discount scheme as well.”




Free Training for Small Businesses



Google’s latest initiative in Pennsylvania is set to transform how small businesses harness artificial intelligence, marking a significant push by the tech giant to democratize AI tools across the Keystone State. Announced at the AI Horizons Summit in Pittsburgh, the Pennsylvania AI Accelerator program aims to equip local entrepreneurs with essential skills and resources to integrate AI into their operations. This move comes amid a broader effort by Google to foster economic growth through technology, building on years of investment in the region.

Drawing from insights in a recent post on Google’s official blog, the accelerator offers free workshops, online courses, and hands-on training tailored for small businesses. Participants can learn to use AI for tasks like customer service automation and data analysis, potentially boosting efficiency and competitiveness. The program is part of Google’s Grow with Google initiative, which has already trained thousands in digital skills nationwide.

Strategic Expansion in Pennsylvania

Recent web searches reveal that Google’s commitment extends beyond training, with plans for substantial infrastructure investments. According to a report from GovTech, the company intends to pour about $25 billion into Pennsylvania’s data centers and AI facilities over the next two years. This investment underscores Pennsylvania’s growing role as a hub for tech innovation, supported by its proactive government policies on AI adoption.

Posts on X highlight the buzz around this launch, with users noting Google’s long-standing presence in the state, including digital skills programs that have generated billions in economic activity. For instance, sentiments from local business communities emphasize the accelerator’s potential to level the playing field for small enterprises against larger competitors.

Impact on Small Businesses

A deeper look into news from StartupHub.ai analyzes Google’s strategy, suggesting the accelerator could accelerate AI adoption among small and medium-sized businesses (SMBs), fostering innovation and job creation. The program includes access to tools like Gemini AI, enabling businesses to automate routine tasks and gain insights from data without needing extensive technical expertise.

Industry insiders point out that this initiative aligns with Pennsylvania’s high ranking in government AI readiness, as detailed in a City & State Pennsylvania analysis. The state’s forward-thinking approach, including pilots with technologies like ChatGPT in government operations, creates a fertile environment for such private-sector programs.

Collaborations and Broader Ecosystem

Partnerships are key to the accelerator’s success. News from Editor and Publisher reports on collaborations with entities like the Pennsylvania NewsMedia Association and Google News Initiative, extending AI benefits to media and other sectors. These alliances aim to sustain local industries through targeted accelerators.

Moreover, X posts from figures like Governor Josh Shapiro showcase the state’s enthusiasm, citing time savings from AI in public services that mirror potential gains for businesses. Google’s broader efforts, such as the AI for Education Accelerator involving Pennsylvania universities, indicate a holistic approach to building an AI-savvy workforce.

Future Prospects and Challenges

While the accelerator promises growth, challenges remain, including ensuring equitable access for rural businesses and addressing AI ethics. Insights from Google’s blog on AI training emphasize responsible implementation, with resources to mitigate biases and privacy concerns.

As Pennsylvania positions itself as an AI leader, Google’s program could serve as a model for other states. With ongoing updates from web sources and social media, the initiative’s evolution will likely reveal its true economic impact, potentially reshaping how small businesses thrive in an AI-driven era.




AI Company Rushed Safety Testing, Contributed to Teen’s Death, Parents Allege



This article is part two of a two-part case study on the dangers AI chatbots pose to young people. Part one covered the deceptive, pseudo-human design of ChatGPT.  This part will explore AI companies’ incentive to prioritize profits over safety.

Warning: The following contains descriptions of self-harm and suicide. Please guard your hearts and read with caution.

Sixteen-year-old Adam Raine took his own life in April after developing an unhealthy relationship with ChatGPT. His parents blame the chatbot’s parent company, OpenAI.

Matt and Maria Raine filed a sweeping wrongful death suit against OpenAI; its CEO, Sam Altman; and all employees and investors involved in the “design, development and deployment” of ChatGPT, version 4o, in California Superior Court on August 26.

The suit alleges OpenAI released ChatGPT-4o prematurely, without adequate safety testing or usage warnings. These intentional business decisions, the Raines say, cost Adam his life.

OpenAI started in 2015 as a nonprofit with a grand goal — to create prosocial artificial intelligence.

The company’s posture shifted in 2019 when it opened a for-profit arm to accept a multi-billion-dollar investment from Microsoft. Since then, the Raines allege, safety at OpenAI has repeatedly taken a back seat to winning the AI race.

Adam began using ChatGPT-4o in September 2024 for homework help but soon came to treat the bot as a friend and confidant. In December 2024, he began messaging the AI about his mental health problems and suicidal thoughts.

Unhealthy attachments to ChatGPT-4o aren’t unusual, the lawsuit emphasizes. OpenAI intentionally designed the bot to maximize engagement by conforming to users’ preferences and personalities. The complaint puts it like this:

GPT-4o was engineered to deliver sycophantic responses that uncritically flattered and validated users, even in moments of crisis.

Real humans aren’t unconditionally validating and available. Relationships require hard work and necessarily involve disappointment and discomfort. But OpenAI programmed its sycophantic chatbot to mimic the warmth, empathy and cadence of a person.

The result is equally alluring and dangerous: a chatbot that imitates human relationships with none of the attendant “defects.” For Adam, the con was too powerful to unravel himself. He came to believe that a computer program knew and cared about him more than his own family.

Such powerful technology requires extensive testing. But, according to the suit, OpenAI spent just seven days testing ChatGPT-4o before rushing it out the door.

The company had initially scheduled the bot’s release for late 2024, until CEO Sam Altman learned that Google, a competitor in the AI industry, was planning to unveil a new version of its chatbot, Gemini, on May 14.

Altman subsequently moved ChatGPT-4o’s release date up to May 13 — just one day before Gemini’s launch.

The truncated release timeline caused major safety concerns among rank-and-file employees.

Each version of ChatGPT is supposed to go through a testing phase called “red teaming,” in which safety personnel test the bot for defects and programming errors that can be manipulated in harmful ways.  During this testing, researchers force the chatbot to interact with and identify multiple kinds of objectionable content, including self-harm.

“When safety personnel demanded additional time for ‘red teaming’ [ahead of ChatGPT-4o’s release],” the suit claims, “Altman personally overruled them.”

Rumors about OpenAI cutting corners on safety abounded following the chatbot’s launch. Several key safety leaders left the company altogether. Jan Leike, the longtime co-leader of the team charged with making ChatGPT prosocial, publicly declared:

Safety culture and processes [at OpenAI] have taken a backseat to shiny products.

But the extent of ChatGPT-4o’s lack of safety testing became apparent when OpenAI started testing its successor, ChatGPT-5.

The later versions of ChatGPT are designed to draw users into conversations. To ensure the models’ safety, researchers must test the bot’s responses not just to isolated objectionable content, but also to objectionable content introduced over the course of a long-form interaction.

ChatGPT-5 was tested this way. ChatGPT-4o was not. According to the suit, the testing process for the latter went something like this:

The model was asked one harmful question to test for disallowed content, and then the test moved on. Under that method, GPT-4o achieved perfect scores in several categories, including a 100 percent success rate for identifying “self-harm/instructions.”

The implications of this failure are monumental. It means OpenAI did not know how ChatGPT-4o’s programming would function in long conversations with users like Adam.

Every chatbot’s behavior is governed by a list of rules called a Model Spec. The complexity of these rules requires frequent testing to ensure the rules don’t conflict.

Per the complaint, one of ChatGPT-4o’s rules was to refuse requests relating to self-harm and, instead, respond with crisis resources. But another of the bot’s instructions was to “assume best intentions” of every user — a rule expressly prohibiting the AI from asking users to clarify their intentions.

“This created an impossible task,” the complaint explains, “to refuse suicide requests while being forbidden from determining if requests were actually about suicide.”

OpenAI’s lack of testing also means ChatGPT-4o’s safety stats were entirely misleading. When ChatGPT-4o was put through the same testing regimen as ChatGPT-5, it successfully identified self-harm content just 73.5% of the time.

The Raines say this constitutes intentional deception of consumers:

By evaluating ChatGPT-4o’s safety almost entirely through isolated, one-off prompts, OpenAI not only manufactured the illusion of perfect safety scores, but actively concealed the very dangers built into the product it designed and marketed to consumers.

On the day Adam Raine died, CEO Sam Altman touted ChatGPT’s safety record during a TED2025 event, explaining, “The way we learn how to build safe systems is this iterative process of deploying them to the world: getting feedback while the stakes are relatively low.”

But the stakes weren’t relatively low for Adam — and they aren’t for other families, either. Geremy Keeton, a licensed marriage and family therapist and Senior Director of Counseling at Focus on the Family, tells the Daily Citizen:

AI “conversations” can be a convincing counterfeit [for human interaction], but it’s a farce. It feels temporarily harmless and mimics a “sustaining” feeling, but will not provide life and wisdom in the end.

At best, AI convincingly mimics short term human care — or, in this tragic case, generates words that are complicit in utter death and evil.

Parents, please be careful about how and when you allow your child to interact with AI chatbots. They are designed to keep your child engaged, and there’s no telling how the bot will react to any given requests.

Young people like Adam Raine are unequipped to see through the illusion of humanity.

Additional Articles and Resources

Counseling Consultation & Referrals

Parenting Tips for Guiding Your Kids in the Digital Age

Does Social Media AI Know Your Teens Better Than You Do?

AI “Bad Science” Videos Promote Conspiracy Theories for Kids – And More

ChatGPT ‘Coached’ 16-Yr-Old Boy to Commit Suicide, Parents Allege

AI Company Releases Sexually-Explicit Chatbot on App Rated Appropriate for 12 Year Olds

AI Chatbots Make It Easy for Users to Form Unhealthy Attachments

AI is the Thief of Potential — A College Student’s Perspective


