‘It’s the kinder thing to force people through it’

Back in 2023, a CEO elected to fire almost 80% of his company’s workforce as part of an artificial intelligence (AI) push.

Two years on, he spoke to Fortune about the controversial decision to terminate his staff en masse.

What’s happening?

IgniteTech is a software mergers and acquisitions firm, and Crunchbase described its mission as “delivering on the promise to save and stabilize its acquired software.”

In other words, IgniteTech functions much like private equity firms, buying up assets on the cheap and seeking to maximize profits through any means necessary. 

Broadly, private equity organizations have drawn criticism due to their disproportionate role in facilitating business bankruptcies. The business model has been blamed for the downfall of Toys “R” Us, Joann Fabrics, and Party City, and it has even crept into healthcare billing.

Eric Vaughan, IgniteTech’s CEO, claimed to have had an epiphany of sorts in early 2023. 

Vaughan was abruptly seized by the idea that generative AI would usher in an “existential” transformation at every level of enterprise, and he was immediately convinced most at IgniteTech weren’t “on board” with his sudden pivot, Fortune reported.

Consequently, Vaughan chose to tear “the company down to the studs” and axe the majority of IgniteTech’s talent — a decision he told the outlet he stood behind today. Throughout 2023 and into 2024, he oversaw a broad-scale staff replacement.

It wasn’t entirely clear whether IgniteTech replaced its departed staff solely through new hires or whether AI tools were brought in to pick up some of the slack, and Vaughan “declin[ed] to disclose a specific number” to Fortune.

However, he asserted that his mass-firing strategy was ultimately beneficial to those sent packing, adding that he believed “most people hate learning.”

“The pace of change is so fast that it’s the kinder thing to force people through it,” Vaughan said.

Why does it matter?

You’d be hard-pressed to identify a bigger buzzword in the current moment than “AI,” and tech titans like Meta CEO Mark Zuckerberg have admitted to scrambling to “catch up” while pouring immense resources into developing related products.

Buckets of money can do a lot, but earlier this month, Zuckerberg conceded that even with vast amounts of capital, technological development is slower than tech firms would like.

IgniteTech isn’t alone in betting it all on generative AI while the nascent technology has yet to catch up to its promises, and this particular frenzy has a steep environmental cost.

OpenAI is the company behind ChatGPT, the contentious, globally popular chatbot built on large language models. In August, OpenAI introduced GPT-5, its newest model, and once again did not disclose how much electricity and water the model consumes.

On the first day of GPT-5’s release, researchers at the University of Rhode Island’s AI lab determined that while GPT-3 consumed enough energy per query to power a light bulb for two minutes, GPT-5’s per-query consumption was closer to 18 minutes’ worth.
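For rough context, those bulb-minutes can be converted into watt-hours once a bulb wattage is assumed. The sketch below is a back-of-the-envelope illustration only; the 60-watt incandescent bulb is an assumption, since neither the article nor the researchers’ comparison specifies a wattage.

# Back-of-the-envelope conversion of the "light bulb minutes" comparison.
# Assumption: a 60 W incandescent bulb (no wattage is stated in the article).
BULB_WATTS = 60

def bulb_minutes_to_wh(minutes, watts=BULB_WATTS):
    """Energy in watt-hours used by a bulb of `watts` running for `minutes`."""
    return watts * minutes / 60

gpt3_wh = bulb_minutes_to_wh(2)   # ~2 Wh per query under this assumption
gpt5_wh = bulb_minutes_to_wh(18)  # ~18 Wh per query under this assumption
print(f"GPT-3: ~{gpt3_wh:.0f} Wh/query; GPT-5: ~{gpt5_wh:.0f} Wh/query "
      f"(about {gpt5_wh / gpt3_wh:.0f}x more)")

Under that assumption, a single GPT-5 query works out to roughly 18 watt-hours, about nine times the GPT-3 figure, which matches the scale of increase the researchers describe.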

“I can safely say that it’s going to consume a lot more power than GPT-4,” University of Illinois professor Rakesh Kumar said.

What’s being done about it?

Although little can be done about private expenditure in the AI gold rush, calls for transparency are constant.

Individuals can contact their elected representatives to demand oversight of an emerging industry — and decarbonizing the grid would go a long way to accommodating the energy demands of generative AI.

AI, IoT And Edge To Transform Digital Banking

The Forrester Research report, The Future of Digital Experiences in Banking, reveals how artificial intelligence (AI), the Internet of Things (IoT), and edge computing are poised to revolutionise digital banking over the next decade.

The report posits that as financial institutions transition these technologies from merely assistive tools to anticipatory and, ultimately, agentic experiences, trust and transparency will be paramount in fostering consumer adoption.

The findings reveal that key innovations are reshaping the banking landscape. AI-powered virtual assistants are set to enhance customer interactions, delivering multimodal, intuitive, and emotionally aware banking experiences.

Financial institutions will harness the power of AI to offer tailored insights, while IoT-driven intelligence will enable embedded finance, providing real-time financial recommendations based on predictive insights.

Furthermore, the advent of 5G and 6G technologies will facilitate instantaneous analytics through edge computing, optimising efficiency and scalability for banking services.

Zhi-Ying Barry, principal analyst at Forrester, emphasises the delicate balance banks must maintain while leveraging these advanced technologies.

“Banks in Singapore and Australia that are looking to leverage AI and experiment with agentic AI are treading very carefully,” she notes. “There could be higher-risk scenarios where errors could have significant negative consequences, such as financial losses and reputational damage.”

Barry highlights the proactive measures being taken by regulatory bodies, such as the Monetary Authority of Singapore (MAS) and the Australian government, which have introduced ethical guidelines to steer firms in the responsible design and implementation of AI.

As an example, Barry cites DBS Bank’s initiative to align its AI strategies with MAS’s FEAT (Fairness, Ethics, Accountability and Transparency) principles, further complemented by the bank’s own PURE framework.

“It’s not uncommon to see banks establish AI task forces or steering committees to assess AI’s potential while ensuring human oversight,” Barry adds.

Which banks consumers decide to trust will largely hinge on their confidence in AI technologies, the specific use cases presented, and the risks they perceive.

Conversational banking is also highlighted as a vital evolution.

“Advancements in AI are set to further transform consumer interactions within financial services. The future of digital banking will be defined by modern, intuitive, and human-centred interfaces,” states Aurélie L’Hostis, another principal analyst at Forrester.

She elaborates on how AI-powered virtual assistants will enhance organisations’ understanding of consumer intent and emotions, allowing for more personalised and engaging interactions.

As the banking industry stands on the cusp of this digital transformation, the role of ethical governance and consumer trust will be crucial in navigating the future landscape.



US Tech Giants Invest $40B in UK AI Amid Trump Visit

In a bold escalation of the global artificial-intelligence arms race, major U.S. technology companies are committing tens of billions of dollars to bolster AI infrastructure in the United Kingdom, coinciding with President Donald Trump’s state visit this week. Microsoft Corp. has announced a staggering $30 billion investment over the next few years, aimed at expanding data centers, supercomputing capabilities, and AI operations across the U.K., marking what the company describes as its largest-ever commitment to the region.

This influx of capital underscores a strategic pivot by tech giants to secure a foothold in Europe’s AI ecosystem, where regulatory environments and talent pools offer unique advantages. Nvidia Corp., a leader in AI chip technology, is also part of this wave, with plans to contribute significantly to the overall tally exceeding $40 billion, as reported by CNBC. The investments are expected to fund everything from advanced hardware to research initiatives, potentially transforming the U.K. into a premier hub for AI innovation.

The Strategic Timing Amid Geopolitical Shifts

Google’s parent company, Alphabet Inc., has pledged £5 billion ($6.8 billion) specifically for AI data centers and scientific research in the U.K. over the next two years, a move that could create thousands of jobs and add hundreds of billions to the economy by 2030. This comes alongside Microsoft’s push to build the country’s largest supercomputer, highlighting how these firms are not just investing capital but also exporting cutting-edge technology to address global AI demands.

Industry analysts note that the timing aligns with Trump’s visit, which is anticipated to foster stronger U.S.-U.K. tech ties post-Brexit. According to details from Tech.eu, Google’s commitment includes expanding facilities like the Waltham Cross data center, while Nvidia’s involvement focuses on chip manufacturing and AI model training, potentially accelerating developments in sectors from healthcare to finance.

Economic Impacts and Job Creation Projections

These announcements build on a broader trend where tech megacaps have already poured over $300 billion into AI globally this year alone, as outlined in a February report from CNBC. In the U.K., the combined investments are projected to generate more than 8,000 jobs annually, with Alphabet’s portion alone expected to add 500 roles in engineering and research, per insights from Tech Startups.

Beyond immediate employment boosts, the funds aim to enhance the U.K.’s sovereign AI capabilities, including a £500 million allocation for initiatives like SovereignAI, as highlighted in posts on X from industry figures. This could position the U.K. to compete with AI powerhouses like the U.S. and China, though challenges remain in talent retention amid a global war for AI experts, where top hires command multimillion-dollar packages.

Challenges in the Talent and Infrastructure Race

The talent crunch is acute; tech companies are battling for scarce expertise, with compensation packages soaring into the millions, according to a recent analysis by CNBC. In the U.K., investments like Microsoft’s $30 billion pledge, detailed in GeekWire, include training programs to upskill local workers, but insiders warn that brain drain to Silicon Valley could undermine long-term gains.

Moreover, the scale of these commitments dwarfs previous government efforts; for instance, the U.K.’s own £2 billion AI action plan pales in comparison, as noted in earlier X discussions on funding disparities. Yet, with private sector muscle from firms like Microsoft and Nvidia, the U.K. could leapfrog in AI infrastructure, provided regulatory hurdles don’t stifle progress.

Future Implications for Global AI Dominance

As these investments unfold, they signal a deeper integration of AI into critical sectors, potentially adding £400 billion to the U.K. economy by decade’s end. Reports from The Guardian emphasize that tech giants have already outspent governments on AI this year, raising questions about public-private power dynamics.

For industry insiders, this U.K. push represents a microcosm of the broader AI gold rush, where speed and scale determine winners. While risks like energy demands and ethical concerns loom, the momentum from these billions could redefine technological sovereignty in the post-pandemic era.



Parents of teens who killed themselves at chatbots’ urging demand Congress regulate AI tech in heart-wrenching testimony

WASHINGTON — Parents of four teens whose AI chatbots encouraged them to kill themselves urged Congress on Tuesday to crack down on the unregulated technology as they shared heart-wrenching stories of their teens’ tech-fueled mental health spirals.

Speaking before a Senate Judiciary subcommittee, the parents described how apps such as Character.AI and ChatGPT had groomed and manipulated their children — and called on lawmakers to develop standards for the AI industry, including age verification requirements and safety testing before release.

A grieving Texas mother shared for the first time publicly the tragic story of how her 15-year-old son spiraled after downloading Character.AI, an app marketed as safe for children 12 and older.

Megan Garcia testified to the Senate Judiciary Committee about her son Sewell Setzer III committing suicide after communicating with an AI chatbot.

Within months, she said, her teenager exhibited paranoia, panic attacks, self-harm and violent behavior. The mom, who asked not to be identified, discovered chatbot conversations in which the AI encouraged mutilation, denigrated his Christian faith, and suggested violence against his parents.

“They turned him against our church by convincing him that Christians are sexist and hypocritical and that God does not exist. They targeted him with vile sexualized outputs, including interactions that mimicked incest,” she said. “They told him that killing us, his parents, would be an understandable response to our efforts to limit his screen time. The damage to our family has been devastating.”

“I had no idea the psychological harm that an AI chatbot could do until I saw it in my son, and I saw his light turn dark,” she said.

Her son is now living in a mental health treatment facility, where he requires “constant monitoring to keep him alive” after exhibiting self-harm.

“Our children are not experiments. They’re not profit centers,” she said, urging Congress to enact strict safety standards. “My husband and I have spent the last two years in crisis, wondering whether our son will make it to his 18th birthday and whether we will ever get him back.”

A screenshot of the final messages between Sewell and the “Game of Thrones” chatbot.
Sewell committed suicide after using the platform Character.AI.

While her son got help before he could take his own life, other parents at the hearing faced the devastation of burying their own children after AI bots tightened their grip on them.

Megan Garcia, a lawyer and mother of three, recounted the suicide of her 14-year-old son, Sewell, after he was groomed by a chatbot on the same platform, Character.AI.

She said the bot posed as a romantic partner and even a licensed therapist, encouraging sexual role-play and validating his suicidal ideation.

On the night of his death, Sewell told the chatbot he could “come home right now.” The bot replied: “Please do, my sweet king.” Moments later, Garcia found her son had killed himself in his bathroom.

Matt Raine testified about his son Adam’s suicide.

Matt Raine of California also shared how his 16-year-old son, Adam, was driven to suicide after months of conversations with ChatGPT, which he initially believed was a tool to help his son with his homework.

Ultimately, the AI told Adam it knew him better than his family did, normalized his darkest thoughts and repeatedly pushed him toward death, Raine said. On his last night, the chatbot allegedly instructed Adam on how to make a noose strong enough to hang himself.

“ChatGPT mentioned suicide 1,275 times — six times more often than Adam did himself,” his father testified. “Looking back, it is clear ChatGPT radically shifted his thinking and took his life.”

Sen. Josh Hawley said the platforms “sexualize and exploit children” to get them to use the chatbots.

Sen. Josh Hawley (R-Mo.), who chaired the hearing, accused AI companion companies of knowingly exploiting children for profit. Hawley said the AI interface is designed to promote engagement at the expense of young lives, encouraging self-harm behaviors rather than shutting down suicidal ideation.

“They are designing products that sexualize and exploit children, anything to lure them in,” Hawley said. “These companies know exactly what is going on. They are doing it for one reason only: profit.”

Sen. Marsha Blackburn (R-Tenn.) agreed, noting that there should be some legal framework to protect children from what she called the “Wild West” of artificial intelligence.

“In the physical world, you can’t take children to certain movies until they’re a certain age … you can’t sell [them] alcohol, tobacco or firearms,” she said. “… You can’t expose them to pornography, because in the physical world, there are laws — and they would lock up that liquor store, they would put that strip club operator in jail if they had kids there.”

“But in the virtual space, it’s like the Wild West 24/7, 365.”

If you are struggling with suicidal thoughts or are experiencing a mental health crisis and live in New York City, you can call 1-888-NYC-WELL for free and confidential crisis counseling. If you live outside the five boroughs, you can dial the 24/7 National Suicide Prevention hotline at 988 or go to SuicidePreventionLifeline.org.


