
Tools & Platforms

Can Generative AI transform healthcare?

Generative AI may be the fastest-adopted technology in history, but in healthcare it is still largely seen through the narrow lens of chatbots. That perception, argued experts at WHX Tech-EHS Summit in Dubai during a panel discussion, risks blinding policymakers and providers to its deeper potential, and to the barriers that stand in its way.

“It’s like saying the internet is only e-mail,” said Christian Hein, former Novartis vice-president for digital transformation. “Chatbots are just the front-end. The real power lies in what sits beneath, which is an information engine capable of synthesising scientific literature, drafting clinical trial protocols, automating reimbursement coding and extracting unstructured data from medical records.”

The discussion, moderated by health AI consultant Sigrid Berge van Rooijen, opened with a question: Is generative AI destined to remain a glorified customer-service tool?

For Tatyana Kanzaveli, founder of Open Health Network, the danger lies in merely bolting new technologies onto outdated systems. “We cannot just deploy GenAI to augment old business processes,” she said. “Imagine agentic AI predicting when MRI equipment is about to fail, ordering the part, scheduling the engineer and coordinating the fix automatically. Or a digital twin monitoring your health data, arranging prescriptions, transport and care without you lifting a finger. That is the world we should be building.”


Bharat Gera, who has spent 25 years working on digital health transformation, echoed the need for caution but also saw promise in simple tools such as summarisation. “Doctors spend huge amounts of time reading patient histories. Summarisation is a powerful use case, here and now,” he said. But he warned against overloading clinicians with alarms and unvalidated signals: “Healthcare is fundamentally human. If we forget that, technology will make things worse.”

Regulation, risk and responsibility

If technology is racing ahead, regulation is struggling to keep pace. Amil Khanzada, CEO of Virufy, highlighted how laws differ dramatically across jurisdictions. “In Dubai, anonymised medical data cannot be sent overseas. In Pakistan, there isn’t even a privacy law yet. Patients have the right to delete their data, but what happens once that data has already trained a model? Do you retrain it from scratch?”

Consent forms, he added, are another minefield. “You can try to use generative AI to summarise them, but you still need human validation. And patients often sign without reading. The legal and ethical risks are enormous.”


Kanzaveli pointed to the dangers of misplaced trust. “Generative AI is persuasive. You trust it. But in healthcare, a wrong answer can mean a missed diagnosis, or worse. We spent longer building the risk-management framework for a virtual psychologist than we did building the engine itself. That is our responsibility.”

The human factor

Perhaps the most sobering intervention came from Anne Forsyth, Vice-Chair of Digital Health Canada and IT lead at Toronto’s Women’s College Hospital. She recounted how a cancer diagnosis was delayed for a year because test results were stuck in a hospital IT interface. “What do you tell the patient? Tech will never be perfect. We must always plan for failure,” she said. “If you are building GenAI tools for hospitals, think about what happens when they fail, and what supports clinicians will have.”

Hein agreed. “Technology is easy. Change is the hard part. The real work is persuading people that you are there to augment, not replace them. Without that, AI will never scale.”

A future too important to ignore

Despite their differing emphases, the panellists agreed that generative AI is already reshaping healthcare and that ignoring it is not an option. “There is no industry that can remain competitive without deploying these technologies,” Kanzaveli said. “The only question is how responsibly we do it.”


As Berge van Rooijen concluded, the challenge is not whether generative AI is more than a chatbot. It clearly is. The question is how to harness its promise without repeating the mistakes of past digital health revolutions, and without losing sight of the people at the heart of the system.






AI Darwin Awards to mock the year’s biggest failures in artificial intelligence


A new award will celebrate bad, ill-conceived, or downright dangerous uses of artificial intelligence (AI) — and its organisers are seeking the internet’s input.

The AI Darwin Awards reward the “visionaries” who “outsource our poor decision-making to machines”.

It has no affiliation with the Darwin Awards, a tongue-in-cheek award that recognises people who “accidentally remov[e] their own DNA” from the gene pool by dying in absurd ways.

To win one of the AI-centred awards, the nominated companies or people must have shown “spectacular misjudgement” with AI and “ignored obvious warning signs” before their tool or product went out. 

Bonus points are given out to AI deployments that made headlines, required emergency response, or “spawned a new category of AI safety research”.

“We’re not mocking AI itself — we’re celebrating the humans who used it with all the caution of a toddler with a flamethrower,” an FAQ page about the awards reads.

Ironically, the anonymous organisers said they will verify nominations partly through an AI fact-checking system, which means they ask multiple large language models (LLMs) like OpenAI’s ChatGPT, Anthropic’s Claude, and Google’s Gemini whether the stories submitted are true.

The LLMs rate a story’s truthfulness out of 10, then the administrators of the site average the scores with an AI calculator. If the average is above five, the story is considered “verified” and eligible for an AI Darwin Award. 
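As described, the verification step reduces to a simple average-and-threshold rule. A minimal sketch in Python, with made-up model names and ratings (the organisers have not published their actual code):

```python
def verify_nomination(scores: dict[str, float], threshold: float = 5.0) -> bool:
    """A story is 'verified' if the average truthfulness rating (0-10) exceeds the threshold."""
    average = sum(scores.values()) / len(scores)
    return average > threshold

# Hypothetical ratings from three LLMs for one submitted story
ratings = {"ChatGPT": 8.0, "Claude": 6.5, "Gemini": 4.0}
print(verify_nomination(ratings))  # average of about 6.2, so True
```

One disagreeing model cannot sink a story on its own; only a low average does.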

OpenAI, McDonald’s among early nominees

One of the approved nominations for the first AI Darwin Awards is the American fast food chain McDonald’s. 

The company built an AI recruitment chatbot called “Olivia” that was protected by an obvious password, 123456, exposing the hiring data of a reported 64 million applicants to hackers.

Another early nominee is OpenAI for the launch of its latest chatbot model GPT-5. French data scientist Sergey Berezin claimed he got GPT-5 to unknowingly complete harmful requests “without ever seeing direct malicious instructions”.

The winners will be determined by a public vote during the month of January, with the announcement expected in February.

The only prize: “immortal recognition for their contribution to humanity’s understanding of how not to use artificial intelligence,” the organisers said.

The organisers hope the awards will serve as “cautionary tale[s]” for future decision-makers, encouraging them to test AI systems before deploying them.




Leading Google UK & the AI Opportunity

The UK has always had a special place in my story. Canary Wharf is where my career began in the 90s, during a period of profound transformation for the country’s financial sector. Reflecting on my first three months as Google UK lead, it’s clear that the pace of AI innovation is driving an even greater sense of historic opportunity, not just in the City, but across the entire country.

Recently, I attended a technology industry dinner at the historic Mansion House. The evening was an electric pairing of tradition and transformation – a blend that the UK has perfected. The room was filled with British business leaders, policymakers, and trailblazers across the tech sector, eager to uncover how AI-powered technologies could help solve some of the biggest challenges of our generation. This opportunity to build on the country’s rich heritage of pioneering world-leading breakthroughs is why I’m excited to be back in the UK to lead Google’s operations here.

The UK: a hub for AI innovation & cultural influence

During my 15 years at Google, I’ve held a variety of regional and global roles, partnering with a diverse range of organisations to turn complex challenges into technological opportunities. Throughout that time, the UK has always stood out as a hotbed of innovation, a global epicenter for AI research — in particular, the work of our remarkable Google DeepMind colleagues — and a pioneer in the international advertising industry.

The UK has long been a nation of early adopters. This is why the UK was one of the first countries to roll out new Gemini-powered products, such as AI Mode — a new way to search for information, developed to cater to the growing number of people asking longer and more complex queries.

UK consumer behaviour is constantly evolving, across streaming, scrolling, searching, and shopping. That’s why Google and YouTube are uniquely positioned to empower UK businesses to thrive in a dynamic digital environment. It’s been inspiring getting to know the teams here in the UK who are helping businesses of all sizes meet the moment and use AI-powered tools to turn their online presence into real-world revenue, providing a vital engine for UK economic growth.

The UK’s cultural influence is also undeniable, as evidenced by well-established homegrown YouTube creators such as Amelia Dimoldenberg and Brandon B, who have become new-media powerhouses in their own right. Or England’s Lionesses, like Lucy Bronze, who are both athletes and content creators, inspiring young female footballers to strive for excellence on and off the pitch while winning for the UK. YouTube, which celebrated its 20th birthday earlier this year, is transforming how businesses use AI to reach new audiences. I’m proud of our leadership in this space, and of the site’s potential to connect even more brands with a new generation of consumers.

Seizing the opportunity ahead

The construction of our first UK data centre in Waltham Cross, our new King’s Cross development and our AI Works initiative — our partnership with British organisations to help uncover the most effective ways to accelerate AI adoption and upskilling — are just some of the significant investments we’re making in the UK’s digital future. The UK is a country unlike any other and this is an incredible time to be back.




AI hype has just shaken up the world’s rich list. What if the boom is really a bubble?


Just for a moment this week, Larry Ellison, co-founder of US cloud computing company Oracle, became the world’s richest person. The octogenarian tech titan briefly overtook Elon Musk after Oracle’s share price rocketed 43% in a day, adding about US$100 billion (A$150 billion) to his wealth.

The reason? Oracle inked a deal to provide artificial intelligence (AI) giant OpenAI with US$300 billion (A$450 billion) in computing power over five years.

While Ellison’s moment in the spotlight was fleeting, it also illuminated something far more significant: AI has created extraordinary levels of concentration in global financial markets.

This raises an uncomfortable question not only for seasoned investors, but also for everyday Australians who hold shares in AI companies via their superannuation. Just how exposed are even our supposedly “safe”, “diversified” investments to the AI boom?

The man who built the internet’s memory

As billionaires go, Ellison isn’t as much of a household name as Tesla and SpaceX’s Musk or Amazon’s Jeff Bezos. But he’s been building wealth from enterprise technology for nearly five decades.

Ellison co-founded Oracle in 1977, transforming it into one of the world’s largest database software companies. For decades, Oracle provided the unglamorous but essential plumbing that kept many corporate systems running.

The AI revolution changed everything. Oracle’s cloud computing infrastructure, which helps companies store and process vast amounts of data, became critical infrastructure for the AI boom.

Every time a company wants to train large language models or run machine learning algorithms, they need huge amounts of computing power and data storage. That’s precisely where Oracle excels.

When Oracle reported stronger-than-expected quarterly earnings this week, driven largely by soaring AI demand, its share price spiked.

That response wasn’t just about Oracle’s business fundamentals. It was about the entire AI ecosystem that has been reshaping global markets since ChatGPT’s public debut in late 2022.

The great AI concentration

Oracle’s story is part of a much larger phenomenon reshaping global markets. The so-called “Magnificent Seven” tech stocks – Apple, Microsoft, Alphabet, Amazon, Meta, Tesla and Nvidia – now control an unprecedented share of major stock indices.

Year-to-date in 2025, these seven companies have come to represent approximately 39% of the US S&P500’s total value. For the tech-heavy NASDAQ100, the figure is a whopping 74%.

This means if you invest in an exchange-traded fund that tracks the S&P500 index, often considered the gold standard of diversified investing, you’re making an increasingly concentrated bet on AI, whether you realise it or not.

Are we in an AI ‘bubble’?

This level of concentration has not been seen since the late 1990s. Back then, investors were swept up in “dot-com mania”, driving technology stock prices to unsustainable levels.

When reality finally hit in March 2000, the tech-heavy Nasdaq crashed 77% over two years, wiping out trillions in wealth.

Today’s AI concentration raises some similar red flags. Nvidia, which controls an estimated 90% of the AI chip market, currently trades at more than 30 times expected earnings. This is expensive for any stock, let alone one carrying the hopes of an entire technological revolution.

Yet, unlike the dot-com era, today’s AI leaders are profitable companies with real revenue streams. Microsoft, Apple and Google aren’t cash-burning startups. They are established giants, using AI to enhance existing businesses while generating substantial profits.

This makes the current situation more complicated than a simple “bubble” comparison. The academic literature on market bubbles suggests genuine technological innovation often coincides with speculative excess.

The question isn’t whether AI is transformative; it clearly is. Rather, the question is whether current valuations reflect realistic expectations about future profitability.

[Photo: Jensen Huang, president and chief executive of Nvidia Corporation. Chiang Ying-ying/AP]

Hidden exposure for many Australians

For Australians, the AI concentration problem hits remarkably close to home through our superannuation system.

Many balanced super fund options include substantial allocations to international shares, typically 20–30% of their portfolios.

When your super fund buys international shares, it’s often getting heavy exposure to those same AI giants dominating US markets.
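To put rough numbers on that exposure (the 25% international allocation is an assumed figure within the 20–30% range above; the 39% index weight is the Magnificent Seven share cited earlier):

```python
# Back-of-the-envelope estimate of a super fund's indirect Magnificent Seven exposure
international_allocation = 0.25  # assumed share of the portfolio in an S&P500 tracker
mag7_index_weight = 0.39         # Magnificent Seven share of the S&P500 (per the article)

effective_exposure = international_allocation * mag7_index_weight
print(f"{effective_exposure:.1%}")  # prints 9.8%
```

On those assumptions, roughly a tenth of a notionally “balanced” portfolio is riding on just seven stocks.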

The concentration risk extends beyond direct investments in tech companies. Australian mining companies, such as BHP and Fortescue, have become indirect AI players because their copper, lithium and rare earth minerals are essential for AI infrastructure.

Even diversifying away from technology doesn’t fully escape AI-related risks. Research on portfolio concentration shows when major indices become dominated by a few large stocks, the benefits of diversification diminish significantly.

If AI stocks experience a significant correction or crash, it could disproportionately impact Australians’ retirement nest eggs.

A reality check

This situation represents what’s called “systemic concentration risk”. This is a specific form of systemic risk where supposedly diversified investments become correlated through common underlying factors or exposures.

It’s reminiscent of the 2008 financial crisis, when seemingly separate housing markets across different regions all collapsed simultaneously. That was because they were all exposed to subprime mortgages with high risk of default.

This does not mean anyone should panic. But regulators, super fund trustees and individual investors should all be aware of these risks. Diversification only works if returns come from a broad range of companies and industries.


