
AI Insights

A billion-dollar bet on artificial intelligence is about to hit reality

Microsoft cut 15,000 jobs this year while investing US$80-billion on machinery that powers AI. (Photo: Dado Ruvic/Reuters)

ChatGPT burst onto the scene in late 2022, marking a new era for artificial intelligence and fuelling expectations that AI would reinvent everything. Proponents enthused that it would lift global GDP by trillions, create entirely new industries and transform how we work. CEOs promised it. Politicians echoed it. Analysts modelled it into their long-range forecasts.

But a different story is emerging: Amazon, Google, Meta, Intel and Cisco are all announcing deep layoffs. These aren’t cyclical trims. They’re a reallocation of capital. Human labour is being cut to fund machine infrastructure. Microsoft cut 15,000 jobs this year while doubling down on AI, investing eye-popping amounts. And it is doing so despite growing investor unease about whether this massive spending will actually pay off.

That’s not hyperbole. Microsoft is investing US$80-billion this year on what the tech world calls “compute” – a term that includes GPUs, data centres, cloud platforms and the electricity to power them. Capital is moving from people to the physical machinery that makes AI run.

The underlying bet is that machines will eventually deliver what humans can’t: scale, speed and 24/7 output. But that future still hasn’t arrived and, in many cases, may never look like the version we are being sold today.


Right now, AI largely means large language models (LLMs): ChatGPT, Claude, Gemini and others. These tools generate remarkably fluent responses and can sound almost human. But here’s the thing: They’re not actually thinking. They’re not reasoning. They don’t understand the words they’re using.

LLMs work through pattern prediction. They are, in effect, highly advanced autocomplete engines predicting the next most likely word in a sentence based on what they’ve seen in their training data. That’s it. The illusion of intelligence is compelling, but it’s just that: an illusion.
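To make the “advanced autocomplete” point concrete, here is a minimal, purely illustrative sketch in Python of next-word prediction from raw co-occurrence counts. Real LLMs use neural networks over billions of parameters and subword tokens, but the underlying principle is the same: pick a statistically likely continuation from patterns seen in training text. The tiny training string and helper function below are hypothetical.

```python
# Illustrative only: a toy "next-word predictor" built from bigram counts.
# Real LLMs are vastly more sophisticated, but the core move is similar:
# choose a statistically likely continuation, with no understanding involved.
from collections import Counter, defaultdict

training_text = "the cat sat on the mat the dog sat on the rug"
words = training_text.split()

# Count which word tends to follow which in the training data.
next_word_counts = defaultdict(Counter)
for current, following in zip(words, words[1:]):
    next_word_counts[current][following] += 1

def predict_next(word: str) -> str:
    """Return the most frequent continuation seen in training: a guess, not reasoning."""
    candidates = next_word_counts.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("sat"))  # -> "on", because "sat on" appears most often above
```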

Apple’s AI research team recently made this point in a paper warning that LLMs give users the illusion of thought. The models appear intelligent because they’ve learned to mimic the way humans speak. But there’s no internal model of logic or truth – just statistical guesses that sound right. Users supply the meaning; the machine doesn’t have one. After all, just because a parrot can mimic your words doesn’t mean it understands them.

And yet, on the back of this illusion, companies are making some of the biggest capital allocation decisions in a generation.

Recent funding rounds valued OpenAI as high as US$300-billion. A US$12-billion to US$13-billion revenue projection for 2025, tripling year over year, means investors assign it a revenue multiple as high as 25 times. For context, Cisco traded at a similar multiple during the dot-com peak – before losing more than 75 per cent of its value in the crash that followed.
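The arithmetic behind that multiple is straightforward; a quick back-of-the-envelope check in Python, using only the figures cited above, might look like this:

```python
# Back-of-the-envelope check of the revenue multiple cited above.
valuation = 300e9        # recent funding rounds: up to US$300-billion
revenue_low = 12e9       # low end of the 2025 revenue projection
revenue_high = 13e9      # high end of the 2025 revenue projection

print(f"Implied multiple: {valuation / revenue_high:.0f}x to {valuation / revenue_low:.0f}x")
# -> roughly 23x to 25x forward revenue
```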


OpenAI’s own forecast calls for revenue to reach US$125-billion by 2029, which would compress its multiple dramatically. But that kind of 10-times revenue growth in just four years would be unprecedented – even among the giants. Major tech firms like Apple, Google, Amazon and Microsoft – highly profitable and market-dominant – typically grow revenue by two to four times over four-year periods once they reach scale. None of them has pulled off a 10-times leap at that stage. If OpenAI does it, it will be the exception that proves a very high-risk rule.
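For scale, a 10-times jump in four years implies compound annual growth of roughly 78 per cent, far beyond what the established giants have sustained at comparable size. A small sketch of that calculation, using the projected figures cited above:

```python
# Compound annual growth rate implied by a jump from roughly
# US$12.5-billion (2025) to US$125-billion (2029) in revenue.
start_revenue = 12.5e9
target_revenue = 125e9
years = 4

cagr = (target_revenue / start_revenue) ** (1 / years) - 1
print(f"Required compound annual growth: {cagr:.0%}")  # -> ~78% per year
```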

And, unlike those giants, OpenAI isn’t diversified across product lines or industries. It has one core product and a cost structure that scales with usage. Margins are still a moving target. Meanwhile, rivals like Anthropic, xAI, DeepMind and fast-moving open-source players are gaining ground. China’s DeepSeek recently released a model that claims to match GPT-4’s performance at a fraction of the cost. That raises a bigger question: How defensible are the market share, margins and valuations OpenAI is currently commanding?

User results on the ground remain mixed. More than 80 per cent of businesses using AI technology are not yet seeing significant earnings gains, according to a new report by McKinsey & Company published in June. Most deployments are pilots or prototypes. Gartner estimates only 30 per cent make it past testing. RAND puts the failure rate closer to 80 per cent.

Still, the spending keeps rising. Over the next few years, the sector is expected to invest more than US$1-trillion in AI infrastructure. Microsoft, Amazon and Google are racing to build, on the assumption that once it’s in place, the payoff will come.


That payoff, however, continues to underwhelm. MIT economist and Nobel Laureate Daron Acemoglu estimates AI may lift U.S. GDP by just 1.1 per cent to 1.6 per cent over a decade, translating to annual productivity gains of 0.05 per cent. Helpful, yes. But nowhere near the level implied by current valuations.

The often-cited PwC report that says AI could add US$15.7-trillion to global GDP by 2030 includes both productivity gains and new consumer demand. But that number assumes smooth deployment across industries, rapid scaling and minimal resistance. The reality has been slower and far more complex.

Hype has outrun substance before. Long Island Iced Tea’s stock price rose by 432 per cent after the company announced its name change to “Long Blockchain Corp.,” despite not launching any blockchain product. Similarly, Meta’s US$60-billion investment in the metaverse offers a cautionary tale. The rebrand, the restructuring, the wave of corporate FOMO – it all led to a platform that most people simply didn’t want. Virtual reality headset sales peaked briefly in 2021–22, then slumped. The company quietly walked back its vision and pushed hard into the next hype cycle – AI.

AI is not a gimmick. But it’s also not magic. It’s an impressive tool being asked to carry an entire economic transformation before it’s ready. Most of the real value, when it does come, will likely come from narrow tools that complement – not replace – human capability.

Sam Sivarajan is a keynote speaker, independent wealth management consultant and author of three books on investing and decision-making. His forthcoming book will explore how to thrive in a world of uncertainty.




AI Insights

Artificial Intelligence Stocks To Follow Now – August 30th – MarketBeat


AI Insights

AI-powered stethoscope developed in Britain detects heart disease in 15 seconds

A team of researchers has confirmed the potential of a new type of stethoscope: specialists say that, with the help of artificial intelligence, the device could save more lives. This is reported by UNN, citing Euronews and Spiegel.

Details

Artificial intelligence is reaching ever more sectors and changing practices that once seemed untouchable. One scene has been unchanged for decades: during a visit, the doctor holds a stethoscope to the patient’s chest and listens, among other things, to the heartbeat.

For reference

The stethoscope, invented in 1816 by the French physician René Laennec, has been a symbol of medicine for more than two centuries. Always around the neck, this “simple” instrument has accompanied generations of doctors, performing the indispensable task of examining the heart and lungs.

With an AI stethoscope, examinations may now be better, or at least faster

Researchers in the United Kingdom have developed an artificial intelligence (AI)-based stethoscope. According to the scientists, the new device can detect three heart conditions in just 15 seconds.

Note:

  • smart stethoscopes (which transmit body sounds to software) are relatively new;
  • researchers are now exploring the potential of such devices.

The latest results were recently presented at the annual congress of the European Society of Cardiology in Madrid.

How the examination was conducted using the new technology

The study was conducted by Imperial College London and Imperial College Healthcare NHS Trust.

The goal was to determine whether the innovative stethoscopes can detect heart failure, heart valve disease and cardiac arrhythmias significantly better than traditional methods.

For the study, over 12,000 patients at 96 medical facilities were examined using AI stethoscopes from the American company Eko Health. Their data were then compared with data from patients at 109 medical facilities where the technology was not used.


According to the study, patients examined with the AI stethoscope were 2.33 times more likely to be diagnosed with heart failure within the following twelve months than those in the control group.

  • atrial fibrillation, which can increase the risk of stroke, was detected 3.5 times more often in the AI-assisted group;
  • heart valve diseases were detected 1.9 times more often within twelve months.

Comment

Our study shows that it is now possible to detect three heart diseases in one session

– said Nicholas Peters, lead researcher at Imperial College London and consultant cardiologist at Imperial College Healthcare NHS Trust.

Addition

The possibility of undesirable effects cannot be ruled out, however.

Healthy people, for example, may be mistakenly diagnosed with heart problems.

Researchers emphasize:

The AI-based stethoscope should not be used for routine examinations, but only for patients who already have a suspected heart condition.

Recall

Researchers at Cambridge University have found that common aspirin can enhance the immune system’s ability to fight cancer spread.





AI Insights

The good and bad of machine learning

Imogen West-Knights is absolutely right about us losing our brain power to the artificial intelligence bots (ChatGPT has its uses, but I still hate it – and I’ll tell you why). I too believe creative imagination is a muscle that needs exercise. She is also right that it can revolutionise scientific endeavour. My field of weather forecasting will soon be revolutionised by machine learning – a type of AI – where recognising enough past weather patterns lets us predict the weather to come. But writing best-man speeches, leaving speeches for work colleagues, letters to a dear friend? Do we really want to dissolve into brain-lazy folk who use AI to be the understudy to our own emotions?

If I say “I love you” to someone, would they like to hear it from me or a bot? There is another concern: AI output has no audit trail, no clues to its source. Its source is the wild west. Anyone – good, bad, indifferent – can feed into it, program it, bias it. If, as you say, Imogen, you do end up in the woods in an “analogue manner” with your ability to think intact, I’ll happily join you. Hopefully others will too.
Murray Dale
Hayle, Cornwall

Imogen West-Knights shares her hatred of offloading to ChatGPT the tasks that make us human. And while I share her concerns (I couldn’t have put them better myself), there is an additional one that troubles me: if students go through their entire school lives with this all-knowing and all-solving technology at their fingertips, how will their critical thinking skills develop?

Students in literature class are not given books such as The Great Gatsby so they can regurgitate the plot 20 years later at a dinner, but rather so that they can understand the interconnection of class disparities, wealth and the social atmosphere after the first world war, and so they can trace parallels with the present day.

They learn multivariable calculus not because they will need it to buy groceries but to make their brains strong and malleable, so that grasping and implementing new concepts and ideas will become easier, whatever the subject.

And they don’t learn history so they can repeat over and over “Victoria 1837, Edward VII 1901, George V 1910, Edward VIII 1936, George VI 1936, Elizabeth II 1952”, but to understand how sequences of events have led to wars, legislative changes and economic crises, and can do so again.

Technologies that make work easier have always been seductive, and always will be. AI usage is already rampant in secondary schools and universities.

But as ChatGPT turns three years old in a few months, children of the same age are starting to go to kindergarten. And I wonder how, in the years to come, we will ensure that their answer to everything is not “I will ask ChatGPT.”
Ignacio Landivar
Berlin, Germany



