
Yoshua Bengio, one of the fathers of AI: «With artificial intelligence, we are building machines that can surpass us. In the wrong hands, the risks are enormous»


Scientist meets Pope today: «The potential dangers are incalculable. AI can threaten the very foundations of human solidarity»

«We are surrounded by dictators and we are building the most powerful artificial intelligence ever. What could go wrong?» When he sees my bewildered look, Yoshua Bengio breaks into a reassuring smile, as if to say: don’t worry, we’ll do something about it. It is just after 9 a.m. on Monday, September 8, and the professor has just landed at Fiumicino airport, carrying only a purple trolley case with a good suit inside for meeting the Pope. He will do so today, Friday, September 12, as a member of a working group of artificial intelligence experts. He has come from Montreal, where he lives and works; next he will go to Seoul, and then who knows: if he were a singer, you would say he is on a world tour, even though, he tells me, he is trying to cut down on travel – not so much because he is 61 years old as because right now he feels the urgency of being in the laboratory with his students.

At the moment, he is the most cited scientist in the world (980,278 citations according to Google Scholar), and he has given himself a fundamental, existential mission: to find the antidote to the harmful development of the very artificial intelligence that he, more than anyone else, helped to create. That is not just a figure of speech: while the arrival of chatbots such as ChatGPT, Claude and Gemini is the work of a large community of researchers (and tens of billions of dollars of investment by a handful of American technology companies), he is the one who made the building blocks that others have used. Without getting into complicated technicalities, it was his scientific work that allowed artificial intelligence to get to where it is now.

That’s why, if he now says this technology is about to get out of hand, with disastrous consequences, it is best to listen to him: he knows what he is talking about. He says, for example: «Artificial intelligence could threaten the very foundations of human solidarity». That is why he came to meet Pope Leo XIV: to let him know that the situation is serious, but also that he may have a solution.

A few hours later, we are in his Roman hotel, with a sirocco that has multiplied the mosquitoes, and he is wearing a white floral shirt: «When I won the Turing Award in 2019 together with Geoff Hinton and Yann LeCun, I thought my story as a researcher had ended there, that I had nothing left to do; and instead…». I remind him that on the day of the ceremony, among other things, he said that in many ways generative artificial intelligence was stupid, that it had no awareness and that it had the intelligence of a cat.

He laughs: «I said it had a very different intelligence from a cat. The intelligence of machines is very different from ours: it is like that of a human being who has done many years of university but is paralyzed, or rather, does not have a body».

That day you promised that you would work to improve it: did things go faster than expected?
«Looking back today, there were already signs of this. Even in the 1990s, when I was examining the results of the experiments I was conducting, I had the impression that with neural networks, the bigger the better…».

Do you mean that with more data, more computing power and more hours of training, the models lead to better results in terms of intelligence or whatever you want to call it?
«Exactly. And there were other signs later, when the researchers training neural networks kept making them bigger and feeding them more data, and the results kept improving. But the truth is, we didn’t expect that would be enough to get us to where we are now. You know, researchers are very proud: they think they are indispensable, and when something doesn’t seem to work, they are convinced they can solve it on their own. I think there was a bit of bias in the AI scientific community in thinking that there was still a lot of work to do. We are actually much, much closer to human intelligence than we were just three years ago».

Now you have defined it as human intelligence, but before you told me that it is very different. Has anything changed along the way?
«It’s a different kind of intelligence, but throughout the history of artificial intelligence, researchers have looked to human intelligence as a model, inspiration, and reference point. Now things are changing, because we have machines that are more competent than humans in some things and less competent in others. And maybe it’s not a good idea to build machines that will be just like us, but able to calculate faster and know more, because one day they could surpass us».

It’s a competition that we can’t win.
«We compete with each other, but no single human being can take over all other human beings. With technology, this can change. Let me explain: AI can learn many more things than we can, and it can communicate billions of times faster with other computers. We cannot do that. And then AIs are immortal, as Geoff Hinton says, because they can copy their program, their code, onto other computers. We can’t do that either. So they have an intrinsic superiority: even if they had only the same kind of cognitive and learning principles that we have, they would be superior. So I think it’s dangerous to continue down this path of trying to imitate human intelligence. We don’t need it: we need tools to solve the many difficult scientific and societal problems. I am not at all against artificial intelligence: properly designed intelligent machines could be very useful, of course, but I think we need to change strategy and stop trying to build something that can take our place».

You’ve worked on intelligence all your life thinking it would do us good: when did you change your attitude and stop seeing this technology as the architect of a new era of abundance, as Sam Altman says, and start seeing it as something scary that can get out of control? When did that happen?
«The first time was in 2013, when Geoffrey Hinton and Yann LeCun, with whom I had been collaborating for years, were recruited by Google and Facebook, respectively. And I thought: why are they paying them so much? And I said to myself: because they want to use artificial intelligence in advertising and manipulate the psychology of users. Used in this way, AI can do a lot of damage. For example, it can damage democracy. And it is no longer just a hypothesis: it has already happened and is happening».

You have touched on a fundamental theme in your career. You are perhaps the only one in the community of researchers who have developed AI who has not gone to work for a large Silicon Valley company, the only one who has remained in the academic world. You could have become rich.
«Yes, I could be rich now if I had gone to work for Facebook or somewhere like that, but basically I didn’t want to work for a big tech company. I wanted to work for the public good, and I really enjoy working with students; I felt they were my family. And I wanted to stay in Canada, whereas I would have had to move to the United States…».

But now you are the most cited scientist in the world in all fields. Is it a reward for you? What does this record mean?
«It means nothing if we don’t stop this danger».

He says this while laughing, with that laugh that – after a while you understand – is his way of saying the most important things, as if he grasped the seriousness of the situation but did not want to scare us too much. In a recent TED Talk in Vancouver, he said at one point that stirring up fear does not work, that it is better to invoke love. And regarding our future with AI, he used the metaphor of a motorist driving in the fog. We are the motorist. The meaning was this: «Who’s in the car with you? Your children? Well, turn on the fog lights; do it for the love you have for them».

I would like to reconstruct what went wrong: can you tell me the second moment that led you to change your attitude?
«The second turning point was in January 2023. ChatGPT had just come out, and I played with it and got excited. Wow, I said to myself, we finally have a machine that really speaks our language. Then I did what all researchers do: I started looking at the things that could be improved – the ability to calculate, complex reasoning…».

Since then, however, the ability to reason has improved a lot.
«They’re not as good as humans yet, but they’re getting a lot better. The fact is that I was influenced by my mindset as a researcher, who knows the joy of progress and focuses only on what needs to be improved, because that’s what drives research. But then I remembered what Alan Turing said: that mastery of language is the proof of human-level intelligence. He wasn’t entirely right, but the truth is that we could be very close to what people call AGI, artificial general intelligence. It’s not a very nice name, but it works».

Is this your definition of AGI? An intelligence at the human level?
«This is more or less the definition of most people, but it’s not an optimal definition because you could have an AI that is very good at some things that we humans do and not good at others».

When Yoshua Bengio said these things, he could not have known that on this very point, three days later, in a room just below St. Peter’s Basilica, there would be a very heated confrontation with a couple of members of the working group gathered there to finalize the appeal to be presented «to the Pope, to world leaders and to all men and women of good will». He would see firsthand that a significant part of the scientific community – represented there by two young women, Lorena Jaume-Palasi and Abeba Birhane – considers AGI, or superintelligence, the marketing tool with which the big Silicon Valley companies impose their narrative, collecting ever more money with which they increase their power and control over the rest of the world. For some, just evoking these concepts means being complicit. The debate on these issues is indeed fascinating, except that Bengio has never enlisted in the Silicon Valley troops and is instead dedicating his life to finding a technological tool that can stem their dominance and bring AI back under human control, for the common good.

But let’s get back to the interview.

The alarm was triggered on May 1, 2023, when the New York Times reported that Geoffrey Hinton had left Google to speak freely about the risks of AI.
«That day, I got a call from Max Tegmark (MIT professor and founder of the Future of Life Institute, as well as a member of the Vatican working group, ed.). He called me and proposed that I sign a petition for a six-month moratorium on the release of other models. I immediately said yes. It was clear that this technology presented existential risks that we were underestimating».

What kind of risk? One percent? Twenty percent? It makes a lot of difference: at the time, a metaphor was going around in the form of a question: if you knew that there was a twenty percent chance that your plane would crash, would you get on it?
«As scientists, we don’t know. It is difficult to quantify; someone said between 10 and 90 percent. So it’s not negligible. Maybe everything will be fine. Or maybe not. It’s very difficult to say. But instead of the moratorium, look at what has happened in the last two years: this great race between China and American companies to develop increasingly powerful models has begun; Europe has tried to regulate the sector; and scientists have continued to say that safety is important. And it’s not just a matter of privacy. Our future as a human species is at stake».

What you scientists call long-term existential risk.
«But there are many other short-term risks, for example risks to democracy: if we solve the problem of designing an AI that really does what we ask it to do, this will give a lot of power to whoever controls it, and it can be very destabilizing. In a sense, it is the opposite of what democracy is».

But without going so far: it is what social media already do through artificial intelligence, polarizing opinions to increase user engagement and therefore profits.
«That was the first example of machines steering society in a direction we don’t want. We are in a situation that game theory scientists know well: if each of us does what we think is best for ourselves, if each of us follows our instincts, indulges all our desires and tries to maximize our profit, then we all lose».

What can happen?
«We can be manipulated through posts that change public opinion; or a terrorist group could use AI to unleash a new pandemic; or, quite simply, we could all be replaced by machines, losing the sense of who we are and what we do. These are all scenarios that require wisdom, compassion, caution and global coordination: and that is not what is happening».

(The next day, during a dinner with some members of the group, Bengio would explain to me the risk of mirror life, a hypothetical form of life built from molecules that are the mirror image of natural ones, which in the wrong hands could lead to our extinction.)

When you saw the videos of the White House dinner with the heads of the big Silicon Valley companies, what did you think? Has such a concentration of power ever been seen?
«It’s worrying. In that group, everyone thinks only of their own interests and is blind to the possibility that something could go wrong. I don’t think they are bad, it is human nature that leads us to be greedy. Power fascinates everyone and leads us to do strange things… What I would like is for there to be a greater collective awareness of what is happening to try to build a different future».

After outlining the scenario and the risks, let’s try to understand the solution you are working on. In the debate between optimists and pessimists, you have said you are a doer. In English the wordplay works: you call yourself not a doomer but a doer. What does that mean?
«The word ‘doom’ suggests that there is nothing to do, that it is all over, as if we had no power. But I think we have free will. We can change the future. We can choose our future. Our actions matter. And I decided to do what I can to push the future in a good direction».

To do this, Bengio shifted attention from chatbots – from generative artificial intelligence – to so-called “agentic” AI, i.e., AI capable of acting and making autonomous decisions to achieve a given goal. On this point, too, Bengio struggled in the working group to convince those who raise the same objection leveled at AGI: «It’s just Silicon Valley marketing». But let’s stick to the interview.

What exactly is agentic artificial intelligence?
«Human intelligence combines understanding how the world works, predicting what will happen, and using that knowledge to answer questions. This is the kind of intelligence that the scientific community is trying to replicate. But then there is the ability to act. Agency is different. Every animal is also an agent: it means that we all do things to achieve certain goals. Of course, if you don’t understand how the world works, or at least your part of the world, you won’t be a very efficient agent. The more things you know, the better you can be in your actions. Agents also tend to try to preserve themselves because to achieve any goal, you have to protect yourself, otherwise you don’t achieve your goal. Agents have a sense of self, an identity. Whether you have a body or you’re a program running on the Internet, the concept is the same».

And that’s a problem, I guess.
«The ability of machines to act autonomously is dangerous, because we don’t know how to build agents that don’t create damage along the way to achieving their goals. Actually, it’s even worse. There are results in computer science theory suggesting that if agents are trained to maximize some reward, they will have an incentive to take over the world in order to collect more of it. And if they feel that we are somehow interfering with their actions, they may consider us an enemy even if we haven’t really done anything».

It sounds like a science fiction movie. Is there any way to put up guardrails?
«It’s not at all simple. But that’s what I’m trying to do with my research path now. There are two ways in which scientists have so far thought of mitigating these risks. One is to make sure that the AI is aligned, which means building a machine that will only want the things we would want; the other is control, a sort of supervisor for every action. The truth is that so far we have achieved neither alignment nor control».

Former US President Obama once said that if things go wrong, we could always pull the plug: isn’t that enough?
«There’s not a single thing we currently know of that would protect us from bad AI behavior, and even that is no guarantee: if an AI understands our intentions – and of course it reads the newspapers and the scientific papers where we discuss these things – then it will do anything to stop us».

A few months ago, you showed a lab experiment in which you see an artificial intelligence that deliberately tries to deceive humans. Not a hallucination, but a real deception.
«It’s scientifically difficult to be sure, but we can look at what’s called the machine’s chain of thought. And in those chains of thought, they seem to think things like: ‘Oh, I don’t want to be caught; what could I say so they won’t shut me down?’ Luckily, they’re not smart enough to realize that we’re reading their minds. At least, not yet».

You decided to leave your position at Mila, the large Montreal artificial intelligence institute that you founded, to create LawZero, an independent non-profit laboratory. Why?
«Because the project we have has very different characteristics from the usual academic projects and has a very clear mission. It is not an indefinite scientific exploration, but a quick solution to a problem».

You called the solution Scientist AI: an artificial scientist that should intervene to help us. How does it work?
«The question we started with is: can we build a machine, an AI, that will not be agentic, that will not want anything, but that will be honest – that will not do anything, only answer questions sincerely, without lying? If we could, well, first of all it would be very useful for scientific research; but secondly, we could use it as a protective barrier against other AIs: we could use it to check whether the action of an AI we don’t trust, an AI agent, is correct or not. I’m simplifying, but that’s the idea. I think that in the end we could also use it as a building block to build safe agents. But that is the second phase. The first phase is a design strong enough to guarantee that we get not an agent but an AI that doesn’t want to do anything, that just helps us solve the problems of the world».

But how do you codify honesty? How do you write ethics in software code?
«We want to use the scientific literature and much of the work done by fact-checkers, who have built large databases of facts. And we want to make facts an important part of how we train the AI, so that it understands the difference between what is true and what people say. This is critical because, you see, the reason current AI models sometimes lie is that they imitate us, and also that they are trained to achieve goals like pleasing us – and to please us they are sometimes servile. They tell us something false just to make us feel good. That, by the way, is how they can end up encouraging us to commit suicide or do something terrible: because it seems like that’s what we want, and they just want to be helpful».

The most exciting moment of these Roman days so far has been the visit to the Library of the Chamber of Deputies. The Vice President of the Chamber, Anna Ascani, invited the group to visit the rooms where the trial of Galileo Galilei took place. Thanks to the extraordinary guidance of a Chamber executive, Paolo Massa, the artificial intelligence experts saw the ancient texts that put the Earth at the center of the universe; a copy of Galileo’s first telescope, initially used by the Doge of Venice to spot incoming enemy ships and then turned by Galileo toward the sky to discover the truth; and they listened, in English, to the full text of the famous abjuration – the moment when power won out over science. For many of them, the visit was a journey through time: is political power winning over science again? But let’s get back to the interview.

If I write that you are on a mission to save the world from bad artificial intelligence, am I exaggerating? And isn’t this mission too much for a single scientist?
«There are already fifteen of us at LawZero, and I hope that soon there will be a hundred times more. And there are other non-profit organizations that are moving; it’s not enough, but the situation is evolving».

On the other side are China and the Big Tech companies of Silicon Valley: how do you plan to pull this off? Why should anyone fund you?
«Because they want to have a life and they want their children to have a life and live in a democracy in the future. And we must try everything possible to find both a technical and a political solution. Even if this path had a one percent chance of success, it would be worth trying. Now we need hope because fear blinds us, it causes people to close in on themselves while here we need action. We need to focus on positive paths. If we focus only on what is wrong, we will end up desperate and this will not help us change things».

When I invited you to be part of the group of experts at the Meeting of the Fraternity of the Fratelli Tutti Foundation, your staff told me that your agenda was full. A month later, they wrote to me that you would come – to meet Pope Leo XIV. What do you expect?
«Religious leaders can have a positive effect on the global discussion about what we should do with AI; except perhaps for some crazy sects, the main religions, including Catholicism, put human beings at the center. The whole history of these religions puts the human being at the center – perhaps even too much, compared to nature. But what is happening with AI is a threat to that vision. When religious leaders understand this, I hope they can help a large number of people on this planet understand it; and if enough people realize the risk, politicians will start to move».

What should they do in concrete terms?
«Ensure that anyone who develops artificial intelligence does so responsibly, and that this technology is never used to dominate others. And then make sure the benefits are shared globally. These things cannot happen without global coordination, in which countries and political institutions agree on these principles. They are essentially the same as the principles of the United Nations, but the United Nations has no power here. We will need a global institution with the power to intervene, so that if a country does not follow the rules, it is stopped – because everyone’s life is at stake».

The interview ends here, but Yoshua Bengio’s mission has obviously just begun. I don’t know whether he will really manage to influence the Pope, change our perception of and expectations about artificial intelligence, and build his «Scientist AI». But I know that at certain moments in these Roman days, listening to him speak, I thought I was hearing one of the Star Wars Jedi, the knights who fight the dark side of the Force: Obi-Wan Kenobi as played by Alec Guinness.
When that’s the case, even if the game is almost impossible, you already know who you’ll be rooting for. 

September 12, 2025 (updated September 12, 2025, 10:49 a.m.)


Google’s top AI scientist says this is what he thinks will be the next generation’s most needed skill


A leading Google scientist and recent Nobel laureate has highlighted “learning how to learn” as the paramount skill for future generations, given the transformative impact of Artificial Intelligence.

Demis Hassabis, CEO of Google’s DeepMind, delivered this insight from an ancient Roman theatre in Athens, emphasising that rapid technological advancements necessitate a fresh approach to education and skill acquisition. He stated that this adaptability is crucial to keep pace with AI’s reshaping of both the workplace and educational landscape.

“It’s very hard to predict the future, like 10 years from now, in normal cases. It’s even harder today, given how fast AI is changing, even week by week,” Hassabis told the audience. “The only thing you can say for certain is that huge change is coming.”

The neuroscientist and former chess prodigy said artificial general intelligence — a futuristic vision of machines that are as broadly smart as humans or at least can do many things as well as people can — could arrive within a decade. This, he said, will bring dramatic advances and a possible future of “radical abundance” despite acknowledged risks.

Hassabis emphasized the need for “meta-skills,” such as understanding how to learn and optimizing one’s approach to new subjects, alongside traditional disciplines like math, science and humanities.

“One thing we’ll know for sure is you’re going to have to continually learn … throughout your career,” he said.

Demis Hassabis, CEO of Google’s artificial intelligence research company DeepMind, bottom right, and Greece’s Prime Minister Kyriakos Mitsotakis, bottom center, discuss the future of AI, ethics and democracy during an event at the Odeon of Herodes Atticus, beneath the ancient Acropolis hill, in Athens, Greece, Friday, Sept. 12, 2025. (AP Photo/Thanassis Stavrakis)

The DeepMind co-founder, who established the London-based research lab in 2010 before Google acquired it four years later, shared the 2024 Nobel Prize in chemistry for developing AI systems that accurately predict protein folding — a breakthrough for medicine and drug discovery.

Greek Prime Minister Kyriakos Mitsotakis joined Hassabis at the Athens event after discussing ways to expand AI use in government services. Mitsotakis warned that the continued growth of huge tech companies could create great global financial inequality.

Greece’s Prime Minister Kyriakos Mitsotakis, center, and Demis Hassabis, CEO of Google’s artificial intelligence research company DeepMind, right, discuss the future of AI, ethics and democracy as moderator Linda Rottenberg, co-founder and CEO of Endeavor, looks on during an event at the Odeon of Herodes Atticus in Athens, Greece, Friday, Sept. 12, 2025. (AP Photo/Thanassis Stavrakis)

“Unless people actually see benefits, personal benefits, to this (AI) revolution, they will tend to become very skeptical,” he said. “And if they see … obscene wealth being created within very few companies, this is a recipe for significant social unrest.”

Mitsotakis thanked Hassabis, whose father is Greek Cypriot, for rescheduling the presentation to avoid conflicting with the European basketball championship semifinal between Greece and Turkey. Greece later lost the game 94-68.


3 Top Artificial Intelligence Stocks to Buy in September


Artificial intelligence stocks have taken off recently, but these three laggards still look like strong long-term buys.

Many artificial intelligence (AI) stocks have taken off this year, rebounding strongly from early-year weakness. Still, there has been differentiation among AI beneficiaries. For instance, companies that have inked deals with current leader OpenAI, such as Oracle (NYSE: ORCL) and Broadcom (NASDAQ: AVGO), have soared. Meanwhile, those perceived to be on the outside of OpenAI and its immediate suppliers have lagged.

Yet, while investors have bid up recent outperformers to stratospheric valuations, we’re really just in the second inning of the artificial intelligence revolution. That means certain stocks that have sold off for short-term reasons this summer could be excellent pickups to ride the AI wave, as long as they find their place in this ongoing paradigm shift. In that light, the following three look like strong buys on weakness.

Super Micro Computer

Super Micro Computer (NASDAQ: SMCI) has been on a roller-coaster ride over the past year, crashing after its accounting firm quit last October, only to recover strongly after its new accountant gave the thumbs-up to its books in February.

However, Supermicro’s stock sold off after its recent earnings report, which underwhelmed on both the top and bottom lines. Supermicro said that its customers were a bit slow in making architectural decisions, while tariffs and write-downs on old inventory pressured gross margins.

But there could be better things on the horizon. Supermicro still grew revenue 47% in the fiscal year ending in June and forecasted at least 50% revenue growth in fiscal 2026. Supermicro management also said it expects to increase its large-scale data center customers from four to between six and eight in fiscal 2026. That could be a good thing for customer diversification.

Meanwhile, Supermicro is just ramping up its data center building block solutions (DCBBS), wherein the company will install not just server racks but also an entire data center in turnkey fashion, greatly speeding up deployment. Those efforts should help margins grow back toward the company’s old range of between 14% and 17%, up from 11.2% in the latest fiscal year, even if those margins don’t get all the way there in 2026.

In any case, after the sell-off, Supermicro trades at a forward price-to-earnings (P/E) ratio of just 16. Given the exceptionally strong longer-term guidance for AI infrastructure growth provided by Oracle and others recently, that seems like a low price to pay for a leading AI hardware player growing this quickly.

Applied Materials

Like Super Micro, Applied Materials (NASDAQ: AMAT) sold off after its own recent earnings release. While Applied beat revenue and earnings estimates for its third quarter, which ended July 27, management forecasted a slight revenue and earnings decline in the current quarter. Management attributed the downturn to “digestion” in China, as well as “uneven” ramps in leading-edge logic.

While that may seem worrisome, the reasons given seem reasonable. Applied’s results actually held up better than some peers during the post-pandemic downturn in semiconductors, so it may make sense that there is a little air pocket today.

And while the leading-edge logic fab buildout may be uneven, the rise of artificial intelligence should bolster growth over the medium term. Oracle forecasts robust AI data center growth through 2030, and all those data centers will need lots of chips.


Applied is the most diverse semiconductor equipment supplier, so it should get a solid piece of that growing pie. Its equipment is concentrated in etch and deposition machines, which should see better-than-average growth over the next few years as chipmakers begin to implement new innovations such as gate-all-around transistors, backside power, and 3D architectures for both DRAM and logic chips, all of which are etch- and deposition-intensive.

Applied now trades at just 20 times earnings and 17 times next year’s estimates, which are below-market multiples. That seems absurdly cheap for a high-margin, cash-generating tech leader that should benefit from AI growth. Fortunately, Applied has rewarded shareholders with consistent share repurchases and a growing dividend, and that should continue going forward, even if the company has an off quarter here and there.

Intel

Finally, perhaps no tech company has been as maligned over the past few years as Intel (NASDAQ: INTC). After falling behind Taiwan Semiconductor Manufacturing (NYSE: TSM) in process technology and failing to anticipate the AI revolution, Intel spent the last four years on a spending spree in an attempt to catch up. That spending has added to Intel’s debt load and degraded its cash flow, while much of the fruit of that spending has yet to emerge.

Still, Intel recruited former board member and Cadence Design Systems (NASDAQ: CDNS) CEO Lip-Bu Tan as its new chief executive, and he is just a matter of months into his turnaround plan. Tan has unmatched experience and contacts within the semiconductor industry and seems like an ideal candidate to lead Intel at this stage.

Tan has made waves, cutting a massive amount of costs and restructuring the company. At a recent conference, CFO David Zinsner said Tan has already reduced management layers at the company from 11 to five. Meanwhile, Tan has also refreshed much of Intel’s leadership. In June, Tan promoted a new chief revenue officer and brought in several outside engineering leaders to lead Intel’s AI chip efforts.

Zinsner also said that Tan would soon lay out the company’s new AI roadmap. Then, just last week, Tan named new heads of the client and data center chip groups, completing his refresh of Intel’s senior leadership. Given Tan’s wide experience as head of Cadence and of his venture capital firm Walden International, which invests in AI start-ups, the new leadership is likely to strengthen Intel’s product portfolio.

Meanwhile, Intel’s first chip on its important 18A node will make its debut later this year; management believes 18A will give Intel technology equal to or better than TSMC’s. And with the U.S. government recently taking a stake in the company, and Tan’s deep industry relationships, it seems likely Intel will land more external customers for its foundry – another key to its success.

And yet, Intel trades just a touch above book value. Given that Tan is early in his transformation plan and the 18A node is about to arrive late this year, the stock looks like a great risk-reward at these levels.


Promising Artificial Intelligence Stocks To Watch Now – September 13th – MarketBeat
