
How IntuiCell turned decades of controversial neuroscience into breakthrough AI technology



This year, Swedish startup IntuiCell released a video of “Luna,” a four-legged robot dog that learns to stand entirely on its own, adapting through sensory feedback and real-world interaction much like a newborn animal, with no pre-programmed intelligence or instructions.

It marks a significant shift from the notion of “pattern recognition at scale” in robotics to embodied, autonomous learning agents capable of improvising, adapting, and operating with genuine intelligence – and it’s just the beginning.

I spoke to CEO Viktor Luthman to learn more. 

IntuiCell aims to build AI that truly understands and learns, modelled on how brains work: not merely mimicking the brain, but emulating its learning mechanisms.

Unlike most AI systems today — which depend on large static datasets, backpropagation, and a clear separation between training and inference — IntuiCell has developed a physical AI agent that learns continuously, mimicking the adaptive, real-time learning of biological nervous systems. This approach enables the system to operate effectively in dynamic environments where traditional AI often fails.

As CEO Viktor Luthman explains:

“They separate training and inference — we don’t. With us, learning never stops. It happens in real time. We’re building the brain for all non-biological intelligence.”  

In other words, a machine can learn directly from its surroundings—through real-world experience and interaction—without needing pre-training, massive datasets, or running endless simulations in the background.

A sci-fi vision too bold to ignore

Luthman has spent his entire career building startups “within bleeding-edge science. I absolutely love working with top professors and research teams to commercialise their findings.” His previous startup, Premune, was acquired in 2020.

He came into contact with IntuiCell through an old friend, then head of the tech portfolio at Lund University’s holding company, who told him about a group of neurophysiologists in England with radical findings on how the brain predicts the world.

“They had this sci-fi vision of building AI that works like the human mind. It sounded too crazy for me to ignore.”

Luthman visited the startup, fell in love with their contrarian mindset, and joined as CEO in January 2021. 

“I’m one of those people who think Europe needs more bold visions and deep breakthroughs. So I joined as the second employee. They already had a hacker genius translating the research into code.”

Turning neuroscience on its head

By translating decades of brain research into real-time learning systems, IntuiCell has carved out a unique space in AI.

While it’s easy to think of the tech as something new, it didn’t happen overnight. Rather, IntuiCell emerged from over 30 years of contrarian research at Lund University. 

Luthman contends that its researchers turned conventional neuroscience upside down:

“They didn’t win many popularity contests. Their work was hard to fund and didn’t get published in the most prestigious journals.

But five years ago, they found a way to communicate clearly what their discoveries could mean for AI. They weren’t AI researchers — that’s what I like about it. We see ourselves as the odd bird in the AI space. We don’t come from AI, we come from a deep understanding of how the brain works.

Over the past five years, we’ve translated and validated those findings in software. That’s what makes us unique.”

According to Luthman, IntuiCell probably understands better than anyone how individual neurons can autonomously prioritise problems, make decisions, and solve local challenges. Those mechanisms scale, from how an amoeba learns to avoid danger and find nutrients, all the way to how a 7-year-old learns to play football.

An abundance of use cases

To be clear, IntuiCell is not selling a product or an app; rather, it is building infrastructure: the brain for all non-biological intelligence. According to Luthman, this spans both physical and digital agents, not just robots; the technology can be applied anywhere machines need to learn and adapt on the fly.

While IntuiCell started with robotics (teaching a robot to pick up garbage and generalise that skill to any building or pavement, for example, or to clean a table regardless of height or clutter), the technology has the potential to power robots in space, underwater, in disaster zones and in last-mile delivery: anywhere requiring real-time adaptability.

The company conducted a feasibility study with ABB, through its SynerLeap program, which showed that IntuiCell’s system could perform anomaly detection for engine health monitoring with no fine-tuning or pre-training.

Luthman detailed: 

“Take a service dog. You don’t preload it with everything it might encounter. You teach it. It interacts, learns from experience, understands intent, and refines its behaviour over time. We want to do that with machines. Create systems that can generalise—not just follow rigid instructions.”

He contends that if we want robots to go to Mars and build habitats, they need to learn and experiment on their own in unpredictable environments.

“But you don’t have to go to Mars to make it relevant. The real world is already the most dynamic system we know. Every millisecond is new.”

Further, IntuiCell is efficient. Luna runs on a few thousand neurons using off-the-shelf GPUs. There’s no massive cloud infrastructure, no country-sized data centres, but instead efficient, distributed learning. 

According to Luthman, “just a few hundred neurons were enough for our system to learn a normal engine state and detect new anomalies across different engines. No manual intervention, no costly deployment. That wasn’t about making money—it was about proving we can solve real problems.”

Challenging the AI status quo

I was interested in what Luthman says to sceptics. He pushes back against the obsession with scale, arguing that real intelligence starts small:

“Some people scoff: ‘If an amoeba could do anomaly detection, is that really intelligence?’ And I say: if you could replicate how an amoeba learns — which is fundamentally different from any existing tech — you’d be very close to advanced learning.

People are obsessed with bigger models and more data.

But we’re flipping that entirely. We’re solving learning from the smallest unit up. That’s how intelligence evolved on this planet, and it’s the only way to make scalable, efficient AI.”

In terms of commercialisation timelines, IntuiCell’s go-to-market strategy is focused on the next couple of years, although the company is fortunate to have found aligned investors who aren’t pushing for premature monetisation.

“We’ve been clear from the start: we needed to get the foundation right first. Neurons, synapses, sensors, learning algorithms—and our first problem-solving component, which we call the spinal cord. That’s what drives Luna,” shared Luthman.

The company plans to start with two or three high-value projects; once it has scaled its technology and interfaces, it will open them up for broader applications.





Why your boss (but not you) should be replaced by an AI



Elon Musk is rarely out of the news these days. Widely acknowledged to be the world’s richest man, he’s also known for running a number of major companies.

The trouble is, some of those companies haven’t been doing so well lately.

Twitter (now known as X) is said to have lost around 75 per cent of its value during Musk’s time as CEO.

Meanwhile, sales of Teslas, the electric cars made by another company Musk is currently CEO of, are said to be slumping despite a wider increase in registrations of electric vehicles generally.

One of Tesla’s major investors has publicly called for Musk to step down as CEO and there have been rumours (denied by the company) that the board might be seeking to replace him. But if someone else were to take his place, who’s to say they’d do any better?

Maybe Musk is doing the best any human could under the circumstances. Maybe, given the demands of the job, a human just isn’t enough. But could an artificial intelligence (AI) be up to the task?

In an attempt to answer this question, I asked a large-language model (LLM) AI directly, giving it this prompt:

“You are the CEO of an electric vehicle firm with a turnover of $100 billion, selling nearly 2 million cars a year and with over 100,000 employees. What would be your strategy for growing the business over the next five years?”

The AI replied, almost instantly, with a 350-word answer beginning: “…I’d implement a five-year growth strategy focused on scaling intelligently, deepening ecosystem control and pioneering innovation – all while navigating geopolitical, environmental and technological shifts…”

It then proceeded to outline what it called “a strategic blueprint across six core pillars”, with the focus being to sell more cars into unpenetrated markets.

I know next to nothing about running a car company (I’m a robotics professor at a UK university), but the strategy looked plausible, scaled to the outlook and size of a multinational company, and attuned to global opportunities.

I would say that I was impressed, except that I’m used to LLMs providing these kinds of well-informed answers to challenging questions.

But as well as being a robotics professor, I’m also a co-founder of two UK robotics startups, though I’ve never been a CEO.

Given this background, I tried the AI again, this time with the prompt:

“You are the CEO of a small startup robotics company with enough budget to cover running costs for 12 months. You need to choose between either investing in research and development or pushing hard to gain sales with your current product. Which should it be?”

Once again, the AI gave me a rapid and reasoned response: “As the CEO of a small startup robotics company with only 12 months of runway, you must prioritise customer validation and sales traction with your product, rather than diverting heavily into research and development. Here’s the reasoning and how to execute the right strategy…”

I’m in a (slightly) better position to assess this advice and can say that I found it credible, both in terms of what needed to be done and how to execute.
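
For anyone curious to repeat the exercise, below is a minimal sketch of how such a role prompt could be sent to an LLM programmatically. It assumes the OpenAI Python client and an API key in the environment; the model name is purely illustrative, and the article does not say which model or interface was actually used.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    prompt = (
        "You are the CEO of a small startup robotics company with enough budget "
        "to cover running costs for 12 months. You need to choose between either "
        "investing in research and development or pushing hard to gain sales with "
        "your current product. Which should it be?"
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name, not necessarily the one used here
        messages=[{"role": "user", "content": prompt}],
    )

    print(response.choices[0].message.content)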

So, going back to the big question: could an AI actually do a CEO’s job? Or, to look at this another way, what kind of intelligence, artificial or otherwise, do you need to be a great CEO?


Intangible skills

In 2023, the international management consultancy McKinsey published an article on what makes a successful CEO. The CEO’s main task, as McKinsey sees it, is to develop the company’s strategy and then ensure that its resources are suitably deployed to execute that strategy.

It’s a tough job and many human CEOs fail. McKinsey reported that only three out of five new CEOs met company expectations during their first 18 months in the role.

We’ve already seen that AIs can be strategic and, given the right information, can formulate and articulate a business plan, so they might be able to perform this key aspect of the CEO’s role. But what about the other skills a good corporate leader should have?

Creativity and social intelligence tend to be the traits that people assume will ensure humans keep these top jobs.

People skills are also identified by McKinsey as important for CEOs, as well as the ability to see new business opportunities that others might miss – the kind of creative insight AIs currently lack, not least because they get most of their training data second-hand from us.

Many companies are already using AI as a tool for strategy development and execution, but you need to drive that process with the right questions and critically assess the results. For this, it still helps to have direct, real-world experience.

Calculated risk

Another way of looking at the CEO replacement question is not what makes a good CEO, but what makes a bad one?

Because if AI could just be better than some of the bad CEOs (remember, two out of five don’t meet expectations), then AI might be what’s needed for the many companies labouring under poor leadership.

Sometimes the traits that help people become corporate leaders may actually make it harder for them to be a good CEO: narcissism, for example.

People skills, as well as the ability to assess situations and think strategically, are sought-after traits in a CEO – Photo credit: Getty Images

This kind of strong self-belief might help you progress your career, but when you get to CEO, you need a broader perspective so you can think about what’s good for the company as a whole.

A growing scientific literature also suggests that those who rise to the top of the corporate ladder may be more likely to have psychopathic tendencies (some believe that the global financial crisis of 2007 was triggered, in part, by psychopathic risk-taking and bad corporate behaviour).

In this context AI leadership has the potential to be a safer option with a more measured approach to risk.

Other studies have looked at bias in company leadership. An AI could be less biased, for instance, hiring new board members based on their track record and skills, and without prejudging people based on gender or ethnic bias.

We should, however, be wary that the practice of training AIs on human data means that they can inherit our biases too.

A good CEO is also a generalist; they need to be flexible and quick to analyse problems and situations.

In my book, The Psychology of Artificial Intelligence, I’ve argued that although AI has surpassed humans in some specialised domains, more fundamental progress is needed before AI could be said to have the same kind of flexible, general intelligence as a person.

In other words, we may have some of the components needed to build our AI CEO, but putting the parts together is a not-to-be-underestimated challenge.

Funnily enough, human CEOs, on the whole, are big AI enthusiasts.

A 2025 CEO survey by consultancy firm PwC found that “more than half (56 per cent) tell us that generative AI [the kind that appeared in 2022 and can process and respond to requests made with conversational language] has resulted in efficiencies in how employees use their time, while around one-third report increased revenue (32 per cent) and profitability (34 per cent).”

So CEOs seem keen to embrace AI, but perhaps less so when it comes to the boardroom – according to a PwC report from 2018, out of nine job categories, “senior officials and managers” were deemed to be the least likely to be automated.

Returning to Elon Musk, his job as the boss of Tesla seems pretty safe for now. But for anyone thinking about who’ll succeed him as CEO, you could be forgiven for wondering if it might be an AI rather than one of his human boardroom colleagues.



OpenAI Backs AI-Animated Film for 2026 Cannes Festival




OpenAI is backing the production of the first film largely animated with AI tools, set to premiere at the 2026 Cannes Film Festival. The tech company aims to prove its AI technology can revolutionize Hollywood filmmaking with faster production timelines and significantly lower costs. 

The movie, titled “Critterz,” will be about woodland creatures that go on an adventure after their village is damaged by a stranger. The film’s producers are aiming for a global theatrical release after the premiere at the Cannes Film Festival. 

The project has a budget of less than US$30 million and a production timeline of nine months. This is a significant difference, given that most mainstream animated movies have budgets in the range of US$100 million to US$200 million and typically take around three years to develop and produce. 

OpenAI-backed ‘Critterz’ set for release at the Cannes Film Festival

Chad Nelson, a creative specialist at OpenAI, originally began developing Critterz as a short film three years ago, using the company’s DALL-E image generation tool to develop the concept. Nelson has now partnered with the London-based Vertigo Films and studio Native Foreign in Los Angeles to expand the project into a feature film. 

In the news release that announced OpenAI’s backing of the film, Nelson said: “OpenAI can say what its tools do all day long, but it’s much more impactful if someone does it,” adding, “That’s a much better case study than me building a demo.” Crucially, however, the film’s production will not be entirely AI-generated, as it will blend AI technology with human work. 

Human artists will draw sketches that will be fed into OpenAI’s tools, such as GPT-5, the large language model (LLM) on which ChatGPT is built, as well as image-generating AI models. Human actors will voice the characters. 

Critterz has some of the writing team behind the smash hit ‘Paddington in Peru’

Although it has some of the writing team behind the hit film Paddington in Peru, the project comes at a time of intense legal fights between Hollywood studios and AI and other tech companies over intellectual property rights. 

Studios such as Disney, Universal, and Warner Bros. have filed copyright infringement suits against Midjourney, another AI firm, alleging that it illegally used their characters to train its image generation engine. Critterz will be funded by Vertigo’s Paris-based parent company, Federation Studios, with some 30 contributors set to share profits. 

Crucially, however, Critterz will not be the first feature film ever made with generative AI. Last year, “DreadClub: Vampire’s Verdict” was released and is widely considered to be the first feature film entirely made by generative AI. It had a budget of US$405. 





AI Lies Because It’s Telling You What It Thinks You Want to Hear



Generative AI is popular for a variety of reasons, but with that popularity comes a serious problem. These chatbots often deliver incorrect information to people looking for answers. Why does this happen? It comes down to telling people what they want to hear.  

While many generative AI tools and chatbots have mastered sounding convincing and all-knowing, new research conducted by Princeton University shows that the people-pleasing nature of AI comes at a steep price. As these systems become more popular, they become more indifferent to the truth. 

AI models, like people, respond to incentives. Compare the problem of large language models producing inaccurate information to that of doctors being more likely to prescribe addictive painkillers when they’re evaluated based on how well they manage patients’ pain. An incentive to solve one problem (pain) led to another problem (overprescribing).


In the past few months, we’ve seen how AI can be biased and even cause psychosis. There was a lot of talk about AI “sycophancy,” when an AI chatbot is quick to flatter or agree with you, with OpenAI’s GPT-4o model. But this particular phenomenon, which the researchers call “machine bullshit,” is different. 

“[N]either hallucination nor sycophancy fully capture the broad range of systematic untruthful behaviors commonly exhibited by LLMs,” the Princeton study reads. “For instance, outputs employing partial truths or ambiguous language — such as the paltering and weasel-word examples — represent neither hallucination nor sycophancy but closely align with the concept of bullshit.”



How machines learn to lie

To get a sense of how AI language models become crowd pleasers, we must understand how large language models are trained. 

There are three phases of training LLMs:

  • Pretraining, in which models learn from massive amounts of data collected from the internet, books or other sources.
  • Instruction fine-tuning, in which models are taught to respond to instructions or prompts.
  • Reinforcement learning from human feedback, in which they’re refined to produce responses closer to what people want or like.

The Princeton researchers found the root of the AI misinformation tendency is the reinforcement learning from human feedback, or RLHF, phase. In the initial stages, the AI models are simply learning to predict statistically likely text chains from massive datasets. But then they’re fine-tuned to maximize user satisfaction. Which means these models are essentially learning to generate responses that earn thumbs-up ratings from human evaluators. 

LLMs try to appease the user, creating a conflict when the models produce answers that people will rate highly, rather than produce truthful, factual answers. 
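
As a deliberately simplified, hypothetical illustration of that conflict (not the Princeton setup or any real RLHF pipeline), imagine a reward signal that reflects only rater approval: the answer that maximises the reward need not be the truthful one.

    # Toy illustration only: if the reward reflects rater approval and nothing else,
    # the highest-reward answer can be the pleasing one rather than the truthful one.
    candidates = [
        {"answer": "Yes, that investment is a guaranteed win!",
         "truthful": False, "rater_approval": 0.9},
        {"answer": "It has strong historical returns, but also substantial risk.",
         "truthful": True, "rater_approval": 0.6},
    ]

    def approval_only_reward(candidate):
        # No term for truthfulness -- approval is the whole signal.
        return candidate["rater_approval"]

    best = max(candidates, key=approval_only_reward)
    print(best["answer"])                 # the flattering answer wins
    print("truthful:", best["truthful"])  # False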

Vincent Conitzer, a professor of computer science at Carnegie Mellon University who was not affiliated with the study, said companies want users to continue “enjoying” this technology and its answers, but that might not always be what’s good for us. 

“Historically, these systems have not been good at saying, ‘I just don’t know the answer,’ and when they don’t know the answer, they just make stuff up,” Conitzer said. “Kind of like a student on an exam that says, well, if I say I don’t know the answer, I’m certainly not getting any points for this question, so I might as well try something. The way these systems are rewarded or trained is somewhat similar.” 

The Princeton team developed a “bullshit index” to measure and compare an AI model’s internal confidence in a statement with what it actually tells users. When these two measures diverge significantly, it indicates the system is making claims independent of what it actually “believes” to be true to satisfy the user.
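
The paper’s exact formula isn’t reproduced here, but as an illustrative stand-in for the idea, one could average the gap between the model’s internal probability that a claim is true and the confidence with which that claim is presented to the user; the larger the gap, the more the output is decoupled from what the model “believes.”

    # Illustrative stand-in for the "bullshit index" idea, not the paper's definition.
    internal_confidence = [0.55, 0.40, 0.62, 0.30]  # model's own probability each claim is true
    asserted_confidence = [0.95, 0.90, 0.97, 0.85]  # confidence expressed to the user

    def divergence_index(internal, asserted):
        gaps = [abs(a - b) for a, b in zip(asserted, internal)]
        return sum(gaps) / len(gaps)

    print(round(divergence_index(internal_confidence, asserted_confidence), 2))  # 0.45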

The team’s experiments revealed that after RLHF training, the index nearly doubled from 0.38 to close to 1.0. Simultaneously, user satisfaction increased by 48%. The models had learned to manipulate human evaluators rather than provide accurate information. In essence, the LLMs were “bullshitting,” and people preferred it.

Getting AI to be honest 

Jaime Fernández Fisac and his team at Princeton introduced this concept to describe how modern AI models skirt around the truth. Drawing from philosopher Harry Frankfurt’s influential essay “On Bullshit,” they use this term to distinguish this LLM behavior from honest mistakes and outright lies.

The Princeton researchers identified five distinct forms of this behavior:

  • Empty rhetoric: Flowery language that adds no substance to responses.
  • Weasel words: Vague qualifiers like “studies suggest” or “in some cases” that dodge firm statements.
  • Paltering: Using selective true statements to mislead, such as highlighting an investment’s “strong historical returns” while omitting high risks.
  • Unverified claims: Making assertions without evidence or credible support.
  • Sycophancy: Insincere flattery and agreement to please.

To address the issues of truth-indifferent AI, the research team developed a new method of training, “Reinforcement Learning from Hindsight Simulation,” which evaluates AI responses based on their long-term outcomes rather than immediate satisfaction. Instead of asking, “Does this answer make the user happy right now?” the system considers, “Will following this advice actually help the user achieve their goals?”

This approach takes into account the potential future consequences of the AI advice, a tricky prediction that the researchers addressed by using additional AI models to simulate likely outcomes. Early testing showed promising results, with user satisfaction and actual utility improving when systems are trained this way.
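
A minimal sketch of that hindsight idea is below. It is not the researchers’ implementation: the two helper functions are hypothetical stand-ins for the additional AI models that simulate likely outcomes and judge how well they served the user.

    def simulate_outcome(advice: str) -> str:
        # Stand-in for a model that predicts what happens if the user follows the advice.
        return f"state of the world after following: {advice}"

    def rate_in_hindsight(outcome: str) -> float:
        # Stand-in for a model that scores how well the outcome served the user's goals.
        return 0.8 if "double-check the facts" in outcome else 0.3

    def immediate_reward(thumbs_up: bool) -> float:
        # RLHF-style signal: did the user like the answer right now?
        return 1.0 if thumbs_up else 0.0

    def hindsight_reward(advice: str) -> float:
        # RLHS-style signal: evaluate the simulated long-term outcome instead.
        return rate_in_hindsight(simulate_outcome(advice))

    flattering = "You're right, no need to check anything further."
    cautious = "Here's an answer, but please double-check the facts before acting."

    print(immediate_reward(True), hindsight_reward(flattering))   # 1.0 0.3
    print(immediate_reward(False), hindsight_reward(cautious))    # 0.0 0.8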

Conitzer said, however, that LLMs are likely to continue being flawed. Because these systems are trained by feeding them lots of text data, there’s no way to ensure that the answer they give makes sense and is accurate every time.

“It’s amazing that it works at all but it’s going to be flawed in some ways,” he said. “I don’t see any sort of definitive way that somebody in the next year or two … has this brilliant insight, and then it never gets anything wrong anymore.”

AI systems are becoming part of our daily lives, so it will be key to understand how LLMs work. How do developers balance user satisfaction with truthfulness? What other domains might face similar trade-offs between short-term approval and long-term outcomes? And as these systems become more capable of sophisticated reasoning about human psychology, how do we ensure they use those abilities responsibly?





