AI Research

The people who think AI might become conscious

By Pallab Ghosh

A robot lying on its back and staring up, with a background of binary code (BBC)

I step into the booth with some trepidation. I am about to be subjected to strobe lighting while music plays – as part of a research project trying to understand what makes us truly human.

It’s an experience that brings to mind the test in the science fiction film Blade Runner, designed to distinguish humans from artificially created beings posing as humans.

Could I be a robot from the future and not know it? Would I pass the test?

The researchers assure me that this is not actually what this experiment is about. The device that they call the “Dreamachine”, after the public programme of the same name, is designed to study how the human brain generates our conscious experiences of the world.

As the strobing begins, and even though my eyes are closed, I see swirling two-dimensional geometric patterns. It’s like jumping into a kaleidoscope, with constantly shifting triangles, pentagons and octagons. The colours are vivid, intense and ever-changing: pinks, magentas and turquoise hues, glowing like neon lights.

The “Dreamachine” brings the brain’s inner activity to the surface with flashing lights, aiming to explore how our thought processes work.

Pallab trying the ‘Dreamachine’, which aims to find out how we create our conscious experiences of the world

The images I’m seeing are unique to my own inner world, according to the researchers, who believe these patterns can shed light on consciousness itself.

They hear me whisper: “It’s lovely, absolutely lovely. It’s like flying through my own mind!”

The “Dreamachine”, at Sussex University’s Centre for Consciousness Science, is just one of many new research projects across the world investigating human consciousness: the part of our minds that enables us to be self-aware, to think and feel and make independent decisions about the world.

By learning the nature of consciousness, researchers hope to better understand what’s happening within the silicon brains of artificial intelligence. Some believe that AI systems will soon become independently conscious, if they haven’t already.

But what really is consciousness, and how close is AI to gaining it? And could the belief that AI might be conscious itself fundamentally change humans in the next few decades?

From science fiction to reality

The idea of machines with their own minds has long been explored in science fiction. Worries about AI stretch back nearly a hundred years to the film Metropolis, in which a robot impersonates a real woman.

A fear of machines becoming conscious and posing a threat to humans is explored in the 1968 film 2001: A Space Odyssey, when the HAL 9000 computer attacks astronauts onboard its spaceship. And in the final Mission: Impossible film, which has just been released, the world is threatened by a powerful rogue AI, described by one character as a “self-aware, self-learning, truth-eating digital parasite”.

A poster for the film Metropolis, showing the head of a robot (LMPC via Getty Images)

Released in 1927, Fritz Lang’s Metropolis foresaw the struggle between humans and technology

But recently, in the real world, thinking on machine consciousness has reached a tipping point, with credible voices concerned that this is no longer the stuff of science fiction.

The sudden shift has been prompted by the success of so-called large language models (LLMs), which can be accessed through apps on our phones such as Gemini and ChatGPT. The ability of the latest generation of LLMs to have plausible, free-flowing conversations has surprised even their designers and some of the leading experts in the field.

There is a growing view among some thinkers that as AI becomes even more intelligent, the lights will suddenly turn on inside the machines and they will become conscious.

Others, such as Prof Anil Seth who leads the Sussex University team, disagree, describing the view as “blindly optimistic and driven by human exceptionalism”.

“We associate consciousness with intelligence and language because they go together in humans. But just because they go together in us, it doesn’t mean they go together in general, for example in animals.”

So what actually is consciousness?

The short answer is that no-one knows. That’s clear from the good-natured but robust arguments among Prof Seth’s own team of young AI specialists, computing experts, neuroscientists and philosophers, who are trying to answer one of the biggest questions in science and philosophy.

While there are many differing views at the consciousness research centre, the scientists are unified in their method: to break this big problem down into lots of smaller ones in a series of research projects, which includes the Dreamachine.

Just as the search to find the “spark of life” that made inanimate objects come alive was abandoned in the 19th Century in favour of identifying how individual parts of living systems worked, the Sussex team is now adopting the same approach to consciousness.

Researchers are studying the brain in attempts to better understand consciousness

They hope to identify patterns of brain activity that explain various properties of conscious experiences, such as changes in electrical signals or blood flow to different regions. The goal is to go beyond looking for mere correlations between brain activity and consciousness, and try to come up with explanations for its individual components.

Prof Seth, the author of a book on consciousness, Being You, worries that we may be rushing headlong into a society that is being rapidly reshaped by the sheer pace of technological change without sufficient knowledge about the science, or thought about the consequences.

“We take it as if the future has already been written; that there is an inevitable march to a superhuman replacement,” he says.

“We did not have these conversations enough with the rise of social media, much to our collective detriment. But with AI, it is not too late. We can decide what we want.”

Is AI consciousness already here?

But there are some in the tech sector who believe that the AI systems in our computers and phones may already be conscious, and that we should treat them as such.

Google suspended software engineer Blake Lemoine in 2022, after he argued that AI chatbots could feel things and potentially suffer.

In November 2024, an AI welfare officer for Anthropic, Kyle Fish, co-authored a report suggesting that AI consciousness was a realistic possibility in the near future. He recently told The New York Times that he also believed that there was a small (15%) chance that chatbots are already conscious.

One reason he thinks it possible is that no-one, not even the people who developed these systems, knows exactly how they work. That’s worrying, says Prof Murray Shanahan, principal scientist at Google DeepMind and emeritus professor in AI at Imperial College, London.

“We don’t actually understand very well the way in which LLMs work internally, and that is some cause for concern,” he tells the BBC.

According to Prof Shanahan, it’s important for tech firms to get a proper understanding of the systems they’re building – and researchers are looking at that as a matter of urgency.

“We are in a strange position of building these extremely complex things, where we don’t have a good theory of exactly how they achieve the remarkable things they are achieving,” he says. “So having a better understanding of how they work will enable us to steer them in the direction we want and to ensure that they are safe.”

‘The next stage in humanity’s evolution’

The prevailing view in the tech sector is that LLMs are not currently conscious in the way we experience the world, and probably not in any way at all. But that is something that the married couple Profs Lenore and Manuel Blum, both emeritus professors at Carnegie Mellon University in Pittsburgh, Pennsylvania, believe will change, possibly quite soon.

According to the Blums, that could happen as AI and LLMs have more live sensory inputs from the real world, such as vision and touch, by connecting cameras and haptic sensors (related to touch) to AI systems. They are developing a computer model that constructs its own internal language called Brainish to enable this additional sensory data to be processed, attempting to replicate the processes that go on in the brain.

A still from the film 2001: A Space Odyssey, showing an astronaut walking along a corridor (Getty Images)

Films like 2001: A Space Odyssey have warned about the dangers of sentient computers

“We think Brainish can solve the problem of consciousness as we know it,” Lenore tells the BBC. “AI consciousness is inevitable.”

Manuel chips in enthusiastically with an impish grin, saying that these new systems, which he too firmly believes will emerge, will be the “next stage in humanity’s evolution”.

Conscious robots, he believes, “are our progeny. Down the road, machines like these will be entities that will be on Earth and maybe on other planets when we are no longer around”.

David Chalmers – Professor of Philosophy and Neural Science at New York University – defined the distinction between real and apparent consciousness at a conference in Tucson, Arizona in 1994. He laid out the “hard problem” of working out how and why any of the complex operations of brains give rise to conscious experience, such as our emotional response when we hear a nightingale sing.

Prof Chalmers says that he is open to the possibility of the hard problem being solved.

“The ideal outcome would be one where humanity shares in this new intelligence bonanza,” he tells the BBC. “Maybe our brains are augmented by AI systems.”

On the sci-fi implications of that, he wryly observes: “In my profession, there is a fine line between science fiction and philosophy”.

‘Meat-based computers’

Prof Seth, however, is exploring the idea that true consciousness can only be realised by living systems.

“A strong case can be made that it isn’t computation that is sufficient for consciousness but being alive,” he says.

“In brains, unlike computers, it’s hard to separate what they do from what they are.” Without this separation, he argues, it’s difficult to believe that brains “are simply meat-based computers”.

Companies such as Cortical Labs are working with ‘organoids’ made up of nerve cells

And if Prof Seth’s intuition about life being important is on the right track, the most likely technology will not be made of silicon run on computer code, but will rather consist of tiny collections of nerve cells the size of lentil grains that are currently being grown in labs.

Called “mini-brains” in media reports, they are referred to as “cerebral organoids” by the scientific community, which uses them to research how the brain works, and for drug testing.

One Australian firm, Cortical Labs, in Melbourne, has even developed a system of nerve cells in a dish that can play the 1972 sports video game Pong. Although it is a far cry from a conscious system, the so-called “brain in a dish” is spooky to watch as it moves a paddle up and down a screen to bat back a pixelated ball.

Some experts feel that if consciousness is to emerge, it is most likely to be from larger, more advanced versions of these living tissue systems.

Cortical Labs monitors the cells’ electrical activity for any signals that could conceivably indicate the emergence of consciousness.

The firm’s chief scientific and operating officer, Dr Brett Kagan, is mindful that any emerging uncontrollable intelligence might have priorities that “are not aligned with ours”. In which case, he says, half-jokingly, possible organoid overlords would be easier to defeat because “there is always bleach” to pour over the fragile neurons.

Returning to a more solemn tone, he says the small but significant threat of artificial consciousness is something he’d like the big players in the field to focus on more as part of serious attempts to advance our scientific understanding – but says that “unfortunately, we don’t see any earnest efforts in this space”.

The illusion of consciousness

The more immediate problem, though, could be how the illusion of machines being conscious affects us.

In just a few years, we may well be living in a world populated by humanoid robots and deepfakes that seem conscious, according to Prof Seth. He worries that we won’t be able to resist believing that the AI has feelings and empathy, which could lead to new dangers.

“It will mean that we trust these things more, share more data with them and be more open to persuasion.”

But the greater risk from the illusion of consciousness is a “moral corrosion”, he says.

“It will distort our moral priorities by making us devote more of our resources to caring for these systems at the expense of the real things in our lives” – meaning that we might have compassion for robots, but care less for other humans.

And that could fundamentally alter us, according to Prof Shanahan.

“Increasingly human relationships are going to be replicated in AI relationships, they will be used as teachers, friends, adversaries in computer games and even romantic partners. Whether that is a good or bad thing, I don’t know, but it is going to happen, and we are not going to be able to prevent it”.

Top picture credit: Getty Images





AI Research

Space technology: Lithuania’s promising space start-ups

MaryLou Costa

Technology Reporter

Reporting from Vilnius, Lithuania
A technician works with lasers at Astrolight’s lab (Astrolight)

Astrolight is developing a laser-based communications system

I’m led through a series of concrete corridors at Vilnius University, Lithuania; the murals give a Soviet-era vibe, and it seems an unlikely location for a high-tech lab working on a laser communication system.

But that’s where you’ll find the headquarters of Astrolight, a six-year-old Lithuanian space-tech start-up that has just raised €2.8m ($2.3m; £2.4m) to build what it calls an “optical data highway”.

You could think of the tech as invisible internet cables, designed to link up satellites with Earth.

With 70,000 satellites expected to launch in the next five years, it’s a market with a lot of potential.

The company hopes to be part of a shift from traditional radio frequency-based communication to faster, more secure and higher-bandwidth laser technology.

Astrolight’s space laser technology could have defence applications as well, which is timely given Russia’s current aggressive attitude towards its neighbours.

Astrolight is already part of Nato’s Diana project (Defence Innovation Accelerator for the North Atlantic), an incubator set up in 2023 to apply civilian technology to defence challenges.

In Astrolight’s case, Nato is keen to leverage its fast, hack-proof laser communications to transmit crucial intelligence in defence operations – something the Lithuanian Navy is already doing.

The navy approached Astrolight three years ago looking for a laser that would allow ships to communicate during radio silence.

“So we said, ‘all right – we know how to do it for space. It looks like we can do it also for terrestrial applications’,” recalls Astrolight co-founder and CEO Laurynas Maciulis, who’s based in Lithuania’s capital, Vilnius.

For the military, his company’s tech is attractive, as the laser system is difficult to intercept or jam.

It’s also about “low detectability”, Mr Maciulis adds:

“If you turn on your radio transmitter in Ukraine, you’re immediately becoming a target, because it’s easy to track. So with this technology, because the information travels in a very narrow laser beam, it’s very difficult to detect.”

An Astrolight laser points towards the sky with telescopes in the background (Astrolight)

Astrolight’s system is difficult to detect or jam

Worth about £2.5bn, Lithuania’s defence budget is small when you compare it to larger countries like the UK, which spends around £54bn a year.

But if you look at defence spending as a percentage of GDP, then Lithuania is spending more than many bigger countries.

Around 3% of its GDP is spent on defence, and that’s set to rise to 5.5%. By comparison, UK defence spending is worth 2.5% of GDP.

Lithuania is recognised for its strength in niche technologies like Astrolight’s lasers: 30% of the country’s space projects have received EU funding, compared with the EU national average of 17%.

“Space technology is rapidly becoming an increasingly integrated element of Lithuania’s broader defence and resilience strategy,” says Šarūnas Genys, head of the manufacturing sector and a defence sector expert at Invest Lithuania.

Space tech can often have civilian and military uses.

Mr Genys gives the example of Lithuanian life sciences firm Delta Biosciences, which is preparing a mission to the International Space Station to test radiation-resistant medical compounds.

“While developed for spaceflight, these innovations could also support special operations forces operating in high-radiation environments,” he says.

He adds that Vilnius-based Kongsberg NanoAvionics has secured a major contract to manufacture hundreds of satellites.

“While primarily commercial, such infrastructure has inherent dual-use potential supporting encrypted communications and real-time intelligence, surveillance, and reconnaissance across NATO’s eastern flank,” says Mr Genys.

Tomas Malinauskas in front of bookshelves (Blackswan Space)

Lithuania should invest in its domestic space tech, says Tomas Malinauskas

Going hand in hand with Astrolight’s laser technology is an autonomous satellite navigation system developed by fellow Lithuanian space-tech start-up Blackswan Space.

Blackswan Space’s “vision-based navigation system” allows satellites to be programmed and repositioned without a human at a ground control centre – operators who, its founders say, won’t be able to keep up with the sheer volume of satellites launching in the coming years.

In a defence environment, the same technology can be used to remotely destroy an enemy satellite, as well as to train soldiers by creating battle simulations.

But the sales pitch to the Lithuanian military hasn’t necessarily been straightforward, acknowledges Tomas Malinauskas, Blackswan Space’s chief commercial officer.

He’s also concerned that government funding for the sector isn’t matching the level of innovation coming out of it.

He points out that instead of spending $300m on a US-made drone, the government could invest in a constellation of small satellites.

“Build your own capability for communication and intelligence gathering of enemy countries, rather than a drone that is going to be shot down in the first two hours of a conflict,” argues Mr Malinauskas, also based in Vilnius.

“It would be a big boost for our small space community, but as well, it would be a long-term, sustainable value-add for the future of the Lithuanian military.”

Eglė Elena Šataitė in a pin-striped jacket (Space Hub LT)

Eglė Elena Šataitė leads a government agency supporting space tech

Eglė Elena Šataitė is the head of Space Hub LT, a Vilnius-based agency supporting space companies as part of Lithuania’s government-funded Innovation Agency.

“Our government is, of course, aware of the reality of where we live, and that we have to invest more in security and defence – and we have to admit that space technologies are the ones that are enabling defence technologies,” says Ms Šataitė.

The country’s Minister for Economy and Innovation, Lukas Savickas, says he understands Mr Malinauskas’ concern and is looking at government spending on developing space tech.

“Space technology is one of the highest added-value creating sectors, as it is known for its horizontality; many space-based solutions go in line with biotech, AI, new materials, optics, ICT and other fields of innovation,” says Mr Savickas.

Whatever happens with government funding, the Lithuanian appetite for innovation remains strong.

“We always have to prove to others that we belong on the global stage,” says Dominykas Milasius, co-founder of Delta Biosciences.

“And everything we do is also geopolitical… we have to build up critical value offerings, sciences and other critical technologies, to make our allies understand that it’s probably good to protect Lithuania.”


AI Research

How Is AI Changing The Way Students Learn At Business School?

Artificial intelligence is the skill set that employers increasingly want from future hires. Find out how b-schools are equipping students to use AI

In 2025, AI is rapidly reshaping future careers. According to GMAC’s latest Corporate Recruiters Survey, global employers predict that knowledge of AI tools will be the fastest-growing essential skill for new business hires over the next five years.

Business students are already seeing AI’s value. More than three-quarters of business schools have already integrated AI into their curricula—from essay writing to personal tutoring, career guidance to soft-skill development.

BusinessBecause hears from current business students about how AI is reshaping the business school learning experience.

The benefits and drawbacks of using AI for essay writing

Many business school students are gaining firsthand experience of using AI to assist their academic work. At Rotterdam School of Management, Erasmus University in the Netherlands, students are required to use AI tools when submitting essays, alongside a log of their interactions.

“I was quite surprised when we were explicitly instructed to use AI for an assignment,” says Lara Harfner, who is studying International Business Administration (IBA) at RSM. “I liked the idea. But at the same time, I wondered what we would be graded on, since it was technically the AI generating the essay.”

Lara decided to approach this task as if she were writing the essay herself. She began by prompting the AI to brainstorm around the topic, research areas using academic studies and build an outline, before asking it to write a full draft.

However, during this process Lara encountered several problems. The AI-generated sources were either non-existent or inappropriate, and the tool had to be explicitly instructed on which concepts to focus on. It tended to be too broad, touching on many ideas without thoroughly analyzing any of them.

“In the end, I felt noticeably less connected to the content,” Lara says. “It didn’t feel like I was the actual author, which made me feel less responsible for the essay, even though it was still my name on the assignment.”

Despite the result sounding more polished, Lara thought she could have produced a better essay on her own with minimal AI support. What’s more, the grades she received on the AI-related assignments were below her usual average. “To me, that shows that AI is a great support tool, but it can’t produce high-quality academic work on its own.”

Employers concerned about AI who took part in the Corporate Recruiters Survey echo this finding, stating that they would rather GME graduates use AI as a strategic partner in learning and strategy than as a source for more and faster content.


How business students use AI as a personal tutor

Daniel Carvalho, a Global Online MBA student, also frequently uses AI in his academic assignments, something encouraged by his professors at Porto Business School (PBS).

However, Daniel treats AI as a personal tutor, asking it to explain complex topics in simple terms and deepen the explanation. On top of this, he uses it for brainstorming ideas, summarizing case studies, drafting presentations and exploring different points of view.

“My MBA experience has shown me how AI, when used thoughtfully, can significantly boost productivity and effectiveness,” he says.

Perhaps one of the most interesting ways Daniel uses AI is by turning course material into a personal podcast. “I convert text-based materials into audio using text-to-speech tools, and create podcast-style recaps to review content in a more conversational and engaging way. This allows me to listen to the materials on the go—in the car or at the gym.”

While studying his financial management course, Daniel even built a custom GPT using course materials. Much like a personal tutor, it would ask him questions about the material, validate his understanding, and explain any questions he got wrong. “This helped reinforce my knowledge so effectively that I was able to correctly answer all multiple-choice questions in the final exam,” he explains.

Similarly, at Villanova School of Business in the US, Master of Science in Business Analytics and AI (MSBAi) students are building personalized AI bots with distinct personalities. Students embed reference materials into the bot which then shape how the bot responds to questions. 

“The focus of the program is to apply these analytics and AI skills to improve business results and career outcomes,” says Nathan Coates, MSBAi faculty director at the school. “Employers are increasingly looking for knowledge and skills for leveraging GenAI within business processes. Students in our program learn how AI systems work, what their limitations are, and what they can do better than existing solutions.”


The common limitations of using AI for academic work

Kristiina Esop, who is studying a doctorate in Business Administration and Management at Estonian Business School, agrees that AI in education must always be used critically and with intention. She warns students should always be aware of AI’s limitations.

Kristiina currently uses AI tools to explore different scenarios, synthesize large volumes of information, and detect emerging debates—all of which are essential for her work both academically and professionally.

However, she cautions that AI tools are not 100% accurate. Kristiina once asked ChatGPT to map actors in circular economy governance, and it returned a neat, simplified diagram that ignored important aspects. “That felt like a red flag,” she says. “It reminded me that complexity can’t always be flattened into clean logic. If something feels too easy, too certain—that’s when it is probably time to ask better questions.”

To avoid this problem, Kristiina combines the tools with critical thinking and contextual reading, and connects the findings back to the core questions in her research. “I assess the relevance and depth of the sources carefully,” she says. “AI can widen the lens, but I still need to focus it myself.”

She believes such critical thinking when using AI is essential. “Knowing when to question AI-generated outputs, when to dig deeper, and when to disregard a suggestion entirely is what builds intellectual maturity and decision-making capacity,” she says.

This is also the view of Wharton management professor Ethan Mollick, author of Co-Intelligence: Living and Working with AI and co-director of the Generative AI Lab. He says the best way to work with generative AI is to treat it like a person. “So you’re in this interesting trap,” he says. “Treat it like a person and you’re 90% of the way there. At the same time, you have to remember you are dealing with a software process.”

Hult International Business School, too, expects its students to use AI in a balanced way, encouraging them to think critically about when and how to use it. For example, Rafael Martínez Quiles, a Master’s in Business Analytics student at Hult, uses AI as a second set of eyes to review his thinking. 

“I develop my logic from scratch, then use AI to catch potential issues or suggest improvements,” he explains. “This controlled, feedback-oriented approach strengthens both the final product and my own learning.”

At Hult, students engage with AI to solve complex, real-world challenges as part of the curriculum. “Practical business projects at Hult showed me that AI is only powerful when used with real understanding,” says Rafael. “It doesn’t replace creativity or business acumen, it supports it.”

As vice president of Hult’s AI Society, N-AIble, Rafael has seen this mindset in action. The society’s members explore AI ethically, using it to augment their work, not automate it. “These experiences have made me even more confident and excited about applying AI in the real world,” he says.


The AI learning tools students are using to improve understanding

In other business schools, AI is being used to offer faculty a second pair of hands. Nazarbayev University Graduate School of Business has recently introduced an ‘AI Jockey’. Appearing live on a second screen next to the lecturer’s slides, this AI tool acts as a second teacher, providing real-time clarifications, offering alternate examples, challenging assumptions, and deepening explanations. 

“Students gain access to instant, tailored explanations that complement the lecture, enhancing understanding and engagement,” says Dr Tom Vinaimont, assistant professor of finance at Nazarbayev University Graduate School of Business, who uses the AI Jockey in his teaching.

Rather than replacing the instructor, the AI enhances the learning experience by adding an interactive, AI-driven layer to traditional teaching, transforming learning into a more dynamic, responsive experience.

“The AI Jockey model encourages students to think critically about information, question the validity of AI outputs, and build essential AI literacy. It helps students not only keep pace with technological change but also prepares them to lead in an AI-integrated world by co-creating knowledge in real time,” says Dr Vinaimont.


How AI can be used to encourage critical thinking among students

So, if you’re looking to impress potential employers, learning to work with AI while a student is a good place to start. But simply using AI tools isn’t enough. You must think critically, solve problems creatively and be aware of AI’s limitations. 

Most of all, you must be adaptable. GMAC’s new AI-powered tool, Advancery, helps you find graduate business programs tailored to your career goals, with AI-readiness in mind.

After all, working with AI is a skill in itself. And in 2025, it is a valuable one.




AI Research

The new frontier of medical malpractice

Although the beginnings of modern artificial intelligence (AI) can be traced as far back as 1956, modern generative AI, the most famous example of which is arguably ChatGPT, only began emerging in 2019. For better or worse, the steady rise of generative AI has increasingly impacted the medical field. At this time, AI has begun to advance in a way that creates potential liability…



