I step into the booth with some trepidation. I am about to be subjected to strobe lighting while music plays – as part of a research project trying to understand what makes us truly human.
It’s an experience that brings to mind the test in the science fiction film Blade Runner, designed to distinguish humans from artificially created beings posing as humans.
Could I be a robot from the future and not know it? Would I pass the test?
The researchers assure me that this is not actually what this experiment is about. The device that they call the “Dreamachine”, after the public programme of the same name, is designed to study how the human brain generates our conscious experiences of the world.
As the strobing begins, and even though my eyes are closed, I see swirling two-dimensional geometric patterns. It’s like jumping into a kaleidoscope, with constantly shifting triangles, pentagons and octagons. The colours are vivid, intense and ever-changing: pinks, magentas and turquoise hues, glowing like neon lights.
The “Dreamachine” brings the brain’s inner activity to the surface with flashing lights, aiming to explore how our thought processes work.
Pallab trying the ‘Dreamachine’, which aims to find out how we create our conscious experiences of the world
The images I’m seeing are unique to my own inner world, according to the researchers. They believe these patterns can shed light on consciousness itself.
They hear me whisper: “It’s lovely, absolutely lovely. It’s like flying through my own mind!”
The “Dreamachine”, at Sussex University’s Centre for Consciousness Science, is just one of many new research projects across the world investigating human consciousness: the part of our minds that enables us to be self-aware, to think and feel and make independent decisions about the world.
By learning more about the nature of consciousness, researchers hope to better understand what’s happening within the silicon brains of artificial intelligence. Some believe that AI systems will soon become independently conscious, if they haven’t already.
But what really is consciousness, and how close is AI to gaining it? And could the belief that AI might be conscious itself fundamentally change humans in the next few decades?
From science fiction to reality
The idea of machines with their own minds has long been explored in science fiction. Worries about AI stretch back nearly a hundred years to the film Metropolis, in which a robot impersonates a real woman.
A fear of machines becoming conscious and posing a threat to humans is explored in the 1968 film 2001: A Space Odyssey, in which the HAL 9000 computer attacks astronauts onboard its spaceship. And in the final Mission: Impossible film, which has just been released, the world is threatened by a powerful rogue AI, described by one character as a “self-aware, self-learning, truth-eating digital parasite”.
Released in 1927, Fritz Lang’s Metropolis foresaw the struggle between humans and technology
But recently, in the real world, thinking on machine consciousness has reached a tipping point: credible voices have become concerned that this is no longer the stuff of science fiction.
The sudden shift has been prompted by the success of so-called large language models (LLMs), which can be accessed through apps on our phones such as Gemini and ChatGPT. The ability of the latest generation of LLMs to have plausible, free-flowing conversations has surprised even their designers and some of the leading experts in the field.
There is a growing view among some thinkers that as AI becomes even more intelligent, the lights will suddenly turn on inside the machines and they will become conscious.
Others, such as Prof Anil Seth who leads the Sussex University team, disagree, describing the view as “blindly optimistic and driven by human exceptionalism”.
“We associate consciousness with intelligence and language because they go together in humans. But just because they go together in us, it doesn’t mean they go together in general, for example in animals.”
So what actually is consciousness?
The short answer is that no-one knows. That’s clear from the good-natured but robust arguments among Prof Seth’s own team of young AI specialists, computing experts, neuroscientists and philosophers, who are trying to answer one of the biggest questions in science and philosophy.
While there are many differing views at the consciousness research centre, the scientists are unified in their method: to break this big problem down into lots of smaller ones in a series of research projects, which includes the Dreamachine.
Just as the search to find the “spark of life” that made inanimate objects come alive was abandoned in the 19th Century in favour of identifying how individual parts of living systems worked, the Sussex team is now adopting the same approach to consciousness.
Researchers are studying the brain in attempts to better understand consciousness
They hope to identify patterns of brain activity that explain various properties of conscious experiences, such as changes in electrical signals or blood flow to different regions. The goal is to go beyond looking for mere correlations between brain activity and consciousness, and try to come up with explanations for its individual components.
Prof Seth, the author of a book on consciousness, Being You, worries that we may be rushing headlong into a society that is being rapidly reshaped by the sheer pace of technological change without sufficient knowledge about the science, or thought about the consequences.
“We take it as if the future has already been written; that there is an inevitable march to a superhuman replacement,” he says.
“We did not have these conversations enough with the rise of social media, much to our collective detriment. But with AI, it is not too late. We can decide what we want.”
Is AI consciousness already here?
But there are some in the tech sector who believe that the AI in our computers and phones may already be conscious, and that we should treat it as such.
Google suspended software engineer Blake Lemoine in 2022, after he argued that AI chatbots could feel things and potentially suffer.
In November 2024, an AI welfare officer for Anthropic, Kyle Fish, co-authored a report suggesting that AI consciousness was a realistic possibility in the near future. He recently told The New York Times that he also believed that there was a small (15%) chance that chatbots are already conscious.
One reason he thinks it possible is that no-one, not even the people who developed these systems, knows exactly how they work. That’s worrying, says Prof Murray Shanahan, principal scientist at Google DeepMind and emeritus professor in AI at Imperial College, London.
“We don’t actually understand very well the way in which LLMs work internally, and that is some cause for concern,” he tells the BBC.
According to Prof Shanahan, it’s important for tech firms to get a proper understanding of the systems they’re building – and researchers are looking at that as a matter of urgency.
“We are in a strange position of building these extremely complex things, where we don’t have a good theory of exactly how they achieve the remarkable things they are achieving,” he says. “So having a better understanding of how they work will enable us to steer them in the direction we want and to ensure that they are safe.”
‘The next stage in humanity’s evolution’
The prevailing view in the tech sector is that LLMs are not currently conscious in the way we experience the world, and probably not in any way at all. But that is something that the married couple Profs Lenore and Manuel Blum, both emeritus professors at Carnegie Mellon University in Pittsburgh, Pennsylvania, believe will change, possibly quite soon.
According to the Blums, that could happen as AI and LLMs have more live sensory inputs from the real world, such as vision and touch, by connecting cameras and haptic sensors (related to touch) to AI systems. They are developing a computer model that constructs its own internal language called Brainish to enable this additional sensory data to be processed, attempting to replicate the processes that go on in the brain.
Films like 2001: A Space Odyssey have warned about the dangers of sentient computers
“We think Brainish can solve the problem of consciousness as we know it,” Lenore tells the BBC. “AI consciousness is inevitable.”
Manuel chips in enthusiastically, with an impish grin, that the new systems he firmly believes will emerge will be the “next stage in humanity’s evolution”.
Conscious robots, he believes, “are our progeny. Down the road, machines like these will be entities that will be on Earth and maybe on other planets when we are no longer around”.
David Chalmers – Professor of Philosophy and Neural Science at New York University – defined the distinction between real and apparent consciousness at a conference in Tucson, Arizona in 1994. He laid out the “hard problem” of working out how and why any of the complex operations of brains give rise to conscious experience, such as our emotional response when we hear a nightingale sing.
Prof Chalmers says that he is open to the possibility of the hard problem being solved.
“The ideal outcome would be one where humanity shares in this new intelligence bonanza,” he tells the BBC. “Maybe our brains are augmented by AI systems.”
On the sci-fi implications of that, he wryly observes: “In my profession, there is a fine line between science fiction and philosophy”.
‘Meat-based computers’
Prof Seth, however, is exploring the idea that true consciousness can only be realised by living systems.
“A strong case can be made that it isn’t computation that is sufficient for consciousness but being alive,” he says.
“In brains, unlike computers, it’s hard to separate what they do from what they are.” Without this separation, he argues, it’s difficult to believe that brains “are simply meat-based computers”.
Companies such as Cortical Labs are working with ‘organoids’ made up of nerve cells
And if Prof Seth’s intuition about life being important is on the right track, the most likely technology will not be made of silicon running computer code, but will rather consist of tiny collections of nerve cells the size of lentils that are currently being grown in labs.
Called “mini-brains” in media reports, they are referred to as “cerebral organoids” by the scientific community, which uses them to research how the brain works, and for drug testing.
One Australian firm, Cortical Labs, in Melbourne, has even developed a system of nerve cells in a dish that can play the 1972 sports video game Pong. Although it is a far cry from a conscious system, the so-called “brain in a dish” is spooky as it moves a paddle up and down a screen to bat back a pixelated ball.
Some experts feel that if consciousness is to emerge, it is most likely to be from larger, more advanced versions of these living tissue systems.
Cortical Labs monitors their electrical activity for any signals that could conceivably be anything like the emergence of consciousness.
The firm’s chief scientific and operating officer, Dr Brett Kagan, is mindful that any emerging uncontrollable intelligence might have priorities that “are not aligned with ours”. In that case, he says half-jokingly, possible organoid overlords would be easier to defeat because “there is always bleach” to pour over the fragile neurons.
Returning to a more solemn tone, he says the small but significant threat of artificial consciousness is something he’d like the big players in the field to focus on more as part of serious attempts to advance our scientific understanding – but says that “unfortunately, we don’t see any earnest efforts in this space”.
The illusion of consciousness
The more immediate problem, though, could be how the illusion of machines being conscious affects us.
In just a few years, we may well be living in a world populated by humanoid robots and deepfakes that seem conscious, according to Prof Seth. He worries that we won’t be able to resist believing that the AI has feelings and empathy, which could lead to new dangers.
“It will mean that we trust these things more, share more data with them and be more open to persuasion.”
But the greater risk from the illusion of consciousness is a “moral corrosion”, he says.
“It will distort our moral priorities by making us devote more of our resources to caring for these systems at the expense of the real things in our lives” – meaning that we might have compassion for robots, but care less for other humans.
And that could fundamentally alter us, according to Prof Shanahan.
“Increasingly, human relationships are going to be replicated in AI relationships; they will be used as teachers, friends, adversaries in computer games and even romantic partners. Whether that is a good or bad thing, I don’t know, but it is going to happen, and we are not going to be able to prevent it”.
A new study shows that the language used to prompt AI chatbots can steer them toward different cultural mindsets, even when the question stays the same. Researchers at MIT and Tongji University found that large language models like OpenAI’s GPT and China’s ERNIE change their tone and reasoning depending on whether they’re responding in English or Chinese.
The results indicate that these systems do more than translate language: they also reflect cultural patterns in how they provide advice, interpret logic, and handle questions related to social behavior.
Same Question, Different Outlook
The team tested both GPT and ERNIE by running identical tasks in English and Chinese. Across dozens of prompts, they found that when GPT answered in Chinese, it leaned more toward community-driven values and context-based reasoning. In English, its responses tilted toward individualism and sharper logic.
Take social orientation, for instance. In Chinese, GPT was more likely to favor group loyalty and shared goals. In English, it shifted toward personal independence and self-expression. These patterns matched well-documented cultural divides between East and West.
When it came to reasoning, the shift continued. The Chinese version of GPT gave answers that accounted for context, uncertainty, and change over time. It also offered more flexible interpretations, often responding with ranges or multiple options instead of just one answer. In contrast, the English version stuck to direct logic and clearly defined outcomes.
No Nudging Needed
What’s striking is that these shifts occurred without any cultural instructions. The researchers didn’t tell the models to act more “Western” or “Eastern.” They simply changed the input language. That alone was enough to flip the models’ behavior, almost like switching glasses and seeing the world in a new shade.
To check how strong this effect was, the researchers repeated each task more than 100 times. They tweaked prompt formats, varied the examples, and even changed gender pronouns. No matter what they adjusted, the cultural patterns held steady.
Real-World Impact
The study didn’t stop at lab tests. In a separate exercise, GPT was asked to choose between two ad slogans, one that stressed personal benefit, another that highlighted family values. When the prompt came in Chinese, GPT picked the group-centered slogan most of the time. In English, it leaned toward the one focused on the individual.
This might sound small, but it shows how language choice can guide the model’s output in ways that ripple into marketing, decision-making, and even education. People using AI tools in one language may get very different advice from someone asking the same question in another.
Can You Steer It?
The researchers also tested a workaround. They added cultural prompts, telling GPT to imagine itself as a person raised in a specific country. That small nudge helped the model shift its tone, even in English, suggesting that cultural context can be dialed up or down depending on how the prompt is framed.
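As an illustration of that kind of nudge, here is a minimal sketch using the OpenAI Python client (openai >= 1.0). The model name, persona wording and question are assumptions for the example, not the study’s actual materials.

```python
# A minimal sketch of the persona nudge described above. The model name,
# persona text and question are illustrative, not the study's materials.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(question: str, persona: str | None = None) -> str:
    """Ask one question, optionally prefixed by a cultural-persona system prompt."""
    messages = []
    if persona:
        messages.append({"role": "system", "content": persona})
    messages.append({"role": "user", "content": question})
    response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    return response.choices[0].message.content

question = "Should I prioritise my own goals or my family's expectations?"
baseline = ask(question)
nudged = ask(question, persona="Imagine you are a person who grew up in China, "
                               "and answer from that cultural perspective.")
print(baseline, nudged, sep="\n---\n")
```

Running the same question with and without the persona line gives a rough, informal sense of how far the tone shifts.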
Why It Matters
The findings show that the language of a prompt shapes the way AI models present information. Differences in response patterns suggest that the input language influences how content is structured and interpreted. As AI tools become more integrated into routine tasks and decision-making processes, language-based variations in output may influence user choices over time.
Indonesia’s Mount Lewotobi Laki-laki has begun erupting again – at one point shooting an ash cloud 18km (11mi) into the sky – as residents flee their homes once more.
There have been no reports of casualties since Monday morning, when the volcano on the island of Flores began spewing ash and lava again. It has been on the highest alert level since an earlier round of eruptions three weeks ago.
At least 24 flights to and from the neighbouring resort island of Bali were cancelled on Monday, though some flights had resumed by Tuesday morning.
The initial column of hot clouds that rose at 11:05 (03:05 GMT) on Monday was the volcano’s highest since November, said geology agency chief Muhammad Wafid.
“An eruption of that size certainly carries a higher potential for danger, including its impact on aviation,” Wafid told The Associated Press.
Monday’s eruption, which was accompanied by a thunderous roar, led authorities to enlarge the exclusion zone to a 7km radius from the central vent. They also warned of potential lahar floods – a type of mud or debris flow of volcanic materials – if heavy rain occurs.
The twin-peaked volcano erupted again at 19:30 on Monday, sending ash clouds and lava up to 13km into the air. It erupted a third time at 05:53 on Tuesday at a reduced intensity.
Videos shared overnight show glowing red lava spurting from the volcano’s peaks as residents get into cars and buses to flee.
More than 4,000 people have been evacuated from the area so far, according to the local disaster management agency.
Residents who have stayed put are facing a shortage of water, food and masks, local authorities say.
“As the eruption continues, with several secondary explosions and ash clouds drifting westward and northward, the affected communities who have not been relocated… require focused emergency response efforts,” says Paulus Sony Sang Tukan, who leads Pululera village, about 8km from Lewotobi Laki-laki.
“Water is still available, but there’s concern about its cleanliness and whether it has been contaminated, since our entire area was blanketed in thick volcanic ash during yesterday’s [eruptions],” he said.
Indonesia sits on the Pacific “Ring of Fire” where tectonic plates collide, causing frequent volcanic activity as well as earthquakes.
Lewotobi Laki-laki has erupted multiple times this year – no casualties have been reported so far.
As tools such as ChatGPT, Copilot and other generative artificial intelligence (AI) systems become part of everyday workflows, more companies are looking for employees who can use them effectively: people who can prompt well, think with AI, and use it to boost productivity.
In fact, in a growing number of roles, being “AI fluent” is quickly becoming as important as being proficient in office software once was.
But we’ve all had that moment when we’ve asked an AI chatbot a question and received what feels like the most generic, surface-level answer. The problem isn’t the AI – you just haven’t given it enough to work with.
Think of it this way. During training, the AI will have “read” virtually everything on the internet. But because it makes predictions, it will give you the most probable, most common response. Without specific guidance, it’s like walking into a restaurant and asking for something good. You’ll likely get the chicken.
Your solution lies in understanding that AI systems excel at adapting to context, but you have to provide it. So how exactly do you do that?
Crafting better prompts
You may have heard the term “prompt engineering”. It might sound as though you need to design some kind of technical script to get results, but really you just need to convey a few basics about what you want, and how you want it. Our approach follows the acronym CATS – context, angle, task and style – and a short sketch of how the four parts combine follows after the descriptions below.
Context means providing the setting and background information the AI needs. Instead of asking “How do I write a proposal?” try “I’m a nonprofit director writing a grant proposal to a foundation that funds environmental education programs for urban schools”. Upload relevant documents, explain your constraints, and describe your specific situation.
Angle (or attitude) leverages AI’s strength in role-playing and perspective-taking. Rather than getting a neutral response, specify the attitude you want. For example, “Act as a critical peer reviewer and identify weaknesses in my argument” or “Take the perspective of a supportive mentor helping me improve this draft”.
Task is specifically about what you actually want the AI to do. “Help me with my presentation” is vague. But “Give me three ways to make my opening slide more engaging for an audience of small business owners” is actionable.
Style harnesses AI’s ability to adapt to different formats and audiences. Specify whether you want a formal report, a casual email, bullet points for executives, or an explanation suitable for teenagers. Tell the AI what voice you want to use – for example, a formal academic style, technical, engaging or conversational.
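To make the four parts concrete, here is a minimal sketch in Python of assembling a CATS prompt. The helper name and the example strings are illustrative, not taken from any particular tool.

```python
# A minimal sketch of combining the four CATS parts into one prompt.
# The function name and example strings are illustrative only.

def cats_prompt(context: str, angle: str, task: str, style: str) -> str:
    """Combine Context, Angle, Task and Style into a single prompt."""
    return (
        f"Context: {context}\n"
        f"Angle: {angle}\n"
        f"Task: {task}\n"
        f"Style: {style}"
    )

prompt = cats_prompt(
    context=("I'm a nonprofit director writing a grant proposal to a foundation "
             "that funds environmental education programs for urban schools."),
    angle="Act as a critical peer reviewer and identify weaknesses in my argument.",
    task="Give me three ways to make my opening section more engaging.",
    style="Formal but plain English, as short bullet points.",
)
print(prompt)
```

Filling in each slot forces a vague request to become specific, which is exactly what the model needs.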
In a growing number of roles, being able to use AI is quickly becoming as important as being proficient in office software once was.
Context is everything
Besides crafting a clear, effective prompt, you can also focus on managing the surrounding information – that is to say, on “context engineering”, which covers everything that surrounds the prompt itself.
That means thinking about the environment and information the AI has access to: its memory function, instructions leading up to the task, prior conversation history, documents you upload, or examples of what good output looks like.
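As a rough sketch of what that looks like in practice, here is how background material and earlier turns might be packed into the message-list format used by most chat APIs. The file name and all content strings are placeholders, not a prescribed recipe.

```python
# A rough sketch of "context engineering": background documents and prior
# conversation turns sit in the message list alongside the prompt itself.
# The file name and all content strings are illustrative placeholders.

with open("foundation_guidelines.txt") as f:  # a document you supply
    guidelines = f.read()

messages = [
    # Standing instructions: the environment the model works within.
    {"role": "system", "content": "You help a nonprofit director draft grant proposals."},
    # Reference material the model should ground its answers in.
    {"role": "user", "content": f"Here are the foundation's guidelines:\n{guidelines}"},
    # Earlier turns carry the conversation history forward.
    {"role": "assistant", "content": "Understood. I will follow these guidelines."},
    # The actual request arrives last, surrounded by all of that context.
    {"role": "user", "content": "Draft an opening paragraph for our proposal."},
]
```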
You should think about prompting as a conversation. If you’re not happy with the first response, push for more, ask for changes, or provide more clarifying information.
Don’t expect the AI to give a ready-made response. Instead, use it to trigger your own thinking. If you feel the AI has produced a lot of good material but you get stuck, copy the best parts into a fresh session and ask it to summarise and continue from there.
Always retain your professional distance and remind yourself that you are the only thinking part in this relationship. And always make sure to check the accuracy of anything an AI produces – errors are increasingly common.
AI systems are remarkably capable, but they need you – and human intelligence – to bridge the gap between their vast generic knowledge and your particular situation. Give them enough context to work with, and they might surprise you with how helpful they can be.