
AI Research

I’ve been researching generative AI for years, and I’m tired of the consciousness debate. Humans barely understand our own



In 2022, a Google engineer claimed one of the company’s AIs was sentient. He lost his job, but the story stuck. For a brief moment, questions of machine consciousness spilled out of science fiction and into the headlines.

Now, in 2025, the debate has returned. As the release of GPT-5 was overshadowed by public nostalgia for GPT-4o, it was everyday users who began acting as if these systems were more than their makers intended. Into this moment stepped another tech giant: Mustafa Suleyman, CEO of Microsoft AI, declaring loud and clear on his blog that AI is not, and will never be, conscious.

At first glance, it sounds like common sense. Machines obviously aren’t conscious. Why not make that abundantly clear?

Because it isn’t true.

The hard fact is that we do not understand consciousness. Not in humans, not in animals, and not in machines. Theories abound, but the reality is that no one can explain exactly what consciousness is, let alone how to measure it. To state with certainty that AI can never be conscious is not science, and it isn't caution. It's overconfidence, and in this case, a thinly veiled agenda.

If AI can’t ever be conscious, then companies building it have nothing to answer for. No unsettling questions. No ethics debates. No pressure. Surely, it would be nice if we could claim with full confidence that the consciousness question is not relevant to AI. But convenience doesn’t make it true.

What troubles me most is the tone. These pronouncements aren’t just misleading, they’re also infantilizing. As if the public can’t handle complexity. It is as though we must be shielded from ambiguity, spoon-fed tidy certainties instead of being trusted with reality.

Yes, people falling in love with and marrying chatbots or preferring AI companions to human ones is concerning. It unveils a deeper pattern of loneliness and disconnection. This is a social and psychological challenge in its own right, and one we should take seriously. The rise of digital companions reveals how hungry people are for connection.

But the real issue isn’t that some people believe AI might be conscious. The deeper problem is our growing overreliance on technology in general—an addiction that stretches back long before the current debate on machine consciousness. From social media feeds to video games targeting children, technology has a long history of prioritizing engagement and fostering addiction, with no regard for the well-being of its users. 

But technological dysfunction won’t be solved by feeding people false assurances about what machines can or cannot be. If anything, denial only obscures the urgency of confronting our dependence head-on. 

We need to learn to live with uncertainty. Because uncertainty is the reality of this moment. 

Suleyman did add an important caveat: our attention should be on the beings we already know are conscious—humans, animals, the living world. On this point, I couldn't agree more. But look at our record. Billions of animals endure extreme suffering in factory farms on a daily basis. Forests are flattened for profit, and countless species have been driven to extinction. And in the age of AI, the use case most celebrated by investors is replacing human labor.

The pattern is clear. Again and again, we minimize the experiences of those who aren’t like us, those we would benefit from exploiting. We claim animals don’t suffer all that much or simply turn a blind eye. We treat nature as expendable. We routinely devalue people whose exploitation benefits our economic system. Now, we rush to declare that AI will never be conscious. Same playbook, new page.

So no, we shouldn’t blindly trust the builders of AI to tell us what is and isn’t conscious, any more than we should trust meat factories to tell us about the experience of cows. 

The reality is messier. AI may never be conscious. It may surprise us. We cannot say for certain. And we might not be able to tell whether it is conscious even if it does happen. And that is the point.

For a long time, I avoided this topic. Consciousness felt too slippery, too strange. But I’ve come to see that acknowledging our uncertainty is not a weakness. It is a strength.

Because in an era of false certainties, honesty about the unknown may be the most radical truth we have.

The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.





Exclusive | Cyberport may use Chinese GPUs at Hong Kong supercomputing hub to cut reliance on Nvidia



Cyberport may add some graphics processing units (GPUs) made in China to its Artificial Intelligence Supercomputing Centre in Hong Kong, as the government-run incubator seeks to reduce its reliance on Nvidia chips amid worsening China-US relations, its chief executive said.

Cyberport has bought four GPUs made by four different mainland Chinese chipmakers and has been testing them at its AI lab to gauge which ones to adopt in the expanding facilities, Rocky Cheng Chung-ngam said in an interview with the Post on Friday. The park has been weighing the use of Chinese GPUs since it first began installing Nvidia chips last year, he said.

“At that time, China-US relations were already quite strained, so relying solely on [Nvidia] was no longer an option,” Cheng said. “That is why we felt that for any new procurement, we should in any case include some from the mainland.”

Cyberport’s AI supercomputing centre, established in December with its first phase offering 1,300 petaflops of computing power, will deliver another 1,700 petaflops by the end of this year, with all 3,000 petaflops currently relying on Nvidia’s H800 chips, he added.

Cyberport CEO Rocky Cheng Chung-ngam on September 12, 2025. Photo: Jonathan Wong

As all four Chinese solutions offer similar performance, Cyberport would take cost into account when determining which ones to order, said Cheng, who declined to name the suppliers.




Why do AI chatbots use so much energy?



In recent years, ChatGPT has exploded in popularity, with nearly 200 million users pumping a total of over a billion prompts into the app every day. These requests may seem to be answered out of thin air.

But behind the scenes, artificial intelligence (AI) chatbots are using a massive amount of energy. In 2023, data centers, which are used to train and run AI models, were responsible for 4.4% of electricity use in the United States. Across the world, these centers account for around 1.5% of global energy consumption. These numbers are expected to skyrocket, at least doubling by 2030 as demand for AI grows.
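To get a feel for the scale, here is a back-of-envelope estimate of daily chatbot inference energy. The per-prompt figure is an assumption (roughly 0.3 Wh per query is a commonly cited estimate for ChatGPT-scale models; real numbers vary widely with model size and hardware), so treat the result as an order-of-magnitude sketch, not a measurement.

```python
# Back-of-envelope: daily energy for ~1 billion chatbot prompts.
# ASSUMPTION: ~0.3 Wh per prompt (a commonly cited per-query estimate;
# actual figures depend heavily on the model and the hardware serving it).
PROMPTS_PER_DAY = 1_000_000_000   # "over a billion prompts" per day
WH_PER_PROMPT = 0.3               # assumed energy per prompt, in watt-hours

daily_wh = PROMPTS_PER_DAY * WH_PER_PROMPT
daily_mwh = daily_wh / 1_000_000  # Wh -> MWh

# Rough comparison: an average US household uses ~30 kWh (30,000 Wh) per day.
households = daily_wh / 30_000

print(f"{daily_mwh:,.0f} MWh per day")
print(f"roughly the daily use of {households:,.0f} US households")
```

Under these assumptions the prompts alone come to about 300 MWh per day, the daily consumption of roughly ten thousand US homes, and that excludes training, cooling overhead, and idle capacity.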




AI transformation (AX) is spreading throughout South Korea's financial sector




AI transformation (AX) is spreading throughout South Korea's financial sector. Going beyond simple digital transformation (DX), the strategy is to internalize AI across organizations and services, pursuing management efficiency, work automation, and customer-experience innovation at the same time. Financial companies have concluded that it will be difficult to survive unless they raise AI capabilities company-wide in an environment of intensifying regulation and competition. At the core of AX are internal process innovation and customer-service differentiation: by handling formerly human-dependent tasks such as loan review, risk management, investment-product recommendation, and internal counseling support quickly and accurately, AI can cut costs and increase speed.

At customer touchpoints, AI bankers, voice bots, and customized chatbots provide high-quality counseling around the clock, raising satisfaction with financial services. As one industry source put it, "AX is not just a matter of technology but a structural change that determines financial companies' competitiveness and crisis response."

Major domestic banks and financial holding companies have begun introducing in-house AI assistants and private large language models (LLMs), setting up dedicated organizations, and establishing AI governance systems that span all affiliates. By building strategy centers at the group level and rolling out collaboration tools and AI platforms company-wide, they aim to automate internal work and differentiate customer services at the same time.

KB Financial Group has drawn up a "KB AI strategy" and a "KB AI agent roadmap" to introduce more than 250 AI agents across 39 core business areas of the group. It launched the "KB GenAI Portal", a first in the financial sector, so that all executives and employees can use and develop AI without coding, improving productivity and changing how they work.

Shinhan Financial Group is raising productivity with cloud-based collaboration tools (M365 plus Copilot) and deploying AI in the field affiliate by affiliate. Shinhan Bank has placed generative AI bankers at teller windows through its "AI Branch", and in its app "SOL", the "AI Investment Mate" delivers customized information to customers in card-news format.


Hana Bank operates an AI system that draws on its foreign-exchange expertise to predict when corporate FX clients are likely to stop transacting. The system analyzes 253 variables from past transaction data to score the likelihood that a client suspends its business, and automatically alerts branches so they can respond preemptively.

Woori Financial Group established an AI strategy center within the holding company under the leadership of Chairman Lim Jong-ryong and deployed dedicated AI organizations across all affiliates, including its banking, card, securities, and insurance units.

Internet-only banks are seeking to differentiate themselves with conversational search and calculators, forgery and alteration detection, customized recommendations, and an in-house AI culture. With no offline branch network, they are concentrating AI innovation on customer touchpoints such as in-app and mobile counseling.

Kakao Bank has upgraded its AI organization to group level, with more than 500 dedicated staff. K-Bank achieved a 100% recognition rate with its AI-based identification-card recognition solution and has begun publishing academic papers to help set industry standards. Toss Bank uses AI to detect ID forgery and alteration (99.5% accuracy), automate bulk-document optical character recognition (OCR), transcribe counseling calls with speech-to-text (STT), and build its own finance-specific language model.

Insurance companies are improving accuracy, approval rates, and processing speed by applying AI across the entire process of risk assessment, underwriting, and claims payment. Because screening and payment processes in insurance are long and complex, the payoff from AI is especially pronounced.

Samsung Fire & Marine Insurance has cut manual review by more than half by automating the review of cancer-diagnosis and surgical benefits through its "AI medical review". Its machine-learning-based "Long-Term Insurance Sickness Screening System" raised the approval rate from 71% to 90% and has been patented.

Industry experts see this AI transformation as a paradigm shift for the financial industry, not merely the adoption of a technology. Beyond cost reduction and efficiency, AI must create new added value and customer experiences; in their assessment, financial companies will differentiate themselves only when AI and data are tied directly to resolving customer pain points.

At the same time, preparing for ethics, security, and accountability issues is as essential a task as keeping pace with AI's spread. Failure to manage risks such as the influence of large language models on financial decision-making, personal-data protection, and algorithmic bias can lead to a loss of trust. That is why developing the experience accumulated through small experiments into industry standards matters most.

[Reporter Lee Soyeon]


