Ethics & Policy
Quantum Entanglement, Superposition, ASSC, TSC, AI Ethics, and Subjective Experience

The 28th annual meeting of the Association for the Scientific Study of Consciousness [ASSC] was held in Crete, Greece, from July 6-9, 2025. The 31st annual meeting of The Science of Consciousness [TSC] was held from July 6-11, 2025, in Barcelona, Spain. The Festival of Consciousness [FoC] was also held in Barcelona, from July 11-13, 2025.
But nobody heard anything.
There was one press release from a company that presented at the TSC, but little else about the events in the media [in English, Greek, or Spanish]. There were lots of sessions, talks, and so forth, but nothing serious enough to stoke wider interest. If two of the biggest conferences in a supposedly important field were held in the same month and what followed was continuous silence, then either the field is already in irreparable oblivion, or in pre-reality, or the conferences should have been called off since they have nothing useful to show.
This is a case study in what they all seemed to ignore: if your research work is dim or without promise, you may wither away, albeit feigning activity. Nobody is interested in anything phenomenological, or in the dumpster of insufferable terms with which consciousness research has been overloaded.
Mental health, safety and alignment
There are parents whose children were victims of deepfake images at school. There are colleges where students are cheating with AI. There are families that have suffered ransom trauma from deepfake audio. There are people struggling with addiction whose loved ones do not know what to do. There are horror experiences some families have had because an AI chatbot nudged a member into an irreversible decision.
There are mental health issues that some people have sought answers to, which the DSM-5-TR did not do much to explicate. There are side effects of psychiatric medications so devastating that loved ones can do little. There are loneliness and emptiness issues that are personal crises in the lives of many, driving them to extremes. There is just so much around the brain, mind, and now AI, where answers are sought in very credible ways.
Many of these are documented in reports. So, what should a conference within the study of the mind or brain do? At least try to answer, or to postulate, in ways that can be meaningful to lives. But what have these conferences done? Nothing meaningful.
Outdated theories
The so-called leading theories of consciousness, Integrated Information Theory [IIT] and Global Workspace Theory [GWT], are 100% worthless. Not 50%, not 80%. 100%. IIT is around 21 years old; GWT is around 37. Neither can explain a single mental state. Just one. From inception to date. Neither can say what the human mind is or how the brain organizes information. If AI is able to play on human emotions, including through sycophancy, the theories cannot say why the human mind is susceptible.
Yet these theories are in competition in what is called ARC-COGITATE, as if scientists can test useless theories and proclaim that no one knows how consciousness or the brain works, as a license for nonsense. They do not have to stop or be told to do so, but their irrelevance stinks so badly that nobody wants anything to do with them.
The mind [or whatever is responsible for memory, emotions, feelings, and so on] and consciousness are correlated with the brain. Empirical evidence in neuroscience has shown that neurons and their signals are involved. What theory of neurons and their signals can be used to explain the mind and consciousness? If anything else is proposed, how does it rule out neurons and their signals?
Quantum entanglement and quantum superposition
This is all that any serious consciousness science research should be asking. Some people point to quantum entanglement and quantum superposition. How does either explain, or rule out, neurons or their signals for functions and their attributes? The microtubule added to that mix is so comical, it reflects how some people think the reputation of science should subsume dirt. No one cares about your quantum contests if someone is trying to resolve mood disorders.
AI ethics research is one area where the philosophical side of consciousness may have found relevance. Its practitioners should have been able to propose answers that colleges could use to discourage AI cheating, as well as displays that AI chatbot companies could use to discourage it, as a fair effort. But nothing like that has appeared.
“College is for learning. Learning relays the mind to solve problems. Understanding is a key necessity. Assignments in school contribute to this process. If the mind does not use some of its sequences for learning, it may not be able to solve some problems or understand some complexities.” If a message like that were displayed for certain prompts, such as those for assignments or applications, it might not stop many, but it could contribute inputs that give some students the courage to hold back. A minimal sketch of such a display gate follows.
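This sketch assumes a plain keyword screen; the cue list, message text, and function name are invented for illustration, not any chatbot vendor's actual interface.

```python
# A minimal sketch of a chatbot front end that prefaces assignment-like
# prompts with a learning reminder. The keyword cues and the message
# text are illustrative assumptions only.

ASSIGNMENT_CUES = ("essay", "homework", "assignment", "write my", "for my class")

REMINDER = (
    "College is for learning. Learning relays the mind to solve problems. "
    "Understanding is a key necessity. Assignments in school contribute to "
    "this process."
)

def preface_for(prompt: str) -> str | None:
    """Return the reminder if the prompt resembles an assignment request."""
    lowered = prompt.lower()
    if any(cue in lowered for cue in ASSIGNMENT_CUES):
        return REMINDER
    return None

if __name__ == "__main__":
    print(preface_for("Write my essay on the causes of World War I"))
```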
Consciousness and sentience research
Consciousness and sentience research have plummeted into the abyss. The field no longer has the credibility to be called science. Consciousness is not subjective experience, because subjectivity is not the only thing that goes with experiences. Any experience [cold, pain, delight, language, and so forth] that can be subjective has to be in attention or less than attention. The priority given to pain is attention, not simply subjectivity, so to speak. Experiences may also go with intent: when to speak or not, to get Tylenol for pain, to avoid the source, or to get a jacket against the cold. Subjectivity qualifies experiences, as attention [or less than attention] and intent do. Walking can be subjective and be in attention as well.
This means that subjectivity and other qualifiers are present wherever functions are mechanized. This rules out neural correlates of consciousness in any one location, or the claim that the cortex is responsible for consciousness and not the cerebellum, since qualifiers apply to functions everywhere in the brain. It is like saying no one knows how consciousness works, but we are sure it is only in the cortex. What about the functions of the cerebellum: are they never subjective, never experienced? The brain also does not make predictions. If anyone says the brain does, just ask how, and with what components. This refutes predictive coding, predictive processing, and prediction error. Controlled hallucination is hollow flimflam.
Entanglement in neuroscience
There is little point in rebutting the dogma of the consciousness people, since all they have left is their sunken ship. In the 2020s, it is no longer relevant to be seeking what it is like to be a bat, because how far would that help, if known? Sense [or memory] of being, property of self, and knowledge of existence [which are likely answers to the bat question] can be explained by the attributes and interactions of electrical and chemical configurators [theorizing that signals are not for neural communication but are the basis of functions].
Human consciousness can be defined, conceptually, as the interaction of electrical and chemical configurators, in sets [in clusters of neurons], with their features grading those interactions into functions and experiences.
Simply, for functions to occur, electrical and chemical configurators, in sets, have to interact. However, the attributes of those interactions are determined by the states of the electrical and chemical configurators at the time of the interactions.
These can be used to explain mental states and addictions, to design warning systems for AI chatbot usage, to develop AI ethics models, to prospect states of consciousness, and so forth.
Sets of electrical configurators often have momentary states or statuses in which they interact with [or strike] sets of chemical configurators, which also have momentary states or statuses. If, for example, the electrical configurators in a set split, with some going ahead, it is in that state that they interact initially, before the incoming ones follow, which may or may not interact the same way or at the same destination [or set of chemical configurators]. If a set [of chemical configurators] has a greater volume of one of its constituents [chemical configurators] than the others, it is in that state, too, that it is interacted with. A toy sketch of this grading appears below.
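As a toy rendering only, the split-ahead and dominant-volume states described above might be expressed as follows; every class name, state value, and the grading rule are hypothetical stand-ins for the author's concepts, not measured neuroscience.

```python
# A toy illustration of the configurator model described above. All names,
# states, and the grading rule are hypothetical, not measured neuroscience.
from dataclasses import dataclass

@dataclass
class ElectricalSet:
    count: int        # electrical configurators in the set
    split_ahead: int  # how many went ahead before the rest arrive

@dataclass
class ChemicalSet:
    volumes: dict[str, float]  # constituent -> momentary volume

def interact(e: ElectricalSet, c: ChemicalSet) -> str:
    """Grade one interaction by the momentary states of both sets."""
    dominant = max(c.volumes, key=c.volumes.get)
    if e.split_ahead < e.count:
        # The initial strike involves only the configurators that went
        # ahead; the incoming ones may interact differently or elsewhere.
        return f"partial strike on a {dominant}-heavy set ({e.split_ahead}/{e.count})"
    return f"full strike on a {dominant}-heavy set"

print(interact(ElectricalSet(count=10, split_ahead=6),
               ChemicalSet(volumes={"constituent_a": 0.7, "constituent_b": 0.2})))
```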
This article was written for WHN by David Stephen, who currently does research in conceptual brain science with a focus on the electrical and chemical signals for how they mechanize the human mind, with implications for mental health, disorders, neurotechnology, consciousness, learning, artificial intelligence, and nurture. He was a visiting scholar in medical entomology at the University of Illinois at Urbana-Champaign, IL. He did computer vision research at Rovira i Virgili University, Tarragona.
As with anything you read on the internet, this article should not be construed as medical advice; please talk to your doctor or primary care provider before changing your wellness routine. WHN does not agree or disagree with any of the materials posted. This article is not intended to provide a medical diagnosis, recommendation, treatment, or endorsement.
Opinion Disclaimer: The views and opinions expressed in this article are those of the author and do not necessarily reflect the official policy of WHN/A4M. Any content provided by guest authors is of their own opinion and is not intended to malign any religion, ethnic group, club, organization, company, individual, or anyone or anything else. These statements have not been evaluated by the Food and Drug Administration.
Ethics & Policy
Heron Financial launches AI ethics committee and training programme

Heron Financial has unveiled an artificial intelligence (AI) ethics committee and student cohort.
The mortgage and protection broker’s newly formed AI ethics committee will give employees the opportunity to offer input on how AI is utilised, ensuring that innovation is balanced with transparency, fairness and accountability.
Heron Financial said the AI training will be a “combination of technical learning with ethical oversight” to ensure the “responsible integration” of the technology into Everglades, its digital platform.
As part of its AI student cohort, members from different teams will be upskilled through an AI development programme.
Matt Coulson (pictured), the firm’s co-founder, said: “I’m proud to be launching our first AI learning and development cohort. We see AI not just as a way to enhance our operations, but as an opportunity to enhance our people.
“By investing in their skills, we’re giving them knowledge they can use in their work and daily lives. Human and technology centricity are part of our core values; it’s only natural we bring the two together.”
Alongside this, Heron Financial’s Everglades platform will be embedded with AI-driven innovations comprising an AI reporting assistant, an AI adviser assistant and a prediction model, which it said uses basic customer details to predict the likelihood of them gaining a mortgage.
The platform aims to support prospective and existing homeowners by simplifying processes, as well as providing a referral route for clients and streamlining the sales-to-mortgage process to help sales teams and housebuilders.
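Heron Financial has not disclosed how its prediction model works. As a hedged illustration only, a tool of the general kind described, a classifier over basic customer details, might resemble the sketch below; every field name and data point is invented.

```python
# Illustrative only: Heron Financial has not published its model. This
# sketch shows the general shape of a "basic customer details in,
# mortgage likelihood out" classifier; all fields and rows are invented.
from sklearn.linear_model import LogisticRegression

# Hypothetical training rows: [income_gbp, deposit_gbp, age, prior_defaults]
X = [
    [28_000, 10_000, 24, 1],
    [55_000, 40_000, 35, 0],
    [42_000, 15_000, 29, 0],
    [19_000, 5_000, 41, 2],
]
y = [0, 1, 1, 0]  # 1 = mortgage obtained

model = LogisticRegression().fit(X, y)
applicant = [[38_000, 20_000, 31, 0]]
print(f"Estimated likelihood: {model.predict_proba(applicant)[0][1]:.0%}")
```

In practice, a production model would be trained on far more data and validated for fairness, in line with the transparency and accountability remit the firm describes for its ethics committee.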
Also in the pipeline is a new generation of AI ‘agents’ within Everglades, which Heron Financial said will further automate and optimise the whole adviser journey.
Coulson added: “We expect these developments to save hours in the adviser servicing journey, enabling our advisers to spend more time with clients and less time on admin. It’s about combining technology and human expertise to raise the bar for the whole industry.”
At the beginning of the year, Coulson discussed the firm’s green retrofit process in an interview with Mortgage Solutions.
Ethics & Policy
Starting my AI and higher education seminar

Greetings from early September, when fall classes have begun. Today I’d like to share information about one of my seminars as part of my long-running practice of being open about my teaching.
It’s in Georgetown University’s Learning, Design, and Technology graduate program. I’m team-teaching it with LDT’s founding director, professor Eddie Maloney, who first designed and led the class last year, along with some great guest presenters. The subject is the impact of AI on higher education.
Every week students dive into one topic, from pedagogy and criticism to changes in politics and economics to using LLMs to craft simulations. During the semester students lead discussions and also teach the class about a particular AI technology. Each student will also tackle Georgetown’s Applied Artificial Intelligence Microcredential.
[Image: Midjourney-created image of AI impacting a university.]
Almost all of the readings are online. We have two books scheduled: B. Mairéad Pratschke, Generative AI and Education and De Kai, Raising AI: An Essential Guide to Parenting Our Future.
Here’s the syllabus:
Introduction to AI in Higher Education:
- Overview of AI, its history, and current applications in academia
- Signing up for tech sessions (45 minutes max, in pairs) and discussion-leading spots
Delving deeper into LLMs
Guest speakers: Molly Chehak and Ella Csarno.
Readings:
- AI Tools in Society & Cognitive Offloading
- MIT study, Your Brain on ChatGPT (overview only, since the full study runs 120 pages)
- Allie K. Miller, Practical Guide to Prompting ChatGPT 5 Differently
Macro Impacts: Economics, Culture, Politics
A broad look at AI’s societal effects—on labor, governance, and policy—with a focus on emerging regulatory frameworks and debates about automation and democracy.
Readings:
Optional: Daron Acemoglu, “The Simple Macroeconomics of AI”
Institutional Responses
This week, we will also examine how colleges and universities are responding structurally to the rise of AI, from policy and pedagogy to strategic planning and public communication.
Reading:
How Colleges and Universities Are Grappling with AI
We consider AI’s influence on teaching and learning through institutional case studies and applied practices, with guest insights on faculty development and student experience.
Guest speaker: Eddie Watson on the AAC&U Institute on AI, Pedagogy, and the Curriculum.
Reading:
Pedagogy, Conversations, and Anthropomorphism
Through simulations and classroom case studies, we examine the pedagogical potential and ethical complications of human-AI interaction, academic integrity, and AI as a “conversational partner.”
Readings:
AI and Ethics/Humanities
This session explores ethical and philosophical questions around AI development and deployment, drawing on work in the humanities, global ethics, and human-centered design.
Guest speaker: De Kai
Readings: selections from Raising AI: An Essential Guide to Parenting Our Future (TBD)
Critiques of AI
We engage critical perspectives on AI, focusing on algorithmic bias, epistemology, and the political economy of data, while challenging dominant narratives of inevitability and neutrality.
Readings:
Human-AI Learning
This week considers how humans and AI collaborate for learning, and what this partnership means for workforce development, education, and a sense of lifelong fulfillment.
Guest Speaker: Dewey Murdick
Readings: TBD
Agentic Possibilities
A close look at emerging AI systems and agents, with attention to autonomy, instructional design, and how educational tools are integrating generative AI features.
Reading:
- Pratschke, Generative AI and Education, chapters 5-6
Future Possibilities
We explore visions for the future of AI in higher education, including utopian and dystopian framings, and ask how ethical leadership and equity might shape what comes next.
Readings:
One week with topic reserved for emerging issues
Topic, materials, and exercises to be determined by instructors and students.
Student final presentations
I’m very excited to be teaching it.
Ethics & Policy
A dynamic dialogue with Southeast Asia to put the OECD AI Principles into action

A policy roundtable in Tokyo and a workshop in Bangkok deepened the dialogue between Southeast Asia and the OECD, fostering collaboration on AI governance across countries, sectors, and policy communities.
A dialogue strengthened by two dynamics
Southeast Asia is rapidly emerging as a vibrant hub for Artificial Intelligence. From Indonesia and Thailand to Singapore and Viet Nam, governments have launched national AI strategies, and ASEAN published a Guide on AI Governance and Ethics to promote consistency of AI frameworks across jurisdictions.
At the same time, OECD’s engagement with Southeast Asia is strengthening. The 2024 Ministerial Council Meeting highlighted the region as a priority for OECD global relations, coinciding with the tenth anniversary of the Southeast Asia Regional Programme (SEARP) and the initiation of the accession processes for Indonesia and Thailand.
Together, these dynamics open new avenues for practical cooperation on trustworthy, safe and secure AI.
In 2025, this momentum translated into two concrete engagement initiatives: a policy roundtable in Tokyo in May and a co-creation workshop for the OECD AI Policy Toolkit in Bangkok in August. Both events shaped regional dialogues on AI governance and helped to bridge the gap between technical expertise and policy design.
Japan actively supported both initiatives, demonstrating a strong commitment to regional AI governance. At the OECD SEARP Regional Forum in Bangkok, Japan expressed hope that AI would become a new pillar of OECD–Southeast Asia cooperation, highlighting the Tokyo Policy Roundtable on AI as the first of many such initiatives. Subsequently, Japan supported the co-creation workshop in Bangkok in August, helping to ensure a regional focus and high-level engagement across Southeast Asia.
The Tokyo roundtable enabled discussions on AI in agriculture, natural disaster management, national strategies and more
On 26 May 2025, the OECD and its Tokyo Office held a regional policy roundtable, bringing together over 80 experts and policymakers from Japan, Korea, Southeast Asia, and the ASEAN Secretariat, with many more joining online. The event highlighted the importance of linking technical expertise with policy to ensure AI delivers benefits responsibly, drawing on the OECD AI Principles and Policy Observatory. Speakers from ASEAN, Thailand, and Singapore shared progress on implementing their national AI strategies.
AI’s potential came into focus through two powerful examples. In agriculture, it can speed up crop breeding and enable precision farming, while in disaster management, digital twins built from satellite and telecoms data can strengthen early warnings and damage assessments. As climate change intensifies agricultural stress and natural hazards, these cases demonstrate how AI can deliver real societal benefits—while underscoring the need for robust governance and regional cooperation, supported by OECD initiatives such as the upcoming AI Policy Toolkit.
The OECD presented activities in international AI governance, including the AI Policy Observatory, the AI Incidents and Hazards Monitor and the Reporting Framework for the G7 Hiroshima Code of Conduct for Developers of Advanced AI systems.
Bangkok co-creation workshop: testing the OECD AI Policy Toolkit
Following the Tokyo roundtable, the OECD, supported by the Foreign Ministries of Japan and Thailand, hosted the first co-creation workshop for the OECD AI Policy Toolkit on 6 August 2025 in Bangkok. Twenty senior policymakers and AI experts from across the region contributed regional perspectives to shape the tool, which will feature a self-assessment module to identify priorities and gaps, alongside a repository of proven policies and practices. The initiative, led by Costa Rica as Chair of the 2025 OECD Ministerial Council Meeting, has already gained strong backing from governments and organisations, including Japan’s Ministry of Internal Affairs and Communications.
Hands-on discussion on key challenges and practical solutions
The co-creation workshop provided a space for participants to work in breakout groups and discuss concrete challenges, explore practical solutions and design effective AI policies in key domains.
Participants identified several pressing challenges for AI governance in Southeast Asia. Designing public funding programmes for AI research and development remains difficult in an environment where technology evolves faster than policy cycles, while the need for large-scale investment continues to grow.
The scarcity of high-quality, local-language data, weak governance frameworks, limited data-sharing mechanisms, and reliance on foreign compute providers further constrain progress, alongside the shortage of locally developed AI models tailored to sectoral needs.
Participants also focused on labour market transformation, digital divides, and the need to advance AI literacy across all levels of society – from citizens to policymakers – to foster awareness of both opportunities and risks.
Participants showcased promising national initiatives, from responsible data-sharing frameworks and investment incentives for data centres and venture capital, to sectoral data platforms and local-language large language models. Countries are also rolling out capacity-building programmes to strengthen AI adoption and oversight, while seeking the right balance between regulation and innovation to foster trustworthy AI.
Across groups, participants agreed on the need to strengthen engagement, foster collaboration, and create enabling conditions for the practical deployment of AI, capacity building, and knowledge sharing.
The instruments discussed during the workshop will feed into the Toolkit’s policy repository, enabling other countries to draw on these experiences and adapt them to their national contexts.
Taking AI governance from global guidance to local practice
The Tokyo roundtable and Bangkok workshop were key milestones in building a regional dialogue on AI governance with Southeast Asian countries. By combining policy frameworks with technical demonstrations, the discussions focused on turning international guidance into practical, locally tailored measures. Southeast Asia is already shaping how global principles translate into action, and with continued collaboration, the OECD AI Policy Toolkit will provide governments in the region—and beyond—with concrete tools to design and implement trustworthy AI.
The authors would like to thank the team members who contributed to the success of these projects: Hugo Lavigne, Luis Aranda, Lucia Russo, Celine Caira, Kasumi Sugimoto, Julia Carro, Nikolas Schmidt and Takako Kitahara.