
Ethics & Policy

Quantum Entanglement, Superposition, ASSC, TSC, AI Ethics, and Subjective Experience



The 28th annual meeting of the Association for the Scientific Study of Consciousness [ASSC] was held in Crete, Greece, from July 6-9, 2025. The Science of Consciousness [TSC] 31st annual meeting was held from July 6-11, 2025, in Barcelona, Spain. The Festival of Consciousness [FoC] was also held in Barcelona, from July 11-13, 2025.

But nobody heard anything.

There was one press release from a company that presented at the TSC, but little else about the events in the media [in English, Greek, or Spanish]. There were plenty of sessions, talks, and so forth, but nothing serious enough to stoke wider interest. If two of the biggest conferences in a supposedly important field are held in the same month and what follows is continuous silence, then either the field is already in irreparable oblivion, or in pre-reality, or the conferences should have been called off, since they have nothing useful to show.

This is a case study in what they all seemed to ignore: if your research work is dim or without promise, you wither away, even while feigning activity. Nobody is interested in anything phenomenological, or in the dumpster of insufferable terms that has been loaded onto consciousness research.

Mental health, safety and alignment

There are parents whose children were victims of deepfake images at school. There are colleges where students cheat with AI. There are families that have suffered ransom trauma from deepfake audio. There are people addicted to AI whose loved ones do not know what to do. There are horrifying experiences some families have had because an AI chatbot nudged a member toward an irreversible decision.

There are mental health issues for which some people have sought answers that the DSM-5-TR did little to explicate. There are side effects of psychiatric medications so devastating that loved ones can do little about them. There are loneliness and emptiness issues that are personal crises in the lives of many, driving them to extremes. There is so much around the brain, the mind, and now AI, where answers are being sought in very credible ways.

Many of these are in reports. So what should a conference within the study of the mind or brain do? At least try to find answers, or postulate in ways that could be meaningful to lives. But what have these conferences done? Nothing meaningful.

Outdated theories

The so-called leading theories of consciousness, Integrated Information Theory [IIT] and Global Workspace Theory [GWT], are 100% worthless. Not 50%, not 80%. 100%. IIT is around 21 years old; GWT is around 37. Neither can explain a single mental state. Just one. From inception to date. Neither can say what the human mind is or how the brain organizes information. If AI is able to exploit human emotions, as with sycophancy, the theories cannot say why the human mind is susceptible.

Yet these theories are in competition in what is called ARC-COGITATE, as if testing useless theories while proclaiming that no one knows how consciousness or the brain works were a license for nonsense. They don't have to stop or be told to do so, but their irrelevance stinks so badly that nobody wants anything to do with them.

The mind [or whatever is responsible for memory, emotions, feelings, and the rest] and consciousness are correlated with the brain. Empirical evidence in neuroscience has shown that neurons and their signals are involved. What theory of neurons and their signals can be used to explain the mind and consciousness? If anything else is proposed, how does it rule out neurons and their signals?

Quantum entanglement and quantum superposition

This is the question any serious consciousness research should be asking. Some people point to quantum entanglement and quantum superposition. How do these explain, or rule out, neurons and their signals for functions and their attributes? The microtubule story added to that is so comical, it reflects how some people think the reputation of science should subsume dirt. No one cares about quantum contests when someone is trying to resolve mood disorders.

AI ethics research is one area where the philosophical side of consciousness might have found relevance. Its practitioners should have been able to propose answers that colleges could use to discourage AI cheating, as well as displays that AI chatbot companies could use to discourage it, as a fair effort. But nothing like that has appeared.

“College is for learning. Learning relays the mind to solve problems. Understanding is a key necessity. Assignments in school contribute to this process. If the mind does not use some of its sequences for learning, it may not be able to solve some problems or understand some complexities.” If a message like that were displayed for certain prompts, such as those for assignments or applications, it would not mean many would stop, but it could contribute inputs that let some find the courage to hold back.

Consciousness and sentience research

Consciousness and sentience research have plummeted into the abyss. The field no longer has the credibility to be called science. Consciousness is not subjective experience if subjectivity is not the only thing that goes with experiences. Any experience [cold, pain, delight, language, and so forth] that can be subjective has to be in attention, or in less than attention. The priority given to pain is attention, not simply subjectivity. Experiences may also go with intent: when to speak or not, whether to get Tylenol for pain, to avoid the source, or to get a jacket against the cold. Subjectivity qualifies experiences, like attention [or less than attention] and intent. Walking can be subjective and be in attention as well.

This means that subjectivity and other qualifiers are present wherever functions are mechanized. It rules out neural correlates of consciousness being located somewhere specific, or the cortex being responsible for consciousness while the cerebellum is not, since qualifiers apply to functions everywhere in the brain. It is like saying no one knows how consciousness works, but we are sure it is only in the cortex. What about the functions of the cerebellum: are they never subjective, never experienced? The brain also does not make predictions. If anyone says it does, ask how, and with what components. This refutes predictive coding, predictive processing, and prediction error. Controlled hallucination is a hollow flimflam.

Entanglement in neuroscience

There is little point in rebutting the dogma of the consciousness people, since all they have left is their sunken ship. In the 2020s, it is no longer relevant to keep asking what it is like to be a bat; how far would the answer help, even if known? A sense [or memory] of being, the property of self, and the knowledge of existence [the likely answers to the bat question] can be explained by the attributes and interactions of electrical and chemical configurators [theorizing that signals are not for neural communication but are the basis of functions].

Human consciousness can be defined, conceptually, as the interaction of the electrical and chemical configurators, in sets — in clusters of neurons — with their features, grading those interactions into functions and experiences.

Simply, for functions to occur, electrical and chemical configurators, in sets, have to interact. The attributes of those interactions, however, are determined by the states of the electrical and chemical configurators at the time of the interactions.

These can be used to explain mental states, addictions, design warning systems for AI chatbot usage, develop AI ethics models, prospect states of consciousness, and so forth. 

Sets of electrical configurators often have momentary states or statuses in which they interact with [or strike at] sets of chemical configurators, which also have momentary states or statuses. If, for example, the electrical configurators in a set split, with some going ahead, it is in that state that they interact initially, before the incoming ones follow, which may or may not interact in the same way or at the same destination [or set of chemical configurators]. If a set of chemical configurators has a larger volume of one constituent than the others, it is in that state, too, that it is interacted with.
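The split-and-strike description above can be rendered as a toy numerical sketch. Everything here is invented for illustration: the class names, the half-split, and the idea that the dominant constituent grades the strike are assumptions imposed on the conceptual model, not a validated neuroscience simulation.

```python
class ChemicalSet:
    """A set of chemical configurators with per-constituent volumes."""

    def __init__(self, volumes: dict):
        self.volumes = volumes

    def dominant(self) -> str:
        # The constituent with the largest volume shapes the interaction,
        # per the article's "more volume of one constituent" state.
        return max(self.volumes, key=self.volumes.get)


def strike(electrical_count: int, chem: ChemicalSet, split: bool = False):
    """Grade one interaction between an electrical set and a chemical set.

    If the electrical set splits, a leading half interacts first and the
    rest follow; both halves are graded by the chemical set's state at
    the moment of interaction.
    """
    leading = electrical_count // 2 if split else electrical_count
    first = (leading, chem.dominant())
    follow = (electrical_count - leading, chem.dominant()) if split else None
    return first, follow
```

Under the article's framing, the returned tuples stand in for how an interaction would be "graded into functions and experiences" by the momentary states of both sets.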


This article was written for WHN by David Stephen, who currently does research in conceptual brain science with a focus on how electrical and chemical signals mechanize the human mind, with implications for mental health, disorders, neurotechnology, consciousness, learning, artificial intelligence, and nurture. He was a visiting scholar in medical entomology at the University of Illinois at Urbana-Champaign, IL. He did computer vision research at Rovira i Virgili University, Tarragona.

As with anything you read on the internet, this article should not be construed as medical advice; please talk to your doctor or primary care provider before changing your wellness routine. WHN does not agree or disagree with any of the materials posted. This article is not intended to provide a medical diagnosis, recommendation, treatment, or endorsement.  

Opinion Disclaimer: The views and opinions expressed in this article are those of the author and do not necessarily reflect the official policy of WHN/A4M. Any content provided by guest authors is of their own opinion and is not intended to malign any religion, ethnic group, club, organization, company, individual, or anyone or anything else. These statements have not been evaluated by the Food and Drug Administration. 






Santa Fe Ethics Board Discusses Revisions to City Ethics Code



In a recent meeting of the Ethics and Campaign Review Board in Santa Fe, members discussed the importance of maintaining ethical standards in local governance and the potential need for revisions to the city’s ethics code. The meeting, held on September 12, 2025, highlighted concerns about the clarity and enforcement of existing ethics rules, particularly regarding harassment and the influence of city councilors on staff operations.

One of the key discussions centered around a motion to dismiss a complaint due to a lack of legal sufficiency, emphasizing the board’s commitment to ensuring that candidates adhere to ethical guidelines during their campaigns. Members expressed the need for candidates to be vigilant about compliance to avoid unnecessary hearings that detract from their campaigning efforts.

The board also explored the possibility of revising the city’s ethics code to address gaps in current regulations. A member raised concerns about the potential for councilors to interfere with city staff, suggesting that clearer rules could help delineate appropriate boundaries. Additionally, the discussion touched on the need for stronger provisions against discrimination, particularly in light of the challenges posed by the current political climate.

The board acknowledged that while the existing ethics code is a solid foundation, there is room for improvement. With upcoming changes in city leadership, members agreed that now is an opportune time to consider these revisions. The conversation underscored the board’s role as an independent body capable of addressing ethical concerns that may not be adequately resolved within the current city structure.

As the board continues to deliberate on these issues, the outcomes of their discussions could significantly impact how ethics are managed in Santa Fe, ensuring that the city remains committed to transparency and accountability in governance.





Universities Bypass Ethics Reviews for AI Synthetic Medical Data



In the rapidly evolving field of medical research, artificial intelligence is reshaping how scientists handle sensitive data, potentially bypassing traditional ethical safeguards. A recent report highlights how several prominent universities are opting out of standard ethics reviews for studies using AI-generated medical data, arguing that such synthetic information poses no risk to real patients. This shift could accelerate innovation but raises questions about oversight in an era where AI tools are becoming indispensable.

Representatives from four major medical research centers, including institutions in the U.S. and Europe, have informed Nature that they’ve waived typical institutional review board (IRB) processes for projects involving these fabricated datasets. The rationale is straightforward: synthetic data, created by algorithms that mimic real patient records without including any identifiable or traceable information, doesn’t involve human subjects in the conventional sense. This allows researchers to train AI models on vast amounts of simulated health records, from imaging scans to genetic profiles, without the delays and paperwork associated with ethics approvals.
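To illustrate the mechanism the paragraph describes, here is a minimal sketch of synthetic tabular data: fit simple per-column statistics on "real" records, then sample new records that mimic them without copying any row. Real medical-data generators use far richer models (GANs, diffusion models, copulas); this toy version only shows why no synthetic row maps back to an individual patient record.

```python
import random

def fit_marginals(records):
    """Collect the observed values for each field across all records."""
    cols = {}
    for rec in records:
        for key, val in rec.items():
            cols.setdefault(key, []).append(val)
    return cols

def sample_synthetic(cols, n, seed=0):
    """Draw each field independently, breaking row-level linkage to
    any real patient."""
    rng = random.Random(seed)
    return [{k: rng.choice(v) for k, v in cols.items()} for _ in range(n)]
```

Because each field is sampled independently, a synthetic record's combination of values need not correspond to any real individual, which is the core of the "no human subjects" argument; whether that argument holds for richer generators trained on identifiable data is exactly what the critics below dispute.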

The Ethical Gray Zone in AI-Driven Research

Critics, however, warn that this approach might erode the foundational principles of medical ethics, established in the wake of historical abuses like the Tuskegee syphilis study. By sidestepping IRBs, which typically scrutinize potential harms, data privacy, and informed consent, institutions could inadvertently open the door to biases embedded in the AI systems generating the data. For instance, if the algorithms are trained on skewed real-world datasets, the synthetic outputs might perpetuate disparities in healthcare outcomes for underrepresented groups.

Proponents counter that the benefits outweigh these concerns, particularly in fields like drug discovery and personalized medicine, where data scarcity has long been a bottleneck. One researcher quoted in the Nature article emphasized that synthetic data enables rapid prototyping of AI diagnostics, potentially speeding up breakthroughs in areas such as cancer detection or rare disease modeling. Universities like those affiliated with the report are already integrating these methods into their workflows, viewing them as a pragmatic response to regulatory hurdles that can stall projects for months.

Implications for Regulatory Frameworks

This trend is not isolated; it’s part of a broader push to adapt ethics guidelines to AI’s capabilities. In the U.S., the Food and Drug Administration has begun exploring how to regulate AI-generated data in clinical trials, while European bodies under the General Data Protection Regulation (GDPR) are debating whether synthetic datasets truly escape privacy rules. Industry insiders note that companies like Google and IBM are investing heavily in synthetic data generation, seeing it as a way to comply with strict data protection laws without compromising on innovation.

Yet, the lack of uniform standards could lead to inconsistencies. Some experts argue for a hybrid model where synthetic data undergoes a lighter review process, focusing on algorithmic transparency rather than patient rights. As one bioethicist told Nature, “We’re trading one set of risks for another—real patient data breaches for the unknown perils of AI hallucinations in medical simulations.”

Balancing Innovation and Accountability

Looking ahead, this development could transform how medical research is conducted globally. With AI tools becoming more sophisticated, the line between real and synthetic data blurs, promising faster iterations in machine learning models for epidemiology or vaccine development. However, without robust guidelines, there’s a risk of public backlash if errors in synthetic data lead to flawed research outcomes.

Institutions are responding by forming internal committees to self-regulate, but calls for international standards are growing. As the Nature report underscores, the key challenge is ensuring that this shortcut doesn’t undermine trust in science. For industry leaders, the message is clear: embrace AI’s potential, but proceed with caution to maintain the integrity of ethical oversight in an increasingly digital research environment.





Canadian AI Ethics Report Withdrawn Over Fabricated Citations



In a striking irony that underscores the perils of artificial intelligence in academic and policy circles, a comprehensive Canadian government report advocating for ethical AI deployment in education has been exposed for citing over 15 fabricated sources. The document, produced after an 18-month effort by Quebec’s Higher Education Council, aimed to guide educators on responsibly integrating AI tools into classrooms. Instead, it has become a cautionary tale about the very technology it sought to regulate.

Experts, including AI researchers and fact-checkers, uncovered the discrepancies when scrutinizing the report’s bibliography. Many of the cited works, purportedly from reputable journals and authors, simply do not exist—hallmarks of AI-generated hallucinations, where language models invent plausible but nonexistent references. This revelation, detailed in a recent piece by Ars Technica, highlights how even well-intentioned initiatives can falter when relying on unverified AI assistance.

The Hallucination Epidemic in Policy Making

The report’s authors, who remain unnamed in public disclosures, likely turned to AI models like ChatGPT or similar tools to expedite research and drafting. According to the Ars Technica analysis, over a dozen citations pointed to phantom studies on topics such as AI’s impact on student equity and data privacy. This isn’t an isolated incident; a study from ScienceDaily warns that AI’s “black box” nature exacerbates ethical lapses, leaving decisions untraceable and potentially harmful.
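One concrete verification step the fact-checkers' approach implies is extracting DOIs from bibliography entries so each can be checked against a registry such as Crossref (its REST API returns a not-found error for nonexistent DOIs). The sketch below covers only the local extraction step; the network lookup is omitted, and entries with no DOI at all, common among fabricated citations, would need a title search instead.

```python
import re

# Standard DOI pattern: the "10." prefix, a registrant code, a slash,
# then a suffix of allowed characters.
DOI_RE = re.compile(r"\b10\.\d{4,9}/[-._;()/:a-zA-Z0-9]+")

def extract_dois(entries):
    """Map each citation string's index to its DOI, or None if absent."""
    return {
        i: (m.group(0) if (m := DOI_RE.search(e)) else None)
        for i, e in enumerate(entries)
    }
```

Entries that yield `None` are not necessarily fabricated, but in a report whose phantom citations lacked any resolvable identifier, they are the ones a human reviewer should chase down first.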

Industry insiders point out that such fabrications erode trust in governmental advisories, especially in education where AI is increasingly used for grading, content creation, and personalized learning. The Quebec council has since pulled the report for revisions, but the damage raises questions about accountability in AI-augmented workflows.

Broader Implications for AI Ethics in Academia

Delving deeper, this scandal aligns with findings from an AAUP report on artificial intelligence in higher education, which emphasizes the need for faculty oversight to mitigate risks like algorithmic bias and privacy breaches. Without stringent verification protocols, AI tools can propagate misinformation at scale, as evidenced by the Canadian case.

Moreover, a qualitative study published in Scientific Reports explores ethical issues in AI for foreign language learning, noting that unchecked use could undermine academic integrity. For policymakers and educators, the takeaway is clear: ethical guidelines must include robust human review to prevent AI from fabricating the evidence base itself.

Calls for Reform and Industry Responses

In response, tech firms are under pressure to enhance transparency in their models. A recent Ars Technica story on a Duke University study reveals that professionals who rely on AI often face reputational stigma, fearing judgment for perceived laziness or inaccuracy. This cultural shift is prompting calls for mandatory disclosure of AI involvement in official documents.

Educational bodies worldwide are now reevaluating their approaches. For instance, a report from the Education Commission of the States discusses state-level responses to AI, advocating balanced innovation with ethical safeguards. As AI permeates education, incidents like the Quebec report serve as a wake-up call, urging a hybrid model where human expertise tempers technological efficiency.

Toward a More Vigilant Future

Ultimately, this episode illustrates the double-edged sword of AI: its power to streamline complex tasks is matched by its potential for undetected errors. Industry leaders argue that investing in AI literacy training for researchers and policymakers could prevent future mishaps. With reports like one from Brussels Signal noting a surge in ethical breaches, the path forward demands not just better tools, but a fundamental rethinking of how we integrate them into critical domains like education policy.



