AI Insights
Designing Artificial Consciousness from Natural Intelligence
Dr. Karl Friston is a distinguished computational psychiatrist, neuroscientist, and pioneer of modern neuroimaging and, now, AI. He is a leading expert on intelligence, natural as well as artificial. I have followed his work as he and his team uncover the principles underlying mind, brain, and behavior based on the laws of physics, probability, causality and neuroscience.
In the interview that follows, we dive into the current artificial intelligence landscape, discussing what existing models can and can’t do, and then peer into the divining glass to see how true artificial consciousness might look and how it may begin to emerge.
Current AI Landscape and Biological Computing
GHB: Broadly speaking, what are the current forms of AI and ML, and how do they fall short when it comes to matching natural intelligence? Do you have any thoughts about neuromorphic chips?
KF: This is a pressing question in current AI research: should we pursue artificial intelligence on high performance (von Neumann) computers or turn to the principles of natural intelligence? This question speaks to a fork in the road ahead. Currently, all the money is on artificial intelligence—licensed by the truly remarkable competence of generative AI and large language models. So why deviate from the well-trodden path?
There are several answers. One is that the artificial path is a dead end—in the sense that current implementations of AI violate the principles of natural intelligence and thereby preclude themselves from realizing their ultimate aspirations: artificial general intelligence, artificial super intelligence, strong AI, et cetera. The violations are manifest in the shortcomings of generative AI, usually summarized as a lack of (i) efficiency, (ii) explainability and (iii) trustworthiness. This triad neatly frames the alternative way forward, namely, natural intelligence.
So, what is natural intelligence? The answer to this question is simpler than one might think: natural intelligence rests upon the laws or principles that apply to the natural kinds that constitute our lived world. These principles are readily available from the statistical physics of self-organization, when the notion of self is defined carefully.
Put simply, the behavior of certain natural kinds—that can be read as agents, like you and me—can always be described as self-evidencing. Technically, this entails minimizing self-information (also known as surprise) or, equivalently, seeking evidence (also known as marginal likelihood) for an agent’s internal model of its world. This surprise is scored mathematically with something called variational free energy.
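In symbols (standard active-inference notation, added here for the reader rather than taken from the interview): for observations $o$, latent states $s$, an approximate posterior $q(s)$, and a generative model $p(o, s)$, the variational free energy $F$ is an upper bound on surprise:

```latex
F[q] = \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o, s)\big]
     = \underbrace{D_{\mathrm{KL}}\big[q(s)\,\|\,p(s \mid o)\big]}_{\ge 0}
       \underbrace{-\ln p(o)}_{\text{surprise}}
\quad\Longrightarrow\quad F[q] \ge -\ln p(o).
```

Minimizing $F$ therefore both reduces surprise (i.e., maximizes model evidence $p(o)$) and drives $q(s)$ toward the true posterior over the causes of observations.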
The model in question is variously referred to as a world or generative model. The notion of a generative model takes center stage in any application of the (free energy) principles necessary to reproduce, simulate or realize the behavior of natural agents. In my world, this application is called active inference.
Note that we have moved beyond pattern recognizers and prediction machines into the realm of agency. This is crucial because it means we are dealing with world models that can generate the consequences of behavior, choices or actions. In turn, this equips agents with the capacity to plan or reason. That is, to select the course of action that minimizes the surprise expected when pursuing that course of action. This entails (i) resolving uncertainty while (ii) avoiding surprising outcomes. The simple imperative— to minimize expected surprise or free energy—has clear implications for the way we might build artifacts with natural intelligence. Perhaps, these are best unpacked in terms of the above triad.
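As a toy illustration (my own sketch under standard active-inference assumptions, not VERSES code), the expected free energy of an action in a discrete world model decomposes into risk (expected divergence from preferred outcomes) plus ambiguity (expected uncertainty of the likelihood mapping), and the agent selects the action that minimizes it:

```python
import numpy as np

# Hypothetical two-state, two-outcome toy world (illustrative names/values).
# A[o, s] = p(o | s): likelihood mapping; B[a] = p(s' | s, a): transitions.
A = np.array([[0.9, 0.1],
              [0.1, 0.9]])                      # fairly precise observations
B = np.stack([np.eye(2),                        # action 0: stay put
              np.array([[0.0, 1.0],
                        [1.0, 0.0]])], axis=0)  # action 1: switch state
log_C = np.log(np.array([0.75, 0.25]))          # prior preference over outcomes
q_s = np.array([0.8, 0.2])                      # current belief over states

def expected_free_energy(a):
    q_s_next = B[a] @ q_s                        # predicted states under action a
    q_o = A @ q_s_next                           # predicted outcomes
    risk = np.sum(q_o * (np.log(q_o) - log_C))   # KL[q(o) || p(o)]: expected cost
    H_A = -np.sum(A * np.log(A), axis=0)         # outcome entropy per state
    ambiguity = q_s_next @ H_A                   # expected ambiguity
    return risk + ambiguity

G = np.array([expected_free_energy(a) for a in range(2)])
best_action = int(np.argmin(G))  # the least-surprising course of action
```

With these numbers, staying put keeps predicted outcomes close to the preferred distribution, so the agent chooses action 0; resolving uncertainty and avoiding surprising outcomes fall out of the same quantity.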
Efficiency. Choosing the path of least surprise is the path of least action or effort. This path is statistically and thermodynamically the most efficient path that could be taken. Therefore, by construction, natural intelligence is efficient. The famous example here is that our brains need only about 20 W—equivalent to a light bulb. In short, the objective function in active inference has efficiency built in—and manifests as uncertainty-resolving, information-seeking behavior that can be neatly described as curiosity with constraints. The constraints are supplied by what the agent would find surprising—i.e., costly, aversive, or uncharacteristic.
A failure to comply with the principle of maximum efficiency (a.k.a., the principle of minimum redundancy) means your AI is using the wrong objective function. This can have severe implications for ML approaches that rely upon reinforcement learning (RL). In RL, the objective function is some arbitrary reward or value function. This leads to all sorts of specious problems, such as the value-function selection problem, the explore-exploit dilemma, and more. A failure to use the right value function will therefore result in inefficiency—in terms of sample sizes, memory requirements, and energy consumption (e.g., large language models trained with big data). Not only are the models oversized but they are unable to select those data that would resolve their uncertainty. So, why can’t large language models select their own training data?
This is because they have no notion of uncertainty and therefore don’t know how to reduce it. This speaks to a key aspect of generative models in active inference: They are probabilistic models, which means that they deal with probabilistic “beliefs”—about states of the world—that quantify uncertainty. This endows them not only with the capacity to be curious but also to report the confidence in their predictions and recommendations.
Explainability. If we start with a generative model—that includes preferred outcomes—we have, by construction, an explainable kind of generative AI. This is because the model generates observable consequences from unobservable causes, which means that the (unobservable or latent) cause of any prediction or recommendation is always at hand. Furthermore, predictions are equipped with confidence intervals that quantify uncertainty about inferred causes or states of the world.
The ability to encode uncertainty is crucial for natural intelligence and distinguishes things like variational autoencoders (VAE) from most ML schemes. Interestingly, the objective function used by VAEs is exactly the same as the variational free energy above. The problem with variational autoencoders is that they have no agency because they do not act upon the world— they just encode what they are given.
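Concretely, the evidence lower bound (ELBO) that a VAE maximizes is just the negative of the variational free energy above—a standard identity, stated here for completeness rather than quoted from the interview:

```latex
\ln p(o) \;\ge\; \underbrace{\mathbb{E}_{q(s)}\big[\ln p(o \mid s)\big]
  - D_{\mathrm{KL}}\big[q(s)\,\|\,p(s)\big]}_{\text{ELBO}} \;=\; -F[q].
```

Maximizing the ELBO and minimizing free energy are the same operation; what the VAE lacks is the agentic part, namely outcomes it can bring about through action.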
Trustworthiness. If predictions and recommendations can be explained and qualified with quantified uncertainty, then they become more trustworthy—or, at least, one can evaluate the epistemic trust they should be afforded. In short, natural intelligence should be able to declare its beliefs, predictions, and intentions and decorate those declarations with a measure of uncertainty or confidence.
There are many other ways we could unpack the distinction between artificial and natural intelligence. Several thought leaders—perhaps a nascent rebel alliance—have been trying to surface a natural or biomimetic approach to AI. Some appeal to brain science, based on the self-evident fact that your brain is an existence proof for natural intelligence. Others focus on implementation; for example, neuromorphic computing as the road to efficiency. An interesting technical issue here is that much of the inefficiency of current AI rests upon a commitment to von Neumann architectures, where most energy is expended in reading and writing from memory. In the future, one might expect to see variants of processing-in-memory (PIM) that elude this unnatural inefficiency (e.g., with memristors, photonics, or possibly quantum computing).
Future AI Development
GHB: What does truly agentic AI look like in the near-term horizon? Is this related to the concept of neuromorphic AI (and what is agentic AI)?
KF: Agentic AI is not necessarily neuromorphic AI. Agentic AI is the kind of intelligence evinced by agents with a model that can generate the consequences of action. The curiosity required to learn agentic world models is beautifully illustrated by our newborn children, who are preoccupied with performing little experiments on the world to see what they can change (e.g., their rattle or mobile) and what they cannot (e.g., their bedtime). The dénouement of their epistemic foraging is a skillful little body, the epitome of a natural autonomous vehicle. In principle, one can simulate or realize agency with or without a neuromorphic implementation; however, the inefficiency of conventional (von Neumann) computing may place upper bounds on the autonomy and agency of edge computing.
VERSES AI and Genius System
GHB: You are the chief scientist for VERSES AI, which has been posting groundbreaking advancements seemingly every week. What is Genius VERSES AI and what makes it different from other systems? For the layperson, what is the engine behind Genius?
KF: As a cognitive computing company, VERSES is committed to the principles of natural intelligence, as showcased in our baby, Genius. The commitment is manifest at every level of implementation and design:
- Implementation eschews the unnatural backpropagation of errors that predominate in ML by using variational message-passing based on local free energy (gradients), as in the brain.
- Design eschews the inefficient top-down approach—implicit in the pruning of large models—and builds models from the ground up, much in the way that our children teach themselves to become autonomous adults. This ensures efficiency and explainability.
- To grow a model efficiently is to grow it under the right core priors. Core priors can be derived from first principles; for example, states of the world change lawfully, where certain quantities are conserved (e.g., object permanence, mathematical invariances or symmetry, et cetera), usually in a scale-free fashion (e.g., leading to deep or hierarchical architectures with separation of temporal scales).
- Authentic agency is assured by equipping generative models with a minimal self-model; namely, “what would happen if I did that?” This endows them with the capacity to plan and reason, much like System 2 thinking (slow, deliberate planning), as opposed to System 1 reasoning (fast, intuitive thinking).
At the end of the day, all this rests upon using the right objective function; namely, the variational free energy that underwrites self-evidencing. That is, building the most efficient model of the world in which the agent finds herself. With the right objective function, one can then reproduce brain-like dynamics as flows on variational free energy gradients, as opposed to costly and inefficient sampling procedures that are currently the industry standard.
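The idea of brain-like dynamics as a flow on free energy gradients can be sketched in a few lines. The following is my own minimal illustration (a one-dimensional linear-Gaussian model, not the Genius implementation): belief updating uses only local, precision-weighted prediction errors, and the gradient flow settles on the exact posterior mean without any sampling.

```python
import numpy as np

# Generative model: p(o|s) = N(o; s, s2_o), p(s) = N(s; mu_p, s2_p).
o, mu_p = 2.0, 0.0           # observation and prior mean
s2_o, s2_p = 1.0, 1.0        # observation and prior variances

mu = 0.0                     # posterior estimate, updated by local gradients
lr = 0.1                     # step size of the gradient flow
for _ in range(500):
    eps_o = (o - mu) / s2_o      # precision-weighted sensory prediction error
    eps_p = (mu - mu_p) / s2_p   # precision-weighted prior prediction error
    mu += lr * (eps_o - eps_p)   # descend the variational free energy gradient

# The flow converges to the precision-weighted average of data and prior,
# i.e., the exact posterior mean for this linear-Gaussian model.
posterior_mean = (o / s2_o + mu_p / s2_p) / (1 / s2_o + 1 / s2_p)
```

Every update uses only quantities available at the node being updated (the message-passing flavor of the scheme), in contrast to backpropagating a global error or drawing costly posterior samples.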
Consciousness and Future Directions
GHB: What might we look forward to for artificial consciousness, and can you comment on the work with Mark Solms?
KF: Commenting on Mark’s work would take another blog (or two). What I can say here is that we have not touched upon two key aspects of natural intelligence that could, in principle, be realized if we take the high (active inference) road. These issues relate to interactive inference or intelligence—that is, inference among agents that are curious about each other. In this setting, one has to think about what it means for a generative model to entertain the distinction between self and other and the requisite mechanisms for this kind of disambiguation and attribution of agency. Mark would say that these mechanisms rest upon the encoding of uncertainty—or its complement, precision—and how this encoding engenders the feelings (i.e., felt-uncertainty) that underwrite selfhood.
How automation is using the latest technology across various sectors
A majority of small businesses are using artificial intelligence and finding out it can save time and money.
Artificial intelligence and automation are often used interchangeably. While the technologies are similar, the concepts are different. Automation is often used to reduce human labor for routine or predictable tasks, while A.I. simulates human intelligence and can eventually act independently.
“Artificial intelligence is a way of making workers more productive, and whether or not that enhanced productivity leads to more jobs or fewer jobs really depends on a field-by-field basis,” said Gregory Allen, senior advisor with the Wadhwani AI Center at the Center for Strategic and International Studies. “Take past examples of automation, such as agriculture: in the 1920s, roughly one out of every three workers in America worked on a farm, and there were about 100 million Americans then. Fast forward to today, and we have a country of more than 300 million people, but less than 1% of Americans do their work on a farm.”
A similar trend happened throughout the manufacturing sector. At the end of 2000, there were more than 17 million manufacturing workers, according to the U.S. Bureau of Labor Statistics and the Federal Reserve Bank of St. Louis. As of June, there are 12.7 million. Research from the University of Chicago found that, while automation had little effect on overall employment, robots did impact the manufacturing sector.
“Tractors made farmers vastly more productive, but that didn’t result in more farming jobs. It just resulted in much more productivity in agriculture,” Allen said.
Researchers are able to analyze the performance of Major League Baseball pitchers by using A.I. algorithms and stadium camera systems. (University of Waterloo / Fox News)
According to Fox News polling, just 3% of voters expressed fear over A.I.’s threat to jobs when asked for their first reaction to the technology, with no set of responses listed. Overall, 43% gave negative reviews while 26% reacted positively.
Robots now are being trained to work alongside humans. Some have been built to help with household chores, address worker shortages in certain sectors and even participate in robotic sporting events.
The most recent data from the International Federation of Robotics counted more than 4 million robots working in factories around the world in 2023. Seventy percent of new robots deployed that year began work alongside humans in Asia. Many of those now incorporate artificial intelligence to enhance productivity.
“We’re seeing a labor shortage actually in many industries, automotive, transportation and so on, where the older generation is going into retirement. The middle generation is not interested in those tasks anymore and the younger generation for sure wants to do other things,” Arnaud Robert with Hexagon Robotics Division told Reuters.
Hexagon is developing a robot called AEON. The humanoid is built to work in live industrial settings and has an A.I.-driven system with spatial intelligence. Its wheels help it move four times faster than humans typically walk. The bot can also go up steps while mapping its surroundings with 22 sensors.
Researchers are able to create 3D models of pitchers, which athletes and trainers could study from multiple angles. (University of Waterloo)
“What you see with technology waves is that there is an adjustment that the economy has to make, but ultimately, it makes our economy more dynamic,” White House A.I. and Crypto Czar David Sacks said. “It increases the wealth of our economy and the size of our economy, and it ultimately improves productivity and wages.”
Driverless cars are also using A.I. to safely hit the road. Waymo uses detailed maps and real-time sensor data to determine its location at all times.
“The more they send these vehicles out with a bunch of sensors that are gathering data as they drive every additional mile, they’re creating more data for that training data set,” Allen said.
Even major league sports are using automation, and in some cases artificial intelligence. Researchers at the University of Waterloo in Canada are using A.I. algorithms and stadium camera systems to analyze Major League Baseball pitcher performance. The Baltimore Orioles jointly funded the project, called PitcherNet, which could help improve form and prevent injuries. Using Hawk-Eye Innovations camera systems and smartphone video, researchers created 3D models of pitchers that athletes and trainers could study from multiple angles. Unlike most video, the models remove blurriness, giving a clearer view of the pitcher’s movements. Researchers are also exploring using the PitcherNet technology in batting and other sports like hockey and basketball.
Overview of a PitcherNet System graphics analyzing a pitcher’s baseball throw. (University of Waterloo)
The same technology is also being used as part of testing for an Automated Ball-Strike System, or ABS. Triple-A minor league teams have been using the so-called robot umpires for the past few seasons. Teams tested both situations: one in which the technology called every pitch and one in which it was used as a challenge system. Major League Baseball also began testing the challenge system in 13 of its spring training parks across Florida and Arizona this February and March.
Each team started a game with two challenges. The batter, pitcher and catcher were the only players who could contest a ball-strike call. Teams lost a challenge if the umpire’s original call was confirmed. The system allowed umpires to keep their jobs, while strike zone calls were slightly more accurate. According to MLB, just 2.6% of calls were challenged throughout spring training games that incorporated ABS. 52.2% of those challenges were overturned. Catchers had the highest success rate at 56%, followed by batters at 50% and pitchers at 41%.
Triple-A announced last summer it would shift to a full challenge system. MLB commissioner Rob Manfred said in June that MLB could incorporate the automated system into its regular season as soon as 2026. The Athletic reports that major league teams would use the same challenge system as in spring training, with human umpires still making the majority of the calls.
Many companies across other sectors agree that machines should not go unsupervised.
“I think that we should always ensure that AI remains under human control,” Microsoft Vice Chair and President Brad Smith said. “One of the first proposals we made early in 2023 was to ensure that A.I. always has an off switch, that it has an emergency brake. Now, that’s the way high-speed trains work. That’s the way the school buses we put our children on work. Let’s ensure that AI works this way as well.”
Artificial intelligence predicts which South American cities will disappear by 2100
The effects of global warming and climate change are being felt around the world. Extreme weather events are expected to become more frequent, from droughts and floods wreaking havoc on communities to blistering heatwaves and bone-chilling cold snaps.
While these will affect localized areas temporarily, one inescapable consequence of the increasing temperatures for coastal communities around the globe is rising sea levels. This phenomenon will have even more far-reaching effects, displacing hundreds of millions of people as coastal communities are inundated by water, some permanently.
These South American cities will disappear
While there is no doubt that sea levels will rise, predicting exactly how much they will in any given location is a tricky business. This is because oceans don’t rise uniformly as more water is added to the total volume.
However, according to models from the Intergovernmental Panel on Climate Change (IPCC), the most optimistic scenario—if we can curb carbon emissions and keep the temperature rise to 1.5C by 2050—is a rise of between 11 and almost 22 inches. The worst-case scenario would be six and a half feet by the end of the century.
Caracol Radio in Colombia asked various artificial intelligence systems which cities in South America would disappear due to rising sea levels within the next 200 years. These are the ones most at risk according to their findings:
- Santos, Brazil
- Maceió, Brazil
- Florianópolis, Brazil
- Mar del Plata, Argentina
- Barranquilla, Colombia
- Lima, Peru
- Cartagena, Colombia
- Paramaribo, Suriname
- Georgetown, Guyana
According to modeling done by the non-profit Climate Central, the last two will be underwater by the end of the century, along with numerous other communities in low-lying coastal areas.
Their simulator only makes forecasts until the year 2100, covering areas along the northeastern coast of South America that include Paramaribo and Georgetown.
UW-Stevens Point launches new undergraduate degree in artificial intelligence
STEVENS POINT – The University of Wisconsin-Stevens Point is launching a new bachelor’s degree in artificial intelligence this fall, blending technical programming instruction with real-world application and ethical training.
The new Bachelor of Science in Artificial Intelligence aims to prepare students for the evolving workforce demands in industries increasingly shaped by AI, including healthcare, manufacturing, and cybersecurity.
“It’s a new undergraduate program in computing, so there’s quite a bit of overlap with our existing computer information systems program,” said Associate Professor Tomi Heimonen. “But then we are offering completely new courses in AI. We’re covering everything from deep learning and neural networks to AI for security and natural language processing.”
The curriculum includes machine learning, cloud environments, AI-driven cybersecurity, and a senior capstone project that connects students with local partners. This fall, one project involves building a chatbot to help a local agency’s customer service team access internal policy information.
“I think the hallmark of all our courses is that it’s not just theory,” Heimonen said. “There’s a pretty heavy application emphasis in all of them.”
Students will also complete coursework in programming, data analytics and mathematics. A core component of the program emphasizes ethics in AI design, including fairness, transparency and human oversight.
“We’re not building terminators,” Heimonen said. “AI are systems that try to imitate human intelligence by taking in data, learning from it and then recommending actions or producing outcomes based on that data.”
The university’s decision to offer the program was influenced by market demand and workforce development trends. The program is backed by state funding and is one of only a few of its kind in the region.
“There’s definitely a gap between the number of trained professionals and what the workforce needs,” Heimonen said. “UWSP saw a chance to be one of the few institutions in the state training students specifically to work with AI straight out of their undergraduate and deliver talents to the needs of Wisconsin employers.”
Graduates will be equipped for roles such as software developers, computer systems analysts, and information systems managers. While “AI developer” may not yet be a common job title, Heimonen said employers increasingly value applicants with AI knowledge and skills.
“There has to be some guardrails,” Heimonen said. “If we’re going to trust AI to make decisions, we need to make sure those decisions are accurate, fair and conveyed in a way that can be explained to the user.”
More information about the program is available at uwsp.edu/programs/degree/artificial-intelligence.