AI Insights
Deliberating On The Many Definitions Of Artificial General Intelligence

Artificial general intelligence (AGI) does not yet have a universally accepted definition, but we need one ASAP.
In today’s column, I examine an unresolved controversy in the AI field that hasn’t received the attention it rightfully deserves, namely, what constitutes a sensible and universally agreed-upon definition for pinnacle AI, commonly and vaguely referred to as artificial general intelligence (AGI).
This is a vital matter. At some point, we should be ready to agree whether the advent of AGI has been reached. There is also the matter of gauging AI progress and whether we are getting closer to AGI or veering away from AGI. All told, if there isn’t a wholly accepted universal definition, we will be constantly battling over whether pinnacle AI is in our sights and whether it has truly been attained. This is the classic dilemma of apples versus oranges. A person who defines apples as though they are oranges will be forever in a combative mode when trying to discuss whether someone is holding an apple in their hands.
As Socrates once pointed out, the beginning of wisdom is the definition of terms. There needs to be a concerted effort to properly define what AGI means.
Let’s talk about it.
This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).
Heading Toward AGI And ASI
First, some fundamentals are required to set the stage for this discussion.
There is a great deal of research going on to further advance AI. The general goal is to either reach artificial general intelligence (AGI) or maybe even the outstretched possibility of achieving artificial superintelligence (ASI).
Overall, the definition of AGI generally consists of aiming for AI that is considered on par with human intellect and can seemingly match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many, if not all, feasible ways. The idea is that ASI would be able to run circles around humans by outthinking us at every turn. For more details on the nature of conventional AI versus AGI and ASI, see my analysis at the link here.
We have not yet attained the generally envisioned AGI.
In fact, it is unknown whether we will ever reach AGI, or whether doing so might take decades or even centuries. The AGI attainment dates floating around vary wildly and are unsubstantiated by any credible evidence or ironclad logic. ASI is even more beyond the pale when it comes to where we are currently with conventional AI.
Controversy About AGI As Terminology
To the surprise of many in the media and the general public at large, there is no universally accepted standardized definition for what AGI consists of.
This lack of an across-the-board formalized definition for AGI spurs numerous difficulties and problems. For example, AI gurus referring to AGI can be making unspoken assumptions about what they believe AGI to be, and therefore stoke confusion since they aren’t referring to the same thing. Discussions can occur at cross purposes due to each respective expert having their own idiosyncratic definition of what AGI is or ought to be.
An especially disquieting concern is that attaining AGI has become a preeminent directional focus for many in the AI industry, yet this is a bit of a mirage since the AI field does not have a single agreed-upon North Star that represents what AGI is supposed to be:
- “Recent advances in large language models (LLMs) have sparked interest in ‘achieving human-level ‘intelligence’ as a ‘north-star goal’ of the AI field. This goal is often referred to as ‘artificial general intelligence’ (‘AGI’).”
- “Yet rather than helping the field converge around shared goals, AGI discourse has mired it in controversies.”
- “Researchers diverge on what AGI is and on assumptions about goals and risks. Researchers further contest the motivations, incentives, values, and scientific standing of claims about AGI.”
- “Finally, the building blocks of AGI as a concept — intelligence and generality — are contested in their own right.” (source: Borhane et al, “Stop Treating ‘AGI’ as the North-Star Goal of AI Research.” arXiv, February 7, 2025).
The Moving Of The Cheese
In a prior posting, I had noted that some AI luminaries have been opting to define AGI in a manner that suits their specific interests. I refer to this as moving the cheese (see my discussion at the link here). You might be familiar with the movable cheese metaphor — it became part of our cultural lexicon due to a book published in 1998 entitled “Who Moved My Cheese? An Amazing Way To Deal With Change In Your Work And In Your Life”. The book identified that we are all, at times, akin to mice seeking a morsel of cheese in a maze.
OpenAI CEO Sam Altman is especially adept at loosely defining and then redefining AGI. In his personal blog posting entitled “Three Observations” of February 10, 2025, he provided a definition of AGI that said this: “AGI is a weakly defined term, but generally speaking, we mean it to be a system that can tackle increasingly complex problems, at human level, in many fields.”
This AGI definition contains a plethora of ambiguity and drew fierce arguments that it was shaped to accommodate OpenAI’s AI products. For example, by indicating that AGI would be at a human level in “many fields”, this seemed to be an immense watering down of earlier concepts holding that AGI would be versed in all fields. It is a lot easier to devise pinnacle AI that is merely accomplished in many fields than to reach the much taller threshold of doing so in all fields.
Still Messing Around
In a recently reported interview, Altman made these latest remarks about the AGI moniker:
- “I think it’s not a super useful term.”
- “I think the point of all of this is it doesn’t really matter, and it’s just this continuing exponential model capability that we’ll rely on for more and more things.” (source: “Sam Altman now says AGI is ‘not a super useful term’ – and he’s not alone” by Ryan Browne, CNBC, August 11, 2025).
Once again, this type of chatter about the meaning of AGI has sparked renewed controversy. The remarks seem to try to create distance from the AGI definitions that he and others have touted in the last several years.
Why so?
Part of the underlying basis for wanting to distance the AGI phraseology could be laid at the feet of the newly released GPT-5. Leading up to GPT-5, there had been a tremendous uplifting of expectations that we were finally going to have AGI in our hands, ready for immediate use. Though GPT-5 had some interesting advances, it wasn’t even close to any kind of AGI, almost no matter how low a bar one might set for AGI, see my detailed analysis at the link here.
Inspecting AGI Definitions
Let’s go ahead and look at a variety of AGI definitions that have been floating around and are considered potentially viable or at least noteworthy ways to define AGI. I list these AGI definitions so that you can see them collected in one convenient place, which makes for handy analysis and comparison by putting them front and center for inspection.
Before launching into the AGI definitions, you might find it of keen interest that the AI field readily acknowledges that things are in a state of flux on the heady matter. The Association for the Advancement of Artificial Intelligence (AAAI), considered a top-caliber AI non-profit academic professional association, recently convened a special panel to envision the future of AI, and they, too, acknowledged the confounding nature of what AGI might be.
The AAAI futures report that was published in March 2025 made this pointed commentary about AGI (excerpts):
- “AGI is not a formally defined concept, nor is there any agreed test for its achievement.”
- “Some researchers suggest that ‘we’ll know it when we see it’ or that it will emerge naturally from the right set of principles and mechanisms for AI system design.”
- “In discussions, AGI may be referred to as reaching a particular threshold on capabilities and generality. However, others argue that this is ill-defined and that intelligence is better characterized as existing within a continuous, multidimensional space.”
Strawman Definitions Of AGI
Let’s get started on the various AGI definitions by beginning with this strawman:
- “AGI is a computer that is capable of solving human solvable problems, but not necessarily in human-like ways.” (source: Morris et al, “Levels of AGI: Operationalizing Progress on the Path to AGI.” arXiv, November 4, 2023).
Give the definition a contemplative moment.
Here’s one mindful facet. Is this AGI definition suggesting that problems unsolvable by humans are completely beyond the capability of AGI? If so, this would be of great dismay to many, since a vaunted basis for pursuing AGI is that its advent will presumably lead to cures for cancer and many other diseases (aspects that so far have not been solvable by humans).
I trust you can see the challenges associated with devising a universally acceptable, ironclad AGI definition.
In a now classic research paper on the so-called sparks of AGI, the authors provided this definition of AGI:
- “We use AGI to refer to systems that demonstrate broad capabilities of intelligence, including reasoning, planning, and the ability to learn from experience, and with these capabilities at or above human-level.” (source: Bubeck et al, “Sparks of Artificial General Intelligence: Early Experiments with GPT-4.” arXiv, March 22, 2023).
This research paper became a widespread flashpoint both within and beyond the AI community due to claiming that present-day AI of 2023 was showcasing a hint or semblance of AGI. The researchers invoked parlance that AI at the time was revealing sparks of AGI.
Critics and skeptics alike pointed out that the AGI definition was of such a broad and non-specific nature that nearly any AI system could be construed as being ostensibly AGI.
More Definitions Of AGI
In addition to AI researchers defining AGI, many others have done so, too.
The Gartner Group, a longstanding practitioner-oriented think tank on computing in general, provided this definition of AGI in 2024:
- “Artificial General Intelligence (AGI), also known as strong AI, is the (currently hypothetical) intelligence of a machine that can accomplish any intellectual task that a human can perform. AGI is a trait attributed to future autonomous AI systems that can achieve goals in a wide range of real or virtual environments at least as effectively as humans can” (Gartner Group as quoted in Jaffri, A. “Explore Beyond GenAI on the 2024 Hype Cycle for Artificial Intelligence.” Gartner Group, November 11, 2024).
This definition illustrates that some AGI definitions are short and others lengthier; this example is a bit longer than the two AGI definitions noted earlier. There is an espoused belief among some in the AI community that a sufficiently suitable AGI definition would have to be quite lengthy in order to encompass the essence of what AGI is and what AGI is not.
Another noteworthy aspect of the Gartner Group definition of AGI is that the phrase “strong AI” is mentioned in the definition. The initial impetus for the AGI moniker arose partially due to debates within the AI community about strong AI versus weak AI (see my explanation at the link here).
Here is another example of a multi-sentence AGI definition:
- “An Artificial General Intelligence (AGI) system is a computer that is adaptive to the open environment with limited computational resources and that satisfies certain principles. For AGI, problems are not predetermined and not specified ones; otherwise, there is most probably always a special system that performs better than any general system. I keep the part ‘certain principles’ to be blurry, waiting for future discussions and debates on it.” (source: Xu, “What is Meant by AGI? On The Definition of Artificial General Intelligence.” arXiv, April 16, 2024).
This definition reveals another facet of AGI definitions overall, namely the importance of defining all terms used within an AGI definition. In this instance, the researcher states that AGI must satisfy “certain principles”, yet openly acknowledges that those “certain principles” remain undefined. A lack of completeness leaves open a wide interpretation of any postulated AGI definition.
Lots And Lots Of AGI Definitions
Wikipedia has a definition for AGI:
- “Artificial general intelligence (AGI) — sometimes called human‑level intelligence AI—is a type of artificial intelligence capable of performing the full spectrum of cognitively demanding tasks with proficiency comparable to, or surpassing, that of humans” (Wikipedia 2025).
A notable element of this AGI definition and many others is whether AGI is intended to be on par with humans or exceed humans (“comparable to, or surpassing, that of humans”).
There is an ongoing debate in the AI community on this nuanced but crucial consideration. One viewpoint is that the coined artificial superintelligence or ASI encompasses AI that is beyond or above human capabilities, while AGI is solely intended to be AI that meets or is on par with human capabilities.
IBM has provided a definition of AGI:
- “Artificial general intelligence (AGI) is a hypothetical stage in the development of machine learning (ML) in which an artificial intelligence (AI) system can match or exceed the cognitive abilities of human beings across any task. It represents the fundamental, abstract goal of AI development: the artificial replication of human intelligence in a machine or software” (IBM as quoted in Bergmann et al, “What is artificial general intelligence (AGI)?” IBM, September 17, 2024).
An element of special interest in this AGI definition is the reference to machine learning (ML). There are AGI definitions that refer to subdisciplines within the AI field, such as referring to ML or other areas, such as robotics or autonomous systems.
Should an AGI definition explicitly or firmly refer to AI practices or subdisciplines?
The question is often asked since AGI then seemingly becomes tied to specific AI fields of study. The contention is that the definition of AGI should be fully standalone and not rely upon references to AI fields or subfields (which are subject to change, and otherwise seemingly unnecessary to strictly define AGI per se).
OpenAI has also posted a definition of AGI, as contained within the official OpenAI Charter statement:
- “AGI is defined as highly autonomous systems that outperform humans at most economically valuable work.”
This definition brings up an emerging trend associated with AGI definitions. The wording or a similar variation of “at most economically valuable work” is increasingly being used in the latest definitions of AGI. This appears to tie the capabilities of AGI to the notion of economically valuable work.
Critics argue that this is a limiting factor that does not suitably belong in the definition of AGI and perhaps serves a desired purpose rather than acting to fully and openly define AGI.
My Working Definition Of AGI
The working definition of AGI that I have been using is this strawman that I composed when the AGI moniker was initially coming into vogue as a catchphrase:
- “AGI is defined as an AI system that exhibits intelligent behavior of both a narrow and general manner on par with that of humans, in all respects” (source: Eliot, “Figuring out what artificial general intelligence consists of”, Forbes, December 6, 2023).
The reference to intelligent behavior in both a narrow and general manner is an acknowledgment that historically, AGI as a phrase partially arose to supersede the generation of AI that was viewed as being overly narrow and not of a general nature (such as expert systems, knowledge-based systems, rules-based systems).
Another element is that AGI would be on par with the intelligent behavior of humans in all respects. Thus, not being superhuman, and instead, on the same intellectual level as humankind. And doing so in all respects, comprehensively and exhaustively so.
Mindfully Asking What AGI Means
When you see a banner headline proclaiming that AGI is here, or getting near, or maybe eons away, I hope that the first thought you have is to dig into the meaning of AGI as it is being employed in that media proclamation.
Perhaps the declaration refers to apples rather than oranges or has a definition that is sneakily devised to tilt toward one vantage point over another. AGI has regrettably become a catchall. Some believe we should discard the AGI moniker and come up with a new name for pinnacle AI. Others assert that this might merely be a form of trickery to avoid owning up to the harsh fact that we have not yet attained AGI.
For the time being, I would wager that the AGI moniker is going to stick around. It has gotten enough traction that even though it is loosey-goosey, it does have a certain amount of popularized name recognition. If AGI as a designation is going to have long legs, it would be significant to reach a thoughtful agreement on a universally accepted definition.
The famous English novelist Samuel Butler made this pointed remark: “A definition is the enclosing of a wilderness of ideas within a wall of words.” Do your part to help enclose a wilderness of ideas about pinnacle AI into a neatly packed and fully sensible set of words.
Fame and possibly fortune await.
Why California again backed off on sweeping AI regulation
By Khari Johnson, CalMatters

This story was originally published by CalMatters.
After three years of trying to give Californians the right to know when AI is making a consequential decision about their lives and to appeal when things go wrong, Assemblymember Rebecca Bauer-Kahan said she and her supporters will have to wait again, until next year.
The San Ramon Democrat announced Friday that Assembly Bill 1018, which cleared the Assembly and two Senate committees, has been designated a two-year bill, meaning it can return as part of the legislative session next year. That move will allow more time for conversations with Gov. Gavin Newsom and more than 70 opponents. The decision came in the final hours of the California Legislative session, which ends today.
Her bill would require businesses and government agencies to alert individuals when automated systems are used to make important decisions about them, including for apartment leases, school admissions, and, in the workplace, hiring, firing, promotions, and disciplinary actions. The bill also covers decisions made in education, health care, criminal justice, government benefits, financial services, and insurance.
Automated systems that assign people scores or make recommendations can stop Californians from receiving unemployment benefits they’re entitled to, declare job applicants less qualified for arbitrary reasons that have nothing to do with job performance, or deny people health care or a mortgage because of their race.
“This pause reflects our commitment to getting this critical legislation right, not a retreat from our responsibility to protect Californians,” Bauer-Kahan said in a statement shared with CalMatters.
Bauer-Kahan adopted the principles enshrined in the legislation from the Biden administration’s AI Bill of Rights. California has passed more AI regulation than any other state, but has yet to adopt a law like Bauer-Kahan’s, or like other laws requiring disclosure of consequential AI decisions such as the Colorado AI Act or the European Union’s AI Act.
The pause comes at a time when politicians in Washington D.C. continue to oppose AI regulation that they say could stand in the way of progress. Last week, leaders of the nation’s largest tech companies joined President Trump at a White House dinner to further discuss a recent executive order and other initiatives to prevent AI regulation. Earlier this year, Congress tried and failed to pass a moratorium on AI regulation by state governments.
When an automated system makes an error, AB 1018 gives people the right to have that mistake rectified within 60 days. It also reiterates that algorithms must give “full and equal” accommodations to everyone, and cannot discriminate against people based on characteristics like age, race, gender, disability, or immigration status. Developers must carry out impact assessments to, among other things, test for bias embedded in their systems. If an impact assessment is not conducted on an AI system, and that system is used to make consequential decisions about people’s lives, the developer faces fines of up to $25,000 per violation, or legal action by the attorney general, public prosecutors, or the Civil Rights Department.
Amendments made to the bill in recent weeks exempted generative AI models from coverage under the bill, which could prevent it from impacting major AI companies or ongoing generative AI pilot projects carried out by state agencies. The bill was also amended to delay a developer auditing requirement to 2030, and to clarify that the bill intends to address evaluating a person and making predictions or recommendations about them.
An intense legislative fight
Samantha Gordon, a chief program officer at TechEquity, a sponsor of the bill, said she’s seen more lobbyists attempt to kill AB 1018 this week in the California Senate than for any other AI bill ever. She said she thinks AB 1018 had a pathway to passage but the decision was made to pause in order to work with the governor, who ends his second and final term next year.
“There’s a fundamental disagreement about whether or not these tools should face basic scrutiny of testing and informing the public that they’re being used,” Gordon said.
Gordon thinks it’s possible tech companies will use their “unlimited amount of money” to fight the bill next year.
“But it’s clear,” she added, “that Americans want these protections — poll after poll shows Americans want strong laws on AI and that voluntary protections are insufficient.”
AB 1018 faced opposition from industry groups, big tech companies, the state’s largest health care provider, venture capital firms, and the Judicial Council of California, a policymaking body for state courts.
A coalition of hospitals, Kaiser Permanente, and health care software and AI company Epic Systems urged lawmakers to vote no on 1018 because they argued the bill would negatively influence patient care, increase costs, and require developers to contract with third-party auditors to assess compliance by 2030.
A coalition of business groups opposed the bill, citing its generalizing language and concern that compliance could be expensive for businesses and taxpayers. The group TechNet, which seeks to shape policy nationwide and whose members include companies like Apple, Google, Nvidia, and OpenAI, argued in a video ad campaign that AB 1018 would stifle job growth, raise costs, and punish the fastest-growing industries in the state.
Venture capital firm Andreessen Horowitz, whose founder Marc Andreessen supported the re-election of President Trump, opposed the bill over its costs and because it seeks to regulate AI in California and beyond.
A policy leader in the state judiciary said in an alert sent to lawmakers this week, urging a no vote, that the burden of compliance with the bill is so great that the judicial branch is at risk of losing the ability to use pretrial risk assessment tools, like the kind that assign recidivism scores to sex offenders and violent felons. The state Judicial Council, which makes policy for California courts, estimates that passage of AB 1018 would cost the state up to $300 million a year. Similar points were made in a letter to lawmakers last month.
Why backers keep fighting
Exactly how much AB 1018 could cost taxpayers is still a big unknown, due to contradictory information from state government agencies. An analysis by California legislative staff found that if the bill passes it could cost local agencies, state agencies, and the state judicial branch hundreds of millions of dollars. But a California Department of Technology report covered exclusively by CalMatters concluded in May that no state agencies use high risk automated systems, despite historical evidence to the contrary. Bauer-Kahan said last month that she was surprised by the financial impact estimates because CalMatters reporting found that automated decisionmaking system use was not widespread at the state level.
Support for the bill has come from unions who pledged to discuss AI in bargaining agreements, including the California Nurses Association and the Service Employees International Union, and from groups like the Citizen’s Privacy Coalition, Consumer Reports, and the Consumer Federation of California.
Coauthors of AB 1018 include major Democratic proponents of AI regulation in the California Legislature, including Assembly majority leader Cecilia Aguilar-Curry of Davis, author of a bill passed and on the governor’s desk that seeks to stop algorithms from raising prices on consumer goods; Chula Vista Senator Steve Padilla, whose bill to protect kids from companion chatbots awaits the governor’s decision; and San Diego Assemblymember Chris Ward, who previously helped pass a law requiring state agencies to disclose use of high-risk automated systems and this year sought to pass a bill to prevent pricing based on your personal information.
The anti-discrimination language in AB 1018 is important because tech companies and their customers often see themselves as exempt from discrimination law if the discrimination is done by automated systems, said Inioluwa Deborah Raji, an AI researcher at UC Berkeley who has audited algorithms for discrimination and advised government officials in Sacramento and Washington D.C. about how AI can harm people. She questions whether state agencies have the resources to enforce AB 1018, but also likes the disclosure requirement in the bill because “I think people deserve to know, and there’s no way that they can appeal or contest without it.”
“I need to know that an AI system was the reason I wasn’t able to rent this house. Then I can at an individual level appeal and contest. There’s something very valuable about that.”
Raji said she witnessed corporate influence and pushback when she helped shape a report about how California can balance guardrails and innovation for generative AI development, and she sees similar forces at play in the delay of AB 1018.
“It’s disappointing this [AB 1018] isn’t the priority for AI policy folks at this time,” she told CalMatters. “I truly hope the fourth time is the charm.”
A number of other bills with union backing that sought to protect workers from artificial intelligence were also considered by lawmakers this session. For the third year in a row, a bill to require a human driver in autonomous commercial delivery trucks failed to become law. Assembly Bill 1331, which sought to prevent surveillance of workers with AI-powered tools in private spaces like locker or lactation rooms and placed limitations on surveillance in breakrooms, also failed to pass.
But another measure, Senate Bill 7, passed the Legislature and is headed to the governor. It requires employers to disclose plans to use an automated system 30 days prior to doing so and lets workers make annual requests for the data an employer uses for discipline or firing. In recent days, author Senator Jerry McNerney amended the bill to remove the right to appeal decisions made by AI and to eliminate a prohibition against employers making predictions about a worker’s political beliefs, emotional state, or neural data. The California Labor Federation supported similar bills in Massachusetts, Vermont, Connecticut, and Washington.
This article was originally published on CalMatters and was republished under the Creative Commons Attribution-NonCommercial-NoDerivatives license.
AI Insights
Malaysia and Zetrix AI Partner to Build Global Standards for Shariah-Compliant Artificial Intelligence
JOHOR BAHRU, Malaysia, Sept. 13, 2025 /PRNewswire/ — In a significant step towards Islamic values-based artificial intelligence, Zetrix AI Berhad, developer of the world’s first Shariah-aligned Large Language Model (LLM) NurAI, and the Government of Malaysia, through the Prime Minister’s Department (Religious Affairs), today signed a Letter of Intent (LOI) to collaborate on establishing the foremost global framework for Shariah compliance, certification and governance in AI. The ceremony was witnessed by Prime Minister YAB Dato’ Seri Anwar Ibrahim.
Building Trust in NurAI
Front row: Datuk Mohd Jimmy Wong Abdullah, Director of Zetrix AI Berhad (left) and Dato’ Dr. Sirajuddin Suhaimee, Director General of Department of Islamic Development Malaysia (JAKIM) (right), during the signing of the Letter of Intent between Zetrix AI Berhad and the Government of Malaysia, through the Prime Minister’s Department (Religious Affairs). Back row, from the left: The signing was witnessed by YB Tuan Haji Mohd Fared bin Khalid, Chairman of the Johor State Islamic Religious Affairs Committee; YB Dato’ Haji Asman Shah bin Abd. Rahman, Secretary of the Johor State Government; YAB Dato’ Onn Hafiz bin Ghazi, Chief Minister of Johor; YAB Dato’ Seri Anwar bin Ibrahim, Prime Minister of Malaysia; and YB Senator Dato’ Setia Dr. Haji Mohd Na’im bin Haji Mokhtar, Minister in the Prime Minister’s Department (Religious Affairs).
JAKIM, Malaysia’s Department of Islamic Development, is internationally recognised as the gold standard in halal certification, accrediting foreign certification bodies across nearly 50 countries. Malaysia has consistently ranked first in the Global Islamic Economy Indicator, reflecting its leadership not only in halal certification but also in Islamic finance, food and education. By integrating emerging technologies such as AI and blockchain to enhance compliance and monitoring, Malaysia continues to set holistic benchmarks for the global Islamic economy.
NurAI has already established itself as a pioneering Shariah-aligned AI platform. With today’s collaboration, JAKIM, under the Ministry’s leadership, would play a central role in guiding the certification, governance and ethical standards of NurAI, ensuring its alignment with Islamic principles.
Additionally, this milestone underscores the urgent need for AI systems that move beyond secular or foreign-centric worldviews, offering instead a platform rooted in Islamic ethics. It positions Malaysia as a global leader in ethical and Shariah-compliant AI while setting international benchmarks. The initiative also reflects the country’s halal and digitalisation agendas, ensuring AI remains trusted, secure, and representative of Muslim values while serving more than 2 billion people worldwide.
Prime Minister YAB Dato’ Seri Anwar Ibrahim reinforced that national policies should incorporate various inputs, including digitalisation and artificial intelligence, and must always remain grounded in Islamic principles and values.
Areas of Collaboration
Through the LOI, Zetrix AI and the Government, via JAKIM, propose to collaborate in three key areas:
- Shariah Certification and Governance — Developing frameworks, ethical guidelines and certification standards for AI systems rooted in Islamic principles.
- Global Advocacy and Promotion — Positioning Malaysia as the global centre of excellence for Islamic AI and championing the Islamic digital economy projected at USD 5.74 trillion by 2030.
- JAKIM’s Official Channel on NurAI — Creating a trusted platform for Islamic legal rulings, halal certification and verified Shariah guidance, combating misinformation through AI.
Reinforcing Global Halal Tech Leadership
Through this collaboration, NurAI demonstrates how advanced AI can be guided by ethical and faith-based principles to serve global communities. By extending halal leadership into the digital economy, particularly in Islamic finance, education and law, Malaysia positions itself as a key contributor to setting international benchmarks for Shariah-compliant AI.
Inclusive, Secure and Cost-Effective AI
NurAI is developed in Malaysia, supporting Bahasa Melayu, English, Indonesian and Arabic. It complies with national data sovereignty and cybersecurity policies, reducing reliance on foreign tools while ensuring AI knowledge stays local, trusted, and secure.
NurAI is available for download on nur-ai.zetrix.com
About Zetrix AI Berhad
Zetrix AI Berhad (“Zetrix AI”), formerly known as MY E.G. Services Berhad, is leading the way in the deployment of blockchain technology and artificial intelligence in powering the public and private sectors across ASEAN. Headquartered in Malaysia, Zetrix AI started operations in 2000 as a pioneer in the provision of electronic government services and complementary commercial offerings in its home country. Today, it has advanced to the forefront of technology transformation in the broader region, leveraging its Layer-1 blockchain platform Zetrix and embracing the convergence of Web3, AI and robotics to enable optimally-efficient, intelligent and secure cross-border transactions, digital identity interoperability and automation solutions that seamlessly connect peoples, businesses and governments.
SOURCE Zetrix AI Berhad