
AI Research

A Realistic Direction for Artificial General Intelligence Today


In November 2024, OpenAI’s Sam Altman said that ChatGPT would achieve the holy grail of artificial general intelligence (AGI) in 2025.

AGI is admittedly a fuzzy goal. Most agree that it involves an ability to perform any intellectual task as well as or better than humans. But which humans? An average human? A Nobel laureate? All Nobel laureates put together?

When GPT-5 was released a few weeks ago, it briefly seemed that a version of AGI had arrived on schedule: Altman boasted that “it really feels like talking to an expert in any topic, like a PhD-level expert.” Alas, Altman had again overpromised and underdelivered.

It immediately became clear that GPT-5 is not a PhD-level expert (see, for example, here, here, and here). The fundamental problem remains: large language models (LLMs) like GPT-5 are hobbled by their inherent inability to relate the text they input and output to the real world. No amount of scaling will change that. Tens of thousands of human trainers can patch billions of blunders, but (a) an LLM following instructions it does not understand is not intelligent; (b) the trainers cannot anticipate every future blunder that LLMs will make; and (c) many important decisions should elicit subjective probabilities that depend on the specific details of the situation and are beyond the capabilities of LLMs.

The reality is that we are not going to pay big bucks for LLMs that simply recite facts, like the capital of Chile or the population of Portugal. Boring! We can use Wikipedia to look up facts, and the answers are more trustworthy than those generated by LLMs.

For example, I gave GPT-5 the prompt “How many siblings did George Washington have?” four times, and it gave four different answers: 7, 8, 9, and 12. Here is the 12-sibling response:

GPT-5: George Washington, the first President of the United States, had five full siblings and seven half-siblings, for a total of 12 siblings.

Here’s the breakdown:

Full siblings (same mother and father):

  1. Samuel Washington
  2. John Augustine Washington
  3. Charles Washington
  4. Betty Washington Lewis
  5. Mildred Washington

Half-siblings (from his father Augustine Washington’s first marriage to Jane Butler):

  1. Butler Washington
  2. Lawrence Washington
  3. Augustine Washington Jr.
  4. Jane Washington
  5. Mildred Washington
  6. Richard Washington
  7. John Washington

So, George grew up in a fairly large blended family.

If you want, I can also make a family tree diagram showing all his siblings—it makes it much clearer. Do you want me to do that?

I responded yes and GPT-5 drew me a confusing family tree with ten siblings.

Wikipedia gives the correct answer (nine siblings — five full siblings and four half-siblings) and, even though GPT-5 trained on Wikipedia, something in its training set had led it astray.
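This kind of inconsistency is easy to measure: ask the same question repeatedly and tally the distinct answers. Here is a minimal sketch in Python; the response strings are illustrative stand-ins for actual model output (a real test would call an LLM API for each run), and the extraction heuristic is an assumption, not part of any vendor tooling.

```python
import re
from collections import Counter

def sibling_counts(answers):
    """Extract the total sibling count claimed in each answer.

    Heuristic (an assumption for this sketch): take the last integer
    in each response, since the model tends to end with its total,
    e.g. 'for a total of 12 siblings'.
    """
    counts = []
    for text in answers:
        numbers = re.findall(r"\d+", text)
        counts.append(int(numbers[-1]) if numbers else None)
    return Counter(counts)

# Illustrative stand-ins mirroring the four runs described above.
runs = [
    "George Washington had a total of 7 siblings.",
    "He had three brothers and five sisters: 8 siblings.",
    "Washington had 9 siblings in all.",
    "Five full and seven half-siblings, for a total of 12 siblings.",
]

tally = sibling_counts(runs)
print(tally)           # four runs, four different claimed totals
print(len(tally) > 1)  # True: the answers are mutually inconsistent
```

Four runs, four answers: a tally with more than one key is direct evidence that the model cannot be trusted as a fact lookup, which is the point of the Washington example.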

What should Sam Altman and other developers do now?

Instead of admitting defeat (or continuing to make increasingly embarrassing claims), Altman and his colleagues might heed some timeless advice by declaring victory and getting the hell out. Instead of chasing a goal they cannot achieve, they could redefine AGI so that the goal becomes something that has already been achieved.

I have been thinking about this for several years now. A realistic and easily understood goal is for a computer to be as intelligent as a friend I will call Brock. Everyone knows someone like Brock, so we can all relate to what Brock Intelligence means.

Brock is a prototypical mansplainer. Ask him (or anyone within his earshot) any question and he immediately responds with a long-winded, confident answer — sometimes at 200 words a minute with gusts up to 600. Kudos to those who can listen to half of his answer. Condolences to those who live or work with Brock and have to endure his seemingly endless blather.

Instead of trying to compete with Wikipedia, Altman and his competitors might instead pivot to a focus on Brock Intelligence, something LLMs excel at by being relentlessly cheerful and eager to offer facts-be-damned advice on most any topic.

Brock Intelligence vs. GPT Intelligence

The most substantive difference between Brock and GPT is that GPT likes to organize its output in bullet points. Oddly, Brock prefers a less-organized, more rambling style that allows him to demonstrate his far-reaching intelligence. Brock is the chatty one, while ChatGPT is more like a canned slide show.

They don’t always agree with each other (or with themselves). When I recently asked Brock and GPT-5, “What’s the best state to retire to?,” they both had lengthy, persuasive reasons for their choices. Brock chose Arizona, Texas, and Washington. GPT-5 said that the “Best All-Around States for Retirement” are New Hampshire and Florida. A few days later, GPT-5 chose Florida, Arizona, North Carolina, and Tennessee. A few minutes after that, GPT-5 went with Florida, New Hampshire, Alaska, Wyoming, and New England states (Maine/Vermont/Massachusetts).

Consistency is hardly the point. What most people seek with advice about money, careers, retirement, and romance is a straightforward answer. As Harry Truman famously complained, “Give me a one-handed economist. All my economists say ‘on the one hand…,’ then ‘but on the other….’” People ask for advice precisely because they want someone else to make the decision for them. They are not looking for accuracy or consistency, only confidence.

Sam Altman says that GPT can already be used as an AI buddy that offers advice (and companionship), and it is reported that OpenAI is working on a portable, screen-free “personal life advisor.” Kind of like hanging out with Brock 24/7. I humbly suggest that they name this personal life advisor Brock Says. (Design generated by GPT-5.)





[2506.08171] Worst-Case Symbolic Constraints Analysis and Generalisation with Large Language Models


View a PDF of the paper titled Worst-Case Symbolic Constraints Analysis and Generalisation with Large Language Models, by Daniel Koh and 4 other authors


Abstract: Large language models (LLMs) have demonstrated strong performance on coding tasks such as generation, completion and repair, but their ability to handle complex symbolic reasoning over code remains underexplored. We introduce the task of worst-case symbolic constraints analysis, which requires inferring the symbolic constraints that characterise worst-case program executions; these constraints can be solved to obtain inputs that expose performance bottlenecks or denial-of-service vulnerabilities in software systems. We show that even state-of-the-art LLMs (e.g., GPT-5) struggle when applied directly to this task. To address this challenge, we propose WARP, an innovative neurosymbolic approach that computes worst-case constraints on smaller concrete input sizes using existing program analysis tools, and then leverages LLMs to generalise these constraints to larger input sizes. Concretely, WARP comprises: (1) an incremental strategy for LLM-based worst-case reasoning, (2) a solver-aligned neurosymbolic framework that integrates reinforcement learning with SMT (Satisfiability Modulo Theories) solving, and (3) a curated dataset of symbolic constraints. Experimental results show that WARP consistently improves performance on worst-case constraint reasoning. Leveraging the curated constraint dataset, we use reinforcement learning to fine-tune a model, WARP-1.0-3B, which significantly outperforms size-matched and even larger baselines. These results demonstrate that incremental constraint reasoning enhances LLMs’ ability to handle symbolic reasoning and highlight the potential for deeper integration between neural learning and formal methods in rigorous program analysis.
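WARP’s pipeline is not reproduced here, but the core idea of solving worst-case constraints to obtain adversarial inputs can be shown on a toy example. For insertion sort, the worst-case constraint is that every adjacent pair is out of order (x[i] > x[i+1]); any input satisfying it forces the maximal n(n-1)/2 key comparisons. A minimal sketch in plain Python, under the assumption that a strictly decreasing sequence stands in for a solver-produced model (a real system would hand such constraints to an SMT solver such as Z3):

```python
def worst_case_input(n):
    """Construct an input satisfying the worst-case constraint
    x[i] > x[i+1] for all i: a strictly decreasing sequence."""
    return list(range(n, 0, -1))

def insertion_sort_comparisons(xs):
    """Insertion sort instrumented to count key comparisons."""
    xs = list(xs)
    comparisons = 0
    for i in range(1, len(xs)):
        key = xs[i]
        j = i - 1
        while j >= 0:
            comparisons += 1          # one key comparison: xs[j] vs key
            if xs[j] > key:
                xs[j + 1] = xs[j]     # shift larger element right
                j -= 1
            else:
                break
        xs[j + 1] = key
    return xs, comparisons

n = 8
xs = worst_case_input(n)
sorted_xs, cost = insertion_sort_comparisons(xs)
print(cost == n * (n - 1) // 2)  # True: constraint-derived input hits the bound
```

The constraint is solved symbolically once (here, trivially), yet the resulting concrete input provably triggers the quadratic bottleneck for any n, which is the kind of generalisation across input sizes the paper targets.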

Submission history

From: Daniel Koh
[v1]
Mon, 9 Jun 2025 19:33:30 UTC (1,462 KB)
[v2]
Tue, 16 Sep 2025 10:35:33 UTC (1,871 KB)




‘AI Learning Day’ spotlights smart campus and ecosystem co-creation


When artificial intelligence (AI) can help you retrieve literature, support your research, and even act as a “super assistant”, university education is undergoing a profound transformation.

On 9 September, XJTLU’s Centre for Knowledge and Information (CKI) hosted its third AI Learning Day, themed “AI-Empowered, Ecosystem-Co-created”. The event showcased the latest milestones of the University’s “Education + AI” strategy and offered in-depth discussions on the role of AI in higher education.

In her opening remarks, Professor Qiuling Chao, Vice President of XJTLU, said: “AI offers us an opportunity to rethink education, helping us create a learning environment that is fairer, more efficient and more personalised. I hope today’s event will inspire everyone to explore how AI technologies can be applied in your own practice.”

Professor Qiuling Chao

In his keynote speech, Professor Youmin Xi, Executive President of XJTLU, elaborated on the University’s vision for future universities. He stressed that future universities would evolve into human-AI symbiotic ecosystems, where learning would be centred on project-based co-creation and human-AI collaboration. The role of educators, he noted, would shift from transmitters of knowledge to mentors for both learning and life.

Professor Youmin Xi

At the event, Professor Xi’s digital twin, created by the XJTLU Virtual Engineering Centre in collaboration with the team led by Qilei Sun from the Academy of Artificial Intelligence, delivered Teachers’ Day greetings to all staff.

 

(Teachers’ Day message from President Xi’s digital twin)

 

“Education + AI” in diverse scenarios

This event also highlighted four case studies from different areas of the University. Dr Ling Xia from the Global Cultures and Languages Hub suggested that in the AI era, curricula should undergo de-skilling (assigning repetitive tasks to AI), re-skilling, and up-skilling, thereby enabling students to focus on in-depth learning in critical thinking and research methodologies.

Dr Xiangyun Lu from International Business School Suzhou (IBSS) demonstrated how AI teaching assistants and the University’s Junmou AI platform can offer students a customised and highly interactive learning experience, particularly for those facing challenges such as information overload and language barriers.

Dr Juan Li from the School of Science shared the concept of the “AI amplifier” for research. She explained that the “double amplifier” effect works in two stages: AI first amplifies students’ efficiency by automating tasks like literature searches and coding. These empowered students then become the second amplifier, freeing mentors from routine work so they can focus on high-level strategy. This human-AI partnership allows a small research team to achieve the output of a much larger one.

Jing Wang, Deputy Director of the XJTLU Learning Mall, showed how AI agents are already being used to support scheduling, meeting bookings, news updates and other administrative and learning tasks. She also announced that from this semester, all students would have access to the XIPU AI Agent platform.

Students and teachers are having a discussion at one of the booths

AI education system co-created by staff and students

The event’s AI interactive zone also drew significant attention from students and staff. From the Junmou AI platform to the E-Support chatbot, and from AI-assisted creative design to 3D printing, 10 exhibition booths demonstrated the integration of AI across campus life.

These innovative applications sparked lively discussions and thoughtful reflections among participants. In an interview, Thomas Durham from IBSS noted that, although he had rarely used AI before, the event was highly inspiring and motivated him to explore its use in both professional and personal life. He also shared his perspective on AI’s role in learning, stating: “My expectation for the future of AI in education is that it should help students think critically. My worry is that AI’s convenience and efficiency might make students’ understanding too superficial, since AI does much of the hard work for them. Hopefully, critical thinking will still be preserved.”

Year One student Zifei Xu was particularly inspired by the interdisciplinary collaboration on display at the event, remarking that it offered her a glimpse of a more holistic and future-focused education.

Dr Xin Bi, XJTLU’s Chief Officer of Data and Director of the CKI, noted that, supported by robust digital infrastructure such as the Junmou AI platform, more than 26,000 students and 2,400 staff are already using the University’s AI platforms. XJTLU’s digital transformation is advancing from informatisation and digitisation towards intelligentisation, with AI expected to empower teaching, research and administration, and to help staff and students leap from knowledge to wisdom.

Dr Xin Bi

“Looking ahead, we will continue to advance the deep integration of AI in education, research, administration and services, building a data-driven intelligent operations centre and fostering a sustainable AI learning ecosystem,” said Dr Xin Bi.

 

By Qinru Liu

Edited by Patricia Pieterse

Translated by Xiangyin Han




Vietnam plans to introduce Law on Artificial Intelligence


This information was announced by Minister of Science and Technology Nguyen Manh Hung at a conference organised by the Ho Chi Minh National Academy of Politics in coordination with the Ministry of Public Security, the Ministry of National Defense, and the Central Theoretical Council in Hanoi on September 15.

Minister of Science and Technology Nguyen Manh Hung. Photo: MST

At the event, experts, businesses, and managers shared their ideas in two discussion sessions. The first session focused on AI power, risks and control, analysing both positive and negative aspects, affirming the need to exploit potential and control ethics, safety, security, and social risks.

In the second session, they discussed national AI development strategy, from vision to actions, a specific roadmap to make AI a pillar in Vietnam’s socioeconomic development.

They agreed that for AI to truly become a driving force for development, Vietnam needs a comprehensive strategy: data infrastructure, high-quality human resources, a complete legal framework, and a dynamic innovation ecosystem. More importantly, AI must be oriented to serve people, protect human rights, and strengthen national security in the digital age.

According to Minister Hung, Vietnam issued its first AI Strategy in 2021, but AI is a rapidly changing field, so the strategy needed to be updated.

By the end of this year, the country will have an updated version of the National AI Strategy and the AI Law. This is not only a legal framework, but also a declaration of national vision. AI must become the country’s intellectual infrastructure, serving the people, developing sustainably, and enhancing national competitiveness.

Regarding open AI technology, Hung emphasised that Vietnam is committed to developing and mastering digital technology, including AI, based on open standards and open-source code. This is also Vietnam’s strategy to develop and master Vietnamese technology, implementing the “Make in Vietnam” programme.

Experts, businesses, and managers share their ideas at the conference. Photo: MST

Regarding creating a domestic AI market, he said that without applications, there will be no market. Without a market, Vietnamese AI enterprises will remain small. Therefore, promoting AI applications in enterprises, in state agencies and key areas is the fastest way to develop AI and create Vietnamese AI enterprises.

“The government will spend more on AI, the Natif Technology Innovation Fund of the Ministry of Science and Technology will spend at least 40 per cent to support AI applications, issue vouchers for small and medium-sized enterprises using Vietnamese AI. The domestic market is the cradle to create Vietnamese AI enterprises,” he noted.

In terms of policy and institutions, he added that Vietnam will issue a national AI ethics code that is in line with international standards but suited to Vietnamese practice. At the same time, it will develop an AI Law and an AI strategy with core principles that include risk-based management; transparency and accountability; putting people at the centre; encouraging domestic AI development and AI autonomy; using AI as a driving force for rapid and sustainable growth; and protecting digital sovereignty based on three pillars: data, infrastructure, and AI technology.

According to the MST, Vietnam’s AI development will have to be based on four important pillars: transparent institutions, modern infrastructure, high-quality human resources, and humane culture.

Time for Vietnam to make breakthroughs

Speaking at the workshop, Luong Tam Quang, Minister of Public Security, said that AI is considered one of the key technologies, a factor that can lead to changes in the global order.

Luong Tam Quang, Minister of Public Security. Photo: MST

He added that with the ability to promote economic growth, optimise production, improve healthcare, innovate education, and enhance social governance capacity, AI helps countries save costs, increase efficiency, and expand knowledge. It is also a resource, and a driving force to affirm the country’s position in the digital age.

According to Minister Quang, Vietnam’s potential for AI development is huge: if widely applied, AI is expected to contribute about $79.3 billion, equivalent to 12 per cent of Vietnam’s GDP, by 2030. Under the leadership of the Party, legal regulations for the development of AI have gradually taken shape.

Prof. Dr. Nguyen Xuan Thang, director of the Ho Chi Minh National Academy of Politics, and chairman of the Central Theoretical Council, said that AI is becoming an indispensable part in the process of establishing a new growth model and the operation, governance, and management of the country’s society and economy.

Prof. Dr. Nguyen Xuan Thang, director of the Ho Chi Minh National Academy of Politics, and chairman of the Central Theoretical Council. Photo: MST

However, to turn potential into reality, it requires the support of the entire ecosystem, from national strategies and policies to implementation in businesses, institutes, schools, and the community.

“AI cannot develop sustainably without responsibility, ethics, and a clear humanistic orientation. Technology is the tool, while humans are the goal and the deciding factor, because even if it possesses unlimited power as many people believe, AI is still a product created by humans,” Thang emphasised.





