AI Research
Critical thinking vital in age of AI, Large Language Models: Zoho's Director of AI Research

New Delhi, Sep 9 (PTI) The human edge in critical thinking and reasoning will continue to be vital in the age of AI and large language models, Ramprakash Ramamoorthy, Director of AI Research at homegrown technology company Zoho Corp, has said, emphasising that human interventions will remain essential despite the massive productivity push from new technologies.
According to him, skills such as prompt engineering will gain traction, while new and upcoming frameworks will expand opportunities in areas such as legal and privacy tech.
“Job roles will evolve, and it is important for everybody to make use of AI, the same way they did when internet came in,” he told PTI.
That said, human strengths in reasoning and critical thinking will remain essential, even in the age of AI and LLMs.
Stressing the need to adopt AI, he said its edge can be understood in the same way as that of other tools, citing the example of two individuals vying for the same project, one improving results with Google search and the other working without search tools.
“It is the same with LLM today. Try and use it. See where it could help, and where it could not. It is more important to build that… In your line of business where could an LLM be effective, where could you do it yourself. So having that exposure would really help you stay afloat,” he said.
In the age of AI and LLMs, hardware skills too will be in demand as companies grapple with increasing complexities around hardware, from acquiring GPUs to managing scale and data centre requirements, he said.
Zoho recently announced its proprietary large language model (Zia LLM) designed for enterprises using its suite of products, a move that reflected the bold ambitions of Indian firms to build and innovate on their own AI stacks in the global tech race.
Zoho has also deepened its AI portfolio with prebuilt Agents and a custom Agent Builder.
“ChatGPTs and Geminis of the world are built with the consumers in mind. Zia LLM is built with enterprises in mind,” he said.
AI Research
What is artificial intelligence’s greatest risk? – Opinion

Risk dominates current discussions on AI governance. This July, Geoffrey Hinton, a Nobel and Turing laureate, addressed the World Artificial Intelligence Conference in Shanghai. His speech bore the title he has used almost exclusively since leaving Google in 2023: “Will Digital Intelligence Replace Biological Intelligence?” He stressed, once again, that AI might soon surpass humanity and threaten our survival.
Scientists and policymakers from China, the United States, European countries and elsewhere nodded gravely in response. Yet this apparent consensus masks a profound paradox in AI governance. Conference after conference, the world’s brightest minds have identified shared risks. They call for cooperation, sign declarations, then watch the world return to fierce competition the moment the panels end.
This paradox troubled me for years. I trust science, but if the threat is truly existential, why can’t even survival unite humanity? Only recently did I grasp a disturbing possibility: these risk warnings fail to foster international cooperation because defining AI risk has itself become a new arena for international competition.
Traditionally, technology governance follows a clear causal chain: identify specific risks, then develop governance solutions. Nuclear weapons pose stark, objective dangers: blast yield, radiation, fallout. Climate change offers measurable indicators and an increasingly solid scientific consensus. AI, by contrast, is a blank canvas. No one can definitively convince everyone whether the greatest risk is mass unemployment, algorithmic discrimination, superintelligent takeover, or something entirely different that we have not even heard of.
This uncertainty transforms AI risk assessment from scientific inquiry into strategic gamesmanship. The US emphasizes “existential risks” from “frontier models”, terminology that spotlights Silicon Valley’s advanced systems.
This framework positions American tech giants as both sources of danger and essential partners in control. Europe focuses on “ethics” and “trustworthy AI”, extending its regulatory expertise from data protection into artificial intelligence. China advocates that “AI safety is a global public good”, arguing that risk governance should not be monopolized by a few nations but serve humanity’s common interests, a narrative that challenges Western dominance while calling for multipolar governance.
Corporate actors prove equally adept at shaping risk narratives. OpenAI’s emphasis on “alignment with human goals” highlights both genuine technical challenges and the company’s particular research strengths. Anthropic promotes “constitutional AI” in domains where it claims special expertise. Other firms excel at selecting safety benchmarks that favor their approaches, while suggesting the real risks lie with competitors who fail to meet these standards. Computer scientists, philosophers, economists: each professional community shapes its own value through narrative, warning of technical catastrophe, revealing moral hazards, or predicting labor market upheaval.
The causal chain of AI safety has thus been inverted: we construct risk narratives first, then deduce technical threats; we design governance frameworks first, then define the problems requiring governance. Defining the problem creates causality. This is not epistemological failure but a new form of power, namely making your risk definition the unquestioned “scientific consensus”. How we define “artificial general intelligence”, which applications constitute “unacceptable risk”, what counts as “responsible AI”: the answers to these questions will directly shape future technological trajectories, industrial competitive advantages, international market structures, and even the world order itself.
Does this mean AI safety cooperation is doomed to empty talk? Quite the opposite. Understanding the rules of the game enables better participation.
AI risk is constructed. For policymakers, this means advancing your agenda in international negotiations while understanding the genuine concerns and legitimate interests behind others’.
Acknowledging construction doesn’t mean denying reality: regardless of how risks are defined, solid technical research, robust contingency mechanisms, and practical safeguards remain essential. For businesses, this means considering multiple stakeholders when shaping technical standards and avoiding winner-takes-all thinking.
True competitive advantage stems from unique strengths rooted in local innovation ecosystems, not opportunistic positioning. For the public, this means developing “risk immunity”, learning to discern the interest structures and power relations behind different AI risk narratives, neither paralyzed by doomsday prophecies nor seduced by technological utopias.
International cooperation remains indispensable, but we must rethink its nature and possibilities. Rather than pursuing a unified AI risk governance framework, a consensus that is neither achievable nor necessary, we should acknowledge and manage the plurality of risk perceptions. The international community needs not one comprehensive global agreement superseding all others, but “competitive governance laboratories” where different governance models prove their worth in practice. This polycentric governance may appear loose but can achieve higher-order coordination through mutual learning and checks and balances.
We habitually view AI as another technology requiring governance, without realizing it is changing the meaning of “governance” itself. The competition to define AI risk isn’t global governance’s failure but its necessary evolution: a collective learning process for confronting the uncertainties of transformative technology.
The author is an associate professor at the Center for International Security and Strategy, Tsinghua University.
The views don’t necessarily represent those of China Daily.
AI Research
Albania’s prime minister appoints an AI-generated ‘minister’ to tackle corruption

TIRANA, Albania — Albania’s prime minister on Friday tapped an AI-generated “minister” to tackle corruption and promote transparency and innovation in his new Cabinet.
Officially named Diella — the female form of the word for sun in the Albanian language — the new AI minister is a virtual entity.
Diella will be a “member of the Cabinet who is not present physically but has been created virtually,” Prime Minister Edi Rama said in a post on Facebook.
Rama said the AI-generated bot would help ensure that “public tenders will be 100% free of corruption” and will help the government work faster and with full transparency.
Diella uses up-to-date AI models and techniques to ensure accuracy in carrying out the duties it is charged with, according to the website of Albania’s National Agency for Information Society.
Diella, depicted as a figure in a traditional Albanian folk costume, was created earlier this year, in cooperation with Microsoft, as a virtual assistant on the e-Albania public service platform, where she has helped users navigate the site and get access to about 1 million digital inquiries and documents.
Rama’s Socialist Party secured a fourth consecutive term after winning 83 of the 140 Assembly seats in the May 11 parliamentary elections. The party can govern alone and pass most legislation, but it needs a two-thirds majority, or 93 seats, to change the Constitution.
The Socialists have said they can deliver European Union membership for Albania in five years, with negotiations concluding by 2027. The pledge has been met with skepticism by the Democrats, who contend Albania is far from prepared.
The Western Balkan country opened full negotiations to join the EU a year ago. The new government also faces the challenges of fighting organized crime and corruption, which have remained top issues in Albania since the fall of the communist regime in 1990.
Diella will also help local authorities speed up their work and adapt to the bloc’s working practices.
Albanian President Bajram Begaj has given Rama the mandate to form the new government. Analysts say that gives the prime minister authority “for the creation and functioning” of the AI-generated Diella.
Asked by journalists whether that violates the constitution, Begaj stopped short on Friday of describing Diella’s role as a ministerial post.
The conservative opposition Democratic Party-led coalition, headed by former prime minister and president Sali Berisha, won 50 seats. The party has not accepted the official election results, claiming irregularities, but its members participated in the new parliament’s inaugural session. The remaining seats went to four smaller parties.
Lawmakers will vote on the new Cabinet, but it was unclear whether Rama would ask for a vote on Diella’s virtual post. Legal experts say more work may be needed to establish Diella’s official status.
The Democrats’ parliamentary group leader Gazmend Bardhi said he considered Diella’s ministerial status unconstitutional.
“Prime minister’s buffoonery cannot be turned into legal acts of the Albanian state,” Bardhi posted on Facebook.
Parliament began the process on Friday to swear in the new lawmakers, who will later elect a new speaker and deputies and formally present Rama’s new Cabinet.
AI Research
AI fuels false claims after Charlie Kirk’s death, CBS News analysis reveals

False claims, conspiracy theories and posts naming people with no connection to the incident spread rapidly across social media in the aftermath of conservative activist Charlie Kirk’s killing on Wednesday, some amplified and fueled by AI tools.
CBS News identified 10 posts by Grok, X’s AI chatbot, that misidentified the suspect, now known to be southern Utah resident Tyler Robinson, before his identity was released. Grok eventually generated a response saying it had incorrectly identified the suspect, but by then, posts featuring the wrong person’s face and name were already circulating across X.
The chatbot also generated altered “enhancements” of photos released by the FBI. One such photo was reposted by the Washington County Sheriff’s Office in Utah, which later posted an update saying, “this appears to be an AI enhanced photo” that distorted the clothing and facial features.
One AI-enhanced image portrayed a man appearing much older than Robinson, who is 22. An AI-generated video that smoothed out the suspect’s features and jumbled his shirt design was posted by an X user with more than 2 million followers and was reposted thousands of times.
On Friday morning, after Utah Gov. Spencer Cox announced that the suspect in custody was Robinson, Grok’s replies to X users’ inquiries about him were contradictory. One Grok post said Robinson was a registered Republican, while others reported he was a nonpartisan voter. Voter registration records indicate Robinson is not affiliated with a political party.
CBS News also identified a dozen instances where Grok said that Kirk was alive the day following his death. Other Grok responses gave a false assassination date, labeled the FBI’s reward offer a “hoax” and said that reports about Kirk’s death “remain conflicting” even after his death had been confirmed.
Most generative AI tools produce results based on probability, which can make it challenging for them to provide accurate information in real time as events unfold, S. Shyam Sundar, a professor at Penn State University and the director of the university’s Center for Socially Responsible Artificial Intelligence, told CBS News.
“They look at what is the most likely next word or next passage,” Sundar said. “It’s not based on fact checking. It’s not based on any kind of reportage on the scene. It’s more based on the likelihood of this event occurring, and if there’s enough out there that might question his death, it might pick up on some of that.”
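Sundar’s description can be made concrete. The sketch below, which assumes the Hugging Face transformers library and the small public gpt2 checkpoint (neither is named in the original reporting), shows a language model ranking candidate next tokens purely by probability: the model scores plausible continuations of a prompt, with no mechanism for checking whether any of them is true.

```python
# Minimal sketch of probability-based next-token prediction, the mechanism
# Sundar describes. Assumes the Hugging Face `transformers` library and the
# public `gpt2` checkpoint; illustrative only, not any chatbot's actual stack.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "Reports about the incident"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(input_ids).logits[0, -1]  # scores for the next token
probs = torch.softmax(logits, dim=-1)        # likelihoods over the vocabulary

# The five most likely continuations: ranked by plausibility, not by truth.
top = torch.topk(probs, 5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(idx))!r:>12}  p={p:.3f}")
```

The model simply extends text with whatever is statistically likely given its training data and the prompt, which is why, as Sundar notes, fast-moving or contested events can surface confident but unverified continuations.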
X did not respond to a request for comment about the false information Grok was posting.
Meanwhile, the AI-powered search engine Perplexity’s X bot described the shooting as a “hypothetical scenario” in a since-deleted post, and suggested a White House statement on Kirk’s death was fabricated.
Perplexity’s spokesperson told CBS News that “accurate AI is the core technology we are building and central to the experience in all of our products,” but that “Perplexity never claims to be 100% accurate.”
Another spokesperson added the X bot is not up to date with improvements the company has made to its technology, and the company has since removed the bot from X.
Google’s AI Overview, a summary of search results that sometimes appears at the top of searches, also provided inaccurate information. The AI Overview for a search late Thursday evening for Hunter Kozak, the last person to ask Kirk a question before he was killed, incorrectly identified him as the person of interest the FBI was looking for. By Friday morning, the false information no longer appeared for the same search.
“The vast majority of the queries seeking information on this topic return high quality and accurate responses,” a Google spokesperson told CBS News. “Given the rapidly evolving nature of this news, it’s possible that our systems misinterpreted web content or missed some context, as all Search features can do given the scale of the open web.”
Sundar told CBS News that people tend to perceive AI as being less biased or more reliable than someone online who they don’t know.
“We don’t think of machines as being partisan or bias or wanting to sow seeds of dissent,” Sundar said. “If it’s just a social media friend or some somebody on the contact list that’s sent something on your feed with unknown pedigree … chances are people trust the machine more than they do the random human.”
Misinformation may also be coming from foreign sources, according to Cox, Utah’s governor, who said in a press briefing on Thursday that foreign adversaries including Russia and China have bots that “are trying to instill disinformation and encourage violence.” Cox urged listeners to spend less time on social media.
“I would encourage you to ignore those and turn off those streams, and to spend a little more time with our families,” he said.