AI Research

Nigeria jumps from zero to 20 AI research papers in 18 months

Nigeria has published 20 peer-reviewed artificial intelligence (AI) research papers in less than two years, up from zero, in a sign of the government’s push for Nigeria’s entry into the global conversation on AI research.

That jump is thanks to the Nigerian Artificial Intelligence Research Scheme (NAIRS), a federally funded program designed to stop the country from being a mere exporter of talent and instead anchor research output under Nigerian institutions.

Launched in early 2024 by the Ministry of Communications, Innovation, and Digital Economy, and funded through the National Information Technology Development Agency (NITDA), NAIRS seeks to address a structural gap: while Nigerians abroad were contributing thousands of AI papers, none were credited to local universities or labs, leaving the country with little to show under its own name.

“We discovered thousands of AI papers authored by Nigerians, but none tied to Nigerian institutions,” Olubunmi Ajala, National Director of the National Centre for Artificial Intelligence and Robotics, told TechCabal on the sidelines of GITEX Nigeria in Abuja on Monday. “That’s why NAIRS was created, to give Nigerian researchers, both at home and in the diaspora, a structured platform to produce Nigeria-led AI research.”

Over 4,000 researchers applied to the first NAIRS call, with 45 consortia of academics and startups eventually selected. Each group received grants of up to ₦5 million ($3,400) and a mandate to publish within a year in one of five thematic areas: agriculture, healthcare, education, sustainability, and utilities.

By August 2025, the results were in: 20 peer-reviewed papers, two of them in Springer journals, and several projects already tested in the field. One agricultural consortium used YOLOv8 computer vision models to detect “tomato Ebola,” a disease that wipes out harvests. Another built a smart traffic management system that replaces Nigeria’s fixed 60-second light cycles with an adaptive model, allocating green light time based on real-time traffic flows.
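The adaptive traffic idea can be illustrated with a toy allocation rule (a hypothetical sketch, not the consortium's actual model): instead of a fixed 60-second split, divide each cycle's green time across approaches in proportion to observed traffic, with a minimum green phase so no approach is starved.

```python
def allocate_green_time(flows, cycle_s=60, min_green_s=5):
    """Split a signal cycle's green time across approaches in
    proportion to real-time traffic flows (e.g. vehicles counted
    per approach), guaranteeing each approach a minimum phase.
    Illustrative only; the deployed system's model is not public."""
    n = len(flows)
    if n == 0:
        raise ValueError("need at least one approach")
    spare = cycle_s - n * min_green_s
    if spare < 0:
        raise ValueError("cycle too short for the minimum green phases")
    total = sum(flows)
    if total == 0:
        # No traffic observed: fall back to an even split.
        return [cycle_s / n] * n
    return [min_green_s + spare * f / total for f in flows]

# A congested approach (30 vehicles) gets more of the 60-second
# cycle than a quiet one (10 vehicles), instead of a fixed 30/30 split.
print(allocate_green_time([30, 10]))  # → [42.5, 17.5]
```

The minimum-green floor mirrors a real constraint in signal control: pedestrians and low-volume approaches still need a guaranteed crossing window even when flows are lopsided.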

“These are not just academic exercises,” Ajala said. “They are practical solutions tested with real data, designed to solve problems that directly affect Nigerians.”

The initiative is also building long-term infrastructure. Through the AI Collective, a network of over 2,000 Nigerian AI practitioners globally, participants share data, mentor students, and form syndicates to commercialise work.

Ajala said the next phase is to push for patents, biotech applications, and scalable startups. 

“Once strong research outcomes begin to emerge, funding naturally follows,” he said. “Global partners are keen to see how AI can address African realities, and Nigeria is beginning to provide answers.”


Artificial intelligence loses out to humans in credibility during corporate crisis responses

As artificial intelligence tools become increasingly integrated into public relations workflows, many organizations are considering whether these technologies can handle high-stakes communication tasks such as crisis response. A new study published in Corporate Communications: An International Journal provides evidence that, at least for now, human-written crisis messages are perceived as more credible and reputationally beneficial than those authored by AI systems.

The rise of generative artificial intelligence has raised questions about its suitability for replacing human labor in communication roles. In public relations, AI tools are already used for media monitoring, message personalization, and social media management. Some advocates even suggest that AI could eventually author press releases or crisis response messages.

However, prior studies have found that people often view AI-generated messages with suspicion. Despite improvements in the sophistication of these systems, the use of AI can still reduce perceptions of warmth, trustworthiness, and competence. Given the importance of credibility and trust in public relations, especially during crises, the study aimed to evaluate how the perceived source of a message—human or AI—affects how people interpret crisis responses.

The researchers also wanted to assess whether the tone or strategy of a message—whether sympathetic, apologetic, or informational—would influence perceptions. Drawing on situational crisis communication theory, they hypothesized that more accommodating responses might boost credibility and protect organizational reputation, regardless of the source.

“Our interest in understanding how people judge the credibility of AI-generated text grew out of a graduate class Ayman Alhammad (the lead author) took with Cameron Piercy (the third author). They talked about research about trust in AI during the class and the questions we posed in our study naturally grew out of that,” said study author Christopher Etheridge, an assistant professor in the William Allen White School of Journalism and Mass Communications at the University of Kansas.

To explore these questions, the researchers designed a controlled experiment using a hypothetical crisis scenario. Participants were told about a fictitious company called the Chunky Chocolate Company, which was facing backlash after a batch of its chocolate bars reportedly caused consumers to become ill. According to the scenario, the company had investigated the incident and determined that the problem was due to product tampering by an employee.

Participants were then shown one of six possible press releases responding to the crisis. The releases varied in two key ways. First, they were attributed either to a human spokesperson (“Chris Smith”) or to an AI system explicitly labeled as such. Second, the tone of the message followed one of three common strategies: informational (providing details about the incident), sympathetic (expressing empathy for affected customers), or apologetic (taking responsibility and issuing an apology).

The wording of the messages was carefully controlled to ensure consistency across versions, with only the source and emotional tone changing between conditions. After reading the message, participants were asked to rate the perceived credibility of the author, the credibility of the message, and the overall reputation of the company. These ratings were made using standardized scales based on prior research.

The sample included 447 students enrolled in journalism and communication courses at a public university in the Midwestern United States. These participants were chosen because of their familiarity with media content and their relevance as potential future professionals or informed consumers of public relations material. Their average age was just over 20 years old, and most participants identified as white and either full- or part-time employed.

The results provided clear support for the idea that human authors are still viewed as more credible than AI. Across all three key outcomes—source credibility, message credibility, and organizational reputation—participants rated human-written messages higher than identical messages attributed to AI.

“It’s not surprising, given discussions that are taking place, that people found AI-generated content to be less credible,” Etheridge told PsyPost. “Still, capturing the data and showing it in an experiment like this one is valuable as the landscape of AI is ever-changing.”

Participants who read press releases from a human author gave an average source credibility rating of 4.40 on a 7-point scale, compared to 4.11 for the AI author. Message credibility followed a similar pattern, with human-authored messages receiving an average score of 4.82, compared to 4.38 for AI-authored versions. Finally, organizational reputation was also judged to be higher when the company’s message came from a human, with ratings averaging 4.84 versus 4.49.

These differences, while not massive, were statistically significant and suggest that the mere presence of an AI label can diminish trust in a message. Importantly, the content of the message was identical across the human and AI conditions. The only change was who—or what—was said to have authored it.

In contrast, the tone or strategy of the message (apologetic, sympathetic, or informational) did not significantly influence any of the credibility or reputation ratings. Participants did perceive the tone differences when asked directly, meaning the manipulations were effective. But these differences did not translate into significantly different impressions of the author, message, or company. Even though past research has emphasized the importance of an apologetic or sympathetic tone during a crisis, this study found that source effects had a stronger influence on audience perceptions.

“People are generally still pretty wary of AI-generated messages,” Etheridge explained. “They don’t find them as credible as human-written content. In our case, news releases written by humans are more favorably viewed by readers than those written by AI. For people who are concerned about AI replacing jobs, that could be welcome news. We caution public relations agencies against overuse of AI, as it could hurt their reputation with the public when public reputation is a crucial measure of the industry.”

But as with all research, there are some caveats to consider. The study relied on a fictional company and crisis scenario, which might not fully capture real-world reactions, and participants—primarily university students—may not represent broader public attitudes due to their greater familiarity with AI. Additionally, while the study clearly labeled the message as AI-generated, real-world news releases often lack such transparency, raising questions about how audiences interpret content when authorship is ambiguous.

“We measured credibility and organizational reputation but didn’t really look at other important variables like trust or message retention,” Etheridge said. “We also may have been more transparent about our AI-generated content than a professional public relations outlet might be, but that allowed us to clearly measure responses. Dr. Alhammad is leading where our research effort might go from here. We have talked about a few ideas, but nothing solid has formed as of yet.”

The study, “Credibility and organizational reputation perceptions of news releases produced by artificial intelligence,” was authored by Ayman Alhammad, Christopher Etheridge, and Cameron W. Piercy.




Oxford and Ellison Institute Collaborate to Integrate AI in Vaccine Research – geneonline.com



Chinese social media firms comply with strict AI labelling law, making it clear to users and bots what’s real and what’s not

Chinese social media companies have begun requiring users to label AI-generated content uploaded to their services in order to comply with new government legislation. By law, the sites and services must now apply a watermark or explicit indicator of AI content for users, and include metadata for web-crawling algorithms that makes clear what was generated by a human and what was not, according to SCMP.

Countries and companies the world over have been grappling with how to handle AI-generated content since the explosive growth of popular AI tools like ChatGPT, Midjourney, and DALL-E. After drafting the law in March, China has now implemented it, taking the lead in oversight: the labelling requirement makes social media companies more responsible for the content on their platforms.
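The two-layer labelling the law describes, a user-visible marker plus machine-readable metadata, can be sketched as follows. The field names and label text here are illustrative assumptions, not the regulation's actual schema:

```python
import json

def label_ai_content(text, generator):
    """Attach both an explicit, user-visible AI notice and
    machine-readable provenance metadata to a piece of content.
    Field names are illustrative, not the regulation's schema."""
    visible = f"[AI-generated] {text}"           # what a reader sees
    metadata = {                                  # what a crawler reads
        "ai_generated": True,
        "generator": generator,
        "label_visible": True,
    }
    return {"body": visible, "meta": metadata}

post = label_ai_content("A scenic photo of West Lake.", "example-model")
print(post["body"])
print(json.dumps(post["meta"], sort_keys=True))
```

Serving both layers matters because they address different audiences: the visible watermark informs human readers at a glance, while the metadata lets platforms and crawlers filter or audit AI content at scale without parsing the body text.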


