AI Research

Artificial intelligence loses out to humans in credibility during corporate crisis responses

As artificial intelligence tools become increasingly integrated into public relations workflows, many organizations are considering whether these technologies can handle high-stakes communication tasks such as crisis response. A new study published in Corporate Communications: An International Journal provides evidence that, at least for now, human-written crisis messages are perceived as more credible and reputationally beneficial than those authored by AI systems.

The rise of generative artificial intelligence has raised questions about its suitability for replacing human labor in communication roles. In public relations, AI tools are already used for media monitoring, message personalization, and social media management. Some advocates even suggest that AI could eventually author press releases or crisis response messages.

However, prior studies have found that people often view AI-generated messages with suspicion. Despite improvements in the sophistication of these systems, the use of AI can still reduce perceptions of warmth, trustworthiness, and competence. Given the importance of credibility and trust in public relations, especially during crises, the study aimed to evaluate how the perceived source of a message—human or AI—affects how people interpret crisis responses.

The researchers also wanted to assess whether the tone or strategy of a message—whether sympathetic, apologetic, or informational—would influence perceptions. Drawing on situational crisis communication theory, they hypothesized that more accommodating responses might boost credibility and protect organizational reputation, regardless of the source.

“Our interest in understanding how people judge the credibility of AI-generated text grew out of a graduate class Ayman Alhammad (the lead author) took with Cameron Piercy (the third author). They talked about research about trust in AI during the class and the questions we posed in our study naturally grew out of that,” said study author Christopher Etheridge, an assistant professor in the William Allen White School of Journalism and Mass Communications at the University of Kansas.

To explore these questions, the researchers designed a controlled experiment using a hypothetical crisis scenario. Participants were told about a fictitious company called the Chunky Chocolate Company, which was facing backlash after a batch of its chocolate bars reportedly caused consumers to become ill. According to the scenario, the company had investigated the incident and determined that the problem was due to product tampering by an employee.

Participants were then shown one of six possible press releases responding to the crisis. The releases varied in two key ways. First, they were attributed either to a human spokesperson (“Chris Smith”) or to an AI system explicitly labeled as such. Second, the tone of the message followed one of three common strategies: informational (providing details about the incident), sympathetic (expressing empathy for affected customers), or apologetic (taking responsibility and issuing an apology).

The wording of the messages was carefully controlled to ensure consistency across versions, with only the source and emotional tone changing between conditions. After reading the message, participants were asked to rate the perceived credibility of the author, the credibility of the message, and the overall reputation of the company. These ratings were made using standardized scales based on prior research.

The sample included 447 students enrolled in journalism and communication courses at a public university in the Midwestern United States. These participants were chosen because of their familiarity with media content and their relevance as potential future professionals or informed consumers of public relations material. Their average age was just over 20 years old, and most participants identified as white and either full- or part-time employed.

The results provided clear support for the idea that human authors are still viewed as more credible than AI. Across all three key outcomes—source credibility, message credibility, and organizational reputation—participants rated human-written messages higher than identical messages attributed to AI.

“It’s not surprising, given discussions that are taking place, that people found AI-generated content to be less credible,” Etheridge told PsyPost. “Still, capturing the data and showing it in an experiment like this one is valuable as the landscape of AI is ever-changing.”

Participants who read press releases from a human author gave an average source credibility rating of 4.40 on a 7-point scale, compared to 4.11 for the AI author. Message credibility followed a similar pattern, with human-authored messages receiving an average score of 4.82, compared to 4.38 for AI-authored versions. Finally, organizational reputation was also judged to be higher when the company’s message came from a human, with ratings averaging 4.84 versus 4.49.

These differences, while modest, were statistically significant and suggest that the mere presence of an AI label can diminish trust in a message. Importantly, the content of the message was identical across the human and AI conditions. The only change was who—or what—was said to have authored it.
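The article reports the group means and that the differences were significant, but not the test statistics or standard deviations. As an illustrative sketch only, a Welch's t statistic can be computed from summary statistics; the standard deviation (1.2) and the roughly even split of the 447 participants across the two source conditions are assumptions for illustration, not figures from the study.

```python
import math

def welch_t(mean1, mean2, sd1, sd2, n1, n2):
    """Welch's t statistic computed from summary statistics alone."""
    se = math.sqrt(sd1**2 / n1 + sd2**2 / n2)
    return (mean1 - mean2) / se

# Means are from the article; the SD and per-group n are assumed.
n_per_group = 447 // 2   # assumed even split between human and AI conditions
assumed_sd = 1.2         # hypothetical standard deviation on the 7-point scale

for outcome, human, ai in [("source credibility", 4.40, 4.11),
                           ("message credibility", 4.82, 4.38),
                           ("organizational reputation", 4.84, 4.49)]:
    t = welch_t(human, ai, assumed_sd, assumed_sd, n_per_group, n_per_group)
    print(f"{outcome}: difference = {human - ai:.2f}, t \u2248 {t:.2f}")
```

With these assumed inputs, even the smallest gap (0.29 points) clears conventional significance thresholds at this sample size, which is consistent with the pattern the study reports.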

In contrast, the tone or strategy of the message (apologetic, sympathetic, or informational) did not significantly influence any of the credibility or reputation ratings. Participants did perceive the tone differences when asked directly, meaning the manipulations were effective. But these differences did not translate into significantly different impressions of the author, message, or company. Even though past research has emphasized the importance of an apologetic or sympathetic tone during a crisis, this study found that source effects had a stronger influence on audience perceptions.

“People are generally still pretty wary of AI-generated messages,” Etheridge explained. “They don’t find them as credible as human-written content. In our case, news releases written by humans are more favorably viewed by readers than those written by AI. For people who are concerned about AI replacing jobs, that could be welcome news. We caution public relations agencies against overuse of AI, as it could hurt their reputation with the public when public reputation is a crucial measure of the industry.”

But as with all research, there are some caveats to consider. The study relied on a fictional company and crisis scenario, which might not fully capture real-world reactions, and participants—primarily university students—may not represent broader public attitudes due to their greater familiarity with AI. Additionally, while the study clearly labeled the message as AI-generated, real-world news releases often lack such transparency, raising questions about how audiences interpret content when authorship is ambiguous.

“We measured credibility and organizational reputation but didn’t really look at other important variables like trust or message retention,” Etheridge said. “We also may have been more transparent about our AI-generated content than a professional public relations outlet might be, but that allowed us to clearly measure responses. Dr. Alhammad is leading where our research effort might go from here. We have talked about a few ideas, but nothing solid has formed as of yet.”

The study, “Credibility and organizational reputation perceptions of news releases produced by artificial intelligence,” was authored by Ayman Alhammad, Christopher Etheridge, and Cameron W. Piercy.




AI Research

AI in Livestock Welfare Monitoring Market Research Explores

AI in Livestock Welfare Monitoring Market

InsightAce Analytic Pvt. Ltd. announces the release of a market assessment report on the “Global AI in Livestock Welfare Monitoring Market Size, Share & Trends Analysis Report By Component (Software [Data Management Platforms, Behavior Analytics Software, AI & Machine Learning Models, Health Monitoring Algorithms], Hardware [Cameras, Sensors, Microphones, Gateways, RFID Tags], and Services [Maintenance & Support, Installation & Integration Services, Training & Consulting]), Type (Wearable Sensor-Based Systems, Thermal Imaging Systems, Vision-Based Systems, Integrated Multi-Sensor Platforms, and Audio-Based Monitoring Systems), Livestock Type (Swine, Poultry, Cattle, Sheep & Goats, and Others), Application (Health Monitoring, Environmental Monitoring, Behavior Analysis, Stress & Pain Detection, Feeding Pattern Monitoring, and Breeding Management), Deployment Mode (On-Premise, Cloud-Based, and Hybrid), Technology (Machine Learning, Edge AI, Computer Vision, IoT & Smart Sensors, and Data Analytics), End-user (Animal Welfare Organizations, Commercial Livestock Farms, Veterinary Clinics & Hospitals, Research Institutes & Universities, and Government & Regulatory Bodies),-Market Outlook And Industry Analysis 2034”

The Global AI in Livestock Welfare Monitoring Market was valued at US$ 2.3 Bn in 2024 and is expected to reach US$ 11.8 Bn by 2034, growing at a CAGR of 18.4% over the 2025-2034 forecast period.
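A quick arithmetic check on these figures; the compounding convention is an assumption, since the release does not state whether the CAGR is anchored to the 2024 or 2025 base.

```python
# Values from the release; the 10-year compounding window is assumed.
start_value = 2.3    # US$ Bn, 2024
end_value = 11.8     # US$ Bn, 2034
years = 10

# CAGR implied by the start and end values:
implied_cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {implied_cagr:.1%}")   # about 17.8%, near the stated 18.4%

# Forward check: compounding the 2024 base at the stated 18.4% for 10 years
projected = start_value * 1.184 ** years
print(f"Projected 2034 value: US$ {projected:.1f} Bn")
```

The implied rate (roughly 17.8%) and the stated 18.4% differ slightly, which is typical when a forecast period (2025-2034) is shorter than the span between the quoted base and end values.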

Get Free Access to Demo Report, Excel Pivot and ToC: https://www.insightaceanalytic.com/request-sample/3149

AI in livestock welfare monitoring applies intelligent technologies to track animal health, behavior, and environmental conditions. It uses sensors, cameras, and algorithms to monitor livestock continuously, without constant human intervention.

This technique helps farmers identify early signs of illness, stress, or discomfort, allowing them to take precise action to prevent the spread of disease and boost productivity. The market for AI in livestock welfare management is growing quickly due to the need for efficient livestock management and technological advancements.

The need for sustainable agricultural methods, rising food demand driven by the world’s growing population, and ongoing technological improvements are among the main factors driving the growth of AI in livestock welfare management. AI helps address these needs by increasing efficiency and productivity, yielding higher output from less input.

Additionally, governments and the corporate sector are investing more in smart agricultural solutions as they recognize the potential of AI to transform agriculture and promote food security. This will boost the growth of AI in the livestock welfare management market in the coming years.

List of Prominent Players in the AI in Livestock Welfare Monitoring Market:

• Merck Animal Health

• Afimilk

• Connecterra

• DeLaval

• Vence (acquired by Merck)

• Gallagher Animal Management

• HerdDogg

• Lely

• Allflex

• PrecisionAG (formerly PrecisionHawk)

• Stellapps

• Zoetis

• Tri-Scan (acquired by Zoetis)

• AgriWebb

• Cainthus

• Nedap

• Silent Herdsman (acquired by Afimilk)

• Halo (livestock monitoring AI)

• SmartBow (by Allflex)

• Cargill (livestock AI division)

Expert Knowledge, Just a Click Away: https://calendly.com/insightaceanalytic/30min?month=2025-04

Market Dynamics:

Drivers-

The market for AI in livestock welfare management is anticipated to grow as demand for livestock products rises. Livestock products are goods derived from animals bred for agricultural purposes, including meat, dairy, eggs, and other commodities. Artificial intelligence (AI) technologies let farmers gather, analyze, and interpret large amounts of data from sensors, drones, and satellite imagery.

Furthermore, improvements in machine learning techniques are driving the AI in livestock welfare management market. The behavior and health of livestock may now be predicted with greater accuracy due to these advancements. Businesses are focusing on developing user-friendly solutions that meet the needs of farmers.

Challenges:

There are many obstacles in the way of integrating AI in livestock welfare management. A primary obstacle is the high upfront cost of AI systems, which small and medium-sized farms may find unaffordable. Additionally, farmers must learn how to utilize advanced AI technology, which requires training and skill development.

Furthermore, because these systems frequently gather and handle vast volumes of sensitive data, concerns about data security and privacy arise. Technological reliability and the need for robust infrastructure to support AI applications are two more issues that must be resolved before the full potential of AI in livestock welfare management can be realized.

Regional Trends:

The region’s strong infrastructure and cutting-edge agricultural technology allowed North America to maintain its leading position in the AI in livestock welfare management market in 2024. The incorporation of AI into different livestock farming operations is further fueled by the fact that North American farmers are frequently early adopters of technology that promises more profitability and efficiency. Further supporting the adoption of AI technologies is the region’s significant emphasis on precision and sustainable agriculture.

The AI in livestock welfare management market in Asia Pacific is gaining strength as corporate players and governments work to modernize livestock management. Asia Pacific nations such as China, Japan, and India favor cost-effective AI solutions designed for intensive animal husbandry. Demand for cloud-based, mobile-enabled AI platforms that work well across varied infrastructure configurations is also rising in these markets.

Unlock Your GTM Strategy: https://www.insightaceanalytic.com/customization/3149

Recent Developments:

• In October 2024, Merck Animal Health officially introduced SenseHub Cow Calf, a remote livestock monitoring system designed for cow/calf operations. The solution automatically detects estrus, identifies optimal insemination times, tracks activity and rumination using ear-mounted accelerometers, and delivers insights via cloud-based dashboards to improve breeding efficiency and reduce labor.

Segmentation of AI in Livestock Welfare Monitoring Market-

By Component-

• Software

o Data Management Platforms

o Behavior Analytics Software

o AI & Machine Learning Models

o Health Monitoring Algorithms

• Hardware

o Cameras

o Sensors

o Microphones

o Gateways

o RFID Tags

• Services

o Maintenance & Support

o Installation & Integration Services

o Training & Consulting

By Type –

• Wearable Sensor-Based Systems

• Thermal Imaging Systems

• Vision-Based Systems

• Integrated Multi-Sensor Platforms

• Audio-Based Monitoring Systems

By Livestock Type-

• Swine

• Poultry

• Cattle

• Sheep & Goats

• Others

By Application-

• Health Monitoring

• Environmental Monitoring

• Behavior Analysis

• Stress & Pain Detection

• Feeding Pattern Monitoring

• Breeding Management

By Deployment Type-

• On-Premise

• Cloud-Based

• Hybrid

By Technology-

• Machine Learning

• Edge AI

• Computer Vision

• IoT & Smart Sensors

• Data Analytics

By End-use-

• Animal Welfare Organizations

• Commercial Livestock Farms

• Veterinary Clinics & Hospitals

• Research Institutes & Universities

• Government & Regulatory Bodies

By Region-

North America-

• The US

• Canada

Europe-

• Germany

• The UK

• France

• Italy

• Spain

• Rest of Europe

Asia-Pacific-

• China

• Japan

• India

• South Korea

• South East Asia

• Rest of Asia Pacific

Latin America-

• Brazil

• Argentina

• Mexico

• Rest of Latin America

Middle East & Africa-

• GCC Countries

• South Africa

• Rest of Middle East and Africa

Read Overview Report- https://www.insightaceanalytic.com/report/ai-in-livestock-welfare-monitoring-market/3149

About Us:

InsightAce Analytic is a market research and consulting firm that enables clients to make strategic decisions. Our qualitative and quantitative market intelligence solutions meet the need for market and competitive intelligence to expand businesses. We help clients gain competitive advantage by identifying untapped markets, exploring new and competing technologies, segmenting potential markets, and repositioning products. Our expertise is in providing syndicated and custom market intelligence reports with in-depth analysis and key market insights in a timely and cost-effective manner.

Contact us:

InsightAce Analytic Pvt. Ltd.

Visit: www.insightaceanalytic.com

Tel : +1 607 400-7072

Asia: +91 79 72967118

info@insightaceanalytic.com

This release was published on openPR.




AI Research

Artificial intelligence helps break barriers for Hispanic homeownership | Nation World

The full article is not available here: thesunchronicle.com blocks access from the European Economic Area under the GDPR.


AI Research

School Cheating: Research Shows AI Has Not Increased Its Scale

Changes in Learning: Cheating and Artificial Intelligence

Reading the news, one gets the impression that all students use artificial intelligence to cheat in their studies. Headlines in newspapers such as The Wall Street Journal and The New York Times frequently pair ‘cheating’ with ‘AI’. Many stories, like one in New York Magazine, feature students who openly admit to using generative AI to complete assignments.

Amid such headlines, education seems to be under threat: AI-assisted cheating appears to pervade traditional exams, readings, and essays. In the worst cases, students use tools like ChatGPT to write entire assignments.

That is a troubling picture, but it is only part of the story.

Cheating has always existed. As an educational researcher studying cheating with AI, I can say that our preliminary data indicate that AI has changed the methods of cheating, but not necessarily the scale of cheating that was already taking place.

This does not mean that AI-assisted cheating is not a serious problem. It raises important questions: Will cheating increase in the future because of AI? Does using AI in schoolwork count as cheating? How should parents and schools respond to prepare children for a life significantly different from our own experience?

The Pervasiveness of Cheating

Cheating has existed for a very long time — probably since the creation of educational institutions. In the 1990s and 2000s, Don McCabe, a business school professor at Rutgers University, documented high levels of cheating among students. One of his studies found that up to 96% of business students admitted to engaging in ‘cheating behavior’.

McCabe used anonymous surveys in which students reported how often they cheated. These surveys consistently found high cheating rates, ranging from 61.3% to 82.7% before the pandemic.

Cheating in the AI Era

Has cheating increased with AI? Analyzing data from over 1,900 students at three schools before and after the introduction of ChatGPT, we found no significant change in cheating behavior. Roughly 11% of students used AI to write their papers.

Our work shows that AI is becoming a popular tool for cheating, but many questions remain. For example, in 2024 and 2025 we surveyed a further 28,000-39,000 students, of whom 15% admitted to using AI to produce their work.
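The article does not report a statistical test for the shift from 11% to 15%. As an illustrative sketch using the approximate sample sizes mentioned above (taking the lower bound of the later survey is my choice, not the authors'), a two-proportion z test looks like this:

```python
import math

def two_prop_z(p1, n1, p2, n2):
    """z statistic for the difference of two independent proportions,
    using the pooled-proportion standard error."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se

# Proportions from the article; sample sizes are approximate.
z = two_prop_z(0.11, 1900, 0.15, 28000)
print(f"z \u2248 {z:.2f}")
```

With these inputs z comes out near 4.8, well past conventional significance thresholds; whether that reflects a real rise or differences between the surveyed populations is exactly the kind of open question the researchers flag.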

Challenges of Using AI

Students are accustomed to using AI but understand that there are boundaries between acceptable and unacceptable use. Reports indicate that many use AI to avoid doing homework or to gain ideas for creative work.

Students notice that their teachers use AI themselves, and many consider it unfair to be punished for doing the same in their schoolwork.

What Will AI Use Mean for Schools?

The modern education system was not designed with generative AI in mind. Traditionally, schoolwork has been treated as evidence of a student’s own effort, but that assumption is increasingly blurred.

It is important to understand the main reasons students cheat and how cheating relates to stress, time management, and the curriculum. Deterring cheating matters, but teaching methods and the use of AI in classrooms also need to be rethought.

Four Future Questions

AI did not create cheating in educational institutions; it has only opened new avenues for it. Here are questions worth considering:

  • Why do students resort to cheating? Academic stress may push them toward shortcuts.
  • Do teachers follow their own rules? Holding students to standards that teachers themselves ignore distorts perceptions of acceptable AI use.
  • Are the rules on AI clearly stated? What counts as acceptable AI use in schoolwork is often left vague.
  • What do students need to know for an AI-rich future? Teaching methods must adapt in time to the new reality.

The future of education in the age of AI requires an open dialogue between teachers and students. This will allow for the development of new skills and knowledge necessary for successful learning.


