AI Research

AI in PR Research: Speed That Lacks Credibility

Artificial intelligence is transforming how research is created and used in PR and thought leadership. Surveys that once took weeks to design and analyze can now be drafted, fielded and summarized in days or even hours. For communications professionals, the appeal is obvious: AI makes it possible to generate insights that keep pace with the news cycle. But does the quality of those insights hold?

In the race to move faster, an uncomfortable truth is emerging. AI may make aspects of research easier, but it also creates enormous pitfalls for the layperson. Journalists rightfully expect research to be transparent, verifiable and meaningful. This credibility cannot be compromised. Yet an overreliance on AI risks jeopardizing the very characteristics that make research such a powerful tool for thought leadership and PR.

This is where the opportunity and the risk converge. AI can help research live up to its potential as a driver of media coverage, but only if it is deployed responsibly, and never as a total substitute for skilled practitioners. Used without oversight, or by untrained but well-meaning communicators, it produces data that looks impressive on the surface but fails under scrutiny. Used wisely, it can augment the research process without supplanting it.

The Temptation: Faster, Cheaper, Scalable

AI has upended the traditional pace of research. Writing questions, cleaning data, coding open-ended responses and building reports once required days of manual effort. Now, many of these tasks can be automated.

  • Drafting: Generative models can create survey questions in seconds, offering PR teams a head start on design.
  • Fielding: AI can help identify fraudulent or bot-like responses.
  • Analysis: Large datasets can be summarized almost instantly, and open-text responses can be categorized without armies of coders.
  • Reporting: Tools can generate data summaries and visualizations that make insights more accessible.

The acceleration is appealing. PR professionals can, in theory, generate surveys and insert data into the media conversation before a trend peaks. The opportunity is real, but it comes with a condition: speed matters only when the research holds up to scrutiny.

The Risk: Data That Doesn’t Stand Up

AI makes it possible to create research faster, but not necessarily better. Fully automated workflows often miss the standards required for earned media.

Consider synthetic respondents: artificial personas, trained on data from previous surveys, that AI generates to simulate human answers. On the surface, they provide instant answers to survey questions. But research shows they diverge from real human data once tested across different groups and contexts. The issue isn’t limited to surveys. Even at the model level, AI outputs remain unreliable. OpenAI’s own system card shows that despite improvements in its newest model, GPT-5 still makes incorrect claims nearly 10% of the time.

For journalists, these shortcomings are disqualifying. Reporters and editors want to know how respondents were sourced, how questions were framed and whether findings were verified. If the answer is simply “AI produced it,” credibility collapses. Worse, errors that slip into coverage can damage brand reputation. Research meant to support PR should build trust, not risk it.

Why Journalists Demand More, Not Less

The reality for PR teams is that reporters are inundated with pitches. That volume has made editors more discerning, and credible data can differentiate a pitch from the competition.

Research that earns coverage typically delivers three things:

  1. Clarity: Methods are clearly explained.
  2. Context: Results are tied to trends or issues audiences care about.
  3. Credibility: Findings are grounded in sound design and transparent analysis.

These expectations have only intensified. Public trust in media is at a historic low. Only 31% of Americans trust the news “a great deal” or “a fair amount.” At the same time, 36% have “no trust at all,” the highest level of complete distrust Gallup has recorded in more than 50 years of tracking. Reporters know this and apply greater scrutiny before publishing any research.

For PR professionals, the implication is clear: AI can speed up processes, but unless findings meet editorial standards, they will never see the light of day.

Why Human Oversight Is Indispensable

AI can process data at scale, but it cannot replicate the judgment or accountability of human researchers. Oversight matters most in four areas:

  • Defining objectives: Humans decide which questions are newsworthy or align with campaign goals and what narratives are worth testing.
  • Interpreting nuance: Machines can classify sentiment but struggle with sarcasm, cultural context and the emotional cues that shape meaningful insights.
  • Accountability: When findings are published, people – not algorithms – must explain the methods and defend the results.
  • Bias detection: AI reflects the limitations of its training data. Without human review, skewed or incomplete findings can pass as fact.

Public opinion reinforces the need for this oversight. Nearly half of Americans say AI will have a negative impact on the news they get, while only one in 10 say it will have a positive effect. If audiences are skeptical of AI-created news, journalists will be even more cautious about publishing research that lacks human validation. For PR teams, that means credibility comes from oversight: AI may accelerate the process, but only people can provide the transparency that makes research media ready.

AI as a Partner, Not a Shortcut

AI is best used strategically, as an “assistant” that enhances workflows rather than a substitute for expertise. That means:

  • Letting AI handle repetitive tasks such as transcription, always with human oversight.
  • Documenting when and how AI tools are used, to build transparency.
  • Validating AI outputs against human coders or traditional benchmarks.
  • Training teams to understand AI’s capabilities and limitations.
  • Aligning with evolving disclosure standards, such as the AAPOR Transparency Initiative.

Used this way, AI accelerates processes while preserving the qualities that make research credible. It becomes a force multiplier for human expertise, not a replacement for it.

What’s at Stake for PR Campaigns

Research has always been one of the most powerful tools for earning media. A well-executed survey can create headlines, drive thought leadership and support campaigns long after launch. But research that lacks credibility can do the opposite, damaging relationships with journalists and eroding trust.

Editors are paying closer attention to how AI is being used in PR. Some are experimenting with it themselves, while exercising caution. In Cision’s 2025 State of the Media Report, nearly three-quarters of journalists (72%) said factual errors are their biggest concern with AI-generated material, while many also worried about quality and authenticity. And although some reporters remain open to AI-assisted content if it is carefully validated, more than a quarter (27%) are strongly opposed to AI-generated press content of any kind. Those figures show why credibility cannot be an afterthought: skepticism is high, and mistakes will close doors.

The winners will be teams that integrate AI responsibly, using it to move quickly without cutting corners. They will produce findings that are timely enough to tap into news cycles and rigorous enough to withstand scrutiny. In a crowded media landscape, that balance will be the difference between earning coverage and being ignored.

Conclusion: Credibility as Currency

AI is here to stay in PR research. Its role will only expand, reshaping workflows and expectations across the industry. The question is not whether to use AI, but how to use it responsibly.

Teams that treat AI as a shortcut will see their research dismissed by the media. Teams that treat it as a partner – accelerating processes while upholding standards of rigor and transparency – will produce insights that both journalists and audiences trust.

In today’s environment, credibility is the most valuable currency. Journalists will continue to demand research that meets high standards. AI can help meet those standards, but only when guided by human judgment. The future belongs to PR professionals who prove that speed and credibility are not in conflict, but in partnership.




AI Research

Pentagon research official wants to have AI on every desktop in 6 to 9 months

Published

on


The Pentagon is angling to introduce artificial intelligence across its workforce within nine months following the reorganization of its key AI office.

Emil Michael, under secretary of defense for research and engineering at the Department of Defense, talked about the agency’s plans for introducing AI to its operations as it continues its modernization journey. 

“We want to have an AI capability on every desktop — 3 million desktops — in six or nine months,” Michael said during a Politico event on Tuesday. “We want to have it focus on applications for corporate use cases like efficiency, like you would use in your own company … for intelligence and for warfighting.”

This announcement follows the recent shakeups and restructuring of the Pentagon’s main artificial intelligence office. A senior defense official said the Chief Digital and Artificial Intelligence Office will serve as a new addition to the department’s research portfolio.

Michael also said he is “excited” about the restructured CDAO, adding that its new role will pivot to a focus on research that is similar to the Defense Advanced Research Projects Agency and Missile Defense Agency. This change is intended to enhance research and engineering priorities that will help advance AI for use by the armed forces and not take agency focus away from AI deployment and innovation.

“To add AI to that portfolio means it gets a lot of muscle to it,” he said. “So I’m spending at least a third of my time – maybe half – rethinking how the AI deployment strategy is going to be at DOD.”

Applications coming out of the CDAO and related agencies will then be tailored to corporate workloads, such as efficiency-related work, according to Michael, along with intelligence and warfighting needs.

The Pentagon first stood up the CDAO and brought on its first chief digital and artificial intelligence officer in 2022 to advance the agency’s AI efforts.

The restructuring of the CDAO this year garnered attention due to its pivotal role in investigating the defense applications of emerging technologies and defense acquisition activities. Job cuts within the office added another layer of concern, with reports estimating a 60% reduction in the CDAO workforce.






AI Research

Pentagon CTO wants AI on every desktop in 6 to 9 months

The Pentagon aims to get AI tools to its entire workforce next year, the department’s chief technical officer said one month after being given control of its main AI office.

“We want to have an AI capability on every desktop — 3 million desktops — in six or nine months,” Emil Michael, defense undersecretary for research and engineering, said at a Politico event on Tuesday. “We want to have it focus on applications for corporate use cases like efficiency, like you would use in your own company…for intelligence and for warfighting.”

Four weeks ago, the Chief Digital and Artificial Intelligence Office was demoted: instead of reporting to Deputy Defense Secretary Stephen Feinberg, it now reports to Michael, a subordinate.

Michael said CDAO will become a research body like the Defense Advanced Research Projects Agency and Missile Defense Agency. He said the change is meant to boost research and engineering into AI for the military, but not reduce its efforts to deploy AI and make innovations.

“To add AI to that portfolio means it gets a lot of muscle to it,” he said. “So I’m spending at least a third of my time—maybe half—rethinking how the AI-deployment strategy is going to be at DOD.”

He said applications would emerge from the CDAO and related agencies that will be tailored to corporate workloads.

The Pentagon created the CDAO in 2022 to advance the agency’s AI efforts and look into defense applications for emerging technologies. The office’s restructuring earlier this year garnered attention. Job cuts within the office added another layer of concern, with reports estimating a 60% reduction in the CDAO workforce.






AI Research

Panelists Will Question Who Controls AI | ACS CC News

Artificial intelligence (AI) has become one of the fastest-growing technologies in the world today. In many industries, individuals and organizations are racing to better understand AI and incorporate it into their work. Surgery is no exception, and that is why Clinical Congress 2025 has made AI one of the six themes of its Opening Day Thematic Sessions.

The first full day of the conference, Sunday, October 5, will include two back-to-back Panel Sessions on AI. The first session, “Using ChatGPT and AI for Beginners” (PS104), offers a foundation for surgeons not yet well versed in AI. The second, “AI: Who Is In Control?” (PS110), will offer insights into the potential upsides and drawbacks of AI use, as well as its limitations and possible future applications, so that surgeons can incorporate this technology into their clinical care safely and effectively.

“AI: Who Is In Control?” will be moderated by Anna N. Miller, MD, FACS, an orthopaedic surgeon at Dartmouth Hitchcock Medical Center in Lebanon, New Hampshire, and Gabriel Brat, MD, MPH, MSc, FACS, a trauma and acute care surgeon at Beth Israel Deaconess Medical Center and an assistant professor at Harvard Medical School, both in Boston, Massachusetts.

In an interview, Dr. Brat shared his view that the use of AI is not likely to replace surgeons or decrease the need for surgical skills or decision-making. “It’s not an algorithm that’s going to be throwing the stitch. It’s still the surgeon.”

Nonetheless, he said that the starting presumption of the session is that AI is likely to be highly transformative to the profession over time.  

“Once it has significant uptake, it’ll really change elements of how we think about surgery,” he said, including creating meaningful opportunities for improvements.

The key question of the session, therefore, is not whether to engage with AI, but how to do so in ways that ensure the best outcomes: “We as surgeons need to have a role in defining how to do so safely and effectively. Otherwise, people will start to use these tools, and we will be swept along with a movement as opposed to controlling it.”

To that end, Dr. Brat explained that the session will offer “a really strong translational focus by people who have been in the trenches working with these technologies.” He and Dr. Miller have specifically chosen an “all-star panel” designed to represent academia, healthcare associations, and industry. 

The panelists include Rachael A. Callcut, MD, MSPH, FACS, who is the division chief of trauma, acute care surgery and surgical critical care as well as associate dean of data science and innovation at the University of California-Davis Health in Sacramento, California. She will share the perspective on AI from academic surgery.

Genevieve Melton-Meaux, MD, PhD, FACS, FACMI, the inaugural ACS Chief Health Informatics Officer, will present on AI usage in healthcare associations. She also is a colorectal surgeon and the senior associate dean for health informatics and data science at the University of Minnesota and chief health informatics and AI officer for Fairview Health Services, both in Minneapolis.

Finally, Khan Siddiqui, MD, a radiologist and serial entrepreneur who is the cofounder, chairman, and CEO of HOPPR AI, will present the view from industry. HOPPR AI is a for-profit company focused on building AI apps for medical imaging. As a radiologist, Dr. Siddiqui represents a medical specialty expected to undergo sweeping change as AI is incorporated into image reading and diagnosis. His comments will focus on professional insights relevant to surgeons.

Their presentations will provide insights on general usage of AI at present, as well as predictions on what the landscape for AI in healthcare will look like in approximately 5 years. The session will include advice on what approaches to AI may be most effective for surgeons interested in ensuring positive outcomes and avoiding negative ones.

AI usage is also a recurring topic throughout Clinical Congress 2025. In addition to various sessions that will touch on AI across the 4 days of the conference, researchers will present studies that involve AI in their methods, starting presumptions, and/or potential applications to practice.

Access the Interactive Program Planner for more details about Clinical Congress 2025 sessions.


