Five9 Honored by Opus Research for AI Innovation and Scalable CX

Five9 (NASDAQ: FIVN), provider of the Intelligent CX Platform, today announced that it has been recognized by Opus Research in its latest Intelliview Report for its enterprise-focused approach to conversational AI and CX delivery.

Independent analysis by Opus Research explores current trends in CX transformation, noting that customer experience has become a key competitive differentiator for brand loyalty and that enterprises unable to deliver seamless, intelligent interactions risk losing customers quickly. According to the Five9-commissioned Business Leaders CX Report, 84% of surveyed business leaders reported a significant increase in customer interaction volumes over the past two years. The findings underscore the need for scalable, AI-enhanced CX solutions that help businesses meet rising expectations while reducing friction and operational complexity. Identified by Opus Research as a “Pragmatist,” Five9 embeds generative AI into a mature CCaaS platform with governance frameworks and enterprise operational readiness built in, blending innovation with pragmatic delivery to provide advanced automation and orchestration.

“Enterprises today are under intense pressure to innovate with AI while ensuring trust, compliance, and business outcomes,” said Andy Dignan, President, Five9. “This recognition from Opus Research affirms our role as a trusted partner in helping organizations move beyond AI experimentation and scale with confidence. By providing more trust, innovation, and measurable outcomes, Five9 empowers enterprises to navigate the hype cycle and unlock AI’s true impact as a driver of competitive advantage.”

Key findings from the report recognize Five9 for its innovative approach, including:

  • Enterprise-Ready AI: Delivers conversational AI that is production-ready from day one, complete with governance, guardrails, and compliance frameworks.
  • Pragmatic Innovation: Blends cutting-edge AI features with proven CCaaS maturity, enabling faster ROI and reduced implementation risks.
  • Trust & Governance: Earns top-tier scores in security, observability, and risk mitigation, making Five9 a strong choice for regulated industries.
  • Unified Orchestration: Ensures seamless context continuity and orchestration across self-service and live channels, reducing customer friction and improving resolution rates.

Opus Research’s analysis signals a market shift toward “Pragmatic AI” – AI designed for practical application to help solve real-world problems – supporting higher standards for trust and governance in AI-powered CX. By combining innovation with reliability, Five9 demonstrates how AI can help drive business outcomes beyond cost savings – enabling scalable, secure, and impactful customer experiences.

“Five9 has carved out a spot as a pragmatic leader in enterprise AI,” said Ian Jacobs, VP & Lead Analyst at Opus Research. “It’s blending governance, orchestration, and real-world results with the kind of CCaaS maturity enterprises already trust. That combination gives companies a way to move fast with AI innovation—without losing sight of compliance, efficiency, or customer trust. It’s why Five9 is becoming a go-to partner for organizations that want to scale AI with confidence, not chaos.”

The above findings are derived from the Opus Research Intelliview Report. All opinions and conclusions are those of the original authors. This press release may contain forward-looking statements. Actual results may differ materially from those projected or implied. Five9 does not guarantee any outcomes related to the use of its services. All trademarks and product names are the property of their respective owners.



Hackers exploit hidden prompts in AI images, researchers warn

Cybersecurity firm Trail of Bits has revealed a technique that embeds malicious prompts into images processed by large language models (LLMs). The method exploits how AI platforms compress and downscale images for efficiency. While the original files appear harmless, the resizing process introduces visual artifacts that expose concealed instructions, which the model interprets as legitimate user input.

In tests, the researchers demonstrated that such manipulated images could direct AI systems to perform unauthorized actions. One example showed Google Calendar data being siphoned to an external email address without the user’s knowledge. Platforms affected in the trials included Google’s Gemini CLI, Vertex AI Studio, Google Assistant on Android, and Gemini’s web interface.

The approach builds on earlier academic work from TU Braunschweig in Germany, which identified image scaling as a potential attack surface in machine learning. Trail of Bits expanded on this research, creating “Anamorpher,” an open-source tool that generates malicious images using interpolation techniques such as nearest neighbor, bilinear, and bicubic resampling.
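
To make the mechanism concrete, here is a minimal toy of the scaling trick (a sketch, not Anamorpher itself): it emulates a nearest-neighbor resampler that keeps one source pixel per block, so content placed only on that sampling grid is easy to overlook at full resolution yet survives downscaling intact. The downscale factor and sampling offset below are illustrative assumptions; real attacks fingerprint the exact resampler and offsets a target platform uses.

```python
# Toy illustration of an image-scaling payload (assumptions: nearest-neighbor
# resampling with a known factor and offset; real pipelines vary).
import numpy as np

factor, offset = 4, 2  # assumed downscale factor and per-block sampling offset
rng = np.random.default_rng(0)

# Bright "cover" noise at full resolution; a dark "payload" (stand-in for
# rendered prompt text) at the downscaled resolution.
cover = rng.integers(200, 256, size=(1024, 1024), dtype=np.uint8)
payload = rng.integers(0, 56, size=(256, 256), dtype=np.uint8)

# Hide the payload on exactly the pixels the resampler will sample.
stego = cover.copy()
stego[offset::factor, offset::factor] = payload

# Emulated nearest-neighbor downscale: keep one pixel per 4x4 block.
downscaled = stego[offset::factor, offset::factor]

assert np.array_equal(downscaled, payload)  # hidden content survives intact
```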

From the user’s perspective, nothing unusual occurs when such an image is uploaded. Yet behind the scenes, the AI system executes hidden commands alongside normal prompts, raising serious concerns about data security and identity theft. Because multimodal models often integrate with calendars, messaging, and workflow tools, the risks extend into sensitive personal and professional domains.

Traditional defenses such as firewalls cannot easily detect this type of manipulation. The researchers instead recommend layered safeguards: previewing downscaled images before they reach the model, restricting input dimensions, and requiring explicit confirmation for sensitive operations.
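
The first of those recommendations is easy to act on. Below is a minimal sketch, assuming Pillow is installed and using 512x512 as a stand-in for a platform’s (often undocumented) target resolution, that renders what a model would actually receive under each common resampling filter so a human can inspect the result before uploading.

```python
# Render the downscaled versions of an image for human review. Scaling
# attacks are typically tuned to one specific resampler, so a hidden
# pattern may appear in some previews and not others.
from PIL import Image

def preview_downscaled(path: str, size: tuple[int, int] = (512, 512)) -> None:
    img = Image.open(path).convert("RGB")
    for name, resample in [
        ("nearest", Image.Resampling.NEAREST),
        ("bilinear", Image.Resampling.BILINEAR),
        ("bicubic", Image.Resampling.BICUBIC),
    ]:
        img.resize(size, resample=resample).save(f"preview_{name}.png")

preview_downscaled("upload.png")  # hypothetical input file
```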

“The strongest defense is to implement secure design patterns and systematic safeguards that limit prompt injection, including multimodal attacks,” the Trail of Bits team concluded.



When AI Freezes Over | Psychology Today

A phrase I’ve often clung to regarding artificial intelligence is one that is also cloaked in a bit of techno-mystery. And I bet you’ve heard it as part of the lexicon of technology and imagination: “emergent abilities.” It’s common to hear that large language models (LLMs) have these curious “emergent” behaviors that are often coupled with linguistic partners like scaling and complexity. And yes, I’m guilty too.

In AI research, this phrase first took off after a 2022 paper that described how abilities seem to appear suddenly as models scale: tasks that a small model fails at completely, a larger model suddenly handles with ease. One day a model can’t solve math problems, the next day it can. It’s an irresistible story: machines having their own little Archimedean “eureka!” moments. It’s almost as if “intelligence” has suddenly switched on.

But I’m not buying into the sensation, at least not yet. A newer 2025 study suggests we should be more careful. Instead of magical leaps, what we’re seeing looks a lot more like the physics of phase changes.

Ice, Water, and Math

Think about water. At one temperature it’s liquid, at another it’s ice. The molecules don’t become something new—they’re always two hydrogens and an oxygen—but the way they organize shifts dramatically. At the freezing point, hydrogen bonds “loosely set” into a lattice, driven by those fleeting electrical charges on the hydrogen atoms. The result is ice, the same ingredients reorganized into a solid that’s curiously less dense than liquid water. And, yes, there’s even a touch of magic in the science as ice floats. But that magic melts when you learn about van der Waals forces.

The same kind of shift shows up in LLMs and is often mislabeled as “emergence.” In small models, the easiest strategy is positional, where computation leans on word order and simple statistical shortcuts. It’s an easy trick that works just enough to reduce error. But scale things up by using more parameters and data, and the system reorganizes. The 2025 study by Cui shows that, at a critical threshold, the model shifts into semantic mode and relies on the geometry of meaning in its high-dimensional vector space. It isn’t magic, it’s optimization. Just as water molecules align into a lattice, the model settles into a more stable solution in its mathematical landscape.

The Mirage of “Emergence”

That 2022 paper called these shifts emergent abilities. And yes, tasks like arithmetic or multi-step reasoning can look as though they “switch on.” But the model hasn’t suddenly “understood” arithmetic. What’s happening is that semantic generalization finally outperforms positional shortcuts once scale crosses a threshold. Yes, it’s a mouthful. In plain terms, the computation shifts from leaning on simple word position in a prompt (like, the cat in the _____) to exploiting a high-dimensional space where semantic associations across thousands of dimensions give it its power.

And those sudden jumps? They’re often illusions. On simple pass/fail tests, a model can look stuck at zero until it finally tips over the line and then it seems to leap forward. In reality, it was improving step by step all along. The so-called “light-bulb moment” is really just a quirk of how we measure progress. No emergence, just math.
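
A small numerical sketch shows how that illusion arises. Assume, purely for illustration, that a model’s chance of getting any single reasoning step right improves smoothly with scale; an exact-match task requiring ten correct steps in a row then sits near zero for most of that range and appears to “switch on” late.

```python
# Smooth per-step improvement vs. an all-or-nothing score: only the
# pass/fail metric shows a sudden "emergent" jump. Numbers are illustrative.
import math

STEPS = 10  # exact-match task: all ten steps must be right

def per_step_accuracy(scale: float) -> float:
    # A smooth logistic improvement in single-step accuracy with scale.
    return 1 / (1 + math.exp(-(scale - 5)))

for scale in range(1, 11):
    p = per_step_accuracy(scale)
    print(f"scale={scale:2d}  per-step={p:.2f}  exact-match={p ** STEPS:.3f}")
```

The per-step curve rises gradually the whole way; only the thresholded score leaps.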

Why “Emergence” Is So Seductive

Why does the language of “emergence” stick? Because it borrows from biology and philosophy. Life “emerges” from chemistry; consciousness “emerges” from neurons. It makes LLMs sound like they’re undergoing cognitive leaps. Some argue emergence is a hallmark of complex systems, and there’s truth to that. So, to a degree, the term does capture the idea of surprising shifts.

But we need to be careful. What’s happening here is still math, not mind. Calling it emergence risks sliding into anthropomorphism, where sudden performance shifts are mistaken for genuine understanding. And it happens all the time.

A Useful Imitation

The 2022 paper gave us the language of “emergence.” The 2025 paper shows that what looks like emergence is really closer to a high-complexity phase change. It’s the same math and the same machinery. At small scales, positional tricks (word sequence) dominate. At large scales, semantic structures (multidimensional linguistic analysis) win out.

No insight, no spark of consciousness. It’s just a system reorganizing under new constraints. And this supports my larger thesis: What we’re witnessing isn’t intelligence at all, but anti-intelligence, a powerful, useful imitation that mimics the surface of cognition without the interior substance that only a human mind offers.

So the next time you hear about an LLM with “emergent ability,” don’t imagine Archimedes leaping from his bath. Picture water freezing. The same molecules, new structure. The same math, new mode. What looks like insight is just another phase of anti-intelligence that is complex, fascinating, even beautiful in its way, but not to be mistaken for a mind.



MIT Researchers Develop AI Tool to Improve Flu Vaccine Strain Selection

Insider Brief

  • MIT researchers have developed VaxSeer, an AI system that predicts which influenza strains will dominate and which vaccines will offer the best protection, aiming to reduce guesswork in seasonal flu vaccine selection.
  • Using deep learning on decades of viral sequences and lab data, VaxSeer outperformed the World Health Organization’s strain choices in 9 of 10 seasons for H3N2 and 6 of 10 for H1N1 in retrospective tests.
  • Published in Nature Medicine, the study suggests VaxSeer could improve vaccine effectiveness and may eventually be applied to other rapidly evolving health threats such as antibiotic resistance or drug-resistant cancers.

MIT researchers have unveiled an artificial intelligence tool designed to improve how seasonal influenza vaccines are chosen, potentially reducing the guesswork that often leaves health officials a step behind the fast-mutating virus.

The study, published in Nature Medicine, was authored by lead researcher Wenxian Shi along with Regina Barzilay, Jeremy Wohlwend, and Menghua Wu. It was supported in part by the U.S. Defense Threat Reduction Agency and MIT’s Jameel Clinic.

According to MIT, the system, called VaxSeer, was developed by scientists at MIT’s Computer Science and Artificial Intelligence Laboratory and the MIT Jameel Clinic for Machine Learning in Health. It uses deep learning models trained on decades of viral sequences and lab results to forecast which flu strains are most likely to dominate and how well candidate vaccines will work against them. Unlike traditional approaches that evaluate single mutations in isolation, VaxSeer’s large protein language model can capture the combined effects of multiple mutations and model shifting viral dominance more accurately.

“VaxSeer adopts a large protein language model to learn the relationship between dominance and the combinatorial effects of mutations,” Shi noted. “Unlike existing protein language models that assume a static distribution of viral variants, we model dynamic dominance shifts, making it better suited for rapidly evolving viruses like influenza.”

In retrospective tests covering ten years of flu seasons, VaxSeer’s strain recommendations outperformed those of the World Health Organization in nine of ten cases for H3N2 influenza, and in six of ten cases for H1N1, researchers said. In one notable example, the system correctly identified a strain for 2016 that the WHO did not adopt until the following year. Its predictions also showed strong correlation with vaccine effectiveness estimates reported by U.S., Canadian, and European surveillance networks.

The tool works in two parts: one model predicts which viral strains are most likely to spread, while another evaluates how effectively antibodies from vaccines can neutralize them in common hemagglutination inhibition assays. These predictions are then combined into a coverage score, which estimates the likely effectiveness of a candidate vaccine months before flu season begins.
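
As a hedged illustration of how two such predictions might combine, the sketch below computes a dominance-weighted expected effectiveness: each strain’s predicted effectiveness is weighted by its predicted share of circulation. This is one plausible reading of the description above, not VaxSeer’s published formulation, and the strain names and numbers are invented.

```python
# Toy coverage score: dominance-weighted effectiveness. All values invented.
dominance = {"strain_A": 0.55, "strain_B": 0.30, "strain_C": 0.15}   # model 1
effectiveness = {                                                     # model 2
    "vaccine_X": {"strain_A": 0.70, "strain_B": 0.40, "strain_C": 0.20},
    "vaccine_Y": {"strain_A": 0.50, "strain_B": 0.65, "strain_C": 0.60},
}

def coverage(vaccine: str) -> float:
    # Expected effectiveness against the predicted mix of circulating strains.
    return sum(dominance[s] * effectiveness[vaccine][s] for s in dominance)

for v in effectiveness:
    print(v, round(coverage(v), 3))   # vaccine_X: 0.535, vaccine_Y: 0.56
```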

“Given the speed of viral evolution, current therapeutic development often lags behind. VaxSeer is our attempt to catch up,” Barzilay noted.


