AI Research

Quantum machine learning (QML) is closer than you think: Why business leaders should start paying attention now


The enterprise technology landscape is witnessing a remarkable shift. While most discussions around quantum computing focus on distant breakthroughs and theoretical applications, a quiet revolution is happening at the intersection of quantum systems and machine learning. Quantum machine learning (QML) is transitioning from academic curiosity to a practical business tool, and the timeline for enterprise adoption may be shorter than many anticipate.

The quantum advantage: Beyond classical limitations

To appreciate how QML is evolving, and why that evolution could reshape business technology, it helps to first understand how quantum computing differs from classical computing. Traditional computers process information in binary states, using ones and zeros. Quantum computers, however, operate on quantum bits (qubits) that can exist in multiple states simultaneously through a phenomenon called superposition. This fundamental difference enables quantum systems to process complex, interdependent variables at scales and speeds that classical machines simply cannot match.
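
The idea of superposition can be made concrete with a tiny classical simulation. The sketch below (using NumPy, and of course only simulating quantum behavior, not running on quantum hardware) represents a qubit as a two-component state vector and applies a Hadamard gate to place it in an equal superposition of its two basis states:

```python
import numpy as np

# A single qubit is a 2-component complex state vector.
# |0> and |1> play the role of the classical bit values 0 and 1.
ket0 = np.array([1, 0], dtype=complex)

# The Hadamard gate puts a qubit into an equal superposition of |0> and |1>.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

state = H @ ket0  # (|0> + |1>) / sqrt(2)

# Measurement probabilities are the squared amplitudes: 50/50 here.
probs = np.abs(state) ** 2
print(probs)  # [0.5 0.5]
```

Classically simulating n qubits requires a state vector of 2^n amplitudes, which is exactly why quantum hardware becomes interesting as problems scale.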

While current quantum hardware still faces significant limitations, including error rates, decoherence, and the need for extreme cooling, consistent progress in quantum simulation and optimization is demonstrating that the technology can deliver practical value. The key insight is that quantum systems don't need to be perfect to be useful; they need to be better than classical alternatives for specific problem sets.

Why QML matters: Unlocking new performance frontiers

The rapid growth of AI has helped unlock QML's potential by creating an ecosystem of models and workflows into which quantum components can be integrated. QML represents a hybrid approach that combines quantum circuits with classical machine learning models to unlock performance improvements in targeted, data-intensive domains. This isn't about replacing classical AI wholesale; it's about identifying specific use cases where quantum advantages can be leveraged within existing enterprise AI workflows.
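
The hybrid pattern typically looks like this: a parameterized quantum circuit produces measurement outcomes, and a classical optimizer adjusts the circuit's parameters in an outer loop. The toy sketch below simulates a one-qubit variational circuit classically and trains its single angle with the parameter-shift rule; it is an illustration of the quantum-classical loop under simplifying assumptions, not a production QML pipeline.

```python
import numpy as np

def ry(theta):
    """Single-qubit RY rotation gate (simulated as a 2x2 matrix)."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def expectation_z(theta):
    """Run |0> through RY(theta) and return the <Z> expectation value."""
    state = ry(theta) @ np.array([1.0, 0.0])
    # <Z> = |amplitude of 0|^2 - |amplitude of 1|^2
    return state[0] ** 2 - state[1] ** 2

# Classical outer loop: tune theta so the circuit's output matches a target.
target, theta, lr = -1.0, 0.1, 0.4
for _ in range(200):
    # Parameter-shift rule: an exact gradient obtained from two circuit runs.
    grad = (expectation_z(theta + np.pi / 2) - expectation_z(theta - np.pi / 2)) / 2
    loss_grad = 2 * (expectation_z(theta) - target) * grad
    theta -= lr * loss_grad

print(expectation_z(theta))  # converges toward the target of -1.0
```

The design point worth noting is the parameter-shift rule: gradients come from running the circuit at shifted parameter values, which is how real hybrid stacks backpropagate through quantum hardware they cannot differentiate directly.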

Early-stage experimentation across industries is already demonstrating measurable improvements:

  • Accelerated training: Complex models that typically require extensive computational resources can be trained more efficiently using quantum-enhanced algorithms, reducing both time-to-insight and energy consumption.
  • High-dimensional data handling: Quantum systems excel at processing datasets with many variables and sparse data points, scenarios where classical methods often struggle or require significant preprocessing.
  • Enhanced accuracy with limited data: QML can achieve greater model accuracy with smaller sample sizes, particularly valuable in regulated industries or specialized domains where data is scarce or expensive to obtain.

The timeline is shortening: From theory to practice

One of the most compelling aspects of QML is how well its inherently probabilistic nature aligns with modern generative AI and uncertainty modeling. Just as classical computing advanced despite early hardware imperfections, current-generation quantum systems are producing measurable results in narrow but high-value use cases.

The progression mirrors the early days of cloud computing or AI: initial skepticism gave way to pilot projects, which demonstrated clear value in specific applications, ultimately leading to widespread enterprise adoption. Today’s quantum systems may be imperfect, but they’re becoming increasingly consistent in delivering advantages for well-defined problem sets.

What enterprises can do today: Practical entry points

Organizations don’t need to wait for quantum hardware perfection to begin exploring value. Several practical entry points offer immediate opportunities for experimentation and learning:

  1. Risk scenario simulation: Financial services and insurance companies can use quantum systems to simulate rare or complex risk scenarios that are computationally intensive for classical systems. This includes stress testing portfolios under extreme market conditions or modeling catastrophic insurance events.
  2. Enhanced forecasting: Quantum-inspired sampling techniques can improve forecasting accuracy and sensitivity analysis, particularly for supply chain optimization, demand planning, and resource allocation.
  3. Synthetic data generation: In heavily regulated industries or data-scarce environments, QML can generate high-quality synthetic datasets that preserve statistical properties while ensuring compliance with privacy regulations.
  4. Anomaly detection: Quantum systems excel at identifying subtle patterns and anomalies in complex datasets, particularly valuable for fraud detection, cybersecurity, and quality control applications.
  5. Specialized industry applications: Early adopters are finding success in claims forecasting, patient risk stratification, drug efficacy modeling, and portfolio optimization — areas where the quantum advantage directly translates to business value.

Building quantum readiness: Strategic considerations

For enterprise leaders considering QML adoption, the focus should be on building organizational readiness rather than waiting for perfect technology. This means investing in quantum literacy across technical teams, identifying use cases where quantum advantages align with business priorities, and developing partnerships with quantum computing providers and research institutions.

The talent dimension is particularly critical. Organizations that begin developing quantum expertise today, whether by retraining existing data scientists or recruiting quantum-aware talent, will have significant advantages as the ecosystem matures. This isn't just about understanding quantum mechanics; it's about recognizing how quantum capabilities can be integrated into existing AI and data science workflows.

The enterprise imperative: Early movers’ advantage

QML is no longer confined to research laboratories. It’s becoming a tool with real strategic potential, offering competitive advantages for organizations willing to invest in early-stage experimentation. The companies that begin building quantum capabilities today — starting with awareness, progressing to experimentation, and developing internal expertise — will be best positioned to capitalize on the technology as it continues to mature.

The question isn’t whether QML will impact enterprise AI, but rather when and how. Organizations that treat quantum computing as a distant future technology risk being left behind by competitors who recognize its emerging practical value. The time for quantum awareness and preparation is now.

As we’ve learned from previous technology transitions, the companies that lead aren’t always those with the most resources; they’re the ones that recognize inflection points earliest and act decisively. For QML, that inflection point is approaching faster than most expect.

Learn more about EXL’s data and AI capabilities here.

Anand “Andy” Logani is executive vice president and chief digital and AI officer at EXL, a global data and AI company.




AI Research

Radiomics-Based Artificial Intelligence and Machine Learning Approach for the Diagnosis and Prognosis of Idiopathic Pulmonary Fibrosis: A Systematic Review – Cureus


AI Research

A Real-Time Look at How AI Is Reshaping Work : Information Sciences Institute


Artificial intelligence may take over some tasks and transform others, but one thing is certain: it’s reshaping the job market. Researchers at USC’s Information Sciences Institute (ISI) analyzed LinkedIn job postings and AI-related patent filings to measure which jobs are most exposed, and where those changes are happening first. 

The project was led by ISI research assistant Eun Cheol Choi, working with students in a graduate-level USC Annenberg data science course taught by USC Viterbi Research Assistant Professor Luca Luceri. The team developed an “AI exposure” score to measure how closely each role is tied to current AI technologies. A high score suggests the job may be affected by automation, new tools, or shifts in how the work is done. 
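
The article does not detail how the exposure score is computed, but the general idea of text-based exposure scoring can be sketched as vocabulary overlap between a job posting and AI patent text. Everything below (the term lists, function names, and scoring choice) is a hypothetical illustration, not the ISI team's actual methodology.

```python
from collections import Counter
import math

def tokenize(text):
    """Crude whitespace tokenizer with basic punctuation stripping."""
    return [w.strip(".,").lower() for w in text.split()]

def cosine_similarity(a, b):
    """Cosine similarity between two bag-of-words token lists."""
    ca, cb = Counter(a), Counter(b)
    dot = sum(ca[t] * cb[t] for t in ca)
    norm = (math.sqrt(sum(v * v for v in ca.values()))
            * math.sqrt(sum(v * v for v in cb.values())))
    return dot / norm if norm else 0.0

# Stand-in vocabulary drawn from (hypothetical) AI patent filings.
ai_patent_terms = tokenize("neural network training model inference generative transformer")

software_job = tokenize("build and deploy machine learning model training pipelines")
paralegal_job = tokenize("prepare legal filings and coordinate with clients")

print(cosine_similarity(software_job, ai_patent_terms))   # higher exposure
print(cosine_similarity(paralegal_job, ai_patent_terms))  # near zero
```

A real pipeline would use richer representations (for example, embeddings over full posting and patent text), but the contrast between the two toy postings mirrors the software-engineer-high, paralegal-low pattern the study reports.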

Which Industries Are Most Exposed to AI?

To understand how exposure shifted with new waves of innovation, the researchers compared patent data from before and after a major turning point. “We split the patent dataset into two parts, pre- and post-ChatGPT release, to see how job exposure scores changed in relation to fresh innovations,” Choi said. Released in late 2022, ChatGPT triggered a surge in generative AI development, investment, and patent filings.

Jobs in wholesale trade, transportation and warehousing, information, and manufacturing topped the list in both periods. Retail also showed high exposure early on, while healthcare and social assistance rose sharply after ChatGPT, likely due to new AI tools aimed at diagnostics, medical records, and clinical decision-making.

In contrast, education and real estate consistently showed low exposure, suggesting they are, at least for now, less likely to be reshaped by current AI technologies.

AI’s Reach Depends on the Role

AI exposure doesn’t just vary by industry; it also depends on the specific type of work. Jobs like software engineer and data scientist scored highest, since they involve building or deploying AI systems. Roles in manufacturing and repair, such as maintenance technician, also showed elevated exposure due to increased use of AI in automation and diagnostics.

At the other end of the spectrum, jobs like tax accountant, HR coordinator, and paralegal showed low exposure. They center on work that’s harder for AI to automate: nuanced reasoning, domain expertise, or dealing with people.

AI Exposure and Salary Don’t Always Move Together

The study also examined how AI exposure relates to pay. In general, jobs with higher exposure to current AI technologies were associated with higher salaries, likely reflecting the demand for new AI skills. That trend was strongest in the information sector, where software and data-related roles were both highly exposed and well compensated.

But in sectors like wholesale trade and transportation and warehousing, the opposite was true. Jobs with higher exposure in these industries tended to offer lower salaries, especially at the highest exposure levels. The researchers suggest this may signal the early effects of automation, where AI is starting to replace workers instead of augmenting them.

“In some industries, there may be synergy between workers and AI,” said Choi. “In others, it may point to competition or replacement.”

From Class Project to Ongoing Research

The contrast between industries where AI complements workers and those where it may replace them is something the team plans to investigate further. They hope to build on their framework by distinguishing between different types of impact — automation versus augmentation — and by tracking the emergence of new job categories driven by AI. “This kind of framework is exciting,” said Choi, “because it lets us capture those signals in real time.”

Luceri emphasized the value of hands-on research in the classroom: “It’s important to give students the chance to work on relevant and impactful problems where they can apply the theoretical tools they’ve learned to real-world data and questions,” he said. The paper, Mapping Labor Market Vulnerability in the Age of AI: Evidence from Job Postings and Patent Data, was co-authored by students Qingyu Cao, Qi Guan, Shengzhu Peng, and Po-Yuan Chen, and was presented at the 2025 International AAAI Conference on Web and Social Media (ICWSM), held June 23-26 in Copenhagen, Denmark.

Published on July 7th, 2025

Last updated on July 7th, 2025




AI Research

SERAM collaborates on AI-driven clinical decision project


The Spanish Society of Medical Radiology (SERAM) has collaborated with six other scientific societies to develop an AI-supported urology clinical decision-making project called Uro-Oncogu(IA)s.

Uro-Oncogu(IA)s project team. Image courtesy of SERAM.

The initiative produced an algorithm that will “reduce time and clinical variability” in the management of urological patients, the society said. SERAM’s collaborators include the Spanish Urology Association (AEU), the Foundation for Research in Urology (FIU), the Spanish Society of Pathological Anatomy (SEAP), the Spanish Society of Hospital Pharmacy (SEFH), the Spanish Society of Nuclear Medicine and Molecular Imaging (SEMNIM), and the Spanish Society of Radiation Oncology (SEOR).

SERAM Secretary General Dr. María Luz Parra launched the project in Madrid on 3 July with AEU President Dr. Carmen González.

On behalf of SERAM, the following doctors participated in this initiative:

  • Prostate cancer guide: Dr. Joan Carles Vilanova, PhD, of the University of Girona,
  • Upper urinary tract guide: Dr. Richard Mast of University Hospital Vall d’Hebron in Barcelona,
  • Muscle-invasive bladder cancer guide: Dr. Eloy Vivas of the University of Malaga,
  • Non-muscle invasive bladder cancer guide: Dr. Paula Pelechano of the Valencian Institute of Oncology in Valencia,
  • Kidney cancer guide: Dr. Nicolau Molina of the University of Barcelona.


