AI Research

SMX, Google Launch Pilot for AI-Driven Military Intelligence



SMX, in partnership with Google Public Sector and World Wide Technology, has launched a pilot program to integrate advanced artificial intelligence, machine learning and digital productivity capabilities into real-world military operations.

Bringing AI & Machine Learning to Real-World Military Missions

The technology and mission-focused services provider said Tuesday the program, under the Long-Range Enterprise Intelligence, Surveillance and Reconnaissance, or LEIA, task order, will leverage Google AI optical character recognition, Google Translate and machine learning technologies to enhance operational workflows, communications and decision-making on the front lines. The program’s main objective is to establish an operations center that operationalizes these capabilities for command and control operations.

Remarks From SMX, Google Public Sector Executives

“This initiative represents a new chapter in how the DOD applies secure, scalable commercial technology to mission operations. Our team at SMX led the development and orchestration of this partnership to support LEIA long-term strategic objectives,” stated Dana Dewey, president of global defense at SMX. 

“We’re proud to work with SMX and the LEIA team to help bring transformative AI capabilities to the defense industry,” said Jan Niemiec, managing director of national security at Google Public Sector. “By combining secure cloud collaboration with generative AI and machine learning, we’re delivering new tools that help improve agility, reduce friction and enhance mission readiness,” he added.








How London Stock Exchange Group is detecting market abuse with their AI-powered Surveillance Guide on Amazon Bedrock



London Stock Exchange Group (LSEG) is a global provider of financial markets data and infrastructure. It operates the London Stock Exchange and manages international equity, fixed income, and derivative markets. The group also develops capital markets software, offers real-time and reference data products, and provides extensive post-trade services. This post was co-authored with Charles Kellaway and Rasika Withanawasam of LSEG.

Financial markets are remarkably complex, hosting increasingly dynamic investment strategies across new asset classes and interconnected venues. Accordingly, regulators place great emphasis on the ability of market surveillance teams to keep pace with evolving risk profiles. However, the landscape is vast; London Stock Exchange alone facilitates the trading and reporting of over £1 trillion of securities by 400 members annually. Effective monitoring must cover all MiFID asset classes, markets and jurisdictions to detect market abuse, while also giving weight to participant relationships, and market surveillance systems must scale with volumes and volatility. As a result, many systems are outdated and unsatisfactory for regulatory expectations, requiring manual and time-consuming work.

To address these challenges, London Stock Exchange Group (LSEG) has developed an innovative solution using Amazon Bedrock, a fully managed service that offers a choice of high-performing foundation models from leading AI companies, to automate and enhance their market surveillance capabilities. LSEG’s AI-powered Surveillance Guide helps analysts efficiently review trades flagged for potential market abuse by automatically analyzing news sensitivity and its impact on market behavior.

In this post, we explore how LSEG used Amazon Bedrock and Anthropic’s Claude foundation models to build an automated system that significantly improves the efficiency and accuracy of market surveillance operations.

The challenge

Currently, LSEG’s surveillance monitoring systems generate automated, customized alerts to flag suspicious trading activity to the Market Supervision team. Analysts then conduct initial triage assessments to determine whether the activity warrants further investigation, which might require differing levels of qualitative analysis. This could involve manually collating any and all applicable evidence to methodically corroborate regulation, news, sentiment and trading activity. For example, during an insider dealing investigation, analysts are alerted to statistically significant price movements. The analyst must then conduct an initial assessment of related news during the observation period to determine whether the highlighted price move was caused by specific news and gauge its likely price sensitivity, as shown in the following figure. This initial step in assessing the presence, or absence, of price-sensitive news guides the subsequent actions an analyst will take with a possible case of market abuse.

Initial triaging can be a time-consuming and resource-intensive process and still necessitate a full investigation if the identified behavior remains potentially suspicious or abusive.

Moreover, the dynamic nature of financial markets and the evolving tactics and sophistication of bad actors demand that market facilitators revisit automated rules-based surveillance systems. The increasing frequency of alerts and the high number of false positives adversely impact an analyst’s ability to devote quality time to the most meaningful cases, and the resulting strain on resources can cause operational delays.

Solution overview

To address these challenges, LSEG collaborated with AWS to improve insider dealing detection, developing a generative AI prototype that automatically predicts the probability of news articles being price sensitive. The system employs Anthropic’s Claude 3.5 Sonnet model—the most price-performant model at the time—through Amazon Bedrock to analyze news content from LSEG’s Regulatory News Service (RNS) and classify articles based on their potential market impact. The results help analysts more quickly determine whether highlighted trading activity can be mitigated during the observation period.

The architecture consists of three main components:

  • A data ingestion and preprocessing pipeline for RNS articles
  • Amazon Bedrock integration for news analysis using Claude 3.5 Sonnet
  • Inference application for visualizing results and predictions

The following diagram illustrates the conceptual approach:

Conceptual approach showing data and process flow

The workflow processes news articles through the following steps:

  1. Ingest raw RNS news documents in HTML format
  2. Preprocess and extract clean news text
  3. Fill the classification prompt template with text from the news documents
  4. Prompt Anthropic’s Claude 3.5 Sonnet through Amazon Bedrock
  5. Receive and process model predictions and justifications
  6. Present results through the visualization interface developed using Streamlit
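The numbered steps above can be sketched end to end. The model ID, prompt template, and helper names below are illustrative assumptions (not LSEG's actual code), and the Bedrock client is passed in as a parameter so the flow can be exercised without AWS credentials; the call shape follows the Bedrock Converse API.

```python
import re

# Illustrative model ID string; the post names Anthropic's Claude 3.5 Sonnet.
MODEL_ID = "anthropic.claude-3-5-sonnet-20240620-v1:0"  # assumption

# Hypothetical classification prompt template (step 3).
PROMPT_TEMPLATE = (
    "Classify the price sensitivity of the following RNS article "
    "and justify your answer.\n\nArticle:\n{article}"
)

def strip_html(raw_html: str) -> str:
    """Step 2: minimal stand-in for the real preprocessing pipeline."""
    return " ".join(re.sub(r"<[^>]+>", " ", raw_html).split())

def classify_article(client, raw_html: str) -> str:
    """Steps 1-5: ingest raw HTML, clean it, fill the prompt, call Bedrock."""
    text = strip_html(raw_html)                    # steps 1-2
    prompt = PROMPT_TEMPLATE.format(article=text)  # step 3
    resp = client.converse(                        # step 4: Bedrock Converse API
        modelId=MODEL_ID,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    # Step 5: extract the model's prediction and justification text.
    return resp["output"]["message"]["content"][0]["text"]
```

In production the client would be `boto3.client("bedrock-runtime")`; injecting it keeps the pipeline testable with a stub.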

Methodology

The team collated a comprehensive dataset of approximately 250,000 RNS articles spanning 6 consecutive months of trading activity in 2023. The raw data—HTML documents from RNS—were initially pre-processed within the AWS environment: extraneous HTML elements were removed and the documents formatted to extract clean textual content. Having isolated the substantive news content, the team carried out exploratory data analysis to understand distribution patterns within the RNS corpus, focusing on three dimensions:

  • News categories: Distribution of articles across different regulatory categories
  • Instruments: Financial instruments referenced in the news articles
  • Article length: Statistical distribution of document sizes
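The HTML-stripping step described above can be sketched with Python's standard-library parser. This is an illustrative stand-in, not LSEG's actual preprocessing code; it keeps visible text and drops script/style blocks:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect visible text from an HTML document, skipping script/style."""
    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip_depth = 0

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if not self._skip_depth and data.strip():
            self.parts.append(data.strip())

def clean_rns_html(doc: str) -> str:
    """Return the clean textual content of one raw RNS HTML document."""
    parser = TextExtractor()
    parser.feed(doc)
    return " ".join(parser.parts)
```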

Exploration provided contextual understanding of the news landscape and informed the sampling strategy in creating a representative evaluation dataset. A subset of 110 articles was selected to cover the major news categories, and this curated subset was presented to market surveillance analysts who, as domain experts, evaluated each article’s price sensitivity on a nine-point scale, as shown in the following image:

  • 1–3: PRICE_NOT_SENSITIVE – Low probability of price sensitivity
  • 4–6: HARD_TO_DETERMINE – Uncertain price sensitivity
  • 7–9: PRICE_SENSITIVE – High probability of price sensitivity

Screenshot of the News Price Sensitivity screen
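The nine-point scale maps onto the three labels mechanically; a small helper makes the banding explicit (label names follow the list above):

```python
def label_from_score(score: int) -> str:
    """Map an analyst's 1-9 price-sensitivity rating to its label band."""
    if not 1 <= score <= 9:
        raise ValueError("score must be in 1..9")
    if score <= 3:
        return "PRICE_NOT_SENSITIVE"   # 1-3: low probability of sensitivity
    if score <= 6:
        return "HARD_TO_DETERMINE"     # 4-6: uncertain
    return "PRICE_SENSITIVE"           # 7-9: high probability of sensitivity
```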

The experiment was executed within Amazon SageMaker using Jupyter Notebooks as the development environment. The technical stack consisted of:

  • Instructor library: Provided integration capabilities with Anthropic’s Claude 3.5 Sonnet model in Amazon Bedrock
  • Amazon Bedrock: Served as the API infrastructure for model access
  • Custom data processing pipelines (Python): For data ingestion and preprocessing

This infrastructure enabled systematic experimentation with various algorithmic approaches, including traditional supervised learning methods, prompt engineering with foundation models, and fine-tuning scenarios.

The evaluation framework established specific technical success metrics:

  1. Data pipeline implementation: Successful ingestion and preprocessing of RNS data
  2. Metric definition: Clear articulation of precision, recall, and F1 metrics
  3. Workflow completion: Execution of comprehensive exploratory data analysis (EDA) and experimental workflows
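The metrics named in step 2 are standard; for reference, a minimal implementation from confusion-matrix counts (true positives, false positives, false negatives):

```python
def precision_recall_f1(tp: int, fp: int, fn: int) -> tuple:
    """Compute precision, recall, and F1 from confusion-matrix counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0   # of flagged, how many correct
    recall = tp / (tp + fn) if tp + fn else 0.0      # of actual, how many caught
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)            # harmonic mean of the two
    return precision, recall, f1
```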

The analytical approach was a two-step classification process, as shown in the following figure:

  • Step 1: Classify news articles as potentially price sensitive or other
  • Step 2: Classify news articles as potentially price not sensitive or other

The two-step classification process

This multi-stage architecture was designed to maximize classification accuracy by allowing analysts to focus on specific aspects of price sensitivity at each stage. The results from each step were then merged to produce the final output, which was compared with the human-labeled dataset to generate quantitative results.

To consolidate the results from both classification steps, the data merging rules followed were:

Step 1 Classification | Step 2 Classification | Final Classification
Sensitive             | Other                 | Sensitive
Other                 | Non-sensitive         | Non-sensitive
Other                 | Other                 | Ambiguous (Hard to Determine) – requires manual review
Sensitive             | Non-sensitive         | Ambiguous (Hard to Determine) – requires manual review
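The merging rules in the table above translate directly into code; the label strings here are illustrative:

```python
def merge_classifications(step1: str, step2: str) -> str:
    """Consolidate the two-step classifier outputs per the merging rules.

    step1 output: 'Sensitive' or 'Other'
    step2 output: 'Non-sensitive' or 'Other'
    """
    if step1 == "Sensitive" and step2 == "Other":
        return "Sensitive"
    if step1 == "Other" and step2 == "Non-sensitive":
        return "Non-sensitive"
    # Both steps inconclusive, or contradictory: escalate to an analyst.
    return "Hard to Determine"
```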

Based on the insights gathered, prompts were optimized. The prompt templates elicited three key components from the model:

  • A concise summary of the news article
  • A price sensitivity classification
  • A chain-of-thought explanation justifying the classification decision
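In the experiment these three fields were obtained through the Instructor library's typed response models; the following is a dependency-free sketch of the same idea, assuming the model is asked to reply in simple tagged sections (an illustrative convention, not LSEG's exact output format):

```python
import re

def parse_model_output(raw: str) -> dict:
    """Extract the three elicited components from a tagged model reply.

    Assumes the prompt requested <summary>, <classification>, and
    <reasoning> sections; a missing section maps to None.
    """
    fields = {}
    for tag in ("summary", "classification", "reasoning"):
        m = re.search(rf"<{tag}>(.*?)</{tag}>", raw, re.DOTALL)
        fields[tag] = m.group(1).strip() if m else None
    return fields
```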

The following is an example prompt:

System prompt for the Step 2 (non-price-sensitive) classifier:
You are an expert financial analyst with deep knowledge of market dynamics, investor
    psychology, and the intricate relationships between news events and asset prices.
    Your core function is to analyze news articles and assess their likelihood of being
    non-price sensitive with unparalleled accuracy and insight.
Key aspects of your expertise include:
1. Market Dynamics: You have a comprehensive understanding of how financial markets
    operate, including the factors that typically drive price movements and those that
    are often overlooked by the market.
2. Investor Psychology: You possess keen insight into how different types of news affect
    investor sentiment and decision-making, particularly in distinguishing between
    information that causes reactions and information that doesn't.
3. News Analysis: You excel at dissecting financial news articles, identifying key
    elements, and determining their relevance (or lack thereof) to asset valuations and
    market movements.
4. Pattern Recognition: You can draw upon a vast knowledge of historical market 
    reactions to various types of news, allowing you to identify patterns of 
    non-impactful information.
5. Sector-Specific Knowledge: You understand the nuances of different industry sectors
    and how the importance of news can vary across them.
6. Regulatory Insight: You're well-versed in financial regulations and can identify when
    news does or doesn't meet thresholds for material information.
7. Macroeconomic Perspective: You can place company-specific news in the broader context
    of economic trends and assess whether it's likely to be overshadowed by larger market
    forces.
8. Quantitative Skills: You can evaluate financial metrics and understand when changes or
    announcements related to them are significant enough to impact prices.
Your primary task is to analyze given news articles and determine, with a high degree of
    confidence, whether they are likely to be non-price sensitive. This involves:
- Carefully examining the content and context of each news item
- Assessing its potential (or lack thereof) to influence investor decisions
- Considering both short-term and long-term implications
- Providing clear, well-reasoned justifications for your assessments
- Identifying key factors that support your conclusion
- Recommending further information that could enhance the analysis
- Offering insights that can help traders make more informed decisions
You should always maintain a conservative approach, erring on the side of caution. If
    there's any reasonable doubt about whether news could be price-sensitive, you should
    classify it as 'OTHER' rather than 'NOT_PRICE_SENSITIVE'.
Your analyses should be sophisticated yet accessible, catering to both experienced
    traders and those new to the market. Always strive for objectivity, acknowledging any
    uncertainties or limitations in your assessment.
Remember, your insights play a crucial role in helping traders filter out market noise
    and focus on truly impactful information, ultimately contributing to more effective
    and educated trading decisions.

As shown in the following figure, the solution was optimized to maximize:

  • Precision for the NOT SENSITIVE class
  • Recall for the PRICE SENSITIVE class

Confusion matrix and preliminary results summary

This optimization strategy was deliberate, facilitating high confidence in non-sensitive classifications to reduce unnecessary escalations to human analysts (in other words, to reduce false positives). Through this methodical approach, prompts were iteratively refined while maintaining rigorous evaluation standards through comparison against the expert-annotated baseline data.

Key benefits and results

Over a 6-week period, Surveillance Guide demonstrated remarkable accuracy when evaluated on a representative sample dataset. Key achievements include the following:

  • 100% precision in identifying non-sensitive news, allocating 6 articles to this category, all of which analysts confirmed were non-price-sensitive
  • 100% recall in detecting price-sensitive content, allocating the 36 hard-to-determine and 28 price-sensitive articles labelled by analysts into one of these two categories (never misclassifying price-sensitive content)
  • Automated analysis of complex financial news
  • Detailed justifications for classification decisions
  • Effective triaging of results by sensitivity level

In this implementation, LSEG employed Amazon Bedrock to gain secure, scalable access to foundation models through a unified API, minimizing the need for direct model management and reducing operational complexity. Because of Amazon Bedrock’s serverless architecture, LSEG can dynamically scale model inference capacity with news volume while maintaining consistent performance during market-critical periods. Built-in monitoring and governance features support reliable model performance and maintain audit trails for regulatory compliance.

Impact on market surveillance

This AI-powered solution transforms market surveillance operations by:

  • Reducing manual review time for analysts
  • Improving consistency in price-sensitivity assessment
  • Providing detailed audit trails through automated justifications
  • Enabling faster response to potential market abuse cases
  • Scaling surveillance capabilities without proportional resource increases

The system’s ability to process news articles instantly and provide detailed justifications helps analysts focus their attention on the most critical cases while maintaining comprehensive market oversight.

Proposed next steps

LSEG plans to first enhance the solution, for internal use, by:

  • Integrating additional data sources, including company financials and market data
  • Implementing few-shot prompting and fine-tuning capabilities
  • Expanding the evaluation dataset for continued accuracy improvements
  • Deploying in live environments alongside manual processes for validation
  • Adapting to additional market abuse typologies

Conclusion

LSEG’s Surveillance Guide demonstrates how generative AI can transform market surveillance operations. Powered by Amazon Bedrock, the solution improves efficiency and enhances the quality and consistency of market abuse detection.

As financial markets continue to evolve, AI-powered solutions architected along similar lines will become increasingly important for maintaining integrity and compliance. AWS and LSEG are intent on being at the forefront of this change.

The selection of Amazon Bedrock as the foundation model service provides LSEG with the flexibility to iterate on their solution while maintaining enterprise-grade security and scalability. To learn more about building similar solutions with Amazon Bedrock, visit the Amazon Bedrock documentation or explore other financial services use cases in the AWS Financial Services Blog.


About the authors

Charles Kellaway is a Senior Manager in the Equities Trading team at LSE plc, based in London. With a background spanning both Equity and Insurance markets, Charles specialises in deep market research and business strategy, with a focus on deploying technology to unlock liquidity and drive operational efficiency. His work bridges the gap between finance and engineering, and he always brings a cross-functional perspective to solving complex challenges.

Rasika Withanawasam is a seasoned technology leader with over two decades of experience architecting and developing mission-critical, scalable, low-latency software solutions. Rasika’s core expertise lies in big data and machine learning applications, focusing intently on FinTech and RegTech sectors. He has held several pivotal roles at LSEG, including Chief Product Architect for the flagship Millennium Surveillance and Millennium Analytics platforms, and currently serves as Manager of the Quantitative Surveillance & Technology team, where he leads AI/ML solution development.

Richard Chester is a Principal Solutions Architect at AWS, advising large Financial Services organisations. He has 25+ years’ experience across the Financial Services Industry where he has held leadership roles in transformation programs, DevOps engineering, and Development Tooling. Since moving across to AWS from being a customer, Richard is now focused on driving the execution of strategic initiatives, mitigating risks and tackling complex technical challenges for AWS customers.





UMD Researchers Leverage AI to Enhance Confidence in HPV Vaccination



Human papillomavirus (HPV) vaccination represents a critical breakthrough in cancer prevention, yet its uptake among adolescents remains disappointingly low. Despite overwhelming evidence supporting the vaccine’s safety and efficacy against multiple types of cancer—including cervical, anal, and oropharyngeal cancers—only about 61% of teenagers aged 13 to 17 in the United States have received the recommended doses. Even more concerning are the even lower vaccination rates among younger children, starting at age nine, when the vaccine is first suggested. Addressing this paradox between scientific consensus and public hesitancy has become a focal point for an innovative research project spearheaded by communication expert Professor Xiaoli Nan at the University of Maryland (UMD).

The project’s core ambition involves harnessing artificial intelligence (AI) to transform the way vaccine information is communicated to parents, aiming to dismantle the barriers that fuel hesitancy. With a robust $2.8 million grant from the National Cancer Institute, part of the National Institutes of Health, Nan and her interdisciplinary team are developing a personalized, AI-driven chatbot. This technology is engineered not only to provide accurate health information but to adapt dynamically to parents’ individual concerns, beliefs, and communication preferences in real time—offering a tailored conversational experience that traditional brochures and websites simply cannot match.

HPV vaccination has long struggled with public misconceptions, stigma, and misinformation that discourage uptake. A significant factor behind the reluctance is tied to the vaccine’s association with a sexually transmitted infection, which prompts some parents to believe their children are too young for the vaccine or that vaccination might imply premature engagement with sexual activity. This misconception, alongside a lack of tailored communication strategies, has contributed to persistent disparities in vaccination rates. These disparities are especially pronounced among men, individuals with lower educational attainment, and those with limited access to healthcare, as Professor Cheryl Knott, a public health behavioral specialist at UMD, highlights.

Unlike generic informational campaigns, the AI chatbot leverages cutting-edge natural language processing (NLP) to simulate nuanced human dialogue. However, it does so without succumbing to the pitfalls of generative AI models, such as ChatGPT, which can sometimes produce inaccurate or misleading answers. Instead, the system draws on large language models to generate a comprehensive array of possible responses. These are then rigorously curated and vetted by domain experts before deployment, ensuring that the chatbot’s replies remain factual, reliable, and sensitive to users’ needs. When interacting live, the chatbot analyzes parents’ input in real time, selecting the most appropriate response from this trusted set, thereby balancing flexibility with accuracy.
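The select-from-a-vetted-set behavior described above can be illustrated with a minimal retrieval step: a bag-of-words cosine similarity over expert-approved replies. The responses and the scoring method below are invented for illustration; the UMD system's actual matching model is not described in detail in the source.

```python
import math
from collections import Counter

# Illustrative expert-curated replies, keyed by representative phrasings.
VETTED_RESPONSES = {
    "hpv vaccine age nine safety": "The HPV vaccine is recommended starting at age 9...",
    "hpv vaccine side effects": "Most side effects are mild, such as soreness at the injection site...",
}

def _vec(text: str) -> Counter:
    """Bag-of-words term counts for a text."""
    return Counter(text.lower().split())

def _cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def pick_response(parent_input: str) -> str:
    """Return the vetted reply whose key best matches the parent's question."""
    best_key = max(
        VETTED_RESPONSES,
        key=lambda k: _cosine(_vec(parent_input), _vec(k)),
    )
    return VETTED_RESPONSES[best_key]
```

Because every candidate reply was vetted beforehand, the chatbot can never emit an unreviewed sentence, which is the safety property the article emphasizes.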

This “middle ground” model, as described by Philip Resnik, an MPower Professor affiliated with UMD’s Department of Linguistics and Institute for Advanced Computer Studies, preserves the flexibility of conversational AI while instituting “guardrails” to maintain scientific integrity. The approach avoids the rigidity of scripted chatbots that deliver canned, predictable replies; simultaneously, it steers clear of the “wild west” environment of fully generative chatbots, where the lack of control can lead to misinformation. Instead, it offers an adaptive yet responsible communication tool, capable of engaging parents on their terms while preserving public health objectives.

The first phase of this ambitious experiment emphasizes iterative refinement of the chatbot via a user-centered design process, collecting extensive feedback from parents, healthcare providers, and community stakeholders to optimize the chatbot’s effectiveness and cultural sensitivity. Once this foundational work is complete, the team plans two rigorous randomized controlled trials. The first will be conducted online with a nationally representative sample of U.S. parents, comparing the chatbot’s impact against traditional CDC pamphlets and measuring differences in vaccine acceptance. The second will take place in clinical environments in Baltimore, including pediatric offices, to observe how the chatbot influences decision-making in real-world healthcare settings.

Min Qi Wang, a behavioral health professor participating in the project, emphasizes that “tailored, timely, and actionable communication” facilitated by AI signals a paradigm shift in public health strategies. This shift extends beyond HPV vaccination, as such advanced communication systems possess the adaptability to address other complex public health challenges. By delivering personalized guidance directly aligned with users’ expressed concerns, AI can foster a more inclusive health dialogue that values empathy and relevance, which traditional mass communication methods often lack.

Beyond increasing HPV vaccination rates, the research team envisions broader implications for public health infrastructure. In an era where misinformation can spread rapidly and fear often undermines scientific recommendations, AI-powered tools offer a scalable, responsive mechanism to disseminate trustworthy information quickly. During future pandemics or emergent health crises, such chatbots could serve as critical channels for delivering customized, real-time guidance to diverse populations, helping to flatten the curve of misinformation while respecting individual differences.

The integration of AI chatbots into health communication represents a fusion of technological innovation with behavioral science, opening new horizons for personalized medicine and health education. By engaging users empathetically and responsively, these systems can build trust and facilitate informed decision-making, critical components of successful public health interventions. Professor Nan highlights the profound potential of this marriage between AI and public health communication by posing the fundamental question: “Can we do a better job with public health communication—with speed, scale, and empathy?” Project outcomes thus far suggest an affirmative answer.

As the chatbot advances through its pilot phases and into clinical trials, the research team remains committed to maintaining a rigorous scientific approach, ensuring that the tool’s recommendations align with the highest standards of evidence-based medicine. This careful balance between innovation and reliability is essential to maximize public trust and the chatbot’s ultimate impact on vaccine uptake. Should these trials demonstrate efficacy, the model could serve as a blueprint for deploying AI-driven communication tools across various domains of health behavior change.

Moreover, the collaborative nature of this project—bringing together communication experts, behavioral scientists, linguists, and medical professionals—illustrates the importance of interdisciplinary efforts in addressing complex health challenges. Each field contributes unique insights: linguistic analysis enables nuanced conversation design, behavioral science guides motivation and persuasion strategies, and medical expertise ensures factual accuracy and clinical relevance. This holistic framework strengthens the chatbot’s ability to resonate with diverse parent populations and to overcome entrenched hesitancy.

In conclusion, while HPV vaccines represent a major advancement in cancer prevention, their potential remains underutilized due to deeply embedded hesitancy fueled by stigma and misinformation. Leveraging AI-driven, personalized communication stands as a promising strategy to bridge this gap. The University of Maryland’s innovative chatbot project underscores the use of responsible artificial intelligence to meet parents where they are, addressing their unique concerns with empathy and scientific rigor. This initiative not only aspires to improve HPV vaccine uptake but also to pave the way for AI’s transformative role in future public health communication efforts.

Subject of Research: Artificial intelligence-enhanced communication to improve HPV vaccine uptake among parents.

Article Title: Transforming Vaccine Communication: AI Chatbots Target HPV Vaccine Hesitancy in Parents

News Publication Date: Information not provided in the source content.

Web References:
https://sph.umd.edu/people/cheryl-knott
https://sph.umd.edu/people/min-qi-wang

Image Credits: Credit: University of Maryland (UMD)

Keywords: Vaccine research, Science communication

Tags: adolescent vaccination rates, AI-driven health communication, cancer prevention strategies, chatbot technology in healthcare, evidence-based vaccine education, HPV vaccination awareness, innovative communication strategies for parents, National Cancer Institute funding, overcoming vaccine hesitancy, parental engagement in vaccination, personalized health information, University of Maryland research





EY-Parthenon practice unveils neurosymbolic AI capabilities to empower businesses to identify, predict and unlock revenue at scale | EY


Jeff Schumacher, architect behind the groundbreaking AI solution, to steer EY Growth Platforms.

Ernst & Young LLP (EY) announced the launch of EY Growth Platforms (EYGP), a disruptive artificial intelligence (AI) solution powered by neurosymbolic AI. By combining machine learning with logical reasoning, EYGP empowers organizations to uncover transformative growth opportunities and revolutionize their commercial models for profitability. The neurosymbolic AI workflows that power EY Growth Platforms consistently uncover hundred-million-dollar-plus growth opportunities for global enterprises.

This represents a rapid development in enterprise technology—where generative AI and neurosymbolic AI combine to redefine how businesses create value. This convergence empowers enterprises to reimagine growth at impactful scale, producing outcomes that are traceable, trustworthy and statistically sound.

EYGP serves as a powerful accelerator for the innovative work at EY-Parthenon, helping clients with their most complex strategic opportunities to realize greater value and transform their business, by reimagining their business from the ground up—including building and scaling new corporate ventures, or executing high-stakes transactions.

“In today’s uncertain economic climate, leading companies aren’t just adapting—they’re taking control,” says Mitch Berlin, EY Americas Vice Chair, EY-Parthenon. “EY Growth Platforms gives our clients the predictive power and actionable foresight they need to confidently steer their revenue trajectory. EY Growth Platforms is a game changer, poised to become the backbone of enterprise growth.”

How EY Growth Platforms work

Neurosymbolic AI merges the statistical power of neural networks with the structured logic of symbolic reasoning, driving powerful pattern recognition to deliver predictions and decisions that are practical, actionable and grounded in real-world outcomes. EYGP harnesses this powerful technology to simulate real-time market scenarios and their potential impact, uncovering the most effective business strategies tailored to each client. It expands beyond the limits of generative AI, becoming a growth operating system for companies to tackle complex go-to-market challenges and unlock scalable revenue. 
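The neural-plus-symbolic pattern described above can be shown with a toy example (this is an illustration of the general technique, not EY's proprietary implementation): a learned score proposes, and explicit, auditable business rules dispose. All feature names, weights, and rules below are invented.

```python
def neural_score(opportunity: dict) -> float:
    """Stand-in for a learned model scoring growth potential from features.

    Weights are invented for illustration.
    """
    return 0.6 * opportunity["market_fit"] + 0.4 * opportunity["demand_signal"]

# Symbolic layer: explicit, named rules that every recommendation must satisfy.
SYMBOLIC_RULES = [
    ("regulatory_cleared", lambda o: o["regulatory_cleared"]),
    ("margin_positive", lambda o: o["projected_margin"] > 0),
]

def recommend(opportunity: dict, threshold: float = 0.7) -> dict:
    """Neurosymbolic gate: the statistical score must clear the threshold
    AND every symbolic rule must hold, so each decision is traceable."""
    score = neural_score(opportunity)
    failed = [name for name, rule in SYMBOLIC_RULES if not rule(opportunity)]
    return {
        "score": score,
        "approved": score >= threshold and not failed,
        "failed_rules": failed,  # named rules make rejections explainable
    }
```

The named rule list is what makes outcomes "traceable and trustworthy" in the sense the release describes: a rejection always cites the exact rules that failed.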

At the core of EYGP is a unified data and reasoning engine that ingests structured and unstructured data from internal systems, external signals, and deep EY experience and data sets. Developed over three years, this robust solution is already powering proprietary AI applications and intelligent workflows for EY-Parthenon clients across the consumer product goods, industrials and financial services sectors without the need for extensive data cleaning or digital transformation.

Use cases for EY Growth Platforms

With the ability to operate in complex high-stakes scenarios, EYGP is driving a measurable impact across industries such as:

  • Financial services: In a tightly regulated industry, transparency and accountability are nonnegotiable. Neurosymbolic AI enhances underwriting, claims processing and compliance with transparency and rigor, validating that decisions are aligned with regulatory standards and optimized for customer outcomes.
  • Consumer products: Whether powering real-time recommendations, adaptive interfaces or location-aware services, neurosymbolic AI drives hyperpersonalized experiences at a one-to-one level. By combining learned patterns with structured knowledge, it delivers precise, context-rich insights tailored to individual behavior, preferences and environments.
  • Industrial products: Neurosymbolic AI helps industrial conglomerates optimize the entire value chain — from sourcing and production to distribution and service. By integrating structured domain knowledge with real-time operational data, it empowers leaders to make smarter decisions — from facility placement and supply routing to workforce allocation tailored to specific geographies and market-specific conditions.

The platform launch follows the appointment of Jeff Schumacher as the EYGP Leader for EY-Parthenon. Schumacher brings over 25 years of experience in business strategy, innovation and digital disruption, having helped establish over 100 early-growth companies. He is the founder of Growth Protocol, the neurosymbolic AI company whose technology EY licenses under an exclusive agreement.

“Neurosymbolic AI is not another analytics tool, it’s a growth engine,” says Jeff Schumacher, EY Growth Platforms Leader, EY-Parthenon. “With EY Growth Platforms, we’re putting a dynamic, AI-powered operating system in the hands of leaders, giving them the ability to rewire how their companies make money. This isn’t incremental improvement; it’s a complete reset of the commercial model.”

EYGP is currently offered in North America, Europe, and Australia. For more information, visit ey.com/NeurosymbolicAI/

– ends –

About EY

EY is building a better working world by creating new value for clients, people, society and the planet, while building trust in capital markets.

Enabled by data, AI and advanced technology, EY teams help clients shape the future with confidence and develop answers for the most pressing issues of today and tomorrow.

EY teams work across a full spectrum of services in assurance, consulting, tax, strategy and transactions. Fueled by sector insights, a globally connected, multi-disciplinary network and diverse ecosystem partners, EY teams can provide services in more than 150 countries and territories.

All in to shape the future with confidence.

EY refers to the global organization, and may refer to one or more, of the member firms of Ernst & Young Global Limited, each of which is a separate legal entity. Ernst & Young Global Limited, a UK company limited by guarantee, does not provide services to clients. Information about how EY collects and uses personal data and a description of the rights individuals have under data protection legislation are available via ey.com/privacy. EY member firms do not practice law where prohibited by local laws. For more information about our organization, please visit ey.com.

This news release has been issued by EYGM Limited, a member of the global EY organization that also does not provide any services to clients.


