
AI Insights

A Platform Leader’s Path to Sustained Dominance

This article first appeared on GuruFocus.

Salesforce (NYSE: CRM) offers a compelling long-term opportunity due to its leadership in the customer relationship management (CRM) market, its expanding AI integration, and a growing total addressable market. The company's subscription-based business model generates predictable profits, while its platform-based approach raises switching costs and creates opportunities to expand within existing client bases. The business model thus enables high visibility into recurring revenue and long-term client retention.

Standing at the forefront of the global CRM market, Salesforce has captured nearly a quarter (23%) of market share, ahead of formidable rivals like Microsoft, Oracle, and SAP. This number-one position gives Salesforce multiple layers of competitive fortification, building a strong economic moat.

Network Effects and Ecosystem Dominance: The Salesforce ecosystem hosts over 4,000 applications on the AppExchange, reinforcing a self-expanding cycle in which developers attract customers and vice versa. This network effect, which strengthens as new applications are added and the ecosystem expands, creates a barrier to entry that competitors find almost impossible to duplicate. Independent software vendors (ISVs) invest substantial time and energy in building applications that work with Salesforce, which in turn makes customers reluctant to switch: moving to another platform would mean giving up the benefits of the whole partner network.

Data Network Effects: The more customers use Salesforce services, the more data the platform accumulates on customer interactions across different sectors and applications. This data improves AI models, predictive analytics, and benchmarking capabilities, so the platform continues to generate value that compounds over the years and is not easily reproduced by new entrants.

Salesforce’s artificial intelligence strategy is a key component not just because it adds features, but because it enhances the platform’s differentiation and supports operational leverage. The company introduces next-generation AI technology and improves its algorithmic foundations, encouraging organizations to work differently with the customer data they produce and hold.

Einstein Platform Foundation: Through Salesforce Einstein, the platform processes 200 billion predictions a day, leveraging the collective data of hundreds of thousands of customers to continuously improve its AI models. This scale of data processing and machine learning creates predictive features that smaller competitors, lacking comparable data or access to it, cannot match.

Generative AI Integration: By launching Einstein GPT and integrating it with large language models, Salesforce enters the generative AI market and a new arena of value capture. Unlike standalone AI tools, Salesforce’s AI is grounded in customer data, ongoing work processes, and the history of past interactions, which makes its AI-generated insights and suggestions more precise and directly applicable.

Industry-Specific AI Models: To deepen vertical-specific AI functionality, Salesforce is developing dedicated technologies for healthcare, financial services, and retail. These tailored models incorporate industry regulations, jargon, and best practices, creating additional switching costs and competitive differentiation that generic AI platforms cannot easily match.

Recurring Revenue Model: Salesforce’s subscription-based model makes the company’s financial outcomes highly predictable and compounds long-term investor value. With more than 90% of revenue recurring and contracts typically running longer than one year, investors gain rare forward visibility into metric trends. The company’s remaining performance obligation (RPO) backlog of over $25 billion represents contracted future income, providing both risk mitigation and growth visibility.

Salesforce evolved from a single-app CRM into a multi-solution customer management platform, which now gives it several advantages in running and growing the business.

Multi-Cloud Synergies: The connected system of Sales Cloud, Service Cloud, Marketing Cloud, Commerce Cloud, and Analytics Cloud is a source of valuable cross-selling opportunities and higher customer switching costs. Enterprises using multiple Salesforce clouds enjoy unified customer data, a consistent user experience, and integrated workflows that would be hard to replicate with applications spread across different vendors.

Industry Verticalization: Salesforce has produced vertical-specific solutions for healthcare, financial services, retail, manufacturing, and other segments. These industry clouds merge the core platform with prebuilt processes, compliance features, and industry-specific data models. This verticalization strategy outcompetes generic CRM offerings with better-fitting tools, deepening the competitive moat.

Salesforce’s major acquisitions: MuleSoft (2018), Tableau (2019), and Slack (2021), have all been integrated into its broader platform strategy, not as standalone tools but as functional extensions of the core CRM architecture. MuleSoft’s API capabilities enable connectivity across enterprise systems, making Salesforce more interoperable within legacy environments. Tableau enhances data visibility across Salesforce products, giving users embedded analytics tailored to operational workflows. Slack has become central to Salesforce’s vision of asynchronous collaboration, now embedded directly into Sales and Service Cloud interfaces.

Each acquisition has followed a clear integration path: building native connectors, embedding dashboards or features directly into Salesforce interfaces, and enabling data-sharing across clouds. This approach has allowed Salesforce to expand the surface area of customer engagement without disrupting platform cohesion.

EBITDA Productivity Trends

Between FY2022 and FY2024, Salesforce significantly enhanced its operational efficiency. According to its 10-K filings, EBITDA per employee grew from approximately $63,000 in FY2022 to over $145,000 by FY2024, more than doubling in just two years.
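The cited figures can be sanity-checked with quick arithmetic; this is a minimal sketch using the approximate per-employee EBITDA numbers above, not data from the filings themselves:

```python
# Approximate EBITDA-per-employee figures cited above (FY2022 -> FY2024)
ebitda_fy22 = 63_000
ebitda_fy24 = 145_000

growth_multiple = ebitda_fy24 / ebitda_fy22          # how many times larger
cagr = growth_multiple ** (1 / 2) - 1                # annualized over two fiscal years

print(f"growth multiple: {growth_multiple:.2f}x")    # ~2.30x, i.e. "more than doubling"
print(f"implied CAGR: {cagr:.1%}")                   # ~51.7% per year
```

A ~2.3x jump over two years implies annual efficiency gains above 50%, which is consistent with the restructuring and margin-expansion narrative that follows.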

This improvement was driven by a two-fold strategy:

First, Salesforce undertook a substantial restructuring in 2023, reducing its global workforce by about 10%, or over 7,000 employees.

Second, the company implemented tighter cost controls, improved operating discipline, and began integrating AI-driven productivity tools internally, helping expand operating margins from ~3% in FY2022 to ~17% in FY2024.

The net result was a leaner, more focused organization generating more value per employee, a trend that aligns with broader tech-sector shifts toward profitability over pure growth.


Salesforce’s Dramatic Operational Transformation

Salesforce’s expansion from about $60K EBITDA per employee in 2022 to $149K in 2025 represents a roughly 150% increase in operational efficiency. The report suggests the company has fundamentally restructured its cost base and gained important operating leverage, likely through AI-driven automation, process optimization, and more disciplined hiring practices after the 2022 tech downturn.

Salesforce’s trajectory of improvement implies that management’s insistence on profitability is yielding positive results, which lends credence to the investment thesis. Oracle’s consistent efficiency combined with a lower valuation multiple, by contrast, presents a strong value proposition. SAP’s lagging statistics underscore the execution risks involved in large-scale business model transitions, rendering it the riskiest of the three despite its market position.

AI Integration and Competitive Positioning

Salesforce also faces execution risks in its AI strategy that could materially change its competitive position and attractiveness to investors.

Technical Integration Complexity

Effectively integrating AI across Salesforce’s very broad platform ecosystem requires seamless integration with existing workflows, data models, and user interfaces. An imperfect AI implementation could disrupt core CRM components that customers cannot do without, leading to system instability or user resistance. Maintaining platform reliability while introducing advanced AI capabilities across multiple clouds is a difficult execution task.

Competitive Vulnerability

Microsoft’s formidable AI capabilities, via the OpenAI partnership and Azure infrastructure, present a strong competitive threat to Salesforce’s AI ambitions. If Salesforce’s AI capabilities lag Microsoft’s Copilot integration, or fail to deliver substantial productivity improvements, enterprise clients may shift to the Microsoft ecosystem for cohesive AI-powered productivity tools. This risk is amplified by Microsoft’s established Office 365 customer relationships and its holistic go-to-market approach.

ATOMVEST holds a comparatively huge 48.91% portfolio concentration in what is likely a single position of $23.4 million. This acute concentration risk contradicts essential portfolio management guidelines. The concentration has actually increased from 40.55% to 48.91% of the portfolio, implying either poor rebalancing discipline or severe underperformance in other holdings.

VALUEACT, on the other hand, is far better diversified with a 16.98% allocation, down from 22.08%, but raises different position-management questions. Its sizeable holding of 2.9 million shares worth $778 million denotes deep conviction, and its 0.30% ownership stake gives it notable influence as an activist investor. Nevertheless, the firm has made no position changes while the stock has apparently fallen considerably in value; the drop from 22.08% to 16.98% appears to result from price decline rather than active selling, which is odd for an activist investor expected to influence outcomes.

Salesforce is a high-quality growth stock suitable both for technology-focused investors and for those who seek to gain from digital transformation. The firm’s market dominance, stable recurring income, and AI-based innovation together create a strong foundation for long-term value creation. The investment case remains valid for those who believe in the ongoing digitization of commercial processes and in Salesforce’s ability to run its multi-cloud platform efficiently, keeping its leadership role in the evolving CRM space.





AI rollout in NHS hospitals faces major challenges



Implementing artificial intelligence (AI) into NHS hospitals is far harder than initially anticipated, with complications around governance, contracts, data collection, harmonisation with old IT systems, finding the right AI tools and staff training, finds a major new UK study led by UCL researchers. 

Authors of the study, published in The Lancet eClinicalMedicine, say the findings should provide timely and useful learning for the UK Government, whose recent 10-year NHS plan identifies digital transformation, including AI, as a key platform to improving the service and patient experience. 

In 2023, NHS England launched a programme to introduce AI to help diagnose chest conditions, including lung cancer, across 66 NHS hospital trusts in England, backed by £21 million in funding. The trusts are grouped into 12 imaging diagnostic networks: these hospital networks mean more patients have access to specialist opinions. Key functions of these AI tools included prioritising critical cases for specialist review and supporting specialists’ decisions by highlighting abnormalities on scans.

Funded by the National Institute for Health and Care Research (NIHR), this research was conducted by a team from UCL, the Nuffield Trust, and the University of Cambridge, analysing how procurement and early deployment of the AI tools went. It is one of the first studies to analyse real-world implementation of AI in healthcare.

Evidence from previous studies¹, mostly laboratory-based, suggested that AI might benefit diagnostic services by supporting decisions, improving detection accuracy, reducing errors and easing workforce burdens.

In this UCL-led study, the researchers reviewed how the new diagnostic tools were procured and set up through interviews with hospital staff and AI suppliers, identifying any pitfalls but also any factors that helped smooth the process.

They found that setting up the AI tools took longer than anticipated by the programme’s leadership. Contracting took between four and 10 months longer than anticipated and by June 2025, 18 months after contracting was meant to be completed, a third (23 out of 66) of the hospital trusts were not yet using the tools in clinical practice.

Key challenges included engaging clinical staff who already had high workloads, embedding the new technology in ageing and varied NHS IT systems across dozens of hospitals, and a general lack of understanding of, and scepticism about, using AI in healthcare among staff.

The study also identified important factors which helped embed AI including national programme leadership and local imaging networks sharing resources and expertise, high levels of commitment from hospital staff leading implementation, and dedicated project management.

The researchers concluded that while “AI tools may offer valuable support for diagnostic services, they may not address current healthcare service pressures as straightforwardly as policymakers may hope” and are recommending that NHS staff are trained in how AI can be used effectively and safely and that dedicated project management is used to implement schemes like this in the future.

First author Dr Angus Ramsay (UCL Department of Behavioural Science and Health) said: “In July ministers unveiled the Government’s 10-year plan for the NHS, of which a digital transformation is a key platform.

“Our study provides important lessons that should help strengthen future approaches to implementing AI in the NHS.

“We found it took longer to introduce the new AI tools in this programme than those leading the programme had expected.

“A key problem was that clinical staff were already very busy – finding time to go through the selection process was a challenge, as was supporting integration of AI with local IT systems and obtaining local governance approvals. Services that used dedicated project managers found their support very helpful in implementing changes, but only some services were able to do this.

“Also, a common issue was the novelty of AI, suggesting a need for more guidance and education on AI and its implementation.

“AI tools can offer valuable support for diagnostic services, but they may not address current healthcare service pressures as simply as policymakers may hope.”

The researchers conducted their evaluation between March and September last year, studying 10 of the participating networks and focusing in depth on six NHS trusts. They interviewed network teams, trust staff and AI suppliers, observed planning, governance and training and analysed relevant documents.

Some of the imaging networks and many of the hospital trusts within them were new to procuring and working with AI.

The problems involved in setting up the new tools varied – for example, in some cases those procuring the tools were overwhelmed by a huge amount of very technical information, increasing the likelihood of key details being missed. Consideration should be given to creating a national approved shortlist of potential suppliers to facilitate procurement at local level, the researchers said.

Another problem was initial lack of enthusiasm among some NHS staff for the new technology in this early phase, with some more senior clinical staff raising concerns about the potential impact of AI making decisions without clinical input and on where accountability lay in the event a condition was missed. The researchers found the training offered to staff did not address these issues sufficiently across the wider workforce – hence their call for early and ongoing training on future projects.

In contrast, however, the study team found the process of procurement was supported by advice from the national team and imaging networks learning from each other. The researchers also observed high levels of commitment and collaboration between local hospital teams (including clinicians and IT) working with AI supplier teams to progress implementation within hospitals.

In this project, each hospital selected AI tools for different reasons, such as focusing on X-ray or CT scanning, and purposes, such as to prioritise urgent cases for review or to identify potential symptoms.


“The NHS is made up of hundreds of organisations with different clinical requirements and different IT systems and introducing any diagnostic tools that suit multiple hospitals is highly complex. These findings indicate AI might not be the silver bullet some have hoped for but the lessons from this study will help the NHS implement AI tools more effectively.”


Naomi Fulop, Senior Author, Professor UCL Department of Behavioural Science and Health

Limitations

While the study has added to the very limited body of evidence on the implementation and use of AI in real-world settings, it focused on procurement and early deployment. The researchers are now studying the use of AI tools following early deployment when they have had a chance to become more embedded. Further, the researchers did not interview patients and carers and are therefore now conducting such interviews to address important gaps in knowledge about patient experiences and perspectives, as well as considerations of equity.


Journal reference:

Ramsay, A. I. G., et al. (2025). Procurement and early deployment of artificial intelligence tools for chest diagnostics in NHS services in England: a rapid, mixed method evaluation. eClinicalMedicine. doi.org/10.1016/j.eclinm.2025.103481






AI takes passenger seat in Career Center with Microsoft Copilot



By Arden Berry | Staff Writer

To increase efficiency and help students succeed, the Career Center has created artificial intelligence programs through Microsoft Copilot.

Career Center Director Amy Rylander said the program began over the summer with teams creating user guides that described how students could ethically use AI while applying for jobs.

“We started learning about prompting AI to do things, and as we began writing the guides and began putting updates in them and editing them to be in a certain way, our data person took our guides and fed them into Copilot, and we created agents,” Rylander said. “So instead of just a user’s guide, we now have agents to help students right now with three areas.”

Rylander said these three areas were resume-building, interviewing and career discovery. She also said the Career Center sent out an email last week linking the Copilot Agents for these three areas.

“Agents use AI to perform tasks by reasoning, planning and learning — using provided information to execute actions and achieve predetermined goals for the user,” the email read.

To use these Copilot Agents, Rylander said students should log in to Microsoft Office with their Baylor email, then use the provided Copilot Agent links and follow the provided prompts. For example, the Career Discovery Agent would provide a prompt to give the agent, then would ask a set of questions and suggest potential career paths.

“It’ll help you take the skills that you’re learning in your major and the skills that you’ve learned along the way and tell you some things that might work for you, and then that’ll help with the search on what you might want to look for,” Rylander said.

Career Center Assistant Vice Provost Michael Estepp said creating AI systems was a “proactive decision.”

“We’re always saying, ‘What are the things that students are looking for and need, and what can our staff do to make that happen?’” Estepp said. “Do we go AI or not? We definitely needed to, just so we were ahead of the game.”

Estepp said the AI systems would not replace the Career Center but would increase its efficiency, allowing the Career Center more time to help students in a more specialized way.

“Students want to come in, and they don’t want to meet with us 27 times,” Estepp said. “We can actually even dive deeper into the relationships because, hopefully, we can help more students, because our goal is to help 100% of students, so I think that’s one of the biggest pieces.”

However, Rylander said students should remember to use AI only as a tool, not as a replacement for their own experience.

“Use it ethically. AI does not take the place of your voice,” Rylander said. “It might spit out a bullet that says something, and I’ll say, ‘What did you mean by that?’ and get the whole story, because we want to make sure you don’t lose your voice and that you are not presenting yourself as something that you’re not.”

For the future, Rylander said the Career Center is currently working on Graduate School Planning and Career Communications Copilots. Estepp also said Baylor has a contract with LinkedIn that will help students learn to use AI for their careers.

“AI has impacted the job market so significantly that students have to have that. It’s a mandatory skill now,” Estepp said. “We’re going to start messaging out to students different certifications they can take within LinkedIn, that they can complete videos and short quizzes, and then actually be able to get certifications in different AI and large language model aspects and then put that on their resume.”






When Cybercriminals Weaponize Artificial Intelligence at Scale



Anthropic’s August threat intelligence report sounds like a cybersecurity novel, except it’s terrifyingly not fiction. The report describes how cybercriminals used Claude AI to orchestrate attacks on 17 organizations with ransom demands exceeding $500,000. This may be the most sophisticated AI-driven attack campaign to date.

But beyond the alarming headlines lies a more fundamental shift – the emergence of “agentic cybercrime,” where AI doesn’t just assist attackers, it becomes their co-pilot, strategic advisor, and operational commander all at once.

The End of Traditional Cybercrime Economics

The Anthropic report highlights a cruel reality that IT leaders have long feared. The economics of cybercrime have undergone significant change. What previously required teams of specialized attackers working for weeks can now be accomplished by a single individual in a matter of hours with AI assistance.

Take, for example, the “vibe hacking” operation detailed in the report. One cybercriminal used Claude Code to automate reconnaissance across thousands of systems, create custom malware with anti-detection capabilities, perform real-time network penetration, and analyze stolen financial data to calculate psychologically optimized ransom amounts.

More than just following instructions, the AI made tactical decisions about which data to exfiltrate and crafted victim-specific extortion strategies that maximized psychological pressure. 

Sophisticated Attack Democratization

One of the most unnerving revelations in Anthropic’s report involves North Korean IT workers who have infiltrated Fortune 500 companies using AI to simulate technical competence they don’t have. While these attackers are unable to write basic code or communicate professionally in English, they’re successfully maintaining full-time engineering positions at major corporations thanks to AI handling everything from technical interviews to daily work deliverables. 

The report also discloses that 61 percent of the workers’ AI usage focused on frontend development, 26 percent on programming tasks, and 10 percent on interview preparation. They are essentially human proxies for AI systems, channeling hundreds of millions of dollars to North Korea’s weapons programs while their employers remain unaware. 

Similarly, the report reveals how criminals with little technical skill are developing and selling sophisticated ransomware-as-a-service packages for $400 to $1,200 on dark web forums. Features that previously required years of specialized knowledge, such as ChaCha20 encryption, anti-EDR techniques, and Windows internals exploitation, are now generated on demand with the aid of AI. 

Defense Speed Versus Attack Velocity

Traditional cybersecurity operates on human timetables, with threat detection, analysis, and response cycles measured in hours or days. AI-powered attacks, on the other hand, operate at machine speed, with reconnaissance, exploitation, and data exfiltration occurring in minutes. 

The cybercriminal highlighted in Anthropic’s report automated network scanning across thousands of endpoints, identified vulnerabilities with “high success rates,” and moved laterally through compromised networks faster than human defenders could respond. When initial attack vectors failed, the AI immediately generated alternative attacks, creating a dynamic adversary that adapted in real-time.

This speed delta creates an impossible situation for traditional security operations centers (SOCs). Human analysts cannot keep up with the velocity and persistence of AI-augmented attackers operating 24/7 across multiple targets simultaneously. 

Asymmetry of Intelligence

What makes these AI-powered attacks particularly dangerous isn’t only their speed – it’s their intelligence. The criminals highlighted in the report utilized AI to analyze stolen data and develop “profit plans” by incorporating multiple monetization strategies. Claude evaluated financial records to gauge optimal ransom amounts, analyzed organizational structures to locate key decision-makers, and crafted sector-specific threats based on regulatory vulnerabilities. 

This level of strategic thinking, combined with operational execution, has created a new category of threats. These aren’t scripted amateurs using predefined playbooks; they’re adaptive adversaries that learn and evolve throughout each campaign.

The Acceleration of the Arms Race 

The current challenge is summed up as: “All of these operations were previously possible but would have required dozens of sophisticated people weeks to carry out the attack. Now all you need is to spend $1 and generate 1 million tokens.”

The asymmetry is significant. Human defenders must deal with procurement cycles, compliance requirements, and organizational approval before deploying new security technologies. Cybercriminals simply create new accounts when existing ones are blocked – a process that takes about “13 seconds.” 

But this predicament also presents an opportunity. The same AI functions being weaponized can be harnessed for defenses, and in many cases defensive AI has natural advantages. 

Attackers can move fast, but defenders have access to something criminals don’t – historical data, organizational context, and the ability to establish baseline behaviors across entire IT environments. AI defense systems can monitor thousands of endpoints simultaneously, correlate subtle anomalies across network traffic, and respond to threats faster than human attackers can ever hope to. 
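The baseline idea described above can be sketched in a few lines. This is an illustrative toy, not a real security product: the endpoint names and telemetry are hypothetical, and production systems use far richer models than a simple per-endpoint z-score:

```python
from statistics import mean, stdev

def build_baseline(history):
    # Per-endpoint baseline: mean and standard deviation of historical event counts
    return {ep: (mean(counts), stdev(counts)) for ep, counts in history.items()}

def flag_anomalies(baseline, observed, threshold=3.0):
    # Flag endpoints whose current count deviates more than `threshold` sigmas
    flagged = []
    for ep, count in observed.items():
        mu, sigma = baseline[ep]
        if sigma > 0 and abs(count - mu) / sigma > threshold:
            flagged.append(ep)
    return flagged

# Hypothetical telemetry: login attempts per hour on two endpoints
history = {
    "web-01": [10, 12, 11, 9, 10, 11],
    "db-01":  [3, 4, 3, 5, 4, 3],
}
baseline = build_baseline(history)
print(flag_anomalies(baseline, {"web-01": 11, "db-01": 40}))  # prints ['db-01']
```

The defender’s advantage is exactly this historical context: a machine-speed attacker spiking activity on `db-01` stands out against months of baseline data, no matter how fast it moves.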

Modern AI security platforms, such as AI SOC agents that work like human SOC analysts, have proven this principle in practice. By automating alert triage, investigation, and response processes, these systems process security events at machine speed while maintaining the context and judgment that pure automation lacks.

Defensive AI doesn’t need to be perfect; it just needs to be faster and more persistent than human attackers. When combined with human expertise for strategic oversight, this creates a formidable defensive posture for organizations. 

Building AI-Native Security Operations

The Anthropic report underscores how incremental improvements to traditional security tools won’t matter against AI-augmented adversaries. Organizations need AI-native security operations that match the scale, speed, and intelligence of modern AI attacks. 

This means leveraging AI agents that autonomously investigate suspicious activities, correlate threat intelligence across multiple sources, and respond to attacks faster than humans can. It requires SOCs that use AI for real-time threat hunting, automated incident response, and continuous vulnerability assessment. 

This new approach demands a shift from reactive to predictive security postures. AI defense systems must anticipate attack vectors, identify potential compromises before they fully manifest, and adapt defensive strategies based on emerging threat patterns. 

The Anthropic report clearly highlights that attackers don’t wait for a perfect tool. They train themselves on existing capabilities and can cause damage every day, even if the AI revolution were to stop. Organizations cannot afford to be more cautious than their adversaries. 

The AI cybersecurity arms race is already here. The question isn’t whether organizations will face AI-augmented attacks, but whether they’ll be prepared when those attacks happen.

Success demands embracing AI as a core component of security operations, not an experimental add-on. It means leveraging AI agents that operate autonomously while maintaining human oversight for strategic decisions. Most importantly, it requires matching the speed of adoption that attackers have already achieved. 

The cybercriminals highlighted in the Anthropic report represent the new threat landscape. Their success demonstrates the magnitude of the challenge and the urgency of the needed response. In this new reality, the organizations that survive and thrive will be those that adopt AI-native security operations with the same speed and determination that their adversaries have already demonstrated. 

The race is on. The question is whether defenders will move fast enough to win it.  




