
Tools & Platforms

Check Point acquires AI security firm Lakera in push for enterprise AI protection

Check Point Software Technologies announced Monday it will acquire Lakera, a specialized artificial intelligence security platform, as established cybersecurity companies continue to expand their offerings to keep pace with the generative AI boom.

The deal, expected to close in the fourth quarter of 2025, positions Check Point to offer what the company describes as an “end-to-end AI security solution.” Financial terms were not disclosed.

The acquisition reflects growing concerns about security risks as companies integrate large language models, generative AI, and autonomous agents into core business operations. These technologies introduce potential attack vectors including data exposure, model manipulation, and risks from multi-agent collaboration systems.

“AI is transforming every business process, but it also introduces new attack surfaces,” said Check Point CEO Nadav Zafrir. The company chose Lakera for its AI-native security approach and performance capabilities, he said.

Lakera, founded by former AI specialists from Google and Meta, operates out of both Zurich and San Francisco. The company’s platform provides real-time protection for AI applications, claiming detection rates above 98% with response times under 50 milliseconds and false positive rates below 0.5%.

The startup’s flagship products, Lakera Red and Lakera Guard, offer pre-deployment security assessments and runtime enforcement to protect AI models and applications. The platform supports more than 100 languages and serves Fortune 500 companies globally. The company also operates what it calls Gandalf, an adversarial AI network that has generated more than 80 million attack patterns to test AI defenses. This continuous testing approach helps the platform adapt to emerging threats.

David Haber, Lakera’s co-founder and CEO, said joining Check Point will accelerate the company’s global mission to protect AI applications with the speed and accuracy enterprises require.

Check Point already offers AI-related security through its GenAI Protect service and other AI-powered defenses for applications, cloud systems, and endpoints. The Lakera acquisition extends these capabilities to cover the full AI lifecycle, from models to data to autonomous agents.

Upon completion of the deal, Lakera will form the foundation of Check Point’s Global Center of Excellence for AI Security. The integration aims to accelerate AI security research and development across Check Point’s broader security platform.

The acquisition is the latest in a flurry of moves by larger cybersecurity companies to buy AI-focused startups. Earlier this month, F5 acquired CalypsoAI, Cato Networks acquired Aim Security, and Varonis acquired SlashNext.

The deal remains subject to customary closing conditions.


Written by Greg Otto

Greg Otto is Editor-in-Chief of CyberScoop, overseeing all editorial content for the website. Greg has led cybersecurity coverage that has won various awards, including accolades from the Society of Professional Journalists and the American Society of Business Publication Editors. Prior to joining Scoop News Group, Greg worked for the Washington Business Journal, U.S. News & World Report and WTOP Radio. He has a degree in broadcast journalism from Temple University.




Tech giants to pour billions into UK AI. Here’s what we know so far

Microsoft CEO Satya Nadella speaks at Microsoft Build AI Day in Jakarta, Indonesia, on April 30, 2024.

Adek Berry | AFP | Getty Images

LONDON — Microsoft said on Tuesday that it plans to invest $30 billion in artificial intelligence infrastructure in the U.K. by 2028.

The investment includes $15 billion in capital expenditures and $15 billion in its U.K. operations, Microsoft said. The company said the investment would enable it to build the U.K.’s “largest supercomputer,” with more than 23,000 advanced graphics processing units, in partnership with Nscale, a British cloud computing firm.

The spending commitment comes as President Donald Trump embarks on a state visit to Britain. Trump arrived in the U.K. Tuesday evening and is set to be greeted at Windsor Castle on Wednesday by King Charles and Queen Camilla.

During his visit, all eyes are on U.K. Prime Minister Keir Starmer, who is under pressure to bring stability to the country after the exit of Deputy Prime Minister Angela Rayner over a house tax scandal and a major cabinet reshuffle.

On a call with reporters on Tuesday, Microsoft President Brad Smith said his stance on the U.K. has warmed over the years. He previously criticized the country over its 2023 attempt to block the tech giant’s $69 billion acquisition of video game developer Activision Blizzard; the U.K.’s competition regulator cleared the deal later that year.

“I haven’t always been optimistic every single day about the business climate in the U.K.,” Smith said. However, he added, “I am very encouraged by the steps that the government has taken over the last few years.”

“Just a few years ago, this kind of investment would have been inconceivable because of the regulatory climate then and because there just wasn’t the need or demand for this kind of large AI investment,” Smith said.

Starmer and Trump are expected to sign a new deal Wednesday “to unlock investment and collaboration in AI, Quantum, and Nuclear technologies,” the government said in a statement late Tuesday.




Workday previews a dozen AI agents, acquires Sana

After introducing its first AI agents for its HR and financial users last year, Workday returns this year with more prebuilt agents, a data layer for agents to feed analytics systems, and developer tools for custom agents.

The company also said it entered a definitive agreement to acquire Sana, whose AI-based tools enable learning and content creation. Workday said the acquisition will cost $1.1 billion and expects it to close by Jan. 31.

Workday has been on a tear with acquisitions this year. It reached an agreement to buy Paradox, an AI agent builder that automates tasks such as candidate screening, texting and interview scheduling. The deal is expected to close by the end of October. In April, Workday acquired Flowise, an AI agent builder.

HR software, in general, is complex compared with enterprise systems such as CRM, said Josh Bersin, an independent HR technology analyst. Because of that, some HR vendors will have to add agentic AI functionality through acquisition. Workday’s acquisitions this year coincide with the hiring of former SAP S/4HANA and analytics leader Gerrit Kazmaier as its president of product and technology.

“Workday knows that the architecture they have is not going to quickly get them to the world of agents — they can’t build agents fast enough to work across the proprietary workflow system that they have,” Bersin said. “Their direct competitors, SAP and Oracle, are all in the same boat.”

Agents, tools to come

Workday previewed several agents to automate HR work, including the Business Process Copilot Agent, which configures Workday for individual user tasks; Document Intelligence for Contingent Labor Agent, which manages scope of work processes and aligns contracts; Employee Sentiment Agent, which analyzes employee feedback; Job Architecture Agent, which automates job creation, titles and management; and Performance Agent, which surveys data across Workday and assembles it for performance reviews.

Another tool, Case Agent, could be a significant time-saver for HR workers, said Peter Bailis, chief technology officer at Workday. Bailis, a former Google executive who led AI for cloud analytics, also joined the company recently.

“One of the biggest challenges in HR [is when] an employee has a critical question,” Bailis said. “But their questions are often complex, and processing times for HR departments are often long.”

The case agent can review similar cases in HR, apply the right regional and compliance context, and draft a tailored response for humans to review and deliver.

“The most important part — caring for employees — stays human,” Bailis said.

On the financials side, Workday previewed Cost & Profitability Agent, which enables users to define allocation rules with natural language to derive insights; Financial Close Agent, which automates closing processes; and Financial Test Agent, which analyzes financials to detect fraud and enable compliance. For the education vertical, Workday plans to release Student Administration Agent and Academic Requirements Agent.

Workday also plans agents that bring the functionality of recent acquisitions Paradox and Flowise to its platform.

Expected in the next platform update is the zero-copy Workday Data Cloud, which brings together Workday data with other operational systems such as sales and risk management for analytics, forecasting and planning. Also in the works is Workday Build, a developer platform that includes no-code features from Flowise that enables the creation of custom agents.


How AI will affect HR jobs

The AI transformation that Workday and the rest of the enterprise HR software market are undergoing will likely affect the ratio of HR workers to employees at large businesses, Bersin said.

Currently, many companies aim for an industry standard of one HR employee per 100 employees; with AI agents automating many administrative processes, he said he sees the potential for ratios of 1:200, 1:250, or — in the case of one client that Bersin’s company interviewed — possibly 1:400.

As such, automation will enable companies to do more work with smaller HR teams.

“In recruiting, there are sourcers, screeners, interview schedulers, people that do assessment, people that look at pay, people that write job offers, people that create start dates, people that do onboarding,” Bersin said. “Those jobs, maybe a third of them will go away. In learning and development, there’s a new era where a lot of the training content is being generated by AI.”

Workday previewed these features and announced the Sana acquisition in conjunction with its Workday Rising user conference in Las Vegas Sept. 15-18.

Don Fluckinger is a senior news writer for Informa TechTarget. He covers customer experience, digital experience management and end-user computing. Got a tip? Email him.




Humanity’s Best or Worst Invention?

Artificial Intelligence is everywhere, and professors at Pacific are less than thrilled

Half the world seems to be under the impression that the creation of Artificial Intelligence (a.k.a. AI) is the greatest invention since the wheel, while the other half worries that AI will roll over humanity and crush it.

For students, though, it especially seems like humanity has struck gold. Why whittle away precious hours doing homework when AI can spit out an entire essay in seconds? (Plus, it can create fun images like the one you see accompanying this article.) It can even make videos that look so real you begin to doubt everything you see. AI is the robot that’s smarter than you, faster than you, and more creative than you—but maybe it’s not as good as some make it seem. 

“I think it’s a mistake to think of it as a tool,” said Professor Sang-hyoun Pahk. “They are replacing a little bit too much of the thinking that we want to do and want our students to do.” Professor Pahk recently gave a presentation to Pacific’s faculty on the topic, along with professors Aimee Wodda, Dana Mirsalis, and Rick Jobs. As at many universities around the world, AI has become an increasingly popular topic at Pacific, with some accepting the technology and others shunning it. Professor Pahk explained that there are a lot of opportunities for faculty to learn techniques for using AI as a tool in the classroom, but that not all faculty have a desire to go down that road. “There’s less…kind of strategies for what you do when you don’t want to use it,” he shared, firmly expressing that it’s something he wishes to stay far away from. “It’s part of what we were trying to start when we presented.”

Professor Pahk and his colleagues presented a mere week before school was back in session, so there was little time for faculty to change their syllabi to safeguard against students using AI on coursework. Still, many professors seemed to be on a similar page, tweaking their lesson plans and teaching methods to ensure that AI is a technology students can’t even be tempted to use.

“I’ve tried to, if you will, AI proof my classes to a certain degree,” Professor Jules Boykoff shared. “Over the recent years, I’ve changed the assignments quite a bit; one example is I have more in-class examinations.” Professor Boykoff is no stranger to AI and admits that he’s done his fair share of testing out the technology. Still, the cons seem to greatly outweigh the pros, especially when it comes to education. “I’m a big fan of students being able to write with clarity and confidence and I’m concerned that overall, AI provides an incentive to not work as hard at writing with clarity and confidence.” Professor Boykoff, like many of his colleagues, stresses that AI short-circuits one’s ability to learn and develop by doing all the heavy lifting for them. 

Philosophy Professor Richard Frohock puts it like this, “It would be like going to the gym and turning on the treadmill, and then just sitting next to it.” Professor Frohock said this with good humor, but his analogy rings true. “Thinking is the actual act of running. It’s hard, sometimes it sucks, we never really want to do it– and it’s not about having five miles on your watch…it’s about that process, that getting to five miles. And using AI is skipping that process, so it’s not actually helping you.” Similar to his colleagues, Professor Frohock doesn’t allow any AI usage in his classes, especially since students are still in the process of developing their minds. “I don’t want it to be us vs the students, and like we’re policing what you guys do,” he admits, explaining that he has no desire to make student learning more difficult, but rather the opposite. “If we want to use AI to expand our mind, first we actually have to have the skills to be thinkers independently without AI.” 

This is just one of many reasons that professors warn against using AI, but they’re not naïve to the fact that students will use it, nonetheless. It’s become integrated into all Google searches and social media, which means students interact with AI whether they want to or not. “I have come to the conclusion that it’s counterproductive to try and control in some way student use of AI,” commented Professor Michael Huntsberger. Like other faculty, Professor Huntsberger has adjusted his lesson plans to make using AI more challenging for students, but he recognizes that this may not be foolproof. Still, he warns students to be very cautious when approaching AI and advises, “Don’t use it past your first step–so as a place to start your research…I think that’s a great way to make use of these things, but then tread very carefully.” He suggests that students should leave behind the technology altogether once they’ve established their starting point so that their work can still maintain enough human interaction to be considered student work and not AI. 

The problem with any AI usage is that the results that pop up are the product of other people’s work, which encroaches on the grounds of plagiarism. “This is the big fight right now between creators and the big tech companies because the creators are saying ‘you’re drawing on our work,’” explained Professor Huntsberger. “And of course, those creators are, A, not being compensated, and B, are not being recognized in any way, and ultimately it’s stealing their work.” 

Pacific’s own Professor Boykoff recognizes that his work is a victim of this process, explaining that a generous chunk of his writing has been stolen by this technology. “A big conglomerate designed to make money is stealing my hard-earned labor,” he articulated. “It’s not just me, it’s not just like it’s a personal thing, I’m just saying, as a general principle it’s offensive.”

Alongside those obvious concerns, Professor Pahk adds a few more items to the cons list, saying, “Broadly, there’s on the one hand, the social, and environmental, and political costs of artificial intelligence.”

AI 0, Professor Pahk 1.

Looking past all the cons, Professor Pahk acknowledged a bright side to the situation. “It’s just…culturally a less serious problem here,” claimed Professor Pahk, sharing that from his experience, students at Pacific want to learn and aren’t here just to mark off courses on a to-do list. “It’s not that it’s not a problem here, but it’s not the same kind of problem.” 


