
European publishers file complaint over Google’s AI Overviews: report

Google’s controversial AI-generated summaries — which have been blamed for crushing the traffic of US news sites — have drawn an antitrust complaint in the European Union from a group of independent publishers.

The complaint by the Independent Publishers Alliance accuses the Sundar Pichai-led Big Tech giant of abusing its dominant position in online search by promoting its own AI-generated summaries over links to original content.

The filing, submitted on June 30, requests that the European Commission impose interim measures to prevent what it describes as “irreparable harm” to publishers.

Google’s artificial intelligence tools are being blamed for harming publishers’ businesses. Koshiro K – stock.adobe.com

“Google’s core search engine service is misusing web content for Google’s AI Overviews in Google Search, which have caused, and continue to cause, significant harm to publishers, including news publishers in the form of traffic, readership and revenue loss,” the complaint alleges.

The complaint comes as damning data from digital intelligence firm SimilarWeb shows that 37 of the top 50 US news domains have suffered year-over-year traffic declines since AI Overviews launched in May 2024.

SimilarWeb also found that the AI summaries have driven a significant increase in “zero-click” searches, in which a query ends without a click through to any website.

The percentage of web searches related to news that end without a click to a news site jumped to 69% in May 2025 from 56% for the same month last year, SimilarWeb found.

A spokesperson for the Competition and Markets Authority, the UK’s antitrust agency, confirmed to The Post that it received the complaint.

“Last week, we proposed to designate Google with strategic market status in search and search advertising. If designated, this would allow us to introduce targeted measures to address specific aspects of how Google operates search services in the UK,” the rep told The Post on Friday.

A group of independent publishers in the European Union filed an antitrust complaint against Google over its AI Overviews technology. dts News Agency Germany/Shutterstock

AI Overviews are summaries generated using Google’s artificial intelligence models and are displayed at the top of general search results. The feature is available in more than 100 countries. Google began incorporating advertisements into AI Overviews this past May.

The publishers allege that Google’s practice of displaying its own summaries above hyperlinks disadvantages original content and is made worse by the lack of control publishers have over how their material is used.

“Publishers using Google Search do not have the option to opt out from their material being ingested for Google’s AI large language model training and/or from being crawled for summaries, without losing their ability to appear in Google’s general search results page,” the complaint alleges.

The Movement for an Open Web, whose members include digital advertisers and publishers, and British nonprofit Foxglove Legal Community Interest Company are also signatories to the complaint.

“In short, AI Overviews are theft from the publishing industry,” Tim Cowen, co-founder of Movement for an Open Web, told The Post.

“They steal publishers’ content and then use that to steal their traffic before it reaches their site. That’s unfair and a clear breach of copyright principles.”

Cowen added that he wants publishers “to have the ability to opt out of their content being harvested for AI Overviews without fear of being punished in search results.”

The complaint submitted by the Independent Publishers Alliance accuses Google of abusing its dominant position in online search by promoting its own AI-generated summaries over links to original content. Google

“In the longer term we want to see a fair economic and regulatory model that rewards publishers for the value of their works,” he said.

The three organizations are seeking regulatory intervention to address what they say is an urgent threat to competition and access to news.

Foxglove co-executive director Rosa Curling said the consequences of AI Overviews for news publishers are severe.

“Independent news faces an existential threat: Google’s AI Overviews,” Curling said.

“That’s why with this complaint, Foxglove and our partners are urging the European Commission, along with other regulators around the world, to take a stand and allow independent journalism to opt out.”

A Google spokesperson defended the AI Overviews feature and disputed the characterization of its impact on publishers.

“New AI experiences in Search enable people to ask even more questions, which creates new opportunities for content and businesses to be discovered,” the spokesperson told Reuters.

Google added that the company sends billions of clicks to websites each day and that traffic fluctuations can be influenced by many factors.

“The reality is that sites can gain and lose traffic for a variety of reasons, including seasonal demand, interests of users, and regular algorithmic updates to Search,” the spokesperson said.

The claims in the EU complaint echo a similar argument made in a lawsuit filed in the United States by an education technology company, which alleges that Google’s AI Overviews are eroding demand for original content and damaging the competitive ability of publishers, resulting in declines in both traffic and subscriptions.

Google has been subject to antitrust scrutiny in the US and the European Union. Sundar Pichai, CEO of Google parent company Alphabet, is pictured above on May 20. AP

Google has faced several antitrust investigations on both sides of the Atlantic Ocean in recent years.

The tech giant is appealing a $4.7 billion fine imposed by the European Commission for allegedly abusing its dominance with the Android operating system. Last month, an advisor to the EU’s top court recommended the fine be upheld.

The European Commission is also continuing investigations into Google’s conduct in digital advertising and search, with potential for further regulatory action.

In the United States, a federal judge ruled in August 2024 that Google violated antitrust law by maintaining monopolies in general search and search advertising, citing exclusive deals such as those with Apple.

A ruling in the remedies phase of that trial, which could include breaking up Google, is expected next month.

In a separate ruling in April 2025, another judge found Google had illegally monopolized online advertising markets by controlling both the buy and sell sides of the ad exchange.

With Post Wires




AI takes passenger seat in Career Center with Microsoft Copilot

By Arden Berry | Staff Writer

To increase efficiency and help students succeed, the Career Center has created artificial intelligence agents through Microsoft Copilot.

Career Center Director Amy Rylander said the program began over the summer with teams creating user guides that described how students could ethically use AI while applying for jobs.

“We started learning about prompting AI to do things, and as we began writing the guides and began putting updates in them and editing them to be in a certain way, our data person took our guides and fed them into Copilot, and we created agents,” Rylander said. “So instead of just a user’s guide, we now have agents to help students right now with three areas.”

Rylander said those three areas are resume-building, interviewing and career discovery, and that the Career Center sent out an email last week linking the Copilot Agents for each.

“Agents use AI to perform tasks by reasoning, planning and learning — using provided information to execute actions and achieve predetermined goals for the user,” the email read.

To use the Copilot Agents, Rylander said, students should log in to Microsoft Office with their Baylor email, then open the provided Copilot Agent links and follow the prompts. The Career Discovery Agent, for example, supplies a starting prompt, then asks a set of questions and suggests potential career paths.

“It’ll help you take the skills that you’re learning in your major and the skills that you’ve learned along the way and tell you some things that might work for you, and then that’ll help with the search on what you might want to look for,” Rylander said.

Career Center Assistant Vice Provost Michael Estepp said creating AI systems was a “proactive decision.”

“We’re always saying, ‘What are the things that students are looking for and need, and what can our staff do to make that happen?’” Estepp said. “Do we go AI or not? We definitely needed to, just so we were ahead of the game.”

Estepp said the AI systems would not replace the Career Center but would increase its efficiency, giving staff more time to help students in a more specialized way.

“Students want to come in, and they don’t want to meet with us 27 times,” Estepp said. “We can actually even dive deeper into the relationships because, hopefully, we can help more students, because our goal is to help 100% of students, so I think that’s one of the biggest pieces.”

However, Rylander said students should remember to use AI only as a tool, not as a replacement for their own experience.

“Use it ethically. AI does not take the place of your voice,” Rylander said. “It might spit out a bullet that says something, and I’ll say, ‘What did you mean by that?’ and get the whole story, because we want to make sure you don’t lose your voice and that you are not presenting yourself as something that you’re not.”

For the future, Rylander said the Career Center is currently working on Graduate School Planning and Career Communications Copilots. Estepp also said Baylor has a contract with LinkedIn that will help students learn to use AI for their careers.

“AI has impacted the job market so significantly that students have to have that. It’s a mandatory skill now,” Estepp said. “We’re going to start messaging out to students different certifications they can take within LinkedIn, that they can complete videos and short quizzes, and then actually be able to get certifications in different AI and large language model aspects and then put that on their resume.”




When Cybercriminals Weaponize Artificial Intelligence at Scale

Anthropic’s August threat intelligence report reads like a cybersecurity thriller, except it’s terrifyingly not fiction. The report describes how cybercriminals used Claude AI to orchestrate attacks on 17 organizations, with ransom demands exceeding $500,000. This may be the most sophisticated AI-driven attack campaign to date.

But beyond the alarming headlines lies a more fundamental shift: the emergence of “agentic cybercrime,” where AI doesn’t just assist attackers; it becomes their co-pilot, strategic advisor, and operational commander all at once.

The End of Traditional Cybercrime Economics

The Anthropic report confirms a harsh reality that IT leaders have long feared: the economics of cybercrime have fundamentally changed. What previously required teams of specialized attackers working for weeks can now be accomplished by a single individual in a matter of hours with AI assistance.

Take the “vibe hacking” operation detailed in the report. One cybercriminal used Claude Code to automate reconnaissance across thousands of systems, create custom malware with anti-detection capabilities, perform real-time network penetration, and analyze stolen financial data to calculate psychologically optimized ransom amounts.

More than just following instructions, the AI made tactical decisions about which data to exfiltrate and crafted victim-specific extortion strategies that maximized psychological pressure. 

Sophisticated Attack Democratization

One of the most unnerving revelations in Anthropic’s report involves North Korean IT workers who have infiltrated Fortune 500 companies using AI to simulate technical competence they don’t have. While these attackers are unable to write basic code or communicate professionally in English, they’re successfully maintaining full-time engineering positions at major corporations thanks to AI handling everything from technical interviews to daily work deliverables. 

The report also discloses that 61 percent of the workers’ AI usage focused on frontend development, 26 percent on programming tasks, and 10 percent on interview preparation. They are essentially human proxies for AI systems, channeling hundreds of millions of dollars to North Korea’s weapons programs while their employers remain unaware. 

Similarly, the report reveals how criminals with little technical skill are developing and selling sophisticated ransomware-as-a-service packages for $400 to $1,200 on dark web forums. Features that previously required years of specialized knowledge, such as ChaCha20 encryption, anti-EDR techniques, and Windows internals exploitation, are now generated on demand with the aid of AI. 

Defense Speed Versus Attack Velocity

Traditional cybersecurity operates on human timescales, with threat detection, analysis, and response cycles measured in hours or days. AI-powered attacks, by contrast, operate at machine speed, with reconnaissance, exploitation, and data exfiltration occurring in minutes.

The cybercriminal highlighted in Anthropic’s report automated network scanning across thousands of endpoints, identified vulnerabilities with “high success rates,” and moved laterally through compromised networks faster than human defenders could respond. When initial attack vectors failed, the AI immediately generated alternative attacks, creating a dynamic adversary that adapted in real time.

This speed delta creates an impossible situation for traditional security operations centers (SOCs). Human analysts cannot keep up with the velocity and persistence of AI-augmented attackers operating 24/7 across multiple targets simultaneously. 

Asymmetry of Intelligence

What makes these AI-powered attacks particularly dangerous isn’t only their speed – it’s their intelligence. The criminals highlighted in the report utilized AI to analyze stolen data and develop “profit plans” by incorporating multiple monetization strategies. Claude evaluated financial records to gauge optimal ransom amounts, analyzed organizational structures to locate key decision-makers, and crafted sector-specific threats based on regulatory vulnerabilities. 

This level of strategic thinking, combined with operational execution, has created a new category of threat. These aren’t amateurs running predefined playbooks; they’re adaptive adversaries that learn and evolve throughout each campaign.

The Acceleration of the Arms Race 

The current challenge is summed up in one telling observation: “All of these operations were previously possible but would have required dozens of sophisticated people working for weeks to carry out. Now all you need is to spend $1 and generate 1 million tokens.”

The asymmetry is significant. Human defenders must deal with procurement cycles, compliance requirements, and organizational approval before deploying new security technologies. Cybercriminals simply create new accounts when existing ones are blocked – a process that takes about “13 seconds.” 

But this predicament also presents an opportunity. The same AI capabilities being weaponized can be harnessed for defense, and in many cases defensive AI has natural advantages.

Attackers can move fast, but defenders have access to something criminals don’t – historical data, organizational context, and the ability to establish baseline behaviors across entire IT environments. AI defense systems can monitor thousands of endpoints simultaneously, correlate subtle anomalies across network traffic, and respond to threats faster than human attackers can ever hope to. 

Modern AI security platforms, such as AI SOC agents that work like automated SOC analysts, have proven this principle in practice. By automating alert triage, investigation, and response, these systems process security events at machine speed while maintaining the context and judgment that pure automation lacks.

Defensive AI doesn’t need to be perfect; it just needs to be faster and more persistent than human attackers. When combined with human expertise for strategic oversight, this creates a formidable defensive posture for organizations. 

Building AI-Native Security Operations

The Anthropic report underscores that incremental improvements to traditional security tools won’t hold up against AI-augmented adversaries. Organizations need AI-native security operations that match the scale, speed, and intelligence of modern AI attacks.

This means leveraging AI agents that autonomously investigate suspicious activities, correlate threat intelligence across multiple sources, and respond to attacks faster than humans can. It requires SOCs that use AI for real-time threat hunting, automated incident response, and continuous vulnerability assessment. 

This new approach demands a shift from reactive to predictive security postures. AI defense systems must anticipate attack vectors, identify potential compromises before they fully manifest, and adapt defensive strategies based on emerging threat patterns. 

The Anthropic report makes clear that attackers don’t wait for the perfect tool. They train themselves on existing capabilities and can cause damage today, even if the AI revolution were to stop tomorrow. Organizations cannot afford to be more cautious than their adversaries.

The AI cybersecurity arms race is already here. The question isn’t whether organizations will face AI-augmented attacks, but whether they’ll be prepared when those attacks happen.

Success demands embracing AI as a core component of security operations, not an experimental add-on. It means leveraging AI agents that operate autonomously while maintaining human oversight for strategic decisions. Most importantly, it requires matching the speed of adoption that attackers have already achieved. 

The cybercriminals highlighted in the Anthropic report represent the new threat landscape. Their success demonstrates the magnitude of the challenge and the urgency of the needed response. In this new reality, the organizations that survive and thrive will be those that adopt AI-native security operations with the same speed and determination that their adversaries have already demonstrated. 

The race is on. The question is whether defenders will move fast enough to win it.  




Westwood joins 40 other municipalities using artificial intelligence to examine roads

The borough of Westwood has started using artificial intelligence to determine whether its roads need to be repaired or repaved.

It’s an effort by elected officials to save money on manpower and to ensure that paving decisions are objective.

Instead of relying on his own two eyes, the superintendent of Public Works is now allowing an app on his phone to record images of Westwood’s roads as he drives them.

The app collects data on every pothole, faded striping and 13 other types of road defects.

The road management app is from a New Jersey company called Vialytics.

Westwood is one of 40 municipalities in the state using the software, which also rates road quality and provides easy-to-use data.

“Now you’re relying on the facts here, not just my opinion of the street. It’s helped me a lot already. A lot of times you’ll have residents who just want their street paved. Now I can go back to people and say there’s nothing wrong with your street that it needs to be repaved,” said Rick Woods, superintendent of Public Works.

Superintendent Woods says he can even create work orders from the road as soon as a defect is detected.

Borough officials believe the Vialytics app will pay for itself in manpower and offer elected officials objective data when determining how to use taxpayer dollars for roads.


