
Artificial Intelligence and Abuse of Dominance in EU Law



The rise of artificial intelligence (AI) and its widespread availability raise questions about how it might facilitate EU competition law violations. This issue is complex due to two characteristics of AI systems highlighted in their definition under the EU AI Act: AI systems (1) operate with varying levels of autonomy and (2) infer from the input they receive how to generate outputs such as predictions, recommendations or decisions that can influence physical or virtual environments.

This blog post examines how the rise of AI could affect the application of Article 102 of the Treaty on the Functioning of the European Union (TFEU), which prohibits abusive behavior by dominant undertakings.

Abuse of Dominance

Under EU law, an undertaking is dominant if it can, to an appreciable extent, act independently of its competitors, customers and consumers. Holding a dominant position alone does not violate EU competition law; an abuse must also be identified.

There is no exhaustive list of categories of abuse under Article 102 TFEU. Examples of abusive conduct typically include foreclosing competitors or exploiting consumers.

Defining Markets

The first step in investigating possible abusive behavior is to define the relevant market to determine whether the company in question holds a dominant position. Market definition can be challenging in AI markets. The European Commission’s 2024 market definition notice provides limited guidance on defining such markets.

AI presents unique challenges for market definition, especially due to its reliance on various technologies used to create tools with a wide range of potential end-use applications.

Another key challenge for market definition is the risk of concluding too quickly that a firm holds a dominant position, especially when concerns over the concentration of market power in AI markets are closely tied to geopolitical issues and the race for AI dominance among the United States, Europe and China. For example, the United States appeared to hold a significant advantage: several prominent AI companies and large investment projects seemed to benefit a handful of major firms with a strong presence in the digital space, raising concerns among competition authorities that US firms might come to control AI markets. However, DeepSeek's breakthrough shortly after these investment announcements demonstrated that such concerns may be overstated and that large players are not immune to competition. If anything, DeepSeek's market entry highlighted the dynamic nature of AI markets and their exposure to unexpected, large-scale disruption.

Abusing Dominance Through AI

Among the many AI-related potential Article 102 (or analogous national law) concerns that they might investigate, competition authorities in Europe are particularly concerned that dominant companies may abuse their position by using AI systems for self-preferencing or for abusive pricing practices.

  • Self-preferencing. A dominant company may arguably abuse its dominant position by programming AI systems to favor its own products or services over those of competitors.
    •  The European Commission fined Google €2.42 billion for allegedly illegally favoring its own comparison-shopping service. Unlike its competitors, Google’s service was allegedly not subjected to the company’s generic search algorithm, and the European Commission concluded that this led to unfair demotion of rival services.
    • The European Commission has accepted commitments from Amazon regarding its use of nonpublic data on sellers’ activities on Amazon’s marketplace. The European Commission found that Amazon’s use of sellers’ nonpublic business data to inform its own retail decisions was detrimental to effective competition. According to the European Commission, this practice enabled Amazon to adjust its prices to undermine rivals’ opportunities to compete effectively in retail markets.
  • Pricing abuses. A dominant company may purportedly abuse its dominant position through AI-based pricing for exclusionary or exploitative purposes, e.g., by employing AI to engage in predatory pricing (i.e., below-cost prices) or price discrimination (i.e., unjustifiably selling identical or similar goods or services at different prices).
    • Predatory pricing. Dominant firms may arguably use AI systems to target customers that are at risk of switching to a competitor or are price-sensitive, offering predatory prices to retain or attract these customers at the expense of competitors. Assessing alleged predatory pricing is particularly challenging due to the difficulty in proving prices are below cost.
    • Price discrimination. AI tools enable businesses to adjust prices based on various factors, such as consumer preferences and past purchases. With access to large datasets, businesses can change prices in real time, raising concerns about potential harm to consumer welfare. Under EU law, however, demonstrating exploitative abuse requires a high standard of proof. For personalized pricing to be considered abusive, it would at the very least be necessary to demonstrate that (1) price discrimination is frequent; (2) the AI system consistently targets specific consumer groups; (3) there are no objective justifications; and (4) there is harm to social welfare compared with a counterfactual world without the alleged conduct. A simplified sketch of how such personalized pricing might work in practice follows this list.
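To make the mechanics concrete, here is a deliberately simplified sketch of how an AI-assisted pricing engine might personalize prices from inferred customer signals. The class, thresholds and discounts are hypothetical illustrations, not a description of any actual system; whether conduct like this is abusive would depend on the legal conditions listed above.

```python
from dataclasses import dataclass

@dataclass
class CustomerSignals:
    # Hypothetical features a pricing model might infer from behavioral data.
    churn_risk: float         # estimated probability of switching to a rival (0-1)
    price_sensitivity: float  # estimated price-elasticity proxy (0-1)

def personalized_price(base_price: float, signals: CustomerSignals) -> float:
    """Toy pricing rule: discount customers estimated to be at risk of switching.

    Applied in real time over large datasets, a rule like this is the kind of
    AI-based pricing the text describes.
    """
    discount = 0.0
    if signals.churn_risk > 0.7:          # target customers likely to switch
        discount += 0.15
    if signals.price_sensitivity > 0.8:   # and the most price-sensitive ones
        discount += 0.05
    return round(base_price * (1 - discount), 2)

# Two customers see different prices for the identical product.
loyal = CustomerSignals(churn_risk=0.1, price_sensitivity=0.2)
at_risk = CustomerSignals(churn_risk=0.9, price_sensitivity=0.9)
print(personalized_price(100.0, loyal))    # 100.0
print(personalized_price(100.0, at_risk))  # 80.0
```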

Essential Inputs

Dominant firms may be especially likely to face allegations of abusing their position by foreclosing access to essential inputs, such as IT components and data.

  • Access to IT components. Specialized chips are required to train foundation models for generative AI. If one company holds a dominant position in the market for such chips, it may abuse its position in various ways. Here are a few examples:
    • Rebates. A dominant AI chip manufacturer may grant rebates in exchange for a buyer’s commitment to purchase all or a substantial part of its requirements from the dominant manufacturer.
    • Refusal to supply. A dominant AI chipmaker may refuse to supply its chips to a company with which it competes in a downstream market. Such refusal could constitute an abuse if access to the chip is indispensable, the refusal is likely to eliminate all effective competition in the downstream market, and the refusal cannot be objectively justified.
    • Discriminatory behavior. A dominant AI chipmaker may offer preferential pricing and early access to its chips for its affiliated companies while charging independent AI players significantly higher prices and delaying their orders.
  • Access to data. Data is a key pillar of AI. To function effectively, AI requires abundant, good-quality data on which it can be trained to identify patterns and relationships. Data protection requirements may make it particularly difficult for smaller players, which have limited resources and budgets to ensure compliance, to gain access to sufficient personal data to develop leading AI products. Companies with access to larger datasets may be expected to develop superior AI models, potentially leading to allegations of market dominance and the anticompetitive creation of barriers to entry, including accusations of denying access to data or providing discriminatory access.

Tying and Bundling

Another example of abusive conduct is where a dominant company may attempt to foreclose its competitors by tying or bundling. “Tying” refers to a practice where the seller requires customers to purchase one product (the tying product) to obtain another product (the tied product). “Bundling” refers to offering discounts conditioned on the customer buying a package of two or more products from the supplier.

  • Tying. Tying may be contractual or technical.
    • Contractual tying occurs when the customer who purchases the tying product undertakes to also purchase the tied product rather than alternatives offered by competitors. This is most common in “traditional” markets. A possible example of contractual tying is where a software provider with a dominant position contractually requires that its AI tool only be used in conjunction with its software platform.
    • Technical tying occurs when two products are physically integrated or designed in a way that makes it impossible to use one without the other. For example, a dominant company might integrate its AI solutions into specific software, search engines or mobile devices, ensuring that these AI solutions cannot be separated or used independently.
  • Bundling. A typical form of bundling, often referred to as a multiproduct rebate, involves offering products as part of a bundle at a more attractive price compared to purchasing them separately. For example, a dominant company may provide better pricing when customers purchase its software packaged with its AI tools rather than buying software services alone.
  • Commitments. One area that has received substantial scrutiny is allegations of suppliers tying software products. Concerns about this practice could trigger competition authorities to impose fines or require commitments. For example, in July 2023, the European Commission initiated a formal antitrust investigation to determine whether Microsoft’s distribution of Teams violated EU competition rules. Almost a year later, the European Commission indicated its preliminary view that Microsoft may have abused its dominant position in the market for productivity software by tying Teams to its core productivity applications. In response, Microsoft proposed several commitments to address these concerns. Specifically, Microsoft offered to provide versions of Office 365 without Teams at a reduced price, allow customers to switch to suites without Teams, enhance interoperability for Teams’ competitors with other Microsoft products, and enable customers to transfer their data out of Teams to facilitate use of competing solutions. The European Commission’s assessment of these commitments is ongoing.

For more information on this or other AI matters, please contact one of the authors.

The authors would like to thank Simon Lelouche for his assistance in preparing this blog post.




Training on AI, market research, raising capital offered through Jamestown Regional Entrepreneur Center – Jamestown Sun



Several training events will be held for the public through the Jamestown Regional Entrepreneur Center in September.

On Sept. 9-12, a “Get Found Masterclass” will be offered to the public. This four-part workshop series is designed specifically for small business service providers who are focused on growth through smarter systems, trusted tools and clear visibility strategies. Across four focused sessions, participants will learn how to protect their brand while embracing automation, use Google’s free tools to enhance online visibility and send the right visibility signals to today’s AI-powered search engines. Participants will discover how AI can support small businesses, how to build ethical systems that scale, and what really influences trust, authority and ranking.

On Sept. 10, a stand-alone workshop on “Market and Customer Research” will be held. This workshop will guide participants on where and how to find customers. The presentation will also discuss how to find out, for free, which SEO keywords competitors are using. Participants will compare current methods of social media marketing and discuss the variety of free market research tools that offer critical information on their industry and customers.

On Sept. 23, an “AI Tools for Social Media Marketing” workshop is planned. Participants will discuss the use of tools like ChatGPT to brainstorm post ideas, captions and scripts; Lately.ai to repurpose long-form content into social media snippets; Canva with Magic Studio for fast, on-brand visuals; and Metricool or Later for AI-assisted scheduling and analytics. The aim is to automate so participants can focus on connection and creativity.

On Sept. 24, a high-level presentation will be led by Kat Steinberg, special counsel, and Amy Reischauer, deputy director of the Securities and Exchange Commission's Office of the Advocate for Small Business Capital Formation, on the regulatory framework and SEC resources surrounding raising capital. They will also share broad data from the office's most recent annual report on what has been happening in capital raising in recent years. The office advocates for and advances the interests of small businesses seeking to raise capital, and the investors who support them, at the SEC and in the capital markets. It develops comprehensive educational materials and resources while actively engaging with industry stakeholders to identify both obstacles and emerging opportunities in the capital formation landscape. Through events like this, the office creates platforms for meaningful dialogue, collecting valuable feedback and disseminating insights about capital-raising pathways for small businesses, from early-stage startups to established small public companies.

To register for these training events, visit www.JRECenter.com/Events. Follow the Jamestown Regional Entrepreneur Center at Facebook.com/JRECenter, on Instagram at JRECenter and on LinkedIn. Questions may be directed to Katherine.Roth@uj.edu.






1 Brilliant Artificial Intelligence (AI) Stock Down 30% From Its All-Time High That’s a No-Brainer Buy



ASML is one of the world’s most critical companies.

Few companies’ products are as critical to the modern world’s technological infrastructure as those made by ASML. Without the chipmaking equipment the Netherlands-based manufacturer provides, much of the world’s most innovative technology wouldn’t be possible. That makes it one of the most important companies in the world, even if many people have never heard of it.

Over the long term, ASML has been a profitable investment, but the stock has struggled recently — it’s down by more than 30% from the all-time high it touched in July 2024. I believe this pullback presents an excellent opportunity to buy shares of this key supporting player for the AI sector and other advanced technologies.  


ASML has been a victim of government policies around the globe

ASML makes lithography machines, which trace out the incredibly fine patterns of the circuits on silicon chips. Its top-of-the-line extreme ultraviolet (EUV) lithography machines are the only ones capable of printing the newest, most powerful, and most feature-dense chips. No other companies have been able to make EUV machines thus far. They are also highly regulated, as Western nations don’t want this technology going to China, so the Dutch and U.S. governments have put strict restrictions on the types of machines ASML can export to China or its allies. In fact, even tighter new regulations were put in place last year that prevented ASML from servicing some machines that it previously was allowed to sell to Chinese companies.

As a result of these export bans, ASML’s sales to one of the world’s largest economies have been curtailed. This led to investors bidding the stock down in 2024 — a drop it still hasn’t recovered from.

2025 has been a relatively strong year for ASML’s business, but tariffs have made it challenging to forecast where matters are headed. Management has been cautious with its guidance for the year as it is unsure of how tariffs will affect the business. In its Q2 report, management stated that tariffs had had a less significant impact in the quarter than initially projected. As a result, ASML generated 7.7 billion euros in sales, which was at the high end of its 7.2 billion to 7.7 billion euro guidance range. For Q3, the company says it expects sales of between 7.4 billion and 7.9 billion euros, but if tariffs have a significantly negative impact on the economic picture, it could come up short.

Given all the planned spending on new chip production capacity to meet AI-related demand, investors would be wise to assume that ASML will benefit. However, the company is staying conservative in its guidance even as it prepares for growth. This conservative stance has caused the market to remain fairly bearish on ASML’s outlook even as all signs point toward a strong 2026.

This makes ASML a buying opportunity at its current stock price.

ASML’s valuation hasn’t been this low since 2023

Relative to its levels over the last five years, ASML trades at historically low trailing and forward price-to-earnings (P/E) ratios.
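For readers unfamiliar with the metric, here is a minimal sketch of how trailing and forward P/E are computed; the share price and earnings figures are hypothetical placeholders for illustration, not ASML's actual numbers.

```python
def pe_ratio(price_per_share: float, earnings_per_share: float) -> float:
    """Price-to-earnings ratio: what the market pays per unit of earnings."""
    return price_per_share / earnings_per_share

# Hypothetical numbers for illustration only.
price = 700.0         # share price
trailing_eps = 25.0   # earnings per share over the trailing twelve months
forward_eps = 30.0    # analysts' estimated earnings over the next twelve months

print(pe_ratio(price, trailing_eps))  # trailing P/E: 28.0
print(pe_ratio(price, forward_eps))   # forward P/E: ~23.3
```

A lower forward P/E than trailing P/E reflects expected earnings growth, which is why rising guidance tends to make a stock look cheaper on a forward basis.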

Chart: ASML P/E ratio over time. Data by YCharts.

With expectations for ASML at low levels, investors shouldn’t be surprised if its valuation rises sometime over the next year, particularly if management’s commentary becomes more bullish as demand increases in line with chipmakers’ efforts to expand their production capacity.

This could lift ASML back into its more normal valuation range in the mid-30s, which is perfectly acceptable given its growth level, considering that it has no direct competition.

ASML is a great stock to buy now and hold for several years or longer, allowing you to reap the benefits of chipmakers increasing their production capacity. Just because the market isn’t that bullish on ASML now, that doesn’t mean it won’t be in the future. This rare moment offers an ideal opportunity to load up on shares of a stock that I believe is one of the best values in the market right now.




AI’s not ‘reasoning’ at all – how this team debunked the industry hype






ZDNET’s key takeaways

  • We don’t entirely know how AI works, so we ascribe magical powers to it.
  • Claims that Gen AI can reason are a “brittle mirage.”
  • We should always be specific about what AI is doing and avoid hyperbole.

Ever since artificial intelligence programs began impressing the general public, AI scholars have been making claims for the technology’s deeper significance, even asserting the prospect of human-like understanding. 

Scholars wax philosophical because even the scientists who created AI models such as OpenAI’s GPT-5 don’t really understand how the programs work — not entirely. 

AI’s ‘black box’ and the hype machine

AI programs such as LLMs are infamously “black boxes.” They achieve a lot that is impressive, but for the most part, we cannot observe all that they are doing when they take an input, such as a prompt you type, and they produce an output, such as the college term paper you requested or the suggestion for your new novel.

To fill the gap, scientists have applied colloquial terms such as “reasoning” to describe the way the programs perform. In the process, they have either implied or outright asserted that the programs can “think,” “reason,” and “know” in the way that humans do. 

In the past two years, the rhetoric has overtaken the science as AI executives have used hyperbole to twist what were simple engineering achievements. 

OpenAI’s press release last September announcing its o1 reasoning model stated that, “Similar to how a human may think for a long time before responding to a difficult question, o1 uses a chain of thought when attempting to solve a problem,” so that “o1 learns to hone its chain of thought and refine the strategies it uses.”

It was a short step from those anthropomorphizing assertions to all sorts of wild claims, such as OpenAI CEO Sam Altman’s comment, in June, that “We are past the event horizon; the takeoff has started. Humanity is close to building digital superintelligence.”

(Disclosure: Ziff Davis, ZDNET’s parent company, filed an April 2025 lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

The backlash from AI researchers

There is a backlash building, however, from AI scientists who are debunking the assumptions of human-like intelligence via rigorous technical scrutiny. 

In a paper published last month on the arXiv pre-print server and not yet reviewed by peers, the authors — Chengshuai Zhao and colleagues at Arizona State University — took apart the reasoning claims through a simple experiment. What they concluded is that “chain-of-thought reasoning is a brittle mirage,” and it is “not a mechanism for genuine logical inference but rather a sophisticated form of structured pattern matching.” 

The term “chain of thought” (CoT) is commonly used to describe the verbose stream of output that you see when a large reasoning model, such as OpenAI’s o1 or DeepSeek’s R1, shows you how it works through a problem before giving the final answer.

That stream of statements isn’t as deep or meaningful as it seems, write Zhao and team. “The empirical successes of CoT reasoning lead to the perception that large language models (LLMs) engage in deliberate inferential processes,” they write. 

But, “An expanding body of analyses reveals that LLMs tend to rely on surface-level semantics and clues rather than logical procedures,” they explain. “LLMs construct superficial chains of logic based on learned token associations, often failing on tasks that deviate from commonsense heuristics or familiar templates.”

The term “chains of tokens” is a common way to refer to a series of elements input to an LLM, such as words or characters. 

Testing what LLMs actually do

To test the hypothesis that LLMs merely pattern-match rather than truly reason, they trained OpenAI’s older open-source LLM, GPT-2 (released in 2019), from scratch, using an approach they call “data alchemy.”

Figure: the team’s “data alchemy” training setup (image: Arizona State University).

The model was trained from the beginning to manipulate just the 26 letters of the English alphabet, “A, B, C,” etc. That simplified corpus lets Zhao and team test the LLM on a set of very simple tasks, all of which involve manipulating sequences of the letters, for example rotating a sequence a certain number of positions so that “APPLE” becomes “EAPPL.”

Using the limited vocabulary of tokens and the limited set of tasks, Zhao and team vary which tasks the language model is exposed to in its training data versus which tasks are seen only when the finished model is tested, such as “Shift each element by 13 places.” It’s a test of whether the language model can reason its way to performing new, never-before-seen tasks. 
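To make the setup concrete, here is a minimal Python sketch of letter-manipulation tasks like those described, with a train/held-out split over them. It illustrates the experimental idea only; the task names, the specific transformations and the split are assumptions, not the ASU team's actual code.

```python
import string

ALPHABET = string.ascii_uppercase  # the 26-token vocabulary

def rotate(seq: str, n: int) -> str:
    """Cyclically rotate the sequence n positions: rotate('APPLE', 1) -> 'EAPPL'."""
    n %= len(seq)
    return seq[-n:] + seq[:-n] if n else seq

def shift_letters(seq: str, n: int) -> str:
    """Shift each letter n places in the alphabet: shift_letters('ABC', 13) -> 'NOP'."""
    return "".join(ALPHABET[(ALPHABET.index(c) + n) % 26] for c in seq)

# Hypothetical split: the model trains on some transformations and is then
# tested on ones it has never seen, probing whether its chain of thought
# generalizes or merely pattern-matches.
train_tasks = [("rotate_1", lambda s: rotate(s, 1)),
               ("shift_2", lambda s: shift_letters(s, 2))]
held_out_tasks = [("shift_13", lambda s: shift_letters(s, 13))]  # e.g., ROT-13

for name, task in train_tasks + held_out_tasks:
    print(name, task("APPLE"))
# rotate_1 EAPPL
# shift_2 CRRNG
# shift_13 NCCYR
```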

They found that when the tasks were not in the training data, the language model failed to complete them correctly using a chain of thought. The model fell back on tasks that were in its training data, and its “reasoning” sounded plausible, but the answers it generated were wrong. 

As Zhao and team put it, “LLMs try to generalize the reasoning paths based on the most similar ones […] seen during training, which leads to correct reasoning paths, yet incorrect answers.”

Specificity to counter the hype

The authors draw some lessons. 

First: “Guard against over-reliance and false confidence,” they advise, because “the ability of LLMs to produce ‘fluent nonsense’ — plausible but logically flawed reasoning chains — can be more deceptive and damaging than an outright incorrect answer, as it projects a false aura of dependability.”

Second: try out tasks that are unlikely to have been contained in the training data, so that the AI model is stress-tested. 

What’s important about Zhao and team’s approach is that it cuts through the hyperbole and takes us back to the basics of understanding what exactly AI is doing. 

When the original chain-of-thought research, “Chain-of-Thought Prompting Elicits Reasoning in Large Language Models,” was performed by Jason Wei and colleagues on the Google Brain team in 2022 (research that has since been cited more than 10,000 times), the authors made no claims about actual reasoning. 

Wei and team noticed that prompting an LLM to list the steps in a problem, such as an arithmetic word problem (“If there are 10 cookies in the jar, and Sally takes out one, how many are left in the jar?”) tended to lead to more correct solutions, on average. 
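In that spirit, a chain-of-thought prompt pairs a worked example with a new question. The wording below is an illustration of the technique, not an example taken from Wei and team's paper.

```python
# A minimal chain-of-thought prompt: the few-shot example spells out its
# reasoning steps, nudging the model to answer the new question the same way.
prompt = """Q: If there are 10 cookies in the jar, and Sally takes out one,
how many are left in the jar?
A: The jar starts with 10 cookies. Sally takes out 1 cookie. 10 - 1 = 9.
The answer is 9.

Q: A jar holds 24 marbles, and Tom takes out 5. How many are left?
A:"""
print(prompt)  # send this text to an LLM and let it continue from "A:"
```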

Figure: an example of chain-of-thought prompting (image: Google Brain).

They were careful not to assert human-like abilities. “Although chain of thought emulates the thought processes of human reasoners, this does not answer whether the neural network is actually ‘reasoning,’ which we leave as an open question,” they wrote at the time. 

Since then, Altman’s claims and various press releases from AI promoters have increasingly emphasized the human-like nature of reasoning using casual and sloppy rhetoric that doesn’t respect Wei and team’s purely technical description. 

Zhao and team’s work is a reminder that we should be specific, not superstitious, about what the machine is really doing, and avoid hyperbolic claims. 




