
Artificial Intelligence and Abuse of Dominance in EU Law



The rise of artificial intelligence (AI) and its widespread availability raise questions regarding how it might facilitate EU competition law violations. This issue is complex due to two characteristics of AI systems highlighted in their definition under the EU AI Act: AI systems (1) operate with varying levels of autonomy and (2) infer from the input they receive how to generate outputs such as predictions, recommendations or decisions that can influence physical or virtual environments.

This blog post examines how the rise of AI could affect the application of Article 102 of the Treaty on the Functioning of the European Union (TFEU), which prohibits abusive behavior by dominant undertakings.

Abuse of Dominance

Under EU law, an undertaking is dominant if it can act, to an appreciable extent, independently of its competitors, customers and, ultimately, consumers. Holding a dominant position is not in itself a violation of EU competition law; an abuse must also be identified.

There is no exhaustive list of categories of abuse under Article 102 TFEU. Examples of abusive conduct typically include foreclosing competitors or exploiting consumers.

Defining Markets

The first step in investigating possible abusive behavior is to define the relevant market to determine whether the company in question holds a dominant position. Market definition can be challenging in AI markets. The European Commission’s 2024 market definition notice provides limited guidance on defining such markets.

AI presents unique challenges for market definition, especially because AI tools are built from a range of underlying technologies and can serve a wide variety of potential end-use applications.

Another key challenge for market definition is the risk of concluding too quickly that a firm holds a dominant position, especially when concerns over the concentration of market power in AI are closely tied to geopolitics and the race for AI leadership among the United States, Europe and China. For example, the United States appeared to have a sizable head start, with several prominent AI companies and large investment projects seemingly benefiting a handful of major firms with a strong presence in the digital space, raising concerns among competition authorities that US firms might come to control AI markets. However, DeepSeek’s breakthrough shortly after these investment announcements suggests that such concerns may be overstated and that large players are not immune to competition. If anything, DeepSeek’s market entry highlighted the dynamic nature of AI markets and their exposure to unexpected, large-scale disruption.

Abusing Dominance Through AI

Among the many potential AI-related concerns that could be investigated under Article 102 (or analogous national law), competition authorities in Europe are particularly focused on the risk that dominant companies abuse their position by using AI systems for self-preferencing or for abusive pricing practices.

  • Self-preferencing. A dominant company may arguably abuse its dominant position by programming AI systems to favor its own products or services over those of competitors (see the first sketch after this list).
    •  The European Commission fined Google €2.42 billion for allegedly illegally favoring its own comparison-shopping service. Unlike its competitors, Google’s service was allegedly not subjected to the company’s generic search algorithm, and the European Commission concluded that this led to unfair demotion of rival services.
    • The European Commission has accepted commitments from Amazon regarding its use of nonpublic data on sellers’ activities on Amazon’s marketplace. The European Commission found that Amazon’s use of sellers’ nonpublic business data to inform its own retail decisions was detrimental to effective competition. According to the European Commission, this practice enabled Amazon to adjust its prices to undermine rivals’ opportunities to compete effectively in retail markets.
  • Pricing abuses. A dominant company may purportedly abuse its dominant position through AI-based pricing for exclusionary or exploitative purposes, e.g., by employing AI to engage in predatory pricing (i.e., below-cost prices) or price discrimination (i.e., unjustifiably selling identical or similar goods or services at different prices).
    • Predatory pricing. Dominant firms may arguably use AI systems to target customers that are price-sensitive or at risk of switching to a competitor, offering predatory prices to retain or attract these customers at the expense of competitors. Assessing alleged predatory pricing is particularly challenging due to the difficulty of proving that prices are below cost.
    • Price discrimination. AI tools enable businesses to adjust prices based on various factors, such as consumer preferences and past purchases. With access to large datasets, businesses can change prices in real time, raising concerns about potential harm to consumer welfare. Under EU law, however, demonstrating exploitative abuse requires a high standard of proof. For personalized pricing to be considered abusive, it would at the very least be necessary to demonstrate that (1) the price discrimination is frequent; (2) the AI system consistently targets specific consumer groups; (3) there are no objective justifications; and (4) there is harm to social welfare compared with a counterfactual world without the alleged conduct (see the second sketch after this list).
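
To make the self-preferencing concern concrete, the following is a minimal sketch, assuming a toy search service in which a generic relevance score receives a fixed boost for the firm’s own offering. Every name and value in it (Result, OWN_BRAND, SELF_PREFERENCE_BIAS) is hypothetical and is not drawn from any decision or case file.

```python
# Minimal sketch, not from any case file: a toy ranking function showing how
# a fixed self-preference bias could demote rival services.
from dataclasses import dataclass

@dataclass
class Result:
    provider: str
    relevance: float  # score assigned by a generic ranking algorithm

OWN_BRAND = "own_service"
SELF_PREFERENCE_BIAS = 0.5  # constant boost applied only to the firm's own service

def biased_rank(results: list[Result]) -> list[Result]:
    """Sort results by relevance plus a boost reserved for the firm's own service."""
    def score(r: Result) -> float:
        return r.relevance + (SELF_PREFERENCE_BIAS if r.provider == OWN_BRAND else 0.0)
    return sorted(results, key=score, reverse=True)

results = [Result("rival_a", 0.9), Result("rival_b", 0.8), Result(OWN_BRAND, 0.6)]
print([r.provider for r in biased_rank(results)])
# ['own_service', 'rival_a', 'rival_b']: the firm's own service ranks first
# despite having the lowest underlying relevance score.
```

Even a small constant boost can systematically place the firm’s own service above more relevant rivals, which is the pattern at issue in the comparison-shopping case.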
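
The pricing concern can be sketched in the same spirit. The toy rule below, again entirely hypothetical, shows how a model-estimated churn risk could drive individualized discounts, and how such discounts can selectively dip below a cost floor of the kind used in predation analysis (such as average avoidable cost). All thresholds and cost figures are invented for illustration.

```python
# Minimal sketch, purely hypothetical: an AI model's churn-risk estimate
# drives individualized discounts; the most switch-prone customers receive
# prices below a per-unit cost floor.
LIST_PRICE = 80.0
AVERAGE_AVOIDABLE_COST = 50.0  # per-unit cost floor used here as a predation benchmark
MAX_DISCOUNT = 40.0            # deepest discount, reserved for the most at-risk customers

def personalized_price(churn_risk: float) -> float:
    """Discount more aggressively the more likely the customer is to switch."""
    return round(LIST_PRICE - MAX_DISCOUNT * churn_risk, 2)

for risk in (0.1, 0.5, 0.9):
    price = personalized_price(risk)
    print(f"churn risk {risk:.0%}: price {price}, below cost floor: {price < AVERAGE_AVOIDABLE_COST}")
# churn risk 90%: price 44.0, below cost floor: True. Only customers at high
# risk of switching receive below-cost offers, which is the selective pattern
# a predation assessment would scrutinize.
```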

Essential Inputs

Dominant firms may be especially likely to abuse their position by foreclosing access to essential inputs, such as IT components and data.

  • Access to IT components. Specialized chips are required to train foundation models for generative AI. If one company holds a dominant position in the market for such chips, it may abuse its position in various ways. Here are a few examples:
    • Rebates. A dominant AI chip manufacturer may grant rebates in exchange for a buyer’s commitment to purchase all or a substantial part of its requirements from the dominant manufacturer.
    • Refusal to supply. A dominant AI chipmaker may refuse to supply its chips to a company with which it competes in a downstream market. Such refusal could constitute an abuse if access to the chip is indispensable, the refusal is likely to eliminate all effective competition in the downstream market, and the refusal cannot be objectively justified.
    • Discriminatory behavior. A dominant AI chipmaker may offer preferential pricing and early access to its chips for its affiliated companies while charging independent AI players significantly higher prices and delaying their orders.
  • Access to data. Data is a key pillar of AI. To function effectively, AI systems require abundant, high-quality data on which to be trained to identify patterns and relationships. Data protection requirements may make it particularly difficult for smaller players (which have limited resources and budgets to ensure compliance) to gain access to sufficient personal data to develop leading AI products. Companies with access to larger datasets may be expected to develop superior AI models, potentially leading to allegations of market dominance and of anticompetitive barriers to entry, including accusations of denying access to data or granting it on discriminatory terms.

Tying and Bundling

Another example of abusive conduct arises where a dominant company attempts to foreclose its competitors by tying or bundling. “Tying” refers to a practice where the seller requires customers to purchase one product (the tying product) to obtain another product (the tied product). “Bundling” refers to offering discounts conditioned on the customer buying a package of two or more products from the supplier.

  • Tying. Tying may be contractual or technical.
    • Contractual tying occurs when the customer who purchases the tying product undertakes to also purchase the tied product rather than alternatives offered by competitors. This is most common in “traditional” markets. A possible example of contractual tying is where a software provider with a dominant position contractually requires that its AI tool only be used in conjunction with its software platform.
    • Technical tying occurs when two products are physically integrated or designed in a way that makes it impossible to use one without the other. For example, a dominant company might integrate its AI solutions into specific software, search engines or mobile devices, ensuring that these AI solutions cannot be separated or used independently.
  • Bundling. A typical form of bundling, often referred to as a multiproduct rebate, involves offering products as part of a bundle at a more attractive price than purchasing them separately. For example, a dominant company may provide better pricing when customers purchase its software packaged with its AI tools rather than buying the software alone. A numerical sketch of how such rebates might be screened follows this list.
  • Commitments. One area that has received substantial scrutiny is allegations of suppliers tying software products. Concerns about this practice could trigger competition authorities to impose fines or require commitments. For example, in July 2023, the European Commission initiated a formal antitrust investigation to determine whether Microsoft’s distribution of Teams violated EU competition rules. Almost a year later, the European Commission indicated its preliminary view that Microsoft may have abused its dominant position in the market for productivity software by tying Teams to its core productivity applications. In response, Microsoft proposed several commitments to address these concerns. Specifically, Microsoft offered to provide versions of Office 365 without Teams at a reduced price, allow customers to switch to suites without Teams, enhance interoperability for Teams’ competitors with other Microsoft products, and enable customers to transfer their data out of Teams to facilitate use of competing solutions. The European Commission’s assessment of these commitments is ongoing.
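
To illustrate how a multiproduct rebate might be screened, the European Commission’s enforcement priorities guidance suggests attributing the entire bundle discount to the contestable product and asking whether the resulting effective price still covers the dominant firm’s long-run average incremental cost (LRAIC). The sketch below runs that arithmetic on hypothetical figures.

```python
# Minimal sketch, hypothetical figures: the "discount attribution" screen for
# multiproduct rebates. The entire bundle discount is attributed to the
# contestable product, and the resulting effective price is compared with the
# long-run average incremental cost (LRAIC) of supplying that product.
software_standalone = 100.0  # standalone price of the software suite
ai_tool_standalone = 40.0    # standalone price of the AI tool (the contestable product)
bundle_price = 110.0         # price of the two products bought together
LRAIC_AI_TOOL = 15.0         # long-run average incremental cost of the AI tool

bundle_discount = software_standalone + ai_tool_standalone - bundle_price  # 30.0
effective_ai_price = ai_tool_standalone - bundle_discount                  # 10.0

print(f"effective price of the AI tool inside the bundle: {effective_ai_price}")
print(f"covers LRAIC: {effective_ai_price >= LRAIC_AI_TOOL}")
# 10.0 < 15.0: an equally efficient rival selling only the AI tool could not
# profitably match the bundle, the kind of result that invites scrutiny.
```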

For more information on this or other AI matters, please contact one of the authors.

The authors would like to thank Simon Lelouche for his assistance in preparing this blog post.




Nvidia says ‘We never deprive American customers in order to serve the rest of the world’ — company says GAIN AI Act addresses a problem that doesn’t exist



The bill, proposed by U.S. senators earlier this week, aims to regulate shipments of AI GPUs to adversaries and to prioritize U.S. buyers, and it made quite a splash in America. In response, Nvidia issued a statement asserting that the U.S. was, is, and will remain its primary market, implying that no regulation is needed for the company to serve America.

“The U.S. has always been and will continue to be our largest market,” a statement sent to Tom’s Hardware reads. “We never deprive American customers in order to serve the rest of the world. In trying to solve a problem that does not exist, the proposed bill would restrict competition worldwide in any industry that uses mainstream computing chips. While it may have good intentions, this bill is just another variation of the AI Diffusion Rule and would have similar effects on American leadership and the U.S. economy.”




OpenAI Projects $115 Billion Cash Burn by 2029



OpenAI has sharply raised its projected cash burn through 2029 to $115 billion, according to The Information. This marks an $80 billion increase from previous estimates, as the company ramps up spending to fuel the AI behind its ChatGPT chatbot.

The company, which has become one of the world’s biggest renters of cloud servers, projects it will burn more than $8 billion this year, about $1.5 billion higher than its earlier forecast. The surge in spending comes as OpenAI seeks to maintain its lead in the rapidly growing artificial intelligence market.


To control these soaring costs, OpenAI plans to develop its own data center server chips and facilities to power its technology.


The company is partnering with U.S. semiconductor giant Broadcom to produce its first AI chip, which will be used internally rather than made available to customers, as reported by The Information.


In addition to this initiative, OpenAI has expanded its partnership with Oracle, committing to 4.5 gigawatts of data center capacity to support its growing operations.


This is part of OpenAI’s larger plan, the Stargate initiative, which includes a $500 billion investment and is also supported by Japan’s SoftBank Group. Google Cloud has also joined the group of suppliers supporting OpenAI’s infrastructure.


OpenAI’s projected cash burn will more than double in 2026, reaching over $17 billion. It will continue to rise thereafter, with estimates of $35 billion in 2027 and $45 billion in 2028, according to The Information.


PromptLocker scared ESET, but it was an experiment



PromptLocker, widely reported as the world’s first ransomware created using artificial intelligence, turned out not to be a real attack at all but a research project at New York University.

On August 26, ESET announced that it had detected the first sample of ransomware with integrated artificial intelligence, which it dubbed PromptLocker. As it turned out, however, that was not the case: researchers from the Tandon School of Engineering at New York University were responsible for creating the code.

The university explained that PromptLocker is actually part of an experiment called Ransomware 3.0, conducted by a team at the Tandon School of Engineering. A representative of the school said that a sample of the experimental code had been uploaded to the VirusTotal malware-analysis platform, where ESET specialists discovered it and mistook it for a real threat.

According to ESET, the program used Lua scripts generated from strictly defined instructions. These scripts allowed the malware to scan the file system, analyze file contents, exfiltrate selected data, and perform encryption. At the same time, the sample implemented no destructive capabilities, a logical choice given that it was a controlled experiment.

Nevertheless, the code did function. New York University confirmed that its AI-based simulation system was able to go through all four classic stages of a ransomware attack: mapping the system, identifying valuable files, stealing or encrypting data, and creating a ransom note. Moreover, it did so across various types of systems, from personal computers and corporate servers to industrial controllers.

Should you be concerned? Yes, but with an important caveat: there is a big difference between an academic proof-of-concept demonstration and a real attack carried out by malicious actors. Still, such research can serve as a reference for cybercriminals, as it demonstrates not only the principle of operation but also the real cost of implementation.


