The effective protection of trademark rights is essential for
preserving commercial identity and protecting consumers from
misleading or counterfeit products. However, in recent
years, particularly with the acceleration of digitalization,
traditional enforcement methods have become increasingly
inadequate. The global expansion of e-commerce platforms has made
it easier for counterfeit goods to circulate online, complicating
efforts by trademark owners to safeguard their rights. In this
evolving landscape, artificial intelligence (AI) technologies offer
a new and promising approach by enhancing the detection and
prevention of trademark infringements.
To date, trademark owners have relied on various methods to
protect their rights. Classic approaches in cases of infringement
have included tools such as notice and takedown procedures, as well
as civil and criminal litigation. However, the vast volume of
online content, the rapid expansion of e-commerce platforms,
digital piracy, and the rise of international infringements have
made it increasingly difficult to combat trademark violations with
traditional methods alone. In this context, AI-powered solutions
are beginning to meet the speed and scale required for effective
trademark protection.
Advantages and Opportunities
The innovations that AI brings to trademark protection are
fundamentally based on its capacity to analyze vast amounts of
data. Technologies such as image recognition, natural language
processing, and machine learning enable real-time monitoring and
analysis of online platforms to detect potential infringements. For
example, visual recognition systems capable of identifying
trademark logos can scan millions of product images to detect
similar or counterfeit uses. Likewise, audio recognition
technologies can identify unauthorized uses of non-traditional
trademarks, such as sound marks. These tools can also automate
tasks such as generating cease-and-desist letters, submitting
complaints to digital platforms, and mapping networks of
counterfeit products.
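As a concrete illustration of the image-matching step, the sketch below flags product photos whose perceptual hash falls close to a reference logo. It is a minimal example under stated assumptions, not a production system: deployed platforms typically rely on trained visual models rather than simple hashing, and the file names and distance threshold here are hypothetical.

```python
# A minimal sketch, assuming the Pillow and imagehash libraries are installed.
# It flags images whose perceptual hash sits close to a reference logo; the
# paths and the distance threshold are illustrative placeholders, not tuned.
from PIL import Image
import imagehash

REFERENCE_LOGO = "brand_logo.png"   # hypothetical reference image
MAX_DISTANCE = 10                   # Hamming-distance cutoff (illustrative)

def find_suspect_images(candidate_paths):
    """Return (path, distance) pairs within MAX_DISTANCE of the logo hash."""
    reference_hash = imagehash.phash(Image.open(REFERENCE_LOGO))
    suspects = []
    for path in candidate_paths:
        distance = imagehash.phash(Image.open(path)) - reference_hash
        if distance <= MAX_DISTANCE:
            suspects.append((path, distance))
    return sorted(suspects, key=lambda pair: pair[1])

# Example usage over a batch of scraped listing images:
# find_suspect_images(["listing_001.jpg", "listing_002.jpg"])
```

Perceptual hashing tolerates resizing and mild compression, which is why it serves as a reasonable stand-in here for the heavier embedding-based matching used at scale.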
One of the most significant advantages AI offers is its ability
to conduct comprehensive monitoring at high speed, low cost, and in
multiple languages—enabling businesses to protect their
trademarks on a global scale. These advancements empower trademark
owners to act more proactively and strategically, reducing both
time and legal expenses. In this way, AI facilitates the automation
of infringement detection, counterfeit tracking, and monitoring of
suspicious domain name registrations. This frees human experts to
focus on more complex cases and ensures that enforcement resources
are allocated efficiently.
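The domain-monitoring piece can be illustrated with a short sketch as well. Assuming a feed of newly registered domain names (the brand string and the sample feed below are hypothetical), it flags both "combosquats" that embed the brand and "typosquats" that sit within a small edit distance of it:

```python
# Minimal sketch of domain-name screening: flag newly registered domains that
# embed a brand name or fall within a small edit distance of it. The brand
# "acmecorp" and the sample registrations are hypothetical.
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def flag_suspicious(domains, brand="acmecorp", max_distance=2):
    """Flag combosquats (brand embedded) and typosquats (near-miss labels)."""
    flagged = []
    for domain in domains:
        label = domain.split(".")[0].lower()
        if brand in label or edit_distance(label, brand) <= max_distance:
            flagged.append(domain)
    return flagged

# "acmec0rp.shop" and "acmecorp-outlet.com" are both flagged; "example.org" is not.
print(flag_suspicious(["acmec0rp.shop", "acmecorp-outlet.com", "example.org"]))
```

In practice, a flagged list like this would feed a review queue rather than trigger enforcement automatically.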
Disadvantages and Legal Challenges
Despite the significant potential AI offers, its implementation
also presents several legal and technical challenges. One of the
most critical issues is the variation in trademark laws across
different jurisdictions. For AI to effectively conduct global
monitoring, it must be capable of complying with local legal
frameworks. A particular use that constitutes infringement in one
country may be entirely lawful in another. This necessitates the
customization and continual updating of AI algorithms on a
country-by-country basis.
Another key challenge involves the concept of fair use. AI
systems may struggle to distinguish between genuine infringement
and legitimate fair use, potentially misclassifying lawful
activities as violations of trademark rights.
Finally, the cost-benefit balance must also be considered.
Implementing AI solutions involves significant costs, including
initial setup, ongoing maintenance, and the need for high-quality
data. While the cost-benefit ratio tends to favor large
enterprises, smaller businesses may find the investment less
economically viable.
Ethics and Privacy
The use of AI systems powered by big data raises significant
ethical and privacy concerns. During the monitoring of
user-generated content, personal data may also be
processed—potentially triggering obligations under various
data protection laws, such as the Turkish Personal Data Protection
Law and the European General Data Protection Regulation (GDPR).
Accordingly, AI-based systems must adhere to core data protection
principles, including data minimization, transparency, and purpose
limitation, and must not infringe upon the rights of data
subjects.
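To make the data-minimization principle concrete, the sketch below shows one way a monitoring pipeline might strip a scraped listing down to assessment-relevant fields and pseudonymize the seller identifier before storage. The field names and record layout are invented for illustration, and salted hashing is pseudonymization rather than anonymization, so the stored records may still qualify as personal data under the GDPR.

```python
# Minimal sketch of data minimization for a monitoring pipeline: keep only the
# fields needed to assess infringement and pseudonymize personal identifiers
# before storage. Field names and the record layout are hypothetical.
import hashlib

FIELDS_NEEDED = {"listing_url", "image_url", "product_title", "price"}

def minimize(record: dict, salt: bytes = b"rotate-me") -> dict:
    """Retain only assessment-relevant fields; pseudonymize the seller ID."""
    kept = {k: v for k, v in record.items() if k in FIELDS_NEEDED}
    if "seller_id" in record:
        # One-way pseudonym so repeat offenders can be linked without storing
        # the raw identifier; the salt must be managed and rotated per policy.
        kept["seller_pseudonym"] = hashlib.sha256(
            salt + record["seller_id"].encode()
        ).hexdigest()[:16]
    return kept

raw = {"listing_url": "https://example.shop/item/9", "product_title": "Bag",
       "price": "19.99", "seller_id": "user-4821", "buyer_comments": "..."}
print(minimize(raw))  # buyer_comments and the raw seller_id are not stored
```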
In cases involving automated decision-making (ADM), it is
crucial to implement appropriate safeguards to protect individuals.
Moreover, there is a real risk that erroneous decisions by AI
systems could lead to the removal of lawful content. Therefore,
such systems must be carefully designed to account for legal
exceptions, including fair use.
Equally important is the need to prevent algorithmic bias and
ensure that human oversight remains an integral part of the
decision-making process. AI is not merely a technological
tool—it plays an increasingly influential role in enforcement
strategies. For this reason, AI systems must be transparent, fair,
and auditable. Failing to meet these standards could lead to
serious ethical concerns, such as the violation of individual
rights under the guise of trademark enforcement.
Hybrid Approach: The Collaboration Between Artificial
Intelligence and Human Intelligence
AI is extremely successful in analyzing large volumes of data,
conducting extensive online searches, and automating routine tasks.
However, it currently does not seem feasible for AI to replace
human intelligence in areas that require legal interpretation,
contextual assessment, and ethical sensitivity. Therefore, a hybrid
approach that combines the speed and scalability advantages offered
by AI with the common sense and legal intuition provided by human
expertise stands out as the most viable path.
In this collaborative model, AI scans, classifies, and performs
a preliminary analysis of potentially infringing content before
forwarding it to human experts. Humans then assess this content in
greater depth to ensure the correct legal decisions are made. This
approach reduces false positives and allows nuanced cases, such as
fair use or criticism, to be properly
distinguished. Moreover, this collaboration plays a critical role
not only in legal accuracy but also in maintaining the legitimacy
of technology in the eyes of society. Human oversight can ensure
that AI decisions are fair, transparent, and aligned with societal
values. Therefore, when the power of AI is combined with the
supervision of human judgment, trademark protection becomes not
only more effective but also more ethical.
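A minimal sketch of that division of labor might look as follows; the thresholds and the per-listing infringement scores are hypothetical placeholders for whatever a deployed model would actually produce. Detections above a high-confidence bar proceed automatically, the ambiguous middle band goes to a human expert, and the rest is dismissed:

```python
# Minimal sketch of the hybrid triage described above: a model score routes
# each detection to automated action, human review, or dismissal. The
# thresholds and the example scores are hypothetical placeholders.
from dataclasses import dataclass

AUTO_ACTION_THRESHOLD = 0.95   # near-certain counterfeits: file a takedown
REVIEW_THRESHOLD = 0.60        # ambiguous cases: queue for a human expert

@dataclass
class Detection:
    listing_url: str
    score: float               # model-estimated probability of infringement

def triage(detections):
    """Split detections into auto-takedown, human-review, and dismissed sets."""
    auto, review, dismissed = [], [], []
    for d in detections:
        if d.score >= AUTO_ACTION_THRESHOLD:
            auto.append(d)
        elif d.score >= REVIEW_THRESHOLD:
            review.append(d)   # possible fair use, parody, or lawful resale
        else:
            dismissed.append(d)
    return auto, review, dismissed

auto, review, dismissed = triage([
    Detection("https://example.shop/item/1", 0.98),
    Detection("https://example.shop/item/2", 0.72),  # goes to a human expert
    Detection("https://example.shop/item/3", 0.10),
])
```

The width of the middle band is the key policy lever: widening it sends more borderline cases, such as potential fair use, to human judgment at the cost of throughput.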
Future Outlook and Conclusion
In the future, AI may evolve into systems that not only detect
existing infringements but also predict potential infringements in
advance. Dynamic content monitoring tools, algorithms that analyze
market trends, and AI-powered platforms that support lawyers in
litigation processes will further advance the process of trademark
enforcement. However, the successful implementation of these
developments depends on the use of technology within legal and
ethical boundaries. Human expertise, not technology alone, must be
integrated into this process to develop a fair, effective, and
sustainable protection strategy.
In conclusion, AI-supported brand protection systems have become
indispensable in today’s digital world. The correct
application of these technologies will enable brand owners to
protect their rights more effectively, while also increasing
consumer safety. However, at the heart of this entire process must
be a transparent and responsible understanding of technology that
is balanced with human common sense.
Earlier this month, allegedly AI-generated footage from one of Will Smith’s gigs was released.
Snopes agreed that the crowd shots featured ‘some AI manipulation’. You can watch the video below:
“Will Smith is being accused of posting a video that features AI-generated shots of fans cheering in the crowd during his tour” pic.twitter.com/1Zvmp1p8Mg (August 27, 2025)
Eagle-eyed viewers who paused the footage spotted some telltale signs: namely, that the AI ‘fans’ in the video looked less like humans and more like, well, alien creatures in a horror movie who are desperate to suck out your soul. Their hands were elongated and had more fingers than the children of incestuous relationships, while their blurred facial features resembled melted candles in the shape of ghouls.
Nonetheless, it turns out the emotive slogans were real and were held by real Smith fans, such as Patric and Géraldine of Switzerland, who held up a sign saying “‘You Will Make It’ helped me survive cancer. Thx Will”. And to be fair to Smith, it appears that the massive crowds in the video were real: his team had merely used AI to turn still images into short videos.
Green Day laughed at Smith on Instagram, posting a shot of their fans at a gig with the caption: “Don’t need AI for our crowds”.
However, though his team seems merely to have used AI to animate stills, Smith’s is unlikely to be the last example we see of performers using AI footage of fans. Every music artist wants a full-to-bursting, over-emotional stadium crowd who are hysterical with joy at seeing their idol(s). So if you, unlike Smith, personally can’t get real footage of that, then why not fake it? (Probably because the internet is full of merciless, critical sleuths who are going to roast you until you’re a smoking heap of charred remains.)
Donald Trump’s team have allegedly paid extras to appear at his rallies to fill spare stadium seats, but that’s expensive and also risky as people might not show up – or, even worse for the team, Democrats might turn up. Generating AI footage is far cheaper, even if it burns trees.
You could also make your crowds as attractive, young, unisex, and ethnically diverse as you want – even if the pause button does reveal them to be more horrifying than the zombies in I Am Legend.
[Photo: Reading materials and fliers at the Sacramento Works job training and resources center in Sacramento on April 23, 2024. The center provides help and resources to job seekers, businesses and employers in Sacramento County. Miguel Gutierrez Jr., CalMatters]
This story was originally published by CalMatters.
After three years of trying to give Californians the right to know when AI is making a consequential decision about their lives and to appeal when things go wrong, Assemblymember Rebecca Bauer-Kahan said she and her supporters will have to wait again, until next year.
The San Ramon Democrat announced Friday that Assembly Bill 1018, which cleared the Assembly and two Senate committees, has been designated a two-year bill, meaning it can return as part of the legislative session next year. That move will allow more time for conversations with Gov. Gavin Newsom and more than 70 opponents. The decision came in the final hours of the California legislative session, which ends today.
Her bill would require businesses and government agencies to alert individuals when automated systems are used to make important decisions about them, including for apartment leases, school admissions, and, in the workplace, hiring, firing, promotions, and disciplinary actions. The bill also covers decisions made in education, health care, criminal justice, government benefits, financial services, and insurance.
“This pause reflects our commitment to getting this critical legislation right, not a retreat from our responsibility to protect Californians,” Bauer-Kahan said in a statement shared with CalMatters.
The pause comes at a time when politicians in Washington D.C. continue to oppose AI regulation that they say could stand in the way of progress. Last week, leaders of the nation’s largest tech companies joined President Trump at a White House dinner to further discuss a recent executive order and other initiatives to prevent AI regulation. Earlier this year, Congress tried and failed to pass a moratorium on AI regulation by state governments.
When an automated system makes an error, AB 1018 gives people the right to have that mistake rectified within 60 days. It also reiterates that algorithms must give “full and equal” accommodations to everyone, and cannot discriminate against people based on characteristics like age, race, gender, disability, or immigration status. Developers must carry out impact assessments to, among other things, test for bias embedded in their systems. If an impact assessment is not conducted on an AI system, and that system is used to make consequential decisions about people’s lives, the developer faces fines of up to $25,000 per violation, or legal action by the attorney general, public prosecutors, or the Civil Rights Department.
Amendments made to the bill in recent weeks exempted generative AI models from coverage under the bill, which could prevent it from impacting major AI companies or ongoing generative AI pilot projects carried out by state agencies. The bill was also amended to delay a developer auditing requirement to 2030, and to clarify that the bill intends to address evaluating a person and making predictions or recommendations about them.
An intense legislative fight
Samantha Gordon, a chief program officer at TechEquity, a sponsor of the bill, said she’s seen more lobbyists attempt to kill AB 1018 this week in the California Senate than for any other AI bill ever. She said she thinks AB 1018 had a pathway to passage but the decision was made to pause in order to work with the governor, who ends his second and final term next year.
“There’s a fundamental disagreement about whether or not these tools should face basic scrutiny of testing and informing the public that they’re being used,” Gordon said.
Gordon thinks it’s possible tech companies will use their “unlimited amount of money” to fight the bill next year.
“But it’s clear,” she added, “that Americans want these protections — poll after poll shows Americans want strong laws on AI and that voluntary protections are insufficient.”
AB 1018 faced opposition from industry groups, big tech companies, the state’s largest health care provider, venture capital firms, and the Judicial Council of California, a policymaking body for state courts.
A coalition of hospitals, Kaiser Permanente, and health care software and AI company Epic Systems urged lawmakers to vote no on AB 1018, arguing the bill would negatively affect patient care, increase costs, and require developers to contract with third-party auditors to assess compliance by 2030.
A coalition of business groups opposed the bill, citing its general language and concern that compliance could be expensive for businesses and taxpayers. The group TechNet, which seeks to shape policy nationwide and whose members include companies like Apple, Google, Nvidia, and OpenAI, argued in a video ad campaign that AB 1018 would stifle job growth, raise costs, and punish the fastest-growing industries in the state.
Venture capital firm Andreessen Horowitz, whose cofounder Marc Andreessen supported the re-election of President Trump, opposed the bill, citing costs and the fact that it seeks to regulate AI in California and beyond.
In an alert sent to lawmakers this week urging a no vote, a policy leader in the state judiciary said the burden of compliance with the bill is so great that the judicial branch risks losing the ability to use pretrial risk assessment tools, such as those that assign recidivism scores to sex offenders and violent felons. The state Judicial Council, which makes policy for California courts, estimates that AB 1018 would cost the state up to $300 million a year if passed. Similar points were made in a letter to lawmakers last month.
Why backers keep fighting
Exactly how much AB 1018 could cost taxpayers is still a big unknown, due to contradictory information from state government agencies. An analysis by California legislative staff found that if the bill passes it could cost local agencies, state agencies, and the state judicial branch hundreds of millions of dollars. But a California Department of Technology report covered exclusively by CalMatters concluded in May that no state agencies use high-risk automated systems, despite historical evidence to the contrary. Bauer-Kahan said last month that she was surprised by the financial impact estimates because CalMatters reporting found that automated decision-making system use was not widespread at the state level.
Support for the bill has come from unions who pledged to discuss AI in bargaining agreements, including the California Nurses Association and the Service Employees International Union, and from groups like the Citizen’s Privacy Coalition, Consumer Reports, and the Consumer Federation of California.
Coauthors of AB 1018 include major Democratic proponents of AI regulation in the California Legislature, including Assembly majority leader Cecilia Aguilar-Curry of Davis, author of a bill passed and on the governor’s desk that seeks to stop algorithms from raising prices on consumer goods; Chula Vista Senator Steve Padilla, whose bill to protect kids from companion chatbots awaits the governor’s decision; and San Diego Assemblymember Chris Ward, who previously helped pass a law requiring state agencies to disclose use of high-risk automated systems and this year sought to pass a bill to prevent pricing based on your personal information.
The anti-discrimination language in AB 1018 is important because tech companies and their customers often see themselves as exempt from discrimination law if the discrimination is done by automated systems, said Inioluwa Deborah Raji, an AI researcher at UC Berkeley who has audited algorithms for discrimination and advised government officials in Sacramento and Washington D.C. about how AI can harm people. She questions whether state agencies have the resources to enforce AB 1018, but also likes the disclosure requirement in the bill because “I think people deserve to know, and there’s no way that they can appeal or contest without it.”
“I need to know that an AI system was the reason I wasn’t able to rent this house. Then I can at an individual level appeal and contest. There’s something very valuable about that.”
“It’s disappointing this [AB 1018] isn’t the priority for AI policy folks at this time,” she told CalMatters. “I truly hope the fourth time is the charm.”
A number of other bills with union backing were also considered by lawmakers this session that sought to protect workers from artificial intelligence. For the third year in a row, a bill to require a human driver in autonomous commercial delivery trucks failed to become law. Assembly Bill 1331, which sought to prevent surveillance of workers with AI-powered tools in private spaces like locker or lactation rooms and placed limitations on surveillance in breakrooms, also failed to pass.
But another measure, Senate Bill 7, passed the Legislature and is headed to the governor. It requires employers to disclose plans to use an automated system 30 days prior to doing so, and allows workers to make annual requests for the data an employer uses for discipline or firing. In recent days, author Senator Jerry McNerney amended the bill to remove the right to appeal decisions made by AI and to eliminate a prohibition against employers making predictions about a worker’s political beliefs, emotional state, or neural data. The California Labor Federation supported similar bills in Massachusetts, Vermont, Connecticut, and Washington.