
Ottawa weighs plans on AI, copyright as OpenAI fights Ontario court jurisdiction


Canada’s artificial intelligence minister is keeping a close watch on court cases in Canada and the U.S. to determine next steps for Ottawa’s regulatory approach to AI. 

Some AI companies have claimed early wins south of the border, and OpenAI is now fighting the jurisdiction of an Ontario court to hear a lawsuit by news publishers.

Evan Solomon’s office said in a statement he plans to address copyright “within Canada’s broader AI regulatory approach, with a focus on protecting cultural sovereignty and how [creators] factor into this conversation.”

But there are no current plans for a standalone copyright bill, as Solomon’s office is “closely monitoring the ongoing court cases and market developments” to help chart the path forward.

It’s unclear how long it will take for those court cases to determine whether artificial intelligence companies can use copyrighted content to train their AI products. 

WATCH | Canadian news organizations, including CBC, sue ChatGPT creator:

CBC/Radio-Canada, Postmedia, Metroland, the Toronto Star, the Globe and Mail and The Canadian Press have launched a joint lawsuit against ChatGPT creator OpenAI for using news content to train its ChatGPT generative artificial intelligence system. The news organizations say OpenAI breaches copyright by ‘scraping content’ from their websites.

The sole Canadian case to pose the question was launched late last year by a coalition of news publishers, and the Ontario Superior Court is set to hear a jurisdictional challenge in September.

The coalition — which includes The Canadian Press, Torstar, the Globe and Mail, Postmedia and CBC/Radio-Canada — is suing OpenAI for using news content to train its generative artificial intelligence system.

The news publishers argue OpenAI is breaching copyright by scraping large amounts of content from Canadian media, and then profiting from the use of that content without permission or compensation. 

They said in court filings that OpenAI has “engaged in ongoing, deliberate and unauthorized misappropriation of [their] valuable news media works.”

“Rather than seek to obtain the information legally, OpenAI has elected to brazenly misappropriate the News Media Companies’ valuable intellectual property and convert it for its own uses, including commercial uses, without consent or consideration.”

OpenAI challenging jurisdiction

OpenAI has denied the allegations, saying previously that its models are trained on publicly available data and “grounded in fair use and related international copyright principles.”

The company, which is headquartered in San Francisco, is challenging the jurisdiction of the Ontario court to hear the case. 

It argued in a court filing that it’s not located in Ontario and does not do business in the province.

WATCH | New York Times sues OpenAI, Microsoft for copyright infringement in late 2023:

The New York Times is suing OpenAI and Microsoft, accusing them of using millions of the newspaper’s articles without permission to help train artificial intelligence technologies.

OpenAI also argued the Copyright Act doesn’t apply outside of Canada.

OpenAI is asking the court to seal some documents in the case. The court is scheduled to hold a hearing on the sealing motion on July 30, according to a schedule outlined in court documents.

It asked the court to seal documents containing “commercially sensitive” information, including about its corporate organization and structure, its web crawling and fetching processes and systems, and its “model training and inference processes, systems, resource allocations and/or cost structures.”

“The artificial intelligence industry is highly competitive and developing at a rapid pace,” says an affidavit submitted by the company. “Competitors in this industry are many and range from large, established technology companies such as Google and Amazon, to smaller startups seeking to establish a foothold in the industry.

“As recognized leaders in the artificial intelligence industry, competitors and potential competitors to the defendants would benefit from having access to confidential information of the defendants.”

A lawyer for the news publishers provided information on the court deadlines, but did not provide comment on the case.

Numerous lawsuits dealing with AI systems and copyright are underway in the United States, some dating back to 2023. In late June, AI companies won victories in two of those cases. 

In a case launched against Meta by a group of authors, including comedian Sarah Silverman, a judge ruled the company’s use of their published works to train its AI systems was fair use, and that the authors didn’t demonstrate the use would result in market dilution.

OpenAI is being sued by a coalition of media companies for using news content to train its generative AI system. (Michael Dwyer/The Associated Press)

But the judge also said his ruling affects only those specific authors — whose lawyers didn’t make the right arguments — and does not mean Meta’s use of copyrighted material to train its systems was legal. Judge Vince Chhabria noted in his summary judgment that in “the grand scheme of things, the consequences of this ruling are limited.”

In a separate U.S. case, a judge ruled the use by AI company Anthropic of published books without permission to train its systems was fair use. But Judge William Alsup also ruled Anthropic “had no entitlement to use pirated copies.” 

Jane Ginsburg, a professor at Columbia University’s law school who studies intellectual property and technology, said it would be too simplistic to just look at the cases as complete wins for the AI companies.

“I think both the question of how much weight to give the pirate nature of the sources, and the question of market dilution, are going to be big issues in other cases.”




University of North Carolina hiring Chief Artificial Intelligence Officer


The University of North Carolina (UNC) System Office has announced it is hiring a Chief Artificial Intelligence Officer (CAIO) to provide strategic vision, executive leadership, and operational oversight for AI integration across the 17-campus system.

Reporting directly to the Chief Operating Officer, the CAIO will be responsible for identifying, planning, and implementing system-wide AI initiatives. The role is designed to enhance administrative efficiency, reduce operational costs, improve educational outcomes, and support institutional missions across the UNC system.

The position will also act as a convenor of campus-level AI leads, data officers, and academic innovators, with a brief to ensure coherent strategies, shared best practices, and scalable implementations. According to the job description, the role requires coordination and diplomacy across diverse institutions to embed consistent policies and approaches to AI.

The UNC System Office includes the offices of the President and other senior administrators of the multi-campus system. Nearly 250,000 students are enrolled across 16 universities and the NC School of Science and Mathematics.

System Office staff are tasked with executing the policies of the UNC Board of Governors and providing university-wide leadership in academic affairs, financial management, planning, student affairs, and government relations. The office also has oversight of affiliates including PBS North Carolina, the North Carolina Arboretum, the NC State Education Assistance Authority, and University of North Carolina Press.

The new CAIO will work under a hybrid arrangement, with at least three days per week onsite at the Dillon Building in downtown Raleigh.

UNC’s move to appoint a CAIO reflects a growing trend of U.S. universities formalizing AI integration strategies at the leadership level. Last month, Rice University launched a search for an Assistant Director for AI and Education, tasked with leading faculty-focused innovation pilots and embedding responsible AI into classroom practice.


Pre-law student survey unmasks fears of artificial intelligence taking over legal roles


“We’re no longer talking about AI just writing contracts or breaking down legalese. It is reshaping the fundamental structure of legal work. Our future lawyers are smart enough to see that coming. We want to provide them this data so they can start thinking about how to adapt their skills for a profession that will look very different by the time they enter it,” said Arush Chandna, Juris Education founder, in a statement.

Juris Education noted that law schools are already integrating legal tech, ethics, and prompt engineering into curricula. The American Bar Association’s 2024 AI and Legal Education Survey revealed that 55 percent of US law schools were teaching AI-specific classes and 83 percent enabled students to learn effective AI tool use through clinics.

Juris Education’s director of advising Victoria Inoyo pointed out that AI could not replicate human communication skills.

“While AI is reshaping the legal industry, the rise of AI is less about replacement and more about evolution. It won’t replace the empathy, judgment, and personal connection that law students and lawyers bring to complex issues,” she said. “Future law students should focus on building strong communication and interpersonal skills that set them apart in a tech-enhanced legal landscape. These are qualities AI cannot replace.”

Juris Education’s survey drew responses from 220 pre-law students. Maintaining work-life balance was cited by 21.8 percent of respondents as their primary career concern, while rising student debt paired with low job security was the third most prevalent concern, with 17.3 percent of respondents naming it as their biggest career fear.




Trust in Businesses’ Use of AI Improves Slightly


WASHINGTON, D.C. — About a third (31%) of Americans say they trust businesses a lot (3%) or some (28%) to use artificial intelligence responsibly. Americans’ trust in the responsible use of AI has improved since Gallup began measuring this topic in 2023, when just 21% of Americans said they trusted businesses on AI. Still, about four in 10 (41%) say they do not trust businesses much when it comes to using AI responsibly, and 28% say they do not trust them at all.


These findings from the latest Bentley University-Gallup Business in Society survey are based on a web survey with 3,007 U.S. adults conducted from May 5-12, 2025, using the probability-based Gallup Panel.

Most Americans Neutral on Impact of AI

When asked about the net impact of AI — whether it does more harm than good — Americans are increasingly neutral about its impact, with 57% now saying it does equal amounts of harm and good. This figure is up from 50% when Gallup first asked this question in 2023. Meanwhile, 31% currently say they believe AI does more harm than good, down from 40% in 2023, while a steady 12% believe it does more good than harm.


The decline from 2023 to 2025 in the percentage of Americans who believe AI will do more harm than good is driven by improvements in attitudes among older Americans. While skepticism about AI and its impact exists across all age groups, it tends to be higher among younger Americans.

Majority of Americans Are Concerned About AI Impact on Jobs

Those who believe AI will do more harm than good may be thinking at least partially about the technology’s impact on the job market. The majority of Americans (73%) believe AI will reduce the total number of jobs in the United States over the next 10 years, a figure that has remained stable across the three years Gallup has asked this question.


Younger Americans aged 18 to 29 are slightly more optimistic about the potential of AI to create more jobs. Fourteen percent of those aged 18 to 29 say AI will lead to an increase in the total number of jobs, compared with 9% of those aged 30 to 44, 7% of those aged 45 to 59 and 6% of those aged 60 and over.

Bottom Line

As AI becomes more common in personal and professional settings, Americans report increased confidence that businesses will use it responsibly and are more comfortable with its overall impact.

Even so, worries about AI’s effect on jobs persist, with nearly three-quarters of Americans believing the technology will reduce employment opportunities in the next decade. Younger adults are somewhat more optimistic about the potential for job creation, but they, too, remain cautious. Still, concerns about ethics, accountability and the potential unintended consequences of AI are top of mind for many Americans.

These results underscore the challenge businesses face as they deploy AI: They must not only demonstrate the technology’s benefits but also show, through transparent practices, that it will not come at the expense of workers or broader public trust. How businesses address these concerns will play a central role in shaping whether AI is ultimately embraced or resisted in the years ahead.

Learn more about how the Bentley University-Gallup Business in Society research works.

Learn more about how the Gallup Panel works.



