WatSPEED brings industry leaders together to advance Canada’s AI readiness | Waterloo News

As artificial intelligence (AI) continues to reshape the global economy, WatSPEED is helping Canadian business and technology leaders strategically position themselves for the future. 

The University of Waterloo’s professional, executive and corporate education arm recently convened senior leaders from across sectors for a one-day, in-person course: Operationalizing Generative AI: Executive Insights and Applications. Delivered in partnership with the iSchool Institute at the University of Toronto, the program offered attendees a practical exploration of how generative AI tools can be implemented to drive real value in their organizations. 

From left to right: Dr. Jimmy Lin, co-director of the Waterloo Data and AI Institute, Eily Hickson, head of global data and AI strategy at Sanofi, Dr. Ali Vahdat, director of Applied AI Research at Thomson Reuters Labs, and Javed Mostafa, professor and dean of the Faculty of Information at the University of Toronto.

From theory to application 

Led by Dr. Jimmy Lin, co-director of the Waterloo Data and AI Institute and one of the world’s most cited AI scholars, the course examined how large language models (LLMs) can enhance productivity, support data-driven decision-making and improve client and employee experiences. 

Lin’s sessions framed prompt engineering as an emerging essential skill, comparable to the early days of digital literacy.  

“At some point, you had to learn how to search Google,” says Lin, who is also the Cheriton Chair in Software Systems at Waterloo’s Cheriton School of Computer Science. “Today, it’s exactly the same thing for prompt engineering.” 
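
Lin's Google analogy translates directly into practice. As a minimal sketch, assuming the OpenAI Python SDK and an illustrative model name (the course does not prescribe either), the example below sends the same request twice: once as a vague one-liner, and once as an engineered prompt that sets a role, supplies context and constrains the output.

```python
# A minimal sketch of prompt engineering: the same model, asked the same
# question two ways. The client, model name, and prompts are illustrative
# assumptions, not material from the course.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

naive_prompt = "Tell me about our sales data."

engineered_prompt = (
    "You are a financial analyst. Using the quarterly sales figures below, "
    "identify the largest quarter-over-quarter change, suggest one plausible "
    "driver, and answer in at most three bullet points.\n\n"
    "Q1: $1.2M, Q2: $1.5M, Q3: $1.1M, Q4: $1.9M"
)

for prompt in (naive_prompt, engineered_prompt):
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)
    print("---")
```

The first prompt leaves the model guessing at intent; the second fixes the role, the data and the output format, which is the core of the skill Lin describes.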

Throughout the day, Lin emphasized the need for a culture of adaptability and lifelong learning as generative AI transforms how organizations structure work, evaluate talent and plan for the future.  

“The transformational impact of LLMs will be no less than that of the steam engine, than electricity, than the internet,” he notes. “In fact, it will probably be more.” 

Building executive AI literacy 

With participants from health care, finance, technology and government, the course fostered rich dialogue around the practical realities of AI integration, including enterprise-scale deployment, regulatory compliance and responsible implementation. 

Eily Hickson, head of global data and AI strategy at Sanofi, shared use cases spanning pharmaceutical research and development, accessibility technologies and clinical diagnostics. She highlighted how AI could help detect cancer earlier, alleviate long-term pain and enhance independence for people with disabilities.  

“If you live in the ‘scary,’ you’ll never get to the ‘good,’” Hickson says. “I’m cautiously optimistic about the benefits — from curative medicine to quality-of-life improvements.” 

She also emphasized the importance of executive alignment and storytelling in ensuring long-term AI success. 

“If you don’t have executive buy-in, you can have the best strategies, but if the mindset isn’t ready to receive them, it will fall flat,” Hickson says. “AI has to be wrapped into every layer of your business, from how you run a portfolio to how you tell your story to investors.” 

Eily Hickson shares use cases spanning pharmaceutical research and development, accessibility technologies and clinical diagnostics.

From hype to implementation: Leadership that drives change 

Designed to move beyond AI hype and focus squarely on real-world implementation, Operationalizing Generative AI equipped leaders with the frameworks needed to evaluate tools, assess cost–performance trade-offs and identify both opportunities and risks across business functions. 
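
The cost-performance framing lends itself to a back-of-the-envelope comparison. The sketch below is purely illustrative: the model names, per-token prices and quality scores are hypothetical placeholders, and in practice the quality column would come from an evaluation on an organization's own tasks.

```python
# Hypothetical cost-performance comparison across candidate models.
# All names, prices, and quality scores are made-up placeholders.
candidates = [
    # (name, $ per 1M input tokens, $ per 1M output tokens, quality 0-1)
    ("large-model",  5.00, 15.00, 0.92),
    ("medium-model", 0.50,  1.50, 0.86),
    ("small-model",  0.10,  0.40, 0.71),
]

# Assume a representative workload: 2,000 input and 500 output tokens per request.
IN_TOKENS, OUT_TOKENS = 2_000, 500

for name, in_price, out_price, quality in candidates:
    cost = (IN_TOKENS * in_price + OUT_TOKENS * out_price) / 1_000_000
    print(f"{name:13s} ${cost:.4f}/request  quality={quality:.2f}  "
          f"quality-per-dollar={quality / cost:,.0f}")
```

Even a toy calculation like this makes the trade-off concrete: under these placeholder numbers the smallest model is over 40 times cheaper per request, so the question becomes whether the quality gap matters for a given business function.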

In a closing panel discussion, moderated by Dr. Javed Mostafa, dean of the Faculty of Information at the University of Toronto, panelists explored organizational models for AI adoption, building internal capability and managing the tension between innovation and accountability. 

“LLMs are like a bright student that hasn’t done the reading,” Lin says. “At a glance you think, ‘this isn’t bad,’ but then it completely falls apart under closer evaluation.” 

Supporting Canada’s innovation ecosystem 

“Generative AI is not just a trend,” says Aaron Pereira, executive director of WatSPEED. “It represents a fundamental shift in how organizations operate and compete. This type of programming is tailored to helping industry leaders prepare for what’s next, ensuring they have the skills and knowledge to lead through disruption.” 

WatSPEED continues to support Canadian organizations in preparing for the future of work through professional and executive education tailored to the realities of rapid technological, societal and economic change. By working closely with industry partners and drawing on Waterloo’s renowned research expertise, WatSPEED helps organizations stay competitive, resilient and future-ready. 

To learn more about WatSPEED’s innovative AI programming, visit watspeed.uwaterloo.ca.  




OpenAI announces new safety measures for teens and users in crisis on ChatGPT

OpenAI announced Tuesday it is implementing new safeguards for teenagers and people in crisis using ChatGPT, as the artificial intelligence company faces a wrongful death lawsuit from a California family.

The company said it is improving its models to better recognize signs of mental and emotional distress. OpenAI added that work is already underway, with some changes moving quickly, while others will take more time.

“You won’t need to wait for launches to see where we’re headed,” the company said in a statement posted to its website.

The focus areas will include expanding interventions, making it easier to reach emergency services, and strengthening protections for teens, according to OpenAI.

The changes come as the AI giant faces a wrongful death lawsuit brought by the family of a California teenager who died by suicide.

The lawsuit alleges the teen was able to bypass the chatbot’s current guardrails, with the system occasionally affirming self-destructive thoughts that included suicidal ideation.







University-based AI research – USBE and Information Technology

The Artificial Intelligence Institute for Next Generation Food Systems (AIFS) at the University of California, Davis, was one of the seven original artificial intelligence institutes announced in August 2020. 

AIFS is funded by the U.S. Department of Agriculture's National Institute of Food and Agriculture, while the National Science Foundation leads the overall AI Institutes program. 

UC Davis recently announced that the National Science Foundation has awarded the institution $5 million over five years to run the Artificial Intelligence Institutes Virtual Organization (AIVO), a community hub for the AI institutes established by the federal government. 

AIVO is part of a $100 million public-private investment in AI recently announced by the National Science Foundation. 

In December 2024, AIVO received $1.75 million from Google.org to support AI education, including AI curricula for K-16 and workforce training, AI-assisted learning, and summer AI programs for high school teachers and students. 

As of July 29, 2025, AIVO operated as a virtual organization with support from the National Science Foundation, run by staff from AIFS at UC Davis. With the new investment, it will become a National Science Foundation-branded community hub.  

AIVO began as an effort to coordinate activities among the original federal AI institutes, including AIFS, and then to share knowledge with new institutes as they were established. 

It has since expanded into a virtual hub that supports all of the institutes, including by organizing an annual summit for their leadership. 

Under the new contract, AIVO will provide events and venues that bring AI institute personnel and other stakeholders together, creating mechanisms for cross-institute connection. 

It will also foster new public-private partnerships and promote positive interest in university-based AI research, as well as the development and use of AI for the greater good.  






New Illinois law restricts use of AI in psychotherapy

Less than a year ago, Illinois state legislator Bob Morgan heard from a group of social workers. They asked him to look into artificial intelligence therapy bots. 

Morgan said he heard “story after story of new apps and new examples of AI therapy bots that are really providing bad advice — and sometimes dangerous advice” to individuals dealing with substance abuse, psychosis, suicidal ideation, and other life-or-death issues. 

In one example relayed by a therapist, Morgan said, a chatbot told a person with an addiction to take more drugs “because it felt good in the moment.” 

So, the state representative got to work drafting a bill that bans therapists from using AI other than for administrative purposes, like notetaking or scheduling. The law also says chatbots cannot diagnose or treat mental illness — or market themselves as if they do. 

“We’re stepping in and saying, if you’re an AI bot pretending to be a therapist, that is inappropriate, and we’re going to shut that down,” Morgan said. 

Illinois is not the first state to pass legislation regarding the use of AI in psychotherapy. Utah and Nevada have passed laws this year to rein in the use of chatbots and other tools in mental health treatment. 

This “patchwork approach” by the states is likely to continue, according to Vaile Wright, senior director of health care innovation at the American Psychological Association. 

Wright said a federal approach is preferred because “then you would have some uniformity and greater specificity across the different states that could have better outcomes.”

But such regulation is unlikely, given efforts this summer by the U.S. House of Representatives to ban states from regulating artificial intelligence for a decade as part of President Donald Trump’s One Big Beautiful Bill Act. The U.S. Senate eventually voted to strike that provision from the bill. 

Federal laws aside, Wright said AI therapy bans like Illinois’ don’t address one of the biggest issues plaguing her field: people going to generative AI platforms like ChatGPT and Character AI for mental health support. 

“They call themselves companions; they say they’ll help with your loneliness. But when you read the fine print, they very clearly say, ‘We are not a therapeutic aid,’” Wright said. 

She added that the business model for these platforms is to keep visitors on by being validating and reinforcing. 

“Basically, they’re telling you exactly what you want to hear. This is the antithesis of therapy,” she said.

However, Wright does see a future where mental health chatbots are “rigorously tested, rooted in psychological science, co-created with experts, and they’ll have humans monitoring the interactions,” she said. 

Such tools — if federally regulated — could help fill a void in the U.S.’s growing mental health crisis, Wright added.

Until then, licensed psychologists like Michelle Kalnasy Powell say they will watch and wait.

“I am skeptical about AI,” she said, adding that Illinois’ ban is a good starting point, but it may not go far enough.

Kalnasy Powell currently uses AI for billing, but not for taking notes during sessions with patients. Some of her peers use dictation software that complies with patient confidentiality laws. 

“Even then, I question when I read the terms of service,” she said. “It’s kind of like sending a session off into the ether. What are you doing with that content? Is it really deleted?” 

She added that her work is very personal and vulnerable. 

“It is a privilege and an honor to be able to hear people’s stories, both the joy, the happiness and the sorrow,” she said. “[If] that would somehow wind up out there in a way that none of us could predict, but wind up harming the client — I’m just not willing to risk it.”
