Tools & Platforms
How should universities teach leadership now that teams include humans and autonomous AI agents?

Unlike teaching students how to use artificial intelligence (AI) and other technologies, which often feels like fighting a losing battle to keep up with constant change, teaching leadership used to be far more stable. The popularity and effectiveness of certain leadership styles undergo gradual shifts, but nothing too dramatic. The past 20 years, for example, have seen a transition from more authoritarian styles towards people-focused and motivating leadership, such as transformational leadership.
As an academic, it was comforting to know I just had to tweak my slides every year rather than change whole lectures. There were no alarms and no surprises when teaching leadership.
Generative AI has shattered that calm. Within a short time, GenAI has gone from being a tool in the workplace to being a teammate. The modern leader no longer leads only humans, as they have done for centuries, but also autonomous AI agents. An autonomous AI agent, which uses GenAI and other information systems, can operate independently (as the name suggests) as part of a team, communicating with other team members and “thinking” for itself. For example, the autonomous AI agent can participate in a meeting and receive verbal instructions for complex tasks. As educators, our comfort zone of reciting stories of great leaders from history has ceased to be sufficient. How would Napoleon or Churchill have managed autonomous AI agents? We don’t know.
So, how should university teachers prepare a new generation of leaders to approach these mixed teams? Teaching leadership styles that are effective at motivating people is no longer enough. Students must now also learn how to build their team’s trust in AI, and then how to combine leadership styles in a way that gets the most out of both humans and AI.
Build trust with a clear vision of AI’s role
First, leaders need to become ambidextrous: as comfortable leading humans as they are deciding which technologies their team uses and how. Traditionally, some leaders have focused on day-to-day operational and tactical decisions, while others focused on more strategic ones. To lead autonomous AI agents, the leader must build the team’s trust in them, which means team members must have confidence in the AI’s ability. The leader must be clear about how the AI will be used, build consensus around that role and put the team on a sustainable trajectory for change. Failing to identify the right role for AI will keep the team stuck in uncertainty.
Leadership students must learn about the different forms of AI and their strengths and weaknesses. They must also learn the concerns that various stakeholders typically raise and the specific steps that build trust. These topics should be discussed in class first, so students appreciate the context in which they will lead. Students can then practise analysing a case study, developing a vision for the role of AI and planning how to build trust in it.
Choose the right combination of leadership styles
While more time must be spent on understanding emerging technology, the established approaches to leadership still need to be covered in class.
The most popular leadership styles taught at university today are servant, transactional and transformational. All three are effective in motivating a team. The servant style focuses on providing support; transactional focuses on an enticing reward for the work done; and transformational creates a shared vision of the future. These popular leadership styles are still the ones to focus on when leading mixed teams of humans and autonomous agents, but the leader needs to think about how to combine them to get the best result (see Table 1).
Situation | Leadership styles
Don’t know how AI will be used; clear on goals and journey | Transactional and servant
Don’t know how AI will be used; not clear on goals and journey | Servant
Know how AI will be used; clear on goals and journey | Transactional
Know how AI will be used; not clear on goals and journey | Transactional and transformational
Table 1: Transactional and transformational leaderships’ combined impact on AI and trust
Given the volatile times we live in, a leader may find themselves in a situation where they know how they will use AI but are not entirely clear on the goals and the journey. In a teaching context, students can be given scenarios where they must lead a team that includes autonomous AI agents to achieve goals. They can then analyse the situation, decide which leadership styles to apply and plan how to build their human team members’ trust in the AI. Educators can illustrate this decision-making process using a table (see Table 1 above).
They may need to combine transactional leadership with transformational leadership, for example. Transactional leadership focuses on planning, communicating tasks clearly and an exchange of value. This works well with both humans and autonomous AI agents.
Transformational leadership prioritises creating a shared vision to change something, inspiring and motivating people to go beyond their narrow personal interests. As AI and automation replace some human interaction, class discussion could cover the emotional void and isolation left behind, and how, in the age of AI, a leader needs deep, emotionally engaged relationships powered by servant or transformational leadership. These strong emotional bonds can complement the well-crafted exchange of value arranged through transactional leadership.
Some clarity on the future of teaching leadership
An effective course on leadership today must convey the importance of leading on technology as well as people, building trust in the technology, and finding the best combination of leadership styles to get the most out of humans and autonomous AI agents.
Alex Zarifis is a lecturer in information systems at the University of Southampton, UK. His latest book is Leadership with AI and Trust: Adapting Popular Leadership Styles for AI (De Gruyter, 2025).
Tools & Platforms
AI and Creativity: Hero, Villain – or Something Far More Nuanced?

As part of SXSW’s first-ever UK edition, Lewis Silkin brought together a packed room to hear five sharp minds – photographer-advocate Isabelle Doran, tech founder Guy Gadney, licensing entrepreneur Benjamin Woollams, and Commercial partners Laura Harper and Phil Hughes – wrestle with one deceptively simple question: is AI a hero or a villain in the creative world?
Spoiler: it’s neither. Over sixty fast-paced minutes, the panel dug into the real-world impact of generative models, the gaps in current law and the uneasy economics facing everyone from freelancers to broadcasters. We’ve distilled the conversation into six take-aways that matter to anyone who creates, commissions or monetises content.
1. Generative AI is already taking work – fast
“Generative AI is competing with creators in their place of work,” warned Isabelle Doran, citing her Association of Photographers’ latest survey. In September 2024, 30% of respondents had lost a commission to AI; five months later that figure hit 58%. The fallout runs wider than photographers. When a shoot is cancelled, stylists, assistants and post-production teams stand idle too – a ripple effect the panel believes policy-makers ignore at their peril.
2. Yet the tech also unlocks new forms of storytelling
Guy Gadney was quick to balance the gloom: “It’s a proper tsunami in the sense of the breadth and volume that’s changing,” he said, “but it also lets us ask what stories we can tell now that we couldn’t before.” His company, Charismatic AI, is building tools that let writers craft interactive narratives at a speed and scale unheard of two years ago. The opportunity, he argued, lies in marrying that capability with fair economic models rather than trying to “block the tide”.
3. The law isn’t a free-for-all – but it is fragmenting
Laura Harper cut through the noise: “The status quo at the moment is uncertain and it depends on what country you’re operating in.” In the UK, copyright can subsist in computer-generated works; in the US, it can’t. EU rules require commercial text-and-data miners to respect opt-outs; UK law doesn’t – yet. Add divergent notions of “fair use” and you get a regulatory patchwork that leaves creators guessing and investors hesitating.
4. Transparency is the missing link
Phil Hughes nailed the practical blocker: “We can’t build sensible licensing schemes until we know what data went into each model.” Without a statutory duty to disclose training sets, claims for compensation – or even informed consent – stall. Isabelle Doran backed him up, pointing to Baroness Kidron’s amendment that would force openness via the UK’s Data Act. The Lords have now sent that proposal back to the Commons five times; every extra week, more unlicensed works are scraped.
5. Collective licensing could spread the load
Individual artists can’t negotiate with OpenAI on equal terms, but Benjamin Woollams sees hope in a pooled approach. “Any sort of compensation is probably where we should start,” he said, arguing for collective rights management to mirror how music collecting societies handle radio play. At True Rights he’s developing pricing tools to help influencers understand usage clauses before they sign them – a practical step towards standardisation in a market famous for anything but.
6. Personality rights may be the next frontier
Copyright guards expression; it doesn’t stop a model cloning your voice, gait or mannerisms. “We need to strengthen personality rights,” Isabelle Doran urged, echoing calls from SAG-AFTRA and beyond. Think passing off on steroids: a legal shield for the look, sound and biometric data that make a performer unique. Laura Harper agreed – but reminded us that recognition is only half the battle. Enforcement mechanisms, cross-border by default, must follow fast.
Where does that leave us?
AI is not marching creators to the cliff edge, but it is forcing a reckoning. The panel’s consensus was clear:
- We can’t uninvent generative tools – nor should we.
- Creators deserve both transparency and a cut of the value chain.
- Government must move quickly, or the UK risks watching leverage, investment and talent drift overseas.
As Phil Hughes put it in closing: “We all know artificial intelligence has unlocked extraordinary possibilities across the creative industries. The question is whether we’re bold enough and organised enough to make sure those possibilities pay off for the people whose imagination feeds the machine.”
Tools & Platforms
Tampere University GPT-Lab launches AI project with City of Pori

GPT-Lab, part of Tampere University’s Faculty of Information Technology and Communication Sciences in Finland, has begun collaborating with the City of Pori Unemployment Services on the Generative Artificial Intelligence in Business Support (GENT) project.
The initiative will test how AI-driven automation can improve efficiency and reliability in public sector services.
According to a LinkedIn post from GPT-Lab, the kickoff meeting on 3 September brought together project researchers and city representatives to align objectives and set the project roadmap. The work will focus on automating routine inquiries and case handling to reduce the workload of staff, speed up responses to citizens, and free time for tasks that require human expertise.
The project is designed to improve the efficiency, accessibility, and reliability of unemployment services. It will also provide a framework for the responsible use of AI in the public sector.
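To make the automation concrete, the sketch below shows one common pattern for handling routine inquiries with a generative model: classify an incoming citizen message as routine or as needing a case worker, and draft a reply only in the routine case. This is an illustrative assumption, not the GENT project’s actual implementation; the OpenAI client, the model name and the prompt wording are all placeholders.

```python
# Hypothetical illustration only - not the GENT project's implementation.
# Assumptions: the OpenAI Python client, a placeholder model name, and a
# made-up triage prompt for a municipal unemployment-services inbox.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = (
    "You help triage messages sent to a city's unemployment services. "
    "Answer with exactly one word on the first line: ROUTINE (a standard "
    "information request) or CASEWORKER (needs human judgement). "
    "If ROUTINE, add a short, polite draft reply on the following lines."
)

def triage_inquiry(message: str) -> dict:
    """Classify a citizen inquiry and draft a reply for routine cases."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": message},
        ],
    )
    text = response.choices[0].message.content.strip()
    first_line, _, rest = text.partition("\n")
    if first_line.strip().upper().startswith("ROUTINE"):
        return {"label": "ROUTINE", "draft_reply": rest.strip()}
    return {"label": "CASEWORKER", "draft_reply": None}  # hand off to staff

if __name__ == "__main__":
    print(triage_inquiry("How do I register as a jobseeker online?"))
```

In practice, the prompt, escalation rules and review steps would be agreed with the city’s staff, in keeping with the project’s emphasis on responsible use of AI in the public sector.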
The GENT project, funded by the Satakuntaliitto Regional Council of Satakunta and led by Tampere University, runs until September 2026. Its broader aim is to bring generative AI expertise to companies and organizations in the Satakunta region. Researchers will work directly with businesses to co-create AI-assisted experiments that enhance productivity, investment efficiency, and competitiveness.
Solutions and materials developed through these experiments will be shared with all companies in the region and can be adapted to individual needs. The project also highlights cooperation between SMEs, public services, and research institutions in Finland.
Tools & Platforms
Shanghai tech conference showcases AI in action

The 2025 Inclusion Conference on the Bund, which spotlighted the real-world application of artificial intelligence, embodied intelligence, and advanced technologies across various industries and aspects of daily life, kicked off on Wednesday in the Huangpu World Expo Park in Shanghai.
Industry leaders, researchers, and enthusiasts gathered to explore the latest advancements and discuss the future of technology.
Xeonova, a Hefei-based commercial fusion company, aims to accelerate fusion energy development through AI, according to Wang Ge, chief scientist at the firm.
“We are working to integrate AI into our current fusion engineering process,” Wang said. The company aims to use AI to build digital twins of fusion reactors, enabling rapid iteration and optimization in a virtual environment.
In addition to energy, AI applications in robotics garnered significant attention. Boulhol Clement from France, working in social media in Shanghai, said he was excited about the future and impressed by the AI and robot technology.
“I really like all the technology stuff with AI and with robots, and I think some robots are very impressive,” Clement said, highlighting the potential of robots in various fields, including rescue operations.