

AI plays hefty role in tackling cargo theft, Werner exec says




With the rise of cargo theft incidents, carriers need a toolbox of varied solutions, both to prevent incidents before they occur and to address problems afterward.

In 2024, cargo theft incidents were up 27% year over year, reaching historic highs exceeding $1 billion, according to July 16 testimony from David Glawe, president and CEO of the National Insurance Crime Bureau.

Glawe said that figure is set to rise another 22% in 2025 while “other estimates suggest that cargo losses may reach up to $35 billion annually.”

The rise of cargo theft has been a focus of the trucking sector, drawing attention through industry warnings and reports, congressional hearings and carrier interventions. One tool carriers are increasingly highlighting to combat cargo theft is artificial intelligence.

For example, Landstar System said in a May earnings call it was investing significantly in technology and AI and stressed how ongoing vigilance is needed.

“It’s playing a pretty hefty role,” said Werner Enterprises SVP of Logistics Jordan Strawn during an Aug. 28 webinar.

For Werner, technology and AI investment are nothing new. Besides using tech to combat cargo theft, Werner is also focusing on technological advances to fuel its logistics growth, including through its branded EDGE TMS.

The company is also scaling the use of conversational AI calling and notifications for reminders and communication with new hires, associates and brokerage carriers, CEO Derek Leathers said in a Q2 earnings call.

Leveraging AI before theft happens

Werner conducts an extensive carrier vetting process when deciding whether to do business, and “we’re leveraging AI to understand who it is we’re working with,” Strawn said.

Knowing who you’re working with before accepting business can help lower the risk of cargo theft. The vetting process takes in data, and AI is then used to interpret that information and determine whether it makes sense for Werner to move a load.

One strategy the company employs is to analyze what a carrier does within Werner’s network.

“If they’re moving a load from Dallas, I think we typically see them in the Southeast, and the furthest they stretch out is maybe out of the Southwest,” Strawn said. Then, if that same carrier wants a load that’s coming from the Northeast going to the West Coast, Werner leverages AI to consume and aggregate the information so it can make an informed decision about whether to take on the load.
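
Strawn’s example amounts to an anomaly check against a carrier’s historical lanes. Here is a minimal sketch of that idea in Python; the region labels and data are illustrative, and Werner’s actual models are not public.

```python
from collections import Counter

# Illustrative history of where this carrier has run loads for Werner.
# Region labels are stand-ins; Werner's real features are not public.
carrier_history = ["Southeast", "Southeast", "Southeast", "Southwest"]

def lane_is_anomalous(history, origin_region, dest_region):
    """Flag a load request whose lane falls outside the regions
    this carrier normally operates in."""
    seen = Counter(history)
    return origin_region not in seen or dest_region not in seen

# A Northeast-to-West-Coast request from a carrier normally seen in the
# Southeast gets flagged for extra vetting before the load is accepted.
print(lane_is_anomalous(carrier_history, "Northeast", "West Coast"))  # True
```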

Leveraging tech after theft happens

“There’s a really cool tech out there right now that helps us on the backside if equipment gets stolen, or if a load gets stolen, that’s on our third-party carrier,” Strawn said.

The tech Strawn is referring to leverages computer imagery: cameras placed along major corridors throughout the United States, focused on areas expected to yield the most useful information.

Through those cameras, Werner can see snapshots of commercial vehicles and identify information such as MC, DOT, truck, trailer and tag numbers. When a load goes missing, Werner can go to the database created by the system and enter details obtained from a shipper or surveillance.
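
The article doesn’t name the vendor or its data model; as a rough sketch of the lookup step, assuming a simple table of camera sightings keyed by the identifiers Strawn lists:

```python
from dataclasses import dataclass

# Hypothetical shape of the camera network's sighting records; field
# names here are assumptions, not the actual vendor's schema.
@dataclass
class Sighting:
    dot_number: str
    tag_number: str
    corridor: str
    timestamp: str

sightings = [
    Sighting("1234567", "ABC-1234", "I-40 westbound", "2025-08-28T14:02"),
    Sighting("7654321", "XYZ-9876", "I-10 eastbound", "2025-08-28T15:17"),
]

def locate(dot_number=None, tag_number=None):
    """Return sightings matching whichever identifiers a shipper or
    surveillance footage provided."""
    return [s for s in sightings
            if (dot_number and s.dot_number == dot_number)
            or (tag_number and s.tag_number == tag_number)]

# A tag number pulled from shipper surveillance yields a corridor and
# timestamp that can be passed along to authorities.
print(locate(tag_number="ABC-1234"))
```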

“It’s been extremely successful. We’ve actually been able to locate and send authorities to go and stop these trucks while they’re in motion, because we followed them where they’re going,” Strawn said.

Other carriers Werner works with are becoming more open to technology, such as electronic logging device integrations.

In the past, Werner found carriers frequently unwilling to share information, but the Omaha, Nebraska-based business is leveraging ELD integrations in its TMS platform to identify where a truck is and when it arrives at its destination. This gives Werner access to data confirming a truck is moving along its route appropriately while allowing it to safeguard against sharing information that should remain in-house, Strawn said.
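
The integration details are proprietary to Werner’s EDGE TMS; a minimal sketch of the route-conformance check described, with all names hypothetical:

```python
# Hypothetical planned route for a load, as a TMS might store it.
planned_stops = ["Omaha, NE", "Des Moines, IA", "Chicago, IL"]

def on_route(reported_city, remaining_stops):
    """Confirm the truck's last ELD-reported position matches a stop
    still ahead on the planned route."""
    return reported_city in remaining_stops

print(on_route("Des Moines, IA", planned_stops))  # True: load is on track
print(on_route("Amarillo, TX", planned_stops))    # False: worth a phone call
```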





Reimagining Education In The Age Of AI



At the Executive Technology Board, we’ve been reflecting on how #education must evolve in a world shaped by #AI. Much of what worked in the past is no longer sufficient: Higher education was designed for a time when information was scarce, when expertise was locked away in libraries, and when tools for discovery and application were limited. Today, knowledge is instantly accessible, and AI is not just a new tool—it is reshaping the very definition of work.

The pace of change is extraordinary. Traditional curricula are becoming obsolete faster than ever before. Skills that once served as a career foundation are now outdated within years, sometimes months. And AI is not only transforming existing roles but also creating entirely new categories of work while rendering others obsolete. The question is no longer whether education needs to change, but how quickly it can.

Last week in London, we had the privilege of hosting the Dean and Associate Dean in computer sciences at Northeastern University. The conversation was inspiring – NU’s work offers a glimpse of how education can be reimagined for this new era. Their perspective underscored a vital truth: preparing students for the world ahead requires moving beyond incremental updates to curricula and toward a fundamental rethinking of what education itself should be.

From that discussion and broader reflections at our global technology think tank, three imperatives stand out:

1. From centers of learning to facilitators of learning

Universities can no longer view themselves simply as repositories of knowledge. Instead, they must evolve into facilitators of continuous learning. This means building systems and cultures that are agile — able to incorporate new knowledge, tools, and practices as quickly as industries themselves are evolving. Students won’t succeed because they memorized a body of facts; they’ll succeed because they developed the capacity to adapt, unlearn, and relearn.

2. AI as a multidisciplinary foundation

One of the most exciting promises of AI lies at the intersections: healthcare and AI, design and AI, sustainability and AI, law and AI. The future of innovation will come less from siloed expertise and more from multidisciplinary collaboration. Universities must therefore embed AI literacy across disciplines — not just computer science programs but also the social sciences, arts, and professional schools. Students of law, medicine, business, and even the humanities need a working fluency in AI, because it will define their fields as much as it will define technology itself.

3. Integrating real-world experience into the curriculum

Learning cannot remain confined to the classroom. The most effective education models are those that blend theory with real-world practice. Northeastern’s co-op program is a leading example: students alternate between classroom study and full-time industry roles, graduating not only with degrees but also with significant hands-on experience. This kind of integration is no longer optional — it’s essential in an era where employers expect graduates to contribute immediately, and where technologies shift too quickly for classroom learning alone to keep pace.

Building an ecosystem for lifelong learning

Perhaps the biggest shift we need to embrace is that education no longer ends at graduation. In the age of AI, every professional will need to continuously refresh their skills, adapt to new tools, and reinvent themselves over the course of their career. Universities, industry, and policymakers must collaborate to create a true ecosystem for lifelong learning. Micro-credentials, modular certifications, and continuous access to new learning pathways will be the new norm.

This is not just a challenge for academia. It is a collective responsibility. Businesses must invest in workforce development. Policymakers must create frameworks that support reskilling at scale. Universities must reinvent their models. And learners themselves must take ownership of continuous growth. The future of education will not be about teaching students what to learn, but preparing them how to learn—continuously, adaptively, and across disciplines.





Why your boss (but not you) should be replaced by an AI



Elon Musk is rarely out of the news these days. Widely acknowledged to be the world’s richest man, he’s also known for running a number of major companies.

The trouble is, some of those companies haven’t been doing so well lately.

Twitter (now known as X) is said to have lost around 75 per cent of its value during Musk’s time as CEO.

Meanwhile, sales of Teslas, the electric cars made by another company Musk is currently CEO of, are said to be slumping despite a wider increase in registrations of electric vehicles generally.

One of Tesla’s major investors has publicly called for Musk to step down as CEO and there have been rumours (denied by the company) that the board might be seeking to replace him. But if someone else were to take his place, who’s to say they’d do any better?

Maybe Musk is doing the best any human could under the circumstances. Maybe, given the demands of the job, a human just isn’t enough. But could an artificial intelligence (AI) be up to the task?

In an attempt to answer this question, I asked a large-language model (LLM) AI directly, giving it this prompt:

“You are the CEO of an electric vehicle firm with a turnover of $100 billion, selling nearly 2 million cars a year and with over 100,000 employees. What would be your strategy for growing the business over the next five years?”

The AI replied, almost instantly, with a 350-word answer beginning: “…I’d implement a five-year growth strategy focused on scaling intelligently, deepening ecosystem control and pioneering innovation – all while navigating geopolitical, environmental and technological shifts…”

It then proceeded to outline what it called “a strategic blueprint across six core pillars”, with the focus being to sell more cars into unpenetrated markets.
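
The article doesn’t say which model or provider was used; here is a minimal sketch of reproducing the experiment with the OpenAI Python SDK (the model name is illustrative, and any chat-capable LLM API would do):

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

prompt = (
    "You are the CEO of an electric vehicle firm with a turnover of "
    "$100 billion, selling nearly 2 million cars a year and with over "
    "100,000 employees. What would be your strategy for growing the "
    "business over the next five years?"
)

# Model choice is an assumption; the article does not name one.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```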

I know next to nothing about running a car company (I’m a robotics professor at a UK university), but the strategy looked plausible, scaled to the outlook and size of a multinational company, and attuned to global opportunities.

I would say that I was impressed, except that I’m used to LLMs providing these kinds of well-informed answers to challenging questions.

But as well as being a robotics professor, I’m also a co-founder of two UK robotics startups, though I’ve never been a CEO.

Given this background, I tried the AI again, this time with the prompt:

“You are the CEO of a small startup robotics company with enough budget to cover running costs for 12 months. You need to choose between either investing in research and development or pushing hard to gain sales with your current product. Which should it be?”

Once again, the AI gave me a rapid and reasoned response: “As the CEO of a small startup robotics company with only 12 months of runway, you must prioritise customer validation and sales traction with your product, rather than diverting heavily into research and development. Here’s the reasoning and how to execute the right strategy…”

I’m in a (slightly) better position to assess this advice and can say that I found it credible, both in terms of what needed to be done and how to execute.

So, going back to the big question: could an AI actually do a CEO’s job? Or, to look at this another way, what kind of intelligence, artificial or otherwise, do you need to be a great CEO?


Intangible skills

In 2023, the international management consultancy McKinsey published an article on what makes a successful CEO. The CEO’s main task, as McKinsey sees it, is to develop the company’s strategy and then ensure that its resources are suitably deployed to execute that strategy.

It’s a tough job and many human CEOs fail. McKinsey reported that only three out of five new CEOs met company expectations during their first 18 months in the role.

We’ve already seen that AIs can be strategic and, given the right information, can formulate and articulate a business plan, so they might be able to perform this key aspect of the CEO’s role. But what about the other skills a good corporate leader should have?

Creativity and social intelligence tend to be the traits that people assume will ensure humans keep these top jobs.

People skills are also identified by McKinsey as important for CEOs, as well as the ability to see new business opportunities that others might miss – the kind of creative insight AIs currently lack, not least because they get most of their training data second-hand from us.

Many companies are already using AI as a tool for strategy development and execution, but you need to drive that process with the right questions and critically assess the results. For this, it still helps to have direct, real-world experience.

Calculated risk

Another way of looking at the CEO replacement question is to ask not what makes a good CEO, but what makes a bad one.

Because if AI could just be better than some of the bad CEOs (remember, two out of five don’t meet expectations), then AI might be what’s needed for the many companies labouring under poor leadership.

Sometimes the traits that help people become corporate leaders may actually make it harder for them to be a good CEO: narcissism, for example.

People skills, as well as the ability to assess situations and think strategically, are sought-after traits in a CEO – Photo credit: Getty Images

This kind of strong self-belief might help you progress your career, but when you get to CEO, you need a broader perspective so you can think about what’s good for the company as a whole.

A growing scientific literature also suggests that those who rise to the top of the corporate ladder may be more likely to have psychopathic tendencies (some believe that the global financial crisis of 2007 was triggered, in part, by psychopathic risk-taking and bad corporate behaviour).

In this context AI leadership has the potential to be a safer option with a more measured approach to risk.

Other studies have looked at bias in company leadership. An AI could be less biased, for instance, hiring new board members based on their track record and skills, without prejudging people on the basis of gender or ethnicity.

We should, however, be wary that the practice of training AIs on human data means that they can inherit our biases too.

A good CEO is also a generalist; they need to be flexible and quick to analyse problems and situations.

In my book, The Psychology of Artificial Intelligence, I’ve argued that although AI has surpassed humans in some specialised domains, more fundamental progress is needed before AI could be said to have the same kind of flexible, general intelligence as a person.

In other words, we may have some of the components needed to build our AI CEO, but putting the parts together is a not-to-be-underestimated challenge.

Funnily enough, human CEOs, on the whole, are big AI enthusiasts.

A 2025 CEO survey by consultancy firm PwC found that “more than half (56 per cent) tell us that generative AI [the kind that appeared in 2022 and can process and respond to requests made with conversational language] has resulted in efficiencies in how employees use their time, while around one-third report increased revenue (32 per cent) and profitability (34 per cent).”

So CEOs seem keen to embrace AI, but perhaps less so when it comes to the boardroom – according to a PwC report from 2018, out of nine job categories, “senior officials and managers” were deemed to be the least likely to be automated.

Returning to Elon Musk, his job as the boss of Tesla seems pretty safe for now. But for anyone thinking about who’ll succeed him as CEO, you could be forgiven for wondering if it might be an AI rather than one of his human boardroom colleagues.



OpenAI Backs AI-Animated Film for 2026 Cannes Festival




OpenAI is backing the production of the first film largely animated with AI tools, set to premiere at the 2026 Cannes Film Festival. The tech company aims to prove its AI technology can revolutionize Hollywood filmmaking with faster production timelines and significantly lower costs.

The movie, titled “Critterz,” will be about woodland creatures that go on an adventure after their village is damaged by a stranger. The film’s producers are aiming for a global theatrical release after the premiere at the Cannes Film Festival.

The project has a budget of less than US$30 million and a production timeline of nine months. That is a significant difference: most mainstream animated movies have budgets in the range of US$100 million to US$200 million and a three-year development and production cycle.

OpenAI-backed ‘Critterz’ set for release at the Cannes Film Festival

Chad Nelson, a creative specialist at OpenAI, originally began developing Critterz as a short film three years ago, using the company’s DALL-E image generation tool to develop the concept. Nelson has now partnered with the London-based Vertigo Films and studio Native Foreign in Los Angeles to expand the project into a feature film. 

In the news release that announced OpenAI’s backing of the film, Nelson said: “OpenAI can say what its tools do all day long, but it’s much more impactful if someone does it,” adding, “That’s a much better case study than me building a demo.” Crucially, however, the film’s production will not be entirely AI-generated, as it will blend AI technology with human work. 

Human artists will draw sketches that will be fed into OpenAI’s tools such as GPT-5, the Large Language Model (LLM) on which ChatGPT is built, as well as other image-generating AI models. Human actors will voice the characters. 
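
The production’s actual toolchain isn’t public beyond the tools named above; as a rough illustration of the sketch-to-render step, assuming OpenAI’s images API and its gpt-image-1 model:

```python
import base64

from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Illustrative only: render a human-drawn character sketch with an
# image model. The film's real pipeline and prompts are not public.
with open("critter_sketch.png", "rb") as sketch:
    result = client.images.edit(
        model="gpt-image-1",  # assumed model choice, not from the article
        image=sketch,
        prompt=(
            "Render this woodland-creature sketch as a polished "
            "animated-film character with soft lighting and a forest backdrop"
        ),
    )

# gpt-image-1 returns the image as base64; decode and save it to disk.
with open("critter_render.png", "wb") as out:
    out.write(base64.b64decode(result.data[0].b64_json))
```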

Critterz has some of the writing team behind the smash hit ‘Paddington in Peru’

Critterz has some of the writing team behind the hit film Paddington in Peru, but it comes at a time of intense legal fights between Hollywood studios and AI and other tech companies over intellectual property rights.

Studios such as Disney, Universal, and Warner Bros. have filed copyright infringement suits against Midjourney, another AI firm, alleging that it illegally used their characters to train its image generation engine. Critterz will be funded by Vertigo’s Paris-based parent company, Federation Studios, with some 30 contributors set to share profits.

Still, Critterz will not be the first feature film ever made with generative AI. Last year, “DreadClub: Vampire’s Verdict” was released and is widely considered the first feature film made entirely with generative AI. It had a budget of US$405.


