Tolstoy’s Complaint: Mission Command in the Age of Artificial Intelligence

What will become of battlefield command in the years ahead? This question is at the heart of the US Army’s once-in-a-generation reforms now underway. In search of answers, the Army looks to Ukraine. Events there suggest at least two truths. One is that decentralized command, which the US Army calls mission command and claims as its mode, will endure as a virtue. A second is that future commanders will use artificial intelligence to inform every decision—where to go, whom to kill, and whom to save. The recently announced Army Transformation Initiative indicates the Army intends to act on both.

But from these lessons there arises a different dilemma: How can an army at once preserve a culture of decentralized command and integrate artificial intelligence into its every task? Put another way, if at all echelons commanders rely on artificial intelligence to inform decisions, do they not risk just another form of centralization, not at the top, but within an imperfect model? To understand this dilemma and to eventually resolve it, the US Army would do well to look once again to the Ukrainian corner of the map, though this time as a glance backward two centuries, so that it might learn from a young redleg in Crimea named Leo Tolstoy.

What Tolstoy Saw

Before he became a literary titan, Leo Tolstoy was a twenty-something artillery officer. In 1854 he found himself in the besieged port of Sevastopol, then under relentless French and British shelling, party to the climax of the Crimean War. When not tending to his battery on the city’s perilous Fourth Bastion, Tolstoy wrote dispatches about life under fire for his preferred journal in Saint Petersburg, The Contemporary. These dispatches, read across literate Russia for their candor and craft, made Tolstoy famous. They have since been compiled as The Sebastopol Sketches and are considered by many to be the first modern war reportage. Their success confirmed for Tolstoy that to write was his life’s calling, and when the Crimean War ended, he left military service so that he might do so full-time.

But once a civilian, Tolstoy did not leave war behind, at least not as subject matter. Until he died, he mined his time in uniform for the material of his fiction. In that fiction, most prominently the legendary accounts of the battles of Austerlitz and Borodino found in War and Peace, one can easily detect what he thought of command. Tolstoy’s contention is that the very idea of command itself is practically a fiction, so tenuous is the relationship between what commanders visualize, describe, and direct and what in fact happens on the battlefield. The worst officers in Tolstoy’s stories do great harm by vainly supposing they understand the battles at hand when they in fact haven’t the faintest idea of what’s going on. The best officers are at peace with their inevitable ignorance and, rather than fighting it, gamely project a calm that inspires their men. Either way, most officers wander the battlefield, blinded by gun smoke or folds in the earth, only later making up stories to explain what happened, stories others wrongly take as credible witness testimony.

Command or Hallucination?

Students of war may wonder whether Tolstoy was saying anything Carl von Clausewitz had not already said in On War, published in 1832. After all, there Clausewitz made famous allowances for the way the unexpected and the small both shape battlefield outcomes, describing their effects as “friction,” a term that still enjoys wide use in the US military today. But the friction metaphor itself already hints at one major difference between Clausewitz’s understanding of battle and Tolstoy’s. For Clausewitz all the things that go sideways in war amount to friction impeding the smooth operation of a machine at work on the battlefield, a machine begotten of an intelligent design and consisting of interlocking parts that fail by exception. As Tolstoy sees it, there is no such machine, except in the imagination of largely ineffectual senior leaders, who, try as they might, cannot realize their designs on the battlefield.

Tolstoy thus differed from Clausewitz by arguing that commanders not only fail to anticipate friction, but outright hallucinate. They see patterns on the battlefield where there are none and causes where there is only coincidence. In War and Peace, Pyotr Bagration seeks permission to start the battle at Austerlitz when it is already lost, Moscow burns in 1812 not because Kutuzov ordered it but because the firefighters fled the city, and the Russians’ masterful flank attack at Tarutino occurs not in accordance with a preconceived plan but by an accident of logistics. Yet historians and contemporaries alike credit Bagration and Kutuzov for the genius of these events—to say nothing of Napoleon, whom Tolstoy casts as a deluded egoist, “a child, who, holding a couple of strings inside a carriage, thinks he is driving it.”

Why then, per Tolstoy, do commanders and historians credit such plans with unrelated effects? Tolstoy answers this in a typical philosophical passage of War and Peace: “The human mind cannot grasp the causes of events in their completeness,” but “the desire to find those causes is implanted in the human soul.” People, desirous of coherence but unable to espy the many small causes of events, instead see grand things and great men that are not there. Here Tolstoy makes a crucial point—it is not that there are no causes of events, just that the causes are too numerous and obscure for humans to know. These causes Tolstoy called “infinitesimals,” and to find them one must “leave aside kings, ministers, and generals” and instead study “the small elements by which the masses are moved.”

This is Tolstoy’s complaint. He lodged it against the great man theorists of history, then influential, who supposed great men propelled human events through genius and will. But it also can be read as a strong case for mission command, for Tolstoy’s account of war suggests that not only is decentralized command the best sort of command—it is the only authentic command at all. Everything else is illusory. High-echelon commanders’ distance from the fight, from the level of the grunt or the kitchen attendant, allows their hallucinations to persist unspoiled by reality far longer than the hallucinations of those below them in rank. The leader low to the ground is best positioned to integrate the infinitesimals into an understanding of the battlefield. That integration, as Isaiah Berlin writes in his great Tolstoy essay “The Hedgehog and the Fox,” is more “artistic-psychological” work than anything else. And what else are the “mutual trust” and “shared understanding,” which Army doctrine deems essential to mission command, but the products of an artful, psychological process?

From Great Man Theory to Great Model Theory

Perhaps no one needs Tolstoy to appreciate mission command. Today American observers see everywhere on the battlefields of Ukraine proof of its wisdom. They credit the Ukrainian armed forces with countering their Russian opponents’ numerical and material superiority by employing more dynamic, decentralized command and control, which they liken to the US Army’s own style. Others credit Ukrainians’ use of artificial intelligence for myriad battlefield functions, and here the Ukrainians are far ahead of the US Army. Calls abound to catch up by integrating artificial intelligence into data-centric command-and-control tools, staff work, and doctrine. The relationship between these two imperatives, to integrate artificial intelligence and preserve mission command, has received less attention.

At first blush, artificial intelligence seems a convincing answer to Tolstoy’s complaint. In “The Hedgehog and the Fox” Isaiah Berlin summarized that complaint this way:

Our ignorance is due not to some inherent inaccessibility of the first causes, only their multiplicity, the smallness of the ultimate units, and our own inability to see and hear and remember and record and co-ordinate enough of the available material. Omniscience is in principle possible even to empirical beings, but, of course, in practice unattainable.

Can one come up with a better pitch for artificial intelligence than that? Is not artificial intelligence’s alleged value proposition for the commander its ability to integrate all the Tolstoyan infinitesimals, those “ultimate units,” and then project the result, perhaps on a wearable device, for quick reference by the dynamic officer pressed for time by an advancing enemy? Put another way, can’t a great model deliver on the battlefield what a great man couldn’t?

The trouble is threefold. First, whatever model or computer vision or multimodal system we call “artificial intelligence” and incorporate into a given layer of a command-and-control platform represents something like one mind, not many minds, so each instance wherein a leader outsources analysis to that artificial intelligence is another instance of centralization. Second, the models we have are disposed to patterns and to hubris, making them less a departure from the hallucinating commanders Tolstoy so derided than a replication of them. Finally, leaders may reject the evidence of their eyes and ears in deference to artificial intelligence because it enjoys the credibility of dispassionate computation, thereby forgoing precisely the ground-level inputs that Tolstoy pointed out were most important for understanding battle.

Consider the centralization problem. Different models may be in development for different uses across the military, but the widespread fielding of any artificial intelligence–enabled command-and-control system risks proliferating the same model across the operational army. If the purpose of mission command were strictly to hasten battlefield decisions by replicating the mind of a higher command within junior leaders, then the threat of centralization would be irrelevant because artificial intelligence would render mission command obsolete. But Army Doctrine Publication 6-0 also lists among mission command’s purposes the leveraging of “subordinate ingenuity”—something that centralization denies. In the aggregate, one risks giving every user the exact same coach, if not the exact same commander, however brilliant that coach or commander might be.

Such a universal coach, like a universal compass or rifle, might not be so bad, were it not for the tendency of that universal coach to hallucinate. That large language models make things up and then confidently present them as truth is not news, but it is also not going away. Nor is those models’ basic function, which is to seek patterns and then extend them. Computer vision likewise produces false positives. This “illusion of thinking,” to paraphrase recent research, severely limits the capacity of artificial intelligence to tackle novel problems or process novel environments. Tolstoy observes that during the invasion of Russia “a war began which did not follow any previous traditions of war,” yet Napoleon “did not cease to complain . . . that the war was being carried on contrary to all the rules—as if there were any rules for killing people.” In this way Tolstoy ascribes Napoleon’s disastrous defeat at Borodino precisely to the sort of error artificial intelligence is prone to make—the faulty assumption that the rules that once applied extend forward mechanically. There is thus little difference between the sort of prediction for which models are trained and the picture of Napoleon in War and Peace on the eve of his arrival in Moscow. He imagined a victory that the data on which he had trained indicated he ought to expect but that ultimately eluded him.
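
A minimal sketch of that failure mode, with invented figures rather than anything from the historical record: a trend fit to a week of easy advances is extended forward mechanically, with no way to register that the defender has adapted.

```python
# Toy illustration (assumed numbers): a model fit on the old "rules" of a
# campaign extends them into a regime where they no longer hold.
import numpy as np

days = np.arange(7)
advance_km = np.array([22, 24, 23, 25, 26, 24, 27])  # a week of steady gains

# Fit a simple trend on the old regime and project it forward.
slope, intercept = np.polyfit(days, advance_km, 1)
future_days = np.arange(7, 14)
predicted = slope * future_days + intercept

# What actually happens once the defender adapts -- a change the trend fit
# cannot anticipate, because nothing in its training data looks like it.
actual = [9, 4, 0, -3, -5, -6, -8]

for day, p, a in zip(future_days, predicted, actual):
    print(f"day {day}: predicted {p:+.1f} km, actual {a:+d} km")
```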

Such hallucinations are compounded by models’ systemic overconfidence. Research suggests that, like immature officers, models prefer to confidently proffer an answer rather than confess they just do not know. It is then not hard to imagine artificial intelligence processing incomplete reports of enemy behavior on the battlefield, deciding that the behavior conforms to a pattern, filling in the gaps the observed data leaves, then confidently predicting an enemy course of action disproven by what a sergeant on the ground is seeing. It is similarly not hard to imagine a commander directing, at the suggestion of an artificial intelligence model, the creation of an engagement area anchored to hallucinated terrain or cued by a nonexistent enemy patrol. In the aggregate, artificial intelligence might effectively imagine entire scenarios like the ones on which it was trained playing out on a battlefield where it can detect little more than the distant, detonating pop of an explosive-laden drone.
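
A toy sketch of that overconfidence, with invented data and no relation to any fielded system: a classifier trained on routine reports receives one with a missing field imputed from historical averages and a value far outside anything it has seen, and it still answers with near-certainty.

```python
# Hypothetical example: a simple classifier fills a gap with the historical
# mean, extrapolates far beyond its training data, and reports ~100% confidence.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Invented training data: two report features (vehicle count, radio intercepts)
# and whether an attack followed.
X = rng.normal(loc=[5.0, 5.0], scale=1.0, size=(200, 2))
y = (X[:, 0] + X[:, 1] > 10).astype(int)
clf = LogisticRegression().fit(X, y)

# A new report is missing the radio-intercept field (imputed with the mean)
# and shows a vehicle count unlike anything in training.
incomplete_report = np.array([[30.0, X[:, 1].mean()]])
prob_attack = clf.predict_proba(incomplete_report)[0, 1]
print(f"predicted probability of attack: {prob_attack:.3f}")  # effectively 1.000
```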

To be fair, uniformed advocates of artificial intelligence have said explicitly that no one wants to replace human judgment. Often those advocates speak instead of artificial intelligence informing, enhancing, enabling, or otherwise making more efficient human commanders. Besides, any young soldier will point out that human commanders make all the same mistakes. Officers need no help from machines to spook at a nonexistent enemy or to design boneheaded engagement areas. So what’s the big deal with using artificial intelligence?

The issue is precisely that we regard artificial intelligence as more than human and so show it a deference researchers call “automation bias.” It’s all but laughable today to ascribe to any human the genius for seeing through war’s complexity that great man theorists once vested in Napoleon. But now many invest similar faith in the genius of artificial intelligence. Sam Altman of OpenAI refers to his project as the creation of “superintelligence.” How much daylight is there between the concept of superintelligence and the concept of the great man? We thus risk treating artificial intelligence as the Napoleon that Napoleon could not be, the genius integrator of infinitesimals, the protagonist of the histories that Tolstoy so effectively demolished in War and Peace. And if we regard artificial intelligence as the great man of history, can we expect a young lieutenant to resist its recommendations?

What Is to Be Done?

Artificial intelligence, in its many forms, is here to stay. The Army cannot afford a Luddite reflex in this interwar moment. It must integrate artificial intelligence into its operations. Anybody who has attempted to forecast when a brigade will be ready for war or when a battalion will need fuel resupply or when a soldier will need a dental checkup knows how much there is to be gained from narrow artificial intelligence, which promises immense efficiencies in high-iteration, structured, context-independent tasks. Initiatives like Next Generation Command and Control promise as much. But the risks to mission command posed by artificial intelligence are sizable. Tolstoy’s complaint is of great use to the Army as it seeks to understand and mitigate those risks.
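
A minimal sketch of the narrow, high-iteration kind of task described above, using invented field names and figures rather than any real Army data system: forecasting a battalion’s next fuel demand from structured consumption history.

```python
# Hypothetical example of narrow AI on a structured, repetitive task: regress
# past fuel consumption on planned activity, then project tomorrow's demand.
import numpy as np
from sklearn.linear_model import LinearRegression

# Invented history: [vehicles operating, kilometers driven, hours idling] per day.
activity = np.array([
    [40, 120, 30],
    [42, 150, 25],
    [38,  90, 40],
    [45, 200, 20],
    [44, 180, 22],
])
gallons_consumed = np.array([2100, 2450, 1800, 2900, 2750])

model = LinearRegression().fit(activity, gallons_consumed)

# Tomorrow's planned activity (assumed figures) -> projected resupply requirement.
tomorrow = np.array([[43, 170, 25]])
print(f"projected fuel demand: {model.predict(tomorrow)[0]:.0f} gallons")
```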

The first way to mitigate the risk artificial intelligence poses to mission command is to limit its use to those high-volume, simple tasks. Artificial intelligence is ill-suited for low-volume, highly complex, context-dependent, deeply human endeavors—a good description of warfare—and so its role in campaign design, tactical planning, the analysis of the enemy, and the leadership of soldiers should be small. Its use in such endeavors should be limited to expediting calculations of the small inputs human judgment requires. This notion of human-machine teaming in war is not new (it has been explored well by others, including Major Amanda Collazzo via the Modern War Institute). But amid excitement for it, the Army risks forgetting that it must carefully draw and jealously guard the boundary between human and machine. It must do so not only for ethical reasons, but because, as Tolstoy showed to such effect, command in battle humbles the algorithmic mind—of man or machine. Put in Berlin’s terms, command remains “artistic-psychological” work, and that work, even now, remains human work. Such caution does not require a ban on machine learning and artificial intelligence in simulations or wargames, which would be self-sabotage, but it does require that officers check any temptation to outsource the authorship of campaigns or orders to a model—something which sounds obvious now, but soon may not.

The second way is to program into the instruction of Army leaders a healthy skepticism of artificial intelligence. This might be done first by splitting the instruction of students into analog and artificial intelligence–enabled segments, not unlike training mortarmen to plan fire missions with a plotting board as well as a ballistic computer. Officers must first learn to write plans and direct their execution without aid before incorporating artificial intelligence into the process. Their ability to do so must be regularly recertified throughout their careers. Classes on machine learning that highlight the dependency of models on data quality must complement classes on intelligence preparation of the battlefield. Curriculum designers will rightly point out that curricula are already overstuffed, but if artificial intelligence–enabled command and control is as revolutionary as its proponents suggest, it demands a commensurate change in the way we instruct our commanders.

The third way to mitigate the risks posed is to program the same skepticism of artificial intelligence into training. When George Marshall led the Infantry School during the interwar years, he and fellow instructor Joseph Stilwell forced students out of the classroom and into the field for unscripted exercises, providing them bad maps so as to simulate the unpredictability of combat. Following their example, the Army should deliberately equip leaders during field exercises and wargames with hallucinatory models. Those leaders should be evaluated on their ability to recognize when the battlefield imagined by their artificial intelligence–enabled command-and-control platforms and the battlefield they see before them differ. And when training checklists require that for a unit to be fully certified in a task it must perform that task under dynamic, degraded conditions, “degraded” must come to include hallucinatory or inoperable artificial intelligence.
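
A sketch of what a deliberately hallucinatory feed could look like in such an exercise; this is a notional script, not an existing Army capability, and the contact types and grids are invented. Fabricated contacts are mixed into the ground truth and some real ones are dropped, so evaluators can grade whether leaders notice the divergence.

```python
# Notional exercise tool (assumed, not a fielded system): corrupt the ground-truth
# contact feed so trainees must reconcile the machine's picture with their own.
import random

def hallucinatory_feed(true_contacts, fabrication_rate=0.5, drop_rate=0.25, seed=None):
    rng = random.Random(seed)
    # Silently drop some real contacts.
    feed = [c for c in true_contacts if rng.random() > drop_rate]
    # Add fabricated contacts the "model" presents as confidently as real ones.
    for _ in range(max(1, int(len(true_contacts) * fabrication_rate))):
        feed.append({
            "type": rng.choice(["dismounted patrol", "BTR section", "mortar team"]),
            "grid": f"38S MB {rng.randint(10000, 99999)} {rng.randint(10000, 99999)}",
            "fabricated": True,  # flag visible only to evaluators, never to the unit
        })
    rng.shuffle(feed)
    return feed

ground_truth = [
    {"type": "tank section", "grid": "38S MB 12345 67890", "fabricated": False},
    {"type": "dismounted patrol", "grid": "38S MB 23456 78901", "fabricated": False},
]
for contact in hallucinatory_feed(ground_truth, seed=42):
    print(contact)
```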

Even then, Army leaders must never forget what Tolstoy teaches us: that command is a contingent, human endeavor. Battles often present idiosyncratic problems of their own, liable to defy patterns. Well-trained young leaders’ proximity to those problems is an asset rather than a liability. Because of that proximity they can spot on the battlefield the infinitesimally small things that great data ingests cannot capture. A philosophy of mission command, however fickle and at times frustrating, best accommodates the insights that arise from that proximity. Only then can the Army see war’s Tolstoyan infinitesimals through the gun smoke and have any hope of integrating them.

Theo Lipsky is an active duty US Army captain. He is currently assigned as an instructor to the Department of Social Sciences at the US Military Academy at West Point. He holds a master of public administration from Columbia University’s School of International and Public Affairs and a bachelor of science from the US Military Academy. His writing can be found at theolipsky.substack.com.

The views expressed are those of the author and do not reflect the official position of the United States Military Academy, Department of the Army, or Department of Defense.

Image credit: Sgt. Zoe Morris, US Army



Intro robotics students build AI-powered robot dogs from scratch

Equipped with a starter robot hardware kit and cutting-edge lessons in artificial intelligence, students in CS 123: A Hands-On Introduction to Building AI-Enabled Robots are mastering the full spectrum of robotics – from motor control to machine learning. Now in its third year, the course has students build and enhance an adorable quadruped robot, Pupper, programming it to walk, navigate, respond to human commands, and perform a specialized task that they showcase in their final presentations.

The course, which evolved from an independent study project led by Stanford’s robotics club, is now taught by Karen Liu, professor of computer science in the School of Engineering, in addition to Jie Tan from Google DeepMind and Stuart Bowers from Apple and Hands-On Robotics. Throughout the 10-week course, students delve into core robotics concepts, such as movement and motor control, while connecting them to advanced AI topics.

“We believe that the best way to help and inspire students to become robotics experts is to have them build a robot from scratch,” Liu said. “That’s why we use this specific quadruped design. It’s the perfect introductory platform for beginners to dive into robotics, yet powerful enough to support the development of cutting-edge AI algorithms.”

What makes the course especially approachable is its low barrier to entry – students need only basic programming skills to get started. From there, the students build up the knowledge and confidence to tackle complex robotics and AI challenges.

Robot creation goes mainstream

Pupper evolved from Doggo, built by the Stanford Student Robotics club to offer people a way to create and design a four-legged robot on a budget. When the team saw the cute quadruped’s potential to make robotics both approachable and fun, they pitched the idea to Bowers, hoping to turn their passion project into a hands-on course for future roboticists.

“We wanted students who were still early enough in their education to explore and experience what we felt like the future of AI robotics was going to be,” Bowers said.

This current version of Pupper is more powerful and refined than its predecessors. It’s also irresistibly adorable and easier than ever for students to build and interact with.

“We’ve come a long way in making the hardware better and more capable,” said Ankush Kundan Dhawan, one of the first students to take the Pupper course in the fall of 2021 before becoming its head teaching assistant. “What really stuck with me was the passion that instructors had to help students get hands-on with real robots. That kind of dedication is very powerful.”

Code come to life

Building a Pupper from a starter hardware kit blends different types of engineering, including electrical work, hardware construction, coding, and machine learning. Some students even produced custom parts for their final Pupper projects. The course pairs weekly lectures with hands-on labs. Lab titles like Wiggle Your Big Toe and Do What I Say keep things playful while building real skills.

CS 123 students ready to show off their Pupper’s tricks. | Harry Gregory

Over the initial five weeks, students are taught the basics of robotics, including how motors work and how robots can move. In the next phase of the course, students add a layer of sophistication with AI. Using neural networks to improve how the robot walks, sees, and responds to the environment, they get a glimpse of state-of-the-art robotics in action. Many students also use AI in other ways for their final projects.

“We want them to actually train a neural network and control it,” Bowers said. “We want to see this code come to life.”
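
For readers curious what training a neural network to control the robot can involve at this scale, the sketch below shows the general shape of a policy network; the observation size, joint count, and class name are assumptions for illustration, not the actual CS 123 starter code.

```python
# Illustrative only (assumed dimensions, not the course's code): a small policy
# network maps a quadruped's observations to target joint positions each step.
import torch
import torch.nn as nn

class PupperPolicy(nn.Module):
    def __init__(self, obs_dim: int = 24, act_dim: int = 12):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.Tanh(),
            nn.Linear(128, 128), nn.Tanh(),
            nn.Linear(128, act_dim),  # one target per actuated joint
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

policy = PupperPolicy()
obs = torch.zeros(1, 24)        # placeholder for joint angles, IMU readings, commands
joint_targets = policy(obs)     # in practice, sent to the motor controllers
print(joint_targets.shape)      # torch.Size([1, 12])
```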

By the end of the quarter this spring, students were ready for their capstone project, called the “Dog and Pony Show,” where guests from NVIDIA and Google were present. Six teams had Pupper perform creative tasks – including navigating a maze and fighting a (pretend) fire with a water pick – surrounded by the best minds in the industry.

“At this point, students know all the essential foundations – locomotion, computer vision, language – and they can start combining them and developing state-of-the-art physical intelligence on Pupper,” Liu said.

“This course gives them an overview of all the key pieces,” said Tan. “By the end of the quarter, the Pupper that each student team builds and programs from scratch mirrors the technology used by cutting-edge research labs and industry teams today.”

All ready for the robotics boom

The instructors believe the field of AI robotics is still gaining momentum, and they’ve made sure the course stays current by integrating new lessons and technology advances nearly every quarter.

This Pupper was mounted with a small water jet to put out a pretend fire. | Harry Gregory

Students have responded to the course with resounding enthusiasm, and the instructors expect that interest in robotics – at Stanford and in general – will continue to grow. They hope to expand the course and to see the community they’ve fostered through CS 123 contribute to this engaging and important discipline.

“The hope is that many CS 123 students will be inspired to become future innovators and leaders in this exciting, ever-changing field,” said Tan.

“We strongly believe that now is the time to make the integration of AI and robotics accessible to more students,” Bowers said. “And that effort starts here at Stanford and we hope to see it grow beyond campus, too.”



Why Infuse Asset Management’s Q2 2025 Letter Signals a Shift to Artificial Intelligence and Cybersecurity Plays

The rapid evolution of artificial intelligence (AI) and the escalating complexity of cybersecurity threats have positioned these sectors as the next frontier of investment opportunity. Infuse Asset Management’s Q2 2025 letter underscores this shift, emphasizing AI’s transformative potential and the urgent need for robust cybersecurity infrastructure to mitigate risks. Below, we dissect the macroeconomic forces, sector-specific tailwinds, and portfolio reallocation strategies investors should consider in this new paradigm.

The AI Uprising: Macro Drivers of a Paradigm Shift

The AI revolution is accelerating at a pace that dwarfs historical technological booms. Take ChatGPT, which reached 800 million weekly active users by April 2025—a milestone achieved in just two years. This breakneck adoption is straining existing cybersecurity frameworks, creating a critical gap between innovation and defense.

Meanwhile, the U.S.-China AI rivalry is fueling a global arms race. China’s industrial robot installations surged from 50,000 in 2014 to 290,000 in 2023, outpacing U.S. adoption. This competition isn’t just about economic dominance—it’s a geopolitical chess match where data sovereignty, espionage, and AI-driven cyberattacks now loom large. The concept of “Mutually Assured AI Malfunction (MAIM)” highlights how even a single vulnerability could destabilize critical systems, much like nuclear deterrence but with far less predictability.

Cybersecurity: The New Infrastructure for an AI World

As AI systems expand into physical domains—think autonomous taxis or industrial robots—so do their vulnerabilities. In San Francisco, autonomous taxi providers now command 27% market share, yet their software is a prime target for cyberattacks. The decline in AI inference costs (outpacing historical declines in electricity and memory) has made it cheaper to deploy AI, but it also lowers the barrier for malicious actors to weaponize it.


Tech giants are pouring capital into AI infrastructure—NVIDIA and Microsoft alone increased CapEx from $33 billion to $212 billion between 2014 and 2024. This influx creates a vast, interconnected attack surface. Investors should prioritize cybersecurity firms that specialize in quantum-resistant encryption, AI-driven threat detection, and real-time infrastructure protection.

The Human Element: Skills Gaps and Strategic Shifts

The demand for AI expertise is soaring, but the workforce is struggling to keep pace. U.S. AI-related IT job postings have surged 448% since 2018, while non-AI IT roles have declined by 9%. This bifurcation signals two realities:
1. Cybersecurity skills are now mission-critical for safeguarding AI systems.
2. Ethical AI development and governance are emerging as compliance priorities, particularly in regulated industries.

That divergence is likely to grow only starker, reinforcing the need for investors to back training platforms and cybersecurity firms bridging this skills gap.

Portfolio Reallocation: Where to Deploy Capital

Infuse’s insights suggest three actionable strategies:

  1. Core Holdings in Cybersecurity Leaders:
    Target firms like CrowdStrike (CRWD) and Palo Alto Networks (PANW), which excel in AI-powered threat detection and endpoint security.

  2. Geopolitical Plays:
    Invest in companies addressing data sovereignty and cross-border compliance, such as Palantir (PLTR) or Cloudflare (NET), which offer hybrid cloud solutions.

  3. Emerging Sectors:
    Look to quantum computing security (e.g., Rigetti Computing (RGTI)) and AI governance platforms like DataRobot, which help enterprises audit and validate AI models.

The Bottom Line: AI’s Growth Requires a Security Foundation

The “productivity paradox” of AI—where speculative valuations outstrip tangible ROI—is real. Yet, cybersecurity is one area where returns are measurable: breaches cost companies millions, and defenses reduce risk. Investors should treat cybersecurity as the bedrock of their AI investments.

As Infuse’s letter implies, the next decade will belong to those who balance AI’s promise with ironclad security. Position portfolios accordingly.

JR Research



5 Ways CFOs Can Upskill Their Staff in AI to Stay Competitive

Chief financial officers are recognizing the need to upskill their workforce to ensure their teams can effectively harness artificial intelligence (AI).

According to a June 2025 PYMNTS Intelligence report, “The Agentic Trust Gap: Enterprise CFOs Push Pause on Agentic AI,” all the CFOs surveyed said generative AI has increased the need for more analytically skilled workers. That’s up from 60% in March 2024.

“The shift in the past year reflects growing hands-on use and a rising urgency to close capability gaps,” according to the report.

The CFOs also said the overall mix of skills required across the business has changed. They need people who have AI-ready skills: “CFOs increasingly need talent that can evaluate, interpret and act on machine-generated output,” the report said.

The CFO role itself is changing. According to The CFO, 27% of job listings for chief financial officers now call for AI expertise.

Notably, the upskill challenge is not limited to IT. The need for upskilling in AI affects all departments, including finance, operations and compliance. By taking a proactive approach to skill development, CFOs can position their teams to work alongside AI rather than compete with it.

The goal is to cultivate professionals who can critically assess AI output, manage risks, and use the tools to generate business value.

Among CEOs, the impact is just as pronounced. According to a Cisco study, 74% fear that gaps in knowledge will hinder decisions in the boardroom and 58% fear they will stifle growth.

Moreover, 73% of CEOs fear losing ground to rivals because of IT knowledge or infrastructure gaps. One of the barriers holding back CEOs is a shortage of skills.

Their game plan: investing in knowledge and skills, upgrading infrastructure and enhancing security.

Here are some ways companies can upskill their workforce for AI:

Ensure Buy-in by the C-Suite

  • With leadership from the top, AI learning initiatives will be prioritized instead of falling by the wayside.
  • Allay any employee concerns about artificial intelligence replacing them so they will embrace the use and management of AI.

Build AI Literacy Across the Company

  • Invest in AI training programs: Offer structured training tailored to finance to help staff understand both the capabilities and limitations of AI models, according to CFO.university.
  • Promote AI fluency: Focus on both technical skills, such as how to use AI tools, and conceptual fluency of AI, such as understanding where AI can add value and its ethical implications, according to the CFO’s AI Survival Guide.
  • Create AI champions: Identify and develop ‘AI champions’ within the team who can bridge the gap between finance and technology, driving adoption and supporting peers, according to Upflow.

Integrate AI Into Everyday Workflows

  • Start with small, focused projects such as expense management to demonstrate value and build confidence (a simple sketch of one such project follows this list).
  • Foster a culture where staff can explore AI tools, automate repetitive tasks, and share learnings openly.
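
As one hedged illustration of such a small, focused project, the sketch below categorizes expense lines with a few transparent keyword rules before any model is involved; the categories and keywords are invented for illustration, not taken from the report.

```python
# Hypothetical first automation project: rule-based expense categorization that a
# finance team can inspect, then later compare against an AI-assisted approach.
EXPENSE_RULES = {
    "travel": ["airfare", "hotel", "uber", "mileage"],
    "software": ["license", "subscription", "saas"],
    "meals": ["restaurant", "catering", "coffee"],
}

def categorize(description: str) -> str:
    text = description.lower()
    for category, keywords in EXPENSE_RULES.items():
        if any(keyword in text for keyword in keywords):
            return category
    return "needs review"  # route ambiguous lines to a human rather than guessing

for line in ["Hotel - client visit", "Annual SaaS license renewal", "Team offsite catering", "Misc supplies"]:
    print(f"{line!r:40} -> {categorize(line)}")
```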

Encourage Continuous Learning

Make learning about AI a continuous process, not a one-time event. Encourage staff to stay updated on AI trends and tools relevant to finance.

  • Promote collaboration between finance, IT, and other departments to maximize AI’s impact and share best practices.

Tap External Resources

  • Partner with universities and providers: Tap into external courses, certifications, and workshops to supplement internal training.
  • Consider tapping free or low-cost resources, such as online courses and AI literacy programs offered by tech companies (such as Grow with Google). These tools can provide foundational understanding and help employees build confidence in using AI responsibly.

Read more:

CFOs Move AI From Science Experiment to Strategic Line Item

3 Ways AI Shifts Accounts Receivable From Lagging to Leading Indicator

From Nice-to-Have to Nonnegotiable: How AI Is Redefining the Office of the CFO


