What will become of battlefield command in the years ahead? This question is at the heart of the US Army’s once-in-a-generation reforms now underway. In search of answers, the Army looks to Ukraine. Events there suggest at least two truths. One is that decentralized command, which the US Army calls mission command and claims as its mode, will endure as a virtue. A second is that future commanders will use artificial intelligence to inform every decision—where to go, whom to kill, and whom to save. The recently announced Army Transformation Initiative indicates the Army intends to act on both.
But from these lessons there arises a different dilemma: How can an army at once preserve a culture of decentralized command and integrate artificial intelligence into its every task? Put another way, if commanders at all echelons rely on artificial intelligence to inform decisions, do they not risk just another form of centralization, not at the top, but within an imperfect model? To understand this dilemma and eventually resolve it, the US Army would do well to look once again to the Ukrainian corner of the map, though this time with a glance backward of nearly two centuries, so that it might learn from a young redleg in Crimea named Leo Tolstoy.
What Tolstoy Saw
Before he became a literary titan, Leo Tolstoy was a twenty-something artillery officer. In 1854 he found himself in the besieged port of Sevastopol, then under relentless French and British shelling, party to the climax of the Crimean War. When not tending to his battery on the city’s perilous Fourth Bastion, Tolstoy wrote dispatches about life under fire for his preferred journal in Saint Petersburg, The Contemporary. These dispatches, read across literate Russia for their candor and craft, made Tolstoy famous. They have since been compiled as The Sebastopol Sketches and are considered by many to be the first modern war reportage. Their success confirmed for Tolstoy that to write was his life’s calling, and when the Crimean War ended, he left military service so that he might do so full-time.
But once a civilian, Tolstoy did not leave war behind, at least not as subject matter. Until he died, he mined his time in uniform for the material of his fiction. In that fiction, most prominently the legendary accounts of the battles of Austerlitz and Borodino found in War and Peace, one can easily detect what he thought of command. Tolstoy’s contention is that the very idea of command is practically a fiction, so tenuous is the relationship between what commanders visualize, describe, and direct and what in fact happens on the battlefield. The worst officers in Tolstoy’s stories do great harm by vainly supposing they understand the battles at hand when they in fact haven’t the faintest idea of what’s going on. The best officers are at peace with their inevitable ignorance and, rather than fighting it, gamely project a calm that inspires their men. Either way, most officers wander the battlefield, blinded by gun smoke or folds in the earth, only later making up stories to explain what happened, stories others wrongly take as credible witness testimony.
Command or Hallucination?
Students of war may wonder whether Tolstoy was saying anything Carl von Clausewitz had not already said in On War, published in 1832. After all, there Clausewitz made famous allowances for the way the unexpected and the small both shape battlefield outcomes, describing their effects as “friction,” a term that still enjoys wide use in the US military today. But the friction metaphor itself already hints at one major difference between Clausewitz’s understanding of battle and Tolstoy’s. For Clausewitz all the things that go sideways in war amount to friction impeding the smooth operation of a machine at work on the battlefield, a machine begotten of an intelligent design and consisting of interlocking parts that fail by exception. As Tolstoy sees it, there is no such machine, except in the imagination of largely ineffectual senior leaders, who, try as they might, cannot realize their designs on the battlefield.
Tolstoy thus differed from Clausewitz by arguing that commanders not only fail to anticipate friction, but outright hallucinate. They see patterns on the battlefield where there are none and causes where there is only coincidence. In War and Peace, Pyotr Bagration seeks permission to start the battle at Austerlitz when it is already lost, Moscow burns in 1812 not because Kutuzov ordered it but because the firefighters fled the city, and the Russians’ masterful flanking blow at Tarutino occurs not in accordance with a preconceived plan but by an accident of logistics. Yet historians and contemporaries alike credit Bagration and Kutuzov for the genius of these events—to say nothing of Napoleon, whom Tolstoy casts as a deluded egoist, “a child, who, holding a couple of strings inside a carriage, thinks he is driving it.”
Why then, per Tolstoy, do commanders and historians credit such plans with effects they did not cause? Tolstoy answers this in a typical philosophical passage of War and Peace: “The human mind cannot grasp the causes of events in their completeness,” but “the desire to find those causes is implanted in the human soul.” People, desirous of coherence but unable to espy the many small causes of events, instead see grand things and great men that are not there. Here Tolstoy makes a crucial point—it is not that there are no causes of events, just that the causes are too numerous and obscure for humans to know. These causes Tolstoy called “infinitesimals,” and to find them one must “leave aside kings, ministers, and generals” and instead study “the small elements by which the masses are moved.”
This is Tolstoy’s complaint. He lodged it against the great man theorists of history, then influential, who supposed great men propelled human events through genius and will. But it can also be read as a strong case for mission command, for Tolstoy’s account of war suggests that not only is decentralized command the best sort of command—it is the only authentic command at all. Everything else is illusory. High-echelon commanders’ distance from the fight, from the level of the grunt or the kitchen attendant, allows their hallucinations to persist unspoiled by reality far longer than those of leaders below them in rank. The leader low to the ground is best positioned to integrate the infinitesimals into an understanding of the battlefield. That integration, as Isaiah Berlin writes in his great Tolstoy essay “The Hedgehog and the Fox,” is more “artistic-psychological” work than anything else. And what else are the “mutual trust” and “shared understanding,” which Army doctrine deems essential to mission command, but the products of an artful, psychological process?
From Great Man Theory to Great Model Theory
Perhaps no one needs Tolstoy to appreciate mission command. Today American observers see proof of its wisdom everywhere on the battlefields of Ukraine. They credit the Ukrainian armed forces with countering their Russian opponents’ numerical and material superiority by employing more dynamic, decentralized command and control, which they liken to the US Army’s own style. Others credit Ukrainians’ use of artificial intelligence for myriad battlefield functions, and here the Ukrainians are far ahead of the US Army. Calls abound to catch up by integrating artificial intelligence into data-centric command-and-control tools, staff work, and doctrine. The relationship between these two imperatives, to integrate artificial intelligence and preserve mission command, has received less attention.
At first blush, artificial intelligence seems a convincing answer to Tolstoy’s complaint. In “The Hedgehog and the Fox” Isaiah Berlin summarized that complaint this way:
Our ignorance is due not to some inherent inaccessibility of the first causes, only their multiplicity, the smallness of the ultimate units, and our own inability to see and hear and remember and record and co-ordinate enough of the available material. Omniscience is in principle possible even to empirical beings, but, of course, in practice unattainable.
Can one come up with a better pitch for artificial intelligence than that? Is not artificial intelligence’s alleged value proposition for the commander its ability to integrate all the Tolstoyan infinitesimals, those “ultimate units,” and then project the result, perhaps on a wearable device, for quick reference by the dynamic officer pressed for time by an advancing enemy? Put another way, can’t a great model deliver on the battlefield what a great man couldn’t?
The trouble is threefold. First, whatever model, computer vision system, or multimodal system we call “artificial intelligence” and incorporate into a given layer of a command-and-control platform represents something like one mind, not many minds, so each instance wherein a leader outsources analysis to that artificial intelligence is another instance of centralization. Second, the models we have are disposed to patterns and to hubris, and so are more a replication of than a departure from the hallucinating commanders Tolstoy so derided. Finally, leaders may reject the evidence of their eyes and ears in deference to artificial intelligence because it enjoys the credibility of dispassionate computation, thereby forgoing precisely the ground-level inputs that Tolstoy pointed out were most important for understanding battle.
Consider the centralization problem. Different models may be in development for different uses across the military, but the widespread fielding of any artificial intelligence–enabled command-and-control system risks proliferating the same model across the operational army. If the purpose of mission command were strictly to hasten battlefield decisions by replicating the mind of a higher command within junior leaders, then the threat of centralization would be irrelevant because artificial intelligence would render mission command obsolete. But Army Doctrine Publication 6-0 lists among mission command’s purposes the leveraging of “subordinate ingenuity”—something that centralization denies. In the aggregate one risks giving every user the exact same coach, if not the exact same commander, however brilliant that coach or commander might be.
Such a universal coach, like a universal compass or rifle, might not be so bad, were it not for its tendency to hallucinate. That large language models make things up and then confidently present them as truth is not news, but it is also not going away. Nor is those models’ basic function, which is to seek patterns and then extend them. Computer vision likewise produces false positives. This “illusion of thinking,” to paraphrase recent research, severely limits the capacity of artificial intelligence to tackle novel problems or process novel environments. Tolstoy observes that during the invasion of Russia “a war began which did not follow any previous traditions of war,” yet Napoleon “did not cease to complain . . . that the war was being carried on contrary to all the rules—as if there were any rules for killing people.” In this way Tolstoy ascribes Napoleon’s disastrous defeat at Borodino precisely to the sort of error artificial intelligence is prone to make—the faulty assumption that the rules that once applied extend forward mechanically. There is thus little difference between the sort of prediction for which models are trained and the picture of Napoleon in War and Peace on the eve of his arrival in Moscow. He imagined a victory that the data on which he had trained indicated he ought to expect but that ultimately eluded him.
Such hallucinations are compounded by models’ systemic overconfidence. Research suggests that, like immature officers, models prefer to proffer a confident answer rather than confess they just do not know. It is then not hard to imagine artificial intelligence processing incomplete reports of enemy behavior on the battlefield, deciding that the behavior conforms to a pattern, filling in the gaps the observed data leaves, then confidently predicting an enemy course of action disproven by what a sergeant on the ground is seeing. It is similarly not hard to imagine a commander directing, at the suggestion of an artificial intelligence model, the creation of an engagement area anchored to hallucinated terrain or cued by a nonexistent enemy patrol. In the aggregate, artificial intelligence might effectively imagine entire scenarios like the ones on which it was trained playing out on a battlefield where it can detect little more than the distant, detonating pop of an explosive-laden drone.
To be fair, uniformed advocates of artificial intelligence have said explicitly that no one wants to replace human judgment. Often those advocates speak instead of artificial intelligence informing, enhancing, enabling, or otherwise making human commanders more efficient. Besides, any young soldier will point out that human commanders make all the same mistakes. Officers need no help from machines to spook at a nonexistent enemy or to design boneheaded engagement areas. So what’s the big deal with using artificial intelligence?
The issue is precisely that we regard artificial intelligence as more than human and so show it a deference researchers call “automation bias.” It’s all but laughable today to ascribe to any human the genius for seeing through war’s complexity that great man theorists once vested in Napoleon. But now many invest similar faith in the genius of artificial intelligence. Sam Altman of OpenAI refers to his project as the creation of “superintelligence.” How much daylight is there between the concept of superintelligence and the concept of the great man? We thus risk treating artificial intelligence as the Napoleon that Napoleon could not be, the genius integrator of infinitesimals, the protagonist of the histories that Tolstoy so effectively demolished in War and Peace. And if we regard artificial intelligence as the great man of history, can we expect a young lieutenant to resist its recommendations?
What Is to Be Done?
Artificial intelligence, in its many forms, is here to stay. The Army cannot afford a Luddite reflex in this interwar moment. It must integrate artificial intelligence into its operations. Anybody who has attempted to forecast when a brigade will be ready for war or when a battalion will need fuel resupply or when a soldier will need a dental checkup knows how much there is to be gained from narrow artificial intelligence, which promises immense efficiencies in high-iteration, structured, context-independent tasks. Initiatives like Next Generation Command and Control promise as much. But the risks to mission command posed by artificial intelligence are sizable. Tolstoy’s complaint is of great use to the Army as it seeks to understand and mitigate those risks.
The first way to mitigate the risk artificial intelligence poses to mission command is to limit its use to those high-volume, simple tasks. Artificial intelligence is ill-suited for low-volume, highly complex, context-dependent, deeply human endeavors—a good description of warfare—and so its role in campaign design, tactical planning, the analysis of the enemy, and the leadership of soldiers should be small. Its use in such endeavors should be limited to expediting calculations of the small inputs human judgment requires. This notion of human-machine teaming in war is not new (it has been explored well by others, including Major Amanda Collazzo via the Modern War Institute). But amid excitement for it, the Army risks forgetting that it must carefully draw and jealously guard the boundary between human and machine. It must do so not only for ethical reasons, but because, as Tolstoy showed to such effect, command in battle humbles the algorithmic mind—of man or machine. Put in Berlin’s terms, command remains “artistic-psychological” work, and that work, even now, remains human work. Such caution does not require a ban on machine learning and artificial intelligence in simulations or wargames, which would be self-sabotage, but it does require that officers check any temptation to outsource the authorship of campaigns or orders to a model—something that sounds obvious now but soon may not.
The second way is to program into the instruction of Army leaders a healthy skepticism of artificial intelligence. This might be done first by splitting the instruction of students into analog and artificial intelligence–enabled segments, not unlike training mortarmen to plan fire missions with a plotting board as well as a ballistic computer. Officers must first learn to write plans and direct their execution without aid before incorporating artificial intelligence into the process. Their ability to do so must be regularly recertified throughout their careers. Classes on machine learning that highlight the dependency of models on data quality must complement classes on intelligence preparation of the battlefield. Curriculum designers will rightly point out that curricula are already overstuffed, but if artificial intelligence–enabled command and control is as revolutionary as its proponents suggest, it demands a commensurate change in the way we instruct our commanders.
The third way to mitigate these risks is to program the same skepticism of artificial intelligence into training. When George Marshall led the Infantry School during the interwar years, he and fellow instructor Joseph Stilwell forced students out of the classroom and into the field for unscripted exercises, providing them bad maps so as to simulate the unpredictability of combat. Following their example, the Army should deliberately equip leaders during field exercises and wargames with hallucinatory models. Those leaders should be evaluated on their ability to recognize when the battlefield imagined by their artificial intelligence–enabled command-and-control platforms and the battlefield they see before them differ. And when training checklists require that for a unit to be fully certified in a task it must perform that task under dynamic, degraded conditions, “degraded” must come to include hallucinatory or inoperable artificial intelligence.
Even then, Army leaders must never forget what Tolstoy teaches us: that command is a contingent, human endeavor. Battles often present idiosyncratic problems of their own, liable to defy patterns. Well-trained young leaders’ proximity to those problems is an asset rather than a liability. Because of that proximity they can spot on the battlefield infinitesimally small things that great data ingests cannot capture. A philosophy of mission command, however fickle and at times frustrating, best accommodates the insights that arise from that proximity. Only then can the Army see war’s Tolstoyan infinitesimals through the gun smoke and have any hope of integrating them.
Theo Lipsky is an active duty US Army captain. He is currently assigned as an instructor to the Department of Social Sciences at the US Military Academy at West Point. He holds a master of public administration from Columbia University’s School of International and Public Affairs and a bachelor of science from the US Military Academy. His writing can be found at theolipsky.substack.com.
The views expressed are those of the author and do not reflect the official position of the United States Military Academy, Department of the Army, or Department of Defense.
Image credit: Sgt. Zoe Morris, US Army