Measuring Military Output: Deterrence, Readiness, and What Actually Matters
January 15, 2026
Key Narrative
The defense policy conversation is dominated by input metrics: how much we spend, how many ships we field, how many people we employ. But inputs tell us little about what matters: can our military deter adversaries and, if deterrence fails, win? This post argues for output-oriented metrics and explores what they might look like.
The central tension: deterrence is impossible to measure directly (you can’t observe wars that didn’t happen), yet it’s the primary purpose of peacetime military spending. Readiness—the ability to deploy and fight on short notice—is measurable but imperfect. Victory in conflict is the ultimate test but comes too late as a feedback mechanism.
Outline
I. Introduction: The Input Fallacy
- Defense debates focus on budgets, platforms, headcounts
- These are inputs, not outputs
- Analogy to business: revenue vs. customer value created
- The question we should ask: what is the military for?
II. The Primary Outputs of Military Power
- Deterrence: Preventing conflicts through credible threat
- Nuclear deterrence (relatively clear)
- Conventional deterrence (murky)
- Extended deterrence to allies
- Warfighting capability: Winning if deterrence fails
- Coercive diplomacy: Shaping behavior short of war
- Reassurance: Calming allies, preventing proliferation
III. The Measurement Problem
- Deterrence success is invisible (counterfactual)
- Readiness metrics exist but are gamed
- Combat performance is the ultimate test—but delayed feedback
- Political incentives favor visible inputs over intangible outputs
IV. Candidate Output Metrics
- Readiness indicators (a computation sketch follows this list)
- Time to deploy X capability to Y region
- Mission-capable rates (with caveats)
- Training quality assessments
- Wargame performance
- Limitations and gaming concerns
- Value as stress tests
- Adversary behavior
- Changes in opponent posture, exercises, investments
- Intelligence assessments of adversary perceptions
- Expert assessments
- Net assessments, competitive analysis
- Delphi-method forecasting
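To ground the readiness and expert-assessment bullets above, here is a minimal sketch in Python of how a few candidate metrics could be computed. Every field name, threshold, and figure in it is hypothetical and purely illustrative; the mission-capable-rate formula (mission-capable hours divided by possessed hours) follows common aviation usage, and the Delphi summary simply reports the median and interquartile range of one round of expert estimates.

```python
from dataclasses import dataclass
from statistics import median, quantiles

@dataclass
class FleetStatus:
    """Hypothetical snapshot of one fleet's availability data."""
    name: str
    possessed_hours: float        # total hours airframes/hulls were on hand
    mission_capable_hours: float  # hours able to perform at least one assigned mission

def mission_capable_rate(fleet: FleetStatus) -> float:
    """MC rate = mission-capable hours / possessed hours (one common definition)."""
    return fleet.mission_capable_hours / fleet.possessed_hours

def deploy_gap_days(observed_days: float, required_days: float) -> float:
    """Positive values mean the force is slower than the stated requirement."""
    return observed_days - required_days

def delphi_summary(estimates: list[float]) -> dict[str, float]:
    """Summarize one Delphi round of expert probability estimates:
    the median and interquartile range fed back to panelists."""
    q1, _, q3 = quantiles(estimates, n=4)
    return {"median": median(estimates), "iqr": q3 - q1}

if __name__ == "__main__":
    fleet = FleetStatus("notional strike fighter fleet",
                        possessed_hours=10_000, mission_capable_hours=6_200)
    print(f"MC rate: {mission_capable_rate(fleet):.0%}")
    print(f"Deploy gap: {deploy_gap_days(observed_days=21, required_days=14):+.0f} days")
    print(delphi_summary([0.15, 0.25, 0.30, 0.40, 0.10, 0.35]))
```

The arithmetic is trivial on purpose: each number becomes decision-useful only when paired with a stated requirement (the required_days threshold here) and an honest account of how the underlying inputs were reported.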
V. A Framework for Output-Oriented Thinking
- Define strategic objectives clearly
- Map capabilities to objectives (sketched after this list)
- Identify leading indicators of effectiveness
- Build feedback loops (exercises, wargames, red teams)
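As a rough illustration of the second and third bullets, the sketch below shows one way the objective-to-capability-to-indicator mapping could be written down explicitly. The objective, capability, and indicator names are invented for illustration and carry no official standing.

```python
from dataclasses import dataclass, field

@dataclass
class Indicator:
    """A leading indicator with its current value and the threshold that counts as on track."""
    name: str
    value: float
    threshold: float
    higher_is_better: bool = True

    def on_track(self) -> bool:
        return self.value >= self.threshold if self.higher_is_better else self.value <= self.threshold

@dataclass
class Objective:
    """One strategic objective mapped to the capabilities and indicators meant to serve it."""
    name: str
    capabilities: list[str]
    indicators: list[Indicator] = field(default_factory=list)

    def score(self) -> float:
        """Fraction of leading indicators currently on track (a deliberately crude roll-up)."""
        if not self.indicators:
            return 0.0
        return sum(i.on_track() for i in self.indicators) / len(self.indicators)

# Entirely notional example of the objective -> capability -> indicator chain.
objective = Objective(
    name="Deter short-warning aggression in region X",
    capabilities=["rapid airlift", "prepositioned stocks", "ISR coverage"],
    indicators=[
        Indicator("days to close a brigade", value=18, threshold=14, higher_is_better=False),
        Indicator("exercise objectives met", value=0.8, threshold=0.75),
    ],
)
print(f"{objective.name}: {objective.score():.0%} of indicators on track")
```

The value is not in the roll-up score but in forcing each objective to name its capabilities and indicators explicitly, so the exercises, wargames, and red teams in the final bullet have something concrete to falsify.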
VI. Implications for Policy
- Shift from “how much” to “how effective”
- Incentivize readiness over acquisition
- Invest in measurement and assessment capabilities
- Accept uncertainty but demand rigor
VII. Conclusion
- Outputs are harder to measure but more important
- The goal is decision-useful information, not precision
- A military that can articulate its outputs is more likely to achieve them
Suggested Sources
Academic & Policy Research
- Stephen Biddle, Military Power: Explaining Victory and Defeat in Modern Battle (2004)
- RAND Corporation reports on readiness and force assessment
- Congressional Budget Office analyses of defense spending
- Michael O’Hanlon, The Science of War (2009)
Historical Case Studies
- Andrew Krepinevich, The Army and Vietnam (on measurement failure)
- Eliot Cohen and John Gooch, Military Misfortunes (on organizational learning)
- Thomas Schelling, Arms and Influence (classic on deterrence)
Data Sources
- GAO reports on readiness and maintenance
- DOD Selected Acquisition Reports
- IISS Military Balance (comparative data)
Contemporary Analysis
- War on the Rocks (essays on defense policy)
- Acquisition Research Program at Naval Postgraduate School
- Center for Strategic and International Studies defense analyses