Friday, August 12, 2022

Scaling Down: Preparing for Smaller Wars

In January 1950, President Harry Truman asked the Department of State and the Department of Defense to jointly compose a document setting out U.S. diplomatic and military objectives. In April, he received the report, a top-secret document titled NSC-68.

The document remained classified until 1975 but is now available to the reading public. It shaped much of American strategic and geopolitical thought throughout the 1950s and 1960s, addressing both strategy and ideology.

NSC-68 also included references to the nation’s founding texts from the 1700s, including the Declaration of Independence, the Constitution, the Bill of Rights, and the Federalist Papers.

The report’s authors took care to distinguish between massive wars of annihilation on a global scale, on the one hand, and smaller regional conflicts on the other:

The mischief may be a global war or it may be a Soviet campaign for limited objectives. In either case we should take no avoidable initiative which would cause it to become a war of annihilation, and if we have the forces to defeat a Soviet drive for limited objectives it may well be to our interest not to let it become a global war.

It was therefore incumbent upon the United States military establishment to be prepared for both types of conflict. But the U.S. military in 1950 was not ready, as historian Russell Weigley writes:

NSC-68 suggested a danger of limited war, of Communist military adventures designed not to annihilate the West but merely to expand the periphery of the Communist domains, limited enough that an American riposte of atomic annihilation would be disproportionate in both morality and expediency. To retaliate against a Communist military initiative on any but an atomic scale, the American armed forces in 1950 were ill equipped. Ten understrength Army divisions and eleven regimental combat teams, 671 Navy ships, two understrength Marine Corps divisions, and forty-eight Air Force wings (the buildup not yet having reached the old figure of fifty-five) were stretched thinly around the world.

It would not have been fitting to respond, for example, to the Soviet blockade of Berlin by unleashing America’s atomic arsenal. Although some military strategists in the late 1940s saw the atomic bomb as the answer to nearly any tactical question, it was becoming clear that America needed a full conventional force as well.

The Air Force atomic striking force, embodied now in eighteen wings of the Strategic Air Command, was the only American military organization possessing a formidable instant readiness capacity. So much did Americans, including the government, succeed in convincing themselves that the atomic bomb was a sovereign remedy for all military ailments, so ingrained was the American habit of thinking of war in terms of annihilative victories, that occasional warnings of limited war went more than unheeded, and people, government, and much of the military could scarcely conceive of a Communist military thrust of lesser dimensions than World War III.

So it happened that in June 1950, when North Korea attacked South Korea, the United States possessed a large nuclear arsenal but a barely serviceable infantry. The United States was prepared for global atomic war, but the Soviets chose smaller proxy wars (Korea, Vietnam) and even smaller military operations to quell uprisings (East Berlin 1953, Hungary 1956, Prague 1968).

America’s brief romance with the atomic bomb was over. By the mid-1950s, it was clear that the United States needed a full conventional force alongside its nuclear arsenal.

This required a scramble to make up for the late 1940s, years during which the conventional forces had been allowed to languish. The United States entered the Korean War with an Army that was underfunded and undersized.

In the postwar decades, the United States needed both a strategic nuclear force and sufficient conventional forces in the traditional Army, Navy, Air Force, and Marines.

Wednesday, August 3, 2022

The Best President Ever?

Every few years, journalists assemble a group of historians or political scientists and ask them to sort through the presidents of the United States: to name the top ten or the bottom ten, to rank all of them from best to worst, or to select the single best or worst president ever.

Such efforts are sometimes interesting, but in the end, they are meaningless.

These processes are hopelessly subjective, revealing, at most, the personal preferences and partialities of the researchers involved. Because such surveys have been conducted for decades, one can trace their contradictory results, which expose how unconfirmable and unverifiable the rankings are.

Writing in 2012, Robert Merry traced the flip-flops and reversals of such surveys:

Consider Dwight Eisenhower, initially pegged by historians as a mediocre two-termer. In 1962, a year after Ike relinquished the presidency, a poll by Harvard’s Arthur Schlesinger Sr. ranked him 22nd — between Chester A. Arthur, largely a presidential nonentity, and Andrew Johnson, impeached by the House and nearly convicted by the Senate. Republicans were outraged; Democrats laughed. By the time a 1981 poll was taken, however, Eisenhower had moved up to 12th. The following year he was ninth. In three subsequent polls he came in 11th, 10th and eighth.

The academics did a similar about-face on another famous president:

Academics initially slammed Reagan, as they had Eisenhower. One survey of 750 historians taken between 1988 and 1990 ranked him as “below average.” A 1996 poll ranked him at 25th, between George H.W. Bush, the one-termer who succeeded him, and that selfsame Chester Arthur. Reagan’s standing is now on the rise.

If the search for the “best ever” president, or even the “top ten” presidents, is an empty pursuit, can scholars give more meaningful results? Perhaps: while it is meaningless to say that Calvin Coolidge is a “good” or “bad” president, it is meaningful to say that he lowered taxes, lowered the national debt, and reduced the federal government’s spending. Such statements are verifiable and quantifiable.

Historians can give us meaningful data when they research specific and measurable details about a president, instead of merely trying to assign him a relative rank as “better than” or “worse than” some other president.

It is observable, and therefore meaningful, that President Polk’s management of the Mexican-American War influenced the presidential elections that followed the war’s end in 1848.

Such observations are not only more reliable and objective; they also protect scholars from ending up with the proverbial egg on their faces after declaring some president “good” or “bad” and then facing stiff opposition to that judgment. One example of academics hastily praising a president, only to retract such glowing evaluations slowly, is the case of Woodrow Wilson.

Wilson’s high marks from historians belie the fact that voters in 1920 delivered to his party one of the starkest repudiations ever visited upon an incumbent party. Similarly, historians consistently mark Harding as a failure, though he presided over remarkably good times and was very popular.

Just as scholars revised their estimates of Eisenhower and Reagan upward, so now they are reconsidering Harding in a more favorable light. Wilson’s reputation, meanwhile, has declined.

In sum, it is more important to gather data about a president than to evaluate him.

Writing about a president should emphasize, not general impressions, but rather observable, measurable, verifiable, and quantifiable data. That’s how serious historians work. Reports about presidents should be full of dates, places, specific actions, and the names of other individuals with whom that president interacted.

Such a method would lead to the “best ever” texts about presidents!