Saturday, December 31, 2022

Reasons To Be Cheerful — Part 1

The nature of human communication includes a temptation to focus on bad news. This is not a recent development, fostered by the internet and cable TV. Thousands of years ago, it was already known that bad news travels fast.

The effort to conscientiously focus on good news is a mental discipline which will reward the person who practices it. Charles Calomiris reported the following in December 2022 in the Wall Street Journal:

The percentage of people living in poverty fell from 32% in 1947 to 15% in 1967 to only 1.1% in 2017. Opportunities created by economic growth, and government-sponsored social programs funded by that growth, produced broadly shared prosperity: 94% of households in 2017 would have been at least as well off as the top quintile in 1967. Bottom-quintile households enjoy the same living standards as middle-quintile households, and on a per capita basis the bottom quintile has a 3% higher income. Top-quintile households receive income equal to roughly four times the bottom (and only 2.2 times the lowest on a per capita basis), not the 16.7 proportion popularly reported.

“Real income of the bottom quintile,” Calomiris adds, “grew more than 681% from 1967 to 2017.” He concludes: “Average living standards have improved dramatically.”

If these data seem unfamiliar, it is because of the principle noted above: the media, left unchecked, tend to focus on bad news. The reader who is regularly exposed to the typical news media will have been so bombarded with negative reports that good news will seem counterintuitive.

Readers may even have developed an automatic skepticism about any good news. Yet pleasant developments do, in reality, take place.

What does this all mean? That in the United States, wage-earners in all categories have experienced increases in their standards of living, and that those in the lowest categories are catching up to the middle and upper classes.

While a form of income inequality does appear if one measures pre-tax earned income, the situation looks quite different if one measures post-tax income from all sources: those earning larger incomes pay a larger percentage of their income in taxes, and those earning smaller incomes receive a larger share of unearned income.

Calomiris continues:

The equality of consumption between the bottom quintile (in which only 36% of prime-age persons work) and the middle quintile (in which 92% of prime-age persons work) is a striking finding.

The savvy reader will be aware of the news media’s tendency to amplify or invent some types of problems. Also worth noting is a tendency to downplay or ignore other types of problems.

Tuesday, December 13, 2022

When the Budget Decides the War: U.S. Defense Spending and the Korean Conflict

The second half of 1949 and the first half of 1950 formed a traumatic twelve-month period for global peace and diplomacy. In late 1949, two events shook the world: the communists took over China, and the Soviet Socialists used their espionage network inside the United States to steal the intellectual property needed to build an atomic bomb. In early 1950, a select group of leaders inside the U.S. government received a secret document, titled NSC-68, which unsettled them with its revelations and evaluations of the world military scene. Finally, in June 1950, North Korea, backed by both communist China and the Soviet Socialists, made a surprise attack on South Korea, starting a war which would eventually kill millions of people.

Prior to 1949, there was a seemingly reasonable hope that the world would be able to experience a time of protracted peace. To be sure, the Cold War tensions between the USSR and the western Allies were real and detectable, but America’s monopoly on atomic weapons was assumed to be the trump card which would prevent massive Soviet aggression.

Anticipating peace, the U.S. had begun dismantling its military.

The addition of China to the communist bloc also strengthened the Soviet position. Sino-Soviet relations would remain strong for several years after 1949. In the mid-1950s, those relations would cool.

It was not obvious that the Soviet Socialists were using their spy network inside the U.S. to gain nuclear weapons technology. When they exploded an atomic bomb in a weapons test in August 1949, and when U.S. intelligence agencies confirmed the event in September 1949, the global balance of power shifted. The USSR was emboldened, and used ruthless military power to crush uprisings of freedom fighters in Berlin in June 1953, in Hungary in 1956, and in Prague in 1968.

In light of these events, President Truman requested that the National Security Council (NSC) write a comprehensive document, detailing the global military situation. The report, titled NSC-68, analyzed the state of the world, projected possible future scenarios, and advised steps which the U.S. could take in order to be ready for those scenarios. A small number of officials within the U.S. government read the text, written jointly by members of the Department of State and the Department of Defense; the text disturbed the readers: their hopes for a few quiet and peaceful years were dashed.

Instead of dismantling its military, the U.S. was forced by circumstances to build it up.

Based on NSC-68, the Joint Chiefs of Staff (JCS) developed a scenario called Joint Outline War Plan Reaper. This was in essence a plan for World War III, and it would go into effect if and when the Soviets attacked. The consensus among the military leaders was that the Soviets would probably attack in Europe. The war in Korea was considered to be a “secondary” theater, as historian William Donnelly writes:

In September 1950, the JCS made their recommendations for a military buildup based on NSC 68. The active Army would expand from ten to eighteen divisions by Fiscal Year 1952, with its active duty personnel strength increasing from 593,526 to 1,567,000. With the end of the Korean War expected in 1951, the active duty personnel strength would fall to 1,355,000 by Fiscal Year 1954, but the number of active divisions would remain at eighteen so that the Army could meet the demands of Joint Outline War Plan Reaper.

Although acknowledging the importance of the Pacific Rim, the plan anticipated a major Soviet offensive, crossing what was then the border between East Germany and West Germany. NATO and U.S. forces calculated that the Soviets would have the advantage in the first few days of the war, so the strategy was to let the USSR extend itself as far as the Rhine (Rhein). At that point, the western powers would have organized a defensive line. The Soviets would have spent their initial energy and would face longer supply lines across unfamiliar territory.

Like all war plans, Reaper was a collection of hypotheticals, as historian William Donnelly explains:

The Joint Outline War Plan was the JCS plan for World War III with the Soviet Union, and Reaper was the first version of the plan prepared in light of NSC 68’s recommendations. Reaper called for ten Army divisions stationed in the Zone of the Interior (ZI — the contemporary term for the continental United States), four in Japan, and four in Europe at the start of the war. Like previous Joint Outline War Plans, Reaper foresaw that the initial Soviet advantage in ground forces meant that the Air Force and the Navy would play the dominant roles in early operations. The ten Army divisions and their supporting units in the ZI would form the General Reserve, portions of which could be deployed overseas early in the war, but whose most important function would be to serve as the cadre for a massive expansion of the Army. The four divisions in Japan would defend it from a Soviet invasion while the four divisions in Europe, along with their allies in the North Atlantic Treaty Organization (NATO), would conduct a delaying operation that, in conjunction with air attacks, would halt the Soviet Army along the Rhine. The U.S. Army, drawing on the resources made available by a World War II-type national mobilization, would expand and then launch a second crusade in Europe, ending the war after four years with a Soviet surrender and a force of eighty divisions.

The implications of War Plan Reaper and the assumptions and attitudes which shaped it were these: The Korean conflict would receive limited resources, and the U.S. would need to engage in substantial defense spending and a military buildup.

Unlike WW2, in which the military’s actual combat operations held the highest priority, the Korean conflict was supplied, manned, and funded around preparations for a speculative war plan. The military units in Korea, where actual fighting was happening in real time, received less funding, because resources were being diverted to a buildup in Europe and in the continental United States.

Not only did supply shortages and a “manpower dilemma” (Donnelly’s phrase) impact the field effectiveness of the U.S. Army in Korea, but morale also understandably suffered. Many of the soldiers in Korea were conscripts who stayed no longer than they were required to do so, while the army sent more experienced soldiers and officers to Europe.

Morale deteriorated further when it became clear that the goal for the NATO and United Nations (UN) coalition, including the United States, was an armistice or ceasefire, not a victory.

Budgetary considerations had a major influence on both the strategies and the tactics of the coalition supporting South Korea in war.

Wednesday, December 7, 2022

Facing the Greatest Danger Without the Greatest Resources: Balancing the Korean Conflict with the Global Cold War

Following WW2, the United States began to reduce the size of its armed forces. The number of men in the military was reduced. Spending on research and development for new weapons was reduced. Procurement of current weapon systems was reduced. The overall budget for defense spending was reduced.

The war was over. The principal enemies, Japan and Germany, had been thoroughly smashed and were occupied by Allied troops. The United States alone possessed the technology to manufacture and use atomic weapons, giving it an unsurpassable advantage over any competing nation.

America felt secure. There was no need for large military spending or for a large and well-equipped army.

So, from the time that WW2 ended in late 1945 — a ceasefire took place in August, and both sides signed the final surrender papers in September — the United States optimistically anticipated a time of peace. There were no obvious threats of major military action on the horizon, so disassembling the U.S. military seemed like a sensible thing to do.

Four events would shatter this calm.

First, the USSR obtained from its network of espionage agents the American technology needed to assemble its own nuclear weapons. In late August 1949, the Soviet Socialists conducted a test, exploding for the first time their own atomic bomb. By early September, the U.S. intelligence agencies confirmed this reality. Suddenly, the U.S. was not the only nation on earth possessing nuclear weapons. This changed the balance of power suddenly and dramatically. The Soviet Socialists no longer needed to restrain themselves in their plans to take over and oppress other smaller nations.

Second, in late 1949, the Communist Party won the Chinese Civil War. This war had started in 1927, dragging on for many years, and had paused during WW2. A few of the freedom fighters who had resisted the Communists fled to the island of Formosa, and set up their own small country, called Taiwan or “free China.” Communist China, or “mainland China,” allied itself closely with the Soviet Socialists during the first several years of its existence.

Third, in January 1950, President Truman asked the National Security Council (NSC) to compose a report about the world’s geopolitical situation. The document, known as NSC-68, was kept secret until it was declassified in 1975. It alarmed the few leaders who had permission to read it. It persuaded readers that, instead of dismantling the military, the U.S. needed to be ready to face major threats.

Finally, in June 1950, North Korea, with substantial support from Communist China, and a smaller amount of support from the Soviet Socialists, launched a surprise attack on South Korea. This began the Korean War, which would ultimately cost the lives of more than a million human beings. Although the majority of military support for North Korea came from China, the Soviet Socialists led the political and strategic impulse behind the war. The procurement of nuclear weapons emboldened the USSR, where Stalin still ruled. The global counterforce was the North Atlantic Treaty Organization (NATO), created in 1949: a collective mutual security system, originally an alliance of twelve nations which has since grown to thirty. These nations pledged to help defend each other, and the major threat was obviously the Soviet Socialists.

The globe was not as safe as had been hoped, as historian William Donnelly writes:

President Truman and his senior advisors quickly concluded that the North Korean invasion on 25 June 1950 demonstrated that the Soviet Union was beginning to take greater risks as predicted by NSC 68. American intelligence estimates stated that the Soviets were not likely to initiate a general war until they had built up conventional and nuclear forces to the point where they could be confident of overrunning Western Europe and deterring an American nuclear response. NSC 68 had warned that 1954 would be the year of maximum danger of a general war. Preventing that war decisively colored the U.S. response to the invasion of South Korea. North Korea’s aggression had to be repulsed lest it encourage further local attacks, but the United States would limit its military commitments on the peninsula in case the attack actually was a Soviet effort to weaken America’s ability to defend the crucial areas of Western Europe and Japan. American leaders decided that the United States would avoid a wider war in Asia, undertake a massive buildup of conventional and nuclear forces to defend crucial areas, use much of that buildup to create a credible conventional defense in Europe, supply its allies with large amounts of military aid, and do all this by 1954 without causing irreparable harm to the American economy.

The thinking about defense spending changed significantly between 1948 and 1951. Although thinking can change quickly, physical realities change more slowly. The events of 1949 and 1950 were shocking, and there was a lag between those events and the implementation of plans for a military buildup.

One of the implications of the situation was that the nations supporting South Korea — which included several NATO countries as well as several United Nations countries — fought the Korean War “on the cheap.” Many of these nations were still recovering from WW2, both economically and in terms of physical infrastructure. They were in no position to mount a massive war effort.

The limited military and fiscal resources had not only to support a war in Korea, but also to develop a global defense system at the same time. Massive amounts of money were required for the research and development of missiles, jet airplanes, and nuclear weapons, as well as for the usual conventional forces.

There wasn’t a lot of money left over to fund the Korean War.

Not only was there a lack of money for equipment, supplies, research, and development, but there was also, in the words of William Donnelly, a “manpower dilemma.” Soldiers must not only be paid, but also trained, clothed, fed, and sheltered.

The army was experiencing a “massive expansion,” but given the tasks it faced, the need for men was still greater than the supply. This was especially so regarding leadership positions like non-commissioned officers (NCOs). There was a large supply of enlisted men, given the realities of conscription. But draftees remain only as long as they must, and so there was a high rate of turnover among foot soldiers, making leadership even more important. Yet it was precisely among NCO ranks that there was a manpower shortage.

The United States fought the Korean War on a shoestring budget.

Manpower shortages, high rates of turnover, and a high percentage of draftees among the soldiers led to morale problems. Also detracting from morale was the fact that top-level leadership had decided to treat Korea as a “secondary theater,” with Europe still seen as the likely place for a face-to-face confrontation with the Soviets. A further dampener was the selection of armistice, rather than victory, as the goal in Korea: this was hardly inspiring to already-skeptical draftees who didn’t want to be in the army in the first place.

Friday, August 12, 2022

Scaling Down: Preparing for Smaller Wars

In January 1950, President Harry Truman requested the Department of State and the Department of Defense to jointly compose a document regarding U.S. objectives in both diplomatic and military concerns. In April, he received the report, a top-secret document titled NSC-68.

This document remained classified until 1975, but is now available to the reading public. It shaped much of American strategic and geopolitical thought throughout the 1950s and 1960s. It addressed both strategy and ideology.

NSC-68 also included references to the nation’s founding texts from the 1700s, including the Declaration of Independence, the Constitution, the Bill of Rights, and the Federalist Papers.

The report’s authors were concerned to distinguish between, on the one hand, massive wars of annihilation on a global scale, and on the other hand, smaller regional conflicts:

The mischief may be a global war or it may be a Soviet campaign for limited objectives. In either case we should take no avoidable initiative which would cause it to become a war of annihilation, and if we have the forces to defeat a Soviet drive for limited objectives it may well be to our interest not to let it become a global war.

It was therefore incumbent upon the United States military establishment to be prepared for both types of conflict. But the U.S. military in 1950 was not ready, as author Russell Weigley writes:

NSC-68 suggested a danger of limited war, of Communist military adventures designed not to annihilate the West but merely to expand the periphery of the Communist domains, limited enough that an American riposte of atomic annihilation would be disproportionate in both morality and expediency. To retaliate against a Communist military initiative on any but an atomic scale, the American armed forces in 1950 were ill equipped. Ten understrength Army divisions and eleven regimental combat teams, 671 Navy ships, two understrength Marine Corps divisions, and forty-eight Air Force wings (the buildup not yet having reached the old figure of fifty-five) were stretched thinly around the world.

It would not be fitting to respond, e.g., to the Soviet blockade of Berlin by unleashing America’s arsenal. Although some military strategists in the late 1940s saw the atomic bomb as the answer to nearly any tactical question, it was now becoming clear that America should have a full conventional force as well.

The Air Force atomic striking force, embodied now in eighteen wings of the Strategic Air Command, was the only American military organization possessing a formidable instant readiness capacity. So much did Americans, including the government, succeed in convincing themselves that the atomic bomb was a sovereign remedy for all military ailments, so ingrained was the American habit of thinking of war in terms of annihilative victories, that occasional warnings of limited war went more than unheeded, and people, government, and much of the military could scarcely conceive of a Communist military thrust of lesser dimensions than World War III.

So it happened, then, that in June 1950, when North Korea attacked South Korea, the United States was in possession of a large nuclear arsenal, but a barely serviceable — if at all serviceable — infantry. The United States was prepared for global atomic war, but the Soviet Socialists chose smaller proxy wars — Korea, Vietnam — and even smaller military maneuvers to quell uprisings — Berlin 1953, Hungary 1956, Prague 1968.

America’s brief romance with the atomic bomb was over. By the mid-1950s, it was clear that the United States needed a full conventional force alongside its nuclear arsenal.

This would require a bit of a scramble to make up for the years in the late 1940s during which the conventional forces were allowed to languish. The U.S. Army that fought the Korean War was underfunded and undersized.

In the postwar decades, the United States needed to have both a strategic nuclear force as well as sufficient conventional forces in the traditional Army, Navy, Air Force, and Marines.

Wednesday, August 3, 2022

The Best President Ever?

On a regular basis, every few years, journalists will assemble a group of historians or political scientists and ask them to sort through the presidents of the United States, and come up with a list of the top ten, or the bottom ten, or to rank all of them from best to worst, or to select the single best-ever, or worst-ever, president.

Such efforts are sometimes interesting, but in the end, they are meaningless.

These processes are hopelessly subjective, and reveal, at most, the personal preferences and partialities of the researchers involved. Because these types of surveys have been going on for years, one can trace their contradictory results, which expose how unverifiable and unrepeatable such rankings are.

Writing in 2012, Robert Merry traced the flip-flops and reversals of such surveys:

Consider Dwight Eisenhower, initially pegged by historians as a mediocre two-termer. In 1962, a year after Ike relinquished the presidency, a poll by Harvard’s Arthur Schlesinger Sr. ranked him 22nd — between Chester A. Arthur, largely a presidential nonentity, and Andrew Johnson, impeached by the House and nearly convicted by the Senate. Republicans were outraged; Democrats laughed. By the time a 1981 poll was taken, however, Eisenhower had moved up to 12th. The following year he was ninth. In three subsequent polls he came in 11th, 10th and eighth.

The academics did a similar about-face on another famous president:

Academics initially slammed Reagan, as they had Eisenhower. One survey of 750 historians taken between 1988 and 1990 ranked him as “below average.” A 1996 poll ranked him at 25th, between George H.W. Bush, the one-termer who succeeded him, and that selfsame Chester Arthur. Reagan's standing is now on the rise.

If the search for the “best ever” president, or even the “top ten” presidents, is an empty pursuit, can scholars give more meaningful results? Perhaps: while it is meaningless to say that Calvin Coolidge is a “good” or “bad” president, it is meaningful to say that he lowered taxes, lowered the national debt, and reduced the federal government’s spending. Such statements are verifiable and quantifiable.

Historians can give us meaningful data when they research specific and measurable details about a president, instead of merely trying to assign him a relative rank as “better than” or “worse than” some other president.

It is observable, and therefore material, that President Polk’s management of the Mexican-American War affected presidential elections after the war’s end in 1848.

Such observations are not only more reliable and objective, but also protect scholars from ending up with the proverbial “egg on the face” of declaring some president to be “good” or “bad” and then finding themselves facing stiff opposition to such judgments. One example of academics hastily praising a president, only to find themselves slowly retracting such glowing evaluations, is the case of Woodrow Wilson.

Wilson’s high marks from historians belie the fact that voters in 1920 delivered to his party one of the starkest repudiations ever visited upon an incumbent party. Similarly, historians consistently mark Harding as a failure, though he presided over remarkably good times and was very popular.

Exactly as scholars revised their estimates of Eisenhower and Reagan upward, so now they are reconsidering Harding in a more favorable light. Wilson’s reputation, meanwhile, has descended.

In sum, it is more important to gather data about a president than to evaluate him.

Writing about a president should emphasize, not general impressions, but rather observable, measurable, verifiable, and quantifiable data. That’s how serious historians work. Reports about presidents should be full of dates, places, specific actions, and the names of other individuals with whom that president interacted.

Such a method would lead to the “best ever” texts about presidents!

Tuesday, July 26, 2022

The Role of Nationalism in History: Unclear

Historians, politicians, and news media use the word ‘nationalism’ frequently. Although the term is often used with passion, its exact meaning is frequently unclear. One reason for this ambiguity is that, as the designation is used, it has more than one meaning.

There are at least two distinct and mutually exclusive definitions of ‘nationalism,’ and this ambiguity is responsible for misunderstandings, disagreements, and quarrels.

On the one hand, ‘nationalism’ can refer to a malignant ideology: a value system in which the power and growth of the nation-state are the ultimate goals, transcending all other potential morals. To understand this malign type of nationalism, the reader must first understand what a nation-state is.

A “nation” is a group of people who have a shared cultural identity — an ethnic group. This mutuality often relates to a shared language, history, or religion, as well as other aspects of culture: food, clothing, holidays, and the arts — music, architecture, literature, etc.

A “state” is a geographical territory with an independent and sovereign government: a piece of land with its own ruling system.

A “nation-state” is when a nation and a state are coextensive. There are nations which are not states, and there are states which are not nations. But in some cases, the state is the nation, and the nation is the state: the two are identical.

When an individual embraces the malevolent form of nationalism, the nation-state becomes the highest goal and value for this person. In such a case, all other potential values are demoted to lower rankings. The practical effect of this system is that the individual will sacrifice anything if by so sacrificing, the nation-state is strengthened.

Because this malicious type of nationalism demands that the nation-state is the ultimate value, it stands opposed to any other value which people might ordinarily cite as an ultimate value: family, friends, duty, honor, God, faith, religion, art, etc. Therefore it is impossible for a person who embraces this dangerous form of nationalism to accomplish the true duties of friendship, family, religious faith, etc., because such a person will ultimately be required to oppose those things when the needs and desires of the nation-state demand such opposition.

This harmful type of nationalism can even lead to wars and to cruelty. It can lead the nation-state to violate the human rights and civil rights of both its own citizens and citizens of other nation-states.

On the other hand, there is a benign and beneficial type of nationalism which is akin to a healthy patriotism. This form of nationalism enables the individual to appreciate and celebrate the culture and accomplishments of her or his nation-state. This kind of nationalism is an affection for one’s own nation-state. Importantly, this sort of nationalism does not oppose, but rather even requires, a respect and even an affection for other nation-states. It is impossible to truly respect one’s own nation-state without also respecting other nation-states.

This wholesome type of nationalism creates unity as together the citizens of the nation-state honor and work toward the maintenance of their nation’s culture. This cheerful variety of nationalism is edifying because it seeks to build the nation, and is peaceful, because, being constructive, it must necessarily oppose war, which is essentially destructive.

Such a gladdening kind of nationalism encourages each individual to find self-respect and self-worth, because respect for one’s self and one’s nation are coextensive. Even in circumstances in which one might disagree with one’s government, one can still have affection for one’s nation. To hate one’s nation is indirectly but inevitably to hate one’s self; to hate one’s self will eventually lead one to hate one’s nation. If one opposes one’s government, one can do so out of fondness for the nation: one desires the best for one’s nation, and in some conditions, that could include adjustments to the government.

So the word ‘nationalism’ can refer to two different things — things which are not only different, but opposed to each other. It is inevitable that disagreements and misunderstandings will arise around this term, given its ambiguity.

To look then specifically at the United States, one must first ask whether the USA is a nation-state. It is in any case a state, but is it a nation? This is a debatable question. On the one hand, the extent of diversity among heritages, religions, spoken languages, and ethnic cultures might point to the conclusion that rather than being a nation, the United States is a collection of nations. On the other hand, one could argue that, since 1776, a diverse group of nations have built a common heritage which transcends the cultural backgrounds from which they came, and have thereby produced a new nation.

If one adopts the view that the USA is a patchwork of multiple nations, then one can say that what Americans created, fostered, nurtured, and celebrated since 1776 is a state. Rather than building an identity around a nation, according to this interpretation, Americans built an identity around ideas like liberty and equality. On this understanding, then, the United States would not have embraced nationalism, because there is no nation-state to be the centerpiece or raison d’etre for a nationalist ideology.

But if the USA isn’t a nation in this sense, and so can’t have nationalism in this sense, then a question arises about a mere state, a state which is not simultaneously a nation: what can it have as a source of encouragement for its people? For what can they have affection? What concept will show them their place among the other nations in the world?

If one is not a part of a nation-state, of what is one a part? What will substitute for the patriotism and esprit de corps which allow one to build diplomacy and alliances with other nations?

The options may not be appetizing, as historian Jill Lepore writes:

The United States, thought by some to have never known nationalism, was now said to be beyond nationalism. A politics of identity replaced a politics of nationality. In the end, they weren’t very different from each other. Nor did identity politics dedicate a new generation of intellectuals to the study of the nation or a new generation of Americans to a broader understanding of Americanism.

If the USA isn’t a nation-state, and as such can’t have a nationalism, then the space left empty by the absence of nationalism might be filled by a divisive and bitter “identity politics.” Nationalism provides a narrative. Identity politics provides no narrative, or rather provides only a narrative of sins and grievances: there is no forgiveness in the narratives provided by identity politics.

Jill Lepore goes on to say that if a healthy patriotism is absent, then identity politics will provide only a “history that can’t find a source of inspiration in the nation’s past and therefore can’t really plot a path forward to power.”

The confused and confusing discussions about nationalism arise because there are two different types of nationalism: On the one hand, there’s a violent and warlike nationalism which demands the supremacy of the nation-state. On the other hand, there’s a peaceful and diplomatic nationalism which teaches the individual to appreciate her or his own nation and have affection for it, which in turn leads to appreciation for other nations and a diplomatic desire for peace.

Clearly, the belligerent version of nationalism is to be avoided, but in the absence of the healthy patriotism which is the desirable form of nationalism, something quite dangerous can emerge.

Saturday, July 23, 2022

When Good Advice Is Ignored: Economic Policy During the Ford Administration

David Stockman was elected to the U.S. House of Representatives in November 1976 and took office in January 1977. Prior to that, he’d worked since 1970 as a congressional staffer. Stockman was first elected to office in the same election which ended the electoral career of President Gerald R. Ford.

Ford had become president upon the resignation of President Richard Nixon. Ford had been Nixon’s vice president. Before becoming vice president, Ford had been elected, and then many times re-elected to the House of Representatives. Both Ford and Stockman had spent most of their lives in Michigan, and had represented that state in Congress.

Stockman would go on to have a multi-faceted and famous career after the November 1976 election, ultimately serving as Director of the Office of Management and Budget from January 1981 to August 1985 in the administration of President Ronald Reagan.

The political dynamics which influenced monetary, fiscal, and budgetary policy during Stockman’s tenure in the Office of Management and Budget (OMB) were eye-opening to him, and ultimately ended his political career, and — some observers might suggest — turned him into a bit of a bitter cynic. Stockman learned that political interests — i.e., elected officials doing what they need to do in order to get reelected — will usually trump the prudent advice of serious economists.

In the case of the Reagan administration, a simple formula — cut taxes and cut government spending — seemed to gain widespread agreement and approval, until the time came when specific and real budget cuts had to be identified. At that point, legislators worked to make sure that their pet projects — “set asides” and “pork barrel” items — were not going to be cut. With each legislator defending some expenditure, no expenditure was left to be reduced.

The predictable result of taking the first step — tax cuts — without taking the second step — spending cuts — was an increasing deficit. Stockman knew that deficits are harmful, and the plan had been to avoid them. But the political reality was that no legislator would embrace a spending cut that affected his electoral base. The tax cuts during the Reagan administration fueled economic growth and wage increases for the working class, but in the long run, the national debt increased during those same years, laying the foundation for problems in the future.

In hindsight, Stockman saw that this bitter experience in the 1980s was foreshadowed by President Ford’s similar experience in the 1970s. Stockman writes:

After assuming the presidency in August 1974, Gerald Ford had started off on the right foot. As a fiscally orthodox Midwestern Republican, he had been frightened by the recent runaway inflation and repulsed by the insanity of the Nixon freeze and the ever-changing wage and price control rules and phases which followed. Ford had also been just plain embarrassed by Nixon’s five straight years of large budget deficits.

President Nixon had hoped to fix the inflation problem by regulation: wage and price controls. Retailers could not set the prices of the goods on their shelves: the government did. Employers could not decide how much to pay their workers: the government did. The result was shortages of consumer products and the development of numerous workarounds designed to help businesses sidestep the government regulations. The inflation problem continued unabated.

President Ford, when he took office, saw the error of Nixon's policies and determined not to repeat the folly of Nixon’s economic regulation and Nixon’s deficits.

Ford’s first approach was to try to be serious about disciplined budgeting and to eliminate wasteful spending. The government’s budget for any one year should be the amount needed, and no more, to carry out the legislated tasks given by Congress to the executive branch.

Stockman recalls Ford’s attempt to instill sobriety into the Congressional budgeting process:

So for a brief moment in the fall of 1974, he launched a campaign to get back to the basics. Ford proposed to jettison the notion that the budget was an economic policy tool, and demanded that Washington return to the sober business of responsibly managing the spending and revenue accounts of the federal government.

Because he understood that a balanced budget and the avoidance of deficits were among the primary responsibilities of the government, President Ford was even willing to consider a temporary tax increase. To achieve a responsible, balanced budget, spending cuts must always be the primary method, but occasionally tax increases must be employed as well. If managed correctly, and if the budget were brought under control, the temporary tax increases would ultimately give way to tax cuts in the future.

The trends and fashions of politics can change from decade to decade. Ford was a conservative midwestern Republican. In the 1970s, it was thinkable for him to advocate for a tax increase. In later decades, conservative midwestern Republicans would be known often for opposing tax increases. Stockman’s narrative continues:

To this end, he called for drastic spending cuts to keep the current-year budget under $300 billion. He also requested a 5 percent surtax on the incomes of corporations and more affluent households to staunch the flow of budget red ink. At that point in history Ford’s proposed tax increase was applauded by fiscal conservatives, and there was no supply-side chorus around to denounce it. In fact, Art Laffer had just vacated his position as an underling at OMB.

One of the changes which happened after the 1970s was that the government budget came to be viewed, in later decades, as a tool for repairing the economy. Changing the levels of taxation and spending was later seen as a way to tweak the national business climate. In the 1970s and in earlier decades, the government’s budget was understood simply as the funding for the government to carry out whichever tasks had been legislated as essential and necessary.

Stockman explains how Ford attempted to redirect the executive branch and the legislative branch toward a responsible budgeting process:

In attempting to get Washington off the fiscal stimulus drug, Ford was aided immeasurably by the fact that Schultz had vacated the Treasury Department and had been replaced by Bill Simon. The latter was from a wholly different kettle of fish.

Both Congress and the entire bureaucratic apparatus of the executive branch, however, had momentum in a direction different from President Ford’s goals. This momentum came from a decade of ever-increasing government spending, apathy about growing debt and deficits, and policies which viewed taxation as a way to steer the economy rather than a way to raise revenue to fund legitimate government tasks. This decade was the combined years of the Johnson administration and the Nixon administration. Although in many ways opposed to each other, Johnson and Nixon had similar economic views.

Rather than let a deregulated business environment find its way to an optimum and natural equilibrium, Johnson and Nixon had continuously tinkered with regulations, thinking that the government knew best.

The momentum of bad economic policies for a decade proved to be the brick wall against which Ford’s common sense would crash; Treasury Secretary Bill Simon fared no better:

Bill Simon’s militant crusade within the Ford administration for the old-time fiscal religion and unfettered free markets was consequently short-lived. To be sure, his advocacy was not the run-of-the-mill Republican bombast about private enterprise. In speeches and congressional testimony, Simon offered consistent, forceful, and intelligent opposition to all forms of federal market intervention designed to stimulate the general economy or boost particular sectors like housing, agriculture, and energy.

The vertical separation of powers — between city, county, state, and federal governments — provided a clear example. The federal government has no responsibility to pay for the bad decisions of city governments. Yet when President Ford made a principled stand against using taxpayer dollars to pay for a city government’s bad spending, his reasoning was not universally accepted, as David Stockman reports:

In famously telling New York City to “drop dead” in its request for federal money, Gerald Ford betrayed a fundamental sympathy with his treasury secretary’s approach to fiscal rectitude. Yet the economic wreckage left behind by the Nixon abominations soon overwhelmed Ford’s best intentions.

The mess left by Johnson and Nixon was too big to fix without pain, and too many people were not willing to accept the pain: not Congress, not the media, not the lobbyists, not the labor unions. While many businesses were willing to ride out the bumps of a free market to get back to a healthy economy, some businesses relied on cronyism-type relationships with the government. Instead of offering good products at competitive prices, such businesses relied on governmental regulations to tilt the marketplace in their favor: they did not want a truly free market.

Under political pressure, President Ford changed his policies, against his own better judgment. The new policies, accompanied by similarly designed legislation from Congress, were nearly the opposite of Ford’s original position: they moved away from a truly free market and toward a cronyism between a few businesses and the government, and they implemented tax cuts, not because the cuts were calibrated to the federal government’s obligations and responsibilities, but because they were part of a tactic for stimulating the economy, as David Stockman describes:

As the US economy weakened in the winter of 1974-1975, the Ford administration reversed direction at the urging of businessmen like OMB director Roy Ash and big business lobbies like the Committee for Economic Development. In the place of October’s tax surcharge to close the budget gap, Ford proposed in his January 1975 budget message that Congress enact a $16 billion tax cut, including a $12 billion rebate to households designed to encourage them to spend.

Although President Ford knew better, political realities forced him to allow a type of “crony capitalism” to gain ground against “free market capitalism.” The result was that, instead of the marketplace being a level playing field for free and fair competition between businesses, a few businesses were favored over all the others, leading to higher prices, shortages, rising unemployment, falling wages, and falling standards of living for the working class.

And although he knew better, Ford was pressured into allowing tax policy to be used as a way to attempt to tweak the economy, instead of being used in a straightforward way to gather the necessary revenue.

As usual, the legislators ignored budget cuts, which can reduce the deficit and fight inflation. Cutting taxes without reducing spending leads inevitably to economic downturns and leads necessarily to increased deficits and debt, which in turn fuel inflation and unemployment. While inflation may cause nominal wages to rise, it causes real wages to fall, and the standard of living to fall as well. David Stockman quantifies the situation:

At length, Congress upped the tax cut ante to $30 billion. It also completely ignored Ford’s plea to make compensating spending cuts of about $5 billion.

Tax cuts may seem like generosity from the Congress, but they are salutary only if spending cuts accompany them. The generosity is an illusion because, for the consumer, any benefit from the tax cuts will be negated by inflation, shortages, and falling real wages.

Political presentations about tax cuts or tax rebates can seem like generosity — but this is an illusion, because all of that money belonged to the citizens in the first place. A tax cut is not a gift; it is merely the government taking less from the people than it did the year before. By analogy, a victim of crime would not consider it ‘generosity’ if a burglar decided to steal only a computer instead of a computer and a smartphone.

A tax cut is not a gift from Santa Claus. It is merely the government’s decision to confiscate less wealth from people than it might have otherwise seized:

Then in the fall of that year (1975) the Ford White House escalated the tax stimulus battle further, proposing a tax reduction double the size of its January plan. This led to even more tax-cut largesse on Capitol Hill, a Ford veto, and then a final compromise tax bill on Christmas Eve 1975. Senate finance chairman Russell Long aptly described this final resolution as “putting Santa Claus back in his sleigh.”

Just as the 1974/1975 economic trouble was beginning to self-correct, the government’s efforts to fix it kicked in. These unnecessary measures simply drove the economy back away from equilibrium and caused more trouble:

Thus, the 1969-1972 cycle repeated itself: by the time the big Ford tax cut was enacted, the inventory liquidation had run its course and a natural rebound was under way. So the 1970s second round of fiscal stimulus was destined to fuel a renewed inflationary expansion, and this time it virtually blew the lid off the budget.

A single bad action simultaneously drove inflation and increased the deficit. Free-market thought, by contrast, would have the government stand down from any contemplated intervention and allow the organic forces of buying and selling to nudge the economy back toward equilibrium.

With the horizontal separation of powers, it is always questionable whether one should ascribe credit or blame to the President or to the Congress. In most cases, the glory or the shame is shared. President Ford could have resisted interventionist tendencies more. Congress could have done the same. Neither branch of government behaved perfectly. Ford’s instincts were probably better than Congress’s, and even if he’d done exactly the correct thing in each case, Congress still had the ability to override his veto.

Political factors having nothing to do with economics — mainly Nixon’s Watergate scandal — played into Congress’s ability to override Ford’s veto. The Watergate scandal had also brought candidates into Congress who had a significant tendency to spend — there being no direct link between Watergate and excessive spending, but such are the dynamics of electoral politics, as David Stockman depicts them:

Despite Ford’s resolute veto of some appropriations bills, his red pen was no match for the massive Democratic congressional majorities that had come in with the Watergate election of 1974. Their urge to spend would not be denied, as attested by the figures for budget outlays.

The problems which confronted President Ford between August 1974 and January 1977 can be traced back to a meeting in August 1971. President Nixon called the meeting at Camp David, and invited most of his important economic advisors. It was at this meeting that the concept of wage and price controls became a policy. The results were disastrous, and even after Nixon was gone, and his wage and price controls were no longer in effect, the ripple effects of that meeting continued to plague the economy:

Federal spending grew by 23 percent in fiscal 1975 and then by more than 10 percent in each of fiscal 1976 and 1977. All told, federal outlays reached $410 billion in the Ford administration’s outgoing budget, a figure nearly double the spending level in place six years earlier when Nixon hustled his advisors off to Camp David.

The history of Ford’s economic policies is in some ways a tragedy. President Ford was a lonely advocate for common-sense measures: reduce spending and deregulate the economy. He was defeated, not by economic theorists who had better equations and graphs, but by the political reality that Congressmen like to make themselves popular by spending government funds — funds which in actuality belong to the people.

It is no coincidence that in February 1976, halfway around the globe, Margaret Thatcher made her famous comment about governments which spend “other people’s money.”

Once expensive government programs have legislatively been put into place, it is difficult for anyone to stop them. A president cannot overturn legislation. Congress can overturn its own legislation. A court can nullify congressional legislation if such legislation violates the Constitution. But a president is powerless in such cases — by design: this is the separation of powers in action, as David Stockman explicates:

This was ironic in the extreme. Ford was a stalwart fiscal conservative who went down to defeat in 1976 in a flurry of spending bill vetoes. But the massive increase in entitlement spending enacted during the Nixon years, particularly the 1972 act which indexed Social Security for cost of living increases just as runaway inflation materialized, could not be stopped with the veto pen. In fact, the specious facade of the Nixon-Schultz full-employment budget provided cover for a historic breakdown of financial discipline.

The reader will be aware that both David Stockman and President Reagan were, as policymakers and elected officials, sometimes controversial. There are various interpretations of the narrative about Stockman’s years in the Reagan administration.

What is uncontroversial, however, is the basic narrative of economic policies and economic conditions during the Ford administration. The events of the Ford administration in some ways foreshadowed Stockman’s career in the Reagan administration.

The axiom to be learned is this: the pressures of electoral politics will often overrule serious economic thought.