Saturday, December 31, 2022

Reasons To Be Cheerful — Part 1

The nature of human communication includes a temptation to focus on bad news. This is not a recent development, fostered by the internet and cable TV. Thousands of years ago, it was already known that bad news travels fast.

The effort to conscientiously focus on good news is a mental discipline which will reward the person who practices it. Charles Calomiris reported the following in December 2022 in the Wall Street Journal:

The percentage of people living in poverty fell from 32% in 1947 to 15% in 1967 to only 1.1% in 2017. Opportunities created by economic growth, and government-sponsored social programs funded by that growth, produced broadly shared prosperity: 94% of households in 2017 would have been at least as well off as the top quintile in 1967. Bottom-quintile households enjoy the same living standards as middle-quintile households, and on a per capita basis the bottom quintile has a 3% higher income. Top-quintile households receive income equal to roughly four times the bottom (and only 2.2 times the lowest on a per capita basis), not the 16.7 proportion popularly reported.

“Real income of the bottom quintile,” Calomiris adds, “grew more than 681% from 1967 to 2017.” He concludes: “Average living standards have improved dramatically.”

If these data seem unfamiliar, it is because of that principle which dictates that the media, left unchecked, tend to focus on bad news. The reader who is regularly exposed to the typical news media will have been so bombarded with negative reports that good news will seem counterintuitive.

Readers may even have developed an automatic skepticism about any good news. Yet pleasant developments do, in reality, take place.

What does this all mean? That in the United States, wage-earners in all categories have experienced increases in their standards of living, and that those in the lowest categories are catching up to the middle and upper classes.

While income inequality appears substantial if one measures pre-tax earned income, the situation looks quite different if one measures post-tax income from all sources: those earning larger incomes pay a larger percentage of their income in taxes, while those earning smaller incomes receive a larger share of their income as transfers and other unearned income.
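
The arithmetic behind this can be illustrated with a short sketch. The numbers below are purely hypothetical — they are not Calomiris's figures — and are chosen only to show how taxes, transfers, and household size can compress a raw earned-income ratio.

```python
# Illustrative sketch only: hypothetical numbers, not data from the article.
# Shows how the measured gap between top and bottom quintiles narrows once
# taxes, transfers, and household size are taken into account.

bottom_earned = 15_000        # hypothetical average earned income, bottom quintile
top_earned = 250_000          # hypothetical average earned income, top quintile
print(top_earned / bottom_earned)            # raw earned-income ratio: ~16.7x

bottom_total = bottom_earned + 35_000        # add hypothetical transfers / unearned income
top_after_tax = top_earned * (1 - 0.30)      # subtract taxes at a higher effective rate
print(top_after_tax / bottom_total)          # post-tax, post-transfer ratio: ~3.5x

bottom_per_capita = bottom_total / 1.7       # hypothetical persons per household
top_per_capita = top_after_tax / 3.1
print(top_per_capita / bottom_per_capita)    # per-capita ratio narrows further: ~1.9x
```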

Calomiris continues:

The equality of consumption between the bottom quintile (in which only 36% of prime-age persons work) and the middle quintile (in which 92% of prime-age persons work) is a striking finding.

The savvy reader will be aware of the news media’s tendency to amplify or invent some types of problems. Also worth noting is their tendency to downplay or ignore other types of problems.

Tuesday, December 13, 2022

When the Budget Decides the War: U.S. Defense Spending and the Korean Conflict

The second half of 1949 and the first half of 1950 formed a traumatic twelve-month period for global peace and diplomacy. In late 1949, two events shook the world: the communists took over China, and the Soviet Socialists used their espionage network inside the United States to steal the intellectual property needed to build an atomic bomb. In early 1950, a select group of leaders inside the U.S. government received a secret document, titled NSC-68, which unsettled them with its revelations and evaluations of the world military scene. Finally, in June 1950, North Korea, backed by both communist China and the Soviet Socialists, made a surprise attack on South Korea, starting a war which would eventually kill millions of people.

Prior to 1949, there was a seemingly reasonable hope that the world would be able to experience a time of protracted peace. To be sure, the Cold War tensions between the USSR and the western Allies were real and detectable, but America’s monopoly on atomic weapons was assumed to be the trump card which would prevent massive Soviet aggression.

Anticipating peace, the U.S. had begun dismantling its military.

The addition of China to the communist bloc also strengthened the Soviet position. Sino-Soviet relations would remain strong for several years after 1949. In the mid-1950s, those relations would cool.

It was not obvious at the time that the Soviet Socialists were using their spy network inside the U.S. to gain nuclear weapons technology. When they conducted a weapons test in August 1949, exploding an atomic bomb, and when U.S. intelligence agencies confirmed the event in September 1949, the global balance of power shifted. The USSR was emboldened, and used ruthless military power to crush uprisings of freedom fighters in Berlin in June 1953, in Hungary in 1956, and in Prague in 1968.

In light of these events, President Truman requested that the National Security Council (NSC) write a comprehensive document, detailing the global military situation. The report, titled NSC-68, analyzed the state of the world, projected possible future scenarios, and advised steps which the U.S. could take in order to be ready for those scenarios. A small number of officials within the U.S. government read the text, written jointly by members of the Department of State and the Department of Defense; the text disturbed the readers: their hopes for a few quiet and peaceful years were dashed.

Instead of dismantling its military, circumstances forced the U.S. to build up its military.

Based on NSC-68, the Joint Chiefs of Staff (JCS) developed a scenario called Joint Outline War Plan Reaper. This was in essence a plan for World War III, and it would go into effect if and when the Soviets attacked. The consensus among the military leaders was that the Soviets would probably attack in Europe. The war in Korea was considered to be a “secondary” theater, as historian William Donnelly writes:

In September 1950, the JCS made their recommendations for a military buildup based on NSC 68. The active Army would expand from ten to eighteen divisions by Fiscal Year 1952, with its active duty personnel strength increasing from 593,526 to 1,567,000. With the end of the Korean War expected in 1951, the active duty personnel strength would fall to 1,355,000 by Fiscal Year 1954, but the number of active divisions would remain at eighteen so that the Army could meet the demands of Joint Outline War Plan Reaper.

Although acknowledging the importance of the Pacific Rim, the plan anticipated a major Soviet offensive, crossing what was then the border between East Germany and West Germany. NATO and U.S. forces calculated that the Soviets would have the advantage in the first few days of the war, so the strategy was to let the USSR extend itself as far as the Rhine (Rhein). At that point, the western powers would have organized a defensive line. The Soviets would have spent their initial energy and would face longer supply lines across unfamiliar territory.

Like all war plans, Reaper was a collection of hypotheticals, as historian William Donnelly explains:

The Joint Outline War Plan was the JCS plan for World War III with the Soviet Union, and Reaper was the first version of the plan prepared in light of NSC 68’s recommendations. Reaper called for ten Army divisions stationed in the Zone of the Interior (ZI — the contemporary term for the continental United States), four in Japan, and four in Europe at the start of the war. Like previous Joint Outline War Plans, Reaper foresaw that the initial Soviet advantage in ground forces meant that the Air Force and the Navy would play the dominant roles in early operations. The ten Army divisions and their supporting units in the ZI would form the General Reserve, portions of which could be deployed overseas early in the war, but whose most important function would be to serve as the cadre for a massive expansion of the Army. The four divisions in Japan would defend it from a Soviet invasion while the four divisions in Europe, along with their allies in the North Atlantic Treaty Organization (NATO), would conduct a delaying operation that, in conjunction with air attacks, would halt the Soviet Army along the Rhine. The U.S. Army, drawing on the resources made available by a World War II-type national mobilization, would expand and then launch a second crusade in Europe, ending the war after four years with a Soviet surrender and a force of eighty divisions.

The implications of War Plan Reaper and the assumptions and attitudes which shaped it were these: The Korean conflict would receive limited resources, and the U.S. would need to engage in substantial defense spending and a military buildup.

Unlike WW2, when the military’s combat operations held the highest priority, the Korean conflict would be supplied, manned, and funded around preparations for a speculative war plan. The military units in Korea, where actual fighting was under way, received less funding, because resources were being diverted to the buildup in Europe and in the continental United States.

Not only did supply shortages and a “manpower dilemma” (Donnelly’s phrase) impact the field effectiveness of the U.S. Army in Korea, but morale also understandably suffered. Many of the soldiers in Korea were conscripts who stayed no longer than they were required to do so, while the army sent more experienced soldiers and officers to Europe.

Morale deteriorated further when it became clear that the goal for the NATO and United Nations (UN) coalition, including the United States, was an armistice or ceasefire, not a victory.

Budgetary considerations had a major influence on both the strategies and the tactics of the coalition supporting South Korea in war.

Wednesday, December 7, 2022

Facing the Greatest Danger Without the Greatest Resources: Balancing the Korean Conflict with the Global Cold War

Following WW2, the United States began to reduce the size of its armed forces. The number of men in the military was reduced. Spending on research and development for new weapons was reduced. Procurement of current weapon systems was reduced. The overall budget for defense spending was reduced.

The war was over. The principal enemies, Japan and Germany, had been thoroughly smashed and were occupied by Allied troops. The United States alone possessed the technology to manufacture and use atomic weapons, giving it an unsurpassable advantage over any competing nation.

America felt secure. There was no need for large military spending or for a large and well-equipped army.

So, from the time that WW2 ended in late 1945 — a ceasefire took place in August, and the formal surrender documents were signed in September — the United States optimistically anticipated a time of peace. There were no obvious threats of major military action on the horizon, so dismantling the U.S. military seemed like a sensible thing to do.

Four events would shatter this calm.

First, the USSR obtained from its network of espionage agents the American technology needed to assemble its own nuclear weapons. In late August 1949, the Soviet Socialists conducted a test, exploding for the first time their own atomic bomb. By early September, the U.S. intelligence agencies confirmed this reality. Suddenly, the U.S. was not the only nation on earth possessing nuclear weapons. This changed the balance of power suddenly and dramatically. The Soviet Socialists no longer needed to restrain themselves in their plans to take over and oppress other smaller nations.

Second, in late 1949, the Communist Party won the Chinese Civil War. This war had started in 1927, dragging on for many years, and had paused during WW2. A few of the freedom fighters who had resisted the Communists fled to the island of Formosa, and set up their own small country, called Taiwan or “free China.” Communist China, or “mainland China,” allied itself closely with the Soviet Socialists during the first several years of its existence.

Third, in January 1950, President Truman asked the National Security Council (NSC) to compose a report about the world’s geopolitical situation. The document, known as NSC-68, was kept secret until it was declassified in 1975. It alarmed the few leaders who had permission to read it. It persuaded readers that, instead of dismantling the military, the U.S. needed to be ready to face major threats.

Finally, in June 1950, North Korea, with substantial support from Communist China, and a smaller amount of support from the Soviet Socialists, launched a surprise attack on South Korea. This began the Korean War, which would ultimately cost the lives of more than a million human beings. Although the majority of military support for North Korea came from China, the Soviet Socialists led the political and strategic impulse behind the war. The acquisition of nuclear weapons emboldened the USSR, where Stalin still ruled. The global counterforce was the North Atlantic Treaty Organization (NATO), created in 1949: a collective mutual-security alliance of twelve founding nations, which has since grown to thirty. These nations pledged to help defend each other, and the major threat was obviously the Soviet Socialists.

The globe was not as safe as had been hoped, as historian William Donnelly writes:

President Truman and his senior advisors quickly concluded that the North Korean invasion on 25 June 1950 demonstrated that the Soviet Union was beginning to take greater risks as predicted by NSC 68. American intelligence estimates stated that the Soviets were not likely to initiate a general war until they had built up conventional and nuclear forces to the point where they could be confident of overrunning Western Europe and deterring an American nuclear response. NSC 68 had warned that 1954 would be the year of maximum danger of a general war. Preventing that war decisively colored the U.S. response to the invasion of South Korea. North Korea’s aggression had to be repulsed lest it encourage further local attacks, but the United States would limit its military commitments on the peninsula in case the attack actually was a Soviet effort to weaken America’s ability to defend the crucial areas of Western Europe and Japan. American leaders decided that the United States would avoid a wider war in Asia, undertake a massive buildup of conventional and nuclear forces to defend crucial areas, use much of that buildup to create a credible conventional defense in Europe, supply its allies with large amounts of military aid, and do all this by 1954 without causing irreparable harm to the American economy.

The thinking about defense spending changed significantly between 1948 and 1951. Although thinking can change quickly, physical realities change more slowly. The events of 1949 and 1950 were shocking, but there was a lag between those events and the implementation of plans for a military buildup.

One of the implications of the situation was that the nations supporting South Korea — which included several NATO countries as well as several United Nations countries — fought the Korean War “on the cheap.” Many of these nations were still recovering from WW2, both economically and in terms of physical infrastructure. They were in no position to mount a massive war effort.

The limited military and fiscal resources had not only to support a war in Korea, but also to develop a global defense system at the same time. Massive amounts of money were required for the research and development of missiles, jet airplanes, and nuclear weapons, as well as for the usual conventional forces.

There wasn’t a lot of money left over to fund the Korean War.

Not only was there a lack of money for equipment, supplies, research, and development, but there was also, in the words of William Donnelly, a “manpower dilemma.” Soldiers not only have to be paid, but also trained, clothed, fed, and sheltered.

The army was experiencing a “massive expansion,” but given the tasks it faced, the need for men was still greater than the supply. This was especially so regarding leadership positions like non-commissioned officers (NCOs). There was a large supply of enlisted men, given the realities of conscription. But draftees remain only as long as they must, and so there was a high rate of turnover among foot soldiers, making leadership even more important. Yet it was precisely in the NCO ranks that the manpower shortage was most acute.

The United States fought the Korean War on a shoestring budget.

Manpower shortages, high rates of turnover, and a high percentage of draftees among the soldiers led to morale problems. Also detracting from morale was the fact that top-level leadership had decided to treat Korea as a “secondary theater,” with Europe still seen as the likely place for a face-to-face confrontation with the Soviets. A further dampener was the selection of armistice, rather than victory, as the goal in Korea: this was hardly inspiring to already-skeptical draftees who didn’t want to be in the army in the first place.

Friday, August 12, 2022

Scaling Down: Preparing for Smaller Wars

In January 1950, President Harry Truman requested that the Department of State and the Department of Defense jointly compose a document regarding U.S. objectives in both diplomatic and military matters. In April, he received the report, a top-secret document titled NSC-68.

This document remained classified until 1975, but is now available to the reading public. It shaped much of American strategic and geopolitical thought throughout the 1950s and 1960s. It addressed both strategy and ideology.

NSC-68 also included references to the nation’s founding texts from the 1700s, including the Declaration of Independence, the Constitution, the Bill of Rights, and the Federalist Papers.

The report’s authors were concerned to distinguish between, on the one hand, massive wars of annihilation on a global scale, and on the other hand, smaller regional conflicts:

The mischief may be a global war or it may be a Soviet campaign for limited objectives. In either case we should take no avoidable initiative which would cause it to become a war of annihilation, and if we have the forces to defeat a Soviet drive for limited objectives it may well be to our interest not to let it become a global war.

It was therefore incumbent upon the United States military establishment to be prepared for both types of conflict. But the U.S. military in 1950 was not ready, as author Russell Weigley writes:

NSC-68 suggested a danger of limited war, of Communist military adventures designed not to annihilate the West but merely to expand the periphery of the Communist domains, limited enough that an American riposte of atomic annihilation would be disproportionate in both morality and expediency. To retaliate against a Communist military initiative on any but an atomic scale, the American armed forces in 1950 were ill equipped. Ten understrength Army divisions and eleven regimental combat teams, 671 Navy ships, two understrength Marine Corps divisions, and forty-eight Air Force wings (the buildup not yet having reached the old figure of fifty-five) were stretched thinly around the world.

It would not have been fitting to respond, for example, to the Soviet blockade of Berlin by unleashing America’s atomic arsenal. Although some military strategists in the late 1940s saw the atomic bomb as the answer to nearly any tactical question, it was now becoming clear that America needed a full conventional force as well.

The Air Force atomic striking force, embodied now in eighteen wings of the Strategic Air Command, was the only American military organization possessing a formidable instant readiness capacity. So much did Americans, including the government, succeed in convincing themselves that the atomic bomb was a sovereign remedy for all military ailments, so ingrained was the American habit of thinking of war in terms of annihilative victories, that occasional warnings of limited war went more than unheeded, and people, government, and much of the military could scarcely conceive of a Communist military thrust of lesser dimensions than World War III.

So it happened, then, that in June 1950, when North Korea attacked South Korea, the United States was in possession of a large nuclear arsenal, but a barely serviceable — if at all serviceable — infantry. The United States was prepared for global atomic war, but the Soviet Socialists chose smaller proxy wars — Korea, Vietnam — and even smaller military maneuvers to quell uprisings — Berlin 1953, Hungary 1956, Prague 1968.

America’s brief romance with the atomic bomb was over. By the mid-1950s, it was clear that the United States needed a full conventional force alongside its nuclear arsenal.

This would require a bit of a scramble to make up for the years in the late 1940s during which the conventional forces were allowed to languish. The U.S. Army that fought the Korean War was underfunded and undersized.

In the postwar decades, the United States needed to have both a strategic nuclear force as well as sufficient conventional forces in the traditional Army, Navy, Air Force, and Marines.

Wednesday, August 3, 2022

The Best President Ever?

On a regular basis, every few years, journalists will assemble a group of historians or political scientists and ask them to sort through the presidents of the United States, and come up with a list of the top ten, or the bottom ten, or to rank all of them from best to worst, or to select the single best-ever, or worst-ever, president.

Such efforts are sometimes interesting, but in the end, they are meaningless.

These processes are hopelessly subjective, and reveal, at most, the personal preferences and partialities of the researchers involved. Because these types of surveys have been conducted for years, one can trace their contradictory results, which expose how unconfirmable and unverifiable such rankings are.

Writing in 2012, Robert Merry traced the flip-flops and reversals of such surveys:

Consider Dwight Eisenhower, initially pegged by historians as a mediocre two-termer. In 1962, a year after Ike relinquished the presidency, a poll by Harvard’s Arthur Schlesinger Sr. ranked him 22nd — between Chester A. Arthur, largely a presidential nonentity, and Andrew Johnson, impeached by the House and nearly convicted by the Senate. Republicans were outraged; Democrats laughed. By the time a 1981 poll was taken, however, Eisenhower had moved up to 12th. The following year he was ninth. In three subsequent polls he came in 11th, 10th and eighth.

The academics did a similar about-face on another famous president:

Academics initially slammed Reagan, as they had Eisenhower. One survey of 750 historians taken between 1988 and 1990 ranked him as “below average.” A 1996 poll ranked him at 25th, between George H.W. Bush, the one-termer who succeeded him, and that selfsame Chester Arthur. Reagan's standing is now on the rise.

If the search for the “best ever” president, or even the “top ten” presidents, is an empty pursuit, can scholars give more meaningful results? Perhaps: while it is meaningless to say that Calvin Coolidge was a “good” or “bad” president, it is meaningful to say that he lowered taxes, lowered the national debt, and reduced the federal government’s spending. Such statements are verifiable and quantifiable.

Historians can give us meaningful data when they research specific and measurable details about a president, instead of merely trying to assign him a relative rank as “better than” or “worse than” some other president.

It is observable, and therefore meaningful, that President Polk’s management of the Mexican-American War affected presidential elections after the war’s end in 1848.

Such observations are not only more reliable and objective, but also protect scholars from ending up with the proverbial “egg on their faces” after declaring some president to be “good” or “bad” and then finding themselves facing stiff opposition to such judgments. One example of academics hastily praising a president, only to find themselves slowly retracting such glowing evaluations, is the case of Woodrow Wilson.

Wilson’s high marks from historians belie the fact that voters in 1920 delivered to his party one of the starkest repudiations ever visited upon an incumbent party. Similarly, historians consistently mark Harding as a failure, though he presided over remarkably good times and was very popular.

Just as scholars revised their estimates of Eisenhower and Reagan upward, so now they are reconsidering Harding in a more favorable light. Wilson’s reputation, meanwhile, has declined.

In sum, it is more important to gather data about a president than to evaluate him.

Writing about a president should emphasize, not general impressions, but rather observable, measurable, verifiable, and quantifiable data. That’s how serious historians work. Reports about presidents should be full of dates, places, specific actions, and the names of other individuals with whom that president interacted.

Such a method would lead to the “best ever” texts about presidents!

Tuesday, July 26, 2022

The Role of Nationalism in History: Unclear

Historians, politicians, and news media use the word ‘nationalism’ frequently. Although the term is often used with passion, its exact meaning is frequently unclear. One reason for this ambiguity is that, as the designation is used, it has more than one meaning.

There are at least two distinct and mutually exclusive definitions of ‘nationalism,’ and this ambiguity is responsible for misunderstandings, disagreements, and quarrels.

On the one hand, ‘nationalism’ can refer to a malignant ideology: a value system in which the power and growth of the nation-state are the ultimate goals, transcending all other potential values. To understand this malign type of nationalism, the reader must first understand what a nation-state is.

A “nation” is a group of people who have a shared cultural identity — an ethnic group. This mutuality often relates to a shared language, history, or religion, as well as other aspects of culture: food, clothing, holidays, and the arts — music, architecture, literature, etc.

A “state” is a geographical territory with an independent and sovereign government: a piece of land with its own ruling system.

A “nation-state” is when a nation and a state are coextensive. There are nations which are not states, and there are states which are not nations. But in some cases, the state is the nation, and the nation is the state: the two are identical.

When an individual embraces the malevolent form of nationalism, the nation-state becomes the highest goal and value for this person. In such a case, all other potential values are demoted to lower rankings. The practical effect of this system is that the individual will sacrifice anything if by so sacrificing, the nation-state is strengthened.

Because this malicious type of nationalism demands that the nation-state is the ultimate value, it stands opposed to any other value which people might ordinarily cite as an ultimate value: family, friends, duty, honor, God, faith, religion, art, etc. Therefore it is impossible for a person who embraces this dangerous form of nationalism to accomplish the true duties of friendship, family, religious faith, etc., because such a person will ultimately be required to oppose those things when the needs and desires of the nation-state demand such opposition.

This harmful type of nationalism can even lead to wars and to cruelty. It can lead the nation-state to violate the human rights and civil rights of both its own citizens and citizens of other nation-states.

On the other hand, there is a benign and beneficial type of nationalism which is akin to a healthy patriotism. This form of nationalism enables the individual to appreciate and celebrate the culture and accomplishments of her or his nation-state. This kind of nationalism is an affection for one’s own nation-state. Importantly, this sort of nationalism does not oppose, but rather even requires, a respect and even an affection for other nation-states. It is impossible to truly respect one’s own nation-state without also respecting other nation-states.

This wholesome type of nationalism creates unity as together the citizens of the nation-state honor and work toward the maintenance of their nation’s culture. This cheerful variety of nationalism is edifying because it seeks to build the nation, and is peaceful, because, being constructive, it must necessarily oppose war, which is essentially destructive.

Such a gladdening kind of nationalism encourages each individual to find self-respect and self-worth, because respect for one’s self and one’s nation are coextensive. Even in circumstances in which one might disagree with one’s government, one can still have affection for one’s nation. To hate one’s nation is indirectly but inevitably to hate one’s self; to hate one’s self will eventually lead one to hate one’s nation. If one opposes one’s government, one can do so out of fondness for the nation: one desires the best for one’s nation, and in some conditions, that could include adjustments to the government.

So the word ‘nationalism’ can refer to two different things — things which are not only different, but opposed to each other. It is inevitable that disagreements and misunderstandings will arise around this term, given its ambiguity.

To look then specifically at the United States, one must first pose the question of whether the USA is a nation-state or not. It is in any case a state, but is it a nation? This is a debatable question. On the one hand, the extent of diversity among heritages, religions, spoken languages, and ethnic cultures might point to the conclusion that rather than being a nation, the United States is a collection of nations. On the other hand, one could argue that, since 1776, a diverse group of nations have built a common heritage which transcends the cultural backgrounds from which they came, and have thereby produced a new nation.

If one adopts the view that the USA is a patchwork of multiple nations, then one can say that what Americans have created, fostered, nurtured, and celebrated since 1776 is a state. Rather than building an identity around a nation, according to this interpretation, Americans built an identity around ideas like liberty and equality. On this understanding, then, the United States would not have embraced nationalism, because there is no nation-state to serve as the centerpiece or raison d’être of a nationalist ideology.

But if the USA isn’t a nation in this sense, and so can’t have nationalism in this sense, a question arises about a mere state — a state which is not simultaneously a nation: what can it offer as a source of encouragement for its people? For what can they have affection? What concept will show them their place among the other nations of the world?

If one is not a part of a nation-state, of what is one a part? What will substitute for the patriotism and esprit de corps which allow one to build diplomacy and alliances with other nations?

The options may not be appetizing, as historian Jill Lepore writes:

The United States, thought by some to have never known nationalism, was now said to be beyond nationalism. A politics of identity replaced a politics of nationality. In the end, they weren’t very different from each other. Nor did identity politics dedicate a new generation of intellectuals to the study of the nation or a new generation of Americans to a broader understanding of Americanism.

If the USA isn’t a nation-state, and as such can’t have a nationalism, then the space left empty by the absence of nationalism might be filled by a divisive and bitter “identity politics.” Nationalism provides a narrative. Identity politics provides no narrative, or rather provides only a narrative of sins and grievances: there is no forgiveness in the narratives provided by identity politics.

Jill Lepore goes on to say that if a healthy patriotism is absent, then identity politics will provide only a “history that can’t find a source of inspiration in the nation’s past and therefore can’t really plot a path forward to power.”

The confused and confusing discussions about nationalism arise because there are two different types of nationalism: On the one hand, there’s a violent and warlike nationalism which demands the supremacy of the nation-state. On the other hand, there’s a peaceful and diplomatic nationalism which teaches the individual to appreciate her or his own nation and have affection for it, which in turn leads to appreciation for other nations and a diplomatic desire for peace.

Clearly, the belligerent version of nationalism is to be avoided, but in the absence of the healthy patriotism which is the desirable form of nationalism, something quite dangerous can emerge.

Saturday, July 23, 2022

When Good Advice Is Ignored: Economic Policy During the Ford Administration

David Stockman was elected to the U.S. House of Representatives in November 1976 and took office in January 1977. Prior to that, he’d worked since 1970 as a congressional staffer. Stockman was first elected to office in the same election which ended the electoral career of President Gerald R. Ford.

Ford had become president upon the resignation of President Richard Nixon. Ford had been Nixon’s vice president. Before becoming vice president, Ford had been elected, and then many times re-elected to the House of Representatives. Both Ford and Stockman had spent most of their lives in Michigan, and had represented that state in Congress.

Stockman would go on to have a multi-faceted and famous career after the November 1976 election, ultimately serving as Director of the Office of Management and Budget from January 1981 to August 1985 in the administration of President Ronald Reagan.

The political dynamics which influenced monetary, fiscal, and budgetary policy during Stockman’s tenure in the Office of Management and Budget (OMB) were eye-opening to him, and ultimately ended his political career, and — some observers might suggest — turned him into a bit of a bitter cynic. Stockman learned that political interests — i.e., elected officials doing what they need to do in order to get reelected — will usually trump the prudent advice of serious economists.

In the case of the Reagan administration, a simple formula — cut taxes and cut government spending — seemed to gain widespread agreement and approval, until the time came when specific and real budget cuts had to be identified. At that point, legislators worked to make sure that their pet projects — “set asides” and “pork barrel” items — were not going to be cut. With each legislator defending some expenditure, no expenditure was left to be reduced.

The predictable result of taking the first step — tax cuts — without taking the second step — spending cuts — was an increasing deficit. Stockman knew that deficits are harmful, and the plan had been to avoid them. But the political reality was that no legislator would embrace a spending cut that affected his electoral base. The tax cuts during the Reagan administration fueled economic growth and wage increases for the working class, but in the long run, the national debt increased during those same years, laying the foundation for problems in the future.

In hindsight, Stockman saw that this bitter experience in the 1980s was foreshadowed by President Ford’s similar experience in the 1970s. Stockman writes:

After assuming the presidency in August 1974, Gerald Ford had started off on the right foot. As a fiscally orthodox Midwestern Republican, he had been frightened by the recent runaway inflation and repulsed by the insanity of the Nixon freeze and the ever-changing wage and price control rules and phases which followed. Ford had also been just plain embarrassed by Nixon’s five straight years of large budget deficits.

President Nixon had hoped to fix the inflation problem by regulation: wage and price controls. Retailers could not set the prices of the goods on their shelves: the government did. Employers could not decide how much to pay their workers: the government did. The result was shortages of consumer products and the development of numerous workarounds designed to help businesses sidestep the government regulations. The inflation problem continued unabated.

President Ford, when he took office, saw the error of Nixon's policies and determined not to repeat the folly of Nixon’s economic regulation and Nixon’s deficits.

Ford’s first approach was to try to be serious about disciplined budgeting and to eliminate wasteful spending. The government’s budget for any one year should be the amount needed, and no more, to carry out the legislated tasks given by Congress to the executive branch.

Stockman recalls Ford’s attempt to instill sobriety into the Congressional budgeting process:

So for a brief moment in the fall of 1974, he launched a campaign to get back to the basics. Ford proposed to jettison the notion that the budget was an economic policy tool, and demanded that Washington return to the sober business of responsibly managing the spending and revenue accounts of the federal government.

Because he understood that a balanced budget and the avoidance of deficits were among the primary responsibilities of the government, President Ford was even willing to consider a temporary tax increase. To obtain a responsibly balanced budget, spending cuts must always be the primary method, but occasionally tax increases must be employed as well. If managed correctly, and if the budget were brought under control, the tax increases would ultimately make room for tax cuts in the future.

The trends and fashions of politics can change from decade to decade. Ford was a conservative midwestern Republican. In the 1970s, it was thinkable for him to advocate for a tax increase. In later decades, conservative midwestern Republicans would be known often for opposing tax increases. Stockman’s narrative continues:

To this end, he called for drastic spending cuts to keep the current-year budget under $300 billion. He also requested a 5 percent surtax on the incomes of corporations and more affluent households to staunch the flow of budget red ink. At that point in history Ford’s proposed tax increase was applauded by fiscal conservatives, and there was no supply-side chorus around to denounce it. In fact, Art Laffer had just vacated his position as an underling at OMB.

One of the changes which happened after the 1970s was that the government budget came to be viewed, in later decades, as a tool for repairing the economy. Changes in the levels of taxation and spending were later seen as ways to tweak the national business climate. In the 1970s and in earlier decades, the government’s budget was understood simply as the funding for the government to carry out whichever tasks had been legislated as essential and necessary.

Stockman explains how Ford attempted to redirect the executive branch and the legislative branch toward a responsible budgeting process:

In attempting to get Washington off the fiscal stimulus drug, Ford was aided immeasurably by the fact that Schultz had vacated the Treasury Department and had been replaced by Bill Simon. The latter was from a wholly different kettle of fish.

Both Congress and the entire bureaucratic apparatus of the executive branch, however, had momentum in a direction different from President Ford’s goals. This momentum came from a decade of ever-increasing government spending, apathy about growing debt and deficits, and policies which viewed taxation as a way to steer the economy rather than a way to raise revenue to fund legitimate government tasks. This decade was the combined years of the Johnson administration and the Nixon administration. Although in many ways opposed to each other, Johnson and Nixon had similar economic views.

Rather than let a deregulated business environment find its way to an optimum and natural equilibrium, Johnson and Nixon had continuously tinkered with regulations, thinking that the government knew best.

The momentum of bad economic policies for a decade proved to be the brick wall against which Ford’s common sense would crash; Treasury Secretary Bill Simon fared no better:

Bill Simon’s militant crusade within the Ford administration for the old-time fiscal religion and unfettered free markets was consequently short-lived. To be sure, his advocacy was not the run-of-the-mill Republican bombast about private enterprise. In speeches and congressional testimony, Simon offered consistent, forceful, and intelligent opposition to all forms of federal market intervention designed to stimulate the general economy or boost particular sectors like housing, agriculture, and energy.

The vertical separation of powers — between city, county, state, and federal governments — provided a clear example. The federal government has no responsibility to pay for the bad decisions of city governments. Yet when President Ford made a principled stand against using taxpayer dollars to pay for a city government’s bad spending, his reasoning was not universally accepted, as David Stockman reports:

In famously telling New York City to “drop dead” in its request for federal money, Gerald Ford betrayed a fundamental sympathy with his treasury secretary’s approach to fiscal rectitude. Yet the economic wreckage left behind by the Nixon abominations soon overwhelmed Ford’s best intentions.

The mess left by Johnson and Nixon was too big to fix without pain, and too many people were not willing to accept the pain: not Congress, not the media, not the lobbyists, not the labor unions. While many businesses were willing to ride out the bumps of a free market to get back to a healthy economy, some businesses relied on cronyism-type relationships with the government. Instead of offering good products at competitive prices, such businesses relied on governmental regulations to tilt the marketplace in their favor: they did not want a truly free market.

Under political pressure, President Ford changed his policies, against his own better judgment. The new policies, accompanied by similarly designed legislation from Congress, were nearly the opposite of Ford’s original position: they moved away from a truly free market and toward a cronyism between a few businesses and the government, and they implemented tax cuts, not because the cuts were calibrated to the federal government’s obligations and responsibilities, but because they were part of a tactic to stimulate the weakening economy, as David Stockman describes:

As the US economy weakened in the winter of 1974-1975, the Ford administration reversed direction at the urging of businessmen like OMB director Roy Ash and big business lobbies like the Committee for Economic Development. In the place of October’s tax surcharge to close the budget gap, Ford proposed in his January 1975 budget message that Congress enact a $16 billion tax cut, including a $12 billion rebate to households designed to encourage them to spend.

Although President Ford knew better, political realities forced him to allow a type of “crony capitalism” to gain ground against “free market capitalism.” The result was that, instead of the marketplace being a level playing field for free and fair competition between businesses, a few businesses were favored over all the others, leading to higher prices, shortages, rising unemployment, falling wages, and a falling standard of living for the working class.

And although he knew better, Ford was pressured into allowing tax policy to be used as a way to attempt to tweak the economy, instead of being used in a straightforward way to gather the necessary revenue.

As usual, the legislators ignored budget cuts, which can reduce the deficit and fight inflation. Cutting taxes without reducing spending leads inevitably to economic downturns and leads necessarily to increased deficits and debt, which in turn fuel inflation and unemployment. While inflation may cause nominal wages to rise, it causes real wages to fall, and the standard of living to fall as well. David Stockman quantifies the situation:

At length, Congress upped the tax cut ante to $30 billion. It also completely ignored Ford’s plea to make compensating spending cuts of about $5 billion.

Tax cuts may seem like generosity from the Congress, but they are salutary only if spending cuts accompany them. The generosity is an illusion because, for the consumer, any benefit from the tax cuts will be negated by inflation, shortages, and falling real wages.

Political presentations about tax cuts or tax rebates can seem like generosity — but this is an illusion, because all of that money belonged to the citizens in the first place. A tax cut is not a gift, it’s merely the government taking less from the people than it did the year before. By analogy, a victim of crime would not consider it to be ‘generosity’ if a burglar decided to steal only a computer instead of a computer and a smartphone.

A tax cut is not a gift from Santa Claus. It is merely the government’s decision to confiscate less wealth from people than it might have otherwise seized:

Then in the fall of that year (1975) the Ford White House escalated the tax stimulus battle further, proposing a tax reduction double the size of its January plan. This led to even more tax-cut largesse on Capitol Hill, a Ford veto, and then a final compromise tax bill on Christmas Eve 1975. Senate finance chairman Russell Long aptly described this final resolution as “putting Santa Claus back in his sleigh.”

Just as the 1974/1975 economic trouble was beginning to self-correct, the government’s efforts to fix it kicked in. These unnecessary measures simply drove the economy back away from equilibrium and caused more trouble:

Thus, the 1969-1972 cycle repeated itself: by the time the big Ford tax cut was enacted, the inventory liquidation had run its course and a natural rebound was under way. So the 1970s second round of fiscal stimulus was destined to fuel a renewed inflationary expansion, and this time it virtually blew the lid off the budget.

A single bad action simultaneously drove inflation and increased the deficit. Free-market thought, by contrast, would have had the government stand down from any contemplated intervention and allow the organic forces of buying and selling to nudge the economy back toward equilibrium.

With the horizontal separation of powers, it is always questionable whether one should ascribe credit or blame to the President or to the Congress. In most cases, the glory or the shame is shared. President Ford could have resisted interventionist tendencies more. Congress could have done the same. Neither branch of government behaved perfectly. Ford’s instincts were probably better than Congress’s, and even if he’d done exactly the correct thing in each case, Congress still had the ability to override his veto.

Political factors having nothing to do with economics — mainly Nixon’s Watergate scandal — played into Congress’s ability to override Ford’s vetoes. The Watergate scandal had also brought into Congress candidates with a significant tendency to spend — there is no direct link between Watergate and excessive spending, but such are the dynamics of electoral politics, as David Stockman depicts them:

Despite Ford’s resolute veto of some appropriations bills, his red pen was no match for the massive Democratic congressional majorities that had come in with the Watergate election of 1974. Their urge to spend would not be denied, as attested by the figures for budget outlays.

The problems which confronted President Ford between August 1974 and January 1977 can be traced back to a meeting in August 1971. President Nixon called the meeting at Camp David, and invited most of his important economic advisors. It was at this meeting that the concept of wage and price controls became a policy. The results were disastrous, and even after Nixon was gone, and his wage and price controls were no longer in effect, the ripple effects of that meeting continued to plague the economy:

Federal spending grew by 23 percent in fiscal 1975 and then by more than 10 percent in each of fiscal 1976 and 1977. All told, federal outlays reached $410 billion in the Ford administration’s outgoing budget, a figure nearly double the spending level in place six years earlier when Nixon hustled his advisors off to Camp David.

The history of Ford’s economic policies is in some ways a tragedy. President Ford was a lonely advocate for common-sense measures: reduce spending and deregulate the economy. He was defeated, not by economic theorists who had better equations and graphs, but by the political reality that Congressmen like to make themselves popular by spending government funds — funds which in actuality belong to the people.

It is no coincidence that in February 1976, halfway around the globe, Margaret Thatcher made her famous comment about governments which spend “other people’s money.”

Once expensive government programs have legislatively been put into place, it is difficult for anyone to stop them. A president cannot overturn legislation. Congress can overturn its own legislation. A court can nullify congressional legislation if such legislation violates the Constitution. But a president is powerless in such cases — by design: this is the separation of powers in action, as David Stockman explicates:

This was ironic in the extreme. Ford was a stalwart fiscal conservative who went down to defeat in 1976 in a flurry of spending bill vetoes. But the massive increase in entitlement spending enacted during the Nixon years, particularly the 1972 act which indexed Social Security for cost of living increases just as runaway inflation materialized, could not be stopped with the veto pen. In fact, the specious facade of the Nixon-Schultz full-employment budget provided cover for a historic breakdown of financial discipline.

The reader will be aware that both David Stockman and President Reagan were, as policymakers and elected officials, sometimes controversial. There are various interpretations of the narrative about Stockman’s years in the Reagan administration.

What is uncontroversial, however, is the basic narrative of economic policies and economic conditions during the Ford administration. The events of the Ford administration in some ways foreshadowed Stockman’s career in the Reagan administration.

The axiom to be learned is this: the pressures of electoral politics will often overrule serious economic thought.

Tuesday, June 14, 2022

Promoting Public Health and Economic Justice: The Single-Family Dwelling

Home ownership has often — always? — been a part of the “American Dream,” whatever the American Dream may be. But now it’s becoming clear that it’s also a vehicle for creating economic equity and for helping people stay healthy.

Since March 2020, it’s become clear that people who live in freestanding houses are not only less likely to test positive for COVID, but also demonstrate better overall health by a number of metrics. The structure of a freestanding home reduces virus transmission.

Apartments, condos, townhouses, row houses, and other homes which have shared crawlspaces, attics, and walls create paths for airborne pathogens. If a neighbor’s cooking can be smelled, then particles and vapors are being communicated from one living space to another. A virus can easily be among those things transmitted.

By contrast, a neighbor might cough and sneeze in one house, but if the next house is separated by several feet of grass, trees, and breeze, the chance of SARS-CoV-2 transmission is nearly nonexistent. Freestanding single-family homes demonstrated their health benefits during the pandemic.

In addition to saving lives, however, home ownership is an important instrument in the effort to achieve an aspect of societal equity. Families who own a house are economically more resilient. Children who grow up in a single-family dwelling do better in school, are less likely to run afoul of the police, and are less likely to be obese.

Some observers have asked whether home ownership might simply be a proxy for affluence in general. Could it be that the benefits attributed to home ownership are actually simply the benefits of wealth?

Further analysis, however, reveals that the smallest and most humble single-family dwelling yields both the health and economic benefits which the grandest condo cannot give. A very modest house reduces virus transmission and bestows educational and social benefits, while a lavish upscale flat in an urban center does not.

Both for reasons of public health and for reasons of social equity, zoning boards and local city councils should encourage the construction of single-family dwellings more than condos and townhouses. This would especially benefit ethnic and racial demographic groups who are traditionally underrepresented in home ownership.

If local governments encourage “affordable housing,” but that housing isn’t freestanding houses, then those demographic groups will not be able to access the full benefits of home ownership. If society can increase the percentage of people who own a single-family dwelling on its own piece of land, then that is a step forward for equity, equality, and justice.

Wednesday, June 8, 2022

LBJ’s Big Mistake: How Johnson’s Great Society Significantly Slowed Economic Progress for Black Americans

“Stagnant rates of increase in black prosperity” have troubled America for decades, writes Ben Shapiro. In the immediate wake of the 1863 Emancipation Proclamation and the end of the U.S. Civil War in 1865, it seemed that African-Americans were set to enjoy the opportunities of the economy.

In bits and pieces, Black entrepreneurship did indeed produce successes. But this growth could have flourished more, had not the government stood in the way. State and local governments which were not the result of free and fair elections took power in some places in the late 1870s.

For about a decade after the war’s end, the Reconstruction Era offered political and economic freedom to African-Americans, albeit imperfectly. Both in business leadership and in elected public offices, Blacks had high-profile roles.

In the late 1870s, that changed. Blacks lost many of the advancements they’d gained, as the Democratic Party reasserted itself. In some places, elections were neither free nor fair, and the Democrats gained control of cities, counties, and states. They enacted legislation and adopted policies designed to reduce opportunities for Blacks. After a decade of achievement, African-Americans lost ground, and “government involvement” was “to blame,” notes Shapiro.

Black economic fortunes were at a low point for several decades, until the administration of Theodore Roosevelt at the turn of the century, and the administrations of Warren Harding and Calvin Coolidge in the 1920s.

President Theodore Roosevelt began a new trend when he invited Booker T. Washington to dine at the White House in October 1901. This was a signal of new openness to African-Americans. Theodore Roosevelt’s forward movement was interrupted by the election of Woodrow Wilson in 1912.

Wilson, who assumed office in March 1913, undid much of the access which Roosevelt had created for Blacks. Wilson imposed harsh segregation in government departments which had been desegregated and integrated prior to 1913.

Happily, Warren Harding and Calvin Coolidge picked up Theodore Roosevelt’s trend and carried it further. The 1920s were a time when African-Americans again advanced, both in business and in higher education. President Coolidge became the first president to deliver a commencement address at a Black college when he spoke at Howard University in 1924.

Again, sadly, civil rights were put on hold. The direction given by Theodore Roosevelt at the beginning of the century was paused by Franklin Roosevelt in the middle of the century. While Franklin Roosevelt gave lip service to civil rights in order to obtain votes from African-American citizens, he took no action on their behalf. Instead, he insisted on segregation in the U.S. military during WW2. Eisenhower defied FDR’s orders and allowed Black and White troops to work together.

In this up-and-down narrative, the next up was Eisenhower’s presidency in the 1950s. Working together with Vice President Richard Nixon and with Martin Luther King, Jr., President Eisenhower drove two landmark pieces of legislation through Congress: the 1957 Civil Rights Act and the 1960 Civil Rights Act. Eisenhower also sent the U.S. military, in the form of the 101st Airborne Division, to Little Rock, Arkansas, where the Democratic Party’s Governor Faubus was determined to deny African-American children the right to attend school. Eisenhower made Faubus into an irrelevance and made sure that the Arkansas schools were desegregated and integrated, giving Black children major opportunities.

After the benefits of the Eisenhower years, African-Americans experienced another downturn during the 1960s. President Johnson inflicted a series of programs on Black Americans. These detrimental and even racist policies were lumped together under the heading of “The Great Society,” as Ben Shapiro notes:

In essence, the Great Society drove impoverished black people into dependency. In 1960, 22 percent of black children were born out of wedlock; today, that number is over 70 percent. The single greatest indicator of intergenerational poverty is single motherhood. As Thomas Sowell writes, “What about ghetto riots, crimes in general and murder in particular? What about low levels of labor force participation and high levels of welfare dependency? None of those things was as bad in the first 100 years after slavery as they became in the wake of the policies and notions of the 1960s.”

Presented as beneficial to African-Americans, these programs were not the product of a sincere desire to help them or to promote justice. President Lyndon Johnson was an unrepentant racist. Behind the scenes, he routinely referred to Blacks using hateful epithets. He openly told friends that his programs were designed to deceive Black voters into supporting his party. But in public, he proclaimed himself a friend of African-Americans.

The Great Society programs of President Lyndon Johnson, touted as a sort of reparations-lite by Johnson allies, actually harmed the black community in significant ways that continue to play out today. According to former Air Force One steward Ronald MacMillan, LBJ pushed the Great Society programs and civil rights bill out of desire to win black votes.

Johnson’s propaganda was designed not only to trick Black voters, but also to mislead the public into believing that his “Great Society” was having a meaningful impact, when in fact it was not, as economist Thomas Sowell writes:

Despite the grand myth that black economic progress began or accelerated with the passage of the Civil Rights laws and “War on Poverty” programs of the 1960s, the cold fact is that the poverty rate among blacks fell from 87 percent in 1940 to 47 percent by 1960. This was before any of those programs began. Over the next 20 years, the poverty rate among blacks fell another 18 percentage points, compared to the 40-point drop in the previous 20 years. This was the continuation of a previous economic trend, at a slower rate of progress, not the economic grand deliverance proclaimed by liberals and self-serving black “leaders.”
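The arithmetic behind Sowell’s comparison is easy to check from the figures quoted above. The short sketch below simply restates those quoted rates; the 1980 figure is inferred from the quoted 18-point decline, and none of these numbers are independent data.

```python
# Arithmetic implicit in the Sowell quotation above: compare the decline in the
# Black poverty rate during 1940-1960 with the decline during 1960-1980.
# The 1940 and 1960 rates are the figures as quoted; the 1980 figure is
# inferred from the quoted 18-point drop. These are quoted numbers, not data.
poverty_rate = {1940: 87.0, 1960: 47.0, 1980: 47.0 - 18.0}  # percent

drop_before = poverty_rate[1940] - poverty_rate[1960]  # 40 percentage points
drop_after = poverty_rate[1960] - poverty_rate[1980]   # 18 percentage points

print(f"1940-1960 decline: {drop_before:.0f} percentage points")
print(f"1960-1980 decline: {drop_after:.0f} percentage points")
print(f"The earlier period saw roughly {drop_before / drop_after:.1f} times "
      f"as much progress as the later one.")
```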

The 1964 Civil Rights Act was an example of President Johnson’s duplicity. He had opposed the 1957 Civil Rights Act and the 1960 Civil Rights Act, and had tried to weaken both laws by attaching hostile amendments to them; yet he suddenly presented himself as a supporter of civil rights and promoted the 1964 Civil Rights Act. In reality, however, the 1964 bill was different from the two previous ones: it proposed that the government intervene in the business practices of individuals and companies, effectively limiting civil rights rather than expanding them.

Johnson’s malignant racism hid behind his rhetoric, and a segment of the voting public believed his propaganda. Yet during the Johnson administration, Black economic progress slowed markedly rather than accelerating.

The next energizing step forward for civil rights, after the dreary oppression imposed by Lyndon Johnson, came during the presidency of Gerald Ford. President Ford had a longstanding friendship with Judge Willis Ward. Ward was a jurist who’d played football with Ford at the University of Michigan. Together, Willis Ward and Gerald Ford exemplified how civil rights could be promoted in practical situations.

The history of civil rights in the United States is not one of continuous and uninterrupted growth. It has been a narrative of ups and downs and ups, from 1863 to the present. It is a narrative with heroes and villains. Like the stock market, despite repeated downturns, the long term trend has been on the upside. Civil rights continue to thrive in the United States.

Wednesday, April 6, 2022

The Role of Revenue in Policy: Why to Tax, or Not

The role of taxation in American political thought changed significantly during the last decade or two of the twentieth century and during the first decade or two of the twenty-first century. Prior to that time, the primary role, if not the only role, of taxation was to generate revenue.

During the earlier decades of the twentieth century, there had been some effort to use taxation to guide behavior, e.g., so-called “sin taxes” like those placed on tobacco. But these examples were a small part of tax policy, and a small part of tax revenue.

The big change came when the idea gained popularity that tax cuts could be used to stimulate consumer spending. Whether implemented as simple tax cuts or as “stimulus payments,” the government hoped to energize the economy as citizens spent the extra money. The government failed, however, to correspondingly cut spending, so the increased debt and deficit partially negated any stimulating effect.

David Stockman, who was the Director of the Office of Management and Budget from January 1981 to August 1985, explains:

Until then, conservatives had generally treated taxes as an element of balancing the expenditure and revenue accounts, not as an explicit tool of economic stimulus. All three postwar Republican presidents — Eisenhower, Nixon, and Ford — had even resorted to tax increases to eliminate red ink, albeit as a matter of last resort after spending-cut options had been exhausted.

While it is true that tax cuts sometimes stimulate the economy, and can even increase revenue according to the Laffer curve, few leaders prior to 1975 thought of cutting taxes for that purpose. The stimulating effect was seen as an incidental byproduct of tax cuts, not the goal of tax cuts.
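For readers unfamiliar with the Laffer curve mentioned above, the sketch below illustrates the idea in a deliberately stylized way: revenue is the tax rate multiplied by a taxable base that is assumed, purely hypothetically, to shrink as the rate rises. The functional form and parameters are illustrative, not an empirical estimate.

```python
# A stylized Laffer curve: revenue equals the tax rate multiplied by a taxable
# base that is assumed (hypothetically) to shrink as the rate rises. The
# functional form and parameters are illustrative only, not an empirical claim.
def revenue(rate, base_at_zero=100.0, shrinkage=1.0):
    """Revenue for a tax rate between 0 and 1, in arbitrary units."""
    taxable_base = base_at_zero * (1.0 - rate) ** shrinkage
    return rate * taxable_base

rates = [i / 20 for i in range(21)]  # 0%, 5%, ..., 100%
revenues = [revenue(r) for r in rates]
peak_rate = rates[revenues.index(max(revenues))]

print(f"In this toy model, revenue peaks near a {peak_rate:.0%} rate;")
print("above that point, cutting the rate would actually raise revenue.")
```

In this toy model the peak falls at a 50 percent rate only because of the assumed shrinkage parameter; the point is the shape of the curve, not the location of the peak.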

Monday, January 17, 2022

Income Inequality: Why It’s Not as Bad as the Media Thinks, and Why the Numbers Are Misleading

A famous phrase — often but uncertainly attributed to Mark Twain — refers to the increasing evils of “lies, damned lies, and statistics.”

No matter who said it first, it’s true that numbers are used, misused, and abused, especially in political debates. In the early twenty-first century in the United States, debates about the nature, existence, and extent of so-called “income inequality” have made generous use of statistics.

These numbers demand examination. How does one quantify income? There are numerous ways. But income is not the only way to measure economic well-being, and perhaps not the most accurate way. Some economists point out that measuring consumption, as opposed to income, is a truer measure of one’s standard of living. Edward Conard writes:

Consumption is a more relevant measure of poverty, prosperity, and inequality. University of Chicago economist Bruce Meyer and the University of Notre Dame economist James Sullivan, leading researchers in the measurement of consumption, find that consumption has grown faster than income, faster still among the poor, and that inequality is substantially less than it appears to be.

Consumption is, after all, a measure of the items which constitute a standard of living: clothing, food, housing, transportation, etc.
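The difference between the two measures can be made concrete with a small, hypothetical example. The quintile figures below are invented for illustration only and are not drawn from Meyer and Sullivan’s research; they simply show how a standard inequality measure, the Gini coefficient, comes out lower when consumption is spread more evenly than income.

```python
# Hypothetical illustration of the point attributed to Meyer and Sullivan:
# measured consumption tends to be spread more evenly than measured income.
# The five quintile figures below are invented for illustration; they are not
# drawn from any study.
def gini(values):
    """Gini coefficient of a small list of non-negative values (0 = perfect equality)."""
    values = sorted(values)
    n = len(values)
    weighted_sum = sum((i + 1) * v for i, v in enumerate(values))
    return (2 * weighted_sum) / (n * sum(values)) - (n + 1) / n

income_by_quintile = [12, 30, 52, 83, 200]       # thousands of dollars (hypothetical)
consumption_by_quintile = [28, 38, 50, 65, 110]  # smoothed by savings, benefits, transfers

print(f"Gini of income:      {gini(income_by_quintile):.2f}")      # about 0.46
print(f"Gini of consumption: {gini(consumption_by_quintile):.2f}")  # about 0.26
```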

Income and consumption are two variables which can increase or decrease independently of each other, as Edward Conard notes:

Measures of consumption paint a more robust picture of growth than proper measures of income.

Misleading income measures assume tax returns — including pass-through tax entities — represent households. They exclude faster-growing healthcare and other nontaxed benefits. They fail to account for shrinking family sizes, where an increasing number of taxpayers file individual returns. They don’t separate retirees from workers. They ignore large demographic shifts that affect the distribution of income.

It may well be a mistake to think of “income inequality” in simplistic terms as “the gap between the highest earners and the lowest earners,” as Ben Shapiro reports. There can be low earners whose standard of living is higher than the standard of living of high earners. A simple example is retirees, whose earnings may be low, but whose standard of living is supported by a lifetime of saving and investing.

More to the point, as noted above, a low earner may receive health insurance worth thousands of dollars, and therefore have a higher standard of living than someone whose nominal income is greater.

Income gaps have reliably “widened and narrowed over time”, Ben Shapiro explains, and there is no “correlation between levels of inequality of outcome and general success of the society or individuals within it.” Income inequality at any one point in time is misleading, because it is a continuously changing variable. Income inequality between various social classes is also misleading, because mobility means that individuals are constantly moving in and out of the various classes.

It’s quite possible for income inequality to grow while those at the bottom end of the scale get richer. In fact, that’s precisely what’s been happening in America: the middle class hasn’t dissipated, it’s bifurcated, with more Americans moving into the upper middle class over the past few decades. The upper middle class grew from 12 percent of Americans in 1979 to 30 percent as of 2014. As far as median income, myths of stagnating income are greatly exaggerated.

What is now called “income inequality” should be understood as an often transient condition. The fact that one person earns more and another person earns less is evil only if those individuals are irreversibly locked into those conditions. But in fact, most American wage-earners are in a position of mobility: they can work their way up, and earn more in the future.
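A toy example makes this point concrete. The quintile figures below are invented purely for illustration: every group, including the bottom, is better off in the second period, yet the gap and the ratio between top and bottom both widen.

```python
# Toy illustration of the claim above: the gap between top and bottom earners
# can widen even while every group, including the bottom, becomes better off.
# All figures are invented for illustration.
decade_one = [20, 40, 60, 80, 120]   # quintile incomes, in thousands (hypothetical)
decade_two = [26, 50, 78, 110, 190]  # every quintile richer; the top grew fastest

for label, quintiles in (("earlier decade", decade_one), ("later decade", decade_two)):
    bottom, top = quintiles[0], quintiles[-1]
    print(f"{label}: bottom={bottom}k, top={top}k, "
          f"gap={top - bottom}k, top/bottom ratio={top / bottom:.1f}")
```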

Well-intentioned but mistaken efforts to “eliminate income inequality” lead to the unintended consequence of freezing individuals at certain income levels and reducing chances for advancement.

Income inequality exists everywhere, and “social justice” destroys personal liberty and exacerbates inequality.

Attempts to “eliminate income inequality” actually ossify inequality. Only the fluid system of a free market creates chances for individuals to move up the income scale.

Monday, January 3, 2022

Enslaving the Free Market: The Era of the “Bailout”

In September 2008, Lehman Brothers (officially Lehman Brothers Holdings, Inc.) filed for bankruptcy. The firm came to an end as part of an event known as the “subprime mortgage crisis.” Both executive policies and Congressional legislation gave rise to this event, or, more precisely, to this series of events.

The policies and legislation in question encouraged or required financial institutions to extend loans and mortgages to customers who were manifestly unable to repay them. This money was lent primarily for the purpose of buying houses. The inevitable and foreseeable result was a wave of defaults and foreclosures: individuals and families unable to make their monthly payments.

As the number of defaults and foreclosures increased, the banks and other institutions that had lent the money, and now could not recover it, were left with little or nothing and went bankrupt. Both the number and the size of the institutions going bankrupt grew. The pattern culminated in the bankruptcy of Lehman Brothers.

Lehman Brothers was a major company. It had more than 26,000 employees and over $600 billion in assets.

Naturally, some observers were surprised, shocked, or worried. They assumed that the bankruptcy of a major company would cause trouble for the economy. They were wrong. They had forgotten the economic principle of “creative destruction.”

It is easy to assume that, if a large corporation goes bankrupt, this will create problems like unemployment, inventory shortages, etc.

This assumption ignores the fact that when a business fails and collapses, it creates opportunities for new businesses which are more effective, more efficient, larger, more profitable, and better adapted to the marketplace. The end of an old business creates space for a new business.

Metaphors may be useful in understanding this concept: One demolishes an old building in order to construct a new, better, larger building; one cuts down some old trees in a forest in order to plant younger and healthier trees.

A bankruptcy can create short term dislocation, like temporary unemployment and a dip in the stock market. In the long run, however, it can create more jobs than it destroyed, and better-paying jobs with better chances for advancement, resulting in a net increase in prosperity. In many scenarios, workers who are laid off eventually find employment at higher wages than the job they lost. The stock market, likewise, will not only recover from a downtick, but eventually go even higher in the wake of bankruptcy.

This principle is associated with a broad variety of economists: Joseph Schumpeter, Karl Marx, Werner Sombart, etc.

Adherence to this principle would have meant that, in the wake of the Lehman Brothers collapse, the government refrained from any intervention and allowed the next two companies in line, Goldman Sachs and Morgan Stanley, to go bankrupt as well. Had they gone bankrupt, their workers would have found new jobs at higher wages, the stock market would have recovered from a drop and gone on to new highs, and general prosperity would have increased.

Sadly, various elected and appointed leaders in government forgot this basic principle — or never knew it to begin with.

Congress responded with the Emergency Economic Stabilization Act of 2008, which created the Troubled Asset Relief Program (TARP). This legislation gave hundreds of billions of dollars to companies which were in danger of going bankrupt. In addition to Goldman Sachs and Morgan Stanley, other companies soon asked for help, including Citigroup, Chrysler, American Express, and many others. The money given to these businesses came from two sources: either it was confiscated from ordinary American citizens by means of taxes, or it was borrowed, and ordinary American citizens will be required to repay it by means of taxes.

Instead of allowing these corporations to go bankrupt — and only a few of them would have done so; the others simply asked for the money and got it — the TARP legislation kept them alive, but allowed them to remain inefficient and irresponsible. Had they gone out of business, new and more productive companies would have arisen in their places.

The end result was higher taxes for ordinary people, and debts which will have to be repaid with even higher taxes.

This colossal misjudgment was made possible by government officials who were ignorant of basic economic principles, or who ignored them, or who forgot them.

David Stockman was a U.S. Congressman and later Director of the Office of Management and Budget. Concerning TARP, he points out that many government officials understood why it was wrong:

Certainly President Eisenhower’s treasury secretary and doughty opponent of Big Government, George Humphrey, would never have conflated the future of capitalism with the stock price of two or even two dozen Wall Street firms. Nor would President Kennedy’s treasury secretary, Douglas Dillon, have done so, even had his own family’s firm been imperiled. President Ford’s treasury secretary and fiery apostle of free market capitalism, Bill Simon, would have crushed any bailout proposal in a thunder of denunciation. Even President Reagan’s man at the Treasury Department, Don Regan, a Wall Street lifer who had built the modern Merrill Lynch, resisted the 1984 bailout of Continental Illinois until the very end.

As David Stockman shows, this was not a “Democrat” issue or a “Republican” issue. It was an issue about the basic principles of economics. A free market must be allowed to find its own way to an equilibrium.

The women and men who promoted TARP were in many cases people of good will: they legitimately wanted to help. But even well-intentioned governmental interventions in a free market economy are harmful.

The economy organically works toward an equilibrium, and at that equilibrium point lies maximal prosperity for all. The blind forces of demand and supply distribute better wages and higher standards of living to everyone in the marketplace, from the smallest to the largest.

Statist intervention in markets can only prevent the economy from achieving the best results for everyone.