Wednesday, April 6, 2022

The Role of Revenue in Policy: Why to Tax, or Not

The role of taxation in American political thought changed significantly during the closing decades of the twentieth century and the opening decades of the twenty-first. Prior to that time, the primary role, if not the only role, of taxation was to generate revenue.

During the earlier decades of the twentieth century, there had been some effort to use taxation to guide behavior, e.g., so-called “sin taxes” like those placed on tobacco. But these examples were a small part of tax policy, and a small part of tax revenue.

The big change came when the idea gained popularity that tax cuts could be used to stimulate consumer spending. Whether the cuts were implemented as simple rate reductions or as “stimulus payments,” the government hoped to energize the economy as citizens spent the extra money. The government failed, however, to cut spending correspondingly, so the increased debt and deficit partially negated any stimulating effect.

David Stockman, who was the Director of the Office of Management and Budget from January 1981 to August 1985, explains:

Until then, conservatives had generally treated taxes as an element of balancing the expenditure and revenue accounts, not as an explicit tool of economic stimulus. All three postwar Republican presidents — Eisenhower, Nixon, and Ford — had even resorted to tax increases to eliminate red ink, albeit as a matter of last resort after spending-cut options had been exhausted.

While it is true that tax cuts sometimes stimulate the economy, and can even increase revenue according to the Laffer curve, few leaders prior to 1975 thought of cutting taxes for that purpose. The stimulating effect was seen as an incidental byproduct of tax cuts, not the goal of tax cuts.
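The Laffer-curve logic can be sketched with a stylized model (the base function and numbers here are purely illustrative assumptions, not an empirical claim): revenue equals the tax rate times the taxable base, and because the base shrinks as the rate rises, revenue is zero at both a 0% and a 100% rate and peaks somewhere in between.

```python
# Stylized Laffer curve: revenue = rate * base(rate).
# The linear base function is a hypothetical assumption for illustration only.

def revenue(rate, base_at_zero=100.0):
    """Tax revenue at a given rate in [0, 1]."""
    base = base_at_zero * (1.0 - rate)  # assumed: base falls linearly as the rate rises
    return rate * base

# Revenue vanishes at the extremes and peaks at an interior rate.
rates = [i / 100 for i in range(101)]
peak = max(rates, key=revenue)
print(peak)            # 0.5 under this symmetric toy model
print(revenue(0.0))    # 0.0
print(revenue(1.0))    # 0.0
```

The peak at 50% is an artifact of the symmetric toy base function; the real-world revenue-maximizing rate is an empirical question the sketch does not answer.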

Monday, January 17, 2022

Income Inequality: Why It’s Not as Bad as the Media Thinks, and Why the Numbers Are Misleading

A famous phrase — often but uncertainly attributed to Mark Twain — refers to the increasing evils of “lies, damned lies, and statistics.”

No matter who said it first, it’s true that numbers are used, misused, and abused, especially in political debates. In the early twenty-first century in the United States, debates about the nature, existence, and extent of so-called “income inequality” have made generous use of statistics.

These numbers demand examination. How does one quantify income? There are numerous ways. But income is not the only way to measure economic well-being, and perhaps not the most accurate way. Some economists point out that measuring consumption, as opposed to income, is a truer measure of one’s standard of living. Edward Conard writes:

Consumption is a more relevant measure of poverty, prosperity, and inequality. University of Chicago economist Bruce Meyer and the University of Notre Dame economist James Sullivan, leading researchers in the measurement of consumption, find that consumption has grown faster than income, faster still among the poor, and that inequality is substantially less than it appears to be.

Consumption is, after all, a measure of the items which constitute a standard of living: clothing, food, housing, transportation, etc.

Income and consumption are two variables which can increase or decrease independently of each other, as Edward Conard notes:

Measures of consumption paint a more robust picture of growth than proper measures of income.

Misleading income measures assume tax returns — including pass-through tax entities — represent households. They exclude faster-growing healthcare and other nontaxed benefits. They fail to account for shrinking family sizes, where an increasing number of taxpayers file individual returns. They don’t separate retirees from workers. They ignore large demographic shifts that affect the distribution of income.

It may well be a mistake to think of “income inequality” in simplistic terms as “the gap between the highest earners and the lowest earners,” as Ben Shapiro reports. There can be low earners whose standard of living is higher than the standard of living of high earners. A simple example is retirees, whose earnings may be low, but whose standard of living is supported by a lifetime of saving and investing.

More to the point, as noted above, a low earner may receive health insurance worth thousands of dollars, and therefore have a higher standard of living than someone whose nominal income is greater.

Income gaps have reliably “widened and narrowed over time,” Ben Shapiro explains, and there is no “correlation between levels of inequality of outcome and general success of the society or individuals within it.” Income inequality at any one point in time is misleading, because it is a continuously changing variable. Income inequality between various social classes is also misleading, because mobility means that individuals are constantly moving in and out of the various classes.

It’s quite possible for income inequality to grow while those at the bottom end of the scale get richer. In fact, that’s precisely what’s been happening in America: the middle class hasn’t dissipated, it’s bifurcated, with more Americans moving into the upper middle class over the past few decades. The upper middle class grew from 12 percent of Americans in 1979 to 30 percent as of 2014. As far as median income, myths of stagnating income are greatly exaggerated.

What is now called “income inequality” should be understood as an often transient condition. The fact that one person earns more and another person earns less is evil only if those individuals are irreversibly locked into those conditions. But in fact, most American wage-earners are in a position of mobility: they can work their way up, and earn more in the future.

Well-intentioned but mistaken efforts to “eliminate income inequality” lead to the unintended consequence of freezing individuals at certain income levels and reducing chances for advancement.

Income inequality exists everywhere, and “social justice” destroys personal liberty and exacerbates inequality.

Attempts to “eliminate income inequality” actually ossify inequality. Only the fluid system of a free market creates chances for individuals to move up in terms of their incomes.

Monday, January 3, 2022

Enslaving the Free Market: The Era of the “Bailout”

In September 2008, Lehman Brothers (officially Lehman Brothers Holdings, Inc.) filed for bankruptcy. The firm came to an end as part of an event known as the “subprime mortgage crisis.” Both executive policies and Congressional legislation gave rise to this event, or more precisely, this series of events.

The policies and legislation in question encouraged or required financial institutions to give loans and mortgages to customers who were manifestly unable to repay them. This money was lent primarily for the purpose of buying houses. The inevitable and foreseeable result was a wave of defaults and foreclosures: individuals and families unable to make their monthly payments.

As the number of defaults and foreclosures increased, the banks and other institutions that had lent the money, and now could not recover it, were left with little or nothing, and went bankrupt. The number of lenders going bankrupt grew, and so did the size of the institutions involved. The pattern culminated in the bankruptcy of Lehman Brothers.

Lehman Brothers was a major company. It had more than 26,000 employees and over $600 billion in assets.

Naturally, some observers were surprised, shocked, or worried. They assumed that the bankruptcy of a major company would cause trouble for the economy. They were wrong. They had forgotten the economic principle of “creative destruction.”

It is easy to assume that if a large corporation goes bankrupt, this will create problems like unemployment, inventory shortages, etc.

This assumption ignores the fact that when a business fails and collapses, it creates opportunities for new businesses which are better, more effective, more efficient, bigger, more profitable, and more adapted to the marketplace. The end of an old business creates space for a new business.

Metaphors may be useful in understanding this concept: One demolishes an old building in order to construct a new, better, larger building; one cuts down some old trees in a forest in order to plant younger and healthier trees.

A bankruptcy can create short term dislocation, like temporary unemployment and a dip in the stock market. In the long run, however, it can create more jobs than it destroyed, and better-paying jobs with better chances for advancement, resulting in a net increase in prosperity. In many scenarios, workers who are laid off eventually find employment at higher wages than the job they lost. The stock market, likewise, will not only recover from a downtick, but eventually go even higher in the wake of bankruptcy.

This principle is associated with a broad variety of economists: Joseph Schumpeter, Karl Marx, Werner Sombart, etc.

Adherence to this principle would have dictated that the government, in the wake of the Lehman Brothers collapse, refrain from any intervention and allow the next two companies in line, Goldman Sachs and Morgan Stanley, to go bankrupt as well. Had they gone bankrupt, their workers would have found new jobs at higher wages, the stock market would have recovered from a drop and gone on to new highs, and general prosperity would have increased.

Sadly, various elected and appointed leaders in government forgot this basic principle — or never knew it to begin with.

Congress passed several pieces of legislation, primarily the Emergency Economic Stabilization Act of 2008, which created the Troubled Asset Relief Program (TARP). This legislation gave billions of dollars to companies which were in danger of going bankrupt. In addition to Goldman Sachs and Morgan Stanley, other companies soon asked for help, including Citigroup, Chrysler, American Express, and many others. The money given to these businesses came from two sources: either it was confiscated from ordinary American citizens by means of taxes, or it was borrowed, and ordinary American citizens will be required to repay it by means of taxes.

Instead of allowing these corporations to go bankrupt — and only a few of them would have done so; the others simply asked for the money and got it — the TARP legislation kept them alive, but allowed them to remain inefficient and irresponsible. Had they gone out of business, new and more productive companies would have arisen in their places.

The end result was higher taxes for ordinary people, and debts which will have to be repaid with even higher taxes.

This colossal misjudgment was made possible by government officials who were ignorant of basic economic principles, or who ignored them, or who forgot them.

David Stockman was a U.S. Congressman and later Director of the Office of Management and Budget. Concerning TARP, he points out that many government officials understood why it was wrong:

Certainly President Eisenhower’s treasury secretary and doughty opponent of Big Government, George Humphrey, would never have conflated the future of capitalism with the stock price of two or even two dozen Wall Street firms. Nor would President Kennedy’s treasury secretary, Douglas Dillon, have done so, even had his own family’s firm been imperiled. President Ford’s treasury secretary and fiery apostle of free market capitalism, Bill Simon, would have crushed any bailout proposal in a thunder of denunciation. Even President Reagan’s man at the Treasury Department, Don Regan, a Wall Street lifer who had built the modern Merrill Lynch, resisted the 1984 bailout of Continental Illinois until the very end.

As David Stockman shows, this was not a “Democrat” issue or a “Republican” issue. It was an issue about the basic principles of economics. A free market must be allowed to find its own way to an equilibrium.

The women and men who promoted TARP were in many cases people of good will: they legitimately wanted to help. But even well-intentioned governmental interventions in a free market economy are harmful.

The economy organically works toward an equilibrium, and at that equilibrium point lies maximal prosperity for all. The blind forces of demand and supply distribute better wages and higher standards of living to everyone in the marketplace, from the smallest to the largest.

Statist intervention in markets can only prevent the economy from achieving the best results for everyone.

Friday, November 26, 2021

The Mint and the Pandemic: Making Coins

When the pandemic struck the world in March 2020, it was clear that it would have significant economic impacts. Exactly what those impacts would be was, however, at that time, not always clear.

One of the less obvious effects was a shortage of circulating coins in the United States. Billions of coins existed, but they were either in businesses which were temporarily or permanently closed, or in people’s homes; with millions of people under “lockdown,” those coins weren’t circulating.

The types of transactions which often involve coins were particularly hard-hit: people used credit cards more than cash and made more purchases online, while casual foot traffic in city centers was sparse to non-existent. Buying a newspaper, a cup of coffee, or a candy bar while walking downtown — once an ordinary part of daily life — quickly became rare.

To keep the economy alive, the U.S. Mint ramped up production to replace the coins which were frozen in idle cash registers or in homes. Writing for American Banker magazine, Jon Prior reports:

To help push more coins into circulation, the U.S. Mint last year boosted production to levels not seen since 2017.

The Mint’s two facilities in Denver and Philadelphia churned out 14.8 billion coins for circulation in 2020, up 26% from less than 12 billion the year before, according to data the agency provided to American Banker.

The effort was part of a plan between the government, coin collection companies, retailers and banks to cure a shortage in tills across the U.S. as in-person spending slowed in the early months of the pandemic and online and card transactions soared. The sudden scarcity of change was one of the unseen economic side-effects of the coronavirus pandemic, but that boost in production, combined with increased economic activity in recent months, means that coin circulation is finally returning to normal, industry officials say.

The extra production by the Mint happened at a time when maintaining normal production levels was already a challenge. The increased mintages represent a heroic effort.

In raw numbers, for example, the total number of nickels produced in 2018 was 1,256.4 million; in 2019 it was 1,094.89 million; but the pandemic in 2020 pushed the mintage to 1,623.1 million. The increase in output was shared by both the Denver mint and the Philadelphia mint, the only two mints in the United States which produce circulating coins. Smaller mints in San Francisco and West Point produce coins only for investing and collecting, not for retail circulation.
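The size of that swing in nickel production is easy to quantify. A short calculation, using the mintage totals quoted above, shows that output fell about 13 percent in 2019 and then jumped roughly 48 percent in 2020:

```python
# Nickel mintages for circulation, in millions of coins (figures quoted above).
nickels = {2018: 1256.4, 2019: 1094.89, 2020: 1623.1}

def pct_change(old, new):
    """Percent change from old to new."""
    return (new - old) / old * 100

drop_2019 = pct_change(nickels[2018], nickels[2019])
jump_2020 = pct_change(nickels[2019], nickels[2020])
print(round(drop_2019, 1))  # -12.9  (production fell in 2019)
print(round(jump_2020, 1))  # 48.2   (pandemic-driven surge in 2020)
```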

Similar increases were achieved in the production of the dime and quarter.

On the other hand, coins deemed less essential to commerce — the penny, the half-dollar, and the dollar coin — saw production levels in 2020 similar to, or slightly lower than, those of the previous two years, as resources were directed to the more urgently needed coins.

Monday, July 12, 2021

Carter’s Accomplishments: The 39th President

Historians are often tempted to devote little time or energy to studying the presidency of Jimmy Carter. If they do pay attention to his one four-year term in office, they routinely dismiss his administration as a failure. But he might merit a second look.

Carter continued at least two agenda items from his predecessor, President Gerald R. Ford, whom Carter defeated in the November 1976 election. Upon taking office in January 1977, Carter embraced both Ford’s affection for deregulation and Ford’s commitment to take an unwavering stance in support of human rights.

In the transportation sector, Carter achieved some milestones of deregulation, as historian Kai Bird writes:

Despite his aversion to political machinations — such as cutting deals with smarmy congressmen — Carter was an effective and extraordinarily productive president. He deregulated the airline industry, making it possible for middle-class Americans to fly.

He was willing to contradict one of his party’s major allies: organized labor. The Democratic Party had significant support from labor unions at the time. Carter risked their dissent:

Trade unions opposed his deregulation of airlines, trucking and railroads.

Although deregulation ultimately energized the economy and helped working-class families, the move was one factor in Carter’s loss in the November 1980 election. Many union members voted for Carter’s opponent, Ronald Reagan.

Carter maintained President Ford’s focus on human rights. During the Ford, Carter, and Reagan administrations, this focus must be understood in the context of the Cold War. More and more evidence was coming to light, revealing the ongoing violation of human rights by the Soviet Socialists, spanning decades from the 1930s to the 1980s.

President Ford drew international attention to the question with a document known as the Helsinki Accords. Carter continued Ford’s pattern. A global consensus among many nations emerged, and international sentiment was against the USSR. During Carter’s administration, Kai Bird notes,

The principle of human rights became a cornerstone of America’s foreign policy.

Jimmy Carter was the first president to use his nickname in an official capacity. Rarely, if ever, was he referred to as “James,” but routinely as “Jimmy.” This was a departure from two centuries of precedent.

Gerald Ford was never officially listed as “Jerry,” and John Kennedy was never officially cited as “Jack.” The nicknames were only for the closest friends and family. But Jimmy Carter was known universally by that name.

Some later presidents would follow Carter’s pattern: Bill Clinton was never cited as “William,” and Joe Biden was never listed as “Joseph.”

Although Carter failed to get reelected, and was thereby limited to four years in office, his presidency nonetheless merits attention.

Friday, July 9, 2021

Urban Planning: The Third Way

City planners in the United States and elsewhere have long been subject to the dogma that there are two options for cities. The first option is the automotive city, with rings and spokes of multi-lane limited-access freeways and highways, large multi-lane surface streets as the main arteries, smaller surface streets branching off the larger ones, and many parking spaces. The second option is the walkable city, with generous sidewalks, bike paths, and public transportation like streetcars and subways.

The dichotomy between these two has sometimes become so extreme that when one option is chosen, the other option is not only ignored, but actively discouraged. Planners who choose the automotive option will deliberately omit sidewalks; planners who choose the walkable option will deliberately work to reduce the number of parking spaces and make the driving experience frustrating in other ways.

This binary framework reduces urban planning to “either/or” decisions, simplistic thinking, and in some cases political conflicts.

There is a third way. Most cities in the United States — small, medium, or large — can embrace both of these options simultaneously. A city can be walkable and automotive at the same time. It can have a robust public transportation system and lots of parking at the same time.

There are exceptions: cities whose geographical peculiarities make them less flexible, like San Francisco and parts of New York.

But other cities can take advantage of America’s resources: lots of open land and the ability to produce concrete, steel, and asphalt in large amounts.

An office worker in a city might choose to be the only passenger in her or his SUV driving to work two days a week, bicycle to work another two days per week, and take the streetcar to work on the final day of the work week. City planners can make all of these options equally convenient.

The economics of this arrangement can become self-sustaining: more people will be lured into the city from the suburbs, either for an afternoon shopping trip, or to live in the city permanently. Increased revenue will pay for the infrastructure.

Thursday, July 1, 2021

Nixon’s Visit to China: Playing Cold War Communist Powers Against Each Other

The presidency of Richard Nixon, from January 1969 to August 1974, is known for many things. One of them is Nixon’s engagement with China. His visit to China in February 1972 was the first time a U.S. president had set foot in the country. China and the U.S. had had no diplomatic communication with each other for over twenty years.

On a surface level, Nixon’s rapprochement with China could be seen as a softening of America’s resistance to communism during the Cold War. He was granting diplomatic recognition to a communist regime which was responsible for the deaths of millions of Chinese and for egregious violations of human rights and civil rights.

On a deeper level, however, Nixon’s China policy was a clever way to play two communist nations against each other. From 1949, when the communists took over China, the Soviet Socialists had an alliance with China. Mao had a comfortable working relationship with Stalin.

But after Stalin died in 1953, China’s alliance with the Soviet Socialists began to deteriorate. Nixon saw this as an opportunity. When Nixon was in China, the Soviets worried that a close relationship between America and China would leave the USSR out. So Nixon visited Moscow in May 1972. At that point, the Chinese worried that America would develop a good connection to the Soviets.

Nixon was able to play the two communist nations against each other. Nixon’s successor, President Gerald Ford, recalls:

Our new ties with the Soviets were possible, I believed, only because the Soviet leaders were becoming concerned about developments within the People’s Republic of China. Both Mao Tse-tung and Chou En-lai were making increasingly antagonistic speeches toward the Kremlin. Nixon sensed that the Chinese leaders feared and distrusted the Soviets. Their long-standing border dispute was a festering sore. Mao had never forgiven the Soviets for mistreating him in the 1950s, and he was concerned about Soviet intentions in the Pacific. Skillfully, Nixon moved to take advantage of the split.

U.S. diplomat Richard Haass sees Nixon’s policy as a kind of balance. Nixon’s goal, according to Haass, was to make China and the USSR feel equally jealous of each other’s relationship with America:

The purpose of the policy developed by Richard M. Nixon and Henry Kissinger was to use China as a counterweight to the Soviet Union and shape China’s foreign policy, not its internal nature.

The timing of Nixon’s visits to China and to the USSR must be understood in the context of the Vietnam War. It was not until 1973 that the final peace documents were signed, and that the U.S. began withdrawing its troops from Vietnam.

Both China and the USSR supported North Vietnam to varying degrees during the war. China’s support for North Vietnam continued after 1968, but at a reduced level, as China began to reserve more soldiers and equipment for anticipated direct combat between the USSR and China.

In cementing China’s split from the Soviet Union, the United States gained leverage that contributed to the Cold War ending when and how it did.

Although China was interested in empire-building in southeast Asia, in the late 1960s and early 1970s it was not as aggressive in the South China Sea region as it became after 1990 and especially after 2000.

China’s ability to pose a military and economic threat to the nations of the South China Sea region was limited in the 1960s and 1970s. Nixon’s rapprochement with China cannot be seen as opening the door to the Chinese expansionism that the following decades would see. At the time of Nixon’s visit, China lacked the military power and economic power needed to take control of the South China Sea.