Monday, July 29, 2019

Becoming President: Nixon in 1968

The political mood in the United States in 1968 was tumultuous. Richard Nixon competed with Ronald Reagan for the Republican nomination.

In the Democratic Party, it seemed at first that incumbent President Lyndon Johnson would easily win the party’s nomination; but in March 1968, Johnson announced that he would not seek re-election.

After LBJ withdrew from the contest, the field remained crowded: Hubert Humphrey, Robert Kennedy, and Eugene McCarthy all sought the Democratic nomination, while George Wallace mounted a third-party bid. It was not at all clear which of the Democrats would win the party's nomination.

In June 1968, a Palestinian terrorist murdered Robert Kennedy. This created further confusion in the Democratic Party. George McGovern entered the race after Kennedy’s death, additionally complicating the situation.

Eugene McCarthy represented the anti-war radicals; Humphrey represented the labor unions and major urban political machines within the Democratic Party; Wallace represented the segregationists who opposed Nixon’s support of civil rights legislation.

At the Democratic Party's national convention in Chicago in August 1968, the party ultimately chose Hubert Humphrey as its candidate, but the 'big story' in the media was the major rioting in downtown Chicago in the area surrounding the convention. Radicals and revolutionaries of various stripes, beginning with anti-war activists but then spreading to all manner of troublemakers, caused damage and injury. Hundreds of rioters and hundreds of police were injured.

When the dust settled, then, it was Nixon vs. Humphrey in 1968. Recalling the campaign, Donald Rumsfeld writes:

Amid anger and protest, Nixon offered himself as a source of reassurance and stability. For voters it was a welcome change from the anguished presidency of Lyndon Johnson. But because he had been defeated in two high-profile elections during the past decade, he had to battle the impression that he was a loser.

Humphrey suffered from the internal fractures within his Democratic Party. By contrast, the Republican Party was unified behind Nixon.

But Nixon had suffered a prominent defeat in the 1960 presidential election against Kennedy, and had further endured a loss in the 1962 California gubernatorial election. How could Nixon shake the reputation of being a loser?

Nixon gained much public sympathy after the 1962 election, when Howard K. Smith, a news broadcaster on the ABC network, invited Alger Hiss to comment on Nixon's losses.

Alger Hiss was a convicted felon: a Soviet agent who had passed U.S. government secrets to Moscow, and who had given misleading advice to U.S. policymakers, including President Roosevelt. The American public was not happy with the network for giving airtime to a Soviet Socialist espionage agent.

Rebounding from his political losses, and gaining public sympathy after Hiss's TV appearance, Nixon emerged as a strong leader. Nixon went on to win the November 1968 election, carrying the Electoral College decisively even though the popular vote was close.

Monday, June 17, 2019

Ronald Reagan vs. John Maynard Keynes

Much of President Reagan's economic policy was directed at undoing New Deal policies left over from the Great Depression. These policies were nearly fifty years old by the time Reagan was in office, and he thought that they needed to be updated, revised, or replaced.

Reagan was elected in 1980 and took office in 1981. In a 1986 comment in the Wall Street Journal, he noted that Keynesian economics were “a legacy from a period when I was back in college studying economics.”

Indeed, Keynesian theories began to appear in early publications like Indian Currency and Finance, published by Keynes in 1913. His major publications came later, like The Economic Consequences of the Peace in 1919, A Tract on Monetary Reform in 1923, The Economic Consequences of Mr. Churchill in 1925, and his large systematic explication in A Treatise on Money in 1930.

Keynesian views were widely understood by the time Ronald Reagan graduated from college in 1932.

Many historians see the influence of Keynes in FDR’s “New Deal” policies, policies which were designed to heal the damage done by the Great Depression, but which instead caused the Depression to last longer and do deeper harm to the economy.

By the time Ronald Reagan became president, many voters believed that it was time to discard Keynesian economics and find other, more credible, economic policy systems.

Friday, March 15, 2019

Re-Examining the Integration Narrative: School Desegregation Before Brown

The simple narrative taught in most history textbooks is that prior to 1954, schools in the United States were divided into two categories: those for Black children and those for White children. After the landmark decision in Brown v. Board of Education of Topeka, schools across the country were gradually integrated, sometimes against resistance, to the point at which they are now largely, if not entirely, integrated.

This simple narrative is wrong.

When the case of Oliver Brown, et al. v. Board of Education of Topeka, et al. was first presented to the Supreme Court in 1952, there were already many integrated schools around the United States.

Desegregated schools were nothing new; in fact, they were almost a century old. Some schools, like Berea College in Kentucky, were integrated even prior to the Civil War.

There is a great deal of documentation about desegregation before 1954.

Yearbooks from high schools reveal substantial integration among students in towns like Muncie, Indiana. In the mid-1940s, African-American students and white students were mixed together in academic classes, in sports, and in music groups like choirs.

Similar instances of desegregation appear in high schools in other cities.

Since the time of the Civil War, integrated schools had been more common in the North than in the South, more common in smaller towns than in large cities, and more common in parochial private schools than in preparatory boarding schools. The main targets of the Supreme Court's Brown decision were public high schools in large cities in the South.

Some small towns integrated out of economic necessity rather than moral principle. Many parochial schools considered it part of their mission to integrate. Some schools in the North had never been segregated, and so had no need to desegregate.

Oversimplified narratives in history textbooks could lead students to assume that all schools were segregated prior to 1954, while in reality, by 1954, there were large numbers of integrated schools.

Sunday, February 17, 2019

Why Do Governments Do What They Do? Economic Policy As Political Choice

Governments, all too often, intervene in economies, regulating, incentivizing, subsidizing, and generally gumming up the works, making marketplaces less nimble, and slowing creative processes. Why?

Some imagine the government as the neutral umpire, the objective referee who keeps the playing field level and fair. Others imagine government as the benevolent patriarch, reaching in with parental wisdom to adjust market forces. But neither of these images matches reality.

Instead, governments routinely get in the way of wealth creation, and inhibit the very opportunities which they often claim to foster. James Buchanan, recipient of the Nobel Prize, sought to explain this quirky characteristic of governments, as journalist Dylan Matthews writes:

Buchanan is most famous for breathing new life into political economy, the subfield of economics and political science that studies political institutions, and in particular how they affect the economy. In particular, Buchanan is strongly associated with public choice theory, an approach which assumes that individual actors in political contexts are out for themselves, and then uses game theory to model their choices, with the hope of gaining insight into the incentives faced by political actors.

Policymakers have various motives, and the average of these motives - much like the net effect of several vectors in physics and engineering - determines their policy choices. Legislators are not thinking in the abstractions of the equations and graphs which constitute academic economics.

Governmental regulators are flesh-and-blood human beings who are concerned about public perceptions, about private success, and whose preconceptions and ideologies shape their decisions with as much efficacy as empirical data.

Before Buchanan, economics was primarily about how individuals make choices in the private sphere. He was among the first to argue that it could explain their choices in the public sphere as well. Traditionally, economists have treated the government as a dictatorial “social planner” which is capable of impartially correcting failures in private markets. Buchanan's contribution was pointing out that that social planner also responded to incentives, and that they sometimes pushed him to make markets worse off than he found them.

Legislators, even those with the best of motives, nudge policies in suboptimal directions merely by being one of many vectors which are averaged together: a perfect policy, when averaged with many other policies, does not steer the outcome toward perfection.
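To make the vector analogy concrete, here is a minimal numerical sketch in Python. The two issue dimensions and the policy positions in it are invented purely for illustration and correspond to nothing in the historical record; the point is only the arithmetic, namely that even when one legislator holds the "perfect" position, averaging it with several other positions lands far from perfection.

# A minimal sketch of the "vector averaging" analogy above.
# The policy positions below are invented purely for illustration.

def average_policy(policy_vectors):
    """Return the component-wise average of several policy 'vectors'."""
    n = len(policy_vectors)
    dims = len(policy_vectors[0])
    return [sum(v[i] for v in policy_vectors) / n for i in range(dims)]

# Suppose the "perfect" policy sits at (1.0, 1.0) on two issue dimensions,
# and one legislator holds it exactly while four others pull elsewhere.
perfect = [1.0, 1.0]
others = [[0.2, -0.5], [0.0, 0.3], [-0.4, 0.1], [0.6, 0.0]]

result = average_policy([perfect] + others)
print(result)  # roughly [0.28, 0.18], nowhere near the perfect (1.0, 1.0)

The average moves toward the crowd, not toward the best vector; adding more voices only dilutes the perfect policy further.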

The legislators who have less than the best motives will clearly see opportunities for gain as they shape policy. This need not be flagrantly illegal or immoral; it may be merely an opportunity to benefit their constituencies, which is, after all, the reason they were elected. But what benefits their constituents in the short term might harm other areas of the country in the long run, and therefore ultimately be suboptimal for everyone.

Government intervention will never be the crystalline abstraction which some academic economists hope it to be. At its best, it will be an approximation, an averaging of policies, in which even very good policies, when averaged, might produce less than very good results.

The safest conclusion, itself also not quite perfect, is to minimize regulation. Given that governmental intervention is never perfect, it would be wise to have as little of it as possible.

Saturday, February 9, 2019

The Story Behind the Story: Flint Was Not the First Water Crisis

The problems with government-provided drinking water pipelines in Flint, Michigan began in 2014, and by 2015 had attracted a high level of attention in the news media. Since then, the problems have been resolved, and the water supply for the citizens of Flint is once again safe to drink.

This story has been well documented in various media - and from various political viewpoints. The basic narrative is clear.

What is less well known is that this is not the first such water crisis.

A decade earlier, a similar series of events unfolded in the comfortable upper-middle-class neighborhoods of Washington, D.C.

The story broke in 2002 in little-known, alternative media outlets like the Washington City Paper. Surprisingly, there was little response from the government or from the mainstream media.

The neighborhoods affected by lead in their drinking water were economically middle- or upper-class, and racially mostly white.

The problem was largely ignored for over two years. Citizens were drinking lead-tainted water on a large scale.

In 2004, the Washington Post began to follow the story. Gradually, the size and scope of the problem became clear to the mainstream media. Eventually, action was taken, and the problem was corrected.

Comparing Washington to Flint, several contrasts emerge.

Flint’s problem was publicized and corrected quickly - within a year. Washington’s problem was ignored for over two years, and only after two years were steps taken to correct the problem.

Why the difference?

Flint had two factors which worked in its favor:

First, Flint has a manufacturing sector; Washington doesn't. While Flint's factories have declined in number and activity over the decades, there are still functioning manufacturing facilities in the city. The very first warning about water quality problems came from a General Motors plant. GM was concerned that the corrosive water could damage machinery and engine parts. If GM hadn't raised the alarm, the problem could have continued for much longer.

Second, Flint has a significant African-American population; the affected neighborhoods in Washington, D.C. were mainly white. The residents of Flint were used to alerting political activists about their concerns, and the activists, in turn, were in the habit of worrying about Flint. Activists were therefore more likely to engage with a public health issue in Flint than in a comfortable neighborhood in Washington.

So it was that Flint’s water-quality problems gained attention quickly and were corrected quickly, while the citizens of Washington were exposed to high levels of lead for a longer time. The public health problems in Washington are correspondingly greater, as data show.

Friday, January 4, 2019

The Cold War: An Unwanted Leadership Role for America

The Cold War was a period which greatly shaped the second half of the twentieth century. Its roots, however, were already present early in the first half of the century.

As early as 1919, the newly-formed Soviet Union, which was still fighting a civil war to stabilize its existence, was organizing covert activities inside the United States – activities designed to destabilize and eventually overthrow the government. The communists in America hoped to do away with the Constitution, with personal freedom, and with political liberty.

Although this espionage network had been operating for nearly three decades, it was not until 1946 that the period known as the 'Cold War' began. The start of this era was marked by the end of World War II, and by the USSR's procurement of atomic weapon technology.

The end of WW2 was also the end of an uncomfortable but necessary cooperation between the United States and the Soviet Socialists. With the pretense of an alliance gone, the communists could unabashedly pursue their goal of dominating nations in Europe and Asia, and eventually, their goal of placing America under a communist dictatorship, as historian Robert Maginnis writes:

World War II ended in 1945, and America’s leaders anticipated that the Soviet Union would continue the level of cooperation enjoyed during the war years. After all, President Franklin Roosevelt, an ideological progressive, believed the partnership that defeated the Axis Powers, which included Russia, would coalesce around his vision of a United Nations that would prevent future world wars. Roosevelt’s dream for the United Nations is traceable to his progressive ideological brother, President Woodrow Wilson, a man who shared a similar ambition after World War I in the form of the League of Nations.

History repeats itself: two American presidents, both at the end of a world war, blinded by the illusion that a group of international ambassadors would be able to prevent future wars. Roosevelt and Wilson pursued a noble and idealistic goal, but an impractical and impossible one.

If we believe that both of these presidents were sincere in wanting a world parliament to preclude wars, then we can conclude that both were blind to the risk that such well-intentioned efforts could prove to be a facade behind which subtle influences would work to erode national sovereignty rather than strive toward world peace. Instead of preventing war, the League of Nations and the United Nations could serve as cover to hide the work of agents who wanted to undermine the United States Constitution.

Other leaders shared the desire for peace, but saw that these gatherings of diplomats were not the mechanism to ensure peace. It became clear that these conventions would in fact attempt to override the will of the American voters.

Free citizens can remain free only if no external powers violate the results of free and fair elections. Once the people have spoken, no gathering of foreign diplomats has the right to overturn their vote.

But the League of Nations was headed in that direction, and the United Nations has in fact reached the point at which it sees itself as authorized to ignore the expressed will of the people. The USSR hoped to use the United Nations to dissolve personal freedom and individual political liberty.

The League failed, thanks to Republican senators suspicious of international entanglements, and Roosevelt's grand hope for a "true war-preventing organization" never really materialized after the Second World War, because Roosevelt's United Nations ultimately became little more than a toothless, empty-headed debating forum on the East River in Manhattan, New York.

After the concept of the United Nations proved to be unrealistic, the tensions which fueled the Cold War took another turn. In the postwar years, as the Soviet Socialists expanded into country after country, establishing their communist military dictatorships, the Americans and the nations of western Europe used the word ‘containment’ to describe their response: they hoped to stop Soviet expansion.

The United States inherited the global leadership mantle from the Second World War, a role it was ill prepared to fulfill. It quickly saw a rising Soviet Union that had to be stopped, and therefore it embraced containment as the only viable strategy. But that containment strategy was primarily an exercise of military power: it created mutual defense pacts that amounted to little more than an anti-communist alliance, and it spurred an arms race.

The Cold War created a bipolar paradigm, in which most of the world’s nations allied themselves either with the West or the East. The West promoted free market economics, personal political liberty, property rights, a respect for the individual, and the freedoms of speech, religion, and the press.

The East looked to the state, the government, to control and own property, and to manage industrial production and the economy. The state imposed its own education with no alternatives, imposed atheism, and imposed socialist dogma. Individuals were told that they could not make any significant choices, whether in matters of the economy or in the field of politics. There was no free speech or free press; instead, all aspects of life were saturated with socialist propaganda.

A small number of the world's nations did not fit comfortably into this East-West pattern. The Islamic despots of the Near East and Middle East, for example, did not embrace Soviet socialism, because of its strictly enforced atheism and because they had no interest in the USSR's economic agenda. These Muslim regimes, however, also did not like the personal freedom which the West promoted.

Saturday, November 24, 2018

Twenty-First Century Universities: Civilization's Suicide Pill

A February 2014 edition of The Michigan Daily carried a front-page story about a "$3 million donation to create virtual curriculum for Fall 2015." The story appeared under the headline "Grant expands Islamic studies" and featured quotes from Professor Pauline Jones Luong.

The article prompts certain questions: are there similar grants for the study of Judaism, Hinduism, Buddhism, or Christianity? Or for lesser-known religions such as Jainism or Sikhism?

From the article, one has no evidence to conclude anything specific about the Islamic Studies program, but in the larger context of the contemporary university, there is reason to wonder if it will promote Islam rather than present Islam. There is reason to wonder if it will shy away from concepts like Hadd and Hudud.

The larger context which motivates these questions is the contemporary university’s subversiveness. Western Civilization has historically valued political liberty; modern universities tend to stifle any diversity of political thought, even as they claim to champion diversity of race, religion, or gender.

European culture has fostered individual freedom; contemporary universities have worked to dampen freedom of speech, freedom of the press, and other forms of personal liberty.

It is reasonable to ask, then, whether the modern university will present Islam in a manner which complements the university’s general attack on Western Civilization. There is certainly a great deal within Islam which does not promote individual political liberty or personal freedom. Historically, Muslim-majority nations have been hesitant to adopt free speech, or to adopt a form of government composed of freely-elected representatives.

Among those Muslim-majority nations which have instituted some form of free election, the ideological implications of Islam’s social and political vision prevent a lively debate about issues which touch the Islamic worldview.

Will the contemporary university teach about Islam in a way which helps students to appreciate the unprecedented degree of freedom which Western Civilization has given? Or will that freedom, and the dangers which threaten it, be ignored?