Monday, June 17, 2019

Ronald Reagan vs. John Maynard Keynes

Much of President Reagan’s economic policy was directed at undoing New Deal policies which were leftover from the Great Depression. These policies were fifty years old by the time Reagan was in office, and he thought that they needed to be updated, revised, or replaced.

Reagan was elected in 1980 and took office in 1981. In a 1986 comment in the Wall Street Journal, he noted that Keynesian economics were “a legacy from a period when I was back in college studying economics.”

Indeed, Keynesian theories began making an appearance with early publications like Indian Currency and Finance, published by Keynes in 1913. His major works came later: The Economic Consequences of the Peace in 1919, A Tract on Monetary Reform in 1923, The Economic Consequences of Mr. Churchill in 1925, and his large-scale systematic explication, A Treatise on Money, in 1930.

Keynesian views were widely understood by the time Ronald Reagan graduated from college in 1932.

Many historians see the influence of Keynes in FDR’s “New Deal” policies, policies which were designed to heal the damage done by the Great Depression, but which instead caused the Depression to last longer and do deeper harm to the economy.

By the time Ronald Reagan became president, many voters believed that it was time to discard Keynesian economics and find other, more credible, economic policy systems.

Friday, March 15, 2019

Re-Examining the Integration Narrative: School Desegregation Before Brown

The simple narrative taught in most history textbooks is that prior to 1954, schools in the United States were divided into two categories: those for Black children and those for White children. After the landmark decision in Brown v. Board of Education of Topeka, schools across the country were gradually integrated, sometimes against resistance, to the point at which they are now largely, if not entirely, integrated.

This simple narrative is wrong.

When the case of Oliver Brown, et al. v. Board of Education of Topeka, et al. was first presented to the Supreme Court in 1952, there were already many integrated schools around the United States.

Desegregated schools were nothing new; in fact, they were almost a century old. Some schools, like Berea College in Kentucky, were integrated even prior to the Civil War.

There is a great deal of documentation about desegregation before 1954.

Yearbooks from high schools reveal substantial integration among students in towns like Muncie, Indiana. In the mid-1940s, African-American students and white students were mixed together in academic classes, in sports, and in music groups like choirs.

Similar instances of desegregation appear in high schools in other cities.

Integrated schools since the time of the Civil War were more common in the North than in the South, more common in smaller towns than in large cities, and more common in parochial private schools than in preparatory boarding schools. The main targets of the Supreme Court’s Brown decision were public high schools in large cities in the South.

Some small towns integrated out of economic necessity rather than moral principle. Many parochial schools considered it part of their mission to integrate. Some schools in the North had never been segregated, and so had no need to desegregate.

Oversimplified narratives in history textbooks could lead students to assume that all schools were segregated prior to 1954, while in reality, by 1954, there were large numbers of integrated schools.

Sunday, February 17, 2019

Why Do Governments Do What They Do? Economic Policy As Political Choice

Governments, all too often, intervene in economies, regulating, incentivizing, subsidizing, and generally gumming up the works, making marketplaces less nimble, and slowing creative processes. Why?

Some imagine the government as a neutral umpire, an objective referee who keeps the playing field level and fair. Others imagine government as a benevolent patriarch, reaching in with parental wisdom to adjust market forces. But neither image matches how governments actually behave.

Instead, governments routinely get in the way of wealth creation, and inhibit the very opportunities which they often claim to foster. James Buchanan, recipient of the Nobel Prize, sought to explain this quirky characteristic of governments, as journalist Dylan Matthews writes:

Buchanan is most famous for breathing new life into political economy, the subfield of economics and political science that studies political institutions, and in particular how they affect the economy. In particular, Buchanan is strongly associated with public choice theory, an approach which assumes that individual actors in political contexts are out for themselves, and then uses game theory to model their choices, with the hope of gaining insight into the incentives faced by political actors.

Policymakers have various motives, and the average of these motives - much like the net effect of several vectors in physics and engineering - determines their policy choices. Legislators are not thinking in the abstractions of equations and graphs which constitute academic economics.

Governmental regulators are flesh-and-blood human beings who are concerned about public perceptions, about private success, and whose preconceptions and ideologies shape their decisions with as much efficacy as empirical data.

Before Buchanan, economics was primarily about how individuals make choices in the private sphere. He was among the first to argue that it could explain their choices in the public sphere as well. Traditionally, economists have treated the government as a dictatorial “social planner” which is capable of impartially correcting failures in private markets. Buchanan's contribution was pointing out that that social planner also responded to incentives, and that they sometimes pushed him to make markets worse off than he found them.

Legislators, even those with the best of motives, nudge policies in suboptimal directions merely by being one of many vectors which will be averaged out: a perfect policy, when averaged with many other policies, does not steer policy toward perfection.
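The vector-averaging analogy above can be made concrete with a small numeric sketch. The two-dimensional "policy space" and the specific numbers here are invented purely for illustration:

```python
# Purely illustrative: treat each policymaker's preferred policy as a
# direction in a two-dimensional "policy space" and average them.

def average_policy(vectors):
    """Component-wise mean of a list of 2-D policy vectors."""
    n = len(vectors)
    return (sum(v[0] for v in vectors) / n,
            sum(v[1] for v in vectors) / n)

# One "perfect" policy direction plus three others pulling elsewhere.
perfect = (1.0, 0.0)
others = [(0.0, 1.0), (-0.5, 0.5), (0.2, -0.3)]

blended = average_policy([perfect] + others)

# The blended outcome points somewhere else entirely: including one
# perfect vector in the average does not make the average perfect.
print(blended)  # roughly (0.175, 0.3), not (1.0, 0.0)
```

As the sketch shows, even a "perfect" input is diluted by the other vectors it is averaged with, which is exactly the point of the paragraph above.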

Legislators with less than the best motives will clearly see opportunities for gain as they shape policy. This need not be flagrantly illegal or immoral; it may be merely an opportunity to benefit their constituencies, which is, after all, the reason they were elected. But what benefits their constituents in the short term might harm other areas of the country in the long run, and therefore ultimately be suboptimal for everyone.

Government intervention will never be the crystalline abstraction which some academic economists hope it to be. At its best, it will be an approximation, an averaging of policies, in which even very good policies, when averaged, might produce less than very good results.

The safest conclusion, itself also not quite perfect, is to minimize regulation. Given that governmental intervention is never perfect, it would be wise to have as little of it as possible.

Saturday, February 9, 2019

The Story Behind the Story: Flint Was Not the First Water Crisis

The problems with the government-provided drinking-water system in Flint, Michigan began in 2014, and by 2015 had attracted a high level of attention in the news media. Since then, the problems have been resolved, and the water supply for the citizens of Flint is once again safe to drink.

This story has been well documented in various media - and from various political viewpoints. The basic narrative is clear.

What is less well known is that this is not the first such water crisis.

A decade earlier, a similar series of events unfolded in the comfortable upper-middle-class neighborhoods of Washington, D.C.

The story broke in 2002 in little-known, alternative media outlets like the Washington City Paper. Surprisingly, there was little response from the government or from the mainstream media.

The neighborhoods affected by lead in their drinking water were economically middle- or upper-class, and racially mostly white.

The problem was largely ignored for over two years. Citizens were drinking lead-tainted water on a large scale.

In 2004, the Washington Post began to follow the story. Gradually, the size and scope of the problem became clear to the mainstream media. Eventually, action was taken, and the problem was corrected.

Comparing Washington to Flint, several contrasts emerge.

Flint’s problem was publicized and corrected quickly - within a year. Washington’s problem was ignored for over two years, and only after two years were steps taken to correct the problem.

Why the difference?

Flint had two factors working in its favor:

First, Flint has a manufacturing sector; Washington doesn’t. While Flint’s factories have declined in number and activity over the decades, there are still functioning manufacturing facilities in the city. The very first warning about water quality problems came from a General Motors plant. GM was concerned that high levels of lead in the water could damage machinery. If GM hadn’t raised the alarm, the problem could have continued for much longer.

Second, Flint has a significant African-American population; the affected neighborhoods in Washington, D.C. were mainly white. The residents of Flint were used to alerting political activists about their concerns, and the activists, in turn, were in the habit of worrying about Flint. Activists were more likely to engage about a public health issue in Flint than in a comfortable neighborhood in Washington.

So it was that Flint’s water-quality problems gained attention quickly and were corrected quickly, while the citizens of Washington were exposed to high levels of lead for a longer time. The public health problems in Washington are correspondingly greater, as data show.

Friday, January 4, 2019

The Cold War: An Unwanted Leadership Role for America

The Cold War was an era which greatly shaped the second half of the twentieth century. Its roots, however, were already present early in the first half of the century.

As early as 1919, the newly-formed Soviet Union, which was still fighting a civil war to stabilize its existence, was organizing covert activities inside the United States – activities designed to destabilize and eventually overthrow the government. The communists in America hoped to do away with the Constitution, with personal freedom, and with political liberty.

Although this espionage network had already existed for nearly three decades, it was not until 1946 that the period known as the ‘Cold War’ began. The start of this era was marked by the end of World War II, and by the USSR’s procurement of atomic weapon technology.

The end of WW2 was also the end of an uncomfortable but necessary cooperation between the United States and the Soviet Socialists. With the pretense of an alliance gone, the communists could unabashedly pursue their goal of dominating nations in Europe and Asia, and eventually their goal of placing America under a communist dictatorship, as historian Robert Maginnis writes:

World War II ended in 1945, and America’s leaders anticipated that the Soviet Union would continue the level of cooperation enjoyed during the war years. After all, President Franklin Roosevelt, an ideological progressive, believed the partnership that defeated the Axis Powers, which included Russia, would coalesce around his vision of a United Nations that would prevent future world wars. Roosevelt’s dream for the United Nations is traceable to his progressive ideological brother, President Woodrow Wilson, a man who shared a similar ambition after World War I in the form of the League of Nations.

History repeats itself: two American presidents, both at the end of a world war, blinded by the illusion that a group of international ambassadors would be able to prevent future wars. Roosevelt and Wilson pursued a noble and idealistic goal, but an impractical and impossible one.

If we believe that both of these presidents were sincere in wanting a world parliament to preclude wars, then we can conclude that both were blind to the risk that such well-intentioned efforts could prove to be a facade behind which subtle influences would work to erode national sovereignty rather than strive toward world peace. Instead of preventing war, the League of Nations and the United Nations could serve as cover for agents who wanted to undermine the United States Constitution.

Other leaders shared the desire for peace, but saw that these gatherings of diplomats were not the mechanism to ensure peace. It became clear that these conventions would in fact attempt to override the will of the American voters.

Free citizens can remain free only if no external powers violate the results of free and fair elections. Once the people have spoken, no gathering of foreign diplomats has the right to overturn their vote.

But the League of Nations was headed in that direction, and the United Nations has in fact reached the point at which it sees itself as authorized to ignore the expressed will of the people. The USSR hoped to use the United Nations to dissolve personal freedom and individual political liberty.

The League failed, thanks to Republican senators suspicious of international entanglements, and Roosevelt’s grand hope for a “true war-preventing organization” never really materialized after the Second World War, because Roosevelt’s United Nations ultimately became little more than a toothless, empty-headed debating forum on the East River in Manhattan, New York.

After the concept of the United Nations proved to be unrealistic, the tensions which fueled the Cold War took another turn. In the postwar years, as the Soviet Socialists expanded into country after country, establishing their communist military dictatorships, the Americans and the nations of western Europe used the word ‘containment’ to describe their response: they hoped to stop Soviet expansion.

The United States inherited the global leadership mantle from the Second World War, a role it was ill-prepared to fulfill. It quickly saw a rising Soviet Union that had to be stopped, and it therefore embraced containment as the only viable strategy. But that containment strategy was primarily an exercise of military power; it created mutual defense pacts that became little more than an anti-communist alliance and spurred an arms race.

The Cold War created a bipolar paradigm, in which most of the world’s nations allied themselves either with the West or the East. The West promoted free market economics, personal political liberty, property rights, a respect for the individual, and the freedoms of speech, religion, and the press.

The East looked to the state, the government, to control and own property, and to manage industrial production and the economy. The state imposed its own education with no alternatives, imposed atheism, and imposed socialist dogma. Individuals were told that they could not make any significant choices, whether in matters of the economy or in the field of politics. There was no free speech or free press; instead, all aspects of life were saturated with socialist propaganda.

A small number of the world’s nations did not fit comfortably into this East-West pattern. The Islamic despots of the Near East and Middle East, for example, did not embrace Soviet socialism, both because of its strictly enforced atheism and because they had no interest in the USSR’s economic agenda. These Muslim regimes, however, also disliked the personal freedom which the West promoted.

Saturday, November 24, 2018

Twenty-First Century Universities: Civilization's Suicide Pill

A February 2014 edition of The Michigan Daily reported in a front-page story about a “$3 million donation to create virtual curriculum for Fall 2015.” The story appeared under the headline “Grant expands Islamic studies” and featured quotes from Professor Pauline Jones Luong.

The article prompts certain questions: were similar grants made for the study of Judaism, Hinduism, Buddhism, or Christianity? Or for lesser-known religions such as Jainism or Sikhism?

From the article, one has no evidence to conclude anything specific about the Islamic Studies program, but in the larger context of the contemporary university, there is reason to wonder if it will promote Islam rather than present Islam. There is reason to wonder if it will shy away from concepts like Hadd and Hudud.

The larger context which motivates these questions is the contemporary university’s subversiveness. Western Civilization has historically valued political liberty; modern universities tend to stifle any diversity of political thought, even as they claim to champion diversity of race, religion, or gender.

European culture has fostered individual freedom; contemporary universities have worked to dampen freedom of speech, freedom of the press, and other expressions of personal liberty.

It is reasonable to ask, then, whether the modern university will present Islam in a manner which complements the university’s general attack on Western Civilization. There is certainly a great deal within Islam which does not promote individual political liberty or personal freedom. Historically, Muslim-majority nations have been hesitant to adopt free speech, or to adopt a form of government composed of freely-elected representatives.

Among those Muslim-majority nations which have instituted some form of free election, the ideological implications of Islam’s social and political vision prevent a lively debate about issues which touch the Islamic worldview.

Will the contemporary university teach about Islam in a way which helps students to appreciate the unprecedented degree of freedom which Western Civilization has given? Or will that freedom, and the dangers which threaten it, be ignored?

Thursday, September 20, 2018

The Abuser-in-Chief: Bill Clinton Guilty and Impeached

In the United States, it rarely happens that Congress impeaches a president. It has happened only twice, so far, in the nearly 250 years that the nation has existed. In 1868, President Andrew Johnson was impeached.

Over a century later, Bill Clinton would be impeached.

Naturally, impeachments are both political and partisan. Because Bill Clinton was a Democrat, his party defended him. As former federal prosecutor Andrew McCarthy writes, “Democrats, prominently including women’s-rights advocates, closed ranks around Bill Clinton.”

Clinton had committed several crimes, which began to come to light as investigators examined charges filed against him in connection with civil lawsuits. McCarthy writes:

According to the victim’s credible accusation, Clinton had raped Juanita Broaddrick in 1978.

This crime is shocking enough, but even more so when the reader considers that Clinton held an elected office when he committed the offense:

Clinton, at the time, was the 32-year-old attorney general of Arkansas.

As this crime was being investigated, others came to light. Bill Clinton had a habitual pattern of sexual assault. Officials eventually compiled a long list of victims.

Unlike other cases in which allegations of impropriety are made against some government official, in Clinton’s case, these were no mere allegations. The results of court proceedings confirm that Clinton acted criminally.

His sexual assault against Ms. Broaddrick came to light during the investigation of Clinton’s obstruction of a sexual-harassment suit filed against him by another woman, Paula Jones. She alleged that, while governor of Arkansas, Clinton had exposed himself to her, demanding oral sex. She declined and fled from the room.

The court worked to find justice for the victims. Clinton, in an effort to end the proceedings, negotiated an agreement. He hoped to avoid further attention from judges.

President Clinton eventually paid $850,000 to settle the matter out of court.

But despite ending the civil suit with an out-of-court settlement, Clinton was still held responsible by the justice system for his crimes. Further victims, Leslie Millwee and Kathleen Willey, presented testimony.

When asked by investigators about his activities, Clinton lied. He lied after having taken an oath to present honest statements. He lied to court officials. That is a serious crime.

The president was later held in contempt of court by a federal judge for providing perjurious testimony. That testimony was about Monica Lewinsky. It was also through Ms. Jones’s case that we discovered that Clinton, while the 50-year-old president of the United States, had arranged Oval Office sexual liaisons with the then-22-year-old White House intern.

Because he was found guilty of perjury, Bill Clinton’s law license was taken from him. He is no longer allowed to appear as a lawyer in court, or to sign any documents as a lawyer.

The courts fined Clinton $90,000 for his perjury in the Paula Jones case, and $25,000 for the Monica Lewinsky case.

But President Clinton paid the highest price for his crimes when he was impeached by Congress. Only two presidents have been impeached in the 250-year history of the United States (so far). Clinton’s legacy has been permanently marked: history books for centuries into the future will note his impeachment.

Oddly, although his crimes were against women, some of his strongest supporters were women: Hillary Clinton, Elizabeth Warren, Nancy Pelosi, Gloria Steinem, Joy Behar, and others. These women spoke publicly in defense of Bill Clinton.

There is no doubt that Bill Clinton is guilty of sexual assault. There are questions about why women would defend such a man.