Friday, December 30, 2016

Campaign Aftermath: a University President Speaks

The presidential election of 2016 produced results that surprised, if not everyone, then at least many observers. Many African-American, Latino, and female voters chose not to vote for Hillary Clinton, and thereby handed the victory to Donald Trump.

On many university campuses, a vocal segment of the student body could not understand how Trump’s presidency would eventually benefit not only college-age citizens, but average citizens from all social classes, races, and ethnicities.

Trump won the presidency, after all, because millions of African-American and Latino voters chose not to vote for Hillary.

Yet universities, often hailed as centers of free speech and free thought, became quite hostile to anyone who admitted to having supported Trump’s candidacy. Students who were even suspected of voting for Trump were bullied.

At the University of Michigan, at a meeting of the campus senate, university president Mark Schlissel pointed out that the saturation of socialist viewpoints had cut both faculty and students off from an accurate assessment of reality:

I would argue no matter how [the election] turned out, our community has an awful lot of work to do to try to understand the forces at play in our society and how we've ended up with such large degrees of polarization. Why was this a surprising result in Ann Arbor and not a surprising result in other communities around our nation? I think as an academic community, we have to ask whether we're really in touch with the full breadth of the society we're serving and how they're thinking and what's important to them. Do we have in our student body, on our faculty, an adequate breadth of diversity of thought?

Because students and faculty had been living with an illusion, the election presented a moment of disillusionment. Having silenced the viewpoints of ordinary citizens on campus, the university was surprised when those same viewpoints made themselves felt off campus - at the ballot box.

The rage of the campus socialists vented itself on the hapless Trump supporters, who merely wanted freedom of speech. Mark Schlissel, speaking of students who voted for Trump, noted that

They feel marginalized. This is a challenge for the community and they need to feel included and involved in the discussion. Their opinions need to be considered and discussed as opposed to marginalized. We need to try, I think, to have ideas included in our community for discussion that are more representative of the ideas in the world at-large as compared to the academic part of the world at-large. I think that's a way to understand what is happening in modern society – here and globally.

As the post-election lunacy accelerated, leftist students began fabricating fake “hate crimes” and claiming that these crimes had been perpetrated by Trump supporters. A woman wearing a hijab claimed that she had been assaulted.

The National Review reports that, after investigating, “Ann Arbor police lieutenant Matthew Lige” announced that the woman had filed a false police report, and that no “ethnic intimidation” or any other form of assault had occurred. Indeed, video surveillance records showed that the entire incident was a fiction.

Faked “hate crimes” are nothing new. For more than a decade, individuals hoping to identify themselves as victims have falsified evidence and filed false police reports. Such fraud reveals that the very people who want to be seen as “victims” are, in fact, the oppressors and aggressors.

The bullying, intimidation, and harassment of Trump supporters on campus is merely the latest instance of such deception.

Wednesday, December 21, 2016

What Makes History Tick

In any narrative about events past or present, the reader wants to know: what was planned and intentional, and what was mere coincidence?

The answer is that much which seems coincidental was in fact planned.

Was it mere coincidence that the Japanese military launched a surprise attack on the United States on a Sunday? No; the Japanese officers were well aware of which day of the week might offer the best chances of catching the military defenses at a low level of alertness.

Was it mere coincidence that President Lincoln was assassinated at the precise moment that an actor onstage delivered a comic line in a play which the president was attending? No; the assassin had planned to kill the president on cue.

If we look at other events, then we can begin to see the effects of a grand conspiracy. Problems facing society endure despite actions taken to address those problems. Perhaps the actions are deliberately ineffective: someone benefits from the duration of various forms of social misery.

In domestic matters, the persistence of inner-city poverty not only withstands governmental efforts to alleviate it, but thrives on those efforts. Poverty is intensified by government programs designed, or allegedly designed, to end it.

The main causes of poverty are government programs intended to end poverty - or at least, programs presenting themselves as having such intentions.

In foreign policy, relations with our true and natural allies are damaged or soured by unwitting gaffes by diplomats. But were those blunders so accidental? Was there not a larger plan designed to weaken the international status of the United States?

Likewise, actions that inadvertently help our enemies seem to be the State Department’s miscalculations, but are in fact quite calculated and in no way inadvertent.

Not all of history is determined by conspiracies, but much of it is. As author Gary Allen notes, “Because the Establishment controls the media, anyone exposing the” national and international conspiracies

will be the recipient of a continuous fusillade of invective from newspapers, magazines, TV and radio. In this manner one is threatened with loss of “social respectability” if he dares broach the idea that there is organization behind any of the problems currently wracking America. Unfortunately, for many people social status comes before intellectual honesty. Although they would never admit it, social position is more important to many people than is the survival of freedom in America.

Large conspiracies bring together actors of opposite and seemingly incompatible categories. Conspiracies often escape detection because it would not occur to many observers that a capitalist and communist would work together.

In secret, however, there is collusion between members of the international communist conspiracy and certain key figures in the financial and monetary systems of industrialized nations. The unlikeliness of this combination is its camouflage.

Of the several levels of deception at work here, one of them is linguistic: the communist conspiracy is not about achieving some socialist workers’ utopia in which every laborer receives the same pay as his manager. The word ‘communist’ is robbed of its original meaning, and used as an excuse to obtain and maintain power over governmental and economic systems.

Likewise, members of the conspiracy who seem to represent ‘business’ or ‘capitalism’ do not, in fact, have a devotion to the concept of the free market or of property rights, and thus the words are again used inaccurately to disguise a naked grab for power.

A far-flung and wide-ranging conspiracy is, and has been, at work in many different events and trends which threaten to weaken the United States. A network of individuals and groups from an incredibly diverse spectrum of institutions strives in concert to damage the freedom which is the foundational identity of the nation.

Thus seemingly incompatible combinations appear: billionaire investors and slum-dwelling rioters; leftist politicians and bankers.

It is incumbent upon those who value freedom to continue to uncover and expose conspiracies. The alternative, however it may be named, is a form of slavery.

Friday, November 18, 2016

Understanding Trump: Categories of Language

When two minds independently come to similar conclusions, or to the same conclusion, it’s worth noting. In the analysis of President Trump’s victory in the 2016 election, a theme emerged amidst the seemingly infinite volume of reporting.

In September 2016, The Atlantic magazine included an article by Salena Zito titled “Taking Trump Seriously, Not Literally.” Moving through various examples of Trump’s campaign rhetoric, Zito notes how the news media carefully parsed the candidate’s words and subjected them to “fact checking.”

The media’s scrutiny didn’t sync with the popular enthusiasm which met Trump’s speeches. As Zito writes,

It’s a familiar split. When he makes claims like this, the press takes him literally, but not seriously; his supporters take him seriously, but not literally.

Whether Trump spoke of the border with Mexico or of dealing with “Islamic State” terrorists in the Middle East, the voters responded to his sentiment and attitude, not to the specifics of any alleged “plan.”

Voters were not content with the rather spineless image which the Obama administration projected to other nations. They wanted the general sense of a representative who would act in the interests of the average American, not an Obama-like figure who worked to cultivate charm among foreign leaders.

Trump seemed to be someone who would work on behalf of ordinary Americans. Crowds cheered that feeling, rather than the details of particular policies.

When Trump talked about a “wall” on the border with Mexico, the news media went to work making calculations about physically building a wall; Trump’s listeners heard a metaphor - they didn’t know or care whether or not Trump would build a literal physical wall. They knew that he understood the concepts of national sovereignty and territorial integrity.

Separately, another journalist, Margaret Sullivan, writing in The Washington Post in November 2016, described an interview she had with Peter Thiel:

It’s a familiar split. When he makes claims like this, the press takes him literally, but not seriously; his supporters take him seriously, but not literally.

Just as Obama’s supporters had reacted to slogans like “Yes We Can” and “Hope and Change,” Trump’s supporters embraced the concept of a president who would act on behalf of the ordinary citizen.

Voters perceived that the Obama administration had prioritized diplomatic relationships and climate concerns over safety and prosperity. Violence at home and Islamic terrorism abroad left U.S. citizens feeling unsafe. The ongoing economic doldrums of the Obama era had left Americans with lower wages and smaller net worth. Margaret Sullivan writes:

And although many journalists and many news organizations did stories about the frustration and disenfranchisement of these Americans, we did not take them seriously enough.

The voters wanted a change of leadership. They didn’t really care whether or not a wall was built along the Rio Grande. But they wanted someone who spoke, and who would act, with directness.

Again speaking of the news media, Sullivan writes:

Although we touched down in the big red states for a few days, or interviewed some coal miners or unemployed autoworkers in the Rust Belt, we didn’t take them seriously. Or not seriously enough.

Voters really don’t care about the nuts and bolts of some policy decision. Analysts for newspapers and television networks tend to wrestle with statistics, definitions, and technicalities. Average citizens simply want to know that someone is looking out for them.

That’s why the endless hand-wringing on the editorial pages and opinion shows didn’t bother the voters. Many who voted for Trump didn’t take many of his statements literally:

A lot of voters think the opposite way: They take Trump seriously but not literally.

What voters embraced in Trump was a simple premise: that a government should act on behalf of its citizens. Ordinary citizens want a government which will protect their lives, their liberties, and their property.

Obama had failed to create the impression that he was doing that. Hillary failed to create the impression that she would do that.

Trump signalled that he would watch out for American lives, liberties, and economic opportunities. The details might be fuzzy, exaggerated, inexact, or nonexistent. But the voters didn’t care about the details.

Tuesday, November 15, 2016

Who Voted for Trump? Who Didn’t Vote for Hillary?

Historians and statisticians will spend years analyzing the U.S. presidential election of 2016. The dynamics and demographics of that vote were unforeseen, and they revealed the beliefs of the citizens.

Elections are about perceptions. What a candidate “really” is, how that candidate “really” thinks or would act in some future hypothetical situation, is unknown and, to the average voter, unknowable.

Citizens vote, therefore, based on what they believe or perceive about a candidate. The surprise was that the U.S. voters believed different things about Hillary and about Trump than what the news media were telling them to believe.

While most newspapers and cable TV networks were telling the voters that Trump was a racist, and that Hillary was tolerant, it seems that the voters believed quite the opposite.

Trump actually got a smaller percentage of the “white” (European-American) vote than the Republican candidate four years earlier (Mitt Romney) had gotten. Apparently, Trump fared better among African-American and Latino voters than Romney had.

Trump won a noticeably larger share of Black voters than the GOP candidate had won four years earlier. As commentator David French writes:

Would you believe that Trump improved the GOP’s position with black and Hispanic voters? Obama won 93 percent of the black vote. Hillary won 88 percent. Obama won 71 percent of the Latino vote. Hillary won 65 percent. Critically, millions of minority voters apparently stayed home.

Comparing the 2012 election to the 2016 election, Trump, as the Republican candidate, gained ground among African-American and Hispanic voters.

Millions of Black and Latino voters decided that Hillary was not reliable. They didn’t trust her; they didn’t want her in the White House. Although Hillary’s allies wanted to label Trump as “racist,” it turns out that many Black and Latino voters simply did not trust Hillary.

The Clinton campaign patronizingly assumed that Hillary would automatically receive the vast majority of the African-American and Hispanic vote. That assumption was a form of racism.

Wednesday, October 26, 2016

A Generous Governor

Those who analyze economies often overlook the powerful factor of private-sector charity. An example of this factor at work is Rick Snyder, the governor of Michigan.

According to a Detroit Free Press report, he and his wife earned a bit more than $400,000 in the year 2015.

During that same year, they donated approximately $95,000 to charities. (This is the sum which went to true charities, i.e., not to professional organizations or political causes.)

By this reckoning, Snyder donated nearly a quarter of his annual income.

For that same year, he paid $31,189 in income taxes. He paid many thousands more in sales tax, property tax, and other taxes.

By far the greater impact was in the private sector, and this for three reasons: first, the actual dollar amount was greater; second, private-sector charities have lower administrative overhead than government programs; third, much government spending is counterproductive, i.e., it exacerbates the very problems it is designed to address.
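The arithmetic behind those figures is easy to check. Here is a minimal back-of-the-envelope sketch in Python, using the rounded numbers quoted above from the Free Press report (the income figure is approximate):

# Back-of-the-envelope check of the figures quoted above (values rounded).
income = 400_000     # approximate 2015 income
charity = 95_000     # donations to true charities
income_tax = 31_189  # income taxes paid

print(f"Share of income donated:  {charity / income:.1%}")       # roughly 24%
print(f"Share paid in income tax: {income_tax / income:.1%}")    # roughly 8%
print(f"Charity vs. income tax:   {charity / income_tax:.1f}x")  # roughly 3x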

Private sector charities have a greater, more efficient, and more effective impact than government spending.

Tuesday, October 25, 2016

The History of Taxation in the United States: An Unnecessary Evil

From the Han Dynasty in ancient China to the Roman Empire, people have demonstrated a strong antipathy toward taxation. Both government and taxes are necessary, but humanity is happiest when both are kept to an absolute minimum.

For several decades leading up to the 1770s, most of the grievances which caused the American colonies to declare themselves independent of the British Empire were tax-related, from the quartering of soldiers to the Stamp Act to regulated imports.

After achieving independence, the U.S. government largely avoided taxing the citizens directly, as Steven Weisman, writing in the Washington Post, notes:

The American Revolution began as a protest against unfair taxation. In our early history, leaders avoided the question of income tax altogether, choosing instead to raise federal revenue with import tariffs.

One notable exception was an excise tax on distilled spirits, which led to the largely bloodless ‘Whiskey Rebellion’ of 1794. Although major violence was avoided, the rebellion demonstrated that U.S. citizens were strongly allergic to taxation.

The first income tax of note was imposed during the Civil War. Its top bracket levied as much as 10%, but approximately 90% of households were totally exempt, having incomes which fell below the lowest bracket. Weisman writes:

Since the federal income tax was introduced during the Civil War, U.S. citizens have complained about their taxes.

By 1872, the Civil War income tax had been phased out. The people had tolerated the tax long enough to end the war and to pay war-related expenses in the first few years postbellum.

Public outcry against income taxes had grown by 1872, and in 1895, the Supreme Court ruled, in Pollock v. Farmers’ Loan and Trust Company, that the income tax enacted the previous year was unconstitutional.

Less than two decades later, Woodrow Wilson and the ‘Progressive’ movement would sidestep both the Supreme Court’s decision and any popular vote, enacting an income tax of a much larger scale than the short-lived Civil War tax.

Steven Weisman reports:

The income tax disappeared when the war ended. But it returned on the eve of World War I, enabling President Woodrow Wilson to raise the marginal income tax rate to 70 percent. Wilson called paying taxes a “glorious privilege” and a way for the businesses profiting from military buildup to give back. Sen. Hiram Johnson of California even attacked “the skin-deep dollar patriotism” of those who favored war but opposed taxes.

The citizens got a brief respite from the shocking tax burden during the Harding and Coolidge administrations. With the help of Congress, Coolidge brought Wilson's draconian 70% rate down to 25%.

The federal government, held hostage to ‘Progressive’ ideology, savaged the citizens with income tax rates as high as 91% in peacetime and an astounding 94% during WW2.

The 94% rates applied during 1944 and 1945, and might be excused because of wartime urgency.

But a 92% top rate, during 1952 and 1953, was not justified by the Korean War, inasmuch as that conflict consumed only a small segment of the total defense budget.

The long reign of the 91% rate, from 1946 to 1963, cannot be excused. It was this era which gave birth to creative accounting, tax shelters, loopholes, and offshore accounts.
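A short illustration may clarify what such a top rate meant in practice. The sketch below uses a purely hypothetical three-bracket schedule - the thresholds and lower rates are invented for illustration and are not the actual historical tax tables - to show how a 91% marginal rate on income above a threshold translates into effective rates:

# Hypothetical bracket schedule for illustration only; not the real 1950s tax tables.
# Each entry is (upper limit of the bracket, marginal rate); None marks the top bracket.
def tax_owed(income, brackets):
    owed, lower = 0.0, 0.0
    for upper, rate in brackets:
        if upper is None or income <= upper:
            owed += (income - lower) * rate  # only income inside this bracket is taxed at this rate
            break
        owed += (upper - lower) * rate
        lower = upper
    return owed

brackets = [(50_000, 0.20), (200_000, 0.50), (None, 0.91)]

for income in (40_000, 300_000, 1_000_000):
    owed = tax_owed(income, brackets)
    print(f"income ${income:>9,}: tax ${owed:>11,.2f}, effective rate {owed / income:.0%}")

Even under this toy schedule, the effective rate climbs steeply toward the top marginal rate as income grows, which helps explain the strong incentive for high earners to seek shelters and loopholes.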

For almost two decades, confiscatory rates were used against citizens. No war or natural disaster created a justifying urgency. The government was simply savaging its people.

It was only logical, therefore, that financial systems were developed to help citizens avoid as much income tax as they legally could. From this we have inherited both a large tax accounting industry and a bullying IRS with its thousands of pages of Byzantine tax code.

Sunday, September 11, 2016

Black Voting Rights in the South

The Civil War ended in 1865. For the next several decades, African-Americans not only enjoyed their right to vote in large numbers, they were also elected to major offices, including the Senate and the House of Representatives.

During these decades, Republicans worked hard to protect the civil rights of Blacks in the South. In Congress, the Republicans made the Civil Rights Act of 1866, the Civil Rights Act of 1871, and the Civil Rights Act of 1875 into law.

By the end of that century, however, the formerly pro-slavery Democrats began to take power in the South.

As the Democratic Party asserted itself, it found ways to prevent African-Americans from voting. By the early 1900s, fewer Blacks were voting in the South than in the late 1800s.

With fewer African-Americans voting, the Democrats began to win elections in the South. Blacks had traditionally voted Republican.

The Democratic Party accumulated a string of victories by preventing African-Americans from voting in the South, as historian Patrick Buchanan notes:

In the two presidential campaigns of Wilson and the four of FDR, Democrats swept every Confederate state all six times. The Democratic candidate in 1924, John W. Davis, carried every Confederate state and, with the exception of Oklahoma, only Confederate states. Truman took seven Southern states to Strom Thurmond’s four. Dewey got none. In 1952 and 1956 most of the electoral votes Adlai Stevenson got came from the most segregated states of the South.

Some of the Democratic presidential candidates spoke in favor of segregation, like Woodrow Wilson. Some of the candidates were silent on the subject, like FDR, but allied themselves with segregationist vice-presidential candidates.

John Davis, the Democratic candidate for president in 1924, was an attorney who argued for segregation before the Supreme Court in Brown v. Board of Education of Topeka in 1954.

The Republicans did not give up. They worked to help Black voters get back to the polls.

Against opposition from the Democrats, the Republicans in Washington passed the Civil Rights Act of 1957 and the Civil Rights Act of 1960, both of which were signed into law by Republican President Eisenhower.

The Republicans continued with the Civil Rights Act of 1964 and the Civil Rights Act of 1968. The Voting Rights Act of 1965 helped millions of African-Americans exercise their right to vote.

The return of Blacks to the voting booth eventually began to break the stranglehold which the Democratic Party held on the South. Over the next few decades, the monopoly held by the Democrats in southern politics ended.

Tuesday, July 12, 2016

Mounting Evidence Yields Details of Soviet Spy Activity

Political parties are usually formed to represent a set of ideas. They work to spread those ideas by nominating candidates and campaigning for their election.

The American political system is based on the freedom of speech and the freedom of the press – allowing vigorous and unrestrained debate about public topics. To function properly, this system requires tolerance – people need to tolerate other viewpoints and other people who have those viewpoints.

A free and democratic society needs people to tolerate – but not to accept, support, welcome, or affirm – various political opinions. During an election, one voter must tolerate another voter’s views: but a voter is not required to accept, support, welcome, or affirm another voter’s beliefs.

In the case of the Communist Party (CPUSA), however, a terrorist organization presents itself as a political party. The CPUSA is not a party which exists simply to spread its ideas and nominate candidates. It is a terrorist organization.

The CPUSA seeks a “violent” revolution in the United States – it explicitly uses the word ‘violent’ in its documents – and is committed to sabotage and assassination. It is further a part of an intelligence network.

The larger international communist conspiracy of which the CPUSA is a part operates an espionage network. Its goal is to gather confidential information from within the United States government and send it on to hostile governments, and, further, to plant operatives inside the U.S. government who can influence policymakers to make decisions which favor those hostile foreign governments.

The CPUSA, by calling itself a political party, seeks the protection of the First Amendment, and seeks to garner the sympathy of a free and democratic society. But the CPUSA has no interest in a free exchange of ideas, and has no interest in free and fair elections.

This ruse has been successful. The CPUSA has fooled some Americans into believing that it is an organization interested in ideas, and has hidden its identity as a terrorist group and a spy network.

Historian John Earl Haynes, speaking of his work with Harvey Klehr, says,

Too-large of a segment of the academic world is inclined to a benign view of communism in general, and of the CPUSA in particular. They prefer to think of Communists as idealists interested only in social justice and peace. They resent historical accounts such as those Klehr and I produced that present archival documentation of the CPUSA’s totalitarian character and its devotion to promoting Soviet victory over the United States in the Cold War.

Those who’ve been fooled into believing that the CPUSA is peaceful and humane resist the preponderance of evidence which reveals the group’s true nature.

At the end of the Cold War, around 1990 or 1991, data became available as the Soviet Union crumbled. Information from Soviet intelligence agencies, and from agencies in various Warsaw Pact countries, showed a thriving communist spy network inside the United States.

Other evidence emerged as Cold War era documents within the U.S. government were declassified, and as former operatives began to recount their activities.

Specific information is now available about the activities, from the 1930s to the 1950s, of known Soviet agents like Alger Hiss, Klaus Fuchs, Julius Rosenberg, and others.

There is now no doubt that the CPUSA was funded and directed by the Soviet government in Moscow, and was a functional part of the Soviet intelligence apparatus. What seemed to be a political party was in fact a group bent on destroying the United States.

Thursday, May 5, 2016

Robert Oppenheimer, Soviet Agent: a Physicist Goes Rogue

During WW2, development of an atomic bomb was a high priority. If the United States hadn’t achieved this capability, the Japanese would have killed millions of Americans.

So urgent was the need for this technology that military leaders, like General Leslie Groves, were willing to take security risks in order to accomplish this task. He hired a number of physicists and technological experts who had known Soviet sympathies.

By May 1942, when a physicist named Robert Oppenheimer was recruited to work on the Manhattan Project - originally named the ‘Manhattan District’ - the Soviet Union had switched sides in the middle of WW2, and was now ostensibly an ally of the United States.

Perhaps this made General Groves feel better about hiring Oppenheimer. While the war, and the alliance with the Soviets, lasted, it seemed like a reasonable risk.

But soon after the war’s end, the USSR declared itself to be an enemy of the United States, and allowing the Soviets to gain nuclear technology would be dangerous indeed. Puzzlingly, Oppenheimer was allowed to remain involved in government projects related to nuclear weaponry.

More than merely having communist sympathies, Oppenheimer was a member of the communist party, dedicated to espionage. He was not the only atomic scientist working for Moscow. Klaus Fuchs and David Greenglass were among his fellow Soviet agents.

Oppenheimer was one of the more dangerous Soviet spies, because he worked his way up into administrative decision-making, and because he held his post longer before eventually being discovered and removed, as historian Stan Evans writes:

Foremost among such cases was J. Robert Oppenheimer, the famous nuclear scientist who played a leading role in the atom project of World War II. This was by all odds the most significant security problem in Cold War records, having its genesis in the days of FDR, blossoming into a full-fledged scandal under Truman, then finally coming to public view in the Eisenhower era.

Oppenheimer’s communist links were strong. The American Communist Party (CPUSA) had declared that it sought the “violent” overthrow of the United States, its people, and its government. This was no mere political party. It was a terrorist organization.

Oppenheimer was part of an espionage network throughout the United States. His task was to see that the Soviets obtained military and scientific secrets about nuclear weapons.

The earliest known mention of Oppenheimer in the FBI reports is a memo from March 28, 1941, which says he had the previous year attended a meeting in the home of Haakon Chevalier, an identified (later self-admitted) Red, along with Communist leaders Isaac Folkoff and William Schneiderman. It was apparently this information, obtained at the era of the Hitler-Stalin pact, that prompted the FBI to put Oppenheimer on its “custodial detention” list of people to be picked up by the Bureau if a national emergency developed. A memo to this effect was issued May 21, 1941, describing his “national tendency” as “Communist.”

In 1954, Oppenheimer’s security clearance was revoked, effectively ending his career. After his death, documents were discovered which indicated that Oppenheimer’s sympathies had clouded his judgment, and that he had been, in some instances, more of a ‘dupe’ than a spy. He had enabled others to relay information to Moscow, sometimes unwittingly, sometimes knowing that these individuals had connections to Soviet intelligence agencies.

Despite these possibly moderating factors, Oppenheimer had joined forces with an organization which envisioned the deaths of many Americans in violent revolution. He was clearly aware that in some instances, his actions led directly to the Soviet acquisition of American nuclear technology.

The fact that the USSR eventually obtained atomic weaponry emboldened the imperialistic ambitions of the international communist conspiracy, and led to wars in Korea and Vietnam.

It is possible that if the USSR had not obtained nuclear capabilities, then the wars in Korea and Vietnam would not have taken place. Oppenheimer bears some responsibility for the deaths of American soldiers in those two wars.

Wednesday, May 4, 2016

President Harry Truman Was Tough on Communism - Except When He Wasn't

President Harry Truman spoke forcefully about his intentions to stop the USSR’s global imperial ambitions. He famously proclaimed the ‘Truman Doctrine’ in a speech, as he sought Congressional approval to aid the nation of Greece against an attempted Soviet takeover:

I believe that it must be the policy of the United States to support free peoples who are resisting attempted subjugation by armed minorities or by outside pressures.

Truman also indicated that he would resist Soviet attempts to infiltrate the United States government with ‘moles,’ and that he would root out those communist agents who had already lodged themselves inside the federal bureaucracy. Historian Stan Evans writes:

In many standard histories and bios, Truman is depicted as a tough cold warrior who bravely faced down Moscow, being teamed in this respect with his foreign policy vicar Acheson at State. Even more to the present point, we’re told, Truman cleaned up security problems on the home front.

The Soviet Union was about the business of taking over nations that did not want to become the victims of communist dictatorships. Truman successfully put forth the image of someone who would fight Soviet socialist aggression, as he did, e.g., in the case of Korea.

It was logical, therefore, for the public to assume that Truman was also dedicated to eradicating the agents of the various Soviet intelligence agencies who had worked their way into positions inside the U.S. government.

The cleanup was supposedly effected through the Truman loyalty program, announced in March of 1947. Thanks to this draconian effort, it’s said, whatever Communists or security risks had got on official payrolls were ousted.

Puzzlingly, Truman’s actions on the domestic front did not always match his words. His efforts against the well-developed Soviet espionage network in North America were lackluster and halfhearted.

Some of the most notorious communist spies held positions inside Truman’s administration. Robert Oppenheimer, Alger Hiss, Harry Dexter White, and William Remington were a few of the many Soviet operatives who had access to sensitive information which they sent to Moscow, and who also had access to the ears of policymakers, who could be influenced to unwittingly make pro-Soviet decisions.

This well-developed intelligence network connected these individuals, and others, back to the USSR and eventually back to Stalin.

Truman, however, did not act on information given to him by the FBI which indicated that these men were major security risks, as Stan Evans writes:

Sad to say, this portrayal of Truman’s policy on the home front is almost entirely fiction. That he was a visceral anti-Communist is not in doubt. However, he seemed to know little about the way the Soviets and their U.S. agents functioned, or their presence in the government he headed, and didn’t show much interest in learning. This ennui persisted despite the myriad FBI reports supplied to the White House and Truman cabinet about the vast extent and serious nature of the penetration. Accordingly, not only was the security problem not cleaned up by 1950, some of the most flagrant suspects imaginable were flourishing in the federal workforce.

Why did Truman, who seemed dedicated to freeing the world of the communist threat, turn an inattentive eye to Soviet spies inside his own offices? Why did he not act when others attempted to alert him to this danger?

There are several possible answers.

Perhaps Truman was concerned about the image of a U.S. president having to publicly admit the presence of communist operatives inside the federal government. The political damage to his administration and to his party would be grave.

Or perhaps Truman, like many presidents, had to answer to higher powers - the leaders of his party, and the shadowy figures who operate internationalist conferences - who told him not to root out the dangers.

It’s possible that we’ll never know the cause of Truman’s mystifying inaction on this topic.

Tuesday, April 12, 2016

New Concepts of Warfare: Deterrence Both Large and Small

While students are often familiar with the word ‘deterrence’ in the context of doctrines like “massive retaliation” during the Cold War, they are less likely to know the word’s use in terms of smaller, regional conflicts. Deterrence is primarily taught in the context of the large arsenals of atomic weapons held by the United States: arsenals which made “massive response” or “massive deterrence” a reality.

Thus readers tend to think of deterrence on a macro scale: the extensive collections of nuclear weapons and their delivery systems in the face of Soviet aggression.

The goal of deterrence is always to prevent an armed conflict before it begins, by convincing a potential attacker that any aggression will meet with an overwhelming response.

By 1952, voters in the United States were tired of the Korean War. Eisenhower’s strategy was to prevent the United States from being dragged into similar regional conflicts by using deterrence on a regional scale. This was deterrence on a micro scale.

Structuring national defense systems for deterrence instead of for engagement requires a different type of planning. Prior to the Eisenhower presidency, the focus of military planning was on a large-scale confrontation with the Soviets, perhaps a land war, a replay of WW2.

A WW2-style mobilization made no sense militarily because nuclear weapons had significantly changed strategy, and made no sense politically because the citizens were tired of protracted warfare. There was not much room, in Eisenhower’s deterrence strategies, for “tactical atomic” weapons.

Tactical nuclear weapons are intended for use within the context of conventional land war. They are often small, short-range weapons, with ranges of only a few miles.

By contrast, strategic nuclear weapons are large, powerful, and often delivered from thousands of miles away. They are intended either to be a part of strategic, not conventional, combat, or to deter combat altogether.

Although ‘tac nukes’ were produced in quantity throughout the Eisenhower administration, they were not a significant part of strategic planning.

Eisenhower’s goals were to get the United States out of the Korean conflict, to prevent U.S. entanglement in similar regional conflicts, and to accomplish this by developing deterrence on both a macro and a micro scale, as historian Russell Weigley writes:

The new Eisenhower administration embraced deterrence still more enthusiastically, with fewer backward glances toward plans for mobilization on the pattern of the World Wars. Apart from the asset of Dwight Eisenhower’s winning personality and prestige, the Republicans captured the Presidency in the election of 1952 largely because of voter discontent with the prolonged and puzzling Korean War. The new administration intended both to extricate the country from the Korean entanglement and to ensure against further involvements of the Korean type. It was able to succeed in the former aim, to end the fighting and the weary truce talks, for various reasons, including its political ability to be more flexible in negotiation than the Truman administration - few Americans could believe that Republicans were soft on Communism - and perhaps primarily, because Stalin soon died. Many in the new administration also believed that a threat to use atomic weapons in Korea, the message being conveyed to the Chinese through India, was decisive; this conviction was important in conditioning subsequent policy. For the second goal, guarding against a repetition of Korea, the new administration turned to an explicit strategy of deterrence, aimed at deterring local and limited as well as general wars.

Conventional war plans, on a strategic or macro level, were reactive, or at best responsive, and amounted to simply waiting for WW3 to break out. Deterrence was a more proactive approach.

Rather than make plans for a possible armed conflict with the Soviets, deterrence was designed to reduce the probability of that conflict, as historians Allan Millett and Peter Maslowski explain:

After the Korean War, the United States turned from a crisis-oriented military policy toward concepts and programs designed to last as long as the rivalry with the Soviet Union. Presidents Dwight D. Eisenhower, John F. Kennedy, and Lyndon B. Johnson adopted policies suited for “the long haul.” With Soviet-American competition accepted as the central fact in international relations, American policy makers regarded defense policy as a principal instrument for containing the parallel spread of Communism and Soviet imperialism. To check the extension of Soviet influence, the United States sought to reduce the chance that the Russians would threaten or use military force as a tool of international influence. For all the debate about the means and costs of defense, American policy rested upon consensual assumptions about the nature of the military challenge and the appropriate response. Supported by an activist coalition in Congress, the three Cold War presidents further refined containment, strategic deterrence, and forward, collective defense.

But on the political side, strategies of containment, deterrence, and defense are only as good as the realism needed to face the global situation, and are only as good as the will to implement such strategies.

On the American political scene, there were significant numbers of both leaders and voters who either did not understand, or willfully chose not to believe, that the crystallized goal of the USSR and the international communist conspiracy was the destruction of western societies, leading to Soviet Socialist domination of the world.

The Soviets were clearly focused on this goal, and willing to use, even intending to use, the deaths of thousands, even millions, of innocent civilians to reach this objective. In 1960, Senator Goldwater wrote:

The temptation is strong to blame the deterioration of America’s fortunes on the Soviet Union’s acquisition of nuclear weapons. But this is self-delusion. The rot had set in, the crumbling of our position was already observable, long before the Communists detonated their first Atom Bomb. Even in the early 1950s, when America still held unquestioned nuclear superiority, it was clear that we were losing the Cold War. Time and again in my campaign speeches of 1952 I warned my fellow Arizonians that “American Foreign Policy has brought us from a position of undisputed power, in seven short years, to the brink of possible disaster.” And in the succeeding seven years, that trend, because its cause remains, has continued.

There were enough voters and leaders, however, who accurately understood the Soviet threat. Although imperfectly, the United States was able to maintain enough deterrence that the USSR chose not to mount direct frontal attacks on either North America or western Europe.

The communists certainly continued their efforts at expansion, via proxy wars, via the acquisition of smaller defenseless countries, and via the espionage network they had established inside the United States. But America managed to deter a large-scale, massive Soviet military aggression.

Sunday, April 10, 2016

Haldore Hanson: a Danger

Who was Haldore Hanson? His name is largely forgotten, but he posed a direct threat to the lives not only of United States citizens, but also of people around the globe. As historian Stan Evans notes, “Hanson was a full-time State Department employee.”

Hanson started out as a journalist, reporting on China in the late 1930s. But his writing was propaganda for the Chinese Communist Party and its leader, Mao Tse-Tung. In his advocacy for Mao, whose name is also transliterated as Zedong, he broke no U.S. laws, but he did contribute to what would ultimately become the mass murder of millions of Chinese.

Later in his career, Hanson “served on the staff of Assistant Secretary of State William Benton.” Hanson also joined the CPUSA, the American Communist Party.

In the context of the Cold War, joining the CPUSA was not merely an expression of a political view, but rather it was supporting an organization which declared in its written materials the “inevitability of and necessity for violent revolution.” To join the CPUSA was to endorse, support, and prepare for the violent overthrow of the United States government - including the killing of innocent civilians.

Working in William Benton’s office, Hanson was “one of the numerous group of” communist “suspects once employed in that office.” By 1950, “Hanson headed a division at State that dealt with matters of foreign aid. Most to the present point, he had in the latter 1930s gone on record with some revealing comments about the Communist cause in China.”

Hanson was, therefore, both a criminal and a danger. He was a criminal because he was working for the violent destruction of the American government, including the deaths of innocent civilians. He was a danger because he worked to support Mao’s regime and its eventual slaughter of millions of Chinese.

Monday, March 28, 2016

Korea: Caught by Surprise

In the years immediately after WW2, the world was adjusting to the new alignments which would shape, and be shaped by, the Cold War which would last for the next several decades. There were the “Western” nations, roughly coalesced around Japan, the United States, England, and Germany - the NATO powers plus the friendly Pacific powers.

On the other side were the Communist powers, centered around the USSR and mainland China.

There were also ‘unaligned’ nations, who either in reality or in mere words sought to remain neutral or independent of the two groups which opposed each other in the broad framework of Cold War conceptualization.

There were many points of geographical contact between the two sides of the Cold War: the boundary between East Germany and West Germany, extending into the whole ‘Iron Curtain’ which ran up and down Europe from the Arctic Circle to the Mediterranean; the uneasy and unclear borders through southern and southeastern Asia; and the Pacific coast along eastern Asia, where sometimes narrow stretches of water separated Communist and non-Communist nations by only a few miles.

While the NATO Allies were making efforts to be prepared should the Soviets launch a surprise attack across central Europe, they were surprised when North Korea attacked South Korea in June 1950. The North Korean People’s Army (NKPA) succeeded in overrunning almost all of South Korea, as historians Allan Millett and Peter Maslowski note:

Korea had been a backwater of American postwar diplomacy, and it did not loom large as a military concern. Divided in 1945 by an arbitrary line at the 38th Parallel so that occupying Russian and American forces could disarm the Japanese and establish temporary military administrations, Korea had by 1950 become part of the Cold War’s military frontier. In North Korea the Russians had turned political control over to the Communist regime of Kim Il Sung and helped him create an eleven-division army of 135,000 seasoned by service in the Soviet and Chinese communist armies. The NKPA was a pocket model of its Soviet counterpart, armed with T-34 tanks, heavy artillery, and attack aircraft.

Although the North Koreans experienced sweeping success in the first phase of the war - due to the element of surprise, to the unreadiness of the U.S. forces in South Korea, and to the small numbers of those U.S. forces - the South Koreans, aided both by the United States and by the United Nations, would turn the tide. The South would be freed of the NKPA invaders, and the North would be largely in the hands of the U.S. and U.N. forces.

While the North Korean war planners were correct in their estimation that they could quickly advance through the South, they were wrong in believing that they could hold that ground for long. The United States had resources very close by, as Russell Weigley writes:

The authors of the North Korean invasion of South Korea had also miscalculated the American response. Despite the weaknesses of the American armed forces, hardly another place on the boundary between the Communist and non-Communist worlds could have been so well selected as a setting for the frustration of a Communist military venture by the military resources of the United States. Korea is a peninsula which at the narrowest point of the Strait of Tsushima is little more than a hundred miles from Japan. Therefore Korea lay within ready reach of the largest concentration of American troops outside the United States, the four divisions of General Douglas MacArthur’s army of occupation in Japan, and within ready reach also of American sea power.

Only two months before the NKPA invaded the South, a famous document known as NSC-68 hinted at the possibility of unilateral and unprovoked action by the Soviets or one of their proxy states, like North Korea. This document arrived too late, and even if it had appeared earlier, it is not clear that the Truman administration would have taken significant actions to put South Korea on a more defensible footing.

To arrange U.S. and NATO forces to respond to the dangers listed in NSC-68 would have required a reallocation of the resources inside the defense budget. Much of the thought in the Truman administration centered on deterring, defending against, or fighting in a global strategic nuclear conflict, or at least a massive conventional invasion through central Europe. A smaller regional war was not fully anticipated, as William Donnelly reports:

North Korea’s invasion of South Korea in June 1950 occurred as senior U.S. civilian and military officials were considering what to do about recommendations in an April 1950 State Department paper submitted to President Truman. This paper, NSC 68, had argued that there was an increasingly dangerous imbalance of power between the Soviet Union and the United States, an imbalance that favored the former and would lead the Soviets to take greater risks in advancing their interests. The United States, NSC 68 urged, should undertake a major military buildup to reassure its allies and deter the Soviet Union. Although the Joint Chiefs of Staff (JCS) had not provided by June 1950 a final estimate of the forces required by NSC 68, there was no doubt that its implementation would involve a major increase in defense spending. But before June 1950, President Truman was not convinced such a step was necessary.

Domestic policy, foreign policy, and military policy should ideally harmonize to serve the national interest. Periodic reallocations within the defense budget are a necessary part of keeping policy both on task and congruent to current global realities.

The Korean conflict would serve to alert American policy makers to unanticipated dangers coming from the communist powers, as Mark Levin writes:

The moral imperative of all public policy must be the preservation and improvement of American society. Similarly, the object of American foreign policy must be no different.

The ‘improvement’ which is the goal of proper policy is the increase of personal freedom and individual political liberty. In domestic policy, this takes the forms of deregulation, tax cuts, and the defense of property rights. In foreign policy, and in military action, it takes the form of acting always to protect the lives, liberties, and properties of United States citizens.

South Korea survived the North’s attack, despite initial unpreparedness on the part of the United States, because major U.S. forces were able to be quickly redeployed from nearby Japan, and because those forces were large, well-equipped, and well-trained, relative to the NKPA and their Soviet supporters.

Sunday, March 6, 2016

The American Economy, Circa 1960

Taxes are a perennial problem for citizens. They are, to borrow the phrase Thomas Paine applied to government, a “necessary evil.” Taxes are truly necessary: if a government is to protect the lives, properties, and liberties of its citizens, it will need the resources to carry out that responsibility.

Yet taxes are truly evil: they are the confiscation of a citizen’s property. To the extent that property is attained by work, taxes are the appropriation of a citizen’s labor, and thereby constitute involuntary servitude: slavery.

This was true in ancient Egypt, Babylon, Rome, and Greece. It is also the case in modern history. Reviewing economic statistics sometime around 1959, Senator Goldwater wrote:

Here is an indication of how taxation currently infringes on our freedom. A family man earning $4,500 a year works, on the average, twenty-two days a month. Taxes, visible and invisible, take approximately 32% of his earnings. This means that one-third, or seven whole days, of his monthly labor goes for taxes. The average American is therefore working one-third of the time for government: a third of what he produces is not available for his own use but is confiscated and used by others who have not earned it. Let us note that by this measure the United States is already one-third ‘socialized.’ The late Senator Taft made the point often. “You can socialize,” he said, “just as well by a steady increase in the burden of taxation beyond the 30% we have already reached as you can by government seizure. The very imposition of heavy taxes is a limit on a man’s freedom.”
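Goldwater’s arithmetic is easy to verify; a quick sketch in Python using the figures he cites:

# Checking the figures quoted above: 22 workdays a month, roughly 32% lost to visible and invisible taxes.
workdays_per_month = 22
total_tax_rate = 0.32

days_for_government = workdays_per_month * total_tax_rate
print(f"Days per month worked for taxes: {days_for_government:.1f}")                       # about 7 days
print(f"Days left for the worker:        {workdays_per_month - days_for_government:.1f}")  # about 15 days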

That citizens tolerated, in the early 1960s, the government’s seizure of a third of their annual income was due to the disguised nature of that seizure. Had the government presented, all at once, a bill for that sum, a revolution would have been likely.

Instead, taxes were divided into various segments: income taxes, capital gains taxes, tariffs, etc. Some taxes are hidden in the cost of consumer goods: the price of a loaf of bread or a bottle of ketchup, for example, includes the taxes paid by the manufacturer and shipper of the goods.

The political process regularly features candidates who express sympathy for the citizens and the burden of taxation which they bear. Yet those candidates, regardless of their party affiliation, fail so uniformly to alleviate that burden significantly that their failure engenders cynicism on the part of the voters.

While eager to elect leaders who will address the excessive burden of taxation, voters have ceased truly to expect meaningful relief in this regard. Senator Goldwater wrote: “I suspect that this vicious circle of cynicism and failure to perform is primarily the result of” the habit, found among the news media and the politicians, of addressing taxation as a technical economic inconvenience, instead of considering taxation as an ethical question.

When a government uses the brute force of regulation to confiscate property from citizens, then surely a moment of ethical consideration is in order. The old slogan of ‘no taxation without representation’ alerts us to the fact that ‘administrative rulings’ and ‘user fees’ constitute a path around proper legislative processes.

Yet, as Goldwater notes, the political class and the media have fueled cynicism because of their “success in reading out of the discussion the moral principles with which the subject of taxation is so intimately connected.”

Friday, February 26, 2016

Competing Economic Doctrines: ‘Creative Destruction’ vs. ‘Too Big To Fail’

In 1942, the economist Joseph Schumpeter earned a permanent place in history when he developed the phrase ‘creative destruction’ to describe how economic growth and innovation arise from the debris of failed enterprises.

On average, businesses which fail or go bankrupt are replaced by more successful companies. On average, a worker who is laid off eventually finds not only another job, but a better-paying one.

To secure the full benefits of Schumpeter’s principle, however, those who are in a position to intervene in the economy must exercise restraint. The sometimes counterintuitive implications of “creative destruction” may require regulators to stand back and allow the financial collapse of a business, or an entire industry, to play out.

The humanitarian impulse, of course, nudges the government to intervene on behalf of failing companies for the sake of workers who might lose their jobs, or for the sake of investors who depend on dividends for their daily bread.

Yet the best outcome for workers and investors alike, over the long run, is to allow creative destruction to take its course. Bitter lessons of history show that, e.g., the 1979 bailout of the Chrysler Corporation did indeed keep the company alive, but only barely, and the long-run effect was to extend the misery as auto workers coped with shrinking wages and investors found Chrysler to be less than fruitful.

Historians should not engage in speculation about counterfactual situations, so one may not confidently state what would have happened had the 1979 bailout not taken place.

However, the record shows numerous financial collapses and bankruptcies which ultimately led to new and larger business opportunities.

Opposing the doctrine of ‘creative destruction’ is the political notion that some enterprises are ‘too big to fail,’ meaning that they are too big to be allowed to fail.

This political approach argues that the government should intervene in the natural workings of the marketplace to sustain large companies which might otherwise declare bankruptcy. Allegedly, this policy avoids a ‘domino effect’ or a ‘chain reaction’ of other business failures.

The administrations of both George W. Bush and Barack Obama implemented this policy, despite purported differences between those two presidencies.

Two sectors received this attention: the financial industry and the automotive industry. General Motors Corporation and Chrysler Corporation received massive government support at a time when there was at least the possibility that they might have to declare bankruptcy.

Concerning the continuity of the two administrations in this policy, Paula Gardner, reporting Sandy Baruah’s analysis of Obama’s actions, wrote:

Politics also might enter the message. Baruah said, “He vastly underplays the role President Bush played in setting the stage. The U.S. auto industry would not have been saveable in 2009 if George W. Bush had not taken the action that he did.”

One must be careful not to confuse the “U.S. auto industry” with a collection of auto companies. One or two companies can go bankrupt, and in so doing, strengthen the industry. By “saving” a company, one can weaken the industry.

Had GM or Chrysler declared bankruptcy, it would not have meant the loss of thousands of jobs. The physical facilities of those corporations would have been maintained, the companies would have been restructured or sold or broken up, and new owners would have happily invested, getting manufacturing equipment and buildings at a bargain price.

The result would have been an energized industry and a burst of economic activity.

The tension, then, lies between two competing and mutually exclusive economic doctrines: ‘creative destruction’ vs. ‘too big to fail.’

The question is whether to save a company at the expense of the economy, or to save the economy at the expense of a company.

The organic functioning of an economy includes as a regular feature the failure of businesses. That the businesses are large or small makes no difference. David Stockman, Director of the Office of Management and Budget from January 1981 to August 1985, reviews the recent history of how individuals were willing to let the economy run its own course:

Certainly President Eisenhower’s treasury secretary and doughty opponent of Big Government, George Humphrey, would never have conflated the future of capitalism with the stock price of two or even two dozen Wall Street firms. Nor would President Kennedy’s treasury secretary, Douglas Dillon, have done so, even had his own family’s firm been imperiled. President Ford’s treasury secretary and fiery apostle of free market capitalism, Bill Simon, would have crushed any bailout proposal in a thunder of denunciation. Even President Reagan’s man at the Treasury Department, Don Regan, a Wall Street lifer who had built the modern Merrill Lynch, resisted the 1984 bailout of Continental Illinois until the very end.

Prosperity is the result of allowing individuals and businesses to trade freely, and allowing them to experience the consequences of those trades - for good or for ill. It is tempting to intervene, with the well-intentioned desire to alleviate the short-term financial turmoil brought about by bankruptcies.

But, in avoiding that short-term pain, one prevents the economy from maximizing its long-term gains. A worker might be spared a few months of unemployment, but he is now left to languish at wages lower than if he’d been laid off in bankruptcy and later rehired by a business which was more competitive than the original one.
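
To make that trade-off concrete, here is a minimal Python sketch. Every figure in it is a hypothetical assumption chosen only for illustration; none of the numbers comes from the Chrysler example or any other source cited here.

# Hypothetical comparison of two paths for one worker over a ten-year horizon.
bailout_wage = 45_000        # annual wage if the failing firm is kept alive (assumed)
post_layoff_wage = 52_000    # annual wage at a more competitive firm after rehiring (assumed)
months_unemployed = 6        # short-term pain in the bankruptcy scenario (assumed)
years = 10

earnings_if_bailed_out = bailout_wage * years
earnings_if_laid_off = post_layoff_wage * (years - months_unemployed / 12)

print(earnings_if_bailed_out)   # 450000
print(earnings_if_laid_off)     # 494000.0

Under these assumed numbers, the worker who endures six months of unemployment comes out ahead within a few years. The sketch shows only the shape of the argument: a short, sharp loss can be outweighed by a durable wage gain.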

Monday, February 8, 2016

Immigration: Legal vs. Illegal

Immigration is a central question for national and international politics at the end of the twentieth century and the beginning of the twenty-first.

In the United States, this question takes the form of analyzing the distinction between ‘legal’ and ‘illegal’ immigration. It is a question of law, and of knowing and deliberate violation of the law.

Legal immigrants are, by definition, welcome in the United States. They contribute to the economy and pay their prescribed taxes. They can eventually become full citizens.

Illegal immigrants are criminals, because they have understood the rules beforehand, and have chosen to break those rules. They pay less in taxes and are therefore not fully supporting the system from which they draw significant social benefits.

The political controversies emerge when people use the word ‘immigrant’ without clearly stating whether they are referring to legal or illegal immigrants.

The example of California is instructive. In 1994, the state had between 1.3 and 2 million illegal immigrants. The exact number is, of course, difficult to determine, because illegal immigrants are constantly working to conceal the fact that they have violated the law.

In that year, California’s voters approved Proposition 187, which was designed to ensure that legal immigrants received social benefits, and to ensure that illegal immigrants did not take those benefits from the legal immigrants.

Because 1994 was a statewide election year, the voting on Proposition 187 was linked with the voting on candidates for various statewide offices. An editor for the University of Michigan’s Michigan Law Review writes:

In 1994, Governor Pete Wilson of California pulled off an amazing come-from-behind victory by tethering himself with titanium cords to Proposition 187, which prohibited illegal aliens from collecting public services. Wilson went from a catastrophic 15 percent job-approval rating to a landslide victory. Suddenly he was being touted as presidential material.

Unrelated to California’s Proposition 187, but related to the topic of immigration, journalist Martin Wolf, writing for the Financial Times, reports that

The share of immigrants in populations has jumped sharply. It is hard to argue that this has brought large economic, social and cultural benefits to the mass of the population.

Note that immigration itself is not harmful to national economies: legal immigration has beneficial effects, while illegal immigration does serious damage and remains a major problem facing various world governments in the early twenty-first century.

Monday, January 18, 2016

Why the Government Wants You to Pay More for Your Groceries: Radical Distortions of Natural Market Forces

In mid-2015, Shandra Martinez reported on an ongoing legal action. A regional grocer, Meijer, was suspected of selling food at prices which the government deemed “too low.”

This seems, to say the least, counterintuitive, especially at a time when the national economy had been sluggish for several years. Is it such a bad thing for consumers to get a good deal on groceries?

To be sure, there have been well-intentioned arguments for minimum prices: to avoid, e.g., “dumping” by which a large retailer can drive smaller retailers out of the market, only to dramatically raise prices once the competition has been eliminated.

But if such a tactic ever worked, it would certainly not work in a market as liquid and active as consumer groceries.

Yet setting minimum prices for various retail items remains a widespread government practice. Shandra Martinez writes:

Meijer's recent opening of two Wisconsin stores has led to a state investigation to determine if the Midwest retailer violated a Depression-era law that keeps products from being sold below cost.

Products reported to be priced too low range from 28-cents a pound bananas to a $1.99 gallon of milk.

From beverages to fuel, price controls set minimum prices for a wide range of products. When competition would drive prices lower, the government steps in to stop prices from dropping.

Arguments are made that this regulatory intervention protects various industries. If this were true, it would ironically be foreign industries receiving protection from American governments, because in many cases, the goods so regulated are imported.

The minimum prices, however, often fail to protect industries, and in fact prevent them from expanding. Lower prices would lead to increased production, higher employment, and more net income to the manufacturer.

Both from the perspective of common sense, and from the perspective of technical economics, government-mandated minimum prices for retail goods serve only to force the consumer to pay more. The practice of legislated minimum prices creates inefficiencies in which selected “crony” companies can generate abnormally high incomes enabled by government favoritism.

The removal of government price controls benefits consumers and producers alike, strengthening the overall economy.

Sunday, January 17, 2016

Rising Wages, Sinking Families: the Paradoxes of Income

Despite economic “hard times” in the 1970s, when the U.S. economy was hit separately but simultaneously by inflation and by the Arab Oil Embargo, and again in the early 2010s, statistics still show long-term growth in real wages, as Charles Murray notes:

In the 1960 census, the mean annual earnings of white males ages 30 to 49 who were in working-class occupations (expressed in 2010 dollars) was $33,302. In 2010, the parallel figure from the Current Population Survey was $36,966 — more than $3,000 higher than the 1960 mean, using the identical definition of working-class occupations.

Alternatively, the same trend can be quantified as a rising standard of living. During that period, 1960 to 2010, the percentage of households possessing the following items increased: televisions, color televisions, microwave ovens, DVD (previously VCR) devices, cell phones, electric garage door openers, personal computers, smart phones, etc.

In any case, we can mark that fifty-year period as a time of net economic gain: which means we can use this half-century as a social specimen of what happens to a population when prosperity increases.

This occurred despite the decline of private-sector unions, globalization, and all the other changes in the labor market. What's more, this figure doesn't include additional income from the Earned Income Tax Credit, a benefit now enjoyed by those making the low end of working-class wages.
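
A quick arithmetic check, sketched here in Python, puts Murray’s two figures in perspective. The computation is mine; the only inputs are the 1960 and 2010 means quoted above, both in 2010 dollars.

wage_1960 = 33_302   # mean annual earnings, working-class white males ages 30-49, 1960 (2010 dollars)
wage_2010 = 36_966   # the parallel figure from the 2010 Current Population Survey

total_growth = (wage_2010 - wage_1960) / wage_1960        # about 0.110, i.e. roughly 11% over 50 years
annual_growth = (wage_2010 / wage_1960) ** (1 / 50) - 1   # about 0.0021, i.e. roughly 0.21% per year

print(round(total_growth * 100, 1))   # 11.0
print(round(annual_growth * 100, 2))  # 0.21

In other words, the gain Murray describes is real but modest: roughly a fifth of one percent per year, compounded over half a century.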

At one time, demographers often accepted the hypothesis that higher incomes led to higher marriage rates. Accordingly, they made few or no policy recommendations to directly increase the marriage rate. Instead, they assumed that improvements in the economy would automatically fix social problems like low marriage rates.

Perhaps, at some point in the past, that hypothesis was true. But, as Charles Murray points out, the last few decades have yielded numbers which have created great doubt about this conjecture:

If the pay level in 1960 represented a family wage, there was still a family wage in 2010. And yet, just 48% of working-class whites ages 30 to 49 were married in 2010, down from 84% in 1960.

If the hypothesis which linked rising wages to a rising marriage rate is not true, then the question arises about which causal relations, if any, exist between income levels and marriage rates. Additionally, the question arises about whether there are other factors, economic or noneconomic, which might determine marriage rates.

A second hypothesis had predicted that rising wages would cause rising participation in the labor force. Better wages, it was assumed, would lure more workers into the workplace. But this hypothesis also showed itself to be questionable.

What about the rising number of dropouts from the labor force? For seven of the 13 years from 1995 through 2007, the national unemployment rate was under 5% and went as high as 6% only once, in 2003. Working-class jobs were plentiful, and not at the minimum wage. During those years, the mean wage of white males ages 30 to 49 in working-class occupations was more than $18 an hour. Only 10% earned less than $10 an hour.

Contrast those years of plentiful, well-paying jobs with the long-term trend of dropouts from the labor force:

If changes in the availability of well-paying jobs determined dropout rates over the entire half-century from 1960 to 2010, we should have seen a reduction in dropouts during that long stretch of good years. But instead we saw an increase, from 8.9% of white males ages 30 to 49 in 1994 to 11.9% as of March 2008, before the financial meltdown.

In the face of the failure of these two hypotheses, statisticians, economists, and demographers had to explain why rising wages didn’t trigger an increase in workforce participation and an increase in the marriage rate.

If changes in the labor market don't explain the development of the new lower class, what does? My own explanation is no secret. In my 1984 book Losing Ground, I put the blame on our growing welfare state and the perverse incentives that it created. I also have argued that the increasing economic independence of women, who flooded into the labor market in the 1970s and 1980s, played an important role.

There are, however, alternatives to Charles Murray’s interpretations. The question is why rising wages didn’t have the anticipated social effects. But from a different perspective, perhaps the wages didn’t really rise.

Although Murray’s numbers, as quoted above, seem to indicate an increase in wages, his numbers also show an increase in single-parent households, whether through divorce or through illegitimacy. Unwed motherhood erodes the expanded purchasing power expected from nominally rising wages.

When parents do not live together with their children, the need arises to sustain two households. If mother and father do not live permanently in the same dwelling, then there is a need for two stoves, two refrigerators, two furnaces, two lawnmowers, etc.

This redundancy is very expensive, nearly doubling the cost of living. So if wages rise at the same time that single-parent families arise, then the latter number will to some extent counteract the former.
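
A minimal Python sketch, with purely hypothetical dollar amounts, shows how splitting one household into two raises living costs even when no one’s income changes. The division between “fixed” household costs (the stoves, refrigerators, and furnaces mentioned above) and per-person costs is an assumption made only for illustration.

fixed_costs_per_household = 18_000   # housing, appliances, utilities, upkeep per year (assumed)
per_person_costs = 9_000             # food, clothing, transportation per adult per year (assumed)

one_household = fixed_costs_per_household + 2 * per_person_costs       # 36000
two_households = 2 * fixed_costs_per_household + 2 * per_person_costs  # 54000

print(two_households / one_household)   # 1.5

With these assumed figures, the same two adults cost fifty percent more to maintain once they live apart; the larger the fixed share of the budget, the closer that ratio approaches the near-doubling described above.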

With more women in the workforce, single mothers were often able to maintain themselves financially. So the average standard of living across the population was able to sustain itself and even increase.

This increase in standard of living was, however, achieved very inefficiently, because for each working single mother, there was a corresponding male either working or potentially in the workforce. Had the two of them joined forces, the standard of living would have risen even faster, and with less work.

These inefficiencies arise when there are two parallel households where one would suffice. The net cost to individuals and to society is significant.

Charles Murray examines the cultural factors behind the increase in single-parent households, whether through divorce or through unwed motherhood:

Simplifying somewhat, here’s my reading of the relevant causes: Whether because of support from the state or earned income, women became much better able to support a child without a husband over the period of 1960 to 2010. As women needed men less, the social status that working-class men enjoyed if they supported families began to disappear. The sexual revolution exacerbated the situation, making it easy for men to get sex without bothering to get married. In such circumstances, it is not surprising that male fecklessness bloomed, especially in the working class.

In the early twenty-first century, these cultural factors are powerful influences in society. Culture seems to trump economics, not the other way around:

I barely mentioned these causes in describing our new class divide because they don't make much of a difference any more. They have long since been overtaken by transformations in cultural norms. That is why the prolonged tight job market from 1995 to 2007 didn't stop working-class males from dropping out of the labor force, and it is why welfare reform in 1996 has failed to increase marriage rates among working-class females. No reform from the left or right that could be passed by today's Congress would turn these problems around.

The disclaimer needed when examining these patterns is, of course, that we are dealing with averages and trends. There are exceptions: widowhood, for example, is almost never by choice.

Physical disability is another important exception.

But as percentages in the general population, these exceptions are small segments, and the general trends are not affected by them.

As single mothers have proven themselves more adept and creative at sustaining themselves and their children in single-parent households, the corresponding males, who are on average not supporting any children, find it easier to support themselves with little or no constructive economic activity.

Society has created a giant loophole, an easy way out, for men who father children, abandon them, and then live easily without contributing to the economy, to their children, or to the mother of their children. They are allowed to act irresponsibly.

The prerequisite for any eventual policy solution consists of a simple cultural change: It must once again be taken for granted that a male in the prime of life who isn't even looking for work is behaving badly. There can be exceptions for those who are genuinely unable to work or are house husbands. But reasonably healthy working-age males who aren’t working or even looking for work, who live off their girlfriends, families or the state, must once again be openly regarded by their fellow citizens as lazy, irresponsible and unmanly. Whatever their social class, they are, for want of a better word, bums.

The use of the word ‘responsible’ is central to Murray’s argument here. For many people, the word or the concept it represents need not be invoked: they are naturally inclined to contribute to their families and to society.

But a large enough segment of the population - and especially of the male population - needs some external encouragement to seek and maintain employment, and to support their families. For that reason, it is necessary to provide structures which incentivize socially constructive economic behavior.

Saturday, January 9, 2016

AIDS and Politics: Reagan Seeks a Cure

Under the headline “Reagan Defends Financing for AIDS,” the New York Times reported, in the words of reporter Philip Boffey, that President Ronald Reagan said

his Administration was already making a “vital contribution” to research on the disease.

The Times article appeared in September 1985. This was relatively early in the history of the disease, which had been utterly unknown only a few years before. Boffey reported Reagan’s statement that

he had been supporting research into AIDS, acquired immune deficiency syndrome, for the last four years and that the effort was a “top priority” for the Administration.

Reagan encountered some resistance to directing both funding and attention to the illness. But he “publicly addressed the issue of the lethal disease that has claimed thousands of victims, primarily among male homosexuals, intravenous drug addicts,” and other high-risk demographic segments.

Reagan’s opponents were concerned about sending large amounts of taxpayer dollars to various types of medical research. They argued that, while AIDS was subject to therapy, management, and treatment, it would be misleading to raise hope of an actual ‘cure.’

Nonetheless, Reagan’s “administration had provided or appropriated some half a billion dollars for research on AIDS since he took office in 1981.”

Reporting about Reagan’s effort to find help for those who suffered with the disease, Carl Cannon writes about Reagan’s meetings with people like

Los Angeles gay activist David Mixner, a friend of future president Bill Clinton. “Never have I been treated more graciously by a human being,” Mixner said of his meeting with Reagan.

As a former Hollywood actor, Reagan was a friend of Rock Hudson, who was dying of the disease. Although Reagan was advocating for AIDS funding,

it was Hudson who wouldn’t discuss AIDS; Reagan actually mentioned the disease publicly for the first time two weeks before his friend passed away.

Reagan had been addressing AIDS since early in his first term in office. By the beginning of his second term, he was becoming more vocal on the topic.

Although some opponents claimed that he never mentioned the disease until he’d already been in office for seven years, he had in fact addressed it clearly and repeatedly several years earlier.

Reagan first mentioned AIDS, in response to a question at a press conference, on Sept. 17, 1985. On Feb. 5, 1986, he made a surprise visit to the Department of Health and Human Services where he said, “One of our highest public health priorities is going to be continuing to find a cure for AIDS.” He also announced that he’d tasked Surgeon General C. Everett Koop to prepare a major report on the disease. Contrary to the prevailing wisdom, Reagan dragged Koop into AIDS policy, not the other way around.

But more than speaking early and often about the matter, Reagan consistently provided a budget for it.

The administration increased AIDS funding requests from $8 million in 1982 to $26.5 million in 1983, which Congress bumped to $44 million, a number that doubled every year thereafter during Reagan’s presidency.

Nor did Reagan shy away from direct involvement in the matter. Carl Cannon notes that, “in 1983, early in the AIDS crisis,” Reagan’s “HHS Secretary, Margaret Heckler,” with Reagan’s approval,

went to the hospital bedside of a 40-year-old AIDS patient named Peter Justice. Heckler, a devout Catholic, held the dying man’s hand, both out of compassion and to allay fears about how the disease was spread.

“We ought to be comforting the sick,” said Ronald Reagan’s top-ranking health official, “rather than afflicting them and making them a class of outcasts.”

“I’m delighted she’s here,” Justice said. “I’m delighted she cares.”

Peter Justice and other AIDS patients like him appreciated Reagan’s sincere desire to support and help them.

Since that time, significant progress has been made in managing the disease with various therapies, treatments, and medications, and there is reason to expect more progress in the future, even if a true ‘cure’ remains unlikely. Ironically, however, relatively little of that progress came from the federally-funded efforts, whether during Reagan’s administration or under later presidents; the most effective medications, treatments, and therapies were developed in private-sector research, outside the government.

Nonetheless, Reagan’s efforts were laudable and demonstrated an ethical attempt to render assistance to those who were suffering.