Thursday, December 12, 2013

Analyzing 9/11

The task of understanding exactly what happened on September 11, 2001 has gone on for a decade, and will go on long into the future. To be sure, the basic events are simple and clearly acknowledged. Nineteen Islamic terrorists, mostly from Saudi Arabia, hijacked four airplanes, flying two of them into the World Trade Center (WTC) and one into the Pentagon. The final plane crashed when its passengers, who had learned of the plot, resisted the hijacking and prevented the aircraft from reaching its target. Approximately 3,000 people died.

Beyond those basic facts, many details of the attacks remain the topic of research. Discovering the minutiae of the plot is difficult because it was conceived in secrecy, and because much of the reporting is biased, coming from sources in the Muslim world. Senator Al Franken offers an example:

Six months after 9/11, the Gallup Poll of Islamic Countries found that an overwhelming majority of those surveyed believed that the attacks on the World Trade Center and the Pentagon had not been the work of Arabs. Well-educated Egyptians and Saudis believed that the Israelis were behind the murder of the three thousand innocents on 9/11, in large part because of articles in their countries' official state newspapers. One of the widely disseminated stories was that no Jews died in the collapse of the Trade Towers because they had received calls telling them not to go to work that day.

When such stories are widely circulated and believed, the historian's task becomes more difficult. Sources must be examined carefully. Another factor which makes the work difficult is the premature release of information. If data are published while investigation and research are still in progress, the released data can contaminate the data which are still to be gathered by creating expected narratives. If an expected narrative about an event exists, then researchers may be predisposed to fit evidence into that narrative, rather than letting the evidence suggest other possible alternatives. Likewise, witnesses being interviewed may reformulate their memories and statements to conform to the expected narrative. This process may be conscious or subconscious.

The same type of concern is at work when a crime lab is asked to examine a sample without being told the details of the case from which the sample comes. The goal is to keep the research as unbiased as possible.

Naturally, it is expected that all such data will eventually be made available to the public.

Franken offers an example of data released prematurely in the chaos and emotional trauma following 9/11:

A clearly rattled Orrin Hatch was all over the news that day, blaming Clinton because he had "de-emphasized" the military. Hatch was also the first to confirm al Qaeda's involvement by disclosing classified intercepts between associates of Osama bin Laden about the attack. Asked about it on ABC News two days later, a miffed Donald Rumsfeld said Hatch's leak was the kind that "compromises our sources and methods," and "inhibits our ability to find and deal with terrorists who commit this kind of act."

Hatch's gaffe was twofold. First, by highlighting Clinton's lack of military preparation, he biased historians' analyses; other contributing variables should have received consideration in the absence of Hatch's emphasis on this one variable. Second, Hatch unwittingly alerted Muslim terrorists to the fact that their communications had been compromised; had Hatch not done this, further data might have been mined from such intercepts. As it was, the terrorists quickly changed their communications protocols.

In hindsight, while Hatch's blunder deprived investigators of valuable data which might have saved lives, it did not contaminate the general understanding of 9/11. But it is an example of the type of slip which could have misdirected the analysis. Franken continues:

The disclosure that al Qaeda was responsible did allow Representative Dana Rohrabacher (R-CA) to identify the "root of the problem" just hours after the attack: "We had Bill Clinton backing off, letting the Taliban go, over and over again."

Documents revealed that Clinton had been briefed on the Taliban, on al Qaeda, and on Osama bin Laden. Clinton had nixed various action plans to neutralize the threat of al Qaeda, had weakened the intelligence-gathering of the United States, and had weakened the military's ability to carry out such operations.

When Clinton left the White House in January 2001, the incoming administration was concerned about the weakened state of both the military and the various intelligence agencies. Incoming National Security Advisor Condoleezza Rice noted:

I knew that there was a serious threat. I'd made that clear in a radio station interview in Detroit during the campaign, stating, "There needs to be better cooperation [among U.S. intelligence agencies] because we don't want to wake up one day and find that Osama bin Laden has been successful on our territory."

Clearly, Condoleezza Rice was well aware of bin Laden long before the 9/11 attacks. Although the various intelligence agencies were aware of the threat from al Qaeda, they had few details, and even fewer concrete suggestions about what to do about that threat. If they had such suggestions, the military lacked the resources to carry them out at that time. The NSC's counterterrorism advisor, Dick Clarke, briefed Rice when the new administration moved into the White House. She recalls that Clarke's presentation was

short on operational content. There was a lot that described al Qaeda but not very much about what to do. He made the point that al Qaeda was a network dedicated to the destruction of the United States. There were numerous slides with faces of al Qaeda operatives and a discussion of their safe haven in Afghanistan. There was very little discussion of Pakistan or Saudi Arabia. At the end I asked Clarke and his team whether we were doing all we could to counter al Qaeda. He made mention of some covert activities and said that he would later brief me on some other efforts.

Despite the numerous failures of the Clinton White House, the new administration did not want to spend time enumerating such shortcomings. In support of George Tenet, a Clinton appointee, Vice President Cheney wrote that

I was a strong supporter inside the White House of what Tenet and the CIA were trying to do. When there were suggestions after 9/11 that we have a group similar to the Warren Commission investigate intelligence failures, I had argued against it, saying it would too easily turn into a witch hunt and that what we needed to do was focus on preventing the next attack.

It is worth noting the broad agreement: liberals and conservatives, Democrats and Republicans, Clinton appointees and Bush appointees. Al Franken, Condi Rice, Orrin Hatch, Dana Rohrabacher, Bill Clinton, George W. Bush, Dick Clarke, George Tenet, and Dick Cheney - that is indeed a broad spectrum of political views.

Monday, December 2, 2013

Desert Storm: the View from inside a Tank

The first Gulf War, as Operation Desert Storm is sometimes called, presents the historian with a good object for study, because it was limited in both time and space, allowing the student to capture a comprehensive overview of the conflict. By comparison, a scholar might study WWII for years, only to realize how much more he has yet to learn about it.

There was a clear and defined buildup to the war. Planners and strategists had access to reliable intelligence, knew the terrain, and measured their resources carefully. Historians Lawrence Freedman and Efraim Karsh write:

The Vietnam War had as profound an influence on American calculations as the war with Iran had on Iraq. Key actors in the American political process were determined not to repeat the mistakes of the 1960s: the administration was resolved not to get trapped in an unwinnable war; the military would not allow civilians to impose artificial restrictions that would deny them the possibility of a decisive victory; Congress refused to be railroaded into giving the executive carte blanche to wage war; and the diplomats did not wish to find themselves supporting a military campaign in isolation from natural allies.

But there was more to the Desert Storm strategy than merely working to avoid "another Vietnam," a phrase which was common in public discussions of the conflict. As the Foreign Service's Sol Schindler writes,

Gen. Norman Schwarzkopf’s strategy, crafted by a special team of planners brought in from the Pentagon was fairly simple. The Marines would attack along the coast hitting the heavily fortified Iraqi Army positions. It was thought the Republican Guard would then stream south to reinforce the troops under attack. At that point, the VII Corps under Lt. Gen. Frederick “Freddie” Franks on the left would hook round to come in on their flank and crush the Republican Guard.

Schwarzkopf was commanding a coalition of at least 35 nations. Iraq stood essentially alone, with no material or military support from any ally. The Republican Guard was the elite unit of the Iraqi army. Although the coalition was militarily far superior to the Iraqi army, Schwarzkopf and other planners did not want overconfidence to become a weakness, and so they planned as if the enemy were strong and made provision for worst-case scenarios. Yet, as is almost always the case in war, even the most careful planning and strategizing can quickly dissolve in the fog of war once hostilities commence; despite the attempt to foresee every possibility, there is always one scenario for which nobody accounted.

Fighting began not on the ground, but in the air, as Freedman and Karsh report:

The war began at 03:00 Kuwait time on January 17. A million men (with some 32,000 women on the coalition side) faced each other across the border but, as predicted, the initial stage of the war was turned over to the air campaign. The coalition command had earlier intended to begin with a phased campaign; the sustained attacks on ground forces were to be held back for a late stage. In the event, the considerable air armada gathered by the start of the war made it possible to begin attacks on ground forces from day one. Despite the intense speculation accompanying the lapse of the United Nations deadline, effective tactical surprise was achieved. Iraqi air defenses, confused by electronic warfare, achieved little. A high sortie rate, averaging about 2000 per day, was achieved almost immediately and sustained thereafter. A strategic phase of considerable efficiency was directed against Iraq's ability to command and supply its ground forces, and to develop and produce weapons of mass destruction.

Various forms of smart bombs - munitions guided by laser, radio and radar - gave the coalition both control of the air and the ability to inflict devastating damage on the enemy's military installations on the ground. After establishing air superiority, the ground war began, as Sol Schindler writes:

When the signal for the ground war in Kuwait was given, the 2nd Squadron (the Cougars) was more than ready. The troops had trained relentlessly in the desert, were sick of desert sand in their coffee, underwear and bedding, tired of the general dullness and boredom of their surroundings and, at the risk of being politically incorrect, could be described as eager for combat. They knew that only through offensive action could the war be brought to an end and they could finally leave the desert and return home.

Armor would play a major role in this war. Covering large amounts of desert quickly meant that the infantry would be less crucial than cavalry. Both the Iraqis and the coalition forces understood this. Freedman and Karsh report that

The coalition also had good reasons not to be overawed by Iraq's military capability. The major uncertainties surrounded its readiness and ability to use chemical weapons, and the potential effects of its ballistic missile force. Although fear of an eventual Iraqi nuclear capability was one of the reasons for defeating Saddam, no one thought that such a capability was then already available. Only a limited number of Iraqi divisions were considered competent, and only the elite Republican Guard had modern Soviet T-72 tanks. Nearly half of the troops were mobilized reservists who had shown a readiness to surrender during the war with Iran when the opportunity arose. There was also evidence that Iraq's less capable and youngest troops were being put in the lightly defended forward positions. The air force had been ineffective in close air support and the pilots were judged to be poor. The chain of command was heavily centralized and unresponsive. Generals who had made their names in the war with Iran were retired, dead, or under arrest. The defensive methods developed during the war with Iran had been based on massive earthworks combined with flooding to channel any offensive onto a killing ground. The Kuwaiti border did not offer the same potential for water barriers, nor were there any natural barriers such as the Shatt-al-Arab waterway. It was also apparent that the Iraqi force on the border to the west was more thinly spread.

In later years, popular memory would confuse and conflate the two Gulf wars, but a decade and significant differences lie between them. In 1991's Operation Desert Storm, both the Israeli and Saudi governments confirmed Iraq's possession of, and willingness to use, chemical and biological weapons; Senator Donald Riegle confirmed that Iraq used these weapons, and that coalition forces had been exposed to them. Further, coalition forces were exposed to low levels of radiation in the form of depleted uranium, a material used in manufacturing armor-piercing shells. The Iraqis did not have a functioning fission warhead during the conflict; one of the coalition's goals was to disrupt the Iraqi weapons program which was in the process of building such warheads.

Operation Iraqi Freedom, by contrast, which began in 2003, featured less use of chemical and biological weapons; the coalition forces moved so quickly that the Iraqis did not have time to deploy them. Coalition troops found stockpiles of chemical and biological weapons - called weapons of mass destruction or WMDs - as well as facilities producing material for nuclear weapons, facilities which had been restarted after the 1991 conflict had ended.

In both conflicts, armor was central. Tanks were decisive in Operation Desert Storm, and important in early phases of Operation Iraqi Freedom. In 1991's conflict, the superiority of the coalition forces was demonstrated in a tank battle between the 2nd Squadron of the 2nd Armored Cavalry Regiment of the VII Corps and the Republican Guard. Sol Schindler writes:

After two days the 2nd Squadron, which was leading the advance finally made contact with the Republican Guard. The conflict that ensued was overwhelmingly in favor of the Americans. One squadron (cavalry speak for a battalion) wiped out an entire mechanized brigade. The entire battle cost the Americans one fatality while they managed to kill hundreds and captured even more.

Both in terms of equipment and in terms of training, the Iraqis were outpaced by coalition forces:

The Russian-built T72 tank was mechanically reliable but inferior to the American Abrams in the guns it used and in its range finders. More important, however, the American crews were better trained, better schooled, better led and infinitely more capable, making the results of the battle logical if somewhat unbelievable. What is truly unbelievable, however, is the faith the American leadership had in the fighting abilities of the Republican Guard, which prevented them from finishing it off.

Coalition officers overestimated the Republican Guard, and did not press the attack as quickly, as far, and as powerfully as they could have. The result was that many from the Republican Guard escaped or retreated, leaving them as threats for later. To what extent this surviving remnant of Republican Guard forces contributed to the second Gulf war, over a decade later, is not clear. But had the coalition been able to neutralize more of the Republican Guard in 1991, the Iraqi army of 2003 would have been somewhat weaker.

Thursday, November 28, 2013

Debt, Taxes, and Spending

While it has been clear since the mid-twentieth century, if not earlier, that the United States needed to reduce its national debt, its taxation, and its governmental spending, Congress has not succeeded in doing any of these. This may be in part due to the fact that it is impossible to reduce any one of these alone. Any attempt to reduce debt will fail without simultaneously reducing taxation and spending. Likewise, reducing taxation is not possible without cutting spending and the debt. Finally, a reduction in spending alone will not benefit citizens unless it is accompanied by cuts in taxes and debt.

It may seem counter-intuitive to suggest that taxes and the debt can both simultaneously be reduced. In order to trim the debt, it would seem necessary to keep taxes at the current level, or even increase them. But this is not so. It is quite possible to diminish the debt while curtailing taxes, as long as spending is lessened at the same time. The technique is this: lower spending at a slightly quicker rate than taxes are cut. In this way, more money is made available to reduce the debt even while cutting taxes.
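The arithmetic behind this technique can be sketched in a few lines of Python. The figures here are invented round numbers chosen purely for illustration, not actual federal budget data:

```python
# Hypothetical illustration of the technique described above: cut spending
# at a slightly faster rate than taxes, and apply the resulting surplus to
# the debt. All figures are invented round numbers (in trillions).

def project(debt, revenue, spending, tax_cut=0.02, spend_cut=0.03, years=10):
    """Project the debt after `years`, cutting spending faster than taxes."""
    for _ in range(years):
        revenue *= 1 - tax_cut      # tax receipts fall 2% per year
        spending *= 1 - spend_cut   # spending falls 3% per year
        debt -= revenue - spending  # any surplus pays down the debt
    return debt

# Start with revenue slightly above spending; the gap widens each year
# because spending shrinks faster, so the debt falls steadily.
print(project(debt=17.0, revenue=3.0, spending=2.9))
```

Because spending declines at the faster rate, the surplus grows year over year, so the debt falls even as the tax burden lightens.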

Debt, taxes, and government spending form a trio which consistently and inevitably puts a damper on the economy. While the numbers change so quickly - amount of national debt, unemployment rates, inflation rates, government spending budgets, tax rates - that any detailed comment about the economy is outdated as soon as it is printed, the general principles remain unchanged. In 2010, Ted Nugent wrote:

President Obama's stated program is to eliminate the tax cuts of President Bush, to raise the top marginal tax rate on the so-called "wealthy" who already pay the vast amount of income taxes, raise Social Security taxes, and provide tax dollars to fund private retirement programs. He supports throwing more tax dollars into the government's feed trough to gobble up. Not me.

Shortly after Nugent wrote those words, taxes on all workers were increased. Every job-holding American is paying more taxes. The national debt has more than doubled in a single four-year presidential term. In the same year, Chuck Norris wrote:

Washington's most recent financial spiral started with the Bush Wall Street Bailout (TARP) of $700 billion (what I call new debt #1). But then it continued under President Obama, who pushed for the next $787 billion stimulus bill (debt #2). And that wasn't enough either. Then they tried the $410 billion omnibus spending bill (with 9,000 earmarks - 60 percent Democrat and 40 percent Republican in origin), which like the others was railroaded through Congress (debt #3). Then President Obama informed us that another $634 billion would be required for a down payment for universal health care (debt #4) and so on. All of that doesn't include other economic stimuli needed on the government horizon, as Representative Daniel Inouye (D-Hawaii), chairman of the Senate Appropriations Committee, noted when he called the mammoth $787 billion spending bill "stimulus No. 1."

Note the bi-partisan guilt: both President Bush and President Obama supported "stimulus" bills. Both Democrats and Republicans maneuvered to get earmarked funds. The notion that certain companies were "too big to fail" and needed to be bailed out was simply false. If the government had simply refrained from intervening, and let the free market work its own way to equilibrium, the economy would have found a route to self-correction. Perhaps that self-correction would have included major businesses going bankrupt. Although that would have constituted a short-term hardship, the individuals experiencing that hardship would have also found their way to an economic self-correction. Workers laid off would have eventually found new jobs. Stock market declines would have been reversed as new companies emerged to replace the failed ones.

Instead, government intervention ensured that the temporary and transitional unemployment which is part of an economic self-correction became instead a structural and semi-permanent unemployment. Because the major businesses did not collapse, they did not create the space for new start-up businesses which would have taken their places and which would have lent the economy the growth associated with such new start-ups.

When taxes, government spending, and the national debt are cut, leaving room for the free market to energize economic growth, then the United States will have a chance to reverse some of the damage inflicted on it during the last few decades.

Thursday, November 14, 2013

The Paradoxes of Modern American Education

The following fictional vignette represents a real and all too frequent state of affairs:

It's another day in the elementary school classroom. While most of the children remain relatively focused on their activities, Johnny, as usual, presents a management challenge. The teacher is used to it by now, because he's that way almost every day. Early in the school year, when the pattern became apparent, the teacher conferred with the usual sources of advice - the school's psychologist and social worker - and learned about Johnny's home life. Johnny's parents are either incapable of parenting properly or unwilling to do so. Johnny's diet consists mainly of potato chips and cola. He is allowed to stay up as late as he wants, watching cable TV unsupervised, playing video games, or surfing the web. Electronic media have exposed him to unreasonably high amounts of extreme violence and deviant sexuality. Johnny wears whichever clothes he wants - which usually means that he wears the same clothing day after day, without the clothes having been laundered and without his having had a bath or shower. Other children in the classroom complain that "he smells bad," and the teacher doesn't want Johnny to overhear these complaints, lest his feelings be hurt, and yet she is forced to acknowledge that the children are correct in their assertion.

The teacher wishes more than anything that she could simply tell Johnny's parents that they must give Johnny nutritious food, ensure that he gets enough sleep, and limit the hours he spends with electronic entertainment. Why can't somebody force them to be better parents?

The teacher's sentiments are perfectly understandable. Yet there is a danger lurking in what seems to be an obvious, commonsensical, solution to such problems. As much as we want to intervene in the case of negligent parents, we must respect a core American belief which asserts, coarsely phrased, that the government can't move into a person's private life and tell him how to live it.

A public school is a government institution. It's part of the same system that brings you the IRS, the Post Office, the FBI, and the Army. To preserve freedom in our society, which is the stated goal of our governmental system, a government school can't intervene into private life and restructure families - even if it would be good for them. In the vignette above, Johnny would certainly be well-served if, in fact, someone did force his parents to take better care of him. But if the government is the one doing that forcing, Johnny's gain would be the world's loss. While such intervention might improve his life marginally, society as a whole would suffer, because a precedent would be set which endangers the goal of maximizing freedom.

So if it's best to tell the government that it can't intervene in Johnny's home life, are we condemning him to misery? The situation does, after all, have a material effect on Johnny's quality of life: his education is already greatly impaired. If, in the name of liberty, we don't allow governments to barge into homes and regulate parenting, are we consigning some children to permanent neglect?

No. Because while government intervention into home life constitutes a net loss of freedom, the concept of freedom of association allows private sector organizations to influence private life and private decisions without harming liberty. While government bureaucrats should not show up at Johnny's house and tell his parents how to raise him, other, more organic, structures in society can and should do precisely that: neighborhoods, clubs, chambers of commerce, synagogues, mosques, churches, teams, extended family, etc.

In a case of glaring parental neglect, such organizations have the ability and the obligation to intervene. Because they are not part of the government, their intervention does not defile liberty.

Government must not only refrain from intervening in private life, it must also refrain from obstructing those societal structures which can and should so intervene. Sadly, it is sometimes now the case that not only does the government intervene into private matters when it should not, but it restrains those private associations which should intervene. Often citizens are afraid to intervene in cases of parental neglect or domestic abuse, knowing that any altruistic effort in this direction could in fact be seen as actionable by the government: the benevolent intervener might find himself accused in court.

Because freedom is the preeminent public value in the United States, education fits differently into American society than into many other societies. To say that freedom is the ultimate public value does not mean that it is the highest private value. Indeed, if freedom were the highest private value, the result would be an anarchy filled with selfishness and violence; if freedom were the highest private value, it would actually bring about the demise of political liberty; if freedom were the highest private value, it would lead humanity into that famously grim situation described by Thomas Hobbes.

In society, people are free to choose a highest private value: friendship, altruism, charity, family, faith in God, and other ways of finding meaning in life. But in order for each person to have the freedom to find a highest private value, the highest public value must be freedom. If anything besides freedom becomes the highest public value, then the individual is no longer free to choose altruism or friendship or charity as her or his highest private value.

The contrast between the need to place freedom as the ultimate value in the public sphere and the need to allow the individual to choose some other good as the highest personal value becomes perhaps more intelligible in concrete form. There are many specific instances in which this tension manifests itself - in freedom of speech, in economic freedom, for example.

This principle is also at work in education. There is a dynamic at work which limits the effectiveness and quality of American public education. The desire to improve our government's educational system must be limited if it is not to damage our freedom. The utopian drive which lies, explicitly or implicitly, within many political ideologies would have us able to obtain maximum levels of both public education and personal freedom. In reality, however, both of them cannot be simultaneously maximized.

Four instances of the tension between personal liberty and the maximization of public education can be identified.

First, the principle of personal freedom means allowing the individual to make choices which may conflict with any given set of values - simply put, the freedom to make the right choices is the freedom to make the wrong choices. To the extent that parents and children are private individuals and citizens, optimizing government-run education would require compulsion of some type. This is present already in truancy laws and mandatory attendance until a specified age. We allow students to choose from an offering of certain books or courses or schools; the desire to optimize education would deny this freedom to the individual. Likewise, to maximize the achievement of our public schools, the government would override non-academic choices made by the individual student, inasmuch as they affect the educational process. All of this would conflict with the stated value of freedom in society. The society which values freedom recoils at the specter of government officials making endless educational decisions, decisions affecting students, decisions in which students and parents have no say.

Second, the principle of freedom means respecting the parent's authority over the child, even when the parent makes decisions which do not maximize educational achievement. The government, if it is to respect freedom, must refrain from intervening, even if it means that the parents are free to make decisions which will not maximize educational achievement. As much as the better course might be clear to the public and to common sense, decisions regarding how much sleep a child gets, good nutrition, physical exercise, general transmitted attitudes toward education and learning, etc., lie with the parents, even if the parents fail to make the best choices in these matters. To be sure, society has a duty to intervene in the worst cases of negligence, but even then, it may be society which intervenes and not the government. In any case, there is a distinction between intervening in the worst cases and intervening in cases which are merely suboptimal. An institutionalized respect for liberty shudders at the prospect of government bureaucrats managing a family's private life - determining menus and bedtimes for children, supervising grocery shopping and laundry.

Third, the principle of freedom means that government schools are under the jurisdiction of elected officials. Obtaining a majority or a plurality of votes in a school board election does not guarantee that the people thus elected will, or can, make decisions which maximize achievement. Indeed, it guarantees that optimization will not be achieved. There is no ideal expert who can refine our educational systems to perfection. Yet the utopian desire will seek to find those who even come close, and place them into positions of power, instead of the freely-elected representatives of the people. However proficient a technocrat may be, the cause of freedom is better served by elected representatives than by appointed experts. Free people must not allow the hope of efficiency to persuade them to relinquish the power of their ballots.

Fourth, the principle of freedom means that the vertical separation of powers - city, state, federal - along with the horizontal separation of powers - legislative, executive, judicial - is necessary to ensure that freedom is protected. But this same mechanism which protects liberty also ensures that policies, like educational policies, will never be completely, or even largely, consistent. To maximize public educational achievement would require a bureaucracy which is guided by a unified vision, and which is largely consistent with itself. While this is certainly impossible, the attempt would be dangerous. Entrusting power and control to a government which is not hamstrung by both vertical and horizontal separation of powers is a sure step on the road to tyranny. It is in the citizen's interest, and in the service of freedom, to ensure that the government is partially handicapped. But this also means that public education will lack a unified program which might optimize it. Despite a great desire to improve the performance of the government's schools, it is more important for the sake of freedom that we keep our government weak and fragmented. A strong government might foster the illusion that it can better manage its educational institutions, but will in reality merely reduce personal liberty.

These four tensions reveal why public education cannot be optimized, and why we should not even attempt to optimize it. The effort to maximize achievement in government-run schools cannot lead to the best possible educational outcome, but will certainly result in a net loss of liberty.

Yet society recognizes the importance of education and wishes to maximize it. While efforts to improve education may never yield a utopian purity of achievement, there are promising routes to developing the educational process. While the principle of freedom means that the citizen's life and decisions must be protected from the government at every turn, the tension between the desire for freedom and the desire for refining our educational institutions dissolves when the institutions are neither owned nor operated by the government. Privately-owned schools avoid the four quandaries enumerated above.

Private schools are founded on the principle of free association. Thus the life and choices of the individual are not violated by an institution which she or he has freely joined. It is to be noted that the smaller units of government mimic this aspect of privacy, inasmuch as a city government is more accessible and more responsive than the federal government, and with ease one can leave a city for another, while only with great difficulty can one leave one nation for another. Two routes are thus available for the improvement of education: either privatization, or the complete exile of the federal government from educational matters. In the latter case, state, county, and city governments would be left to the task.

To be sure, smaller local governments have their own weaknesses and flaws, as do privately-owned and privately-operated schools. Any arrangement which removes the federal government from education is good; any arrangement which removes state and city governments as well is even better. A purely private system will not be perfect, and is not a panacea for any set of social ills. But a private system is the best available mechanism for education.

Although the matter at hand is the improvement of education, the funding of education cannot be separated theoretically from its improvement. As matters stand at the beginning of the twenty-first century, greatly reduced funding for education would not be incompatible with simultaneously increasing its quality and increasing teacher salaries, so great is the room for increased efficiency in the use of such funding. Senator Goldwater wrote:

I agree with lobbyists for federal school aid that education is one of the greatest problems of our day. I am afraid, however, that their views and mine regarding the nature of the problem are many miles apart. They tend to see the problem in quantitative terms – not enough schools, not enough teachers, not enough equipment. I think it has to do with quality: how good are the schools we have? Their solution is to spend more money. Mine is to raise standards. Their recourse is to the federal government. Mine is to the local public school board, the private school, the individual citizen – as far away from the federal government as one can possibly go. And I suspect that if we knew which of these two views on education will eventually prevail, we would know also whether Western civilization is due to survive, or will pass away.

Goldwater's way of framing the question is instructive - quantity versus quality. The qualities which one would hope to find in an educational system are those which are a priori unlikely to be found in any governmental undertaking. Even when the government succeeds in shifting its attention from quantity to quality, it is unable to produce, or even properly identify, the desirable qualities.

To put this somewhat differently, I believe that our ability to cope with the great crises that lie ahead will be enhanced in direct ratio as we recapture the lost art of learning, and will diminish in direct ratio as we give responsibility for training our children’s minds to the federal bureaucracy.

In general, the best a government can do is to produce mediocrity. Any effort at increasing quality drains resources from society in a futile attempt to improve something which is, by inevitable necessity, mediocre. This effort will consume not only material resources, but will damage personal liberty in the process, as the demand will invariably arise for more governmental control in order to get everything just right.

Let us put these differences aside for the moment and note four reasons why federal aid to education is objectionable even if we grant that the problem is primarily quantitative.

Goldwater goes on to give his four reasons. First, federal involvement in education is unconstitutional; education is the business of cities, counties, and states, but not of the national government. Second, the need for federal funding in education has never been demonstrated; if more money is needed for schools, money from cities, from counties, and from states is just as effective. Third, federal funding distorts a citizen's perception of the economics of education: federal money is never "free money," but rather it is taken from the taxpayers; it is mistaken to think that having the federal government acquire the educational system from the state or city government will somehow ease the burden on taxpayers. Fourth, federal aid to education inevitably means federal control of education, which can have no good effects but multiple bad effects.

Common sense, and most experienced educators, will inform us that it is not in the interests of a child, or a child's education, that she or he be fed exclusively on potato chips and cola, that she or he spend the majority of her or his time surfing an unsupervised internet, playing unsupervised electronic games, or watching unsupervised television. Yet we shudder to think of government bureaucrats barging into a family home to regulate parental decisions; this would be an unacceptable violation of the personal freedoms on which the United States was founded. Thus the hands of government schools are and should be bound, to preserve liberty; but this binding also means that mediocrity will be the product. If, instead, society's influence, rather than the government's control, is brought to bear on such situations, we may often, if not always, find correctives while at the same time preserving the individual from government intrusion.

Friday, October 4, 2013

A Black Woman Views the Current Political Scene

Although most voters reject "identity politics" or "the politics of identity" - the cynical view held by some candidates that a voter votes the way she or he does because of his or her membership in some demographic niche - it is nonetheless instructive to learn the views of voters who are old or young, male or female, rich or poor - and voters of various races, religions, and ethnicities.

In rejecting the politics of identity, voters are expressing that they vote as they do, not because they are old or young, male or female, but rather because they are human. In opposing identity politics, voters recognize the universals in human nature: all people desire freedom and liberty; all people want to be given an equal chance to succeed or fail; all people want to be relieved of the burden of taxes; all people benefit when governments reduce their national debts.

But demagogues embrace the politics of identity, not because such politics in any way benefit voters, but rather because identity politics benefits the candidates and political parties. These cynical politicians encourage divisions in society, whether in terms of race, gender, or economics. Insincere candidates seek support from African-American voters by encouraging Blacks to see themselves as victims rather than to see themselves as people who are seeking opportunity. Reflecting on the hollow rhetoric of such "community organizers" - who really only want to organize their own financial profits - Deneen Borelli, an African-American activist and spokeswoman, writes:

They are just repeating the old message - victimization: Those old faces aren't fighting for their constituents or the black community any more - they're just causing them more problems. We are in crisis in this country - but it is an economic one. Fight to fix that problem rather than throwing up your hands and casting blame by using the "woe is me" rhetoric.

Individuals like Charles Rangel, Jesse Jackson, Al Sharpton, and Barack Obama are happy to receive financial support and votes from the Black community, but are also happy to ensure that African-Americans remain economically disenfranchised. In fact, such alleged "leaders" need to make sure that Blacks are in the condition of a permanent underclass. Every African-American who makes it out of the lower class and into the middle class is a vote lost. In fact, elected officials like Rangel and media personalities like Jackson and Sharpton need Blacks to both be, and view themselves as, a permanent underclass. In this way, such officials and personalities create a dependency and ensure votes and support for themselves. If more African-Americans were given economic opportunities, they would cease to see themselves as dependent upon organizers and activists, and would enjoy the financial stability of the middle class - this benefit to Blacks would be a terrible loss to those who claim to represent the interests of African-Americans but who actually merely enjoy positions of leadership by keeping Blacks dependent. If a leader has his position because he claims to represent the concerns of the downtrodden, he then has a vested interest in ensuring that the downtrodden remain downtrodden; if the downtrodden rise, he will lose his job.

These race charlatans are fighting for their own personal and professional agenda. Their activities and media forays are out of self-interest. They have an obsessive need to stay in the limelight even when they have little to say. And in many instances, their time in the public spotlights seems to ensure they benefit financially.

An African-American woman like Deneen Borelli sees that she and her community are being harmed by individuals like Charlie Rangel, Jesse Jackson, Al Sharpton, and Barack Obama. These so-called leaders want the votes and support of the Black community, but don't care about the Blacks, except to ensure that some Blacks remain poor and ignorant. Such alleged leaders depend on the resentment created by poverty, and on the votes created by ignorance, to keep themselves in power. A Black woman like Deneen Borelli knows that the programs and slogans of such leaders don't help her and her community.

Career politician Rangel and social activists like Jackson and Sharpton want to preserve their special status and maintain their public persona. It is not clear they have anything meaningful to say, but it certainly helps with their personal bottom lines. Their financial self-interests seem to trump the needs of members of the black community. Why aren't they benefitting? Why aren't they seeing the same gains in their lives? In fact, it is my opinion that these guys are repressing the very group they are supposed to be helping by promoting big government solutions. Their message no longer contains inspiration. In fact, their messages no longer contain value. They adhere to the status quo on issues like school choice and are reliant on the notion that the government should just throw money at the old and failing way of doing things.

Many people in the Black community, like Deneen Borelli, feel that self-appointed leaders like Sharpton and Jackson, or elected officials like Rangel and Obama, have failed to help African-Americans. The Black community is disillusioned - the expected assistance did not materialize. Perhaps this is why fewer African-Americans voted for Obama in 2012 than in 2008. This disappointment is real, not imagined: the hoped-for support did not materialize. Rather, this leadership spent its effort enriching itself while allowing the poorest segment of Americans to live without economic opportunity.

Specifically, we can see the misuse of influence and power in the example of Al Sharpton. In the 1980s, Sharpton was caught on videotape by the FBI arranging a cocaine deal. Over the next decade, Sharpton used his connections in the media, and his leverage among government officials, to ensure both that the incident would not become public knowledge and that he would not face legal consequences for the crime. The video was finally discovered in 2002, nineteen years later; when it became public, Sharpton again used his clout to see to it both that he faced no prosecution for the crime and that the media gave the least possible amount of coverage to the matter. Instead of spending his time and effort to help Blacks gain access to economic opportunities, Sharpton had spent two decades using his resources, and the resources of the African-American community, to cover up criminal activity.

Likewise, Jesse Jackson used time and money, both donated to raise inner-city Black communities out of poverty, to orchestrate a two-year-long coverup of his mistress and the child he sired with her. Instead of serving African-Americans, Jackson was serving himself - with funds donated to help create opportunities for urban Blacks. Like Sharpton, Jackson used his connections in the media, in the Democrat Party, and in the government.

Millions of Black voters like Deneen Borelli are frustrated. They elected people like Obama and Rangel and supported activists like Sharpton and Jackson, men who promised hope and change, but who delivered no benefit to the African-American community. In fact, as the U.S. economy spiraled downward after 2009, all demographic groups in America experienced lower real incomes, but the African-American community saw its income fall further and faster than other groups. After 2009, all demographic segments saw their net worth decrease, but for Blacks, the decrease was greater and happened faster. It cannot be surprising that Black voters are angry with these self-proclaimed representatives of the African-American community.

Wednesday, September 25, 2013

Another Variable

Demographers and statisticians have produced seemingly endless numbers about poverty and economics in the United States. Studies reveal correlations, but proving causation is, of course, a much more difficult task. To further complicate the matter, politicians tend to use statistics in a most unscientific way, and the additional matter of race is added to the mix, because race is both a correlate to economic variables and a politically explosive topic.

Although adding other factors to the discussion may seem to complicate beyond reason an already-complex situation, it also may be that additional variables can bring clarity. To this end, some researchers are looking at population growth in relation to poverty and race.

The population growth variable may explain why some anti-poverty programs failed to yield significant benefits - or even worse, seemed to actually increase poverty. Although students may view FDR's New Deal as the first big wave of such social legislation, other measures pre-dated Roosevelt's 1933 initiatives. The largest mass of bureaucratic systems aimed at reducing poverty arose starting in the 1960's, under LBJ's "Great Society" slogan.

The middle class was hit with a crushing onslaught of taxation as new programs mixed with significant increases in already-existing programs: Medicare, Medicaid, Food Stamps, etc. Yet the 1960's and the 1970's saw economic stagnation and increased poverty, often in inner-city areas - precisely those areas targeted by the programs. Why did these programs fail to reduce poverty? Why did these programs in some cases correlate to an increase in poverty?

An examination of population growth trends may help to answer these questions. Social engineers and the bureaucrats who organized these programs labored under a series of assumptions which were fashionable in the 1960's: that planet earth was nearing a population crisis, that population control was necessary for the macroeconomy, and that a reduced birth rate would help individual micro-economies.

Since then, it has become clear that macroeconomies function best when population is growing steadily and moderately, that the planet has a carrying capacity far beyond what the demagogues of the 1960's thought, and that individuals who become parents tend to experience economic stability relative to those with few or no children. When population growth is stable, not erratic, at a level slightly above replacement rate - perhaps 2.3 to 2.6 children per woman - economies are more likely to experience sustained expansion. With sustainable, responsible, and renewable resource management, the carrying capacity of Earth is well above ten billion. Childlessness is a variable which correlates strongly with poverty, despite the stereotypes and cliches surrounding the image of a "welfare mother" - there are racist overtones in the phrase. Childbearing correlates not only with economic well-being, but with general satisfaction in life, and better physical health.

But in LBJ's "Great Society" programs, the assumptions went along these lines: reducing the birthrate would increase prosperity. To that end, government programs fostered contraception, birth control, and sterilization of those in poverty - again, with racist overtones. Thus, in an attempt to reduce poverty, social programs were discouraging one of the variables which might lift people out of poverty: childbearing. This points to a solution to the mystery of why programs designed to reduce poverty actually increased poverty. Sheryl James writes:

Few social programs in U.S. history loom larger than President Lyndon B. Johnson’s War on Poverty. Launched in the social-domestic cocktail mix known as the 1960s, the War on Poverty introduced programs such as Medicaid and Medicare in an effort to boost opportunity by reducing poverty.

We have no reason to doubt LBJ's sincerity: he certainly wanted to reduce poverty. Whether out of pure altruism, or whether because he hoped to gain more voters for himself and his political party, his intentions, and the intentions of the Congress which passed the legislation in question, were certainly bent on reducing poverty. But unintended consequences are ubiquitous in history, and the "war on poverty" was no exception. Given the bureaucracy's eagerness to promote contraception, birth control, and sterilization among poor - and Black - women,

from 1964 to 1973, among the populations the federal funding served, the overall birth rate dropped by just under two percent — but a whopping 19 to 30 percent among poor women.

During those years, not only did inner-city poverty increase, it also became more intractable. The exact connection between childbearing and individual economic stability is murky: various mechanisms emerge as candidates. Perhaps those with children are more likely to seek and hold steady employment; perhaps those with children enjoy more networking within their local community; perhaps those with children are supported better during the aging process. More clear on a macroeconomic level is that children create both a steady demand for consumer goods and a steady supply of new wage-earners entering the workforce.

University of Michigan Professor Martha Bailey exposes and articulates some of the assumptions with which the social engineers of the 1960's were working:

The architects of the War on Poverty thought that family planning programs were integral to reducing poverty, and would promote opportunities among poor women and their children.

Sadly, those assumptions proved to be false. Economically, programs based on those assumptions did not alleviate poverty; in fact, inner-city economic devastation worsened in the decades after the appearance and implementation of such "war on poverty" programs. Racially, these programs amounted to a war on the African-American family. By all metrics, conditions worsened: divorce, out of wedlock births, abandonment, failure to pay child support, etc. The non-quantifiable variables worsened as well: the societal cost of living in such circumstances exacted a toll on mental and emotional well-being.

Ironically, programs aimed at reducing the birthrate among Blacks, in addition to being blatantly racist, had the unintended effect of increasing illegitimacy. The out-of-wedlock birthrate actually rose. Deneen Borelli, an African-American political activist, writes:

Welfare in the United States began in the 1930s during the Great Depression. But in the sixties, following Great Society legislation, Americans who weren't elderly or disabled could collect a check from the government on which to live. Single mothers became the biggest users of the system. In general, it became so ingrained in the fabric of the nation that many view welfare as a right. In other words, there are people that think we have the right to life, liberty, the pursuit of happiness, and hard cash from the political establishment.

It is necessary to emphasize the word "unintended" in regards to the outcome of such social legislation. There is no doubt that some voters and some in Congress had genuine and sincere concerns for those in poverty. But unanticipated effects arise from their causes with no regard for altruistic intentions.

Many like-minded thinkers are clear: Welfare has negative, unintended consequences for the black community. A 1984 book called Losing Ground by Charles Murray demonstrates that welfare has perverse unintended consequences for blacks. He argues that the system was created by elites with good intentions. They believed that the black population was discriminated against and that government aid would help to redress the wrongs. He said that the rationale behind the plan was that "the system" - not the individuals - was at fault. People could not get ahead in life because of the way in which the country had operated in the past. Murray's solution to the situation was, however, deemed impossible to execute: He wanted to abolish welfare altogether. Critics chastised him.

Instead, federal and state governments have hoped that various attempts at reforming the welfare system would yield a new type of program which would give the hoped-for benefits without the negative and unintended consequences. So far, this has not happened. Despite experimentation with differing configurations of entitlement programs, inner-city poverty seems intractable, and the social misery experienced by the African-American family continues.

In a four-year period, from mid-2009 to mid-2013, average real income, average net worth, and the general standard of living in the United States fell for all groups, but for African-Americans living in inner-city neighborhoods, they fell further and faster than for other groups. After fifty years of "war on poverty," many Americans, and specifically many Black Americans, still live in poverty.

Tuesday, August 6, 2013

An African-American Woman's Voice in Modern American History

With the election of Barack Obama as America's first biracial president, the long history of race in the United States entered yet another new chapter. The many different phases of Black history, and of the civil rights struggle, have differed subtly yet importantly from each other. Each such phase requires a rethinking of the situation - the challenges and the tactics to meet them. Deneen Borelli, an African-American author, reflects on the latest changes:

Obama's election should have been a wakeup call to the traditional black leaders that their message was outdated. They should have taken a step back and reassessed their message of victimization and blame. The message needs to be recast. It should have either stopped Jesse Jackson and his friends in their tracks or perhaps forced them to strive for new relevance. So I have to ask myself - What are they thinking? Why aren't black leaders listening?

With a successful career in managerial marketing, Deneen Borelli wants to see opportunities for African-Americans. After volunteering for the Congress on Racial Equality (CORE), and working with various media outlets, she came to see that there were two sets of would-be leaders in America's Black community. One set genuinely works to remove those economic obstacles which hinder African-Americans as they try to enter the middle class, and works to create a fair chance for each citizen. But the other set has no desire to help Blacks make it into the middle class; this other set of self-appointed leaders seeks to enrich only itself, and in order to keep Blacks dependent on such leadership, this set seeks to keep Blacks in the status of economic and political victims. These cynical leaders need a permanent set of victims; these cynical leaders claim to represent victims, and if there were no victims, these leaders would have no jobs. Speaking of such leaders, Borelli writes:

Here's the problem: They need to look at modern society in the twenty-first century and initiate new ways to address its problems. Surging welfare dependency in the black community, alcoholism, children continually being born into single-parent homes - these things plague this nation. And it is only getting worse because the numbers keep rising.

Understanding that the false leaders - Jesse Jackson, Al Sharpton, Charles Rangel, etc. - have betrayed the Black community they claim to represent, and understanding that these false leaders in fact work to ensure that the Black community does not make large-scale progress into the middle class, Borelli looks to a better set of leaders. She looks to leaders who have a genuine interest in creating economic opportunity for African-Americans. The false leaders actually profit only when the Black community suffers; the false leaders cannot represent a community which is striving upward, but only a community which is suffering in place under oppression - and so these false leaders work to ensure that this community remains economically handicapped. Of these false leaders, Borelli writes:

Their moniker and their reason for fighting is supposedly "justice for everybody" but the only ones benefiting are the guys making the noise. They are all benefiting personally. It's upsetting that they've been able to get away with this. People despise how the old guard is doing business. It's simply wrong. There's a cost to all of this. By not spending political clout on the new fight, these guys are not espousing the benefits of liberty. They are not enlightening people, nor are they advancing them. Rather they are missing the message and trying to keep the rest of us in a time warp. Here's a suggestion: Why not use your power to encourage school choice to stop soaring dropout rates?

The forward-thinking leaders are those who find opportunities and explain them to the Black community; the forward-thinking leaders are those who find obstacles which prevent Blacks from moving into the middle class and work to remove those obstacles. Forward-thinking leaders can lead by words, by actions, or by example. They are working to bring Blacks into the American economy; they are not working to damage the American economy; they are not parasites seeking to leech off the economy. But the false leaders are trying to take African-Americans down the path to permanent poverty: down the path to permanent dependency. Borelli describes the damage that these false leaders seek to inflict on the Black community:

The old way isn't working so let's pursue a new way. The Democrats are beholden to special interest groups - the feminists, unions, trial lawyers, and environmentalists. Here's an example: Jackson joins the unions' fight so he can't advocate for school choice. He'd break the alliance. But with this close-minded approach everybody suffers and windows of opportunity to try to advance people are lost.

In order for the new leaders, the forward-thinking leaders, to find those opportunities and alert the Black community to those opportunities, a change in mindset is in order: the false leaders need to get out of the way. There are excellent examples of such new leadership: J.C. Watts, Clarence Thomas, Herman Cain, Condoleezza Rice, Colin Powell, Allen West, Armstrong Williams, Thomas Sowell, Alveda King, and many others. But this new, positive leadership can't get its message out, as long as self-serving demagogues are working to keep Blacks in poverty and in dependency. Borelli writes:

Tragically, the numbers are getting worse for blacks trapped in inner cities. Why aren't black kids improving and growing at the same rate as their peers? My opinion: It's all in the message from the career black politicians who promote big government solutions that result in stagnation and government dependence. They are playing the blame game and using the race card as their ace in the hole to avoid accountability. Hey, blame your problems on race and don't take responsibility for your life, even when you mess it up. That's easier than providing solutions. And let's face it: it keeps these guys in business.

Deneen Borelli, an articulate Black woman, formulates a clear path forward and upward for the African-American community: do not make victimization and oppression your core identity. Instead, let your core identity be that of people who find opportunities, remove obstacles to opportunities, and pursue opportunities.

Monday, August 5, 2013

A Black Woman's Voice in Modern American History

At the complex intersection of race and politics - a complicated intersection in any nation, but perhaps more so in the United States than in most nations - nuances abound, and readers must be alert for the most subtle of textual distinctions. To that mix, add gender. A twenty-first century Black woman in the United States may well grow tired of the second-string leadership of what calls itself "the civil rights movement" or the "black movement" - the replacements for the original leaders of the SCLC, like Martin Luther King, Jr.

Deneen Borelli is a Black woman with the courage to speak - with the courage to demand intelligent leadership, instead of the substandard and self-serving individuals who are more concerned with lining their own pockets than with finding substantive help for African-Americans. Individuals like Jesse Jackson and Al Sharpton have grown extremely wealthy, claiming to advocate for Black citizens; but in fact those Black citizens have seen their plight worsen, not improve, in recent years - e.g., from 2009 to 2013.

As unemployment among Blacks increased, and as their annual income and net wealth decreased in the years after 2009, independent thinkers like Deneen Borelli wanted to see African-Americans make economic progress - like the progress they'd made in the previous decade. Rebuking the corrupt leadership which claimed to speak on behalf of Blacks but which actually merely exploited its leadership positions for self-enrichment, she writes:

Your time has passed and your message is dated. These days you are doing more to hurt the black community than you are helping it. And in the process, you are dismantling the greatness of the American nation. You aren't just hurting blacks with your backward tactics, but the country itself. Your archaic initiatives and your self-serving agendas need to end. It's time to fix the United States, focus on the economy, and put your outdated 1960s agenda to bed - the civil rights initiatives that began over fifty years ago just don't apply to today's world. Unless by choice, we don't sit at the back of the bus anymore. Let me be clear - we appreciated what you did, but now your old guard message needs to be modernized because hanging on to it only benefits you and hurts everyone else.

America's Black community needs economic freedom and opportunity. It does not need anger in the streets. About whom is Borelli writing?

Of course, I am talking about a long list of black leaders who understand and conceptualize today's problems by looking backward rather than forward. I am referring to Jesse Jackson, Al Sharpton, and New York's censured Democratic Representative Charles Rangel. They rose to prominence years ago by telling us that the poverty that plagued blacks was someone else's fault. Members of the black community who didn't have jobs, housing, or money to feed themselves could feel better about themselves knowing they were victims rather than failures.

While it may have been true at one time in the distant past, it is certainly no longer true to tell the vast majority of American Blacks that they are helpless victims. Beyond being untrue, it is dangerous - it is dangerous to teach people to identify themselves primarily, perhaps exclusively, as victims. The original goal of the civil rights movement was to empower African-Americans and make them independent. But leaders like Al Sharpton and Charlie Rangel are teaching Blacks that they are powerless and should be dependent. Borelli explains:

But these public figures who are leading the black population down that path need to seriously rethink their approach to civil rights issues and update their commentary. Their self-serving agendas for power and control have been obtained by playing the race card and in some cases, by declaring blacks are victims in need of special treatment. In some instances, they've even turned their victimization message into a business - claiming they are going after corporations for their communities, then oddly, benefitting personally and professionally. In some cases, investigations of black politicians are racially motivated.

African-Americans must ask themselves which leaders truly represent them, and which leaders merely exploit them. Sharpton and Rangel, it has become clear, do not act in ways which measurably or detectably benefit the Black community; Sharpton, Rangel, and a host of other similar individuals act only to gain wealth and power for themselves. Beyond not assisting the Black community, these corrupt and self-appointed leaders enrich themselves by ensuring that American Blacks do not, as a community, make economic progress. The worst possible thing for these pseudo-leaders would be an emerging Black middle class. If Blacks achieved economic success, they'd have no need for the demagogues.

It's time to fight the new fight, not the old one. It's time to drop the old rhetoric and update the cause. It's time to take some responsibility for our own actions. Let me be clear. If we want to move forward, the shackles of yesteryear's rhetoric needs to be broken down and recast. Black Americans are a great people with great potential. Sometimes, everyone needs a reminder: that individuals control our own destiny rather than playing the blame game to justify personal failures.

A new generation of Black leadership is rising: J.C. Watts, Alan Keyes, Condoleezza Rice, Clarence Thomas, Colin Powell, and others lead with words or by example. African-Americans can rise by engaging in the free enterprise system. The success of individual African-Americans is both a barometer and a pattern to follow.

This country elected a black president. That alone should have put the constant rants of discrimination and the overwhelming demands for affirmative action to rest. No quota system here. Obama got elected because he worked hard and promoted his policies in such a way as to garner the most votes. This fixation on victimization - the decades-old vision that the plights of the black community are someone else's fault - needs to go.

Rather than being made dependent on government programs - from affirmative action to hiring quotas to welfare to food stamps - Black Americans will be free to rise when they are free to engage in a free market. Black leaders who teach them how to use the economy - not how to live off the economy - will be the leaders who bring the Black community into a prosperous middle-class existence.

Monday, July 8, 2013

How Many Names Can One Spy Have?

Starting shortly after the Russian Revolution of 1917, and accelerating during the 1930's, 1940's, and 1950's, the Soviet Union maintained an extensive network of spies in the United States. They had various purposes: to collect information, to disseminate disinformation, and to work from inside the United States government to influence policy decisions. An obvious part of such covert operations is manufacturing identities, at which the KGB and other Soviet intelligence agencies worked strenuously.

One agent, whose probable name was Vilyam Genrikhovich Fisher, is known most commonly as Rudolf Ivanovich Abel. But he also used the name Andrew Kayotis and the name Emil Robert Goldfus. Although this may seem like a lot of names for a relatively short career, it is not uncommon for a spy to have a number of aliases.

Walter Pincus, writing in the Washington Post, alludes

to the case involving Col. Rudolph Abel, a Soviet KGB agent, who lived in New York City under an assumed name and purported to be a commercial photographer. Abel was tried and convicted of spying in 1957, and in 1962 he was exchanged for Francis Gary Powers, the American U-2 pilot who had been shot down over the Soviet Union and was in a Russian jail.

Before his capture, Rudolf Abel visited Bear Mountain Park with his assistant Reino Häyhänen in 1955. The two of them buried $5,000 in cash, intended for the wife of Morton Sobell, another Soviet spy who had been caught and was sitting in an American jail. This was apparently Moscow's attempt to take care of a spy's wife, since Morton Sobell had been in prison since 1951. Later, Reino Häyhänen went back and retrieved the money for himself, leaving Sobell's wife with nothing. Before his capture, Sobell had been working on getting military secrets about Fort Monmouth. Describing the location, M. Stanton Evans writes:

The installation called Fort Monmouth was in fact a sprawling network of labs spread out among several New Jersey towns and other Northeast locations, doing research on confidential military projects. Radar, missile defenses, antiaircraft systems, and other devices involving advanced electronics were all on the agenda. There were four main research labs.

Monmouth would clearly be a tempting target for any Soviet intelligence agency. Morton Sobell was not the only Russian spy looking to get secrets out of Fort Monmouth:

The installation had been a scene of action in the 1940s for Julius Rosenberg, then a Signal Corps inspector, and to a lesser extent for his convicted coconspirator, Morton Sobell, and two other accused members of the spy ring, Joel Barr and Al Sarant.

Joel Barr and Al Sarant would later move from being "accused" to being "confirmed," as the FBI investigated further, and as the "Venona" documents were made available. Sobell and Rosenberg, in turn, had worked with another Soviet agent, Aaron Coleman. Coleman had learned to exploit three weaknesses in Fort Monmouth's security:

One was that the Communist Party had established a special unit in the vicinity of the research setup, called the Shore Club, which included former Monmouth employees among its members and which, according to extensive testimony, had as its object ferreting information out of Monmouth. Another was that numerous security suspects were indeed ensconced among Monmouth's suppliers, most notably the Federal Telecommunications Lab, prime target of the Sheehan inquest. Yet another was the seemingly laid-back attitude toward these matters in the higher reaches of the Army.

As a sense of alarm grew, G-2, the army's intelligence unit, began to investigate.

By far the most comprehensive overview of the security scene at Monmouth would be provided - after some initial hesitation - by Captain Benjamin Sheehan, a G-2 counterintelligence specialist from First Army headquarters in New York.

Sheehan had investigated Fort Monmouth in 1951, so his firsthand knowledge of security risks was up-to-date. Sheehan's work led to the arrest of Sobell, and Sheehan confirmed that there were weaknesses in Monmouth's security.

A poster boy for all these troubles was one Aaron Coleman, who held an important job at Monmouth dealing with radar defenses. Coleman had been a schoolmate of Julius Rosenberg and Morton Sobell at the College of the City of New York, and in contact with Sobell up through the latter 1940s. He also admitted having attended a Young Communist League meeting with Rosenberg when they were students at City College. In this connection, ex-Communist Nathan Sussman, a CCNY alum, would testify that he, Coleman, Rosenberg, Sobell, Al Sarant, and Joel Barr had all been members of the YCL together. (Coleman would deny this, as he would deny Rosenberg's testimony at his espionage trial that Rosenberg and Coleman had been in contact at Fort Monmouth.)

Although tangled and complex, these events - merely a sample of many more - serve to show the growing Soviet espionage network in the United States from the 1930's to the 1950's, and even into later decades.

Thursday, July 4, 2013

Defense, Not Revenge

In late 2001, the United States faced an important question: how would it respond, not only to the Islamic terrorist attacks of 9/11, but to the sudden awareness of a worldwide terror network - a network immutably determined to kill Americans? The question of how America would respond to this grave threat would determine much about national policy and even about daily life for the next several decades.

It was and is important to understand that such terrorist networks, al-Qaida being only one of many, while ever adapting and changing their tactics, are incorrigible in their ideology. They are immovably fixed on the goal of killing Americans. Because they have such an extreme psychology, the civilized world cannot interact with them using the methods of negotiation and diplomacy. There are no conceivable actions which could be taken, or words which could be uttered or written, by any government, individual, or society which would cause such groups to change their primary behavior, which is murder.

The ways in which nations or cultures choose to respond to terrorism will both reflect and impact the deeper core values of those nations or cultures. For this reason, the United States should not react with a motive of revenge. Revenge not only clouds strategic and tactical thought, but it infects the soul. Vengeance is backward-looking. Instead, the primary goal must be to protect citizens from future attacks. Secretary of Defense Donald Rumsfeld writes:

A key element of the administration's policy was that the primary purpose of America's reaction to 9/11 should be the prevention of attacks and the defense of the American people, not punishment or retaliation. The only way to protect ourselves is to go after the terrorists wherever they may be. This was a more ambitious goal than the approaches previous presidents had set. It reflected Bush's view, which I shared, that 9/11 was a seminal event, not simply another typical terrorist outrage to which the world had become accustomed. The 9/11 attack showed that our enemies wanted to cause as much harm as possible to the United States - to terrorize our population and to alter the behavior of the American people. No one in the administration, as far as I know, doubted that the men who destroyed the World Trade Center and hit the Pentagon would have gladly killed ten or a hundred times the number they killed on 9/11. They were not constrained by compunction, only by the means to escalate their carnage. This meant that their potential acquisition of weapons of mass destruction - biological, chemical, or nuclear - represented a major strategic danger.

Specifically, al-Qaida had set up a workshop for the manufacture of biological and chemical weapons in the town of Khurmal. The facility, operated by an al-Qaida affiliate known as Ansar al-Islam, was documented to be producing ricin, cyanide, potassium chloride, and possibly other chemical weapons. Awareness of such operations was part of the heightened alertness in the years after 9/11. The facility in Khurmal was one more piece of data which the world was incorporating into its concept of who and what Islamic terror groups are.

Tuesday, July 2, 2013

Learning to Prevent Terror

Reacting to terror is a sign of an unprepared and unthinking government; responding to terror is somewhat better. Preventing terror is the proper focus for a government. The United States moved through these three phases in the aftermath of the terrorist attacks on September 11, 2001. Realizing that there was a coordinated and funded network of individuals and groups whose sole aim was to kill Americans was the first step.

The name Osama bin Laden would soon be common in the news media. This Saudi millionaire issued a fatwa - an Islamic verdict - stating that

the ruling to kill the Americans and their allies - civilians and military - is an individual duty for every Muslim who can do it in any country in which it is possible to do it.

The public learned that the group operated by Osama bin Laden, al-Qaida, was one of a long list of terror organizations. Ending the terrorist threat would not simply mean dismantling al-Qaida and getting rid of Osama bin Laden. An entire network of terrorist groups would have to be defunded and destabilized; the safe havens which had sheltered parts of this network would have to be made inhospitable to it.

Most of all, the nations of the world would have to understand that these terrorists were incorrigible: their one and only objective was to kill Americans. With them, there could be no negotiating, no deterrence, no compromise, no diplomacy, no appeasement, and no tradeoffs. As long as the network of terrorist groups existed, and as long as its members lived, they would be working diligently to kill. Innocent civilians would be safe only when the network and its members were eliminated. This realization would be possible only after the immediate shock of the attacks wore off. Secretary of Defense Donald Rumsfeld writes:

America awoke the next day a nation at war. Above pictures of the burning World Trade Center, the Washington Times had a one-word front-page headline that read, in large, bold, capital letters: "INFAMY." Across the United States, Americans expressed anger and sadness. They also voiced fear of further attacks. Many wondered if they were safe, how their lives might have to change, whether their family members or friends were in danger. Major landmarks considered likely targets were watched with anxiety. Each rumor of another attack set people on edge. Some feared for family members in the military. The financial world was in shock. The stock market suffered one of its biggest drops in history when it reopened six days after 9/11. Hundreds of billions of dollars - property damage, travel revenue, insurance claims, stock market capital - all lost in a single day because nineteen men with a fanatical willingness to die boarded four commercial airliners wielding box cutters.

Understanding the nature of these fanatics was and is a central historical task. Vocabulary is helpful: there is a difference between 'Islam' and 'Islamism,' and a difference between 'Islamic' and 'Islamist'; these differences are crucial.

Islam is a religion which, like every other religion - Buddhism, Hinduism, Judaism, Christianity, etc. - people have a constitutional right to practice. Islamism is, by contrast, not merely the religion of Islam, but rather the most orthodox interpretation of that religion, adhering carefully to the Qur'an (Koran) and the other sacred texts of that religion. Islamism is essentially, intrinsically, and inherently violent. Civilized people in general, and the United States government in the wake of 9/11 in particular, have no quarrel with Islam. Islamism, on the other hand, constitutes a continual danger to free and peaceful civilians everywhere.

The word 'Islamic' refers to the religion, to the culture, and to the practitioners of that religion. On the other hand, the word 'Islamist' refers to the propaganda, to the attacks, and to those who carry them out in the name of orthodox understandings of the prophet Muhammad. As leaders of all western governments have repeatedly stated, the civilized nations of the world seek peaceful relations with Islamic cultures; but from Islamists, conversely, come only violence and terror. Don Rumsfeld writes:

However, I became increasingly uncomfortable with labeling the campaign against Islamist extremists a "war on terrorism" or a "war on terror." To me, the word "war" focusses people's attention on military action, overemphasizing, in my view, the role of the armed forces. Intelligence, law enforcement, public diplomacy, the private sector, finance, and other instruments of national power were all critically important - not just the military. Fighting the extremists ideologically, I believed, would be a crucial element of our country's campaign against them. The word "war" left the impression that there would be combat waged with bullets and artillery and then a clean end to the conflict with a surrender - a winner and a loser, and closure - such as the signing ceremony on the battleship USS Missouri to end World War II. It also led many to believe that the conflict could be won by bullets alone. I knew that would not be the case.

Voices in government and in the media repeated our friendship with, and respect for, peaceful and moderate Muslims - typically those who lived in American suburbia: middle class, middle-aged, mid-western, educated professionals. Such people were not to be seen as a threat. Rather, it was the radicalized version of Islam which was both seed and fertile ground for terror. The distinction between dangerous Islamists and peaceful and moderate Muslims - nominal Muslims, not bound slavishly to the literal texts of the past - was and is a central distinction to understanding the terrorism which threatened and threatens not only the United States, but much of the free world. Rumsfeld continues:

From the beginning, members of the administration worked gingerly around the obvious truth that our main enemies were Islamic extremists. I didn't think we could fight the crucial ideological aspect of the war if we were too wedded to political correctness to acknowledge the facts honestly. While we certainly were not at war against Islam, we did intend to fight and defeat those distorting their religious beliefs - their Islamic religious beliefs - to murder innocent people. I thought that the best term was Islamist extremists, which made clear we were not including all Muslims. Islamism is not a religion but a totalitarian political ideology that seeks the destruction of all liberal democratic governments, of our individual rights, and of Western civilization. The ideology not only excuses but commands violence against the United States, our allies, and other free people. It exalts death and martyrdom. And it is rooted in a radical, minority interpretation of Islam.

For more than a decade, the United States has wrestled with the subtle and nuanced situations in which these distinctions must be applied. Learning to identify those who are truly people of good will, and learning to identify those who wish only to kill, is not always easy. But learning to make such distinctions is necessary for the survival of civilization, and necessary for the defense of the peculiarly western notion that human life is innately valuable and precious.

Friday, June 21, 2013

The Multiple Roots of Our Misery

As Director of the Office of Management and Budget (OMB) from January 1981 to August 1985, David Stockman learned first-hand the workings of the federal government at the highest levels. Educated at Harvard, his firm belief was that free market capitalism will create more prosperity for the population than “crony capitalism” - the latter being a system in which markets are nudged by the government into various patterns, rather than allowing patterns to emerge organically as free individuals make decisions about buying and selling.

In addition to favoring free markets over regulated markets, Stockman also understood that in order to get the full benefit of tax cuts, such cuts must be accompanied by roughly corresponding cuts in the federal budget. When the Reagan administration opted to take tax cuts despite congressional unwillingness to make any spending cuts, Stockman declared that the “Reagan Revolution” had failed. Only when taxes and federal spending are simultaneously reduced can a country hope to make progress against both debt and deficit. Stockman saw that clear economic prescriptions were corrupted by the political process. Disillusioned, he resigned from the OMB and never returned to politics.

From his private-sector perch, he continues to offer explanations about economic events, illuminated always by the distinction between free markets on the one side, and crony capitalism on the other side.

It requires a great deal of self-discipline for a government to oversee a truly free market: the temptation to intervene is omnipresent. While it may seem catastrophic for one or more large enterprises to fail, and seem negligent for a government to stand back and allow large businesses to go bankrupt, such events are necessary, and in the long run beneficial, to the national economy.

The specter of factories closing, workers being laid off, and large unemployment numbers appearing in reports can instill fear in the stoutest hearts. But oftentimes, enduring such short-term pain paves the way for long-term gain. Enduring a few months of unemployment often leads workers to new jobs at even higher wages, if they are working in a truly free market.

Sadly, most elected political leaders - Democrat or Republican, liberal or conservative - do not have the stomach to stand back and simply let major industries collapse. Although this crash would pave the way for an economic boom, the temptation is to intervene. Stockman writes:

By the time of the September 2008 crisis, however, these long-standing rules of free market capitalism had undergone fateful erosion: traditional rules of market discipline had been steadily superseded by the doctrine of Too Big To Fail (TBTF). The latter arose, in turn, from the notion that the threat of “systemic risk” and a cascading contagion of losses from the failure of any big Wall Street institution would be so calamitous that it warranted an exemption from free market discipline.

While a political leader can sound very confident as he labels this or that concern as “too big to fail,” there is in fact no theoretical construct which clearly defines such a category of businesses. In fact, some theoretical models of free markets predict that every firm will eventually fail, and that such a failure is not only a necessary part of the business cycle, but it is a beneficial part of the cycle - such failures create the next round of opportunities.

But there was no proof of this novel doctrine whatsoever. It implied that capitalism was actually a self-destroying doomsday machine which would first foster giant institutions with wide-ranging linkages, but would then become vulnerable to catastrophe owing to the one thing that happens to every enterprise on the free market - they eventually fail.

Even if one were to grant, for argument’s sake, the TBTF hypothesis, then one would expect, as a logical consequence, that governments would simply regulate the economy so that no corporation ever grew so large that it was TBTF. While wrong-headed, that would at least be internally consistent from a theoretical point of view. But instead, the nation’s central bank chose to simply tinker with the market place.

In fact, if TBTF implied an eventual catastrophe for the system, there was an obvious solution: a “safe” size limit for banks needed to be determined, and then followed by a 1930s-style Glass-Steagall event in which banking institutions exceeding the limit would be required to be broken up or to make conforming divestitures. Yet while the TBTF debate had gone on for the better part of two decades, this obvious “too big to exist” solution was never seriously put on the table, and for a decisive reason: the nation’s central bank during the Greenspan era had become the sponsor and patron of the TBTF doctrine.

President Ronald Reagan had appointed both David Stockman to the OMB and Alan Greenspan as Chairman of the Federal Reserve. While Stockman remained true to his economic training and worked for truly unregulated markets and for spending reductions to match tax cuts, until he resigned after seeing “the triumph of politics” over rational economics, Alan Greenspan, on the other hand, seemed to shed his free market ideology upon taking office. Whether Greenspan was truly a die-hard laissez-faire advocate, or whether Reagan merely mistakenly thought he was one, is open to debate. Reagan thought that, in appointing Stockman and Greenspan, he was appointing two ideological soulmates; that hope was quickly shattered, as was the hope that Congress would see the wisdom in cutting spending as it cut taxes. When Congress refused to cut spending, Reagan took the tax cuts as a political compromise. In politics, one can take “half a deal” as a compromise; but in economics, one cannot take half an equation and get half the results.

This was an astonishing development because it meant that Alan Greenspan, former Ayn Rand disciple and advocate of pure free market capitalism, had gone native upon ascending to the second most powerful job in Washington. In fact, within five months of Greenspan’s appointment by Ronald Reagan, who had mistakenly thought Greenspan was a hard-money gold standard advocate, the Fed panicked after the stock market crash in October 1987 and flooded Wall Street with money.

Abandoning the basis of the free market, Greenspan saw his objective as the stabilization and maintenance of certain market levels. This was one of many steps which led the nation toward economic disaster.

For the first time in its history, therefore, the Fed embraced the level of the S&P 500 as an objective of monetary policy. Worse still, as the massive Greenspan stock market bubble gathered force during the 1990s it had gone even further, embracing the dangerous notion that the central bank could spur economic growth through the “wealth effect” of rising stock prices.

Greenspan found justification in the writings of Milton Friedman. Friedman, a Nobel Prize winner in economics, while propounding an orthodox version of the free market, introduced another error. Friedman endorsed the notion of floating exchange rates for currencies, even after President Richard Nixon severed the final connection between the US dollar and the price of gold. Friedman’s vision of floating exchange rates paved the way, perhaps unwittingly, for an entire industry of currency speculation. The worst effects of that industry would take decades to emerge after Nixon’s disastrous 1971 decision.

It thus happened that Leo Melamed, a small-time pork-belly (i.e., bacon) trader who kept his modest office near the Chicago Mercantile Exchange trading floor stocked with generous supplies of Tums and Camels, found his opening and hired Professor Friedman. Even as several dozen traders at the Merc labored in obscurity to ping-pong a thousand or so futures contracts per day covering eggs, onions, shrimp, cattle and pork bellies, Melamed was busy plotting the launch of new futures contracts in the major currencies. In so doing, he inadvertently demonstrated how radically unprepared the financial world had been for the Friedmanite coup at Camp David.

The Chicago Mercantile Exchange - known simply as the ‘Merc’ - is an institution which facilitates the trading of futures and options, two types of financial instrument. A “future” is a contract to buy or sell a given quantity of a given commodity at a given price at some specified future time. For example, we can write a “future” to sell one ton of steel for $50 three months from now, or to buy one ton of wheat for $75 two months from now. Futures were originally designed for industries which used these commodities, but eventually came to be used for pure speculation. An “option” is a document which gives its owner the right to buy or sell a quantity of a commodity at a given price at a specified future time, but does not oblige him to do so. The trading of futures and options requires complex calculation, involves great risk, but can yield great profits.

Traditionally, the commodities involved were wheat, steel, copper, cotton, corn, beef, pork bellies, and a few other agricultural and mining products. Now, that would change. Because of floating exchange rates for major world currencies, and finally because of floating exchange rates for the US dollar, futures and options would be traded, not for wheat or steel, but for currencies. This would change the world’s exchange dynamic in ways which were unpredictable and which took years to manifest themselves.
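The asymmetry between a future (an obligation) and an option (a right) can be sketched in a few lines of illustrative Python. The prices here are hypothetical, borrowed from the wheat example above; this is arithmetic for explanation only, not a model of real contract pricing:

```python
def futures_payoff(contract_price, market_price, quantity=1):
    """Payoff to the buyer of a future: he is obliged to buy at the
    contract price, so he gains (or loses) the full difference between
    the market price and the contract price."""
    return (market_price - contract_price) * quantity

def call_option_payoff(strike, market_price, quantity=1):
    """Payoff to the holder of a call option: he has the right, but not
    the obligation, to buy at the strike price, so he simply declines
    to exercise when the market price is below it."""
    return max(market_price - strike, 0) * quantity

# A future to buy one ton of wheat at $75, two months from now:
print(futures_payoff(75, 80))      # wheat rises to $80: gain of 5
print(futures_payoff(75, 70))      # wheat falls to $70: loss of 5
print(call_option_payoff(75, 70))  # an option instead: payoff 0, never negative
```

The last two lines show the difference in risk: the futures buyer bears unlimited downside, while the option holder can lose no more than whatever he paid for the option in the first place.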

Leo Melamed was the genius founder of the financial futures market and presided over its explosive growth on the Chicago “Merc” during the last three decades of the twentieth century. He understandably ended up exceedingly wealthy for his troubles, but on Friday afternoon of August 13, 1971, it would not have been evident to most observers that either of these outcomes was in the cards.

While the speculative trading of currencies was quietly starting - the world didn’t seem to notice at the time - other harmful changes to the nation’s economy were underway. Greenspan managed interest rates and managed the money supply with an eye to keeping equities markets at certain levels.

This should have been a shocking wake-up call to friends of the free market. It implied that the state could create prosperity by tricking the people into thinking they were wealthier, thereby inducing them to borrow and consume more. Indeed, the Greenspan “wealth effects” doctrine was just a gussied-up version of Keynesian stimulus, only targeted at the prosperous classes rather than the government’s client classes. Yet it went largely unheralded because Greenspan claimed to be prudently managing the nation’s monetary system in a manner consistent with the profoundly erroneous floating-rate money doctrines of Milton Friedman.

For all their alleged differences, both President George W. Bush and President Barack Hussein Obama would find themselves in the financial turmoil of 2008, some 37 years after Nixon’s currency rate decision, and both would seek advice from Greenspan’s successor, Ben Bernanke, who insisted on comparing the situation in 2008 to the situation in 1929.

The great contraction of 1929-1933 was rooted in the bubble of debt and financial speculation that built up in the years before October 1929, not from mistakes made by the Fed after the bubble collapsed. In the fall of 2008, the American economy was facing a different boom-and-bust cycle, but its central bank was now led by an academic zealot who had gotten cause and effect upside-down.

If the situation in 2008 was misdiagnosed, inasmuch as Bernanke saw it as parallel to 1929, then the misdiagnosis led to incorrect prescriptions.

The panic that gripped officialdom in September 2008, therefore, did not arise from a clear-eyed assessment of the facts on the ground. Instead, it was heavily colored and charged by Bernanke’s erroneous take on a historical episode that bore almost no relationship to the current reality.

The prescriptions for the problems of 2008 were not appropriate for that situation, or indeed for any situation. They helped not at all, but rather created two additional problems: further distortion of the market’s natural trend, and further undermining of the currency’s credibility. Yet these bad prescriptions were delicious to those business leaders who did not want to operate in a free market. Cronyism had a heyday. Instead of exposing all businesses to the hurricane of market fluctuations and seeing which of them could weather the storm, markets were warped to create safe havens for those businesses which abandoned their laissez-faire ethics and would sell their honor for a government handout.

Thus the bailouts hemorrhaged into a multitrillion-dollar assault on the rules of sound money and free market capitalism. Moreover, once the feeding frenzy was catalyzed by these errors of doctrine, it was thereafter fueled by the overwhelming political muscle of the financial institutions which benefited from it.

From Stockman's viewpoint, 2008 was 1985 all over again. Theoretical purity was traded away for political pragmatism. Such damage, once done, is not quickly or easily undone.

These developments gave rise to a great irony. Milton Friedman had been the foremost modern apostle of free market capitalism, but now a misguided disciple of his great monetary error had unleashed statist forces which would devour it. Indeed, by the end of 2008 it could no longer be gainsaid: during a few short weeks in September and October, American political democracy had been fatally corrupted by a resounding display of expediency and raw power in Washington. Every rule of free markets was suspended, and any regard for the deliberative requirements of democracy was cast to the winds.

In David Stockman's view, then, a series of bad decisions led to the nation's economic decline: Nixon's free-floating currency, the Fed's attempts to manage the stock markets via interest rates, speculation on currency exchange rates, Congress's refusal to cut spending, and more. The lingering question remains: can it be undone?

Wednesday, June 19, 2013

Education Funding and Education Policy

Would you buy a book, even if the author hadn't finished writing it yet? Would you start playing a game, even if all the rules for the game hadn't been clarified? Would you get onto an airplane, even if you didn't know where it was going?

State legislatures around the nation have adopted a nationwide educational program called 'Common Core State Standards' - often simply called 'Common Core' or CCSS. Educational programs have come and gone over the decades, but what is new about this one is that the individual state legislatures adopted it without knowing what it even is, because at the time they adopted it, it wasn't completely written or finalized. Tim Walker, writing in NEA Today, notes that

Forty-five states have adopted the CCSS, which means for the first time there will be consistency among states in what students should know and be able to accomplish in the two core subject areas of English language arts and math. The purpose of the CCSS is to provide a consistent, clear understanding of what students are expected to learn, no matter where they live, so teachers and parents know what they need to do to help them. The CCSS is also designed to be much more rigorous, focused, and coherent than current standards. It is also relevant to the real world, reflecting the knowledge and skills that young people need for success in college and their careers.

Thus far, the CCSS sounds good. It's difficult to argue with consistency, rigor, focus, coherence, and relevance. Who wouldn't want students to have knowledge and skills? But, in fact, while curricula and tests are being rewritten, and

student assessments are being remapped to the Common Core, the design and implementation of these new exams is still largely a work in progress, despite the expectation that they will be implemented for the 2014 – 2015 school year.

Why would anyone commit to complying with a program which didn't yet exist? Why would someone promise to follow a set of guidelines that hadn't yet been written? The answer is money. The federal Department of Education "gave" money to individual states in return for a blanket promise to follow whichever guidelines the department might issue to the states in the future. Note the years:

In 2010, two consortia of states were awarded federal Race-to-the-Top money to develop a new set of assessments that will be tied to the Common Core Standards scheduled for implementation during the 2014–2015 school year. Both groups — the Smarter Balanced Assessment Consortium (SBAC) and the Partnership for Assessment of Readiness for College and Careers (PARCC) — plan to administer these new exams primarily on computers, and both aim to minimize multiple-choice questions in favor of open-ended problems requiring creativity and critical thinking. Both groups also plan to develop materials for teachers, such as curriculum maps, to show how the material on the assessments can be taught over the course of the year, and they will create items that can be used formatively in classrooms. The PARCC consortium consists of 23 states, and the Virgin Islands. Twenty-seven states belong to SBAC.

"Race to the Top" was a slogan created by the Department of Education. It was a label for a contest: those states which made the most blanket promises to the federal government - turning over the state's present and future decisions to the national government - would receive funding. Step by step, various states gave away more and more of their freedom, and promised to let the federal Department of Education dictate current and future educational policy inside the states. Both federal and state governments seemed to be ignoring the ninth and tenth amendments to the Constitution - the final two amendments in the Bill of Rights.

While criticizing previous educational programs as relying too much on high-stakes standardized testing, "Race to the Top" drove states into the arms of CCSS, which simply repackages such testing in new formats. Despite misgivings on the part of some educators, many states rushed to embrace CCSS, simply because doing so would result in money from the federal government arriving in their school systems. Or so they thought. Because "Race to the Top" was constructed as a contest among states, the winners were those states which promised to turn over the most control to the federal government. Some states promised to give almost complete control to the federal government; they lost the contest, because other states gave total control to the new program. But those states that lost the race - because they promised to give "almost" complete control instead of simply complete control - were still obligated to fulfill the commitments they'd made, even though they now received none of the special Common Core funding. An article in the MEA Voice notes that

Many states, including Michigan, began the process of adopting the standards in 2009 — despite not having seen them — in order to qualify for federal Race to the Top funding. The Race to the Top program, part of the 2009 federal stimulus package, doled out more than $4 billion to states “that are leading the way with ambitious yet achievable plans for implementing coherent, compelling, and comprehensive education reform.” The program also included more than $300 million to develop tests based on the new curriculum standards.

The Michigan State legislature, then, lured by the possibility of millions of dollars from Washington, agreed to implement a program about which it freely admitted that it had no clear idea - a program about which it knew nearly nothing. How they "began the process of adopting the standards" which had not yet been formulated is a puzzling matter. In any case, while they promised to abandon Michigan's sovereignty and its ability to decide matters for itself, their efforts were in vain. Other states surrendered even more autonomy, and Michigan got no money from the program.

The Michigan Legislature passed numerous “reforms” in an attempt to secure Race to the Top dollars. While Michigan failed to win any of those federal dollars, the state is still responsible for implementing the Common Core State Standards by the 2014-15 school year.

The State of Michigan is now "on the hook" to pay for programs it didn't even understand when it agreed to adopt them. Not only has the federal government usurped the state's autonomy, and not only has the state agreed to this usurpation, but now the state must pay for the removal of its own freedom. The state had hoped to sell its freedom for money; instead, the state has lost its autonomy and now must pay for the removal of that autonomy.

Much of the state's educational bureaucracy will have to be re-tooled, at the expense of Michigan taxpayers, to align itself with the new federal dictates. MEA Voice cites William Schmidt at Michigan State University:

Schmidt reports that more than 90 percent of teachers support the concept of Common Core standards. However, there is a big gap between what teachers understand the standards to be and what they actually are.

In the end, the State of Michigan gave away its autonomy, got no money in return, and must now pay to re-tool its own system to make it correspond to a set of guidelines which originated neither with the Michigan legislature nor with the Michigan voters. The federal Department of Education's trickery, and the state legislature's naivete, resulted in massive damage to Michigan's educational programs.

Sadly, Michigan is not an isolated example. Many other states followed suit. The question now confronting these states is whether they can in any way regain the liberty and autonomy thus lost.