The gratitude meme

A meme on social media which I observed this summer involved expressing gratitude. The post instructed the reader to “list 3 things you are thankful for each day for 5 days” or “list 5 things each day for 3 days”. No one seemed to post a list more than once, so it seemed to be a failure, in the sense that no one rose to the challenge, at least among my friends. However, if viral propagation is the real measure of “success” on social media, this one was clearly a winner. It was popular across a wide variety of demographics.

Stuff like this fascinates me because it seems to be the future of spirituality. My own hypothesis is that spiritual (moral/ethical) practices and attitudes work themselves out almost entirely at the personal level, and mostly through individual interaction with one’s immediate social circle (friends/family) in response to loss, trauma and grief. Second in influence is popular culture (music, movies, books) and a distant third would be any formal efforts on the part of organized religion (sermons, Sunday School). I base this hypothesis on the observation that telling people how to think, feel or behave isn’t nearly as influential as showing them.

In the past a lot of this happened in church. Today it happens (also, or instead) on Facebook. This might make a difference, eventually. American religion was shaped first by the rural character of early American development, and then by the urban experiences of immigrants. In the first phase, people spread out to develop the land and could only get together socially for church. One of the reasons certain radical, dissident Protestant groups like the Methodists and Baptists, scorned and persecuted in their countries of origin, became so overwhelmingly popular in the interior of early America (late 1700s to early 1800s) was that their religious events (e.g., tent revivals) were the only entertainment for most people in the back country.

Later, with millions of immigrants crowded into urban settings to work in factories, groups of immigrants congregated with people like themselves and socialized through denominations in which ethnic origin was much more important than religious doctrine. As late as the 1960s, when my family “immigrated” to California, we sought out “people like us” in California’s exploding suburbs, and found them in Lutheran churches, which were guaranteed to be full of people of German ethnic origin, people who would recognize the folk art on our wall as Pennsylvania Dutch hex signs.

When churches were the social centers, if you experienced a loss or trauma, condolences might come in the form of a personal visit from “people from the Church”, probably bringing food. Today, although it’s harder for people to show up at your door with a casserole, geographically dispersed families can get back together on something like Facebook. So the question on my mind is, does this work? Can moral values be determined or transmitted through social media? Does it allow for the sort of spiritual education that used to take place person-to-person, one crisis at a time?

If the “gratitude” meme is any indication, then I think the answer is a qualified “yes”. Apparently, an Internet meme doesn’t get people to reflect deeply on what they ought to be thankful for; it doesn’t seem to encourage anything more than a brief, shallow pause for reflection. In that way it is very similar to weekly invocations of thanks in religious services, or annual secular events like Memorial Day, Veterans Day or Thanksgiving. What I think it demonstrates, though, is that people want to talk about this, and to do so “in public”, that is, exposed to friends and family on social media. The idea is similar to announcing you are going on a diet or have started visiting the gym: you want social reinforcement for something your individual will may not be strong enough to sustain. That people want this, that people need this, and that modern communications can to some extent provide this seems to me a very good sign.

Let me just first explain (mansplain, if you will, because I know some of you already know all this) that gratitude or thankfulness is fundamental to any rational ethical system. An off-hand remark by Cicero in a courtroom argument said it well: “Gratitude is the parent of all the virtues”.

Any rational ethical system, that is, a system for making people happier, is going to involve some element of this: either gratitude, or ingratitude in the form of dissatisfaction, not being content with what you have or not being happy with who you are. Envy, desire, greed and so forth are examples of this discontent.

When the question is not merely legal behavior but ethical conduct, when we are concerned not merely with justice but also with being happy, then how we feel about things moves to the forefront. Thus, the Ten Commandments not only prohibit adultery and theft but also command you not to covet your neighbor’s wife or goods. No sensible legislator passes a law against “coveting”, but it is still good advice if you want to be happy.

The opposite of gratitude is also the central focus of the Four Noble Truths of Buddhism: (1) all life is suffering, (2) the cause of suffering is ignorant desire, (3) this desire can be destroyed, (4) the means to this is the Eightfold Path.

Here all systems which have human happiness as their object agree that unhappiness comes from desire for things you don’t have, or a craving to become someone you are not. Some ethical systems leave room for ambition, i.e. learning, achieving or acquiring. In that case, the thing you have is your potential, and the thing you desire is to actualize that potential. In such systems, however, it is either assumed or emphasized that wishing doesn’t make it so. Desiring wealth doesn’t make me rich; wanting to read doesn’t make me literate.

All these systems are also critical of an existing social system which places a high value on desire as the engine which drives the society to ceaseless activity, striving to gain wealth or assert power over others. Where they all differ is in the details of how to overcome desire (craving, want, lack, need). Some systems, like Buddhism and Stoicism, emphasize self-control: you can destroy desire by simply not desiring. Others, like the Judeo-Christian-Islamic traditions, teach that such self-control is only possible by submitting your will to God’s will.

Modern, secular, technological civilization is unique in having the capacity to generate a seemingly endless supply of products which claim to be “goods” (which promise they can satisfy desire). There are some things we desire, however, that can’t be manufactured in a factory, and regarding which we are no better off than Ancient Romans, Hindus or Hebrews: affection, social status and esteem. We can pretend to create a technological delivery system for such things (e.g. counting “likes” on Facebook) but no one is really fooled by this, except the very young, immature or exceptionally stupid.

It sometimes seems like the modern, secular, “post-Christian” world is reverting to pre-Christian values (or, as Muslims say, the idolatry of the polytheists). In the Christian tradition, for example, we say you cannot serve two masters: God and Mammon. The decadence of modern society is thus often characterized by religious thinkers as a kind of idolatry of wealth or production, or worshipping one’s Self as a god, or subordinating God’s will to your own will, to your desire. It’s hard to dismiss this analysis when there exist people who will pray for a sportsball victory, as if God were an old polytheistic avatar you could make deals with. Ultimately, however, I don’t think it is really possible to revert to paganism, and these diagnoses of modern ills are misplaced. The psychology of “gratitude” reveals why.

Let’s consider some real polytheistic idolators: the Romans (before Constantine). Since Cicero was a Roman and a Stoic, some of the “virtues” he had in mind when he spoke of gratitude will seem strange to us. For example, the Romans valued “liberality”: spending money freely. The Stoics reasoned that if we first recall the favors done for us when we were young and poor, and then connect that memory with a pleasurable sensation (thus making feeling “grateful” a perfect example of operant conditioning), then we are more likely to do such favors for others when we are in a position to do so. The Romans really didn’t believe in saving money or amassing great piles of it. The purpose of money was to be spent. You didn’t spend it all on yourself, but mostly on others, who then owed you a favor. A lot of clients owing you favors was (to the Romans) worth a lot more than any pile of gold, goods or even land.

The greatest practitioner of this system of “patronage” was Julius Caesar, who borrowed and gave away vast sums of money, and thereby gained so much power that Rome’s form of government had to be changed to accommodate his existence. If you were a slave and your master freed you and gave you the capital to start your own business (slavery wasn’t racial in Roman society, and so things like this did happen), a Roman would then forever honor that master as patron and make himself available to help the patron in return. Soldiers viewed their generals as patrons to whom they were obligated; indeed, even conquered territories regarded the generals who conquered them as their patrons (to whom they owed the favor of becoming part of the Roman Empire). Roman religion was an extension of patronage, and you honored the god who did you favors. Fortuna (the goddess of good luck) was often praised and glorified, especially by veteran soldiers who knew they were lucky to be alive.

Let’s say you are a Roman soldier who has just won a battle and been granted some of the spoils. You are alive and richer: who do you thank? Tradition suggested you thank your general and Fortuna, the goddess of good luck, but why stop there? The eagles carried by your legion bore the letters SPQR: the Senate and People of Rome. Doesn’t this suggest you also owed them thanks? The weather played a role, so there are other gods to thank; if you got there by boat and didn’t drown, thank Neptune. In addition to all these gods, there were also many humbler persons a thoughtful Roman soldier might thank: the cobbler who made his shoes and the smiths who made his sword, shield and pilum. While he was at it, he could thank the ancestors who invented those sandals and those arms and armor. Once you start thinking of causes (and if you have the education to do so) it starts seeming like hubris for your commander to claim the honor of this victory, when it was many generations of past generals who invented the Roman system of war, its weapons and discipline and tactics, which your commander merely copied. Rome wasn’t built in a day, you know.

The Roman system of patronage required, however, ignoring all the distant causes and forces which produced your goods and good fortune, and instead strengthening the natural emotional and social bond with the persons immediately before you (or above you in the chain of command). This is a pretty powerful social system. You can scale it up from the household relations of parent and children all the way to patron empires and client provinces. You can build an Empire that lasts a thousand years with a system like this. It was pretty rough, however, on the unlucky or those at the bottom of the social heap.

Christianity subverted this system of patronage by looking beyond the immediate, identifiable patrons who gave you things or did good things for you, and attributing all good things to one God. In one way this was more realistic than the Roman way. Everything good in the world happens because of a series of causes which quickly goes beyond our view and possible knowledge. Who, for example, invented the English language? In that case, we don’t even have a particular people or place to thank: the language is a creole reflecting many influences from foreign invaders: Celtic, Saxon, Roman, French. When we don’t know who to thank, or the persons to whom we owe thanks are so many as to be innumerable and unnameable, then we often invent a symbolic stand-in person. There’s more to this than the simple explanation of natural phenomena (where does lightning come from? It comes from Zeus!). We also need to know why Zeus hurled the lightning bolt. And, while we are on the subject, thank the sky god for the rain.

As long as we are using our imagination and inventing people to thank (patron Saints, pantheons of gods), why not combine all that (imaginative, fictive) gratitude in one, big, awesome object, the Creator of Everything, or the one “who is with God, and through whom all things were made”? It’s more efficient (easier to praise one God than to figure out which of many to thank), but it is also a social revolution.

Praising God directly cuts out the middlemen (kings, warriors, priests). When early Christians praised Christ Jesus as their Savior they meant “saved from judgment for sin”, of course, but they also meant saved from their patrons in the Roman system. Paul counseled against disobedience (slaves, obey your masters, Ephesians 6:5), but it is quite clear he was offering the opportunity to transfer all emotional ties, all love and praise and worship, to the Father, Son and Holy Spirit.

This did not go unnoticed. Christians were persecuted, and the hardcore, dangerous ones were identified by their refusal to praise the Emperor as a God. But the many opportunities to evade the bonds of patronage were far too tempting, even for the rich and powerful. Banking and credit systems based on double-entry accounting replaced socially reinforced systems of patronage with a more portable system of exchange value. Where before the advantages and privileges of “being known as rich and generous” were based on reputation and therefore limited to one city, now you could travel from Milan to Antwerp, not take any gold with you, and have evidence of your precise level of privilege written down in Arabic numerals in a book.

After two thousand years of this we take it all for granted. We don’t work for people out of loyalty or gratitude for past favors, but rather on a contract basis, labor for money. The Marxists are right when they say we no longer work for the love of persons or devotion to the things worked on (say, love of music or love of Justice), but rather have made our work a product, separated it from ourselves and allowed it to assume an exchange value or market price, which is the [false] equivalence of all things to all persons.

The American people no longer thank themselves for their good fortune. Instead, we inscribe “In God We Trust” on our money, rather than reminding ourselves that we are E Pluribus Unum (Out of Many, One). We’ve become alienated from ourselves, or at least from those who fall outside that mythological group of immediate ancestors who “worked hard and pulled themselves up by their own bootstraps”. We say absurd things like “keep your government hands off my Medicare”, as if we were “endowed by our Creator with certain inalienable rights” like medical care in our old age. For the dimmer Americans among us, everything good is a God-given right rather than a collectively funded benefit. It sometimes seems as if government of the people, by the people and for the people has indeed perished from the Earth (though, thank God, it has really only been killed in the minds of FOX News viewers).

From that it should be clear that I do not think it sufficient to express gratitude by listing a few objects in your immediate view (“I am thankful for my family and my good friends Smith and Wesson”) and I think you should take the next step and consider to whom you owe thanks. I don’t think it is wise to cut short that inquiry by simply attributing all good things to God, or any thoughtless atheistic substitute such as “Nature” or “blind chance” or what have you. I think the only proper expression of gratitude is to learn and contemplate where things came from and who was responsible. To some extent that is an impossible task, but the attempt is instructive.

Consider just the wonders of your own physical body. To be truly and completely grateful we would have to thank all our ancestors, right back into the non-human, the ones who evolved legs and hands and eyes and brains, right back to the very first single-celled organism that discovered (purely by trial and error, of course) that moving toward the food was better than waiting for it to come to you (or basking in the sun and making it) and thus became the first “animal”. In fact, almost everything in our world, from the shape of our visual field to the words I am using here, is attributable to some long-dead person or thing. We can’t thank them personally, be acknowledged and receive a “you’re welcome”. However, until we at least try to understand and appreciate all that has come before us, we have no conception of God as Creator. I say this as someone who believes God simply is all living things, but non-heretical, orthodox Christians can make the same point. Just last week, as I was writing this, Pope Francis said that “Evolution in nature is not inconsistent with the notion of creation, because evolution requires the creation of beings that evolve.”

The first stage of gratitude is awareness of good things, coupled with a sense of satisfaction or contentment. At this stage we don’t mean anything fancy by “good”, just a connection by definition to that feeling. I might say I am grateful for my hands, because they allow me to do this typing which gives me pleasure, but we would not say “I am grateful for this painful fracture in my index finger” (except in some really weird contrived circumstances where the broken finger prevented me from doing something which would have caused even worse pain) because we are not grateful for bad things, only good. I would include in this stage the tricky process of suppressing “desire”, diverting the attention away from what you don’t have toward what you do have, away from who you merely want to be and toward who you really are.

The next stage is to consider the source of the good things. Where did my hands, this computer, these ideas … come from? I didn’t make these ideas any more than I made my own hands. Identifying the source with any precision is often not possible, but some effort to get past yourself is needed. Whether I attribute my good fortune to “God” or “gods”, “Nature” or some holistic spiritual concept, whenever I give thanks I transcend my Self. Which amounts to saying “gratitude is getting over yourself”.

Posted in Uncategorized | Leave a comment

Thoughts on political coercion inspired by Martin Luther King

The “Letter from Birmingham Jail” is a response to a statement (called “Call for Unity”) by a group of Alabama clergymen. A large portion of the Letter addresses their calls for moderation, patience and “legal” process towards desegregation.

Perhaps the most interesting part of that is where Dr. King lays out the four-step method of nonviolent resistance: “In any nonviolent campaign there are four basic steps: collection of the facts to determine whether injustices exist; negotiation; self purification; and direct action.”

This is worth studying carefully, with questions in mind such as, “why do some nonviolent campaigns succeed and others fail?” The Civil Rights movement didn’t end racism, of course, but it did end de jure segregation, and raised American consciousness concerning justice and equality for oppressed groups, and not just black people. The Peace movement, on the other hand, which Dr. King began to devote himself to before he was assassinated, was an abject failure. It did not stop the Vietnam War, and more importantly, it did not create widespread consensus that we must stop making war, stop building and exporting weapons, stop using death and destruction to achieve our political objectives. If anything, the opposite has occurred. Collectively, we are the most violent nation that has ever existed. We respond thirty-fold to every injury: 3000 dead on 9/11 results in 100,000 Iraqis dead, and an ongoing campaign of dropping missiles on people who harbor our enemies, twelve years later with no sign that we will ever stop.
Perhaps the method of nonviolence doesn’t work when there are no readily organizable groups of people to negotiate for, or negotiate with. That is to say: you can organize India to seek independence from the British Empire. You can organize black people in America to seek justice from whites, women to seek justice from men, gays to seek justice from straights. But who are you negotiating for when you organize for Peace? Who are you negotiating with?

The firehoses and dogs which so disturbed the public consciousness in 1963 elicit yawns today, and it is both sad and funny that old women can participate in protests carrying signs saying “I can’t believe we still have to protest this shit.” A lot of our problems today (peace and sustainable economies) are problems of “everyone and no one”. Everyone wants peace, but no one acknowledges that our collective will is insanely, disproportionately violent. Everyone wishes to preserve the environment, but no one will insist that everyone must endure the sacrifices and disruptive changes necessary for that preservation. (Of course there are exceptions, individuals who are dedicated to peace and to sustainable living, but I’m talking about mass political action here, and “no one” in that context means “not enough people to force change”.)

While I don’t think marching in streets means much in America today (though it seems to mean something in Egypt), and I think American progressives need to develop some new tactics of force and coercion to make political change happen, I don’t view Dr. King and his comrades as obsolete historical relics. I keep coming back to this “Letter” because of its structure.

One small part of the “Letter from Birmingham Jail” addresses a theoretical issue with implications well beyond Birmingham and the 20th Century. It begins, more or less, when Dr. King refers to the Supreme Court case calling for an end to “separate but equal” schools (Brown v. Board of Education) and then answers the question: “How can you advocate breaking some laws and obeying others?” It ends, more or less, with the observation that “We should never forget that everything Adolf Hitler did in Germany was ‘legal’…”

He makes a point of mentioning Aquinas and Augustine, though he doesn’t say much about them. They are invoked to establish, as a matter of axiom and definition, that there are “just” and “unjust” laws, and to do so in a manner which the Catholic and (Conservative White) Protestant clergy he is responding to cannot evade. Likewise, he invokes the terminology of the Jewish philosopher Martin Buber (I think mostly because one of the clergy he is arguing with is a rabbi) and of Paul Tillich (even though none of the clergy he is responding to was Lutheran) to define segregation as a sinful regime, an “existential expression of man’s tragic separation, his awful estrangement, his terrible sinfulness”.

Impossible as it may seem today, in 1963 not all white liberal intellectuals were on board with the Civil Rights movement. The reason for that is made pretty clear in the body of the letter: demanding equality created tension and disorder and incited violence. The fact that the violence came almost entirely from white supremacists and segregationists made no difference: Dr. King and his comrades were seen as rabble-rousers.

The prevailing view among liberals during this early part of the Cold War (which I think ends with the assassination of John F. Kennedy) was that of “pragmatism”, the philosophy of educators such as John Dewey. In their view, justice was to be achieved by reason, persuasion and education, and implemented scientifically by “experts”, by “the Best and the Brightest” as David Halberstam’s 1972 book put it (by then, ironically).
Forcing the issue by some means of coercion was considered appropriate only as a last resort. In the case of school desegregation, especially the desegregation of Central High School in Little Rock, Arkansas, where the President had to call in the 101st Airborne Division to stand guard with fixed bayonets, because the “National Guard” wasn’t sufficiently trustworthy (since it is, in fact, a state militia, not “National”), the threat of force was considered acceptable because it was to enforce a court order. If a white male federal judge ordered it, then it was ok.
To that, Dr. King had the theoretical response, provided by a liberal theologian and social commentator, Reinhold Niebuhr:

Lamentably, it is an historical fact that privileged groups seldom give up their privileges voluntarily. Individuals may see the moral light and voluntarily give up their unjust posture; but, as Reinhold Niebuhr has reminded us, groups tend to be more immoral than individuals.

In his 1932 book, Moral Man and Immoral Society, Niebuhr wrote that political groups are inevitably immoral and unjust because they represent not a compromise between reason and expediency, but rather an adjustment of the purely selfish interests of the powerful and privileged members of the group. The actions of society as a whole may sometimes be “moral” if the selfish interests of the majority (or a peculiarly powerful minority) happen to coincide with the just and moral thing to do, but this happens by accident at best, and not because policy is motivated by, or intended to be, “moral”. Self-sacrifice is possible for individuals, but not for human collectives. For leaders or representatives to sacrifice the interests of the group as a whole to some “moral” principle is treason or dereliction of duty: human beings do not elect representatives or consent to rule by leaders in order to be morally uplifted.
When privilege is involved, as it was in the case of racial segregation, Dr. King knew that some form of coercion was necessary for justice to occur. Of course he was dedicated to “nonviolent” means, but this only meant that killing people or threatening to kill people was ruled out. “Forcing” the issue by disobeying court orders and marching in the streets was still coercion of a sort, and not just mere rhetoric, persuasion or “education”. He wouldn’t have been writing this from jail (on the margins of newspapers) if he had only just stated his opinion. He stated his opinion in a manner that broke the law and invited a police response.
I think today, or with respect to issues that affect us all and don’t pit one group against another, Dr. King’s methods are obsolete. The problems we have today aren’t so much the privileges claimed by one group against another (except to the extent that they still are problems: that racism, sexism, homophobia and the like continue to exist) but rather privileges claimed by all of us against nature. What sort of force or coercion is sufficient to convince all of us to mend our ways? It may be the case that only the threat of death (by natural disaster and catastrophic collapse of the world economy) can get our attention now.

Posted in Uncategorized | Leave a comment

The King Can Do No Wrong: “Development” of a legal doctrine

Before I kick this up to the next level, the “global” level, I should probably say a few words about my previous post, for those of you who didn’t go to law school.

I try to make a case for the word “develop” over the word “interpret” for what it is that judges and lawyers do. A law professor would dismiss this immediately for my lack of academic standing and structures: there’s not a case citation to be seen in that post. Since gathering case citations is what I do for a living, one might even ask me why I didn’t bother. There are several reasons, some peculiar to me but others common to everyone reasonably well-informed about the law.

Legal citations ultimately go to “cases”, opinions by appellate judges. Cases have a lot to recommend them as sources, but they aren’t science. There was a movement in the 19th century to make law into a science by using “cases” as the empirical data. That was stunningly naïve but it had an effect on the way law is taught which still carries forward today: the casebook method pioneered by Christopher Columbus Langdell when he was dean of the Harvard Law School from 1870-1895. Using the casebook method, law students read specific cases, discuss them, and make inferences about the general state of the law. It is now very widespread, but often side-by-side with what I call the “outline” method: memorizing outlines of important points of law to be regurgitated on law school exams and ultimately on the bar exam.

The advantages of casebooks appear mostly from the disadvantages of outlines. In an effort to remain general, outlines tend to abstract from, and become irrelevant to, the messy reality of real local law. Take, for example, the criminal law. What the criminal law “really” is at any given moment in any particular place depends entirely on what the legislature in that jurisdiction says it is. What law students study for the bar exam, however, is something called the “common law of crimes”, which is a distilled version of British criminal law from the 1700s. This is no longer the criminal law in any part of the world, not even Britain, but it has influenced law throughout the English-speaking world. Most of the English-speaking legal world, for example, maintains the distinction between “felony” and “misdemeanor”, but has rejected the original meaning: a felony was a capital crime for which the sentence was death by hanging. (Now, in the United States, a “felony” is usually defined as a crime for which the maximum sentence is a year or longer in prison.)

The common law of crimes is an excellent teaching tool, because it teaches students to break down crimes into “elements”. The “elements” of burglary were “breaking into a dwelling place at night”. If the facts involve walking into (not breaking into) a commercial establishment (not a dwelling) during the day (not at night), then it’s not “burglary”; it has to be called something else (probably “larceny”). If, on the other hand, the facts did involve breaking into a dwelling at night, but nothing was stolen, it’s still burglary.

This is not the law where I live. New Mexico has revised the common law elements by statute: “burglary” is any unauthorized entry (not just “breaking”) into any structure or vehicle (not just a dwelling), at any time, day or night. What makes it “burglary” under current statutory law is not the common law elements but the intent to commit a felony or theft therein. Thus if someone reaches into your open car window and steals your camera, that’s an auto burglary, though it wouldn’t be a burglary at all under the common law.

(Why not just call it “larceny” then? Because “burglary” suggests not only taking your stuff but also a violation of your personal space, an additional affront to the public peace, which should entail some additional punishment.)

Thus, given a description of the facts of an alleged “burglary”, law students studying for the bar exam and real lawyers in the criminal system would be looking for entirely different things. The students would be looking for a “breaking”, “dwelling” and “night”, while the real lawyers ignore all that and focus on evidence of unauthorized entry and felonious intent, much more broadly defined.
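For the programmers in the audience, here is a minimal sketch of that contrast in Python. It is purely illustrative: the fact fields and element names are my own invention, not statutory language, and real element analysis is nowhere near this tidy.

```python
# Illustrative sketch only: invented fact fields, paraphrased elements.

def common_law_burglary(facts):
    # Common law elements: a breaking, of a dwelling, at night.
    return facts["breaking"] and facts["dwelling"] and facts["at_night"]

def nm_statutory_burglary(facts):
    # Statutory elements (paraphrased): any unauthorized entry into any
    # structure or vehicle, at any time, with intent to commit a felony
    # or theft inside.
    return facts["unauthorized_entry"] and facts["intent_to_commit_felony_or_theft"]

# Reaching through an open car window in daylight to take a camera:
facts = {
    "breaking": False,
    "dwelling": False,
    "at_night": False,
    "unauthorized_entry": True,
    "intent_to_commit_felony_or_theft": True,
}

print(common_law_burglary(facts))    # False: no breaking, no dwelling, not at night
print(nm_statutory_burglary(facts))  # True: unauthorized entry plus felonious intent
```

The same facts, run through two different lists of elements, come out as two different crimes; that is the whole point of the bar-exam-versus-real-lawyer contrast above.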

The reason we stick with the “common law of crimes” rather than test on real law is to make legal education portable from state-to-state: I can go to law school in New York or New Mexico and still hope to pass the bar exam in either state. But it has the rather obvious flaw of being divorced from the reality of actual practice. Dean Langdell at Harvard came up with the ingenious solution of having students discuss the opinions of judges applying real law to real cases. This tends to be much more like what lawyers actually do.

The problem with the casebook method from an intellectual perspective, however, is that judges write these opinions. Judges are only human and have human biases and their opinions are just that: not evidence, let alone empirically established fact. My law professors exposed me to an excellent book on this topic: John T. Noonan, Jr.’s Persons and Masks of the Law. Every law student forced to endure reading the Palsgraf opinions in their first year Torts class should also read this book, which exposes how the famous judges writing in the Palsgraf case abstracted from and (inadvertently) falsified the real facts of that case.

The type of person who would read about these factual distortions in a famous case and throw up their hands in despair and exclaim “It’s all a pack of lies, lies, lies!” tends to get weeded out before law school. Even the worst law students and lawyers see something like that as a challenge: to sort out the true from the false. On the other hand, people do that with religion all the time. They find contradictions in the Bible (not hard to do, really) and then declare that God does not exist and reject the entire edifice of Judeo-Christian morality.

There came a moment for me, however, when I had a “lies, lies, lies” epiphany which has colored how I view jurisprudence ever since. Please excuse the “war story” here, but I think it helps.

My client was a volunteer firefighter who had been denied State-provided retirement benefits because he was “too old” when he began his firefighting career: 46. State law at the time simply denied retirement benefits for anyone over 45 when they were first certified as a firefighter. Both my client and I knew vaguely that we have in this country laws against age discrimination, so we figured this was wrong. A little bit of research confirmed that. Federal law allowed some “age discrimination” in retirement benefits, but only on the basis of cost (providing retirement benefits to older workers is more expensive because they contribute less over the course of their shorter careers). A flat-out ban based on age, which is what we had here in New Mexico at the time (it’s since been repealed) violated federal law.

Now let me take a page from Noonan and talk about my client. Generally speaking, volunteer firefighters are an exemplary bunch. They need just as much training and bravery to fight fires as the professional firefighters, but they do it for no pay and somehow fit it into their lives when they aren’t working. James Gill was not the kind of guy who just joins up to hang out at the fire station and chew tobacco and talk politics: he was very active in his department in Hondo, New Mexico, and served as its chief for many years. While I never met the man in person, I talked with him over the phone for many an hour, and I became firmly convinced that I should fight to get him his $100 a month benefit, because he deserved it, just as much as any man who volunteers at age 20, 30 or 40. I was also intuitively certain that, given the amount of money involved and the rather well-funded plan created by the Legislature, no one would ever come forward with the evidence of cost differential which under federal law would justify denying my client this meagre benefit.

So I had a clear case of State law violating federal law; the question became what to do about it. In contemplating the options available, I ran smack into this thing called “sovereign immunity”, which bars lawsuits for money damages against States. At this point I will jump over years of litigation and appeals and go straight to the final decision by my Supreme Court: “sovereign immunity” prevented me from suing the State of New Mexico for the past benefits lost, but not from seeking future benefits.

You can find this decision by searching for “Gill” and “PERA” on the internet, but what you won’t find there is the stuff I just told you about James Gill (i.e. the real person involved in this real-life case), nor will it tell you the upshot of the case: how did it turn out? It turned out like this: the State paid me about $40,000 in attorney’s fees (which was a bargain for them, let me tell you) and agreed to start paying Mr. Gill his benefits. Now a sharp law student might immediately ask: if you can’t collect money from the State because of “sovereign immunity”, how did they end up paying you $40,000? This is why I said earlier that the common law of crimes was an excellent teaching tool. Attorney’s fees aren’t “damages”, and neither is an injunction ordering the State to start paying Mr. Gill’s benefits, just like breaking into a dwelling during the day wasn’t “burglary”.

Still, I found it somewhat distressing, throughout the many years of litigation over “sovereign immunity”, that this doctrine allowed the State to get away with not paying Mr. Gill’s retirement benefits. I could not punish them for their intransigence by hitting them with a bill for past, lost benefits. And even though it wasn’t much of a benefit, they fought me so long and so hard (they even tried petitioning the United States Supreme Court for review) that the lost benefits amounted to something substantial by the end.

For this I had to become somewhat of an expert on sovereign immunity, a very strange doctrine which goes back to the chaos in Europe during the Dark Ages. In those days you could raise an army and conquer a chunk of territory, like William did in 1066, and thereafter you owned it, and everything and everyone in it, as William ended up owning some of England and France. You could then parcel out pieces of it to your best buddies. They became “earls” or whatnot, and they then owned their chunk of land and everything and everyone in it, provided only that, if some other asshat decided to take the land from William, your king, or the people there decided they didn’t like being owned by William, you had to raise an army and help him fight them off or put them down. This was called “feudalism”, and while many of the more extreme aspects of monarchical “ownership” of the people had been whittled away by often bloody assertions of “rights”, one thing remained: you couldn’t sue the King. This made some sense, because the whole court system had been instituted by kings to resolve disputes between subjects, and the King made the law and ultimately decided what was right and wrong (or delegated that to courts or parliaments). Lawyers, being the idiots they are, coined the phrase “the King can do no wrong” which, while reflecting the reality that whoever makes the law can just change it to suit himself, goes against mountains of evidence that Kings wrong people all the time.

Fast forward 700 years to America where William’s not-very-direct descendant George the Third had just been chased out of his former colonies by the people, with the help of another of William’s not-very-direct descendants, the King of France. The question then became (for the new States had a lot of war debts to pay) whether we would continue to maintain the fiction that “the King can do no wrong”. Here we had just fought a war because of the wrongs the King and his delegates had inflicted upon the colonists: were we going to keep saying “the King can do no wrong”?

The first judges to take a crack at the question said “No.” That was the Supreme Court in 1793, in the case of Chisholm v. Georgia. Sovereign immunity was inconsistent with the republican values of our new nation. This horrified politicians up and down the coast, who could foresee getting sued for a lot of war debt they simply could not afford to pay. No sooner was the ink dry on the first ten Amendments to the Constitution (the Bill of Rights) than they ratified another, the Eleventh, which reflected a compromise: States could not be sued for money damages in the federal courts when those courts had jurisdiction based solely on “diversity of citizenship” (that is, when the plaintiff was not a citizen of the State being sued). So, a banker from New York could not walk into federal court in Georgia and sue that State: he had to file his lawsuit in the Georgia state courts which, presumably, would be more sympathetic to Georgia’s claim that it had no money to pay the debt.

This compromise was eroded and destroyed in two increments, approximately 100 years apart. First there was the 1890s-era decision that the Eleventh Amendment, despite what its text explicitly says, applied to all kinds of jurisdiction in federal courts, not just “diversity”. Then in 1999 the Supreme Court essentially abolished the Eleventh Amendment entirely and re-instituted the full version of “sovereign immunity”, where you couldn’t sue a State in any court, State or federal. The name of that case is Alden v. Maine.

I can’t really argue that Alden v. Maine is a travesty on the level of, say, Citizens United, for two reasons. First, sovereign immunity is an obscure branch of the law affecting only a few people in narrow cases. It’s the kind of thing only lawyers care about. And second, as noted above, it involves a compromise in which the people who care about it are “bought off”: I got my attorney’s fees and my client at least got his future benefits, and what he really wanted, the satisfaction of proving that the State was wrong, that it was violating federal age discrimination law.

On the level of principle, though, it’s a disaster. From now on, any time anyone spouts off about “interpreting” the law (or worse, “interpreting” the law according to its “original intent”), I will remember Alden v. Maine and how they “interpreted” the simple, express and unambiguous text of the Eleventh Amendment right off the face of the planet. Antonin Scalia, who likes to call himself an originalist and has even written a book where he takes up the originalist cause for the sake of argument (A Matter of Interpretation), was in the majority in Alden v. Maine.

To be sure, I think the argument can be made, and Justice Kennedy did in fact make it in the Alden v. Maine decision, that sometimes bits and pieces of the Constitution (like the Eleventh Amendment’s bits about “diversity” jurisdiction and federal court) should not be interpreted to contradict the overall structure of the thing (in this case, “federalism”, a system of government in which “sovereignty” is divided and shared between the various States and the federal government) and whatever “intent” can be implied from that structure.

In making that argument, though, one abandons the text, or at least principles of textual construction like the rule of exclusion. For 200 years, courts have applied that rule to the Eleventh Amendment. By barring lawsuits in federal court and saying nothing about state courts, the Eleventh Amendment seemed to allow suits against States in state courts, “by exclusion”. The Supreme Court in Alden v. Maine chucked that 200 years of precedent in favor of a “structural” argument from federalism which, abstractly and generally, is indistinguishable from the “emanations” and “penumbra” arguments of Roe v. Wade, much derided by Justice Scalia and his ilk.

It’s tempting once again to throw up one’s hands and say “it’s all lies, lies, lies”, and to a certain extent I’ve done that. Since precedent has little force in Constitutional law, I don’t think it’s absolutely necessary to be citing it with academic precision (unless you are an academic, of course). If I abandon my fixation on precedent and the exact words of the text of the Eleventh Amendment (going against all my training in that way), and get off the high horse of abstract principle (“republican values”, as expressed eloquently by the first Supreme Court to address the matter, in Chisholm), and pay attention to how the law of sovereign immunity has developed, it turns out to be not quite so bad.

What has developed is an institutional compromise in which I can sue the State to enforce federal law, get paid, and get injunctive relief for my clients. This “development” has components like the distinction between “damages” and “injunctive relief” and attorney’s fees.

There is also, I should mention, a legal fiction involved, called the Ex parte Young doctrine, which anyone who is not a lawyer had best not pay any attention to, lest their brain explode. (Under this fiction I didn’t sue the State of New Mexico at all for Mr. Gill; I sued the retirement board in its individual capacity. Does this really make any difference? No. Is it absolutely essential for a successful lawsuit of this kind? Yes. Is this the kind of silliness we lawyers invent to protect our monopoly and make sure we get paid the big bucks? Absolutely.) You can’t have Alden v. Maine (“the King can do no wrong”) and have it work without the existence of an established vehicle for addressing “wrongs” by “the King”.

There was a case once in which some lawyer had the bright idea of pretending he wasn’t suing the State; he was suing a State official who was breaking the law, and he wasn’t asking for money, just a court order telling the official to stop breaking the law. (We call that an “injunction”.) The case is called Ex parte Young, and the weird Latin name itself should suggest to you that something funny is going on. Amazingly enough, a judge went along with it, was upheld on appeal, and ever since judges have conspired to keep this absurdity alive because it serves a valuable function: making State officials comply with federal law. In its own way, it’s less absurd than the fiction that laws are “enforced” (indirectly) by entering judgments for money damages for the harm caused by noncompliance. Now we just skip that whole rigamarole and the federal judge directly states: you’ve been doing it wrong; do it this way from now on.

So 200 years have “developed” this system whereby the States have the flexibility to pay their debts as the money is available, including the option to simply blow off creditors with no legal consequences (though there may very well be financial consequences in terms of the State’s future creditworthiness), but not the option to simply violate federal law with impunity. Rather than use the somewhat arbitrary distinctions of federal jurisdiction as the formality for deciding when States are subject to court orders (as the Eleventh Amendment did), we now use the much more developed law of “injunctive relief”, which has a myriad of cases and rules for judges to rely on in deciding whether and how they are going to enter legally binding orders telling States to do this or that. This is an improvement over “damages” for wrongs done by “the King”, because injunctions are forward-looking rather than relying on past wrongs. This means that, instead of having a judge tell State officials “you’ve been doing this all wrong, whether you understood that or not, and now you’re going to pay”, the judge says: “what you have been doing is wrong, and in the future you will conduct yourself according to the rules I’m going to tell you now.” People hate that, but at least they have fair warning and an opportunity to correct things.

In conclusion: development is not the unwrapping of a text by taking apart the meanings of its words (in the case of the Eleventh Amendment, for example, the words “federal” and “diversity” now have no legal effect). Sometimes we unwrap a text by negating the words themselves in favor of “developments” which are extra-textual (though they may themselves be texts, other texts; in this case the word “damages” has been parsed very narrowly). In the process we may find that complete absurdities (like the Ex parte Young fiction) have a role to play in context and practice, though standing by themselves they are ridiculous. Assumed throughout are existing institutions with built-in attention to real problems involving real people, institutions which keep records and make those records open to people who care about them (nominally “the public”, but that’s another fiction for another day).

Posted in Uncategorized | Leave a comment

I’ve become fascinated with something I call “development”, for lack of a better word, and because I think this word “develop” is fascinating in itself.

 The problem for lawyers and judges is interpretation of texts. The invention of the book has transformed society by making it possible to record knowledge, ideas and experiences and carry them into the future and around the world. Writing, however, requires reading, which has its own problems.

 

Before books there were decrees, such as the first wage and hour laws decreed by Hammurabi. For example, ox drivers got six “gur” of corn per year (one wonders what the value of a Babylonian “gur” in 2012 dollars might be).

Before books as such there was the text backed by force. Perusing the Code of Hammurabi is a worthwhile exercise for any lawyer because it gives you an idea of how ancient some legal principles are. For example, some contracts have to be in writing, and it’s not good enough to testify, even on a stack of Bibles, that the deal was thus and such. In the English-speaking legal world we call this rule “the Statute of Frauds”, referring to an Act of the English Parliament in 1677, even though in America the appropriate statute is one of our own, and even in England, the Statute of Frauds has no doubt long since been reformed many times. Hammurabi’s Code, however, specifies that if you give money (gold and silver) to someone for safekeeping, you better record it in writing and in front of a witness, or else the debt is unenforceable. You can’t look to the King to force the man to give you your money back without a duly witnessed contract. “Get it in writing” has been good advice since 1780 BCE! Some things never really change, and commercial law probably has the best examples.

Other things do change, however, and for such changes I am using the word “develop”. It comes from an Old French word meaning to “unwrap”. If you “envelop” something you cover it (perhaps with vellum?), and if you “develop” it, you uncover or reveal it.

This sense of “revealing” was used by early photographers to describe the process of shining light through their glass or celluloid “negatives” onto light-sensitive paper to produce a photograph: they called this chemical process “developing”. Now that photography is digital and electronic I suppose this meaning is rapidly becoming obsolete. The word lives on, though, in the description of certain jobs: real estate developer or software developer.

Now, in a broad overview of the real estate development process, you start with vacant land and end up with some buildings. But real estate development doesn’t involve much, if any, building. Rather, it gathers the things necessary for that building to take place: money, permits, contracts.

I am led to believe that in “software development” something similar takes place, though it has a back-and-forth aspect to it: someone expresses a need or an idea, you go forth with your team and code it, you come back and say “Is this what you want?” And, apparently, the answer is inevitably “No”, or “not quite”, or “can you do the background color of the interface in brown?”

That back-and-forth aspect would not be appropriate in real estate (you don’t want to be building and tearing down all the time, you need to get it right the first time). But for the purposes of my analogies to law, it’s perfect.

We call it “interpretation”, and there are whole schools and theories of how to interpret laws, but frankly I think that’s pernicious bullshit. “Interpretation” assumes that the framers of the text had some sort of “intent”. This is a complete fiction. Judges and professors can expostulate all they want about the need to enforce laws according to the legislator’s intent (as opposed to their own ideas, or the needs or demands of the parties), but the fact is that the intent is not clear, or there wouldn’t be a lawsuit in the first place. And the truth of the matter is that legislators either come up with a general scheme or skeleton of a law and expect the judiciary to fill in the gaps, or they make a hash of things as the result of a political compromise, such that the law, as enacted, reflects diverse or even totally contradictory “intents”. If you can give effect to two completely contradictory “intents” in a single, real-life case which requires definite winners and losers, then you have what it takes to be a judge in this system.

In reality that task is not as hard as I made it sound in the last sentence, because no case is decided in a vacuum or some timeless eternal moment separate from history. Judges have resources, or at least, they do if lawyers have done their job of bringing the decision-making resources to them. And after some 25 years of doing this, let me tell you, speculation about “intent” is the least useful of all such resources. The most useful are these things we call “facts” or evidence. But let’s consider an example.

In the United States of America we have this idea we call the “First Amendment”. The actual text of the amendment, the first among some twenty-odd and the first of a package appended to the Constitution shortly after it was ratified, has several components. Some deal with religion (the Establishment and Free Exercise clauses); some deal with speech, the press and assembly. In my opinion, the only clause they were really “serious” about, understood and intended to enforce, was the Establishment clause, because they had an immediate problem in forming a Union out of the several States: different “established” religions in each one. There was the Anglican in Virginia and the southern States, the Quakers and Lutherans in Pennsylvania, the Congregational (Puritan) in Massachusetts, the Baptist in Rhode Island, the Presbyterian in New Jersey, the Dutch Reformed in New York. Which of these was to be the Established Church of the federal Union? How about “none”? They went with “none”.

Today, however, when people talk about “the First Amendment” they are referring to a grand idea in which free expression is a necessary part of the structure of a free society. History shows this is almost certainly not what the Framers meant. Why? Because no sooner was the ink dry on the Bill of Rights than they enacted the Alien and Sedition Acts. The Framers wrote “Congress shall make no law…”, but then Congress did pass such a law. And John Adams, for whom the law was passed, didn’t veto it. Moreover, he enforced it, against people who criticized him.

The longest sentence under the Sedition Act was given to a man who set up a Liberty Pole with the words: “No Stamp Act, No Sedition Act, No Alien Bills, No Land Tax, downfall to the Tyrants of America; peace and retirement to the President; Long Live the Vice President”. (It helps to recall here that in those days the Vice President was the runner-up in the presidential election, in this case Thomas Jefferson, Adams’s political opponent.) That guy got jail time and a big penalty (Justice Chase didn’t like the fact that he would not implicate his co-conspirators: so much for the Fifth Amendment, eh?) but he was pardoned by Jefferson.

 

If the Framers had expected Congress to simply heed their words, they were disappointed. If they expected the President to veto such a blatantly unconstitutional Act, they were disappointed. If they expected the Judicial Branch to step in, they had no right to be disappointed, because they had not given the Judicial Branch any explicit power of judicial review. That had to be “interpreted” into the Constitution in 1803 by the Supreme Court in Marbury v. Madison.  If you want to know what the Bill of Rights means, the last people in the world to go to are the Framers. They clearly had no idea. 

Personally, I think calling judicial review an “interpretation” of the Constitution is ludicrous. It was a necessary power that had to be “developed”. Marbury was the first of many steps that led to the modern situation where “free expression” includes wearing black armbands to protest a war, nude dancing, and cartoon depictions of child sex, and courts can (and have) stepped in to protect these expressions on the grounds of “the First Amendment” (a text which says absolutely nothing about armbands, dancing, or hentai).

Other developments include the Fourteenth Amendment and its use to “incorporate” the Bill of Rights for the purpose of judicial review of State law, the creation of an organization of lawyers willing and dedicated to bringing cases to the courts (the ACLU), and above all an intellectual discussion that had to occur among the justices before the First Amendment could actually be used to strike down laws. This is where we get famous points like Holmes’ “fire in a crowded theater” analogy. But note well: the Supreme Court did not start enforcing the First Amendment until well into the Twentieth Century. The text is hardly self-executing, nor is its “intent” clear. “Congress shall make no law…” seems pretty clear and definite, but what if it does? Then what?

Freedom of speech is actually one of the easier ones. The rest of Roosevelt’s “Four Freedoms” (Freedom of speech, freedom of worship, freedom from want, freedom from fear) start getting harder, right after “worship”. We find nothing definite in the text about “want” or “fear” (except general statements of purpose, such as the Preamble to the United States Constitution: “to establish Justice, insure domestic Tranquility, provide for the common defence, promote the general Welfare, and secure the Blessings of Liberty to ourselves and our Posterity”).

Perhaps the thing to do is … write more text? Like the UN Declaration of Human Rights? I suppose that’s one way to develop these ideas. Frankly, though, the example of the First Amendment leads me to believe that more text isn’t likely to “develop” these ideas as well as more enforcement institutions (like judicial review, like federal incorporation, like the ACLU), which take the necessary first step of presenting the cases for discussion. Because with cases come facts, and from facts we get decisions.

On Allan Bloom and Hobbits

It’s been nearly 25 years since the publication of Allan Bloom’s Closing of the American Mind in 1987. At the time I found it a mix of the sublime and the ridiculous. The ridiculous parts have long since been subject to withering criticism. His remarks on rock music, for example, deserved the takedown by Frank Zappa, which amounted to: “So you’ve noticed that mass media has reduced popular music to a strictly commercial industrial commodity?”

While Bloom finally admits in the conclusion that he has no plan for the reform of American higher education, he did have some positive (and negative) things to say about “the cult of Great Books”:

“Programs based on the judicious use of great texts provide the royal road to students’ hearts. Their gratitude at learning of Achilles or the categorical imperative is boundless.”

Personally I detested Achilles. I wasn’t all that thrilled by “the categorical imperative” either. On the other hand, when I arrived at St. John’s College (“the” Great Books college), I found myself in the company of like-minded people, and my gratitude was indeed boundless. I am very grateful to the “cult” of Great Books generally and Mr. Bloom in particular.

There are very few people who can say this, however. The need for liberal education is rare among American teenagers. Many of us at St. John’s were either refugees from the university, or simply horrified at the prospect of going to one. Before we arrived at St. John’s, we felt out of place. Wanting more, wanting better, feeling a lack and need for something not provided in most universities: that marked us as strange people. At St. John’s I found other students who, despite diverse backgrounds and origins, all felt the same need I did. I have since met similar people who are not “Johnnies”, though the mix seems heavily tilted towards people who took refuge at small liberal arts colleges.

I don’t think I can exaggerate my horror of the university when I was young. I had recurring nightmares straight out of horror movies, of sneaking through vast buildings which seemed to consist of nothing but hallways and doors, avoiding the people in uniform who were looking for me. These dreams ended when I settled in at St. John’s, with one last reprise of the nightmare, never to return. In it, I found the door to the Outside, which opened onto a stairway leading down. On the stairway (which looked like the front steps at McDowell Hall in Annapolis) I felt extreme vertigo, fear of heights and an immense weight pushing down on me. After what seemed like months of taking one step at a time, crawling at times, I reached the bottom and saw around me a lawn, on which people were lying in the sun and talking and throwing frisbees. The weight lifted and I have never since felt or dreamed of fear. Except fear of heights: I still have that.

That was St. John’s for me: the end of a troubled adolescence as a freak. Talk to any “Johnnie” and they will tell you tales, different in many ways but all the same in one respect. There is a need, a lack, a want (in Greek: aporia) which drives some people, some very few people, to “liberal education”, or philosophy. There I was, for the first time surrounded entirely by people who liked to talk about interesting things.

Yes, I was grateful. But not for Achilles. And this is where I part from Bloom.

It is not the texts, but the talking about texts, which produces a liberal education. It is not the facts, but the observation in search of facts, which produces science. In education, the texts don’t have to be “Great”, and frankly many of the texts on “Great Books” lists are not great. It is the conversation which matters.

If I were to talk about proposals to alter the canon, to include or exclude, or defend it as it is, I would not be entirely serious. I would only do it to start a conversation, though among the right people I think it would be a very interesting conversation indeed. That, of course, is what Bloom offered us, and apparently we wanted it, because we made the book a bestseller and made him rich at the end of his life, much to his surprise and delight. But I was still rather disappointed by the book.

Bloom had no plan for reform. “One can not and should not hope for a general reform. The hope is that the embers do not die out.” If he wasn’t preaching some kind of reform, then what was the point? The point, I guess, was philosophy.

I write this shortly after Memorial Day with a quip by Yoda running through my head: “Wars not make one great.” While I wouldn’t expect it from the Evil Empire our nation has become, wouldn’t it be great if we had a Memorial Day for remembering our teachers? If we don’t make philosophers our kings, as Socrates suggests in the Republic, could we at least leave them unmolested and not sentence them to death?

One of the better parts of Closing of the American Mind was where he stated his vision of modern philosophy through a survey of its history. He contrasts the vulnerability of ancient philosophers, on the model of Socrates, with the modern philosophers’ alliances with the State (after Machiavelli), with Science (after Kant), and with Art (after Nietzsche) following the Enlightenment.

Mr. Bloom bemoans the fact that after joining with these forces to accomplish the defeat of Religion, philosophy does not rule the university. Philosophy is instead relegated to a small office in the corner of the humanities building where it is expected to pursue historical studies of its own literature, and above all, not bother anyone else. The results of that tend to be fatuous and wrong (“Plato’s Theory of Ideas”?  Gimme a break).

While the modern philosopher seems unmolested, she is in fact starving to death. So we are back where we started: with Socrates. This much I agree with, and this justifies the book’s brief status as a bestseller. I also rather enjoyed his thumbnail sketch of Nietzsche, but the constant harping on “moral relativism” annoyed me in the 80s, and still irks me today.

Were I an educator rather than a lawyer, I would probably feel like Bloom, that there is no hope for general reform, but I would employ my station to do what I think my friends in academia do: try to ask questions and provoke some thinking. Since, however, Bloom took it upon himself to write an academic memoir and attempt to illustrate the spirit of the times, I think it’s fair to ask whether he got the zeitgeist right. Since my calling is to be a lawyer, and the kind of lawyer who acts as a foot soldier in the never-ending trench war of bureaucracy, my weapons are what the ancients called Rhetoric, and I have some well-established opinions about the art of persuasion. It is vital to know one’s audience. Did Bloom know his? In general outline, I think he did, but he had limits.

In general outline, Bloom knew and dealt with the American student’s lack of preparation for a liberal education, at least in contrast to the erudite European émigrés who were his teachers. They knew Latin, Greek, French and German. One cannot expect American teenagers to know languages. Bloom’s translations of the Republic and Emile confront this fact, first of all by being translations and not whining about how students cannot read them in the original, but also by being semi-literal so that students can at least track and learn key words in the philosopher’s vocabulary.

He also understood that Americans (educated ones) pride themselves on being open, tolerant and respectful of others. Where we younger folk parted from him is the theme of the book: the “closing” of the American mind by the elevation of “openness”. I think the agenda of tolerance, far from being a pernicious “moral relativism” or nihilism, is the greatest accomplishment of the human race thus far: that we can at least begin to demand a better way and a brighter day. Can we put to rest, once and for all: racism, sexism, homophobia, ethnocentricity? If not now, when? This is no mere “ideology”. Nor was it a quaint explosion of naïve youthful exuberance (as Bloom views what happened in the Sixties) but rather the great aspirational task of the 21st Century.

I would risk a little ad hominem: Bloom was a closeted homosexual who died of AIDS. In his youth he was thrilled by the revelations of the erotic by Freud, and he spent his entire career trying to elucidate for future generations the role of the erotic in modern literature and philosophy. But just as his generation viewed their predecessors’ Victorian attitudes toward sexuality with disdain, we too can look upon Bloom and his generation of academics as lacking a global political consciousness.

I hesitate to use words like “global” and “consciousness”, they are so worn and trite, but how else to say it? Are they not worn out for a reason? What he calls relativism we call globalism, and I think we’re right and he’s wrong.

In Bloom’s view, America is defined by “liberal democracy”, which was a political theory created by the Enlightenment, and enshrined by the framers of our Constitution in that document and in supporting rhetoric like the Declaration of Independence. Having established what “America” was in theory before 1800, we can then look to a gentleman like Alexis de Tocqueville, writing in 1835 and 1840, to tell us, from an aristocrat’s point of view, what democracy means and the kind of people it produces, and how they think.

Tocqueville has some merit. I recently read historian Gordon S. Wood’s book, The Radicalism of the American Revolution, which presents a very persuasive picture of the changes in American society from colonialism to democracy around 1820. The beginning of the book concentrates on just how different life and society were under monarchy. Wood does such a good job that I almost couldn’t read the book, I found the people he described so repulsive. So there’s no doubt in my mind that the Revolution and the republican values it established created massive social change. It was a “novus ordo seclorum” (“new order of the ages”) as the motto on our Great Seal states.

Tocqueville’s perspective is therefore very intriguing, but it doesn’t describe us at all. It’s like listening to Marxists. It’s all very plausible when talking about the transition from feudalism to the industrial age. But we’ve been there, done that. What next?

As a lawyer, I find discussions of the Constitution as it existed in 1790 somewhat quaint. Considering only the obvious amendments: we have since freed the slaves, applied the Bill of Rights to the States, given women the vote, and transferred the bulk of government revenue raising from tariffs to income taxes. After adopting income tax we could greatly expand the budget and powers of the Executive branch, kick fascist butt, build interstate highways, visit the Moon, and create the Internet.

Most pertinent, however, to Bloom’s theme of the Closing, and probably the most significant causal element in all the structural changes, were the successive waves of immigrants from around the world. We value tolerance because we are a diverse society, and we are diverse because of immigration, not because we hold these truths to be self-evident: that all men are created equal. (Jefferson could write those words but he still owned slaves and fucked them. There’s moral complexity for you. Who needs fictional anti-heroes? The real ones are complex enough.)

In the America that Tocqueville observed, the residents were primarily English, with their African slaves. Today we simply can’t imagine life without culture from around the world. We are the first global nation in a global society. From Bloom’s perspective it seems the only immigrants who mattered were the German intellectuals. They were our teachers, yes, and they should be honored. I honor Jacob Klein by reading his Commentary on Plato’s Meno, which is simply awesome and in some oblique way inspires this post. (It has something to do with aporia but it would take too many words to explain directly). But those guys, great as they were, are a minuscule part of the story.

The whole world has come to live with us, and most of them didn’t bring with them fluent ancient Greek. Twenty-five years later we have an African-American in the Oval Office, in a more precise sense than is typical: Barack Hussein Obama II’s father was from Kenya. He wrote a memoir which I haven’t read, and a speech to the Democratic Convention in 2004, which I have read: many, many times.

On one level the Speech is a polite “fuck you” to Bloom and his ilk. Bloom opens his chapter on the Sixties by quoting a professor saying “You don’t have to intimidate us”. The scene is Cornell in 1969, and curriculum changes have been instigated by Black Panthers, allegedly brandishing guns. I’m not going to second guess how things were done in the Sixties. In 1969 I was ten. The following year many students were shot to death while protesting at state universities, so as I recall, there was a lot of intimidation to go around.

Barack Obama wasn’t around for the Sixties, so intimidation is not his style. (Except with Islamic militants, now that he’s President and has weapons to kill people with. Yes, assassination of our enemies is bad. Wars not make one great. But would you have him decline to use Sauron’s Ring of Power to annihilate our enemies? He’s not an elf and this isn’t Gondor.)

Obama happens to be approximately my age and could have been one of Bloom’s students in the 80s (but thankfully wasn’t). Obama told the Democrats in 2004 that we must “eradicate the slander that says a black youth with a book is acting white.” Does it matter whether the book is by Nietzsche or Malcolm X? At some level, a book is a book is a book, and a black youth with one is acting human, not “acting white”.

Here is Bloom:

“One can not and should not hope for a general reform. The hope is that the embers do not die out.”

Here is Obama:

“Hope in the face of difficulty. Hope in the face of uncertainty. The audacity of hope! In the end, that is God’s greatest gift to us, the bedrock of this nation. A belief in things not seen. A belief that there are better days ahead.”

Now, Bloom is a professor and Obama is a politician, and it is Bloom’s job to examine and question our opinions, rather than reinforce them, as Obama or Pericles might.

Still, Nietzsche warned us that philosophers of the future had best be “cheerful”. He may have been a syphilitic fascist but he was right about that. For Bloom’s generation of intellectuals, God is dead, and hope is for children. Pessimism is understandable for a Jew who was alive when the Holocaust occurred, taught by refugees, and never able to express his sexuality in public, always “vulnerable” (to echo his adjective for Socrates) to a threat of violence and intimidation which lies just below the surface of his Gothic-revival fantasy life.

It is understandable but not very inspirational, particularly for a generation with a counter-nihilistic culture entirely invisible to our dismal Professor. Bloom, for some reason, hated Mick Jagger. I cannot understand why. Perhaps Jagger’s blatant sexuality was too grating for a man who had to hide his. (I note with considerable glee, however, that Mick Jagger was brought to you by Ahmet Ertegun, the founder of Atlantic Records and a graduate of St. John’s College.)

He didn’t spend much time listening to or learning about his students. This was painfully clear in his chapter on “Students” when he pretends to have asked his students what books they loved, and who their heroes and villains were. He found that, except for the few students with an annoying attachment to Ayn Rand, we didn’t like books and could not name our “heroes” and “villains”.

That is preposterous. If anything, we children of the Seventies had a great surfeit of “heroes” and “villains”. We were bombarded with a seemingly endless supply of the crudest black-and-white cardboard cut-out stereotypes Hollywood and Marvel Comics could imagine. We had presidents martyred or resigned in disgrace. We had Gandhi and King. And we had whole new genres of books which Bloom was too stuffy to read or talk about.

What Bloom observed was not a lack of moral culture but the shaming of popular moral culture into the closet. I’m pretty sure no student Bloom ever conversed with would have dared to tell him what or who they really liked, for shame or fear of being regarded as an imbecile. I got a taste of that when I applied to St. John’s. I was asked to write an essay about a book that was important to me. I chose Tolkien’s Lord of the Rings, which was then and remains my favorite epic. I was told by the admissions director that I had not done myself any favors with this essay, that Tolkien was considered trashy by the folks at St. John’s.

I can no longer be shamed. “Star Wars” (yes, dammit, all six episodes) is canonical for me. You can have War and Peace; I’d rather read Foundation or Dune. And I’d gladly hold up Lord of the Rings against Homer any day.

Let me tell you why. Don’t get me wrong. I’m really fond of Robert Fitzgerald’s translations of the Iliad and Odyssey. I love reading them and letting his language flow over me. But the content is abominable. Far from being delighted when I first learned of Achilles, I was disgusted. He was a bad man doing bad things for evil reasons at the behest of psychotic gods. My initial gut reaction as a teenager has now hardened into something like Socrates’ position when he declared that poets must be banned from the City.

These are not good examples for children. You know what is? Hobbits.

Hobbits who persevere and demonstrate that “Audacity of Hope”. Also: wizards and rangers and Elvish nobility who all show restraint by refusing the corruption of power. And on the negative side: wizards and kings who despair and are paralyzed, preach paralysis or make deals with the Enemy.

These are suitable heroes and villains for children and the child within us: not Achilles.


The Feuerbach Exchange: an Application

Let’s apply this, shall we? This notion that God represents humanity’s attempt to understand itself. Rather than rummage through old religious doctrine, let’s examine something political.

Just perusing the news of the day, I suppose the issue of contraceptives will do. The Catholic Conference of Bishops objects that they should not have to pay for insurance coverage for contraceptives for their employees. If we want to know what “God” thinks about this issue, according to Feuerbach we should try (and no doubt fail, but at least try) to guess what opinion “humanity” as a species might have. We employ anthropomorphic imagination and various other tricks developed by religion over millennia for making “God” in our own image as an intelligent, intentional entity, capable of having an opinion and communicating it with language. The difference being, we do not limit ourselves to the canon of sacred text or official opinions (in Greek: “dogma”) crafted over the centuries by experts sitting in committees.

First, though, let’s review the best opinion crafted the traditional way. The current Catholic position on contraceptives is a long-standing position of the Church. The Old Testament is not entirely clear on the matter. It contains some artifacts of archaic practices such as the rite of Sotah described in Numbers, chapter 5, where a woman accused of adultery was given an abortifacient to drink.

By the time “Christians” appeared (as a distinct Jewish sect) that practice was entirely gone. This was probably a reaction to the dominant Hellenic (Greek and Roman) culture in which killing unwanted or sickly infants was permitted. Hence the Oedipus myth: he was called “oedipus” or swollen-foot because his feet were pierced with a stake when he was exposed as an infant, in the then-traditional method of disposing of unwanted children. (He somehow survived and was raised by foster parents in Corinth, only to kill his father unknowingly and take his mother as wife.) There is ample evidence that Semitic peoples, including the Jews, practiced child sacrifice for religious reasons, the most astounding example being the story of Abraham and Isaac. In that very story, however, we see a transition to a new agreement with God that child sacrifice is deprecated. Rabbis and early Christian leaders were pretty much resolved against any kind of contraceptive or abortion by the First Century. The only question remaining was the legal status of the conduct of terminating a pregnancy, and what penalty attached in different cases, i.e. spontaneous miscarriage, accidentally caused abortion, at different stages of pregnancy and so on.

Gregory XIV, in 1591, reserved excommunication (the penalty for intentional homicide) to the killing of a “formed” fetus, relying on the Aristotelian view that a human fetus is “animated” (acquires a “soul”) only 40-90 days after conception. In a city full of prostitutes, as Rome was in Gregory XIV’s time, pronouncements on the legality of abortion had immediate and dire practical and political consequences. In those days the Pope was a temporal ruler as well as a religious leader. Gregory, for example, raised an army to intervene in the French wars of religion against Henry IV.

The Catholic Church’s willingness to make legal, if not moral, compromises ended at about the same time as its political sovereignty. Prior to 1870, the Popes actually ruled (or mis-ruled) some Italian territories as King. After the Pope lost French military support as a result of the Franco-Prussian War, revolutionaries captured Rome and deposed him as temporal sovereign, after which he styled himself a “prisoner of the Vatican”, that is, the few blocks of the Eternal City which the Church was allowed to control and still controls to this day. Shortly before his final, total defeat as sovereign and general, in the 1869 Bull Apostolicae Sedis, Pius IX rescinded Gregory XIV’s not-yet-animated fetus exception and re-enacted the penalty of excommunication for abortions at any stage of pregnancy.

As a result of my half-Protestant, half-humanist upbringing, I must admit I could not possibly give an unbiased account of Pius IX and the transformation of the Roman Catholic Church under his leadership. I think it’s fair to state, however, that Church reform coincided with loss of political power. Against the background of the Italian Risorgimento (resurgence: the name given to the century-long process of Italian unification), Pius IX convened the First Vatican Council, enacted the doctrine of “infallibility” and elevated the notion of the Immaculate Conception of Mary to an official church doctrine. Pius IX was a political liberal when he began as sovereign of the Papal States, and became a conservative as he lost the struggle with republican revolutionaries for control over his territories within a unified Italian nation. What was “liberal” then, of course, was rather contextual. For example, while his predecessors had banned the development of railways in the Papal States, Pius IX encouraged them, along with other kinds of economic development. Fighting with the republican revolutionaries for control of the Papal States, however, seems to have converted Pius IX into a bitter, reactionary conservative. For example, in his liberal phase he moderated discrimination against Jews in his Catholic theocratic state by opening the Jewish Ghetto of Rome, but in his later conservative years he closed the Ghetto once again.

From my perspective then, everything the Church says about contraception comes from a group that once ruled the “known world” (at the height of its power during the decline of the Roman Empire) and constantly reminds us that it would like to do so again. They seem very bitter about the fact that everyone now laughs at their threats of excommunication, that someone like Stalin could dismiss the Pope’s relevance with the question “how many divisions has he got?” (The Second World War (1948) by Winston Churchill, vol. 1, ch. 8, p. 105). This is a group that would use the arm of the State and the sword of Justice to force you to drop out of school or quit your job to have a baby. While they are currently limited to the persuasive powers of reason and superstition as the only tools at their disposal, that was not by choice.

That having been said, at least you can have a rational conversation with these guys, and trust that many of them have been thinking about this and deliberating about it for thousands of years. Not so with the conservative Evangelical Protestants, who will toss out arguments they came up with about five minutes ago (and took about five minutes to bake up from their dough of wacky Biblical misinterpretation). There was no consensus on contraceptives among conservative Protestants until just a few years ago, when they jumped on the Catholic pro-Life bandwagon, apparently in reaction to the success of second-wave feminism and the entry of many women into the workforce, and into the ranks of hitherto all-male professional preserves, including the Protestant clergy in the mainline liberal denominations.

Thus, while I am biased against the Roman Catholic Church, I have some respect for its opinions. What marks Apostolicae Sedis as a legal code, moreover, is the recognition of advances in human embryology and a rejection of the use of Aristotelian theories which Gregory had found politically convenient. Evidently the Church had come a long way since its bungling of the Copernican Revolution and the trial of Galileo. While it was easier for Pius IX to take a principled stand on abortion after he had lost the temporal authority and responsibility for enforcing it, his position has withstood the test of time and has been incorporated into an ever-more sophisticated “theological anthropology” as the Church continues to monitor, debate and deliberate over the implications of developments in the sciences of human reproduction. As hitherto unthinkable processes, like “asexual human reproduction”, have become practical possibilities, the Vatican has poured thousands if not millions of man-hours into cogitating about the implications, and you can be sure they didn’t waste too much of that time (maybe a little) reinterpreting Aristotle’s De Anima, or re-hashing the pros and cons set forth in the Summa of Aquinas.

All of which is to say, at least with the Catholic Bishops you can have a rational conversation. That having been said, just who do the Bishops represent? Our premise of God as humanity mandates that we try to ascertain this. The Vatican does not speak for (though it might speak to… with varying degrees of animosity) the billions of people who are not Christian. It lost the authority to speak for the Orthodox around 1000 CE and for the Protestants in the years around 1500. On the issue of contraceptives, the Vatican doesn’t even speak for most Catholics, who use them and will no doubt continue to do so.

Following the dictum of Dennis the Peasant (Monty Python’s Holy Grail) that “supreme executive power derives from a mandate from the masses”, the Catholic Bishops would appear to have none. And as a lawyer I can assure you that their position in a United States court is completely untenable, perhaps even sanctionable as frivolous. Whatever status the Bishops’ position against paying for contraceptives for their employees might have as “morality”, it is certainly not a viable legal position in the United States. The Supreme Court covered this in United States v. Lee, 455 U.S. 252 (1982), which involved the Amish, who don’t approve of Social Security, believing that it is an individual or community religious duty to care for the elderly and disabled, which should not be delegated to the temporal government. Congress gave them a religious exemption, if they were self-employed. If they employed others, however, they were required to withhold and pay Social Security taxes for their employees regardless of their objections. Under the Free Exercise clause, the rule is to accommodate religious practice where possible and practical. The Court in Lee found the existing self-employment religious exemption was enough of an accommodation, reasoning that without some limit, the tax code would be swallowed up with religious exemptions (the opinion mentions pacifists objecting to war taxes). The analogy to insurance coverage seems pretty straightforward. Mandating employers to pay for health insurance for their employees is a kind of tax. A religious accommodation has been made for employees of religious institutions, but not for church-affiliated commercial or charitable agencies.

The sphere of autonomy with respect to our bodies and our reproductive decisions was expanded in the Griswold case to put laws prohibiting the sale or distribution of contraceptives out of bounds. This wasn’t moral approval or disapproval, it was an announcement that government (whether by a “mandate from the masses” or not) cannot be allowed to make decisions in this area. While many people take Griswold as legal “approval” of the use of contraceptives, just as they take Roe v. Wade as approval of abortions, that is not the intent, though it seems to have been the effect. The difference can be illustrated by considering whether the government should be allowed to require certain people to use contraceptives, as in the case of China’s One Child policy. Put in that way, I think even the Catholic Conference of Bishops would have to agree that government or majority rule does not have jurisdiction over the matter, though of course they would defend their agreement on the grounds that forcing people to use contraceptives would be unjust or immoral, rather than a deprivation of an inalienable right to privacy.

There are now 7 billion people in the world, and few of them express even nominal allegiance to the Church, and even most Catholics ignore the Church’s positions on contraceptives. This doesn’t result from a failure of education. It results from the fact that the dogmas of the Church are utterly preposterous and the human mind simply refuses to accept, absorb and live by them. Authority in the Church, however, doesn’t come from opinion polls, from the conduct of Church members, or from “a mandate from the masses”. It is based on the inspiration of the Holy Spirit as revealed in scripture and tradition.

Many positive things can be said about tradition, primarily that it moderates and mediates the conflicts between personal and individual moral positions, and tends to smooth off the rough edges created by personal self-interest or the biases inherent in solutions to immediate crises in particular times, places and cases. We have a saying in American law, attributed to Oliver Wendell Holmes, Jr.: “widows and orphans make bad law”. Of course we make exceptions for unusual or sympathetic cases, and we should continue to look out for the “widows and orphans” and disadvantaged of all types. General principles of law, however, should not be based on the exceptional cases, they should be acknowledged as exceptions. In the United States, and everywhere the Anglo-American legal tradition holds sway, we give precedent a high value, up to the point where it conflicts with another kind of precedent: the Constitutional recognition of human rights, with which no government should be permitted to interfere.

Tradition is always relative and particular. It is narrow-minded because it comes from a particular time and space occupied by particular human beings, however impressively long that time period might be or the space occupied by the people who follow that tradition. The dodge used by Catholics is that the Holy Spirit inspires and elevates their peculiar apostolic tradition to the level of a universal and absolute morality for all times and places and all people. This is obviously bullshit. The use of religion as a means of social control created the “Catholic” church in the first place. Before its adoption as the state religion of the Roman Empire, Christianity existed in a splendid multiplicity reflecting the diversity of its geography, adherents and the preceding religious practices which it absorbed. (For example, the Eucharist symbolism of a feast of bread and wine as “body” and “blood” seems to have been based on Mithraic rites.) After the Roman Emperor Constantine adopted Christianity, however, he convened a number of councils to create a uniform dogma, a “catholic” (Greek for “universal”) theology. These councils were the source of the Nicene Creed and various articles of faith such as the mystical doctrine of the Trinity. The dogmas were deliberately crafted to be illogical and preposterous (ask any knowledgeable Muslim about the notion of “God” having a “Son”, for example) and thus to suppress reasoned debate about the nature of God. Christians did, of course, continue to have reasoned debate about God’s nature, but forever afterward had to circle around certain mysteries as “given”.

This was the thrust of the second, “critical” part of Feuerbach’s Essence of Christianity, culminating with Chapter XXVI on the distinction between “Faith” and “Love”. “Faith” for Feuerbach is the negative tendency to make distinctions between humanity and God, and between believers and non-believers, and thus to insist on dogmas such that people’s religious ideas and feelings can be determined to be “true” or “false”, and thus leading to Heaven or Hell. Of course, critical dichotomies like this have been a constant feature of Christian thinking ever since Paul’s letters distinguishing “Law” and “Gospel”. A distinction between “faith” and “love” is a rhetorical device which has been used to convey the opposite of what Feuerbach meant (as with Karl Barth, who championed the absurd and the irrational qualities of Christian “faith” and deemed them essential to Christian theology and practice) or something entirely different, as in the case of Dietrich Bonhoeffer, who expressed his horror over the transformation of the state church in Germany into a propaganda arm of the Nazi Party by talking about the distinction between “religion” and “revelation” (with “religion” taking on a distinctly negative and evil connotation not all that dissimilar from Feuerbach’s “faith”, but definitely referring to an organization which could be put under Fascist control through the simple expedient of appointing party agents to leadership positions in rigged elections).

We lawyers speak of “presumptions”. It is the nature of dogmatic religion to presume that dogma is true and seek to conform our understanding to it. With Feuerbach and virtually all liberal theology since, the presumption is reversed: the dogma is presumed to be a finite, limited, aesthetic image or symbol, a product of human imagination and not self-sufficient divine revelation, which must be interrogated to make it explain itself to us. An individual’s failure to understand is thus not evidence of “sin”, “brokenness” or serving the Devil, but to the contrary a natural starting point in a search for the meaning or beauty in the doctrine. For me the mysteries are not “given”, but to the contrary stand accused of being unintelligible, or superstitious gibberish. It is the nature of “presumptions” in the law, however, that they can be rebutted with evidence to the contrary. I remain open to the possibility of truth and beauty in apparently nonsensical expressions such as the Easter story of Jesus’ resurrection, and mindful of Paul’s dictum that “faith” is the “evidence of things not seen”. Meaning could well emerge, but I’m not going to hold my breath waiting for it to reveal itself.

So the position of the Catholic Bishops of the United States stands as an attempt to invoke the authority and general validity of a tradition as such (in the case of contraceptives, one that substantially predates any scientific understanding of embryology and the technological ability to control reproduction via hormone therapy). A tradition is always going to incorporate more facts, more points of view, more applications to a variety of specific problems, than a single person’s hastily conceived opinion based on personal biases or limited to a specific problem at hand. Tradition, as such, therefore moves toward the conception of God offered to us by Feuerbach, that of species-level self-understanding. But it still falls well short. The Catholic tradition is primarily European and limited to a relatively brief slice of historical time. The human species as currently constituted, however, has existed for hundreds of thousands of years before that, and hopefully will continue for hundreds of thousands more.

A full account of the human stance toward contraceptives would have to include viewpoints from the deep past, and to the extent we can imagine it, the future. For me, what emerges from trying to think like that is no general rule, with which we can critique the Catholic position as “right” or “wrong”. Rather, I have a strong feeling that “it depends”. Precisely what “it” (morality) “depends” on is something Feuerbach is rather weak on, if we consider carefully what his Marxist critics had to say about him later. They in turn asserted that “morality” was an ideological superstructure erected upon the basis of economics: class struggle over the means of production, to be precise. One could criticize the Marxists in turn for being overly narrow in their fixation on a particular period in history and the economic transition which dominated it: the transition from feudal to industrial means of production.

One cannot simply take a poll or hold a referendum of all humanity, past, present and future. Attempts to find propositions which everyone at all times would agree with either suffer from generality to the point of uselessness or from a peculiar political bias (such as that of the Catholic Church) which refuses to recognize itself as biased.

Nor does “science” help us out. One could state as empirical “fact” that we already have quite enough human beings, and that population growth at current levels is not ecologically sustainable. We’ve heard this “fact” restated in many different ways and with different timelines since the days of Malthus (1798). On the other hand, it appears that once people have reached a certain standard of living, population growth tends to slow down to below the replacement rate and thus, without any overt interference from Church or State, our population “bomb” seems to defuse itself. Of course, “standard of living” includes access to contraceptives, and what follows from that, a real and viable opportunity for women to pursue education and a personally and financially rewarding career. So it seems that “science” suggests a pro-choice position on contraceptives, but that’s not really “science” there. That’s “science” plus certain political and moral choices about the right way to treat fellow human beings. From a strictly empirical, “scientific” standpoint, just killing a few billion people would be a more direct and efficient solution to overpopulation. We reject that solution for moral reasons.

When the Catholic Church consecrates a female Pope, or even just a bishop or two, I suppose we can take them more seriously on this issue. In the end, I suppose that’s all I can say for my experiment of holding this particular Catholic doctrine up to the Feuerbachian measuring stick of God-as-humanity. Until you include the viewpoint of half of humanity –the female half– you can’t really say you’re speaking for God.

Of course, this indictment of the Catholic Church as narrow-minded and patriarchal flows directly from the issue chosen: contraceptives. Other religious groups and dogmas could be criticized in a similar manner if we chose some other issue in which “religion” or “morals” sought to intrude into the political sphere. And, because I started with an “issue”, the use of Feuerbach had to be negative and critical. There is also a positive side to Feuerbach which I haven’t mentioned in this essay at all. Suffice to say that I remain open to a Christian God who is something more than all-of-humanity, something or someone “transcendent”, but until I find a Christian sect that at least passes the all-of-humanity test, that receptiveness remains unchallenged.


The Feuerbach Exchange, Part Three: On Method

“Don’t we all agree now that Truth is dead?”

I refer back to my conversation with my friend the philosophy professor regarding 19th Century theologian Ludwig Feuerbach.

Truth-with-capital-T may very well have been gassed and burned at Auschwitz or starved to death on the Russian steppes, and continues to suffer millions of smaller deaths at the hands of the machete-wielding tribes of ethnic hatred or the car-bombing explosions of sectarian strife. Every one of these murderers thinks they possess the Truth, and a “decent respect to the opinions of mankind” requires that I reject such claims. I am therefore duly chastened by my friend and skeptical about whether Feuerbach exposed the world historical “truth” about Christianity in his exploration of what seemed to him to be the material causes of the religion. But I was never in it for the Truth as such; I was more concerned with method.

There are several methodological advantages to thinking of God as humanity rather than the traditional anthropomorphic ghost with super-powers.

The first and most important is that it eliminates logical absurdities and opens up theology as a rational inquiry. By absurdities I mean the sort of contradictions that arise when you claim super-powers: can God create a weight He cannot lift? What was God doing before he created the Universe? With God no longer the unconditioned end-point of an infinite series of causal conditions, whole families of logical problems simply disappear.

Even more liberating, the “Problem of Evil” is resolved into a morally coherent, rational dualism. That is, if God is not the Creator of the universe, but rather just another product of the Creation, of “what is”, we no longer have to ask dumb teleological questions about the natural world. Why would God create mosquitoes and malaria? Why are there diseases and accidents and natural disasters… as if an entity who could create such things had human intentions and purposes? I say “dualism” because what we are left with is an acknowledgement of the struggle we have always faced: Man v. Nature. This obviates all the morally repugnant statements about human suffering being a part of God’s “plan” or God’s “will”. Such statements are incoherent to the point of being reprehensible: we can no longer justify spouting nonsense like “this child’s suffering from terminal cancer is God’s will.”

If you want to think of it as Good v. Evil you will not be far off, provided you remember it is no longer an “absolute” Good v. Evil but one which is relative and relational: “Good” (for us) v. “Evil” (to us). Moreover, you have to always keep in mind we are talking about what is good for the human species as a whole and not you personally. You may wish to have your local sports team prevail in the championship, but the rest of humanity may be indifferent.

Now, I am not unaware of or insensitive to the liberating power (for some individuals) in the absurdity of the traditional theology. By making God transcend human reason, and mystifying the causes and purposes of the natural world, they created a free space for themselves to pursue their own individual spiritual quests. One can see how this worked in the lives and works of some of the more mystical saints: St. Teresa of Avila, for example, and her contemporary St. John of the Cross. In the case of Teresa in particular, we have an individual who was at a distinct disadvantage in the society in which she lived: being female and spiritual in patriarchal post-Reformation Spain under the Inquisition. Teresa worked tirelessly to reform the convents of Spain into places where women could pursue a spiritual and religious life with dignity and independence, rather than a sort of sexual slavery: convent as holding-pen for future wives or whorehouse for the clergy. She was constantly threatened with death, imprisonment, or, worst of all for her, censorship and political repression. She used the mysticism of the Catholic Church and its insatiable thirst for miraculous proof of the love of God to create a space in which she could work to give women an independent, alternative lifestyle devoted to good works in the world and spiritual exploration as individuals.

I have to say I admire the courage and perseverance of Teresa, and I have been astounded by reading about her internal mental explorations in books on prayer and “the Castle” of the mind. She raises Catholic understanding of the life of the mind to levels previously attempted only by Sufis, Hindus and Buddhists. In the end, though, I can’t help but compare her model for the reform of the convent with the expedient of simply abolishing it. Who, under the circumstances, had a more independent and liberated life: Teresa of Avila, or Katharina von Bora, the ex-nun wife of Martin Luther? Which was the better path toward equality and freedom for women: the convent… or the beginning of the evolution of the civil, secular marriage, first in Protestant Christendom and now in all of the civilized world? Mysticism may have been a good incubator for Teresa’s spirituality, but it was a historical dead end.

Let that serve as my reply to anticipated objections that superstition and mysticism are liberating for some individuals. Mysticism may have spared Teresa from the Inquisition. In the long run, however, it would be better to follow Voltaire’s injunction: “écrasez l’infâme” (crush the infamy). However difficult it may be and however long it may take, it is better to fight oppression, hypocrisy and cruelty than retreat and hide in superstition. Of course, if your choice is between being burned at the stake and beating the Inquisitors at their own game, the latter is both preferable for your own sake and more edifying for posterity.

This is hardly our situation in the 21st Century. Our situation is not without its burdens and difficulties. Having liberated ourselves from the mind-traps of the past, it becomes our duty, responsibility and sign of our maturity to test our moral assertions against a rational measure (the good for all rather than the good for me, the good for the long run rather than short-term personal gain). It may have been somewhat of an improvement when we (meaning by “we” us intellectuals) stopped asking what “God” wants and instead asked what the “proletariat” needs or wants (Communism) or the “nation” or “Volk” (people) wants (Fascism). This was, at least, somewhat more practical and concrete than inquiring about the motives and standards of an inscrutable Deity based on the few scraps of text deemed authoritative. As the horrors of the early Twentieth Century are there to remind us, however (those of us with some grasp of history, at least), what we gain in practical attention we also stand to lose in monstrous crimes on a new, industrial scale. Authoritarian utopian idealism showed us how political ideologies which focus on a particular class, nation or group result in previously inconceivable cruelty to those outside that group (genocidal Nazi anti-semitism, mass-murder in the partition of India, the Rwandan genocide) or catastrophic stupidity which dwarfs even warfare as a reproof to human rationality (the failed attempts to collectivize agriculture in Russia and China which resulted in the death of millions from famine). The traditional “God” at least transcended class and nation, which served medieval Christendom well, at least until it ran into religious borders (the clash with Islam) or fractured in sectarian schism (the Wars of the Reformation).

On the other hand, the popularity of ethics of selfishness suggests to me that Western intellectuals as a group are rather immature and not quite ready to take on the burden of globalization. There is a new kind of “mysticism” in which the undeniable power and motive force of self-interest (in the words of La Rochefoucauld, “l’amour-propre“: love of one’s own) is magically transformed into the Good as Such. While only dunderheads and blow-hards like Ayn Rand and her followers dare to explicitly articulate this hand-waving on the philosophical and intellectual plane, it is the political philosophy of nearly half of the Western world, as represented in conservative political parties everywhere: Republicans in the United States, Tories in the United Kingdom, and so on.

In the words of a liberal slogan from the late Twentieth Century, we need to “think globally and act locally”. It is so easy to say that it conceals the difficulty of actually doing it. In all of my training in law, in the resolution of concrete disputes between real people in action, I almost never reach the “global” level of policy consideration, not because I wouldn’t want to if I could, but because the institution itself considers as its highest authority the Supreme Court of the United States, or at its most lofty transports of idealism, the People of the United States. While I have no doubt that we are the most powerful and perhaps the most interesting people ever assembled into some degree of political unity, we are not humanity as a whole.


The Feuerbach Exchange, Part Two

Why Feuerbach? Most important to me: Feuerbach posits that God is a representation of humanity’s attempt to understand itself.

I wouldn’t recommend actually reading the Essence of Christianity unless you are the sort of person who can dive into something old and mostly useless and find nuggets of truth (which describes some of my friends, to be sure). To employ a Minecraft metaphor, if you’re the sort who’s willing to dig down through 64 blocks of stone, dirt and gravel, fight off the creepers, skellies and zombies, and avoid falling into the lava just to get a couple diamonds, go for it. Otherwise, let me sort it out for you.

Feuerbach himself seems to have wanted to promote a contrast between a religion of “faith” and a religion of “love”. In this he seems to have anticipated much liberal theology of the 20th Century, particularly after two World Wars and the Holocaust left much of Christendom a crater-pocked ruin. Others do it better, however.

What made Feuerbach famous among German leftists of the 1840s was his “materialism”. Among the Marxists this became “dialectical materialism”. By “materialism” they meant something different from what we would call “science” today, more of an orientation towards facts than fact-gathering. What a Hegelian philosopher of the 1840s would mean by “materialism” is simply the rejection of idealism, the philosophical trend which presumes that ideas cause reality, rather than being themselves merely the result of natural and/or mechanical processes. I would venture to suspect that both camps would be overwhelmed by the advances in determining the material causes of natural phenomena since the 1840s, with quantum physics giving both sides plenty of new material to argue over.

Another feature of contemporary “science” is transparency: it is a collaborative and peer-reviewed process. While this may mean that your reports are fully comprehensible only to other specialists in your “field”, there is at least some attempt to make yourself understood, for the purpose of having your work replicated by others. While Hegelians like Feuerbach liked to pretend that their work was perfectly transparent (and I daresay Feuerbach is a bit easier to understand than Hegel himself), and certainly imagined they were building conceptual “systems” which could be understood and reproduced and enhanced by others, they mostly failed at this. The exception, of course, was Marxism, which did indeed create a set of portable concepts which could and did spread around the world and find application in an amazing variety of circumstances. Feuerbach… not so much.

By “anthropology”, the word Feuerbach uses to describe his account of the origins and purposes of Christianity, he means merely the study of man. He does not mean a science based on empirically-gathered data, that is, what we would call “anthropology”. Basically he is saying his viewpoint is human, not divine; it’s about what human beings want and need, not what God wants and needs. Why the Christian God would want or need anything was always a mystery anyway, but traditional theology was always trying to discern and explain God’s agenda for us. Feuerbach, instead, is working on our agenda for God, which you might think of as an exegesis of the import of Voltaire’s remark “if God did not exist, we should have to invent Him.” Why, indeed, would invention be necessary?

Today we can discuss, for example, the “hyperactive agency detection device” theory as to why people believe in supernatural entities. Ideas like this can now be grounded in notions of natural selection and evolution, which do not ensure accuracy but at least allow us to understand one another. The intellectual tools to discuss such theories didn’t exist in the 1840s. Darwin’s Origin of Species wasn’t published until 1859. That is not to say that such ideas might not have occurred to a thinker like Feuerbach. What he lacked, however, was a set of presumptions and methods which enjoyed widespread acceptance among his readers, which now make the job of giving “material” explanations for religious beliefs much easier.

Likewise, a historian of science might be interested to trace Freud’s ideas on the psychology of religion back to Feuerbach. There’s an obvious connection between Feuerbach’s remarks on “wish-fulfillment” and ideas developed more fully by Freud. (Briefly, we fear death and wish for immortality, so it makes us feel happy to believe in an entity that has the power to grant us eternal life.) But again, these ideas were developed more fully and freely by Freud in the 20th Century, so there’s no reason to go back to Feuerbach to read them in mere embryonic form (unless you are an academic, which I am not).

I don’t read Feuerbach as “science” but rather as political theory. Ultimately, the “materialism” of Feuerbach is a political statement against conservative, traditional approaches to religion which allow supernatural explanations to pass as unexamined fact, or grant authority to Scripture as the revealed word of God, rather than a set of texts written and compiled by human beings.

While it may be contrary to his intent to read his work as a political tract rather than a scientific monograph, today’s readers cannot indulge Feuerbach in this respect. If a “science of religion” is what we are looking for, we need to look elsewhere. Despite efforts to be “concrete”, Feuerbach has no data and doesn’t seem to think he needs any data. I find support for my approach, however, in how contemporary readers understood Feuerbach. His development of what came to be known as “dialectical materialism” gave him fans and financial supporters among the emerging radical socialist movement in Germany. This transformed the thinking of the “Young” or “Left” Hegelians and made them Marxists. For posterity, however, those fans and supporters pigeon-holed Feuerbach as the transition between Hegel and Marx. Today, you can find an online version of the Essence of Christianity, translated by George Eliot (!), at marxists.org. Marxists, however, have little interest in or use for religion, and so most of Feuerbach’s insights are wasted on them.

Events of the time also doomed Feuerbach’s theories to obscurity. The revolutions in Germany in 1848 were a dismal failure. Germany did not become a democratic republic then; indeed, Germany failed to become a unified nation of any kind until a generation or so later (1870), when it was unified under Prussian militarism. Marxist comments on Feuerbach, then, carry a bitter disappointment born of political failure. Here, for example, is how Marx concludes the Theses on Feuerbach: “The philosophers have only interpreted the world, in various ways; the point is to change it.”

I admire Marx’s attempt to ground his ideas in concrete, historical and factual settings, and in human social practice, rather than mere ideological speculation. That this is difficult does not mean we shouldn’t try. But today we can’t read “the point is to change it” without thinking of the failures of Communism and hearing the Beatles’ “Revolution” play in our head. Disaster waits for those who seek to change the world without first understanding it.

Posted in The Feuerbach Exchange

The Feuerbach Exchange

The email said meet at the Albuquerque Press Club at 5. The Press Club is a log and stone mansion on a hill overlooking the freeway. Back in August, I went there to see an old friend who was passing through town. I don’t see him much these days because he is now a philosophy professor down in Texas.

That was last August, and although the conversation was brief (the part I would share publicly, anyway) it continues to reverberate. I’ve known Stephen since we were teenagers and he always manages to say something “oracular” which gets me thinking. (Of course, oracles aren’t to be trusted: “A great empire will fall”, says the oracle, without telling you which empire: yours or your enemy’s.)

This time I asked Stephen about Ludwig Feuerbach. You have to understand: we are both preacher’s kids and have read a lot of theology. Stephen even made Kierkegaard the subject of his doctoral thesis. Now, however, neither of us can read traditional theology any more. The self-deception, the sacrifice of intellect to some pointless standard of orthodoxy is too painful: at least when reading pre-existential or non-ironic writers.

I dropped Feuerbach’s name as a way of suggesting a bridge between old ways of thinking about religion and our current situation. Stephen made a number of remarks, which might seem to you to be total non-sequiturs. To explain, or rather, expound each one (because I can’t ever pretend to know what Stephen really means when he says these things) would take an essay, or in the current vernacular, a longish blog post. Here’s a sample of coming attractions:

“Wasn’t it you who told me: one must pass through the fiery brook?”

“Don’t we all agree these days that Truth is dead?”

“These days I only like to read and teach Japanese Haiku.”

First I should mention, though, what I mean by “current situation”. Internet fora of various kinds can reveal trends in religion in America, most of which are extremely disappointing to the thinking person.

There is the conservative “Christian” fascist tripe, about which the less said the better. While my interest in religion is almost entirely political (I am a lawyer by trade, not an academic or clergy-person), the obviously reactionary form of right-wing religious politics doesn’t deserve much thought. Not much thought goes into it. You know it when you see it, you knock it down, kick it in the face and laugh at it. Better to just avoid it.

On the other hand, the modern “atheist” trend isn’t much better, though there’s at least some hope for intellectual growth there. We won’t be getting it from the heroes of the movement, writers like Richard Dawkins. He makes it pretty clear (in the first few chapters of The God Delusion, for example) that he is aware that liberal Christianity exists and just doesn’t care. And this is understandable, since liberal Christians aren’t mounting any political challenge to the very existence of Dawkins’s chosen profession: evolutionary biology. One also gets the impression, however, that Dawkins’s idea of “religion” is forever fixed in his upbringing as a conservative Anglican. He can’t be bothered with, say, a Tillich-style systematic theology of God as “ultimate concern”, because to him a religion without an anthropomorphic God dispensing punishment and reward through supernatural miracles just isn’t worthy of the name “religion” or “Christianity”.

Young people who post on reddit’s “atheism” subreddit (using “young” here not in the sense of age, but as a synonym for “someone without a professional/institutional axe to grind” or simply “one whose intellectual curiosity remains”) could possibly benefit from my sketchbook.

I say “sketchbook” because I survey the state of Christianity in Western Civilization much as one surveys the ruins of Rome left over from its period of decline and depopulation in the Early Middle Ages. Artists and architects have, ever since, been sketching those ruins. While Rome was never fully abandoned, and went through rebuilding and re-decoration in the Renaissance, the form of the ancient temples and circuses became something of a mystery after their functions ceased. Reviving worship of the pagan gods might be a fantasy entertained by a gentleman sketching the ruins in the 19th Century, but seriously, what one seeks to evoke is “the grandeur of Rome” as shown in the form and line of concrete and marble. Likewise, the intellectual decline of Christianity since the age of Revolutions has left the religion in a similar state of ruin. I view cultural artifacts like the doctrine of the Trinity in a similar manner: like a pile of rubble and columns that was once a temple to Castor and Pollux. We can appreciate them in their outline but not as living institutions.

Posted in The Feuerbach Exchange