Friday, December 11, 2009

On Autism and Internet Dating

A few months ago I read a short work of fiction, The Curious Incident of the Dog in the Night-Time. The story, which begins with a neighbor’s dog who’s been murdered, is told from the first-person perspective of an autistic teenage boy. The author, Mark Haddon, does a marvelous job of (apparently) capturing something of the inner world of an autistic person, while still keeping the story interesting. The boy’s awkwardness and inability to navigate the normal social cues and expectations highlight all that we “normals” take for granted as we make our way through the world. At the same time, and especially once the account moves beyond the murdered dog and becomes a story of the boy’s dysfunctional family and his parents’ struggles to relate to him and to each other (and perhaps his own struggle, in his own way, to relate to them), we feel, curiously, a growing closeness to this always-distant boy.
I probably would not have written here about The Curious Incident if I hadn’t subsequently read another book that, while apparently about a quite different subject, turned out to be surprisingly relevant. When I attended a conference in Vienna recently, I told my hostess, an old friend from Heidelberg days, about some of my experiences with internet dating. She immediately bought me a book that’s been all the rage in Austria and will, I hope, someday be translated into English: Gut gegen Nordwind (Good Against North Wind) by Daniel Glattauer. It’s the story of a man and woman who, while not engaged in internet dating per se, encounter each other by chance online – and subsequently fall in love through dozens, even hundreds, of emails back and forth, without meeting. The novel consists of the collection of their emails, nothing more – no authorial descriptions or commentary. It brilliantly captures, I think, the allure – and ultimately the danger – of romance by words alone. In the novel, each person projects onto the other all sorts of hopes that run much less risk of being dashed as long as the couple never meets; each person “is there” electronically for the other, at much less cost than a real presence would demand. The romance feeds on itself, as real romance does, too – but here with almost none of the usual checks. In the dramatic ending, the fantasy dissolves – without the two ever having met.
So, an autistic boy, lost among the turbulence of human interaction, just barely registering his parents’ need to connect with him, and two internet-lovers kissing with words, only words, building their mirage of intimacy. From opposite directions, surprisingly, the two novels show us some of the obstacles to bridging the divide from “I” to “I.”

Wednesday, December 9, 2009

The "One Percent Doctrine" and Environmental Faith

Tom Friedman's piece today in the Times on the environment is one of the flimsiest pieces by a major columnist that I can remember ever reading. He applies Cheney's "one percent doctrine" (which is similar to the environmentalists' "precautionary principle") to the risk of environmental Armageddon. But this doctrine is both intellectually incoherent and practically irrelevant. It is intellectually incoherent because it cannot be applied consistently in a world with many potential disaster scenarios. In addition to the global-warming risk, there's also the asteroid-hitting-the-earth risk, the terrorists-with-nuclear-weapons risk (Cheney's original scenario), the super-duper-pandemic risk, etc. Since each of these risks, on the "one percent doctrine," would deserve all of our attention, we cannot address all of them simultaneously. That is, even within the one-percent mentality, we'd have to begin prioritizing, making choices and trade-offs. But why then should we only make these trade-offs between responses to disaster scenarios? Why not also choose between them and other, much more quotidian, things we value? Why treat the unlikely but cataclysmic event as somehow fundamentally different, something that cannot be integrated into all the other calculations we make?
And in fact, this is how we behave all the time. We get into our cars in order to buy a cup of coffee, even though there's some chance we will be killed on the way to the coffee shop. We are constantly risking death, if slightly, in order to pursue the things we value. Any creature that truly adopted the "precautionary principle" would sit at home - no, not even there, since there is some chance the building might collapse. That creature could neither act nor refrain from acting, since it would nowhere discover perfect safety.
Friedman's approach reminds me somehow of Pascal's wager - quasi-religious faith masquerading as rational deliberation (as Hans Albert has pointed out, Pascal's wager itself doesn't add up: there may be a God, in fact, but it may turn out that He dislikes, and even damns, people who believe in him because they've calculated it's in their best interest to do so). As my friend James points out, it's striking how descriptions of the environmental risk always present the situation as if it were five to midnight. It must be near midnight, since otherwise there would be no need to act. But it can never be five *past* midnight, since then acting would be pointless and we might as well party like it was 2099. Many religious movements - for example the early Jesus movement - have exhibited precisely this combination of traits: the looming apocalypse, with the time (just barely) to take action.
None of this is to deny - at least this is my current sense - that human action is contributing to global warming. But what our response to this news should be is another matter entirely.

From Bauhaus to Our House

For my recent peregrinations into the city, I picked up Tom Wolfe’s slender volume on modern architecture. Wolfe writes wittily and acerbically about the long dominance of the “International Style,” which banned any kind of decoration and non-functional elements from its geometric, identical-looking buildings (think almost any Manhattan skyscraper built from the 1940s to the 1980s). This movement originated in Weimar Germany and, through sycophantic American architects and especially the immigration to America of such men as Walter Gropius and Mies van der Rohe in the 1930s, took hold in the United States. Wolfe asks how so many corporations, foundations, universities, and private individuals have been convinced to pay for buildings whose designs they themselves often find sterile and unappealing.
Wolfe’s explanation – if correct (I know far too little about this particular field to be sure if it is on the mark, but it certainly has the ring of truth) – sheds light on three broader issues. First, the Bauhaus “compound” was one of the early breakaways from the official art academies, part of the liberation of Europe’s bourgeoisie from state tutelage. Yet, according to Wolfe, the breakaways hardly contributed to a true liberation. Rather, they merely established their own, internally generated orthopraxy, placing any apostates under anathema. That is, they became a new “clerisy,” a priestly class dictating taste. Remarkably, this class, perhaps through its generally united front and elite prestige, convinced its customers to “take it [their unappealing designs] like a man.”
Second, the Bauhaus clerisy (and no doubt others) made rejection of everything “bourgeois” the touchstone of their style – hence the banishment of all useless decorative elements. The fact that all of the Bauhaus architects themselves were eminently bourgeois hardly gave them pause, let alone derailed their project. Indeed, this instance of bourgeois self-hatred would seem to be typical of a vast, still under-explored and, to my mind, tremendously important phenomenon stretching from at least the 19th well into the 20th (and probably 21st) centuries. Didn’t bourgeois self-hatred contribute significantly to the popularity of Marxism, which found its most devoted following not among actual workers, but among the bourgeoisie, the great exploiting class, itself?
Finally, the “International Style” took hold in America because American intellectuals continued long into the 20th century to be in thrall to European trend-setters, a manifestation of what Wolfe calls the “colonial complex.”
All told, then, this is an interesting exploration of elite formation, elite self-delusion, and elite enthrallment.

Thursday, December 3, 2009

Capitalist and Communist Fictions

When I was a scruffy grad student at Harvard earlier this decade, Mark Zuckerberg was a gawky undergrad at the same school. As I put the finishing touches on my recondite treatise on the German workforce, young Zuckerberg (whom I never encountered) was creating Facebook, which just booked its 350 millionth user. What an accomplishment, to have thought up such a transformative social medium! The user-milestone has gotten me thinking about just what Zuckerberg wrought. Has he really created value? Before suggesting that any positive answer, at least the standard, obvious one, likely involves what we might call a “capitalist fiction,” let me set the stage by recalling what Gunnar Myrdal termed the “communist fiction.”
This had nothing to do with Marx, Lenin or Stalin, but rather with how we thought about, compared, and aggregated measures of individual well-being. After having “solved” the problem of value by means of subjective utility, neo-classical economics wrestled with the problem of interpersonal comparisons: if subjective taste or utility truly is the measure of value, how could one ever construct an aggregated, collective measure of an entire society’s well-being? How should one compare and add together, say, my enjoyment of a chocolate ice cream cone and your preference for a peach? Weren’t these really fundamentally subjective, and hence incommensurable, valuations? Thoughtful, philosophically curious and informed economists around 1900, above all those in the Austrian School around Carl Menger, wrestled with this problem. Later economists, less given to doubt and less broadly educated, “solved” the problem by measuring well-being in terms of the dollars (yen, pounds, etc.) we are willing to pay for something. These units of currency are standardized and hence commensurable. One can build on them measures of national well-being such as GDP. This is what Myrdal termed – and criticized as – the “communist fiction.” In fact, he suggested, we can’t really measure collective well-being.
In addition to this communist fiction, it seems to me that we also face what might be called a “capitalist fiction.” This relates not to the problem of comparison and aggregation, but to that of personal, inter-temporal choice. How much value has Facebook created? The simple answer is to measure Zuckerberg’s and his share-holders’ wealth, and say his creation has generated this much additional value. (The number must run into the billions.) But for the individual user of FB, how much has it added to his or her well-being? Again, we could refer to the communist fiction and respond that we simply can’t come up with an aggregate number. But even for the individual user him or herself, can we really say? Like so many of these social media, FB is or can be addictive. How does one assess the value of an addiction? Addictions typically involve discrepancies in individual inter-temporal preferences. In simpler terms, that means that one and the same individual may well enjoy using Facebook, when he is actually logged on, but afterward he may regret all the time he spent on trivia, time which could have been better used on some other activity. His own preferences vary depending on when you ask him about them. So, how should we even begin to assess FB’s contribution to the individual user’s well-being? How – when (from which temporal vantage point) – should one begin to estimate the “value” of heroin to an addict?
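Economists often formalize exactly this discrepancy with quasi-hyperbolic ("beta-delta") discounting, in which a present-bias parameter beta < 1 makes the logged-on self overweight immediate enjoyment relative to the later, regretful self. A minimal sketch (all payoff numbers are invented for illustration, not drawn from any study):

```python
# Quasi-hyperbolic (beta-delta) discounting: a standard model of
# time-inconsistent preferences. Payoff values below are illustrative only.
def discounted_value(payoffs, beta=0.6, delta=0.95):
    """Value of a stream of per-period payoffs as judged *now*.

    Period 0 is weighted fully; every later period t is scaled both by the
    present-bias factor beta and by exponential discounting delta**t.
    """
    return payoffs[0] + sum(beta * (delta ** t) * p
                            for t, p in enumerate(payoffs[1:], start=1))

# An hour on Facebook: fun now (+5), regret tomorrow (-4).
# An hour of work: a chore now (-2), payoff tomorrow (+7).
browse = [5, -4]
work = [-2, 7]

# Judged at the moment of choice, present bias favors browsing...
assert discounted_value(browse) > discounted_value(work)
# ...but judged a day in advance (no period-0 payoff yet), work wins.
assert discounted_value([0] + browse) < discounted_value([0] + work)
```

The same person thus ranks the two activities differently depending on when you ask, which is precisely why no single number captures the "value" of the addictive option.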
If you add this “capitalist fiction” to the “communist fiction,” there may well be less to celebrate in Zuckerberg’s announcement of his 350 millionth user. Of course, I doubt this is disturbing the party he’s throwing for himself. At least at that one time, that one (very, very wealthy) pioneer will most likely not be suffering much from either fiction.

Tuesday, December 1, 2009

The half-baked Church

Teaching the first half of a world history course for the first time this semester has provided numerous opportunities for me to learn. I’ve had to develop at least a rudimentary understanding of the non-European civilizations and to think much more about global patterns, commonalities, and differences. At the same time, as I’ve discovered over the last couple of days while preparing my lectures on the European middle ages, I’ve gotten a chance to think more about questions with which I thought I was already somewhat familiar.
Many scholars, in trying to explain Europe’s sui generis path, have pointed to the division of power between secular and religious authorities going back to the middle ages and even to the origins of Christianity and the Roman empire. This division may have been the mother of all divisions of power. One way to think about the sources of this ur-division is to say that western Christianity, the Catholic Church, was half-baked in its origins. A common feature in other civilizations was that religious and secular power were fused: they were held in the same hand. This was the case with the Muslim Caliphs as well as the Chinese emperors. It would become the case with the Roman/Byzantine emperors in the eastern, surviving half of the Roman empire, as they developed caesaro-papism. Western Christianity developed differently because of the nature of the empire within which it grew, because of the Church’s experiences in the first centuries of its existence, and especially because the western half of the empire collapsed, preventing a likely drift into caesaro-papism. The eastern Church was fully baked, one might say, while the western was only half-baked.
For the first 250 or so years of organized Christian churches, they were persecuted by Roman authorities, and hence developed a healthy suspicion of political power. Constantine’s conversion and edict of toleration in 313, of course, brought the Church great patronage and benefits, but it was only in the early 390s, with Theodosius’s decrees making Christianity the only tolerated religion within the empire, that Christianity became fully allied with, and thus potentially subservient to, the state. By this point, however, the western empire only had roughly another 80 years to live. That is, the western Church tasted the fruits of an alliance with power for only a relatively short time.
But its incubation within the Roman empire for 400 years, first – and for the longest time - as persecuted, then as tolerated and promoted, and finally as the only officially backed religion, did form crucial features of the Church: in its bishoprics, it imitated the structures of Roman urban life; in its hierarchy, that of the Empire; in its legalism, the Roman tradition going back to the Twelve Tables. The basic structures of much of Church life were partly formed – half baked - by the Roman empire, including some of the remnants of the Republic, without the Church becoming too wedded to power – without it being fully baked by power.
Of course, the caesaro-papist tradition didn’t remain fully absent in the west, at least on a small, decentralized scale: during the “Dark Ages” between 500 and 1000, secular rulers often treated priests and bishops, who were often family members or directly dependent on the secular power-holders for their positions, as their vassals. Yet, at the latest by the Investiture Struggle and its aftermath (circa 1075-1300), when reforming Popes claimed not just equality with and independence from the secular authorities but, in fact, dominance over them, the caesaro-papist option was foreclosed. (Whether a papal-caesarist option was ever viable is another matter.) The two authorities, worldly and heavenly, were to be permanently divided, providing perhaps the most basic model to the West of the division of power. What I’m suggesting here is that this was not merely the overcoming of secular meddling by any old religion. Its success required a Church that could muster enormous intellectual and organizational resources. Without that half-baking in its early centuries, that is, if the Church had not developed at least incipient structures and legal traditions with which to counter Henry IV’s armies and claims, it seems hard to imagine how it could have stood up to worldly authorities. Thus, it seems to me that for its success – and perhaps for that of the West – it was crucial not only that the Church avoid a complete caesaro-papist baking, but also that it undergo a half-baking, one whose impact would only be felt a thousand years later.
(For a scholarly, quite masterful, treatment of related questions, I recommend John Hall’s Powers and Liberties.)

Monday, November 16, 2009

The Singularity and singularity

When I lived in Boulder, I joined a group of fellow nerds in a “Future Salon” to discuss, well, the future. Several of the group’s members, not least its leader Wayne Radinsky, made an impression on me by focusing intensely, one might even say obsessively, on the pace of technological change in semi-conductors. They were constantly updating us on the relevance (yes, it’s still valid) of Moore’s Law: computing power has been doubling every 18 months or so since the late 1960s. Anyone aware of compound growth will immediately understand what this means. Quite frequently Wayne and others would let us know about Intel’s latest breakthroughs at the nano-level. I had heard of Moore’s Law before, but the Future Salonnières’ obsession with it opened my eyes to its central importance for our economy and world.
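To see what that compounding means concretely, here is a back-of-the-envelope calculation (assuming, as above, a doubling every 18 months; the exact rate and starting year vary by source):

```python
# Rough illustration of Moore's Law compounding. The 18-month doubling
# period and the 1969 start date are assumptions for the sake of example.
def growth_factor(years, doubling_months=18):
    """Total multiplication of computing power after `years` years."""
    doublings = years * 12 / doubling_months
    return 2 ** doublings

# From 1969 to 2009: about 26.7 doublings, i.e. a better than
# hundred-million-fold increase in computing power.
print(f"{growth_factor(2009 - 1969):,.0f}")
```

Nothing else in the economy has ever compounded at anything like this rate, which is why a salon could spend years on this one law.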
This led soon to the Singularity, which I hadn’t heard of previously. The Singularity is a term popularized by Ray Kurzweil, a singular genius who has invented or contributed to much of the technology underlying our high tech world. As far as I could tell (and I have only dipped a small toe in one of Kurzweil’s books), the Singularity refers in this context not to the convergence of space and time in a black hole, but to something just as irreversible: the imminent (within the next few decades) merging of computer and human intelligence, under the lead of computers whose abilities will have inexorably outpaced those of us humans. Most of the devotees of the Singularity appear to believe that this will be a wholly or largely good thing, i.e. that the computers will take over or merge with us with generally good purposes in mind. Kurzweil himself has apparently placed himself on an ultra-low calorie diet so that he may live another 30 or so years and thereby participate in the Singularity.
I found all of this fascinating, but also rather dubious. I can’t judge the technical likelihood that computers will actually surpass human intelligence. Yes, exponential growth is tremendous; but will some fundamental physical limits be reached at the atomic level (I thought I’d heard that we were nearing some with our current semi-conductor etching technologies, but Wayne tells me we haven’t) which will stop Moore’s Law cold? Can raw computing power, no matter how great, really match and then exceed the creativity of wet human brains forged in millions of years of evolution? Maybe. I don’t know. Regardless (and again, I admit to making these judgments without having read the literature), the advocates of the Singularity seem to overlook the possibility, indeed the likelihood, that humans would intervene well before the stage when computers actually surpass us across the board. Why should we view technological advance as inevitable and out of all human control? The advocates seem to me to be most naïve in regard to the alleged desirability of the computer-human merger. Why should we believe the computer(s) will act benevolently, either in terms of our interests or even their own? If there’s one super-computer running the show, won’t it establish an electronic closed society as have all the human dictatorships in history? And if there are many, why wouldn’t they compete with each other, simply replicating all of humanity’s foibles at the speed of electrons? The nerds’ dream of an electronic heaven seems to me to be a fata morgana: it replicates the illusions of both organized religion and of secular movements of the 19th and 20th centuries built around the hopes of “scientific management” and centralized knowledge. They dreamt of an omniscient (and benevolent) overlord and refused to acknowledge that we are on our own in this world, engaged in different forms and levels of unavoidable combat or at least competition, without any overall referee.
There can never be an Archimedean point of knowledge or control.
More interesting to me than the Singularity is singularity, the question of what kinds of individuals will exist and thrive in a world in which Moore’s Law applies. Computers are doubling their power every year or two because many different kinds of people want it to be so. Nerds love the challenge. Corporations and their marketers love the money. Ordinary people love the stimulation (ever more options on smart phones, ever more realistic video games, ever faster internet, ever more cable stations, etc.). As I see it, the first two groups are benefiting most. Ordinary people think they’re benefiting, of course, but I think an enormous craze for stimulation is sweeping over us. I don’t know how deleterious the addictions are or will become. I know that many of my students cannot put away their cell phones in class, even when I threaten them with severe penalties, and many of them admit they have an addiction. Will these habits undermine people’s abilities to achieve what they say they want – success in career, friendships, relationships? How will they affect deep human relations, which studies show to be the best predictor of happiness and are probably a prerequisite for other “goods” as well, such as civic engagement? Will all the gadgets and social media turn out to be more like coffee – addictive, to be sure, but for most people an enrichment of social life – or like cigarettes – addictive and in the long run destructive of health – or like crack cocaine or methamphetamines – immediate wreckers of lives? Perhaps at present the harm seems minimal. Perhaps it will never become as visible as the harm done by cigarettes or hard drugs. But for that reason, it may grow to be all the more insidious. Nobody will see the damage, least of all the addicts themselves. How will this change power relations in our society? Will the masses not be lulled into complacency and indifference? (If I'm sounding strangely like Adorno here, so be it.)
These new narcotics have grown out of fully legitimate industries and desires. They enjoy the full backing of the law. No government, at least no democratic one, will ever ban their use, though perhaps more laws will restrict them in particular settings, as with limitations on texting while driving. And Moore’s Law says that the power of the drugs – the speed and effectiveness of the stimuli – will only continue to grow.
Who’s right – the optimistic visionaries of the Singularity, or the pessimistic mourners of disappearing singularity and individual independence and even "sobriety"? And if the latter, what should, what can we do about it?

Saturday, November 14, 2009

Bismarck and Kafka

Since no one comes to Queens, where I live, I often end up taking the subway into Manhattan and Brooklyn to see friends, a trip that takes from 20 to 40 minutes. I recently started using this time to at least dip into some of the many books on my shelves that I’ve never read, in particular literature and poetry (a recent interest).
A couple of days ago, on a trip to Brooklyn, I started reading a collection of love letters from Bismarck to his beloved wife, Johanna. The very first letter, not to Johanna, but to her father, revealed a whole, vanished world. In it, Bismarck asks the father for the hand of his daughter. In rich, complicated sentences (it’s hard to imagine any politician, indeed almost anybody, nowadays formulating such complex thoughts, or writing such a long letter, not to mention asking a potential father-in-law for permission to marry) the future German unifier reveals a deeply personal side of his past. He describes the spiritual emptiness that engulfed him as he lost faith in God – and then the rebirth he experienced, not least thanks to Johanna, as he rediscovered that belief. It’s a remarkable confession – in its revelation of weakness and despair by a strong man, in its self-reflection, in what it shows about the role of Christianity and faith and about the social relations governing courtship in earlier times. To compare all this to the world of internet dating today! It’s almost as if we’re dealing with two different kinds of humans, two different worlds.
Then yesterday I began Kafka’s In the Penal Colony, which I finished today (short enough that it took just four rides). Remarkable how Kafka uses language so sparingly to create an entire atmosphere of alienation, dread, mutual incomprehension (the characters among themselves, us and the characters). At the same time, the story seemed more obviously political – there are clearer “sides,” more easily recognized good and deranged parties – than in the other works of Kafka’s I’m familiar with.

Wednesday, November 11, 2009

A Sketch of Human History

If I were to channel Condorcet or Hegel on the big sweep of human history, this is what I would say.
Human history begins about 200,000 years ago with the Great Leap Forward: human language capabilities become fully formed, enabling social learning to blossom. That is, to a far greater extent than even their closest hominid ancestors and relatives, Homo sapiens are no longer limited to their genetic repertoire combined with individual learning; they can now learn from each other, socially. This launches a whole new, much more rapid, stage of evolution – cultural evolution. Religion, art, significantly more advanced tools, even trade networks all emerge after 200,000 BP.
However, this potential for rapid progress is held back by the *sparseness of human population*. Foragers (hunter-gatherers) need lots of space to survive, and the expansion of humans starting around 100,000 BP out of Africa into Eurasia, then Australia and the Americas allows foragers to maintain this scattered lifestyle. In the absence of dense populations, social learning does not occur as rapidly as otherwise might.
The agricultural revolution starting around 10,000 BCE and the first emergence of agrarian civilizations around 3,500 BCE appear to “solve” this fundamental limitation of the foraging phase. Humans can now live in much denser settlements. Social learning and progress should take off. However, agrarian civilizations remain stagnant in many ways across the millennia. Why? Because a new impediment to social learning and progress has arisen hand-in-hand with agriculture and agrarian civilization: *hierarchy*. Whereas foragers lived in basically egalitarian (and small) groups, agrarian civilization is characterized by enormous gulfs of wealth and power. The powerful hold back progress for millennia. Political power-holders (“macro-parasites,” in William McNeill’s phrase) leech off their peasants, making property insecure and preferring (their own) political security to the threatening dynamism of economic growth. Meanwhile, intellectual power-holders (religious authorities) guard their monopolies, preventing alternatives from arising and intellectual innovation from occurring. One impediment to social learning and progress – sparseness of population – has been replaced by another – hierarchy.
Finally, in the last few centuries, parts of north-western Europe pioneer a path out of hierarchy. Arbitrary political power is tamed (through parliaments and the rule of law) and the intellectual monopoly of the religious authorities is broken by science, religious toleration, and legal guarantees of intellectual pluralism. In the modern age, 200,000 years after the possibility of social learning emerged, its promise is finally being realized, as humans are free for the first time from the successive handicaps of sparse population and hierarchy.
– Georg Wilhelm Friedrich Meskill

Wednesday, November 4, 2009

The Great Divergence

If the transition to modernity was the main spur prodding the early giants of social science to investigate how society works and changes, a secondary and related puzzle was why this transition happened in Europe first (or even exclusively). To this day, Why Europe? – namely, why industrialization, the break-out from Malthusian traps, the breakthrough to parliamentary, law-based polities, and the emergence of science all began in this relatively small, unprepossessing corner of the great Eurasian landmass – remains THE great, framing question of the social sciences.
Kenneth Pomeranz’ The Great Divergence: China, Europe, and the Making of the Modern World (2000) has been one of the most notable attempts in recent decades to take on this challenge. Pomeranz’ strongly revisionist account rejects most of the dominant strands of current thought, which have sought deep historical origins for Europe’s special path. Instead, he proposes that Europe diverged from other great civilizations, notably China, only in the 19th century. Furthermore, this special path was not inevitable, but rooted in historical contingency.
By the 18th century, Europe, China and other leading agrarian civilizations were all fast approaching ecological limits to growth. Crises induced by deforestation, declining soil fertility and related problems threatened to derail the low capital-intensity proto-industrial expansions underway in some places well short of true industrialization. While exactly this happened in China, Europe escaped the same fate thanks to the fortuitous presence of coal deposits near centers of proto-industry and commerce (in China, on the other hand, coal was in the northwest, hundreds of miles from the commercial center in the Yangzi delta) and thanks to the windfall of the vast territories of its colonial empires, above all in the Americas.
The Great Divergence is a terrifically impressive book. Pomeranz has command of vast amounts of literature relating to European, Chinese, and other economic histories. He’s able to summarize and categorize arguments in very helpful ways. I learned a good deal about European economic history and historiography from this China expert! Furthermore, he argues very methodically and empirically, doing his best even when the data support only speculative conclusions. I came away almost convinced that as late as the 18th century, parts of China were as advanced or poised for a breakthrough as the northwestern regions of Europe were (Pomeranz emphasizes the importance of moving away from the scale of “China” or “Europe” and instead focusing on smaller regions within each). I was particularly struck by his argument that economic historians have overemphasized the importance of labor-saving devices while ignoring a problem of at least equal weight, at least before the dawn of scientific chemistry and other technological innovations in the 19th century and the vast improvements in productivity these permitted, namely, the central limiting role of land and physical resources.
Nonetheless, Pomeranz’s argument left out some crucial matters. First, he discusses European colonies and the advantages these conferred as if they were a windfall, a matter of luck, and not something that itself was in need of explanation. In the early 15th century, nearly a century before Columbus, the Chinese sent out fleets that reached as far as East Africa and whose ships dwarfed the Niña, Pinta, and Santa María. Yet those expeditions did not lead to a Chinese global empire. Why not? Second, he doesn’t adequately treat the European lead in science, which was evident and growing by 1600 at the latest. There’s a debate about just how important the scientific revolution was to European industrialization, at least before the second half of the nineteenth century. Many scholars have argued that there was little transfer into economically relevant technology before the chemical and electrical industries developed. Floris Cohen, however, a respected historian of science, has argued that the European understanding of the physics behind the vacuum was crucial to the development of the steam engine. Third, Pomeranz’s argument that Europe benefited from the Americas *in the 19th century* runs afoul of timing. By this point, after all, the colonies in North and South America had gained their independence; their former European masters had to trade with them for their raw materials. What was preventing China from doing the same? Finally, and relatedly, once industrialization took off in England, it spread fairly rapidly to other parts of western and central Europe – but then stopped abruptly, as if at a firewall. The Ottoman lands, India, and China didn’t immediately jump on board and try to emulate the Europeans. So, again, we must ask, if China was equal with Europe in so many respects in 1800, what prevented the Chinese from adopting such successful innovations?
The first two critiques of The Great Divergence – about colonialism and science – point in the same direction, toward the critical role of institutions and how societies were organized. Two outstanding works address Europe’s advantages in colonialism and science, respectively, in remarkably similar ways. David Abernethy’s The Dynamics of Global Dominance and Toby Huff’s The Rise of Early Modern Science argue that Europe’s advantage lay in its greater organizational and institutional capacity. Abernethy shows that European states, chartered companies, and Church bodies all contributed, together or singly, depending on circumstances, to a “triple assault” on weaker societies that even the strongest of the other civilizations, built around despotic rulers, extended families, and individual entrepreneurs, could not hope to match. Similarly, Huff points to the semi-autonomy and longevity the corporate structures of European universities granted to intellectual inquiry. In China and Islam, by contrast, scholarship and science depended much more on the support – on the whims – of individual rulers or benefactors. I suspect Europe’s organizational capacity, which was itself rooted, as Huff points out, in the so-called Papal Revolution of the 11th-13th centuries and the concomitant legal revolution, will provide an important element of any successful explanation of Europe’s divergent path.

Thursday, October 29, 2009

A Third Concept of Liberty

Isaiah Berlin famously distinguished between two concepts of liberty. Negative liberty is constituted by the limits protecting an individual, inside of which no other individual or entity may interfere. As long as the person is not harming others, she may, within those limits, do as she pleases. Let the couch potato be. Friedrich von Hayek and, much more simple-mindedly, Ayn Rand are advocates of negative liberty. Positive liberty is harder to define. It basically means the freedom to do, not just anything, but what is right or correct, which itself must be determined by some criteria other than merely what the individual “wants.” For many advocates of positive liberty – for example, Rousseau or Marx – letting the couch potato remain a couch potato is not to defend his liberty, but to leave him in servitude. The threats to positive liberty can come, then, not just from outside individuals, but also from within, from one’s own weak character or temptations, from false consciousness. Indeed, defenders of positive liberty often see outside interference in what they believe is only ostensibly free choice as a prerequisite for true, positive freedom. One can be “forced to be free,” in Rousseau’s memorable and chilling phrase.
I believe it’s worthwhile considering what two other thinkers said, or at least implied, about the best kind of liberty. Adam Smith and John Stuart Mill, both advocates of the emerging market-based societies and liberal politics, but also both (especially Mill) wary of an overly exuberant individual liberty, suggested a third concept of liberty, I believe. Crucial to this alternative, as we’ll see, was something else Smith and Mill shared: a methodological individualist approach to society (avant la lettre). Namely, society emerged, unintended, out of the independent actions and interactions of all its millions of constituent individuals. In turn, these could be shaped, at least partly, by the interactions with their fellows.
Smith was, of course, one of the earliest and most influential advocates of dismantling mercantilist interference in the economy and unshackling individuals to pursue their own interests. Out of their strivings, unanticipated by anyone, would emerge the greatest wealth possible as well as a fair distribution of what Smith called “the real happiness of human life.” So far, he sounds like an advocate of negative liberty. However, his warnings about the unforeseen consequences of overweening ambition and his most forceful arguments for the free market raise the strong suspicion that Smith will not be categorized so easily. Time and again, but most memorably in the story of the “poor man’s son,” Smith suggests that great ambition rarely, if ever, leads to happiness. (He warns, for example, “Never enter the place from whence so few have been able to return; never come within the circle of ambition.”) Instead, real happiness comes from “tranquility,” the society of one’s fellows, and knowing not only that one is loved, but that one is lovable. And how can one achieve these things? For the bulk of society, for the “middling and inferior stations,” it is only through a market-based order. The market demands qualities – honesty, thrift, reliability – that happen to be virtues, and that also earn one tranquility and the love of one’s peers. Furthermore, it is only the market that overcomes the abject poverty that would, Smith thinks, result from Rousseau’s autarkic state. Poverty is not just painful in and of itself, but for Smith of even greater concern is how poverty undermines the possibility of living ethically. This is the upshot of his observation that in poor societies people feel compelled to commit infanticide: their poverty preempts their morality.
Smith’s greatest concern is with the “improvement in the circumstances of the lower ranks of society” – and by circumstances he has not only their material well-being, but also their internal dispositions – their happiness, tranquility, and even virtue – in mind.
Smith believes, then, that the market, and only the market, can help achieve what appear to be elements of positive liberty: tranquility, approval of one’s fellow man, a kind of virtue, etc. These, I believe, were what he ultimately wanted for society. While he saw the freedom of the market as good in and of itself, he makes his strongest case in terms of the market’s consequences. The market is primarily an instrument. Smith thus advocates negative liberty, especially the ultimately misguided liberty of such overly ambitious people as the poor man’s son, in order to achieve positive liberty for the bulk of society.
In a very similar way, Mill combines an instrumental view of negative liberty with the goal of – gently - achieving a kind of positive liberty. Mill is of course famous for his “harm principle”: each should be able to do as he will, as long as he doesn’t harm others. But this streak of pure negative liberty is subordinated to a particular kind of utilitarianism, one much closer to positive liberty. Unlike Bentham, Mill doesn’t believe that Pushkin and push-pin (a simple game) are equal pleasures. He wants people to love Pushkin, to learn to appreciate more noble joys. Thus, he says that he regards “utility as the ultimate appeal on all ethical questions; but it must be utility in the largest sense, grounded on the permanent interests of man as a progressive being.” Writing in the middle of the 19th century, when the democratic impulse was fast spreading through society and culture, Mill was most concerned about how liberty and “utility in the largest sense” could be preserved at the same time, how free people could be led to appreciate higher over lower pleasures. His solution was to trust in the powers of education and the guidance and, indeed, the political privileging of the better-educated. Even more important, people needed to be exposed to a variety of circumstances and ways of living; only then would their freedom to choose lead them, more or less on their own, to the higher pleasure. “Freedom and a variety of situations,” Mill quotes Wilhelm von Humboldt, are the two preconditions necessary for the achievement of “the end of man” - namely, “the highest and most harmonious development of his powers to a complete and consistent whole.”
Thus, like Smith, Mill advocates liberty – negative liberty – primarily for its utilitarian effects of a particular kind, that is, for something very akin to positive liberty. Neither of these liberals, however, would ever sanction the compulsions that Rousseau and Marx condone or advocate; they wouldn’t speak of forcing people to be free. Rather, Smith and Mill, guided by their overriding awareness of unintended consequences and the complex, almost organic emergence of social patterns, look to the salutary effects of myriad free social interactions to lead people on their own to positive liberty. (Mill, it is true, writing in a different and, as he perceived it, more fraught era, did come closer to abandoning a consistent advocacy of negative liberty when he endorsed greater political influence for the better-educated.) These thinkers, it seems to me, thus suggest a third concept of liberty, one which uses negative freedom as a means to achieve – gently – the end of positive liberty. This positive liberty might even appear to be a kind of emergent property of interacting individuals endowed with negative freedoms.
How prescient and realistic were Smith and Mill? Has our free-market society led to forms of positive liberty, to “better” pleasures and behaviors? This must remain the topic for another post (but see my entry In Praise of Rousseau and Marx, along with the reader comments). Also a topic for the future will be the recent work by Richard Thaler and Cass Sunstein on what they call “libertarian paternalism.” I think this represents a very promising new stage in thinking about a topic at the center of both Smith’s and Mill’s work: preserving liberty while achieving the good.

Thursday, October 22, 2009

On Solipsism and Human Connections

Nothing pleases us more than to observe in other men a fellow feeling with all the emotions in our own breast - Adam Smith, The Theory of Moral Sentiments

Happiness is only real when shared - “Alexander Supertramp” (Christopher McCandless), Into the Wild

The essential, vital role that sharing emotions plays in our lives is all the more striking when one considers the fundamental solipsism of our existence. Two observations illustrate this profound isolation. It was Adam Smith who suggested that a man gives more thought to a cut on his finger than to 100,000 Chinese who have died in an earthquake. Sadly, I observe this in myself all the time. Ask yourselves: how often since July (when was the accident again?) have you thought of the more than 200 people who died on that flight from Brazil to Paris? Recently, I did, but it was the first time in months. Not only did they suffer, but their families have been mourning the unbearable loss ever since. I’ve spent considerably more time cursing the Red Sox’ dismal post-season performance. How many other deaths, how much other immense pain, we easily put out of our minds. Or if you are suffering yourself – from a professional disappointment, a failed romance – how easily everybody else’s suffering, indeed everybody else, disappears beside your own woes.
And I don’t think it can be otherwise. Perhaps meditating monks can let go of their egos, but not many of the rest of us can. I’ve heard that when you become a parent, for the first time in your life you are able to extend your ego outside yourself, to that other little, dependent being. Still, the compass is never very wide.
It gets worse. Even our self-centered minds can’t focus for long. You’ve almost certainly seen those optical illusions that “flip” back and forth, for example, from the profile of Freud to the body of a naked young woman. The images flip automatically, whether we intend it or not, roughly every three seconds. This is because of a metronome in our brain circuitry, which shifts our attention at those intervals. Follow your train of thoughts carefully. Don’t they usually jump to and fro, from what’s immediately in front of you to the upcoming lunch to yesterday’s news, and back again? We live in small temporal bubbles, each lasting just a few seconds. We all have ADD. I suppose that THIS form of solipsism is easier to overcome, at least temporarily, than self-centeredness. Short-circuiting it seems to depend on the presence of some stimulating, constantly changing external source that captures our attention. When we are reading a fascinating book, listening to a good friend talk about her heartbreak, or watching a gripping movie, we are generally able to concentrate for much longer than three seconds (even here, though, I suspect the mind wanders quite often). This is because the external source itself presents a changing kaleidoscope of strong impressions, never letting us get bored.
Nature seems to have made us this way, and I suppose yammering about it is about as useful as complaining about the coldness of space. Nonetheless, I find myself occasionally saddened by this basic condition of our lives (only occasionally – thank God for my wandering mind!).
Given our deeply self-centered concerns and vagabond minds, it must seem amazing that we are ever able to form such powerful bonds to other people. Perhaps part of it is that we are, in fact, grateful to be asked to step outside our selves and to share – for more than a few seconds – others’ joys and sorrows and interests; perhaps we’re actually happy to escape our little bubbles.
I think awareness of this solipsism needn’t only depress us; it might also make those connections to others seem all the more precious.

Thursday, October 15, 2009

Discovering God

The book of this title, by Rodney Stark, is peculiar. I read it at the beginning of the summer (which now feels like several years ago), so my comments will be a test both of my memory and of Stark’s power as a writer and scholar.
For a course on world history that I’m currently teaching, I wanted to find a single book that covered all the world’s major religions – and made some argument about their development. Ideally, of course, a convincing one. This is how I stumbled across Stark, whose name I was familiar with, since he wrote an influential book on the rise of early Christianity, explaining it in terms of social networks. Stark has carved out a space for himself in religious studies with his “supply-side” theory of religious change. Namely, he posits that religious demand is constant across all societies, but constant in its variability within each society. That is, all societies show the same pattern: some people are indifferent, others are fanatical, and the bulk fall somewhere in between. Religious changes, then, such as the Reformation or the American Great Awakenings, are caused, not by changes in this constant demand side, but rather by shifts in the supply. Stark believes that religious authorities have tended to establish monopolies, and these religious monopolies, like their economic confreres, have not served the people well. They have settled into routine and calcification, serving their own, not the people’s needs. In turn, this stultification can lead to the people’s “natural” religiosity going underground, for example in the form of sects. (Stark’s argument for the strong, if varied, natural religiosity of the people is one of two positions that challenge the earlier orthodox view in religious studies that modernization equaled secularization. The other is advanced by Thomas Luckmann, who suggests that while religion has, indeed, become less formal and organized, it has by no means weakened. All it has done is become “invisible,” in individuals’ quests for spirituality.)
Stark builds his book around this theme of the supply (usually) being choked off by monopolists. It’s generally convincing, but also, I found, rather repetitive and, in the end, unedifying. Each religion follows almost exactly the same pattern. Perhaps I unfairly compared Stark to Max Weber. I wanted to hear more about the differences and interactions between elite and popular religions, especially in regard to theodicy, “magic,” asceticism, congregational forms, etc. As a good Weberian, I also wanted to find out more about the varied effects of religions on their societies’ secular development.
Stark is a superb writer who clearly loves to engage in polemics. Even when one disagrees with him, it’s a pleasure to engage with his arguments.
One of the more interesting points he made was that the common notion that archaic (i.e. hunter-gatherer and early agricultural) religions were all animistic or, at most, polytheistic is simply wrong. Relying on a survey of several hundred such societies, Stark makes a convincing case that a majority, or at least a plurality, of them had some conception of a “highest god.” Stark thus suggests that the polytheism of the earliest temple religions (in Sumer and elsewhere) was, in fact, regressive. However, he doesn’t adequately explain why the temple religions opted for polytheism – did it somehow facilitate their monopoly? It wasn’t clear.
The peculiarity of the book, and something that no doubt will rub many readers the wrong way, is that Stark very strongly hints that he thinks at least some religions really were discovering God – i.e. they were not just imagining or inventing him. Such a position is, indeed, quite rare in a scholarly book, although perhaps I’m just unfamiliar with the norms of religious studies. Stark tries to justify his hinted-at position (toward the end, he finally becomes more explicit). He draws up criteria for deciding which religions have truly discovered God, or at least have approached knowledge of Him, and he finds that only two make the cut: Judaism and, even better, Christianity. Q.E.D.! I didn’t find these arguments convincing, though I did enjoy the gusto with which he made his case and – especially – attacked his opponents. Imagining a debate between him and Christopher Hitchens brought a smile to my face.

Monday, October 5, 2009

In Praise of Rousseau and Marx

The title, to put it mildly, should surprise anyone familiar with my political and intellectual views. But those views do evolve, even if slowly and within narrow confines.
Of course, I still consider both men to bear at least some responsibility for the awful things done in the name of secular salvation religions in the twentieth century. Both men, but especially Marx, promoted ways of thinking that, after many twists, ended in the Gulag Archipelago, the Great Leap Forward, the Cultural Revolution, and the Killing Fields. These include the idea that there is only one right form of positive freedom, to be achieved, if necessary, contrary to the wishes of actual people (“force them to be free”); the resulting fundamental intolerance of diversity; the certainty that they (especially Marx) knew the future, a belief that supported an ends-justify-the-means mentality; Marx’s support for a dictatorship of the proletariat and, within the proletariat, for the far-sighted guidance of bourgeois thinkers – such as himself.
On a very different, and much less consequential, scale, at the core of Marx’s thought are all sorts of untenable principles. Jon Elster (Making Sense of Marx) and other members of the so-called no-bullshit Marxism school have rightly excoriated the Hegelian elements in Marx: the belief in dialectic, in teleology, etc.
However, I come not to bury Rousseau and Marx, but to praise them. For what, though?
My central point involves no more than an application of John Stuart Mill’s plea for the value of intellectual diversity, even in the case of thinkers who committed many solecisms (Mill explicitly refers to Rousseau in this regard). Rousseau and Marx sketch an alternative model of human well-being, one that, I believe, it has become especially important to keep alive in the past decade or so.
For both of them, the key to living well is living within oneself. That is, they want people to live less – or not at all – for the gaining of external approval or values, whether in the form of money, power, or prestige, and more for the intrinsic value of their own actions – above all, the creativity of work – and for the satisfactions of non-instrumental sociability.
I first engaged more deeply with Rousseau and Marx while teaching in Harvard’s social studies program in the mid-2000s. At the time, my reading (and, I have to admit, probably my teaching) was often dominated by my deep dislike of the characteristics of their thought mentioned at the outset.
Over the last year or two, however, I find myself thinking more often – and more positively – about their vision of human freedom and well-being. Why the change? Mainly, I think it’s because of changes in technology, or at least my exposure to it.
First, it was only between 2007 and 2009, while out in Boulder, that I had cable TV for the first time. When I came home tired after work, I turned to the soft consolation of the TV far more often than I had wanted to. I knew, even as I was doing it, that watching snippets of the UFC, reality shows about Alaskan fishermen or Oregonian loggers, and even book readings on C-Span was not how I wanted to be spending my time. Even following the daily jostling and jiggling of the presidential campaign on CNN last fall seemed - deep, deep down, after an evening gorging myself on it - unworthy. But I continued to do it, because it did feel good, at least in the short term. However, I gained a very different perspective – in Rousseauvian or Marxist terms, I discovered positive freedom – quite by accident this summer. Namely, I had no cable, indeed no TV whatsoever, for parts of the summer. And when I THEN, on occasion, stumbled across Anderson Cooper’s breathless reports on the Michael Jackson murder investigation or some “breaking news” about Chicago’s bid for the 2016 Olympics, I recognized quite clearly just how trivial and unedifying – but addicting – these news stories are. The contrast was salutary and has led me to go without TV in my new apartment.
Second, as a teacher I’ve become much more cognizant of how students bombard themselves with stimuli, through iPods and music, but above all through cell phones and texting. In class this year, despite several attempts, I haven’t been able to stamp out texting. I’m not sure whether texting has just become more prevalent in the last couple of years, or whether the students at my last two schools engaged in it more. I suppose my disapproval stems, in part, from an evaluation of the students’ “life chances” to begin with. If Harvard (and similar) students fall into this addiction, other incentives will drag most of them out of it before it becomes too disruptive. At many other schools, where the students are less motivated and less disciplined to begin with, I think the temptation only exacerbates serious existing handicaps.
Both of these activities – TV watching and texting (or otherwise using gadgets for stimulation and mood-management) – can easily become addictions. I want to HIGHLY recommend two books that shed light on the pervasive presence and the history of addictions, respectively: Richard Herrnstein, The Matching Law, and Daniel Lord Smail, On Deep History and the Brain.
I believe that my familiarity with Rousseau and Marx has allowed me to gather and process these disparate observations of myself and others in a more coherent, and useful, way than I otherwise would have been capable of. Without their pleas for human well-being centered on internal valuation and creativity, rather than external stimulation and consumption, I might have been frustrated by these recent cultural and technological developments. But I think I wouldn’t have sensed so clearly how I might live differently. Their ideas, then, helped me to crystallize an alternative.
I end, as I began, with a critique of Rousseau’s and Marx’s views. What I’ve just talked about is a personal response to social phenomena. It’s my private answer. It’s not a political program. As deleterious as I think these addictions are, I wouldn’t endorse Rousseau’s or Marx’s political responses to similar developments in their time. For one, both of them lack an adequate conception of human nature (perhaps understandable given when they were writing, but not understandable for those people today who are still sympathetic to their politics. The Origin of Species, after all, was published 150 years ago.) My leftist friends today often deplore the culture industry’s or capitalism’s role in these trends. They ignore the contribution made by human nature, human frailty. And hence they underestimate the difficulty and cost of stamping out the addictions. But this topic – that central error of the left: ignoring what biology has revealed about human nature – must be the subject of another posting.
For now, and in this limited sense, I praise Rousseau and Marx.

Tuesday, September 22, 2009

The Long Island Expressway and Brownian Motion

I recently moved back to the east coast and now have to commute to work from New York City out to my college on Long Island. Contrary to expectations, driving against rush hour traffic has not proven to be smooth. Despite leaving the city in the morning and returning in the evening, I have to press my way through 70-90 minutes of dense, mostly highway traffic. I’ve had both the time and the motivation (frustration) to ponder what’s going wrong.
Accidents (probably an average of one a day) snarl traffic, even on the side where they didn’t occur, due to rubber-necking delays (one of my favorite phrases to explain to a foreigner; I have yet to encounter another language with the same concept). But accidents are not the only, or even the main, problem, I think. A kind of coordination problem is. And this brings me to Brownian Motion – i.e. random movements. Of course, the traffic is not moving randomly. That would be truly unpleasant. Not only are cars moving in the same direction, but generally speaking cars in the left-hand lane are moving fastest, those in the middle next fastest, and those on the right slowest. But on the margins, random fluctuations play a major role in gumming things up. The fluctuations seem to occur in two forms. I admit that the first kind is not truly random, at least not initially, but I think random fluctuations do play a role after the first disturbance has occurred. Namely, somebody who doesn’t understand the left-lane-is-the-fast-lane rule sits in the left lane, blocking all the speedsters (like myself) who can see beautiful open road ahead, but just can’t get there. If even one driver in 20 fails to understand this crucial rule, things slow down for miles. But this is not a truly random effect; rather, it’s the result of incompetence. However, subsequent reactions do reflect elements of randomness. Namely, speeding drivers, once they come upon the slow poke, can respond in several ways. In an ideal world, each speeding driver would slow down merely to the speed of the slow poke. Many do just this. But with others, random variation kicks in. Some don’t brake enough and hit the slow poke. This is, of course, the worst possible outcome (and I’m only speaking of the other drivers here, not of those involved in the collision). Others, no doubt the majority, brake too hard, slowing to, say, five mph below the slow poke’s speed.
The chain reaction unleashed by a series of cars behaving this way (i.e. -5-5-5...) leads, ultimately, to a standstill – those apparently mysterious cases where everyone comes to a halt and then, once things get going again, you can’t figure out what caused the jam in the first place. This first process, then, we might call incompetence-inspired Brownian Motion.
The second is a purer form. Namely, the faster cars will be in the left lane, cruising along at the same high speed. But then the driver of a car somewhere in the chain will let his attention lapse - and this happens not because of incompetence, but because of the basic, random flightiness of our attention-spans - and his speed will either increase or decrease. The results will then mirror what happens in the first case: either an accident, or overreactions by the following cars. The net result is the same.
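The -5-5-5 chain reaction is easy to see in a toy calculation. The sketch below is a deliberately minimal model, not a serious traffic simulation, and all the numbers (the 50 mph slow poke, the 5 mph overreaction) are just the illustrative figures from above:

```python
def braking_cascade(lead_speed=50.0, n_followers=15, overreact=5.0):
    """Each successive driver brakes `overreact` mph below the car
    ahead (the -5-5-5... chain described above). Returns the speed
    of the slow poke followed by each follower's speed in mph."""
    speeds = [lead_speed]
    for _ in range(n_followers):
        # overreaction: slow to 5 mph below the car in front,
        # but never below a dead stop
        speeds.append(max(0.0, speeds[-1] - overreact))
    return speeds

chain = braking_cascade()
# 50, 45, 40, ... : the tenth follower is already at 0 mph -- a standstill,
# even though the original disturbance was only a 15 mph slowdown.
```

The point of the sketch is just that a modest, local overreaction, repeated down a line of cars, compounds linearly into a full stop a few dozen car lengths back.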
Everything could flow much more smoothly on the LIE, which would make everyone happier. It's incredibly frustrating to witness – to be stuck in – these unnecessary slow-downs and traffic jams. I bet I could cut my journey from 70-90 minutes down to 50 if they didn't occur. All we’d need to do would be to eliminate incompetent drivers and eliminate Brownian Motion among all drivers. My guess is that the first part of the solution, as hard as it may seem to accomplish, would be easier to achieve than the second. Traffic delays on the LIE may just be an inescapable feature of the universe’s architecture. So maybe I shouldn't feel frustrated, after all. (Or would there be some way to set up Biased Brownian Motion on the highways?)

Friday, August 28, 2009

Of Voles and Men

Can the study of voles (genus Microtus) prepare one to produce valuable insights into human history? Peter Turchin thinks so. With Historical Dynamics: Why States Rise and Fall, the theoretical ecologist joins Jared Diamond, Peter Richerson, and a growing number of other biological scientists who have recently turned their sights on human history. Turchin believes his work on the dynamics of vole and other animal populations has given him the basic tools not only to illuminate a particular topic in history – the rise and fall of agrarian empires – but even more ambitiously, to advance history as a discipline.
In Historical Dynamics, Turchin develops and compares several theories in order to explain the dynamics of empires during the long agrarian phase of recorded human history. Early on he introduces a basic and very helpful distinction from population ecology, that between three “orders” of dynamic change or growth: 1) linear or exponential; 2) asymptotic or logistic; and 3) oscillatory. The first kind, limitless growth, almost never occurs, not only because growth requires resources and almost all resources are limited, but also because growth triggers debilitating feed-back mechanisms. This leaves asymptotic/logistic and oscillatory patterns, in each of which the nature of the feed-back mechanisms plays the decisive role. The key difference – something not at first self-evident, but then readily grasped – is that the former can occur only if the feed-back is immediate and singular, whereas oscillations occur either if there is a lagged feedback or if more than one feedback is in operation. To explain the rise and fall of states, Turchin thus argues, we need to look for lagged or multiple feedback mechanisms. (Later on, I will provide an example of how this distinction between dynamic orders suggests one quite basic way in which mathematical models can be useful.)
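The difference between the second and third orders can be illustrated with a toy discrete logistic model (my own sketch, not one of Turchin's; the parameter values are arbitrary): with immediate feedback the population levels off at the ceiling K, while the very same feedback, delivered with a short lag, produces overshoot and oscillation.

```python
def logistic(r=0.5, K=100.0, lag=0, steps=40, x0=5.0):
    """Discrete logistic growth. With lag=0 the negative feedback is
    immediate, giving an asymptotic approach to the ceiling K; with
    lag>0 the feedback arrives late, so growth overshoots K and the
    trajectory oscillates around it."""
    xs = [x0] * (lag + 1)
    for t in range(lag, lag + steps):
        # growth is throttled by the population `lag` steps ago
        xs.append(xs[t] + r * xs[t] * (1 - xs[t - lag] / K))
    return xs

immediate = logistic(lag=0)  # rises smoothly, never exceeds K
delayed = logistic(lag=2)    # overshoots K, then oscillates around it
```

Identical growth rule, identical ceiling; only the timing of the feedback differs, which is exactly why Turchin tells us to look for lagged or multiple feedbacks when explaining boom-and-bust dynamics.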
Turchin considers four processes that may contribute to the rise or fall of states. These are 1) the logistics of expansion, 2) ethnic assimilation, 3) ethnic frontiers as sources of internal cohesion, and 4) the interactions between population growth and political stability. For the first process, Turchin draws on and elaborates Randall Collins’ work on the geopolitics of expansion (which is similar to Paul Kennedy’s theory of imperial overreach). This theory focuses on the gains of expansion, such as increased resources and people to draw on, as well as the drawbacks, such as longer borders to defend and longer routes between core and periphery. Because these negative feedbacks should act fairly quickly, Turchin argues, Collins’ theory accounts at most for asymptotic dynamics: it may explain why states reach a point where they stop expanding, but it can’t explain why they eventually collapse. The second process – ethnic assimilation or lack thereof – also isn’t the main focus of Turchin’s book. It’s not rejected as a possible explanation for the fall of states, as geopolitics was, but rather it is not developed fully enough to address the central question adequately. Even so, Turchin’s brief treatment of processes of assimilation is fascinating and demonstrates the power of mathematical models to reveal unsuspected patterns. Relying, for example, on Rodney Stark’s argument and data about the social networks that were crucial in the spread of Christianity in the Roman Empire, Turchin shows that the “threshold” or “take-off” arguments that have often been proposed to explain Christianity’s “sudden” popularity in the third century are simply superfluous. The explanations they offer are a dead-end. 
The same exponential rate of growth (which Stark explains in terms of social networks) can account both for the increase of Christians from, say, 0.0017% of the Roman Empire’s population in 40 CE to still less than 1% in 200 CE, and for the apparent “take-off” from 1.9% in 250 CE to 10.5% in 300 CE and 56.5% in 350 CE. An understanding of exponential growth takes the mystery out of Christianity’s only apparently explosive development in the third century – and thereby redirects our search for causes.
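The arithmetic is easy to check. Stark's estimate was growth of roughly 40% per decade; if one assumes (as Stark roughly did) about 1,000 Christians in 40 CE and an imperial population of about 60 million, a single constant rate reproduces both the imperceptible early centuries and the apparent third-century "take-off":

```python
START_YEAR = 40           # CE, with an assumed ~1,000 Christians (Stark's rough figure)
EMPIRE_POP = 60_000_000   # assumed (constant) population of the Roman Empire
GROWTH_PER_DECADE = 1.40  # Stark's estimate: ~40% growth per decade

def percent_christian(year):
    """Percent of the empire that is Christian under constant exponential growth."""
    decades = (year - START_YEAR) / 10
    christians = 1_000 * GROWTH_PER_DECADE ** decades
    return 100 * christians / EMPIRE_POP

for year in (40, 200, 250, 300, 350):
    print(f"{year} CE: {percent_christian(year):.4f}% Christian")
```

Nothing changes around 250 CE; the curve simply rises, at last, into the range where contemporaries (and historians) could notice it.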
The main interest of Turchin’s book lies in the other two explanations of the rise and fall of empires. His most original contribution relates to the role of what he calls “meta-ethnic frontiers.” Symptomatic of the breadth and productive eclecticism of his thought, Turchin begins here with an idea borrowed from the medieval Muslim thinker Ibn Khaldun, who is sometimes called the world’s first sociologist. Ibn Khaldun, in trying to explain the cyclical rise and fall of Arab states in North Africa, pointed to the key role of asabiya, the internal cohesion and moral strength of a group. The originally high asabiya of mountain tribesmen allowed them to conquer effete low-lying cities, but after some time the allures of urban life would reduce their own collective fortitude – opening the way for the cycle to begin all over again. To explain where asabiya comes from in the first place, Turchin points to Fredrik Barth’s seminal study of tribal identity in Afghanistan: group identity is always highest in conflict with another group. When things like religion, language, and way of life (agricultural vs. pastoral) are radically different, the people on each side of the border – which Turchin refers to as a meta-ethnic boundary – will develop a fierce sense of loyalty to their own people.
After developing his model of the waxing of asabiya along meta-ethnic frontiers and its waning in the imperial interior, Turchin tests the theory in considerable detail against the history of state formation and decline in Europe from the Roman Empire to 1900 (sweeps of this kind are not uncommon in the book). He does so by dividing Europe into more than 50 regions and coding each of them in century-long intervals for the elements forming a meta-ethnic frontier (religious conflict, etc.), together yielding more than 2000 data points. He then plots these results against the later emergence of states above a certain threshold size. Meta-ethnic frontiers, he finds, explain most – though not all – instances of the later formation of large states. The unexplained exceptions remain a problem, Turchin readily admits, and he explicitly solicits improvements to his theory. For now, however, he argues, what is crucial is that the meta-ethnic frontiers theory has much greater relative explanatory power than its main rival, Collins’ geopolitical approach. (A possible criticism at this point is that Turchin’s test of his theory may have involved some ad hoc adjustments. Already knowing the outcome to be explained (in this case, where states did, in fact, form), Turchin may have tweaked his parameters (for example, the choice of the boundaries of the sub-regions) in such a way as to confirm his theory. Such adjustments, often unintentional, are hard to avoid. Thus, other tests, in which the parameters are chosen without knowledge of the outcomes, would be desirable. Turchin, ever aware of methodological matters and modest in his claims, would almost certainly agree and, indeed, himself repeatedly encourages improvements on and challenges to his theories.)
If asabiya-generation on the frontier accounts mainly for the expansion of states up to a certain equilibrium, excessive population growth largely accounts for states’ subsequent collapse. Here Turchin borrows from and modifies Jack Goldstone’s demographic-structural account of the collapse of early modern states. Goldstone’s theory revolves around the indirect effects of Malthusian growth. That is, a growing population doesn’t lead directly to starvation, but to strains on state capacity, inflation, and both more intense elite exploitation of non-elites and greater intra-elite competition.
In the end, Turchin admits that he hasn’t been able to synthesize the various strands into a single unified account. This may be disappointing, but it also fits well with his refreshing modesty and, more than that, with his broader intellectual agenda. Turchin wants to bring to history the mind-set of the scientist. Theories are not (or are only rarely) complete or simply “right”; rather, we usually only have the choice between better and worse theories, those that explain more or less. Hence, in order to make progress, Turchin argues, it is crucial that we frame our arguments in as explicit a way as possible so that we can compare their explanatory power. Constructing mathematical models, he believes, is invaluable for these purposes of testing and comparison. Furthermore, history needs math because its dynamics often involve non-linear, lagged feedbacks – as demonstrated in the rise and fall of agrarian states. The underlying mechanisms of these oscillations are usually too complex to be captured by merely verbal models, and can only be grasped with the aid of explicit mathematical ones.
While becoming a leading ecologist, Turchin has somehow also found the time and energy to acquire vast erudition not only in the historical but also in the sociological, anthropological, and economic literatures. While he advocates a bold methodological position, the modesty with which he makes his case should absolve him of the charge of “intellectual imperialism” often thrown at earlier generations of social and natural scientists who encroached on historians’ grounds. As part of his attempt to reach out to historians, Turchin goes to considerable lengths to explain his models in ways that are generally comprehensible (though one may have to dust off little-used knowledge from high school or college math). Historians who focus on the many fields from which Turchin draws his case studies (France, Russia, China, Islam, Christianity, Egypt, England) may – or may not – find his explanations convincing. Regardless, he will surely welcome the scrutiny and debate.

Monday, August 3, 2009

Splitting or lumping - de gustibus non est disputandum?

I don’t read the American Historical Review, the premier journal of the historians’ professional association, very often. When I do, I frequently find myself not merely disappointed with particular articles, but deeply frustrated with the field as a whole. This is because so many historians see it as their task to be splitters, and just splitters. They try to find exceptions to others’ generalizations; they “interrogate” – in the current lingo – “grand” or “meta-narratives” they dislike. This is fine, as far as it goes. It’s the critical work necessary for any advance in knowledge. The problem is these historians rarely add to the criticism any daring new construction, which is equally necessary for the kind of knowledge we should be aiming for. They don’t offer new, truer, bolder generalizations and narratives to replace the old, discredited ones. Or perhaps they do, but only by insinuation (see Foucault’s negative Whiggism, his hints that everything is getting worse). That is, these historians are not lumpers – creating bigger pictures of reality. They revel in the specificity of their interests.
I am a lumper, an outsider in my own field. So when I recently read some articles in the AHR, articles interrogating, splitting narratives in the history of emotions (it should be acknowledged that cultural history is the home turf of the splitters), I again found myself frustrated with the whole approach. But, I told myself in resignation, this was ultimately a matter of taste. Some people just like to split, others to lump.
Is that the case, however? Between splitting or lumping non est disputandum? Do we just leave it at that?
I now don’t think so. At least a couple of methodological considerations suggest we ought to encourage historians to engage in more lumping and less splitting (or perhaps less splitting merely as an end in itself).
First, the goal of any science or intellectual endeavor should be to discover the simplest possible explanations of the broadest swathe of the world. If explanation A accounts for 99 “facts” about the world and explanation B accounts for the same plus one more, B is an improvement over A. It’s what we should aim for. It may not be possible to explain more than 99 facts in that area, i.e. explanation B may be out of reach. But it would remain a heuristic goal. The same goes for simplicity: if explanations C and D both explain something equally well, but D is simpler than C, we should prefer it over C (Ockham’s Razor). The upshot of this is that, all things being equal, a simple grand narrative is better than small or complex narratives. Now, there may not be any simple grand narratives that are also true. But I have the feeling that many historians, especially cultural historians, are asserting more than this. They are not only saying there aren’t grand narratives; they are saying, or at least suggesting, that we shouldn’t even search for them, we shouldn’t even maintain the simple grand narrative as a heuristic goal. They not only ascertain the ostensible messiness of the world; they appear to revel in it as well. But as far as I can tell, they have never provided, or even attempted to provide, any cogent reasons for abandoning the aim of achieving simple explanations of as much of the world as possible.
The second reason to prefer lumping over splitting has to do with ideas drawn from fractal geometry. This field studies “self-similarity” at different scales, for example the ways in which a cloud, a coastline, or a snowflake has the same shape (puffy, jagged, intricate) regardless of how closely you look at it. Because of this, fractals are said to be infinitely complex. Applied to history, this raises significant problems for the whole splitting project. If you want to split, just where do you stop? Why deconstruct only so far as level X? Why not keep going, to level X-1, X-2, etc. ad infinitum? If the world is infinitely complex (and this is what the splitting approach can sometimes teasingly hint at – in the spirit of Clifford Geertz’ “turtles all the way down” comment) then no real knowledge is possible, and no one should be writing anything. It only makes sense to stop – and to write – if you think the world is not infinitely complex, if you think there are identifiable regularities at some level. But once they admit this, the splitters have given up their game, or at least the spirit behind it. For if we can generalize about the world, then – see the first point above - we should try to make those generalizations as broad (and elegantly simple) as possible. Then it’s right back up the stack of turtles, as far as we can go.
I’m not suggesting the profession only needs David Christians, William McNeills and Jared Diamonds. We need lots of specialists, too. But what I think we could use less of is the urge only to split without building up, without offering daring grand narratives. Splitting alone, especially when it revels in destruction, is neither intellectually coherent (point 2) nor worthy of our intellectual aims (point 1).

Wednesday, April 22, 2009


Can historians learn anything from biologists? Jared Diamond’s 1997 book Guns, Germs, and Steel sparked a flurry of interest among at least some historians, who published a special forum in the American Historical Review and held panels at the annual AHA meetings. A more common response to Diamond, however, if I go by numerous conversations with historians, has been disapproval tinged with an almost visceral rejection. I surmise that this disdain derives from the book’s underlying biological premise: as with other species, Diamond argues, humans’ fates have been determined by the environment and the availability of natural resources. It may also have to do with Diamond’s emphasis on the long term and his relative lack of interest in individuals and events. None of these things – the biological roots of human behavior, environmental determinism, the disregard for particularities – historians can abide. In fact, however, most historians have probably not taken a stance one way or the other in regard to Diamond’s book – or to the growing number of intellectual encroachments by natural scientists onto terrain usually reserved for historians. Whether due to parochialism or indifference, we historians remain, as Daniel Lord Smail has put it, in the “grip of sacred history.” We still conceive of history as starting with civilization and written records some 5,500 years ago in Sumer.
The relatively generous attention paid to Diamond in fact only confirms the extent of the problem: Guns, Germs, and Steel was a gripping, popular (but not unserious) read. If it hadn’t been a best-seller, it almost certainly would not have earned the AHA’s attention. Less visible, but in many cases even more important, works by natural scientists usually go unnoted by historians. Since the 1980s, for example, several schools of biologists and anthropologists have been developing ambitious theories of “coevolution.” These approaches treat human culture as an evolutionary system in its own right and investigate its properties and its interactions with its genetic counterpart. They thereby hope to develop comprehensive, indeed potentially revolutionary theories of human behavior, something one might think would be of interest to historians. Yet a JSTOR search reveals that none of these books received even one review in a historical journal.
They deserve better. The following is a review of one of these projects, Robert Boyd and Peter Richerson’s 1985 book Culture and the Evolutionary Process. (The others are Luigi Cavalli-Sforza and Marcus Feldman’s 1981 Cultural Transmission and Evolution: A Quantitative Approach and William Durham’s 1991 Coevolution.)
Culture, as Boyd and Richerson (B & R) define it, includes all episodes of social learning of ideas and behaviors, whether by teaching or imitation. B & R provide considerable evidence that social learning of this type – and not individual learning or rationality, as neo-classical economics and rational choice approaches assume – plays a predominant role in human behavior. They point to the numerous cases of “cultural inertia,” in which people don’t respond to new circumstances, even for generations (see, for example, David Hackett Fischer’s excellent book, Albion’s Seed, on the persistence of different English folkways in America), and to psychologists’ plentiful evidence that people operate by various, often inaccurate, rules of thumb.
According to B & R, culture shares with genetic inheritance the three crucial ingredients necessary for an evolutionary system: variation, inheritance, and selection. That is, people have different ideas and act in a variety of ways; they pass along these cultural traits to “cultural offspring,” who usually include their biological offspring, but can also include friends, students, etc.; finally, some cultural traits get passed on more often than others (for reasons to be discussed below). B & R therefore call their model a “dual inheritance” theory.
Crucially, the two inheritance systems, while similar, are not identical. Cultural evolution allows for “acquired variation,” i.e. it is Lamarckian. In genetic evolution the behavior of an individual has no effect on the genes he passes on. With culture, however, an individual can learn something on his own or otherwise pick and choose from his cultural heritage. What he passes on to his cultural offspring has been changed. Additionally, in cultural evolution there can be many “parents,” not just the two of biological reproduction.
B & R distinguish between several different “forces” giving cultural evolution its directions. The first two, which belong together, they call “guided variation” and “direct bias.” Guided variation involves the interaction of individual learning, or innovation, and cultural evolution by social learning. Despite having culturally inherited certain ideas or behaviors, individuals are also capable of assessing their surroundings and options and developing a new response, one they did not inherit. For example, a medieval farmer stumbles upon a different way to plow his fields. If he can ascertain that this is an improvement over traditional methods – something that may not be easy to evaluate – he is then likely to pass on this new variant to his cultural offspring, in this case primarily his sons but perhaps also neighbors. Direct bias, on the other hand, is less innovative: the person does not invent a new response, but adopts one of the various options she has inherited from various cultural parents. However, a certain predisposition may favor – directly bias – one kind of cultural alternative over the others. In the cases of both guided variation and direct bias, criteria are needed to make individual judgments. And these criteria, B & R argue, must come from our genes, i.e. from biological natural selection. For this reason, they refer to these two forces as socio-biological. That is, cultural inheritance will track and reinforce biological inheritance.
With the other forces – which, B & R argue, are likely to be more important than guided variation or direct bias – this is not necessarily the case. The socio-biological forces depend on individual learning or discrimination: even in the case of direct bias, the individual has to make judgments about the available options, which bias to apply, and how to do so. But gathering such information has costs, which opens the door to other, less costly “forces” affecting social learning. Two of these are “indirect bias” and “frequency bias.” With the first, one individual identifies another whom he deems successful – an older brother, a village headman, a movie star – and copies many behaviors from him. Overall, this process is less costly because the first individual is not trying to assess which behaviors of the cultural parent have caused the latter’s success; he simply copies many or all of them. However, in some instances, B & R argue, costly “runaway” processes can ensue: people go to great lengths to dress like rock stars they admire, efforts that could never be justified in terms of clothes’ evolutionary selective power. The process is akin to the evolution of the peacock’s tail, in which an arms race over sexual attraction may impair the creatures’ survival. Frequency bias means that people simply copy the most frequent cultural variant, which will often prove to be a simple, efficient strategy.
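Frequency bias has a simple formal signature (it is often called conformist transmission in the later literature). If each naive individual samples three cultural models and adopts the variant carried by the majority of them, the frequency p of a variant changes as p′ = p³ + 3p²(1 − p), which pushes whichever variant is already more common toward fixation. A minimal sketch of that recursion (my illustration of the general idea, not B & R's own, more general models):

```python
def conformist_step(p):
    """One generation of conformist transmission: each naive individual
    samples three cultural models and adopts the majority variant among
    them.  The new frequency of variant A is P(at least 2 of 3 carry A)."""
    return p**3 + 3 * p**2 * (1 - p)

def fixate(p0, generations=50):
    """Iterate the recursion from an initial frequency p0."""
    p = p0
    for _ in range(generations):
        p = conformist_step(p)
    return p

print(fixate(0.6))  # a 60% majority is amplified toward 1 (fixation)
print(fixate(0.4))  # a 40% minority is driven toward 0 (extinction)
```

At p = 0.5 the recursion sits at an unstable equilibrium; any departure from an even split is amplified. That is what makes frequency bias cheap: the majority is usually a good bet, and tracking it requires no costly evaluation of the variants themselves.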
A final force is natural selection, not of genes, in this case, but of cultural variants. This arises because genetic and cultural evolution are asymmetrical. We inherit our genes from our mother and father and the same two individuals are often important for imbuing us with our ideas and behaviors. However, we often inherit cultural variants from many other sources as well (siblings, teachers, friends, religious leaders, public figures). These non-parental sources will become relatively more important as we age. If we only inherited culture from our parents, B & R argue, we might expect that those ideas and behaviors would track or conform to the biological impulses we inherited from them: for example, we would imbibe the idea that having large families is a good thing. However, the existence of asymmetrical strands of cultural inheritance means that ideas and values can spread that may run counter to our biological imperative (and hence to what our biological parents on their own would teach us). Thus, teachers and other professionals may spread the message that professional success – something they themselves have achieved, and which requires sacrifices of the time and energy necessary for physical reproduction – is of great value. A Darwinian competition would then ensue – between biological parents and teachers over whose ideas and values would spread faster. B & R make a convincing case that this kind of asymmetric inheritance and the resulting natural selection of cultural variants probably lie at the root of the current, extraordinary demographic revolution. People, especially in affluent countries, are having fewer and fewer babies. Biology and biological Darwinism would predict just the opposite: as resources increase – as they have for humans over the last century or more, especially in industrialized countries – birth rates should steadily increase.
In these cases, B & R say, the cultural variant “enjoy your own life, be successful professionally, don’t acquire these noisy, troublesome little creatures” has undermined the biological imperative to reproduce as much as possible.
Because of these final three forces – indirect bias, frequency bias, and natural selection of culture – cultural evolution will often come into tension with the dictates of biological evolution. They help to explain the internal conflicts that individuals experience, much the way Freud described the struggles between id and superego. They also distinguish B & R’s approach from a strictly socio-biological one and from William Durham’s 1991 Coevolution, which foresees greater – though still not complete – congruence between biology and culture.
Finally, B & R ask how cultural evolution itself could have arisen in the first place. The question is especially acute given cultural evolution’s frequent (biologically) maladaptive consequences. The generic answer is that as long as culture is overall biologically adaptive, its benefits outweigh its considerable costs. More specifically, culture may be expected to arise under particular environmental circumstances. If the environment remains constant for long periods, the best strategy is to hard-wire behavior in genes. This eliminates the costs associated with learning, either of the individual or social kind. If, on the other hand, the environment changes significantly and frequently, then genetic hardwiring is the wrong strategy – and so, given its inertia, is social learning. Under these circumstances, individual learning is the best option. Social learning – which allows for limited individual learning and variation – is best when the environment remains fairly constant but changes to some degree. In later work, B & R suggest that this was precisely the environment during the ice ages starting 2.5 million years ago and lasting until 12,000 years ago.
Culture and the Evolutionary Process is a challenging work. B & R rely frequently on mathematical models, which will not always be easy to follow unless one already has considerable facility with such methods. However, the authors always take the trouble to walk the reader through the main steps and, most important of all, the conclusions of the models. They also offer tangible examples from history and other social sciences to illustrate their points. The book should be required reading for anybody interested in “big” or “deep” history. Even at smaller time scales, the book offers a very stimulating framework for analysis, especially for thinking about broad patterns of social and cultural development. So, can historians learn anything from biologists? Yes – if they are willing.

Monday, March 30, 2009

The Horse, the Wheel, and Language

David Anthony's book by this title is about the search for - and ways of life of - the original speakers of Indo-European, the tongue ancestral to languages spoken by about three billion people today. Both his account of the detective work and his conclusions are highly impressive.
Anthony is unusual in that he combines his training in archeology with a great familiarity with historical linguistics, which allows him to bridge normally distinct fields and to piece together disparate clues about the original Indo-European speakers. I was continually struck by just how much work has already been accomplished in each of these disciplines. For example, historical linguists, by comparing extant Indo-European languages and relying on rules of language change, have developed a vocabulary of several thousand words (!) from the mother language, even though it was spoken between perhaps 4500 and 3500 BCE and never recorded in writing. Even more impressive in some ways was the sheer attention devoted by archeologists to these archaic cultures. Hundreds, and even thousands, of sites from each of dozens of different cultures living around the Black and Caspian Seas (and no doubt elsewhere) have been thoroughly studied and catalogued. So a basic impression for me was the same one I got when I read Steven Pinker's book How the Mind Works: namely, astonishment at just how much we do know. It's amazing what our economic surplus and academic specialization have permitted. This may sound trivial, but it provides a useful response to a common refrain I hear: we just don't know enough about X to take a stance one way or another. My friend James resorts to this tactic whenever we discuss evolutionary psychology. Because it sounds plausible (the brain is complicated, after all, and early man did leave few traces, so how much do we really know about it anyway?), this is often an effective way not to rebut the other side, but to end the discussion nonetheless. My readings of Pinker and Anthony make me even more resistant to this response.
So what are some of Anthony's substantive conclusions? The speakers of Indo-European were foragers living in river valleys north and west of the Caspian and Black Seas. They likely adopted cattle, sheep and goat herding from peoples living on the west shores of the Black Sea circa 5000 BCE. (These latter peoples, by the way, living along the lower Danube and east of the Carpathians, had the most advanced metal-working and the largest settlements *in the world* between 4000 and 3500 BCE. It never became clear to me why this region was so advanced.) Around 4000 BCE, the Indo-Europeans probably became the first people in the world to domesticate the horse. By perhaps 2000 (or was it 3000?), they had imported the wheel from the Near East (at first a solid wheel, useful only for slow-moving, ox-drawn carts, later, modified by the Indo-Europeans to include spokes, part of the revolutionary war-chariot).
The domestication of the horse and the importation of the wheel changed everything. The former allowed much larger herds of animals to be controlled. In addition to a general increase in wealth, this led to the emergence of significant social status differences (inferred mainly from new burial practices) and various related social and political practices. For example, contractual relations developed when more marginal herders were compelled, by bad luck or having lost parts of their herds to theft, to borrow from wealthier herders. The wheel (in the form of ox-drawn carts) allowed the Indo-Europeans to range much more widely over the steppes, since they could bring provisions with them. This more nomadic life - along with the temptations posed by cattle-rustling - led to the development of various host-guest practices. Nomadic groups learned to distinguish between acceptable visitors passing through their land (guests) and hostile interlopers. Anthony argues that Indo-European spread - into Greece, up the Danube and Dniester, then to the east, all the way to the Altai Mountains, and also down into Iran and India - only partly by military conquest (facilitated by horseback riding). Even more important was the economic power and status these wealthy herders enjoyed. These factors convinced neighboring peoples to convert to a new way of life and language.


Thursday, March 26, 2009

cheap and easy morality

I'm trapped in Golden by a ferocious spring blizzard which seems likely to keep me here until tomorrow morning (the hilly road to Boulder is treacherous in the snow, as I discovered when I got stuck, sideways, on the road out of Boulder during the last storm). With hours to kill in my office and only one other colleague in the building, this seems like a good opportunity to catch up on my neglected blog. Though I said I would not provide any political commentary, I think the following point, perhaps bordering on politics, but only in a general way, deserves mention and a bending of the rule.

I want to introduce a new term: cheap and easy morality. Now, the idea has probably been identified before, but I can at least claim independent invention since I developed this - or at least half of it, as I will shortly explain - on my own.
The part that I didn't invent is cheap morality. This idea has been brought to my attention at least three independent times: first, a few years ago at the European Forum, by Hartmut Kliemt, who used the exact term; second, by my CSM friend and colleague James Jesudason, who talks of "low cost morality;" and third, by my Harvard grad school friend, now at Stetson University, Eric Kurlander. Cheap morality means the expression of ethical views that earn social approval, but cost the person nothing. A trivial example might be all the people, especially numerous around Boulder, who have "Free Tibet" bumper stickers on their cars. Why do these people put these stickers on their cars? Is it really to free that far-off country, which they might even be hard-pressed to find on a map? What would its liberation mean? Or are they doing this to express something about themselves? Some might include in the category of cheap morality the expressed concern - even obsession, especially at universities - of affluent whites with multiculturalism and race and gender equity. Being affluent and, in many cases, having job security, they devote themselves to these issues and spend little thought or effort on more challenging problems (more challenging especially to their own positions) such as, say, class inequality. I would include most pacifists in this category of cheap moralists, as well.
I recognize that there are some problems with distinguishing cheap morality from other, more legitimate kinds. Surely, we shouldn't require that somebody make a personal sacrifice in order to hold a position on some question. I find the slaughter in eastern Congo very disturbing - and would say so - even though I can't do anything about it. During the second world war, I would not have called it cheap morality if somebody had worn a button saying "Free Poland" or "Save the Jews." So why do I feel that "Free Tibet" is different? Perhaps the acid test has to be what motivates the morality, or its expression. Is the main, though perhaps subconscious, motive to appear to be caring, decent, humanitarian, etc.? Here there are clearly connections to the expressive theory of voting. This is the idea that voters are not actually trying to influence the outcome - since they know, or should know, at least, that their one vote will never be decisive. The outcome of the election will happen regardless. What people are doing is expressing something about themselves, about their values. This might explain why, for example, lots of wealthy people vote for the Democrats (and why many poor vote for the Republicans): regardless of how I, Wealthy Person, vote, the election will have its outcome. In either case, I can feel that I, Wealthy Person, voted for the environment, the poor, etc.

Now I want to expand cheap morality into cheap and easy morality. Easy morality means that people evaluate situations in terms of motives and not in terms of consequences. The former is easier than the latter - and also less valuable. This became clear to me in a recent conversation with a highly intelligent woman concerned about the environment. She praised the German government's subsidies for solar panels. I pointed out that these subsidies for solar energy in perpetually cloudy Germany had dramatically raised the price of silicon (used in the panels), thus putting solar panels out of reach of many people and companies in much sunnier parts of the world than Germany. This intelligent person just huffed and refused to address my point. For her, so it seemed to me, good intentions outweighed everything, indeed, may have been the only thing. I believe this is a very common tendency in regard to all sorts of ethical and political questions. As long as the person's heart is in the right place.... I believe this is a benighted way of approaching the world because the world is a complex place involving all sorts of trade-offs (for good arguments in favor, basically, of an "ethic of responsibility," i.e. considering consequences, see Max Weber's essay Politics as a Vocation). Because it is so widespread, so easy, and perhaps even natural in some sense (i.e. it comes naturally to people, whereas considering consequences, especially when good intentions yield bad results, and especially when the consequences are far off, can be difficult, can run against our grain), I think teachers of all kinds - from parents to official pedagogues - have an obligation to encourage utilitarian, consequentialist thinking. Lessons in how good intentions can go bad, and bad intentions produce good, should be a major part of moral upbringing. This is not to say we should *only* consider consequences (no, I would not sacrifice 99 people to save 100). 
But since evaluating motives seems to come easily to people, we need to cultivate the other form of moral judgment. It's more of a learned tendency.
The two sides - the cheapness and the easiness - go hand in hand. Both relate to motives and to the expression of ostensibly good intentions. In the first case, it's your own; in the second case, it's somebody else's.

Thursday, March 5, 2009

Crone is a phone

A knowledgeable friend of mine tells me that Crone's ideas about the formation of European states (referred to in my post about steppe nomads) are horribly outdated, ignoring 50 years of scholarship. Rather than some Germanic element contributing to the unique European sequence, it was all heavily influenced by Rome. The German tribes had had hundreds of years to absorb Roman culture and institutions. Even the Germans who beat the Romans in the Teutoburg Forest in 9 A.D. had been trained in the Roman army and spoke Latin.
In fact, the greater the Roman influence the happier I am since the project I mentioned about big history has to do with how the Mediterranean environment - and hence Rome - came to influence medieval and early modern Europe.
Thanks for the clarification, Robert!