Tag Archives: The humanities

Hannah Arendt

A few weeks ago, my undergraduate seminar spent the week reading selections from Hannah Arendt. I’m no Arendt specialist, of course, and anyway the specialist issues weren’t the point– the seminar was about the Enlightenment tradition and what happened to it over the nineteenth and twentieth centuries. But the re-encounter reminded me how much I admire Arendt. She’s Exhibit A-1 for us embattled twenty-first-century humanists, an example of what our kind of knowledge can do. Her story also provides some clues about why we’re not doing it here in 2015.

In case you’re unfamiliar with her, Arendt was a German Jew, born in 1906, who did a doctorate in philosophy, escaped the Holocaust by inches, and became one of America’s most prominent intellectuals of the 1950s and 1960s. The bombshell event in her career was her 1963 Eichmann in Jerusalem: A Report on the Banality of Evil, which came out first as a New Yorker series, then as a free-standing book. The book was a shocker, among other reasons because it argues (as the subtitle indicates) that evil deeds don’t necessarily come from evil people. Arendt’s Eichmann was a pathetic bureaucrat and careerist, lacking both an ethical core and the capacity for clear thinking. It’s mainly his emptiness that explains his crimes.

That idea pissed people off in 1963, and it still does fifty years later. Serious scholars are still out looking for evidence that Eichmann was actually a criminal mastermind, “one of the greatest mass murderers in history,” in the words of the American political theorist Richard Wolin. Wolin emphasizes how much is at stake in that assessment: “if Eichmann was ‘banal,'” as Arendt claimed, “then the Holocaust itself was banal. There is no avoiding the fact that these two claims are inextricably intertwined. Arendt’s defenders would have us believe, counter-intuitively, that it was the mentalité of dutiful ‘functionaries,’ rather than impassioned anti-Semites, that produced the horrors of Bergen-Belsen, Treblinka, and Auschwitz.”

Well, yes, that’s pretty much Arendt’s point, though she wouldn’t use quite those words. As she sees it, the Holocaust was actually not a unique historical event. Genocide was standard procedure for the ancient Greeks and Romans, those fathers of our Western Civilization; if your country lost a war, she reminds us, the winner killed or enslaved you. That form of genocide ended with modernity, but other forms of mass violence took its place: the “administrative massacres” of European imperialism, the resettlement schemes of Stalinist totalitarianism, industrialized warfare.

Even more shocking, Arendt tells us to expect more Holocaust-like episodes. Modern industrial society needs steadily fewer actual human beings, as the machines get steadily better at doing what once was our work. That makes most of us superfluous, killable. Back in the old days, you couldn’t kill off masses of your own people without reducing your own standard of living, because then who would do the productive work? Now the robots take care of the productivity; most of us are just surplus mouths to feed.

Arendt sees another, related dark side in the modern condition. As the robots replace us in the workplace, we’re all increasingly uncertain about our future paychecks– and we’re willing to do an awful lot to keep them coming. Meaning it’s the decent, respectable family man who’s likely to go the farthest, because he’s got the most to lose. Arendt explains: “under the pressure of the chaotic economic conditions of our time,” the caring husband and father became “an adventurer who with all his anxiety could never be sure of the next day…. It turned out that he was willing to sacrifice conscience, honor and human dignity for the sake of pension, life-insurance, the secure existence of wife and children.”

It’s a lesson about the idiocy of the super-villain theory of history. (See here for more on that.) Arendt tells us we’re all vulnerable to these specifically modern pressures, all potential evil-doers; it’s childish to keep dividing the world between good guys and bad, and we should stop doing it. Back in 1963, Arendt could take it for granted that her readers understood that idea– how the hell can it be controversial in 2015, when all the economic chaos has become so much worse??

Maybe one reason is, Arendt’s take on industrial capitalism comes pretty much straight from Karl Marx. For her as for Marx, crisis is baked into the modern economic system: modern productive forces guarantee moments of over-production and consequent lay-offs. More fundamentally, like Marx she sees a system that isn’t really designed for human beings. It has its own non-human logic, and if we don’t fit into one of the slots it offers, it’ll kick us to the curb. Those aren’t things we’re comfortable hearing in 2015, and there’s lots of pressure on contemporary intellectuals not to say them.

But if Arendt’s diagnosis sounds like Marx, her prescriptions don’t. She doesn’t talk revolution or imagine some gauzy future utopia. Instead, she pushes Culture, of the heavy-duty, old-time sort: reading the Great Books, wrestling with the Big Ideas, even learning the dead languages. She’s constantly tossing around Greek words and fancy philosophical references, and she expects us to look them up if we don’t already know them.

What good’s that supposed to do in a world of robots and mass killing? Arendt’s answer is simple and basic: it will teach us to think clearly and act well, and she gives us Eichmann as the ultimate counter-example. As she presents him, he was neither an illiterate nor a slobbering sadist; instead, he was simply a man who could apply only empty phrases to his situation, because he’d never acquired the ability to think seriously. In other words, he wasn’t just a “dutiful functionary” (Richard Wolin’s summary of Arendt’s view), but something more frightening– a representative modern man, full of off-the-shelf clichés and plastic reasoning, incapable of seeing through his fake words, incapable even of putting them into logical order.

As we roam our landscape of talking-heads, cable news, and politician sound bites, the Eichmann example should scare the shit out of us.

So I take Arendt’s ultimate message to us as something like this: Humanistic knowledge isn’t there just to beautify our lives or to round out our practical doings, it’s not enrichment. Instead, it’s brutally practical– it’s what separates us from “one of the greatest mass murderers in history.”

Can you get any more practical?

Do humanities professors dream of electric sheep?

Over the weekend, our graduate students put on a fantastic one-day conference, and it included a faculty roundtable discussing the digital humanities. The line-up included one super-enthusiast, two moderates, and me as the designated Mr. Negative– which in itself tells you where the window of debate is now located. I mean, I blog, I occasionally tweet, I push my students to consult Wikipedia for the background facts on what we’re studying. Take away digital photography, and I wouldn’t last a week in the archives; take away my morning dose of internet news, and I’m a wreck. The digital revolution has gone awfully far if someone like me gets cast as the voice of caution and doubt.

In the Teaching section here and on my Academia.edu site, I’ll post a cleaned-up version of my formal comments. Here I’ll offer a short version of those, mixed with some thoughts that came to me during the (outstanding) discussion that followed the panel’s presentations.

I won’t go on much about my own super-enthusiast side, except to say it’s real. As it happens, my particular weakness as a scholar coincides with some dramatic strengths of the new digital resources. I’ve always had trouble getting dates and details exactly right, and the old printed reference bibliographies have always just left me depressed and listless– anyway the specialized resources I usually need aren’t even available in the universities where I’ve taught. Think of it as my kinky version (not my only version, I hasten to add…) of a thrill we’re all experiencing these days: suddenly I’ve got a cheap, easy electronic solution to a dark, secret, personal weakness.

But the storm warnings also seem to impress me more than most of my colleagues. For the PG-13, super-scary version, check out the philosopher Tim Mulgan’s Ethics for a Broken World: Imagining Philosophy After Catastrophe. Among many other issues, Mulgan thinks seriously about the reality we all know lurks behind the digital wonderland– namely, it could go poof at any moment, because of a war, a breakdown of the electrical system, evil super-hackers, an NSA Stuxnet-type operation gone wrong, or dozens of other altogether-possible scenarios.

So Mulgan imagines his post-catastrophe philosophers having to make do with what he calls the Princeton Codex– scrambled bits and pieces of Princeton University’s paper library that survived climate change and its attendant disasters, in roughly the same messed-up way as ancient European literature survived the Dark Ages.

Except for one big difference. Everything from the ancient world at least had a fighting chance of making it through the bad times, and a lot was waiting there for people like Thomas Aquinas and Copernicus to sort through and build on when the dust settled. In 2015 we’re probably already beyond that point. A steadily greater percentage of our knowledge is now preserved only up there in the cloud, and pretty soon it will be most of our knowledge; if it goes, it’s gone for good.

So that’s the Total-Catastrophe worry, but there’s also the Right-Here-Right-Now worry: digital knowledge reshuffles the sociology of knowledge, in some ways for the better, in others for the worse. At this point we don’t know how much worse, but maybe quite a bit.

On the plus side, the digital world gives new reality to old ideals of equality and fraternity. Like everyone else, I now connect directly and easily with scholars all over the world, people I would never have encountered in the old days. And I get to publish my thoughts in places like this without awaiting the approval of editors or reviewers. Sure, the hierarchies and barriers still exist, but they’re way weaker than they used to be.

But as Alexis de Tocqueville explained long ago, the remaining element in the great French trinity– liberty– doesn’t necessarily play well with the other two, and Tocqueville would have loved thinking about liberty’s tormented place in the new digital regime. “Tormented,” because our online doings are watched 24/7, by governments, insurance companies, angry teens, employers, and all sorts of others. Real havoc regularly ensues– health coverage rejected, jobs lost, visas denied, legal trouble, personal humiliations.

In the nature of things, life in this new panopticon entails controlling what we say and do, and even what we learn– multiple authorities now monitor our visits to informational websites. It’s the most effective kind of censorship, the kind where we do the real work ourselves, each of us monitoring our own utterances.

That seems to be part of a larger problem, which we’ve barely started wrestling with: digital culture binds us extra-intensely to our late-capitalist social order, not only because the individual bonds are so strong, but also because there are so many of them. Of course we rely on the corporations that supply our computers, browsers, storage, electricity, etc. etc. etc. But we also find ourselves slotted into mini-capitalist-entrepreneur roles– each of us bloggers now worries about generating traffic, attracting readers, speaking to our audience; nowadays we’re all minor-league versions of the hustlers who produce The Big Bang Theory.

Higher up the food chain, the resemblance gets even creepier. Here’s the former director of a major digital humanities project, a well-established project at a great public university, speaking some years ago about his job: “A main part of Thomas’s role as Director is to write grants, as well as to seek out appropriate public and private agencies, whose interests match the VCDH’s projects. He compares it to finding funds for a venture capital firm.”

So in this world of surveillance, audience-seeking, entrepreneurship, and venture capital, what happens to the humanists’ trouble-making functions, our capacity to raise harsh questions and social criticism?

My own answer is, so far, so good. Anyone who reads these posts will understand how liberating I’ve found the new media. But the storm clouds are there, and they may get very dark, very fast.

 

Historians and irony, Part II

My last post talked about historians’ irony, which I presented as a way of approaching the past– a tendency, not a specific interpretation. Irony-friendly historians tend to see people as having a limited handle on their circumstances, and even on their own intentions. Not knowing the world or ourselves very well, on this view, we humans regularly blunder into tragedy, generating processes we can’t control and outcomes we didn’t want. We don’t know what the fuck we’re doing.

I also suggested that irony of that kind is out of fashion nowadays. Not among all historians, and not 100 percent among any historians– as I said last time, we can never give it up altogether, because we know more than the people we study about how their stories turn out. But historians and irony are mostly on the outs right now, and that counts as something important about our era of historical writing. Open a recent history book, and you’re likely to encounter words like “contingency” and “agency.” Even late in the day, these words tell us, things could have gone differently, and individual decisions made a real difference. These words also tell us not to condescend to people in the past– not to view them as the helpless puppets of bigger forces, not to dismiss their efforts, hopes, and ideas, good and bad alike.

Things were REALLY different back in the mid-twentieth century, and they were still mostly different in the mid-seventies, when I got my PhD. In those days, the talk was all about long-term processes, societal changes, and the blindness of historical actors, and you found that talk pretty much everywhere in the profession, among Marxists and Freudians on the political left, modernization theorists and demographers in the middle, political historians on the right. These scholars mostly hated each other, but they agreed on a basic interpretive stance: big forces trumped individual wills.

So what happened? How did the history biz go from mainly-ironic to mainly-non-ironic? The question matters, because it touches on the ideological functions of history knowledge in our times. Mainly-ironic and mainly-non-ironic histories provide different lessons about how the world works.

Of course, some of the change just reflects our improving knowledge of the past. We talk more nowadays about contingency because we know so much more about the details of political change. We talk more about the agency of the downtrodden because we’ve studied them so much more closely– now we know that serfs, slaves, women, and other oppressed groups had their own weapons of small-scale resistance, even amidst terrible oppression. They couldn’t overturn the systems that enclosed them, but they could use what powers they had to carve out zones of relative freedom, in which they could live on their own terms.

And then, there’s what you might call the generational dialectic. Like most other intellectuals, we historians tend to fight with our intellectual parents– so if the mid-twentieth-century historians were all into big impersonal forces and long-term processes, it’s not surprising their successors looked to poke holes in their arguments, by pointing out all the contingencies and agency that the previous generation had missed. That’s one of the big ways our kind of knowledge advances, through criticism and debate. (For a discussion of this process as it works in a neighboring discipline, see here.)

So there are plenty of reasons internal to the history profession that help account for irony’s ebb– and that’s without even mentioning the decay of Marxism, Freudianism, and all those other -isms that tried to explain individual behavior in terms of vast impersonal forces. Almost nobody finds those explanatory theories as persuasive as we once did, in the history department or anywhere else.

But having said all that, we’re left with an uncomfortable chronological juxtaposition: the historians’ turn to mainly-non-irony coincided with the circa-1980 neo-liberal turn in society at large, the cultural revolution symbolized by Margaret Thatcher in Britain and Ronald Reagan in the US. There’s a substantive juxtaposition as well: while we historians have been rediscovering agency among the downtrodden and freedom of maneuver among political actors, neo-liberal ideology has stressed individuals’ creativity and resourcefulness, their capacity to achieve happiness despite the structures that seem to imprison them. Unleashing market forces, getting people off welfare, reducing individuals’ reliance on public resources– these all start from the presumption that people have agency. They know what they’re doing, and they should be allowed to do it.

In other words, Edward Thompson’s warnings against “the enormous condescension of posterity” weirdly foreshadow various neo-con one-liners about how social programs and collective goods condescend to the disadvantaged. (For an example, check out George Will and George W. Bush talking about cultural “condescension.”)

Which of course is a pretty ironic thought, given that Thompson was a committed political activist and brilliant Marxist theorist. But if it could happen in the 1950s, it can happen now: intellectuals who hate each other and disagree on many specifics can nonetheless be teaching the same basic ideological lessons.

To me this suggests it may be time to rethink concepts like contingency and agency, or at least re-regulate our dosages. Maybe our alertness to agency has diminished our sensitivity to tragedy, to the ways in which circumstances really can entrap and grind down both individuals and whole communities. Maybe we need to think more about the long chains connecting specific political actions and constricting everyone’s freedom.

Maybe we historians need to stop being so damned optimistic!

 

Historians and irony, Part I

We historians have a long, intense, up-and-down relationship with irony, the kind that merits an “it’s complicated” tag. We argue with irony, shout, try going our own separate way– but the final break never comes, and eventually we and irony always wind up back in bed together. Like all stormy relationships, it’s worth some serious thought. (Note for extra reading: like pretty much any other historian who discusses irony, I’ve been hugely influenced by the great historian/critic Hayden White— when you have the time, check out his writing.)

Now, historians’ irony doesn’t quite track our standard contemporary uses of the word. It’s not about cliché hipsters saying things they don’t really mean, or about unexpected juxtapositions, like running into your ex at an awkward moment.

No, we historians go for the heavy-hitting version, as developed by the Ancient Greeks and exemplified by their ironist-in-chief Oedipus Rex. In the Greek play, you’ll remember, he’s a respected authority figure hot on the trail of a vicious killer– only to discover that he himself did the terrible deed, plus some other terrible deeds nobody even imagined. Like most of the Greek tragic stars, he thinks he’s in charge but really he’s clueless.

You can see how that kind of irony appeals to historians. After all, we spend a lot of our time studying people who misjudged their command of events– and anyway, we know the long-term story, how events played out after the instigators died. Most of the leaders who got Europe into World War I thought it would last a few weeks and benefit their countries. By 1918 four of the big player-states had been obliterated, and the ricochet damage was only beginning– Stalin, Hitler, the Great Depression, the atomic bomb, and a whole trail of other bad news can all be traced back to 1914.

That’s why our relationship to irony never makes it all the way to the divorce court. It’s basic to what we do.

But there are other sides to the relationship, and that’s where the shouting starts. We historians don’t just confront people’s ignorance of long-term consequences. There’s also the possibility they don’t understand what they’re doing while they’re doing it. That possibility takes lots of forms, and we encounter them in daily life as well as in the history books. There’s the psychological version, as when we explain tough-guy behavior (whether by a seventeenth-century king or twenty-first-century racists) in terms of childhood trauma or crises of masculinity. There’s the financial self-interest version, as when we believe political leaders subconsciously tailor their policies to their career needs.

And then there are the vast impersonal forces versions, what we might call ultra-irony, where historians see individuals as powerless against big processes of social change. That’s how the Russian novelist Leo Tolstoy described the Napoleonic wars, and how the French philosopher Alexis de Tocqueville described the advance of democracy— efforts to stop it just helped speed it up. Marxist and semi-Marxist historians have seen something similar in the great western revolutions. Those fighting tyrannical kings in 1640, 1776, and 1789 didn’t think they were helping establish global capitalism– many hated the whole idea of capitalism– but their policies had that effect all the same.

You can see why historians have such a fraught, high-voltage relationship with ultra-irony interpretations like these. On the one hand, sure– we all know that many social forces are bigger than we are; we laugh at those who try to stop new technologies or restore Victorian sex habits; we know we’re born into socio-cultural systems and can’t just opt out of them.

On the other hand, historical practice rests on evidence, documentation– and where do we find some president or union leader telling us he did it all because his childhood sucked? How do we document vast impersonal forces? Ironic interpretations require pushy readings of the documents– speculation, going beyond what the evidence tells us, inserting our own interpretive frameworks. Nothing makes us historians more jumpy.

There’s a deeper problem as well: interpretations like these diminish human dignity, by telling us that people in the past didn’t know what they were doing or even what they wanted to do. If we accept these interpretations, we deny agency to historical actors, belittle their ideas, dreams, and efforts, mock their honesty and intelligence. We dehumanize history– the human actors are the pawns, the vast impersonal forces run the game.

Those are serious criticisms, and they’ve been around since the nineteenth century.

But the interesting thing is, their persuasive force rises and falls over time. You’ll have a whole generation of historians who find ultra-irony persuasive and helpful; it feels right, and it seems to open up exciting new research questions. Then the tide shifts, and historians become more concerned with agency. They listen closely to historical actors’ own views of who they were and what they were doing.

By and large, the mid-twentieth century fell into Phase 1 of this cycle– it was a time when historians saw irony everywhere and paid lots of attention to big impersonal forces. Marxism was riding high, but so also were the other -isms: Freud-influenced historians saw unconscious drives pushing people to act as they did; Weberians saw the experience of modernization behind political and religious movements. “Underlying causes” were big, and we viewed participants’ own accounts with suspicion– we assumed they didn’t understand their own motives or circumstances.

But that changed in the 1970s, and for the past thirty years we’ve been deep in Phase 2, the no-irony phase. We’re concerned with taking historical actors seriously and with avoiding what a great Marxist historian called “the enormous condescension of posterity.” We believe in “agency”– meaning, from the top to the bottom of the social scale, people can help shape their own destinies.

What does it all mean? I have a few thoughts, but I’ll wait until the next post to lay them out– stay tuned!

Patrons of the arts

“When bankers get together for dinner, they discuss Art. When artists get together for dinner, they discuss Money.” That’s the Irish playwright Oscar Wilde, speaking to us from around 1900. That was during the world’s previous great Gilded Age, and now that we’re deep into a new one, we humanists need to pay attention, even here in non-artistic, non-imagination-centric corners like the History Department. That’s because Wilde raises one of the basic questions we should be asking about ourselves: what’s our relationship to money and the powerful people who have it?

In wisecrack format, Wilde sums up one of the classic answers. Rich people love the arts, and artists (or historians, or philosophers– you get the idea) need money. Usually we can’t get that money selling our wares to ordinary people, since they have other needs to cover first, like housing, clothing, and food. Like it or not, we’re in a luxury business, selling expensive, delightful add-ons that make life better but aren’t needed to keep it going. We have to sell to the same folks who buy the other luxury products.

Of course the selling is more direct in the art world. Rich patrons interact directly with artists, and sometimes they tell the artist what to produce– a portrait of the kids, a design for a new home, a new opera. In academia, there are intermediaries. Donors give their money to institutions, which then dole it out to individual professors and researchers according to the institutions’ own guidelines and standards. But it’s basically the same process, rich people paying for cultural production.

Usually that doesn’t mean bad art or ideas, au contraire. Many of the rich have had good educations, and anyway they don’t have to care what other people think– they can make the adventurous calls, not just the safe ones. The Rockefellers created New York’s Museum of Modern Art back when modern art seemed crazy and dangerous, and that openness to the new has been a standard pattern since the Renaissance. Check out the seventeenth-century painter Caravaggio for an extreme example. He was gay, violent, and young, and he painted sacred scenes in wild new ways– but he received huge support from all sorts of Catholic big-shots.

But it seems that push always eventually comes to shove, and then the dark sides of artistic/intellectual patronage come into view. I’ve written here already about the case of Steven Salaita, whose appointment the University of Illinois overturned after wealthy donors complained about some of his tweets. And you’ve probably heard about the billionaire Koch brothers, sophisticated and generous patrons of the New York City Ballet and other cultural institutions, who’ve also donated tens of millions to various universities– but with strings attached: in return for the money, at least in one case, they’ve demanded that the university teach ideas congenial to them, and they may have demanded a say in the faculty appointment process.

In other words, the rich aren’t just buying aesthetic pleasure– they’re also investing their money, and like all investors they expect a return.

Now there’s a new example to ponder, more disturbing in that it concerns an especially sympathetic figure– the hedge-fund manager George Soros. Unlike most of the other modern billionaires, Soros has genuine intellectual credibility– before he got rich he did a real PhD, in a hard-core humanities discipline, and he’s used his money to support various admirable causes. He’s even helped create a whole new university in his native Hungary, devoted primarily to the humanities and social sciences.

So to a humanities professor like me, Soros is a good guy, exemplifying the best sides of cultural patronage.

But now Soros has joined the war-pushing business that’s so popular these days, calling for tougher European action in the Ukraine: “Europe is facing a challenge from Russia to its very existence,” he tells us; “the argument that has prevailed in both Europe and the United States is that Putin is no Hitler,” but “these are false hopes derived from a false argument with no factual evidence to support it;” all European resources “ought to be put to work in the war effort,” because “in the absence of unified resistance it is unrealistic to expect that Putin will stop pushing beyond Ukraine when the division of Europe and its domination by Russia is in sight.”

Whatever you may think about the Ukraine situation, there’s a lot here to weird you out. There’s the casual talk of going to war, as if launching a serious European war wouldn’t be one of the all-time human disasters. There’s the full-court demonization of our enemies, as monsters with whom it would be folly–“unrealistic”– to negotiate. We’ve had fifteen years of this kind of rhetoric– has it produced anything but disasters?

And then there’s the strange venue that Soros selected for his call to arms: the New York Review of Books. Most of those who encounter this site will know all about the New York Review, but in case you don’t, it’s the publication that pretty much encapsulates humanities department thinking in the US. Every two weeks, it offers extended reviews of academic books, along with one or two pieces of sophisticated political commentary; professors write most of these, but they write with educated-outsider readers in mind. So people like me read it to learn about the new trends in English or Art History, or about debates on the origins of the American Revolution– it’s a way to get up to at least amateur speed on interesting topics without doing the heavy reading yourself, a virtual coffee house for academics, where we all meet up.

Which raises the question, why is a call for European leaders to get tough appearing there, rather than in the Frankfurter Allgemeine Zeitung, Le Monde, or the New York Times?

Now, I have no clue what Soros has in mind with the substance of his warfare talk. Maybe he actually believes the comic book, super-villain-on-the-loose worldview he’s pushing, or maybe he has some money-making irons in the Ukraine fire (iron Maidans, as it were…), or maybe some mix of the two– who knows?

But we can do better guessing about that last question, the why-the-New York Review question. Because whatever else is going on, Soros is broadcasting to an audience made up mostly of us humanities professors and various humanities-adjacent types; he apparently wants us along on his foreign policy crusade. It’s the classic good news/bad news story. The good news: our collective opinion seems to matter in legitimating an enterprise of this kind, perhaps more than most of us realize. We’re worth courting. The bad news: when it comes to cultural patronage, the good guys like Soros give as much thought as anyone else to the returns their investments will bring.

My man Clausewitz

Some weeks ago, I described my admiration for the mid-Victorian novelist Anthony Trollope. I fall way outside Trollope’s target demographic, which was conservative, Church-of-England-style Christians. But I find myself re-reading him often, and learning from him. It’s been a lesson in the limited importance of literary intentions, both authors’ and readers’. We don’t know what books are going to matter to us, just as authors don’t know whom they’re going to reach, or how.

Today, I want to discuss another literary enthusiasm I’ve recently developed, which has surprised me just as much: it’s for Carl von Clausewitz, the early nineteenth-century Prussian military philosopher.

Clausewitz was a theorist who also walked the walk. He joined the Prussian army at age twelve, and for the next twenty years he fought in all its wars against revolutionary and Napoleonic France– the biggest, bloodiest wars Europe had seen up to that time. But he made his superiors jumpy, and they eventually parked him in the Prussian military academy, where he taught future officers, honed his theories, and worked away at his enormous book On War. It still wasn’t done when he died, but his devoted widow assembled the pieces, and it became an instant classic. It’s still taught at military colleges around the world, including our own West Point.

Even if you haven’t read Clausewitz, you’ve probably heard some of the snappy phrases he invented, like “the fog of war” and “war is the continuation of politics by other means.” There are dozens of other one-liners that aren’t as well known but ought to be. In fact he was something of a literary genius– he carries you along as you read, and you find yourself reading longer stretches of the text than you’d planned. Like other great writers, he forces you to look at the world in new ways.

That literary oomph turns out to be more common than you might expect among history’s great generals, and Clausewitz himself explains why: war “may appear to be uncomplicated,” he tells us, but actually it “cannot be waged with distinction except by men of outstanding intellect.” To make his point, he tosses in some startling comparisons. In some ways, he says, the good commander resembles the poets, painters, scholars, and intellectuals. Like them, he has to use imagination and insight into the human condition, as well as the specific skills and disciplines of his art.

Clausewitz’s reasons get to the heart of his ideas about war– namely, that it’s a really, really complicated business, which even the geniuses can’t fully master. The mediocrities don’t stand a chance.

Sure, he tells us, from a distance “everything looks simple: the knowledge required does not look remarkable,” the strategic options look obvious; anyone with a good map can figure out how best to encircle a city or cut off opposing troops. But the reality is unimaginably complex, because it involves thousands or millions of individual human beings, all acting on the basis of their own emotions and will, all enduring maximum stress. The physical environment poses its own difficulties. Simple acts become complicated in the smoke, mud, dust, and exhaustion of combat; geography takes on strange new shapes; chance events assume enormous importance. As he puts it in another of his sharp formulations: “War is the realm of chance. No other human activity gives it greater scope: no other has such incessant and varied dealings with this intruder.” (The quotations come from On War, in the spectacular translation put together by Michael Howard and Peter Paret.)

In these circumstances, Clausewitz’s commander is on a quest for knowledge, trying to find the truth when “all information and assumptions are open to doubt, and with chance at work everywhere.” Courage amidst dangers, training, equipment, faith in the mission– in war all those count, but the indispensable qualities are intellectual: “first, an intellect that, even in the darkest hour, retains some glimmerings of the inner light which leads to truth; and second, the courage to follow this faint light wherever it may lead.” For Clausewitz, truth about situations and the people involved in them is the ultimate war-making tool.  That’s why the commander needs elements of the humanist’s mindset.

There’s lots more to Clausewitz, of course, some of which maybe I’ll write about in the next few weeks. But for now let’s stop and think a minute about how his vision of military knowledge fits with what we encounter here today, in twenty-first-century America.

Because we also have lots of ideas about war. We ought to, because over the last generation war has been the main constant in American life. The War in Afghanistan has now lasted longer than the Trojan War, and twice as long as World War II. Some retired general pops up on TV pretty much every night, and most weeks you can find op-eds by thoughtful experts pushing for American military intervention somewhere in the world. Most of us don’t go to war ourselves, but we’ve come to view war-making as a normal part of American political life.

We can do that partly because our American ideas about war differ so wildly from Clausewitz’s. He talked about chance, uncertainty, inadequate information, and the need for imaginative brilliance to get at the reality of any military situation. We describe war instead as knowable, predictable, and manageable. Our favorite war terminology is medical– we speak of surgical strikes and interventions; we describe our enemies as cancerous growths that need to be excised; we call many of our interventions humanitarian acts, life-saving missions. And in modern war as in modern medicine, we’ve got technologies that Clausewitz never dreamed of; drones, night vision goggles, computers– these allow our soldiers to overcome war’s information gaps. Of course technology doesn’t eliminate all uncertainty. Unexpected problems still arise on the battlefield, as they do at the hospital– but now we can address them effectively.

So given that we live in a different technological world, is Clausewitz basically a museum piece, or is he someone we should be listening to?  How seriously should we take a voice from the horse-drawn, muzzle-loading era?

One reason for listening is that Clausewitz gives us the voice of a hardened Prussian officer, who’d fought in high-level battles, both victories and defeats, without losing his faith in either war or the army. When he tells us about the unknowability of war, he’s talking as a believer, not a pacifist dreamer or bleeding-heart do-gooder. He doesn’t doubt the value of war– he just wants us to know what it really is.

The other reason concerns us, not Clausewitz. The brutal fact is, American conventional wisdom about war doesn’t look so good these days. We’ve got the biggest, best-equipped army in the world, but we’re on a fifty-year losing streak– against a series of much weaker enemies. (Ok, we looked impressive against Grenada and Panama, but you get the point.)  Our humanitarian interventions have typically made situations worse, not better.

Maybe it’s time to rethink our approach to this most serious of human activities– and we could do worse than starting with Clausewitz.

On books

My last post made gentle fun of us humanities professors and our research. We spend years writing our books– all the while knowing that only a few fellow-specialists are going to read what we turn out, and that the world isn’t going to change because of it. Researching and writing, I claimed, actually just provide the framework for the more important work we do. We can live without another book on French social history, however brilliant. But we can’t survive as a culture unless someone is keeping alive our texts and other cultural artefacts– by reading, performing, and thinking about them. In contemporary America, that mainly means us professors of humanities.

And yet here I am ten days later beaming with pride and pleasure as my own new book nears its publication moment. It’s a project that’s occupied me since 2005, and now the absolute final version of the text has just gone to the type-setter. In preparation for the actual launch, the publisher has just sent me two possible versions of the cover.

Suddenly this thing is morphing from MS Word documents on my computer to a real book, and the transformation has me seriously excited.

Which suddenly hit me as worth thinking about. I mean, it’s not my first time on this particular carnival ride, and I’ve got no illusions about where it ends up– namely, back at the starting point. My book will impress some fellow scholars, vex a few others, and have no effect whatsoever on everyone else. In a year or so, even I will think I made some weird choices in how I put it together. Anyway, today in 2014 we have other, in many ways better ways to put our ideas before those who might be interested in them– this blog, for instance.

So why does a book still have a special kind of power?

Partly, I think, it’s just because books remain beautiful objects, in some ways more beautiful than ever. (New technologies have allowed publishers to do lots of things that were once impossible or wildly expensive.) We respond to the beauty, and also to the multiple ways that beauty connects us to bits of the past. There’s our own, highly specific past, with its trails of books encountered in public libraries, trashy bookstores, and college seminars; and there’s our collective past: as a physical object (we’re not talking content here!), my book won’t look all that different from the books Erasmus published in the sixteenth century. There’s an excitement about plugging into all those various histories.

But I think there’s also another dimension to the thrill of book publishing, something that’s absent from any other kind of writing. It’s that writing even a narrow-gauge, scholar-oriented book like mine requires creating a self-contained world, populated with its own characters, moved by its own motives and forces, marked by certain kinds of emotions and relationships. The thrill of book-publishing is the thrill of world-creating.

Now, historical study being what it is, our newly-created worlds are supposed to be “true,” or at least true according to the conventions of our discipline. Unlike gods and novelists, we’re not allowed to create a world from nothing more than our own thoughts and imaginings. Everything we say has to rest on some trace created by others– on documents from the past, discoveries by other scholars, and the like.

But that doesn’t change the basics of the world-creating work. Just like novelists, we select our characters and sketch out the terrain where they act. We give them emotions and attitudes, many of which we’ve had to intuit from mere fragments and hints in the historical record. We reconstruct the after-effects of what they’ve done, again on the basis of our own intuitions rather than from any direct evidence. Throughout, we have to give the readers who visit these scenes a sense of the rules that apply there, how things work.

And we have to do all this within the 300 pages of a typical book– in other words, we can’t just report everything we’ve found. We’re constantly choosing between what matters and what doesn’t, making decisions about the guidance new visitors to this particular world will need. Some things we have to explain; others we can leave out because visitors will already know them from their other travels.

Of course we don’t usually put it this way. We talk instead about the craft of writing, gauging our audience, the trivia of editing. Is there too much background, or not enough? Has a character been properly introduced in earlier chapters? Do the explanations I offered in chapter 1 apply to the events in chapter 6, or do I need to rethink my characters’ motives?

That common-sensical language is just an acceptable way to talk about what’s really a magic show. We’re calling a dead world back into some semblance of life.

No one has spoken more eloquently about the process than the great Russian-German-American-Swiss novelist Vladimir Nabokov, in a literature course he gave at Cornell in the 1950s:

“The material of this world may be real enough (as far as reality goes) but does not exist at all as an accepted entirety: it is chaos, and to this chaos the author says ‘go!’ allowing the world to flicker and to fuse. It is now recombined in its very atoms, not merely in its visible and superficial parts.”

Nabokov certainly didn’t intend for his description to apply to people like me. He was one of the all-time culture snobs, ready to dismiss even some heavy-hitting novelists as mediocrities. He left his university job the moment he had the money to do so. But his image applies to all us authors, because that’s what putting together a book is like, for ordinary writers as much as for Nabokov’s greats.

Cool, huh?