Tag Archives: academia

Ways we live now: the Volkswagen scandal and modern capitalism

With so many crazy things going on these days, who even remembers the Great Volkswagen Scandal of 2015? That’s the one where they caught VW faking pollution performance in its diesel engines– and not just messing with the paperwork, either. VW actually installed a whole extra system in eleven million of its cars, allowing them to detect when they were in the shop for state emissions tests. While hooked up to the inspection machines, the cars ran in low pollution mode and met government standards; once back on the road, they went back to standard performance– which meant sending out forty times as much pollution as in fake-out mode.

It’s not exactly in the spirit of blogging to bring up a months-old scandal– but I keep thinking about the VW case, because it says so much about how crazy contemporary capitalism has become. Also, about the craziness of our responses to contemporary capitalism.

As a partial list of what’s so crazy about the case, consider the following:

1) The scale of the operation. It’s a big deal to invent a cheat system and install it in eleven million cars. VW has fallen back on the “it was a few bad apples/rogue executives” defense we hear so often these days, but that’s pretty implausible. An operation like this required research and development, changes to assembly lines, major expenditures, all going on over several years.

2) The Germany thing. Being known for quality manufacturing is a very, very big deal to Germany. Of course the “made in Germany” brand is an economic tool, which helps the country sell its goods worldwide and charge premium prices for them, but it’s also part of the larger German psychology, really a nation-wide brand name. The idea is, Germans do things right, they don’t take short-cuts, their products deserve your trust. It’s the mindset that allows German politicians to lecture Greeks, Italians, and others about being lazy, slovenly tax evaders who can’t generate trade surpluses.

So you might think an iconic German company would hesitate to put all that at risk; even if companies occasionally did so, you’d think, the German authorities themselves would be extra vigilant about misbehavior that threatened the national brand. Apparently not.

3) The objective. Since I’m not a car guy, my handle on the technicalities is weak. But as I understand it, the cars worked just fine in low-pollution mode, they just performed better when spewing (lots) more pollutants– more pep, more power, more responsiveness, better fuel economy. In other words, VW was ready for crime if that’s what it took to make its customers marginally happier.

4) The stakes. Of course, the costs included more than just violating various countries’ laws. There’s also the real-life impact of all that extra pollution– premature deaths (about sixty in the US, so the researchers guess, and many more in Europe, where diesels are more popular), and of course contributing to the destruction of the planet. Think of it as human sacrifice on the altars of high performance and customer satisfaction.

5) Professors to the rescue. The story only came to light because complete outsiders looked into the case. A group of professor/researchers at West Virginia University bought a handful of the cars and did their own tests, out on the open road– only after they published their findings did the relevant regulatory agencies get involved, and only after that did VW itself take steps. (Steps which are still dragging along, by the way– VW still hasn’t replaced or refitted the engines.)

So among other things, the VW Scandal is a lesson about the benefits of tenure and the other protections the American research university provides its researchers. Thousands of people within VW and thousands more outside must have known what was going on, but there were zero whistle-blowers on this one– not surprisingly, when you think about what happens to most corporate whistle-blowers. Nowadays, the universities are the last refuge for this kind of free-range, trouble-making, profit-threatening research. Most everywhere else, raising this kind of question means losing your job.

Now, there’s a tendency to think about cases like this in terms of corporate psychology and ethics. You’ve probably seen stories like this one, which asks what all those people were thinking– how could they have signed off on the VW program, given its immoral elements? Or if you haven’t seen VW-relevant stories, you’ve seen them about other corporate ethical disasters, like our recent banking scandals. Which is fine, I’m all for moral improvement, but I think that misses the real message of the Great VW Scandal.

Instead, the real message is just how completely capitalist calculation now trumps other possible ways of thinking– like, say, worrying about right and wrong, or about humanity’s survival, or your country’s reputation, or even the long-term health of the corporation itself. It all pretty much follows the Karl Marx script. Capitalism works as total Darwinian war, meaning that all options are on the table, escalation is always possible, and in the long run there are no safety zones. And of course, Marx would have been pleased to hear the rumors that other car companies have been doing the same thing. He’d say, that’s just how capitalism works.

Maybe it’s time to take the old guy a little more seriously.

Military education is to education as… Update!

A few days ago, I posted some thoughts on American military education, as exemplified by our army academy at West Point. It’s a topic that’s worth some attention. Out there in the world, we’ve now got troops in about 150 countries– people in all of them should be wondering how American generals think. Meanwhile here at home the military enjoys amazingly high levels of respect. Apparently 70 percent of us think our military officers want what’s best for the country, as opposed to what’s best for themselves; only 32 percent think that about civil servants, and for members of congress the number drops to 12 percent. Enjoying respect like that, it’s no wonder values and ideas spread outward from our officer corps to the rest of society.

So we need to think hard about how good those ideas and values are likely to be, and my post suggested that they’re not so good. The young people going through West Point don’t get experience in thinking hard about any particular thing, because they’re moving too fast, with only brief stops at all kinds of knowledge service stations, and no time at all for unstructured reflection. It’s no surprise that once they’re out in the field, they aren’t so hot at understanding complicated societies like Afghanistan. As they get older and become opinion-makers on matters of international politics, like David Petraeus, the effects are even worse.

Before I’d even finished writing up my thoughts, a new example popped into the news that should make us worry even more about what the hell goes on at West Point. It’s the case of William Bradford, an assistant professor at West Point whom The Guardian covered a couple of weeks ago. It turned out that Bradford had published an essay calling some American law professors “an Islamist Fifth Column” that the military can legitimately target/kill; the professors’ law schools are fair game too, as long as our soldiers try not to kill any innocent bystanders. Apparently the essay doesn’t name names, but it does supply an estimate of how many fifth-columnists need to be dealt with– about forty nation-wide.

One thing you can say for West Point, they moved fast to end the Bradford scandal– he resigned almost immediately after the Guardian story broke; the academy also explained that he’d written up these opinions before joining its faculty, so the army couldn’t be held responsible for them. In other words, the military’s excuse is that the article was all written up before they hired the guy– meaning, they could have reviewed it, asked some questions, looked at some other candidates.

For all I know that’s what actually happened, and we should take Bradford as expressing out loud what the rest of the military is thinking silently– maybe we should be really scared, rather than just appalled. But my money’s on this being just another example of the speed-obsessed superficiality that my last post described. This is the kind of thing that happens in an organization that doesn’t give its members any unstructured time.

“They also serve who only stand… and think.”

Military education is to education as military music is to music

A few years ago, my daughter got me the book Absolutely American: Four Years at West Point. The amazon.com blurb describes it as “a thrilling portrait of a unique institution and those who make up its ranks,” which is actually not too wildly overstated– at least, it’s a very good book. In classic New Journalism style, the author rents a house near the West Point campus and follows a group of cadets (a small group, but they seem representative) through their whole time there. Also in classic style, the author starts out a hardened wise-guy/big city cynic, and ends up really, really liking the young people he meets. He’s impressed at how demanding the education is, how many challenges the cadets overcome, and how fundamentally decent they are to one another. He’s also impressed at how they grow up in the course of their time there.

Now, we’re a pretty unmilitary family, and the only person I know who’s gone to a service academy is my ex-brother-in-law, who got kicked out after one semester at the Merchant Marine Academy. But like most everyone else, we’re suckers for young-people-growing-up-and-meeting-challenges narratives and for stories about the romantic, “unique institutions” that make the growing-up happen– whether it’s Rugby School, LA’s Garfield High School, or Hogwarts. West Point fits right into the series, making Absolutely a seriously feel-good read.

At least until you start thinking about the details of West Point education, and then you start to realize something weird: these young people barely have time for the washroom, let alone for any thinking about whatever it is they’re learning. Basically, they’re up before dawn (6 am), then running non-stop from class to class, activity to activity until bedtime. There are two or three hours of free time in the course of the day, but mostly that gets used up on sports, room cleaning, and other kinds of prep work. Maybe some of these kids find time in there for getting excited about a weird novel or writing project, or talking about ideas– but they’re much more likely to use their few spare minutes on pure escapism. Anyway, with “almost every facet of life being graded” (in the words of a recent cadet), there’s not much payoff in unstructured activity.

I assume the concept is, when West Point graduates are out fighting our enemies, they’re not going to have time for random reflection or enriching reading. They’ll have to think and decide fast, without the benefit of a good night’s sleep or a research library, and the West Point atmosphere of constant busy-ness is supposed to prepare them for that. Of course they’ll also have to be physically fit, and West Point’s sports requirement (everybody has to play some organized sport) prepares them for that too.

It sounds convincing, but is it really a good idea to have military leaders who’ve basically never had the experience of thinking seriously about something? By which I mean partly unhurried thinking, with time to argue things out and pursue loose ends. At least for the last fifty years, the think-fast-not-deep thing just hasn’t worked very well– as I’ve pointed out before, the American military is on a long losing streak, despite having more firepower than the rest of the world combined.

That’s bad in itself, but it seems to me the real reason for worry is that this West-Point approach is seeping out beyond the military itself, into American society at large. On the one hand, our military leaders aren’t content with the military domain any more; instead they turn up in high civic offices, running the CIA and the like, and in the media, where they hold forth about the state of the world and what we should do about it. Even David Petraeus has recently returned to advice-offering, pushing for various strategies in dealing with ISIS. According to CNN, “many in the foreign policy establishment still seek out his views, so his proposal will no doubt be taken seriously.”

Meanwhile our education reformers sound eager to bring some of the West Point spirit to our beleaguered schools. We hear about the need for frequent testing and clear goals, for both students and teachers; unstructured activity can only derail those objectives. Every so often there’s even a push to bring back school uniforms, though thankfully that seems to have died down for the moment. What hasn’t died down is the sense that young people need to pack in more activities, and whatever gets in the way of those is just an obstacle to good education.

So my suggestion is, thinking about our sorry military score-card isn’t just for military historians or policy geeks. All those wars we’ve lost against weaker opponents suggest that West Point-ism doesn’t even work on the battlefields it was designed for. Why would we expect it to work elsewhere in society? Why are we making students’ lives more constantly busy, rather than less?

Telescopic philanthropy and the modern university

Over the last year, I’ve offered occasional thoughts about the role of philanthropy in today’s world, looking mostly at the billionaires (like Bill Gates and George Soros) doing the giving. It seemed– actually it still does– weird and interesting that these guys would concern themselves with how historians and other educators spend their time, and I offered a few possible explanations.

But lately I’ve been more interested in the other side of that story, meaning the enthusiasm that some professors are starting to show about big-money philanthropy. It’s an enthusiasm that says some interesting things about how the university and its humanist professors fit into the twenty-first-century world.

My example is Peter Singer, who’s probably today’s most visible philanthropy-fan professor. Singer’s a world-famous Australian philosopher, who’s taught at Princeton since 1999 (Wikipedia quotes a colleague describing him as “almost certainly the best-known and most widely read of all contemporary philosophers”). He’s been pushing philanthropy for a long time, but lately it’s become his primary focus, and he’s signed on to a view called “effective altruism.” The basic idea is: all over the world, there are people in life-and-death need, and the rest of us have a duty to do everything we can to rescue them. “Everything we can” includes giving as much as we can, but also being smart about it, by making our dollars go as far as possible (that’s the “effective” side of the equation). That includes selecting charities that function efficiently, with low overhead costs and modest offices, and orienting our altruism to the truly desperate– sending food to starving Africans counts way more than (say) endowing book purchases at the local library.

Most interesting of all, those on the donor side have a duty to organize their own lives for maximum altruistic effectiveness. Singer offers an example from his Princeton classrooms, that of a brilliant young student who (influenced by Singer’s teaching) decided against a fast-track career in academic philosophy and instead went to Wall Street; he reasoned that all the extra money he’d earn there would make him a far more effective altruist. Singer has only praise for this career switch, which will allow the young man “to save a hundred lives” in his first year or two out of school, way more than would have been possible on a professor’s salary. Career choices like these (Singer assures us) form part of “an exciting new movement” that’s sweeping elite universities worldwide; they show philosophy “returning to its Socratic role” of shaking up our ideas about the good life, dramatically transforming students’ lives, and making “the world a better place.”

From all this glossy talk of innovation and excitement, you wouldn’t know there are important criticisms of Singer’s approach, but they’re out there. As numerous observers point out, big philanthropy undermines democratic values, by giving individual donors decision-making power over what society as a whole gets. It reinforces social hierarchies, by dividing the world between big-hearted givers and weakling takers and (as in Singer’s version) by giving extra moral status to big money. It tends to destroy public institutions, as their survival increasingly depends on pleasing a few wealthy donors. And it often has more directly destructive consequences. Sending free American food to Africa sounds great, but if it destroys African farming (how do farmers stay afloat when NGOs distribute free food in the marketplace?) and subsidizes American agribusiness (from whom NGOs purchase the “free” food), is it really helpful? Is this about “saving lives” or “clearing out small farmers so that multi-nationals can step in?”

Singer’s Wall-Street-bound student offers an extreme form of this last problem, because whatever his exact big business role, it would take some deep calculations to know whether his donations counter-balance the harm he may be doing– especially since some of that harm won’t show up for many years. Do his investments contribute to climate change, for instance, or to carcinogenic industrial processes? Do his clients use the money he makes for them to lobby against health and safety regulations, or against old age pensions? It may be two or three generations before we know the costs and benefits, even if we add up only the lives saved and lost.

I’ll leave the list of criticisms there, since I find them so compelling (for a more complete and careful discussion, see this terrific article by the philosopher Matthew Snow). Instead, I want to think a little about what Singer and “effective altruism” tell us about the modern university and its culture.

First, we might note the strange combination of qualities that Singer’s story attributes to the university itself. There’s an element of professorial megalomania in the account– Singer presents himself as a new Socrates, shaking up philosophy and redirecting the polis, and he sees world-improvement starting in university classrooms, rather than among, say, the poor themselves. But there’s also some startling self-abasement in his story, because its real heroes are the wonderful undergraduates just passing through the university, on their way to making money and improving the world. Think of it as an updated version of F. Scott Fitzgerald‘s glamor-Princeton, now featuring ethically sensitized young people rather than Jazz Age partiers. The professors function as their life-coaches, and the other figures in the university– graduate students, librarians, researchers, and the like– don’t make it into the picture.

Then there’s the strange historical shallowness in Singer’s account– meaning, it offers no hint that debate about philanthropy has been raging for about 250 years, and in that time the nay-sayers have landed some pretty good punches. The great nineteenth-century novels are full of philanthropists, most of them horrifying. (Think Charles Dickens’s Mrs. Jellyby, whose “telescopic philanthropy” centers on sending colonists to Africa while her own family sinks into ruin and degradation, or Charlotte Bronte’s Mr. Brocklehurst, bullying impoverished young women.) Even earlier, the French thinker/politician Anne-Robert-Jacques Turgot pointed out how silly and self-serving most charitable donations look to later generations. In the 1960s and 1970s, Michel Foucault explored the structural linkages between philanthropy and power, showing that even the slam-dunk do-gooder projects have come with very heavy baggage.

Now, presumably a smart, learned guy like Peter Singer knows all this, and he may reason that it’s not his job to argue against his own views– that’s for those of us who disagree with him. But whatever full disclosure duties he may have, we can still notice the peculiar firmness of his non-historicity. Here’s another world-class professor implicitly telling the public that the past doesn’t matter, we don’t need to think about it, let’s just focus on the bright, shiny future.

Which leads to a last peculiarity in Singer’s story, the politics. Singer presents himself as a leftist, and he was even a Green Party candidate back in Australia. Clearly he’s desperately concerned about the state of the world today, and really wants to improve people’s lives. Yet when we’ve stripped it down, “effective altruism” looks an awful lot like a celebration of capitalism and the well-paid folks who make it run. It’s a deeply flawed system, seems to be Singer’s position, but it’s not going away, and so we have to work with it.

As I’ve said before, the neo-cons who fume about tenured radicals in the universities can probably relax.

Mysteries of the classroom

What happens when we teach? It’s a more peculiar process than you might think.

To illustrate, here’s one of those True-Stories-That-Are-Also-Parables we writers like so much: My first term in graduate school, I landed in a research seminar taught by an old guy nearing retirement. Apparently he’d been hot stuff back in the 1930s, but in 1968, not so much. He hadn’t published anything in years, and from our fifteen weeks together I can remember only two classroom moments. One came when he hauled out from his desk some photocopies he’d made years earlier, of a seventeenth-century ship-building contract– he thought maybe he could publish the details in a model-ship-building magazine. The other time, he showed up twenty minutes late, wet from the rain, and mad because he couldn’t find on-campus parking. It was like a laboratory demonstration of Clark Kerr’s joke about the duties of a university president– to provide football for the alumni, sex for the undergraduates, and parking for the faculty. There were only four of us students. We crammed into his office for a couple of hours every week, listened to him ramble on, then went home and did our research.

So this was about as hopeless a teaching set-up as you could find– yet the seminar turned out to be quite the big deal for us students. All four of us wrote seminar papers that turned into dissertations, and all the dissertations became respectable university press books; we all got decent academic jobs, though one of us bailed out of the profession (and into law school) before getting tenure.

Every parable needs a moral, so here’s mine: we see part of what goes on in a classroom, but there’s lots we don’t see– invisible forces swirling around, electrical charges sparking and fusing at subatomic levels, multiple temporalities colliding and redirecting one another. Had there been teaching evaluations back then, we’d have given Professor X F-minuses, and the deans would have packed him off to teacher remediation boot camp. But then, check out his “learning outcomes”– meaning, did the course give us students what we signed up to get? That’s the big metric now in vogue among administrators, and according to it Professor X belongs in the Teaching Excellence Hall of Fame.

Now, I understand the objections to making anything much of my story. It’s just one example, and graduate school is a peculiar business; we were bright, proto-professional keeners, not disaffected freshmen. Anyway, it was a long time ago, when the world had more room for ineffectual bumblers like Professor X.

All fair enough, but I’ve encountered less dramatic examples of the Professor X story all my life– bad teachers from whom I learned a lot, certified teaching stars who left me bewildered and/or scared. Of course some of those stars were the obvious fakes, the bombastic performers with nothing to say, but some were the real deal– I just wasn’t ready for what they were offering. I see the same things happening these days among the students I encounter.

Obviously that explains some of the subatomic interactions going on in the classroom. People need different teaching at different moments in their lives. Sometimes a conscientious and brilliant instructor overpowers and discourages; sometimes the chance to feel snooty and superior about your teacher (as we did with Professor X) is just what the doctor ordered– it encourages intellectual adventurism, or just helps you survive a difficult stretch like the first weeks of graduate school. And then there are all the other logics– lucky encounters with the right mix of fellow students or with the right topics and books, no matter who’s teaching them.

But when we’ve said all this, there are still plenty of classroom forces at play whose logics elude us– and some that we don’t even see. Which is just to say, the classroom is a site of human interactions, much like other human interactions, only more so than most: more compact and intense, with more at stake, with more layers, more moving parts.

You don’t have to be inside the university to know this human-interaction model of teaching faces challenges nowadays– from online education, from students who feel they’ve already got too many human interactions in their lives, from shrinking university budgets, from measurement-besotted administrators.

All I can say is, check out those learning outcomes!

Do humanities professors dream of electric sheep?

Over the weekend, our graduate students put on a fantastic one-day conference, and it included a faculty roundtable discussing the digital humanities. The line-up included one super-enthusiast, two moderates, and me as the designated Mr. Negative– which in itself tells you where the window of debate is now located. I mean, I blog, I occasionally tweet, I push my students to consult Wikipedia for the background facts on what we’re studying. Take away digital photography, and I wouldn’t last a week in the archives; take away my morning dose of internet news, and I’m a wreck. The digital revolution has gone awfully far if someone like me gets cast as the voice of caution and doubt.

In the Teaching section here and on my Academia.edu site, I’ll post a cleaned-up version of my formal comments. Here I’ll offer a short version of those, mixed with some thoughts that came to me during the (outstanding) discussion that followed the panel’s presentations.

I won’t go on much about my own super-enthusiast side, except to say it’s real. As it happens, my particular weakness as a scholar coincides with some dramatic strengths of the new digital resources. I’ve always had trouble getting dates and details exactly right, and the old printed reference bibliographies have always just left me depressed and listless– anyway the specialized resources I usually need aren’t even available in the universities where I’ve taught. Think of it as my kinky version (not my only version, I hasten to add…) of a thrill we’re all experiencing these days: suddenly I’ve got a cheap, easy electronic solution to a dark, secret, personal weakness.

But the storm warnings also seem to impress me more than most of my colleagues. For the PG-13, super-scary version, check out the philosopher Tim Mulgan’s Ethics for a Broken World: Imagining Philosophy After Catastrophe. Among many other issues, Mulgan thinks seriously about the reality we all know lurks behind the digital wonderland– namely, it could go poof at any moment, because of a war, a breakdown of the electrical system, evil-super-hackers, an NSA Stuxnet-type operation gone wrong, or dozens of other altogether-possible scenarios.

So Mulgan imagines his post-catastrophe philosophers having to make do with what he calls the Princeton Codex– scrambled bits and pieces of Princeton University’s paper library that survived climate change and its attendant disasters, in roughly the same messed-up way as ancient European literature survived the Dark Ages.

Except for one big difference. Everything from the ancient world at least had a fighting chance of making it through the bad times, and a lot was waiting there for people like Thomas Aquinas and Copernicus to sort through and build on when the dust settled. In 2015 we’re probably already beyond that point. A steadily greater percentage of our knowledge is now preserved only up there in the cloud, and pretty soon it will be most of our knowledge; if it goes, it’s gone for good.

So that’s the Total-Catastrophe worry, but there’s also the Right-Here-Right-Now worry: digital knowledge reshuffles the sociology of knowledge, in some ways for the better, in others for the worse. At this point we don’t know how much worse, but maybe quite a bit.

On the plus side, the digital world gives new reality to old ideals of equality and fraternity. Like everyone else, I now connect directly and easily with scholars all over the world, people I would never have encountered in the old days. And I get to publish my thoughts in places like this without awaiting the approval of editors or reviewers. Sure, the hierarchies and barriers still exist, but they’re way weaker than they used to be.

But as Alexis de Tocqueville explained long ago, the third element in the great French trinity doesn’t necessarily play well with the other two– and Tocqueville would have loved thinking about liberty’s tormented place in the new digital regime. “Tormented,” because our online doings are watched 24/7, by governments, insurance companies, angry teens, employers, and all sorts of others. Real havoc regularly ensues–health coverage rejected, jobs lost, visas denied, legal trouble, personal humiliations.

In the nature of things, life in this new panopticon entails controlling what we say and do, and even what we learn– multiple authorities now monitor our visits to informational websites. It’s the most effective kind of censorship, the kind where we do the real work ourselves, each of us monitoring our own utterances.

That seems to be part of a larger problem, which we’ve barely started wrestling with: digital culture binds us extra-intensely to our late-capitalist social order, not only because the individual bonds are so strong, but also because there are so many of them. Of course we rely on the corporations that supply our computers, browsers, storage, electricity, etc etc etc. But we also find ourselves slotted into mini-capitalist-entrepreneur roles– each of us bloggers now worries about generating traffic, attracting readers, speaking to our audience; nowadays we’re all minor-league versions of the hustlers who produce the Big Bang Theory.

Higher up the food chain, the resemblance gets even creepier. Here’s the former director of a major digital humanities project, a well-established project at a great public university, speaking some years ago about his job: “A main part of Thomas’s role as Director is to write grants, as well as to seek out appropriate public and private agencies, whose interests match the VCDH’s projects. He compares it to finding funds for a venture capital firm.”

So in this world of surveillance, audience-seeking, entrepreneurship, and venture capital, what happens to the humanists’ trouble-making functions, our capacity to raise harsh questions and social criticism?

My own answer is, so far, so good. Anyone who reads these posts will understand how liberating I’ve found the new media. But the storm clouds are there, and they may get very dark, very fast.


Historians and irony, Part II

My last post talked about historians’ irony, which I presented as a way of approaching the past, a tendency not a specific interpretation. Irony-friendly historians tend to see people as having a limited handle on their circumstances, and even on their own intentions. Not knowing the world or ourselves very well, on this view, we humans regularly blunder into tragedy, generating processes we can’t control and outcomes we didn’t want. We don’t know what the fuck we’re doing.

I also suggested that irony of that kind is out of fashion nowadays. Not among all historians, and not 100 percent among any historians– as I said last time, we can never give it up altogether, because we know more than the people we study about how their stories turn out. But historians and irony are mostly on the outs right now, and that counts as something important about our era of historical writing. Open a recent history book, and you’re likely to encounter words like “contingency” and “agency.” Even late in the day, these words tell us, things could have gone differently, and individual decisions made a real difference. These words also tell us not to condescend to people in the past– not to view them as the helpless puppets of bigger forces, not to dismiss their efforts, hopes, and ideas, good and bad alike.

Things were REALLY different back in the mid-twentieth century, and they were still mostly different in the mid-seventies, when I got my PhD. In those days, the talk was all about long-term processes, societal changes, and the blindness of historical actors, and you found that talk pretty much everywhere in the profession, among Marxists and Freudians on the political left, modernization theorists and demographers in the middle, political historians on the right. These scholars mostly hated each other, but they agreed on a basic interpretive stance: big forces trumped individual wills.

So what happened? How did the history biz go from mainly-ironic to mainly-non-ironic? The question matters, because it touches on the ideological functions of history knowledge in our times. Mainly-ironic and mainly-non-ironic histories provide different lessons about how the world works.

Of course, some of the change just reflects our improving knowledge of the past. We talk more nowadays about contingency because we know so much more about the details of political change. We talk more about the agency of the downtrodden because we’ve studied them so much more closely– now we know that serfs, slaves, women, and other oppressed groups had their own weapons of small-scale resistance, even amidst terrible oppression. They couldn’t overturn the systems that enclosed them, but they could use what powers they had to carve out zones of relative freedom, in which they could live on their own terms.

And then, there’s what you might call the generational dialectic. Like most other intellectuals, we historians tend to fight with our intellectual parents– so if the mid-twentieth-century historians were all into big impersonal forces and long-term processes, it’s not surprising their successors looked to poke holes in their arguments, by pointing out all the contingencies and agency that the previous generation had missed. That’s one of the big ways our kind of knowledge advances, through criticism and debate. (For a discussion of this process as it works in a neighboring discipline, see here.)

So there are plenty of reasons internal to the history profession that help account for irony’s ebb– and that’s without even mentioning the decay of Marxism, Freudianism, and all those other -isms that tried to explain individual behavior in terms of vast impersonal forces. Almost nobody finds those explanatory theories as persuasive as we once did, in the history department or anywhere else.

But having said all that, we’re left with an uncomfortable chronological juxtaposition: the historians’ turn to mainly-non-irony coincided with the circa-1980 neo-liberal turn in society at large, the cultural revolution symbolized by Margaret Thatcher in Britain and Ronald Reagan in the US. There’s a substantive juxtaposition as well: while we historians have been rediscovering agency among the downtrodden and freedom of maneuver among political actors, neo-liberal ideology has stressed individuals’ creativity and resourcefulness, their capacity to achieve happiness despite the structures that seem to imprison them. Unleashing market forces, getting people off welfare, reducing individuals’ reliance on public resources– these all start from the presumption that people have agency. They know what they’re doing, and they should be allowed to do it.

In other words, Edward Thompson’s warnings against “the enormous condescension of posterity” weirdly foreshadow various neo-con one-liners about how social programs and collective goods condescend to the disadvantaged. (For an example, check out George Will and George W. Bush talking about cultural “condescension.”)

Which of course is a pretty ironic thought, given that Thompson was a committed political activist and brilliant Marxist theorist. But if it could happen in the 1950s, it can happen now: intellectuals who hate each other and disagree on many specifics can nonetheless be teaching the same basic ideological lessons.

To me this suggests it may be time to rethink concepts like contingency and agency, or at least re-regulate our dosages. Maybe our alertness to agency has diminished our sensitivity to tragedy, to the ways in which circumstances really can entrap and grind down both individuals and whole communities. Maybe we need to think more about the long chains connecting specific political actions and constricting everyone’s freedom.

Maybe we historians need to stop being so damned optimistic!