There was a terrific NOVA program on the tube last night, on the subject of remotely-piloted vehicles (aka "drones") and their rapidly expanding role in the American military. The show focused mostly on the technical aspects of these weapons, but didn't omit some of the tricky ethical and political questions associated with their use. FP's Rosa Brooks argues that the advent of drones is a recipe for perpetual war; I'm inclined to agree, at least as long as the United States can continue to use them with impunity.
I took three lessons away from last night's program. First, a reminder: for all the alleged successes of our expanded drone program (e.g., degrading al Qaeda in various locales, providing battlefield intel in Afghanistan, etc.), in the end the United States failed to achieve its core objectives in either war. Iraq did not become a stable, pro-American democracy (it remains violent and, if anything, tilts toward Iran). Nor did we defeat the Taliban and create a stable democracy in Afghanistan (whose fate will be determined after we leave in 2014). This reminds us that technological wizardry does not always translate into strategic success.
Second, one of the interesting puzzles of the so-called drone wars is why so few remotely piloted vehicles (RPVs) get shot down. Most RPVs are slow and don't fly that high, which makes them vulnerable to relatively unsophisticated anti-aircraft weapons. Even the most elusive drones would be vulnerable to fighter aircraft or advanced anti-aircraft missiles. Serbia reportedly shot down some fifteen U.S. drones in the Kosovo War, and Iran may -- repeat, may -- have forced one down over its territory last year.
There are two obvious reasons why we don't lose more drones. One is that some governments (e.g., Pakistan) that object to their use are protesting too much: they are not so angered by drone strikes that they are willing to start shooting them down. Another is the fact that the Taliban and al Qaeda don't have access to sophisticated anti-aircraft weaponry, and nobody is going to provide it to them. Even states like Russia and China aren't overly fond of non-state terrorist organizations, which makes it much harder for the groups that we are targeting with drones to acquire counter-measures that might equalize the situation. But note: this also means that the relatively permissive environment that we've been exploiting in places like Yemen or Pakistan may not be the norm, and things might be quite different if we went up against a foe that had better anti-aircraft capabilities and was willing to use them.
Third, I couldn't help but consider what the RPV revolution tells us about the future of the manned space program. Homo sapiens has many interesting and attractive qualities, but we also have real physical limitations and keeping us alive in demanding environments like space is very hard and expensive. Sending machines to explore space makes a lot more sense than sending human beings; we will learn more at far less cost if we abandon our romantic notions of "space exploration" by humans and send sophisticated machines instead.
It's summer, and a searing drought is shriveling corn fields in the Midwest. Meanwhile, torrential rains (the worst in 60 years) have killed several dozen people in Beijing. Sea ice continues to shrink in the Arctic -- the decline in June was the largest in the satellite record -- creating new sea areas for the Coast Guard to patrol. Welcome to climate change 2012.
But how serious is the problem? How worried should you be? I don't know, because I'm not an atmospheric physicist, an environmental economist, or a specialist in the global institutions designed to address collective goods (or negative externalities). Nonetheless, I do try to stay informed on this issue, and I occasionally use the case of climate change to illustrate certain features of international politics to my students. And what makes it frustrating for a layperson like me is the range of opinion one can find even among well-informed journalists.
Case in point: two prominent articles on this topic appeared this past week, reaching sharply contrasting conclusions. The first article, by science writer/environmental journalist Bill McKibben, presents a deeply worrisome picture of the planet's future. According to McKibben, it's all in the math. There is now a strong scientific consensus that human beings can only put another 565 gigatons of CO2 into the atmosphere without causing average atmospheric temperature to rise more than two degrees Celsius. (Two degrees was the agreed-upon target figure at the 2009 climate change summit in Copenhagen, though many climate scientists think even that level of increase would be very harmful.)
Unfortunately, a recent inventory of current oil and gas reserves showed that they contain enough carbon to release roughly 2,795 gigatons of CO2 if it is all brought to the surface and burned. That's about five times the upper limit identified above. The problem, of course, is that the companies that own these reserves will want to pump the oil and gas out and sell it -- that's the business they're in -- even though spewing that much more carbon dioxide into the atmosphere would be disastrous. In the absence of effective government action to discourage consumption (e.g., by taxing carbon to raise the price and reduce demand), we're in deep trouble.
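For readers who want to check McKibben's "five times" claim, the figure follows directly from the two numbers quoted above; this is just a back-of-the-envelope ratio, not an independent estimate:

\[
\frac{2{,}795 \text{ Gt CO}_2 \ (\text{potential emissions from proven reserves})}{565 \text{ Gt CO}_2 \ (\text{remaining budget for } +2^{\circ}\text{C})} \approx 4.9
\]

Put the other way around, only about 20 percent of the carbon embedded in existing reserves could be burned without blowing through the two-degree budget; the rest would have to stay in the ground.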
The second article, from yesterday's New York Times, offers a cheerier view. In the words of business reporter David Leonhardt, "behind the scenes ... a somewhat different story is starting to emerge -- one that offers reason for optimism to anyone worried about the planet." He describes how investments in clean energy are reducing the price of solar and wind power and how shifts from coal to natural gas (which is less carbon-intensive) for electricity generation have accelerated. And he dangles the hope that government-sponsored R&D will eventually create "disruptive technologies" that "can power the economy without heating the planet."
To be sure, these two articles aren't totally at odds. Leonhardt acknowledges that we have a long way to go, and that many experts believe we need a combination of regulation to raise the price of carbon and further reductions in the cost of alternative energy sources. Similarly, McKibben's account accepts that there is probably still time for effective political action to address the situation (indeed, his whole article is clearly intended as a clarion call for greater activism).
As is so often the case, the issue boils down to politics. And that's why I'm pessimistic, because I can't think of any issue where the barriers to effective political action are so great. First, you have an array of special interests with little or no interest in allowing the government to interfere with their ability to make money in the short term (see under: Koch Brothers). Second, you have a political system in the United States (the world's second-largest greenhouse gas producer) that is unusually open to lobbying and other forms of political interference. Third, climate change is a classic intergenerational equity problem: it's hard to get people to make sacrifices today (i.e., in the form of higher energy prices, less comfortable houses and offices, more expensive travel, etc.) for the sake of people who haven't even been conceived yet. The same principle applies to politicians: why should they jeopardize their re-election prospects for the sake of voters who won't arrive on the scene until they are long gone? Fourth, there's a thorny equity issue between advanced industrial countries like the United States (whose economies developed before anyone knew about climate change) and emerging economies like China or India that don't want to slow their growth by reducing greenhouse gas emissions today. Even if there is a rapidly growing consensus on the need to do something soon, everybody wants somebody else to pay most of the price or bear most of the burdens.
For all these reasons, the well-publicized effort to devise an effective global solution to the problem of human-induced climate change has largely failed thus far. It's possible that some new disruptive technology will swoop in and solve the problem for us, or that some of the intriguing proposals for "geo-engineering" the planet will prove workable and effective.
Maybe, but such hopes remind me of this old cartoon. If we're going to need a miracle (whether political or technological), we're going to have to be more explicit about what happens in step two.
At the Big Think website, John Horgan argues that war is just a cultural practice that humankind could eventually abandon, unless we keep infecting ourselves with the "war virus" (h/t Andrew Sullivan). If one state gets infected by war-proneness, so his argument runs, its neighbors may have no choice but to follow suit and adopt similar measures in order to prevent themselves from being conquered. In Horgan's words (as reported by Mark Cheney here):
"Imagine your neighbor is a violent psychopath who is out for blood and land. You, on the other hand, are person who wants peace. You would have few options but to embrace the ways of war for defense. So essentially your neighbor has infected you with war."
It's an arresting use of language, perhaps, but the history of social Darwinism should have taught us to be wary of bringing misplaced biological analogies into the study of world politics. Viral infections spread by very specific and well-known mechanisms -- e.g., they hijack the machinery of host cells in order to replicate themselves -- and that's not remotely like the mechanism that Horgan is identifying here. Instead, he's actually describing a situation where an external threat forces the leaders of neighboring states to rationally choose to adopt policies and strategies designed to ensure their survival. That's not how viruses spread: you don't catch a cold because you've decided the only way to protect yourself against your sneezing neighbor is to start sniffling and sneezing along with them.
The actual logic that Horgan is pointing to here is the basic "security dilemma" that realists have been talking about ever since John Herz. In a world where no agency or institution exists to protect states from each other, each is responsible for its own security. Because states cannot know each other's intentions with 100 percent certainty (either now or in the future), they have to prepare for the possibility that neighbors may do something nasty at some point. So they invest in their own armed forces or they look for powerful allies, especially if they think the possibility of trouble is fairly high. And once they do that, others have to worry about them in turn. This is the "tragedy" of great power politics identified by my colleague John Mearsheimer, and it's a much better explanation for security competition (and war) than some analogy to microbes.
To be fair, Horgan's larger point is simply that war is not a biological necessity; it is a specific political or cultural response to certain conditions and thus in theory could gradually be abandoned. This theme has been developed at length by John Mueller and more recently by Steven Pinker. I agree with Pinker's claim that the overall level of human violence has declined significantly over the past several centuries (mostly due to the emergence of increasingly stable domestic political orders, i.e., states), but I remain agnostic about the larger claims for a long-term reduction in inter-state violence. That trend is driven almost entirely by the absence of great-power war since 1945, and the absence of great-power war may have multiple and overlapping causes (bipolarity, nuclear weapons, the territorial separation of the U.S. and USSR during the Cold War, the spread of democracy, etc.) whose persistence is hard to forecast.
The absence of great-power war is a good thing, because major powers have the most capability and can do the greatest harm when their destructive capacities are fully roused. What we're seeing instead, however, are protracted conflicts among warlords, insurgents, or relatively weak states (think the Congo, Sudan, or Colombia), and wars of choice waged by the United States and other powerful states in various strategic backwaters, mostly against adversaries that we don't think can do much in response. At least we hope not.
Apologies if I've blogged about this before, but reading Micah Zenko's excellent post on "ten things you didn't know about drones" reminded me of something I've pondered a bit over the past couple of years. To be specific: I wonder if we are reaching a moment in which traditional "surprise attacks" are less and less likely, at least between major powers.
Genuine "surprise attacks" are pretty rare already -- at least at the strategic level -- for the simple reason that mobilizing large military forces is a huge undertaking and hard to conceal. Hence, opponents are likely to see what you're doing and know that you're coming. The United States fought Iraq twice, for example, and though Saddam apparently clung to the hope that we wouldn't actually fight, it can't have been a complete surprise to him when the bombs started falling. After all, in both cases we'd been talking about it for months while building up our forces right next door.
To be sure, in some cases information may be ambiguous, and you can imagine how one side might mask its intentions or create sufficient ambiguity to pull off the surprise. It helps if the victim is complacent or misreads the available intelligence (as Israel did in 1973), or if it stubbornly refuses to believe that an attack is coming (Stalin in 1941). Surprise may also be achieved if the attacker is lucky and surveillance is difficult (Pearl Harbor), or if the target doesn't "connect the dots" (the United States on 9/11). But in a world where communications are instantaneous and surveillance is increasingly widespread, pulling off a genuine strategic surprise is bound to get more difficult.
This is not to say that unexpected and small-scale attacks cannot occur when the necessary forces are already in place, which is why North Korea could suddenly shell a South Korean island in 2010. Surprise can also work if the target is undefended and it doesn't take a lot of mobilization, which is how Argentina could seize the Falklands in 1982. Similarly, tactical surprise is commonplace once forces are engaged in the field.
But for today's major powers, it is hard to imagine conventional forces pulling off a genuine strategic surprise attack. Reconnaissance satellites can monitor global hotspots on a more-or-less real-time basis and let leaders know if forces are being moved and prepared for combat. Drones can provide even more detail, and electronic surveillance capabilities of various sorts can monitor message traffic. And the victim doesn't have to be the party that detects the preparations, provided the party that does is willing to blow the whistle. It is possible that these capabilities could be disrupted by countermeasures or a cyber-attack, but a large-scale effort of that sort would itself be a warning sign.
This means that large-scale surprise attacks may be increasingly rare, except in certain special circumstances. What might those circumstances be? First, states that lack advanced surveillance capabilities are still vulnerable, unless someone tips them off in advance. Second, we will still see surprise attacks by airplanes, drones, or missiles, particularly if the attacks don't require a lot of advance preparation and/or involve short distances. A third possibility is cyber-attacks or attacks by terrorist groups, especially because the latter operate in clandestine fashion. Nor can one rule out really elaborate efforts at deception (such as staging an attack in the context of a previously announced exercise, or something like that).
But notice that most of these scenarios depict attacks of rather limited effect. Air strikes can destroy specific facilities (e.g. Osirak, or the Syrian reactor site), but can't topple a government all by themselves or enable an attacker to take over territory.
None of this is to say that major-power war is gradually becoming obsolete (though some have argued that it is, for other reasons). Rather, I'm just suggesting that the sort of "3 AM phone call"/bolt-from-the-blue scenarios much beloved of strategists may be less and less relevant, because global transparency has made it very hard to mask the preparations.
Today I want to offer a few brief words of tribute to Paul Doty, who passed away yesterday at the age of 91. Paul was a distinguished biochemist and molecular biologist, as well as a pioneering figure in the field of arms control. He was head of the Federation of American Scientists, a founder of the Pugwash Conferences (which brought together scientists from both sides of the Iron Curtain to discuss arms control and war prevention), and a key figure in the renaissance of security studies that began in the late 1970s. A more detailed account of his life and career can be found here and here.
I am one of the countless number of scholars who owe part of their professional success to Paul's vision and support. Back in the 1970s, Paul realized that his generation of policy-minded academics was not being replicated, and he convinced the head of the Ford Foundation, McGeorge Bundy, to finance new research centers at a number of prominent universities. This act led to the founding of the Center for Science and International Affairs (CSIA) at Harvard (with Paul as founding director), and to parallel centers at Stanford, UCLA, and Cornell.
The model for CSIA (subsequently renamed the Belfer Center) was a scientific lab. In addition to providing young scholars with the time and resources to conduct their research, these centers also provided an atmosphere where older scholars could mentor younger colleagues and where people with varying backgrounds could meet, exchange ideas, and build robust professional networks. Thus, a fellowship at CSIA was more than just an opportunity to finish or revise a dissertation. It was also a chance to interact with prominent academics and policymakers, to learn how to challenge a prominent expert with whom one disagreed, and, in general, to comport oneself as an engaged and competent professional. My initial stint at CSIA (1981-1984) was central to getting my own career started, and there are now literally hundreds of CSIA alumni holding prominent positions in the academy and in key policymaking circles, including prominent Obama administration figures such as Michele Flournoy, Daniel Poneman, Kurt Campbell, and Ivo Daalder.
Paul had a lot of the "absent-minded professor" in him, and stories about some of his idiosyncrasies became legendary among his colleagues. But what I remember most was his rare ability to cut to the heart of an issue, and his quiet fearlessness in confronting those with whom he disagreed. I never saw him behave rudely to a visiting speaker, but he had little patience for arguments that didn't add up or for policy positions that made no sense. And it didn't matter if the person trying to sell some dubious idea was powerful or prominent; Paul would press the attack with quiet persistence. He was, in short, a truth-teller, who cared more about getting the right answer or the right policy than advancing his own personal fame or power. In that most basic of virtues, he was a model for us all.
Apart from a few brief sojourns at various think tanks, I've spent most of my professional life in the academic world. Seven of these years were spent helping run various programs, first as deputy dean of social sciences at the University of Chicago and later as academic dean here at the Kennedy School. I have one child in college and another heading there in two years. You can therefore assume I have a certain professional and personal interest in the whole business of higher education.
Which is why I find discussions of how technology might transform this whole enterprise quite fascinating. It's hard not to read such articles and wonder how my own job might change in the years ahead, and to reflect on how I think it ought to change. I have not studied this issue in detail, so what follows are some purely impressionistic observations, based mostly on my own experience.
1. I think there's no doubt that the traditional model of the academic lecture is headed the way of the dodo. I say that with a certain wistful regret, because I enjoy lecturing and like to think I'm fairly good at it. But it's hardly an efficient mode of information transmission, and there are plenty of studies suggesting that students don't learn particularly well in this sort of passive "I-speak-while-you-listen-and-take-notes" experience. Lecturing of the old-fashioned sort can be entertaining and inspirational, but real learning requires students to engage and wrestle with the material instead of just hearing some older person declaim about it.
2. Given that top-flight faculty are among any college or university's scarcest resources, having them stand in front of a handful of students and talk is especially inefficient, and all the more so in basic introductory courses. In other words, you probably don't want Nobel Prize winners teaching basic statistics, Economics 101, or even Intro to Biology -- especially when there may be lots of less renowned people who are actually better at doing that. But you do want students to have the opportunity to interact with the most brilliant minds, to argue with them, to see how they do their work, and to be inspired by their example. And that means creating different sorts of educational experiences (seminars, workshops, mini-courses, etc.) rather than just one.
3. Information technology is making it possible to transmit educational content at almost no cost; you can put course materials on the web and stream lectures to anyone with an internet hookup. This is what MIT is doing now, and it doesn't seem to be discouraging people from wanting to attend full-time and pay full freight. There are also online teaching programs that might do a better job of teaching basic materials (such as introduction to microeconomics, statistics, calculus, etc.) than the old model of the single lecturer with a chalkboard and a pile of notes. This suggests that we ought to be thinking of ways to use faculty rather differently -- in more interactive and personal modes -- where hands-on attention, genuine inspiration, and pedagogical ability can produce big payoffs, while using online tools to deliver basic factual or technical content.
4. I suspect that in the near future we are going to see a lot of experimentation with new forms of higher education, reflecting the fact that these institutions in fact serve many purposes other than merely transmitting knowledge/skills to students. One reason MIT can make its content available for free is that students understand there is a difference between watching lectures online and actually being in the class, being on the campus, and being immersed in the broader in-person environment. In the United States, at least, universities and colleges also provide a relatively safe space for making the transition from adolescence to adulthood. They are environments where young people can meet future spouses of similar class or social backgrounds, have lots of arguments with peers and with their professors, and get a lot of preconceived notions challenged. For many young people (though not all), college is about a lot more than just what they learn in class, which is one reason parents are willing to pay through the nose to make that whole experience possible.
What I'm describing here, of course, is the traditional model of a liberal arts education, and it's hardly the only model out there. Other institutions (e.g., commuter colleges, junior colleges, vocational institutes) serve somewhat different educational functions and are already organized differently. My guess, therefore, is that changes in information technology and the overall globalization of information and education are going to produce an explosion of innovation over the next few years. The traditional four-year university/college won't disappear, but it will be coexisting and competing with a lot of other models.
5. Lastly, this is going to be a painful process. Universities are filled with brilliant and innovative people -- as individuals -- but they are also incredibly conservative institutions (not politically, but in the sense of being wary of change). As a former Harvard president reportedly said, "trying to change the curriculum is like moving a graveyard." Faculties don't like having to retool, and alumni and other stakeholders often have powerful emotional attachments to traditional ways of doing business. And the older and more successful a university is, the more impervious to change it is likely to be.
Plus, coming up with new educational models is hard to do if you're already working pretty hard teaching the existing program. But there's no stopping this sort of Schumpeterian "creative destruction," and I'd hate to be working for the educational equivalent of Polaroid -- a brilliant and innovative company that proved unable to adapt to a rapidly changing technological frontier.
Now if we can just get universities out of the business of running semi-professional athletic teams...
There's a fascinating piece in today's New York Times, summarizing the findings of a recent Science article on the origins of human language. Based on a mathematical analysis of phonetic diversity (i.e., the number of separate sounds in different languages), biologist Quentin Atkinson of the University of Auckland has determined that human language originated in southern Africa around 50,000 years ago (some scientists believe its origins may be even earlier).
You've got to hand it to our species: 50,000 years isn't that long a time. Think of all the good and bad ideas that we've produced in 50 millennia: Shakespeare, the "divine right of kings," both slavery and abolitionism, relativity, the Bhagavad Gita, fascism, a mind-boggling array of religious dogma, liberalism, Marxism, the movies of Fred Astaire, Mad magazine, Japanese manga, rap, hip-hop, and bebop. The list is infinite … and now there's the blogosphere.
But here's what I wondered as I finished the article: Who uttered the first pun? And did those early humans groan when they heard it?
Explanatory Note: A couple of weeks ago, I read a news story about how museums around the country were competing to exhibit the retired space shuttle Discovery, after its long and supposedly distinguished career. That's not surprising, of course, as having a shuttle on display would undoubtedly be a big draw for a lot of museums. What troubled me was the suspicion that future museum exhibits would depict the whole shuttle program in laudatory terms, instead of treating it as a foolish diversion of national resources. Space policy isn't really my thing, however, and I said to myself: "You know, you'd need a rocket scientist to write this properly!" Fortunately, I have one available: my father. He's a geophysicist who spent much of his career designing satellite packages and interpreting the data they produced, so I asked him if he'd be willing to contribute a guest post on the topic. Here's what he sent in. -- S.M.W.
By Martin Walt IV
Recent news columns have commemorated the retirement of the Space Shuttle orbiter Discovery. It is indeed noteworthy that this vehicle experienced some 39 launches and traveled 150 million miles in near-Earth space. This achievement was made possible by the imaginative engineers and scientists who conceived the Shuttle program and developed the necessary technical innovations. Recognition must also be given to the dauntless flight crews -- both military personnel and civilians -- whose courage and dedication were outstanding, especially those astronauts who volunteered to fly after two orbiters were lost in accidents that revealed serious weaknesses in the hardware and in NASA's managerial culture.
The scope of devastation from the earthquake and tsunami in Japan is heart-rending, and readers who are in a position to help should donate generously to the charity of their choice. (See here for a list of worthy options).
The immediate consequences of the disaster are real enough, but today's New York Times also identifies what could be an even more significant long-term effect of this event: the curtailing of plans to address global warming through sharply increased reliance on nuclear power.
The basic equation here is pretty simple. The only way to deal with climate change is by reducing greenhouse gas emissions, which in turn means reducing reliance on the burning of fossil fuels. Conservation, improved efficiency, and "green" energy sources like wind farms can help, but not enough to fill the gap without a significant curtailing of living standards. Accordingly, many recent proposals to address future energy needs have assumed that many countries -- including the United States -- would rely more heavily on nuclear power for electricity generation. Nuclear power isn't a complete answer to the climate change problem by any means, but addressing that problem in a timely fashion would be more difficult if nuclear expansion is off the table.
The destruction of the Fukushima nuclear plant is bound to set back these efforts, and it may derail them completely. At a minimum, it will make it much harder to get approval for new power plants (which already face classic NIMBY objections), which will drive up costs and make a significant expansion of the nuclear industry politically infeasible in many countries, especially the United States.
This reaction doesn't make a lot of sense because the costs and risks of nuclear energy need to be rigorously compared against the costs and risks of other energy sources and the long-term costs and risks of global warming itself. But that's not the way that the human mind and the democratic process often work. We tend to worry more about rare but vivid events -- like an accident at a nuclear plant -- and we downplay even greater risks that seem like they are part of the normal course of daily life. Thus, people worry more about terrorist attacks than they do about highway accidents or falling in a bathtub, even though they are far more likely to be hurt by the latter than the former.
So, in addition to the thousands of lives lost, the billions of dollars of property damage, and the knock-on economic consequences of the Japanese disaster, we need to add the likely prospect of more damage from climate change down the road. It's possible that clearer heads will prevail and guide either more stringent conservation measures or the sensible expansion of nuclear power (along with other energy alternatives), but I wouldn't bet on it.
Some readers may recall that I've been a skeptic about the whole "cyber-war" business, and suggested that it was an ideal policy arena in which to expect threat-inflation. To be clear, I did not argue that there was absolutely nothing to it, or even that we could afford to ignore the problem, but there's no question that I've been less than fully persuaded by a lot of the hype.
It is therefore fair to ask whether the whole Stuxnet affair has altered my views on this matter. (For those of you just returning from a month wandering in the desert, I refer to the computer worm whose origins remain obscure but which has apparently affected a number of industrial control computers in Iran, presumably with the intent of disrupting the country's nuclear enrichment efforts.)
So has the Stuxnet worm convinced me that the cyber-war/cyber-terror threat ought to be taken more seriously?
Yes and no.
On the one hand, this incident has provided a vivid demonstration of the potential impact that various cyber-weapons could have, and so it has led me to revise my concerns about the problem upward. But as noted above, I never said it should be ignored; only that we had to be careful not to over-hype it.
On the other hand, I think this incident also demonstrates why this whole problem is still so hard to evaluate, and why we really need more information and better assessment before we'll know if we are over- or under-reacting. Although some people undoubtedly know who made the Stuxnet worm and how it got into Iran's industrial control systems, that information hasn't been made public thus far. Indeed, private computer security experts are reportedly miffed that the U.S. government isn't providing them with everything it may know about the Stuxnet problem. So it's hard for us laypersons to judge just how broad or serious such a threat might be, or how easy it would be for others to do something like this to us. The apparent success of the Stuxnet attack may not tell us very much about the vulnerability of other systems (including military systems), especially when they are equipped with more sophisticated defenses.
The reports I've seen also suggest that the worm was almost certainly the product of a sophisticated programming team, and most analysts seem to think that a wealthy and/or advanced country had to be behind it. If so, then one might be justified in concluding that cyber-war in the future will be a lot like conventional war in the past: the richest and most advanced countries will be better at it, simply because they can devote more resources to the problem. Even if Stuxnet suggests that cyber-war has more potential than people like me had previously believed, it doesn't herald some sort of revolutionary shift in the global balance of power, in which a handful of clever computer-wielding Davids suddenly strike down various lumbering, computer-dependent Goliaths.
In any case, the one thing I haven't changed is my desire to see this problem analyzed in a more systematic and public fashion, and by a panel of experts with no particular professional or economic stake in the outcome. Ironically, in the aftermath of the Stuxnet attack, I'd like to see that even more.
Today's New York Times has an interesting article on a diplomatic dispute between the United States and South Korea, arising from South Korea's desire to begin reprocessing some of the spent fuel from its large nuclear power program. South Korea gets about forty percent of its electricity from nuclear power plants, and is reportedly running out of space to store the spent fuel. It is barred from reprocessing by a 1974 agreement with the United States, and the Koreans are now pushing for a revision when the treaty expires in 2014.
U.S. officials oppose this step, fearing it will set a precedent for other states and could make it harder to push North Korea to give up its own nuclear program. (The problem with reprocessing spent fuel is that it yields plutonium, which can be used to make a nuclear bomb.) There are also lingering concerns about South Korea's intentions, given that the country flirted with getting nuclear weapons back in the 1970s.
Three quick thoughts. First, as the Times article makes clear, critics who warned that the lax U.S.-India nuclear deal negotiated by the Bush administration would come back to haunt us should be feeling vindicated, and South Korea has rightly complained about the obvious double standard here. (South Korea is a long-time U.S. ally and an NPT signatory, while India is a nuclear weapons state that has yet to sign the NPT.) Yet the Indians got advance U.S. consent for reprocessing in their nuclear deal with the United States, while South Korea is getting stiffed.
Second, the dispute also illustrates an important aspect of intra-alliance bargaining, especially when nuclear weapons are involved. The Times story quotes Cheon Seong-whun, a senior analyst at a government-run research institute, saying that "We will never build nuclear weapons as long as the United States keeps its alliance with us." Probably true, but notice that this is both a reassuring pledge and an implicit threat. What Mr. Cheon is saying -- and I'm not criticizing him for it -- is that South Korea doesn't need a nuclear deterrent as long as the United States continues to protect it. But one reason why South Korea might want to reprocess -- and again, I'm not saying it shouldn't -- is so that it can go nuclear at some point in the future, should confidence in the U.S. commitment erode. And notice that the closer South Korea is to an actual weapons capability, the more potential leverage it might have over the United States.
Third, it's hard not to be struck by the basic hypocrisy of the U.S. position, which it shares with other existing nuclear powers. Washington has no intention of giving up its own nuclear weapons stockpile or its access to all forms of nuclear technology. The recent New START treaty notwithstanding, the U.S. government still believes it needs thousands of nuclear weapons deployed or in reserve, even though the United States has the most powerful conventional military forces on the planet, has no great powers nearby, and faces zero risk of a hostile invasion. Yet we don't think a close ally like South Korea should be allowed to reprocess spent fuel, or to take any other measures that might under some circumstances move it closer to a nuclear capability of its own.
In my view, there's nothing reprehensible or even surprising about this situation; it merely reminds us that no two states have the same interests and that hypocritical (or, more politely, 'inconsistent') behavior is commonplace in international politics. But the U.S. ability to persuade others not to flirt with their own nuclear capabilities might be a lot stronger if we didn't place so much value on them ourselves.
No profound thoughts to offer today; instead, ten rapid-fire, shoot-from-the-hip impressions -- some of them snarky -- from my current road trip. Readers who want to discount what follows can chalk it up to some serious jet lag.
1. British Airways has mastered the art of predatory pricing. First, they canceled my initial flight to London, which meant I couldn't make my connection to Paris in time for my first commitment. So I had to buy a separate one-way ticket on Air France to preserve my schedule. But did BA offer to refund the unused portion of my itinerary (which was unused because they canceled the flight)? Nooooooooo! If I wanted a refund, I had to cancel my entire itinerary (which involved four more flights) and then rebook all four of the remaining legs under a new reservation number, at a new, higher price than the original ticket. Heads they win, tails you lose. Resolved: avoid BA whenever possible in the future.
2. Alas, Air France is not an appealing alternative; it's no longer a great airline but instead is merely adequate. I still have vivid and glowing memories of flying first class to Paris on my honeymoon (a gift from my mother-in-law, who had a gazillion frequent flyer miles back then). I wasn't in first class this time, but even taking that into account, it was a pretty mediocre experience. And the "tournedos" they served for dinner would have made Escoffier tear his hair. Some poor vache died for no good reason.
3. Public transportation. On the other hand, there were a few experiences on the road that put les États-Unis to shame. In Paris, there's a direct train from the airport into the city, or you can take an Air France bus that leaves frequently, is cheap, and gets you to one of several convenient Metro stops. In London, the "Heathrow Express" rail line is equally convenient, and a virtually seamless way to get from the airport to central London. As you leave customs, there's a guy standing there with a credit card swiper. Thirty seconds later you have your ticket; the trains leave every 15 minutes and get you to Paddington in about 20. Consider that you can't take a train to Dulles or JFK, and you're reminded of how bad most public transport and infrastructure is in the Land of the Free(way).
Should the National Science Foundation stop funding research in political science? Senator Tom Coburn (R-OK) thinks so, and the American Political Science Association is predictably upset. I can't say that I think Coburn is right, but I'm finding it hard to get too exercised about it. I say this in part because I think a lot of NSF-funded research has contributed to the "cult of irrelevance" that infects a lot of political science, and because the definition of "science" that has guided the grant-making process is excessively narrow. But I also worry that trying to use federal dollars to encourage more policy-relevant research would end up politicizing academic life in some unfortunate ways.
With respect to the first issue, NSF support has undoubtedly facilitated a lot of useful data collection, especially in the field of American politics, and the availability of this data has contributed to our knowledge of voting behavior, electoral processes, and other aspects of democratic politics. (See Paul Krugman's blog post for more on this.) What's less clear is whether that additional "scientific" knowledge is actually helping real democracies perform better, or helping policymakers devise solutions to real policy problems. And in the field of international relations, I suspect that most of the NSF-funded research has been by-academics-and-for-academics, and hasn't had a discernible impact on important real-world problems.
But I haven't done a comprehensive survey of NSF funding in this field, and it's entirely possible that I missed something important. (The work of Elinor Ostrom, who just received the Nobel Prize, might be a case in point.) Here's a suggestion: why doesn't the NSF put up a page on its website listing all the grants it has made to political science since 1995, along with the research products those projects produced and hyperlinks to the books or articles? That way, we could easily examine the results and debate whether they were useful. Or if the NSF doesn't want to do that, the APSA could provide this information itself. If the field has a lot of accomplishments to be proud of, surely it won't take long to compile a compelling list. And by the way, it would be interesting to compare the results of NSF-funded projects with research that was either unfunded (i.e., done without outside grants) or funded from other sources.
But please don't just give me a citation count, because all that shows is that some academic has managed to get cited by his or her fellow scholars. In other words, incest. Demonstrating real-world value will require some serious process-tracing outside the ivory tower, to show how new knowledge and ideas are actually shaping policy in positive ways.
My other concern has to do with the relationship between government funding and policy relevance. Much as I would like more academic research to address real-world problems, I worry that it would inevitably become more politicized once the government gets involved. It is hard to imagine how a serious study of counterinsurgency, the global financial crisis, human rights, or counterterrorism policy would not have important implications for current policy debates, and some of that research would be explicitly critical of key government policies. Senator Coburn is eager to cut off political science because he thinks it is wasteful, but other politicians are bound to try to fund projects that conform to their own political prejudices. Or they will go after government-funded research that they think is "unpatriotic," just as politicians once attacked a major RAND study on the dynamics of surrender by suggesting it was encouraging "defeatism." Academics are human, and some of them are bound to start tailoring their topics and their conclusions to fit the perceived preferences of funders. That's OK in the think tank world, but universities really ought to aim for a higher standard. The other danger is that academics will be encouraged to make their research as bland as possible, so that it doesn't offend anyone. We hardly need more of that.
As I've written elsewhere, political science ought to place more value on its ability to contribute to solving real-world policy problems, but that will require a shift in the norms and standards that the field sets for itself. Ironically, that rethinking might happen faster if the NSF gravy train were smaller, or if academics started to worry that ideas like Coburn's might catch on.
Stephen M. Walt is the Robert and Renée Belfer professor of international relations at Harvard University.