Over at the new, independent Daily Dish, Andrew Sullivan has been hosting an interesting thread on why academic writing is frequently abysmal. As someone who tries hard to make even my academic writing clear and accessible and who tries to instill that value in my students, I've followed the thread with interest.
For starters, I don't think the problem is that no one encourages future academics to write well. In my own case, for example, I was fortunate to study with Alex George at Stanford as an undergrad and with Kenneth Waltz at Berkeley during graduate school, and both repeatedly stressed the importance of writing well. Waltz didn't do a lot of line-editing of grad student papers or dissertations, but he certainly let me know when he thought my writing was obscure, verbose, disorganized, or just plain confused. He also spoke openly about the importance of writing in his graduate courses, encouraged students to read books such as Fowler's Modern English Usage, and was scornful of the trendy neologisms that infest academic writing like so many weevils.
I also don't think the problem is due to poor editing at journals or university presses. I've published in over a dozen academic journals, with a prominent university press, and with two different commercial publishers, as well as in a number of journals of opinion. Almost all of the editors or copy-editors with whom I've worked were helpful and attentive, and some were superlative. Indeed, I can think of only one case in nearly thirty years where a manuscript of mine was truly butchered by an editor (it was actually done by an intern), and fortunately the magazine let me repair the damage before the article appeared.
So why is academic writing so bad?
One reason academic writing is sometimes difficult is that the subjects being addressed are complicated and hard to explain in ordinary language. I have more than a little sympathy for philosophers grappling with deep questions about morality, time, epistemology, and the like, as these subjects are inherently slippery and it is easy to lose the reader in a fog of words. But it isn't inevitable even there. Some philosophers manage to write about very deep and weighty matters in prose that is crystal clear. You still have to pay attention and think hard to understand what is being said, but not because the author is making it more difficult than it needs to be.
A second reason is the failure of many scholars to appreciate the difference between the logic of discovery and the logic of presentation. Specifically, the process by which a scholar figures out the answer to a particular question is rarely if ever the best way to explain that answer to a reader. But all too often articles and manuscripts read a bit like a research narrative: "First we read the literature, then we derived the following hypotheses, then we collected this data or researched these cases, then we analyzed them and got these results, and the next day we performed our robustness checks, and here's what we're going to do next."
The problem is that this narrative form is rarely the best way to make a convincing case. Once you know what your argument is, really effective writing involves sitting down and thinking hard about the best way to present that argument to the reader. The most important part of that process is figuring out the overall structure of the argument -- what points need to be developed first, and then what follows naturally or logically from them, and so on. An ideal piece of social science writing should have a built-in sense of logical or structural inevitability so that the reader moves along the argument and supporting evidence as effortlessly as possible.
Achieving this quality requires empathy. You have to be able to step outside your own understanding of the problem at hand and ask how your words are going to affect the thinking of someone who doesn't already know what you know and may even be inclined to disagree with you at first. Indeed, persuasive writing doesn't just convince the already-converted; a really well-crafted and well-supported argument will overcome a skeptic's initial resistance.
Why does this matter? Because the poor quality of academic writing is both aesthetically offensive and highly inefficient. Academics should strive to write clearly for the obvious reason that it will allow many others to learn more quickly. Think of it this way: If I spend 20 extra hours editing, re-writing, and polishing a piece of research, and if that extra effort enables 500 people to spend a half-hour less apiece figuring out what I am saying, then I have saved humankind a net 230 hours of effort.
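For what it's worth, the arithmetic checks out. A trivial sketch (using the post's own illustrative figures, which are hypothetical numbers rather than data) makes the calculation explicit:

```python
# Back-of-the-envelope check of the time-savings arithmetic above.
# All figures are the post's own illustrative numbers.
editing_hours = 20            # extra hours spent editing and polishing
readers = 500                 # people who read the finished piece
hours_saved_per_reader = 0.5  # half an hour saved per reader

gross_savings = readers * hours_saved_per_reader  # 250 hours
net_savings = gross_savings - editing_hours       # 250 - 20

print(net_savings)  # → 230.0
```

Of course, the point is rhetorical rather than empirical: any plausible readership quickly swamps the author's one-time editing cost.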
Which leads me to the real reasons why academic writing is often bad. The first problem is that many academics (and especially younger ones) tend to confuse incomprehensibility with profundity. If they write long and ponderous sentences and throw in lots of jargon, they assume that readers will be dazzled by their erudition and more likely to accept whatever it is they are saying uncritically. Moreover, jargon is a way for professional academics to remind ordinary people that they are part of a guild with specialized knowledge that outsiders lack, and younger scholars often fear that if they don't sound like a professional scholar, then readers won't believe what they are saying no matter how solid their arguments and evidence are.
The second problem is the fear of being wrong. If your prose is clear and your arguments are easy to follow, then readers can figure out what you are saying and they can hold you to account. If you are making forecasts (or if the theory you are advancing has implications for the future), then you will look bad if your predictions are clearly stated and then fail. If your argument has obvious testable implications, others can run the tests and see how well your claims stand up.
But if your prose is muddy and obscure or your arguments are hedged in every conceivable direction, then readers may not be able to figure out what you're really saying and you can always dodge criticism by claiming to have been misunderstood. (Of course, sometimes critics do deliberately misrepresent a scholarly argument, but that's another matter.) Bad writing thus becomes a form of academic camouflage designed to shield the author from criticism.
In the endless war against academic obscurantism, I tell my own students to read Strunk and White's classic The Elements of Style and to heed their emphasis on concision. Most of us tend to overwrite (especially by using too many adverbs), and shorter is almost always better. Or as Strunk and White put it:
"Vigorous writing is concise. A sentence should contain no unnecessary words, a paragraph no unnecessary sentences, for the same reason that a drawing should have no unnecessary lines and a machine no unnecessary parts. This requires not that the writer make all his sentences short, or that he avoid all detail and treat his subjects only in outline, but that every word tell."
I'm also a fan of Anthony Weston's A Rulebook for Arguments, a very smart primer on the different forms of persuasive argument and the ways to make written arguments more convincing.
Finally, I encourage students to emulate writers they admire. If there are scholars whose books you enjoyed, read them several times and try to capture what it is that makes their use of language so effective. I've found inspiration in writers like Waltz, Thomas Schelling, James Scott, John Mueller, and Deirdre McCloskey. And you don't have to agree with someone to respect their ability to write: Charles Krauthammer's ideas usually appall me, but there's no question that he is an effective prose stylist.
In the end, it comes down to what a scholar is trying to achieve. If the goal is just narrow professional success -- getting tenure, earning a decent salary, etc. -- then bad writing isn't a huge handicap and may even confer some advantages. But if the goal is to have impact -- both within one's discipline and in the wider world -- then there's no substitute for clear and effective writing. The question is really pretty simple: do you want to communicate with others or not?
For years a number of political scientists have been complaining about the propensity for scholars to study topics that are of little real-world value or of interest only to a handful of fellow scholars. We've come to call this the "cult of irrelevance." At the same time, many academics cloud their analyses in obscure jargon or a fog of methodological "sophistication," and rarely bother to offer up translations for the busy policy-maker. To make matters worse, although academics defend the institution of tenure fiercely, most of them do not use the protection it affords to pursue topics that might be politically controversial.
These unfortunate tendencies are not universal, however, and a number of us have tried to address the broader issue in various ways. You can read about the general subject here, here, here, or here. In that spirit, I'm also happy to pass on the news that a group of political scientists have organized a week-long summer institute designed to tackle the problem head-on. Under the guidance of Bruce Jentleson of Duke, Steve Weber of UC-Berkeley, and James Goldgeier of George Washington, a new International Summer Policy Institute will "deliver an intensive curriculum designed to teach participants how to develop and articulate their research for a policy audience, what policymakers are looking for when they look to IR scholarship, whom to target when sharing research, and which tools and avenues of dissemination are appropriate." The Institute is part of a larger effort to "bridge the gap" between academia and policy, and you can find out more about its activities here.
Needless to say, I think this is a worthy enterprise. Together with efforts like the Tobin Project, it may encourage more academics to focus their research efforts on policy-relevant topics and teach them how to communicate their results in ways that policymakers will find more accessible. The point here, by the way, is not to "dumb down" scholarship or to imitate the plethora of partisan think tanks now located inside the Beltway. Academic scholars should be independent researchers first and foremost, and seekers of truth above all. But the topics that they choose to address can be chosen to illuminate important policy issues more directly, and devoting some time to figuring out how to communicate their results more broadly would surely be a good thing.
What is also needed is a change in academic practice, including the criteria that are used to make key hiring and promotion decisions. The standards by which we assess scholarly value are not divinely ordained or established by natural law; they are in fact "socially constructed" by the discipline itself. In other words, we collectively decide what sorts of work to valorize and what sorts of achievement to reward. If university departments placed greater weight on teaching, on contributions to applied public policy, on public outreach, and on a more diverse range of publishing venues -- including journals of opinion, trade publishers and maybe even blogs--then individual scholars would quickly adapt to these new incentives and we would attract a somewhat different group of scholars over time. If university departments routinely stopped the "tenure clock" for younger academics who wanted to do a year of public service, that would enable them to gain valuable real-world experience without short-changing their long-term academic futures. It would also send the message that academia shouldn't cut itself off from the real world. And it probably wouldn't hurt if deans, department chairs, and university presidents welcomed controversy, encouraged intellectual diversity, and defended the slaying of sacred cows. As I've said before, academics really shouldn't count it a great achievement when students have no interest in their classes, and when people outside the ivory tower have no interest in what we have to say.
Assuming China continues to grow economically (which seems like a fairly safe bet), how will this trend affect strategic alignments in Asia? I've posted on this topic before (see here), but I've been thinking about it again in light of some recent developments and after reading some recent scholarship on the topic.
Structural realism gives a straightforward answer to the question: As China becomes more powerful, other Asian states will move to balance it by devoting more of their own wealth to national security and by forging closer security ties with each other and with powerful external actors like the United States.
This is essentially a pure "balance-of-power" explanation, but as some of you probably know, I think that is not the best way to explain why alliances form. In the near-to-medium term, the extent to which Asian states balance against China will depend not just on Chinese power, but on the level of threat that these states perceive. The level of threat, in turn, will be affected not just by China's aggregate capabilities (i.e., its GDP, defense spending, etc.) but also by (1) geography, (2) offensive military capabilities, and (3) perceived intentions.
To be more specific, states that are closer to China are likely to be more worried than states that lie some distance away. In particular, states that border directly on China -- such as Vietnam -- have more to fear from China's rising power than states that are separated by water (such as Indonesia), because it is inherently more difficult to project power over oceans. (Taiwan is something of a special case, given the tangled history of cross-strait relations and its relative proximity.)
Furthermore, the level of threat that China poses will depend in part on how it chooses to mobilize its growing economic might. If it builds military capabilities that are primarily designed to defend its own territory, China's neighbors will feel less threatened and be less inclined to balance against it. By contrast, if China develops the power projection capabilities that are typical of most great powers (i.e., large naval and air forces, long-range missiles, amphibious capabilities, etc.), then others in the region will worry about what those capabilities might be used for and they will be more likely to join forces with each other (and the United States) to protect their own interests and autonomy.
When great powers intervene in minor countries, sometimes they win quick and fairly decisive victories. (Think U.S. in Grenada). When this happens, the only short-term problem is where to hold the victory parade and how many medals to give out. But when a war of choice goes badly, then national leaders have to decide either to cut their losses and get out or to "stay the course." If the opponent is an insatiable great power like the Third Reich, there may be little choice in the matter. But if the enemy is an insurgency in a relatively weak and unimportant state, and the challenge is nation-building in a society that you don't understand very well, it's a much trickier decision.
As we've seen in Iraq and are seeing again in Afghanistan, getting out of a quagmire is a whole lot harder than getting into one. Indeed, I'd argue that this is a general tendency in most wars of choice: they usually last longer than the people who launch them expect, and they usually cost a lot more. I'm hardly the first person to notice this phenomenon, which does make you wonder why it keeps happening.
In any case, now that we are (supposedly) leaving Iraq, here are my Top Ten Reasons why wars of choice last too long, and why it's so hard for politicians to wake up, smell the coffee, and just get out.
1. Political leaders get trapped by their own beliefs. All human beings tend to interpret new information in light of their pre-existing beliefs, and therefore tend to revise strongly-held views more slowly than they should. Having made the difficult decision to go to war (or to escalate a war that is already under way) it will be hard for any leader to rethink the merits of that decision, even if lots of information piles up suggesting that it was a blunder.
2. Information in war is often ambiguous. Another reason wars of choice last too long is that the case for cutting one's losses is rarely crystal-clear. Even if there is lots of evidence that the war is going badly, there are bound to be some positive signs too. Remember all those "benchmarks" the Bush administration developed for measuring progress in Iraq? If you have enough of them, you can always find a few items on the list where things are looking better. When the evidence is mixed (as it usually is), leaders are even less likely to rethink their beliefs that the war is worth fighting.
I've been thinking about U.S. grand strategy again, and pondering some big questions that ought to be central to the debate on America's global role. Some of these big questions are researchable, others are by their very nature more speculative. How you answer some of them also depends on the theories you think are most powerful or applicable (i.e., realist theory suggests one set of answers, liberal approaches offer a different set, etc.), and the answers you get should have profound implications for what you think U.S. grand strategy ought to be.
So here are Five Big Questions about contemporary world politics.
1. Where is the EU project headed? The construction of the European Union was a major innovation in global politics, but new doubts have arisen about its long-term future. Pessimists such as Notre Dame's Sebastian Rosato believe the high-water mark of European unity has already been passed, while optimists like Princeton's Andrew Moravcsik think that Europe's current difficulties are likely to encourage further steps towards integration. The answer matters, because the re-emergence of genuine power politics within Europe could force the United States to devote more attention to a continent that some argue is "primed for peace" and no longer of much strategic concern.
2. If China's power continues to rise, how easy will it be to get Asian states to balance against it? Balance of power (or if you prefer, balance of threat) theory predicts that weaker states will try to limit the influence of rising powers by forming defensive alliances against them. China's rise is already provoking alarm in many of its neighbors, who look first to the United States and possibly to each other for assistance. But how strong will this tendency to balance be? If China gets really powerful, and the United States disengages entirely, some of China's neighbors might be tempted to bandwagon with Beijing, thereby facilitating the emergence of a Chinese "sphere of influence" in Asia. But if China's neighbors get support from each other and from the United States, then they'll probably prefer to balance.
But here's the question: Just how much support does the United States have to provide, given that this issue ought to matter more to the Asian states than it does to us? If you think balancing is the dominant tendency (as I do), then the United States can pass a lot of the burden to Japan, India, Vietnam, etc. It can "free-ride" to some degree on them, instead of the other way around. But if you think these states will be reluctant to balance, then the United States might have to do a lot of the heavy lifting itself.
To make matters more complicated still, both the United States and its Asian allies may be tempted to do some bluffing with each other, to try to get their allies to pay a larger share of the burden. Asian states will quietly threaten to realign or go neutral if they don't get more backing from the United States, and U.S. leaders may drop hints about disengagement if they don't get what they want from the allies they are helping protect. This means that figuring out just how large and iron-clad the U.S. commitment needs to be in order to sustain a future balancing coalition is a tricky business, and there will be lots of room for disagreement.
It's easy to think of examples where great powers stayed in some foreign war too long, and with the benefit of hindsight, it's clear that they would have been better off getting out sooner. Examples might include the United States in Vietnam, France in Algeria, Britain and the Soviet Union in Afghanistan, or Israel in southern Lebanon.
Similarly, it's easy to think of wars in which states suffered early setbacks, chose to stay the course anyway, and ultimately succeeded. World Wars I and II, Korea, and the Boer War might be examples of this category, and some would place Iraq in this category too (although I wouldn't).
Finally, I can think of several cases where states chose to get out of trouble quickly when things went south, and never regretted it. The United States got out of Lebanon after a suicide bomber destroyed the Marine barracks there in 1983, and it withdrew from Somalia in 1993 following the Black Hawk Down incident; withdrawal didn't have particularly significant strategic consequences in either case. More importantly, staying longer wouldn't have been worth it anyway.
So here's my question: Are there good historical examples where a great power withdrew because a foreign military intervention wasn't going well, and where hindsight shows that the decision to withdraw was a terrible blunder? If there are plenty of examples where states fought too long and got out too late, are there clear-cut cases where states got out too early?
For a case to qualify, you'd have to show that early withdrawal led to all sorts of negative consequences that might otherwise have been avoided. Hawks normally argue that getting out will embolden one's adversaries, undermine one's credibility, or jeopardize one's geopolitical position, but how often do any of these anticipated misfortunes really happen? Or you could argue that the withdrawing state was very close to winning but didn't know it, and that "staying the course" would have worked if it had just held on a little longer.
One possible candidate is U.S. involvement in Afghanistan in 2002-2003, but even that case isn't clear-cut. Many experts now argue that our current troubles there could have been avoided had we kept our eyes on the ball in 2003 and concentrated on building an effective Afghan government, thereby preventing the Taliban from making a comeback. The main problem with this line of argument is that the United States didn't really "withdraw" from Afghanistan (and certainly not because things were going badly). Instead, we just drew down our forces so we could go invade Iraq. Also, it's not obvious that greater effort back then would have produced a markedly different situation today, although it is certainly possible.
In any case, my question still stands: How often has early and rapid strategic withdrawal from a war of choice led to disastrous results for the withdrawing power? Is staying too long the greater and more common danger? And can anyone think of some good examples?
In May 2003, New York Times columnist Tom Friedman told Ha'aretz's Ari Shavit that the invasion of Iraq was:
"[T]he war the neoconservatives wanted ... the war the neoconservatives marketed ... I could give you the names of 25 people (all of whom are at the moment within a five-block radius of this office [in Washington]) who, if you exiled them to a desert island a year and a half ago, the Iraq war would not have happened."
Was Friedman advancing a "conspiracy theory" to explain the invasion of Iraq? Is it proper to regard the neoconservative movement -- and especially those neocons who were the loudest cheerleaders for invading Iraq -- as a conspiracy or cabal, as some writers have? I don't think so. I have plenty of disagreements with the neoconservative approach to U.S. foreign policy, and I think there's no question they played a central role in leading the United States into Iraq, but to characterize them as a cabal or conspiracy is misleading, counter-productive, and possibly dangerous.
As we know from a number of important books -- including Richard Hofstadter's The Paranoid Style in American Politics, David Aaronovitch's recent Voodoo Histories, and Kathryn Olmsted's Real Enemies: Conspiracy Theories and American Democracy -- conspiracy theories have a long and unhappy history in the United States (and elsewhere). Prominent examples include assorted plots involving Freemasons, preposterous claims about secret Jewish influence (such as the infamous Protocols of the Elders of Zion, a notorious Czarist-era forgery), or the claims that FDR knew about Pearl Harbor in advance, that John F. Kennedy was assassinated by the CIA, or that the U.S. government faked the 1969 moon landing. More recent versions are the 9/11 "truther" movement, which portrays the 9/11 attacks as a secret plot by the U.S. government, and virtually any of the claims put forward by Lyndon LaRouche. Glenn Beck's TV show is an equally fertile source of absurd but scary notions about current U.S. politics. (See here for Jon Stewart's lethal lampooning of Beck's style of "reasoning.")
Conspiracy theories take many forms, but they generally have several common features. First, they often claim to expose the secret machinations of a small group of individuals, acting to accomplish some nefarious but largely hidden purpose. Second, they attribute to the designated group vast and far-reaching powers, including a mysterious ability to control (rather than simply influence) a wide array of institutions. Yet a conspiracy theory (as opposed to a careful institutional analysis) never identifies the precise mechanisms by which this alleged control is achieved and normally fails to provide concrete evidence to justify its far-reaching claims. Alternatively, conspiracy theorists sometimes suggest that "the government" is engaged in some enormously important but covert activity, like hiding captured alien spacecraft at "Area 51" or arranging to bring down the World Trade Center while getting it blamed on al Qaeda. In virtually all cases, a good conspiracy theory implies that what you think you know about the world is dead wrong, usually because the people responsible for the conspiracy have managed to convince you that up is down and black is white.
“Here is what I think a Hippocratic Oath for Quantitative Analysis in Security Studies should look like:
War is a human endeavor. I recognize that it is a phenomenon that does not conform to neat mathematical equations.
I will use quantitative analysis in conjunction with theory and qualitative analysis to describe what I see as phenomena in war and peace. I will be honest about the limits of both my theory and my analysis.
In war and peace, the variables are infinite, and not everything can be measured or assigned a numerical value.
I will not use numbers to signify what are fundamentally qualitative assessments without acknowledging to my reader that I have done so in order to satisfy a departmental requirement, gain tenure, or get published in the APSR. Or because I have been in graduate school for so long that I have forgotten how to effectively write in prose.
I recognize there are no mathematical equations in Vom Kriege and that it is nonetheless unlikely that my legacy will transcend that of Clausewitz.
I recognize that very few squad leaders in the 10th Mountain Division have ever taken a course in statistics yet probably know more about the conduct and realities of war than I do.”
Wise words indeed. I’d just add that Nobel prize-winning economist and strategic guru Thomas Schelling offered a similar warning in The Strategy of Conflict, cautioning against any tendency “to treat the subject of strategy as though it were, or should be, solely a branch of mathematics.”
That’s not to say that various types of mathematical analysis aren’t useful, whether one is talking about operations research, basic statistics, game theory, or whatever. But it’s just a tool, and ought to be used in conjunction with other methods and with an appropriate degree of humility.
Rounding up my first day at the International Studies Association annual meetings, in beautiful midtown Manhattan:
Began by chairing a panel on the forthcoming book Balance Sheet: The Iraq War and U.S. National Security, edited by John Duffield of Georgia State and Peter Dombrowski of the Naval War College. This collection will be out from Stanford University Press in June, and is an excellent attempt to conduct a scholarly assessment of the war’s impact on U.S. security interests. There are chapters by Steve Simon on the war on terror, Mike O’Hanlon on military readiness, Joe Cirincione on proliferation, Greg Gause on the Middle East region, and Clay Ramsay on public opinion. The editors sum it up in their conclusion and also attempt to wrestle with the obvious counter-factuals: what would have happened if we hadn’t gone in? Or if we had sent more troops from the beginning? Or if Saddam had ‘fessed up, or if the inspectors had continued longer? etc. The basic verdict is that the war has been bad for overall U.S. security interests, but the picture painted is not as consistently grim as some of you might think.
The book is important because Iraq remains a political football, and you can bet that Democrats and Republicans will continue to debate both the original decision and the subsequent conduct of the war, and will do so in an explicitly partisan fashion. The belief that Iraq is a disaster helped propel Obama to the Oval Office, but you can already see the neoconservative architects of the war preparing their own “stab in the back theory.” The core of this version is the argument that “the surge worked, and victory is at hand.” So if anything bad happens subsequently, it is all Obama’s fault (or so the argument will run).
That's why a book like this is valuable: academic scholars don't have to pick a side in this fight; their comparative advantage lies in providing as even-handed and fair-minded an assessment as they can. And that's what this book tries to do. Not the last word on the subject, perhaps, but an important contribution.
Then on to another panel on unipolarity, with several excellent papers. One highlight was University of Chicago Ph.D. student Nuno Monteiro’s paper “Unrest Assured: Why Unipolarity is Not Peaceful.” His basic argument is that the dominant state in a unipolar system (i.e., the unipole) will be tempted to try to maintain or improve its advantage, and especially to prevent weak states from acquiring a nuclear deterrent, which the weak state could use to constrain the unipole's actions. Accordingly, the logic of unipolarity will tend to provoke conflicts between the unipole and any lesser powers who refuse to accept its dominance.
It’s a very creative argument, although one can raise at least two questions. First, if Monteiro’s logic is correct, why didn’t the United States do more to stop North Korea, Pakistan, and Iran from getting a nuclear capability? We did fight a war with Iraq to prevent that from happening, but the argument suggests the U.S. should have fought these other states too. Second, if we have been in a unipolar world for the past fifteen years or so, what are the implications of the economic meltdown? Will economic constraints undermine America's dominant position, and drive us back to multipolarity?
A second highlight was Todd Sechser’s paper “Goliath’s Curse: Asymmetric Power and Effectiveness of Coercive Threats.” Using a simple bargaining model, Sechser (from the University of Virginia) argues that great powers often fail to get their way when they issue coercive threats (which is surprising at first glance), and that this problem may in fact get worse the more powerful they are. The basic logic here concerns reputation: weak states will worry about giving in to a great power’s demands (even when the demands are fairly minor), because they will fear that the great power will just demand more later. So they resist now, to enhance their reputation for being stubborn and to convince the great power to leave them alone in the future. The core of the problem is that a very powerful state can’t make a credible commitment to restraint; it can’t reassure the weak state that it really, truly, wants just a modest concession, one that the weak state might be willing to grant if it were confident that this would be the only demand. And the bigger and stronger the coercing state is, the harder it is for that state to reassure the weak power that its aims are actually limited.
Sechser illustrates his model with a nice case study of Finland’s refusal to bow to Soviet demands in 1940, a refusal that triggered the Russo-Finnish war. But I kept thinking about the United States and Serbia in 1998-99 and the United States and Iran today. In the latter case, we have issued demands that we think are actually quite reasonable, and we’ve also said we will provide some positive benefits if we get a deal. But what if Iran is still worried that we really do have more ambitious goals (such as regime change) and that we will take advantage of any concessions they might make and up our demands later? If that is their view, then making relatively modest demands and offering generous incentives may not work. Paradoxically, his paper implies that we might have a better chance of cutting a deal with Iran if our position in the region were somewhat weaker, because Tehran would be less worried about the long-term implications of giving up its nuclear program. It also implies that great powers like the United States have to think about how they can provide credible reassurances to weak states, as a way of making them more willing to cut a deal.
I've oversimplified both these papers considerably; nonetheless, it was reassuring to see several scholarly projects that are directly relevant to current policy issues. If you know the ISA, you'll know this is not something one can always count on at these meetings.
Tomorrow’s highlight: a panel offering a posthumous award to Samuel Huntington for his contributions to international studies. It is a shame that Sam won’t be here to receive it himself, though I’m sure he would have been embarrassed by all the fuss.
Stephen M. Walt is the Robert and Renée Belfer professor of international relations at Harvard University.