How to address reviewer comments

You got a revise-and-resubmit request from a journal – congratulations! What now? Here are step-by-step suggestions to maximize the probability of converting that R&R into a publication and minimize the number of revision rounds.

Preparing to write the reply

  • Copy-paste the reviewer comments into Word, LaTeX, or whatever you write in. Format the comments to distinguish them from the reply you will write (e.g. make the comments italic or bold). If the editor provided specific comments, do this with their comments as well.
  • Write what you’re going to do in response to each comment following the comment and make a to-do list based on that. Now you’ve got a super-rough draft of the reply!

Deciding how/if to address a comment

  • Start with the assumption that each comment is valid: the reviewer is innocent until proven guilty! In my experience as both author and reviewer, it’s not uncommon for authors to glance over a comment and dismissively conclude that the reviewer has misunderstood something about the paper. Consider the possibility that it is you who has misunderstood the comment and re-read it carefully. In general, carefully re-read each comment at the beginning, middle, and end of the revision process to make sure you haven’t misunderstood the point. If, after carefully considering a comment, you still think that it reflects a misunderstanding of the paper, try to figure out why the reviewer misunderstood it. Sure, reviewers can be careless, but just as often authors think something in the paper is clear when it is not. In that case, carefully edit the part of the paper that may have confused the reviewer and make your reply to the comment something along the lines of “We apologize for the confusion. In fact, [EXPLAIN]. We have now revised lines/sections/pages X-Y to make this clearer.” Sometimes this is as simple as moving something from a footnote at the back of the paper to earlier in the paper or adding a footnote about something in the supplementary materials.
  • Remember that the editor is the one ultimately in charge. If an editor tells you not to address a particular comment, don’t address it. If an editor highlights a comment as specifically important, pay particular attention to it. If an editor has not said anything about a particular comment, assume that they want you to address it.
  • Address every comment unless it is impossible or the editor told you not to do it. Assume that the reviewers are acting in good faith and giving you feedback to improve your paper. Note that “addressing” a comment does not always mean you do exactly what the comment says. For example, if a reviewer says that “The analysis sample should be limited to X” and you think there are good objective reasons to keep your current sample, you can address the comment by showing results with sample X in the reply to the reviewer and clearly explaining why you believe it’s not the best sample to focus on.
  • Err on the side of comprehensiveness. There are no page limits on reviewer replies (though this is not an invitation to overwhelm the reviewers by making the reply unnecessarily long!), so if you decide that some exercise suggested by the reviewers isn’t important enough for the manuscript, go ahead and include the results of the exercise in the reply. A common phrase in my replies has been “To keep the length of the manuscript manageable, we have decided to not include this exercise in the paper,” but it always follows a reply in which the results are shown to the reviewers!
  • Sometimes reviewer and editor requests can be burdensome, e.g., if you’re asked to run another experiment or collect more data. Ultimately, it’s your paper and your career, so you decide where the limits are, but keep in mind that by choosing to not address a particular comment, you weakly increase the risk of rejection.
  • If you’re in doubt about what a comment is asking you to do even after reading it carefully, ask a senior colleague to take a look.

You should not view the editor as someone you can email back and forth with whenever you want (they’re busy!), but there are times when it is appropriate to email the editor before completing your revision:

  • When reasonable reviewer suggestions contradict each other, but the editor did not clarify which direction you should take.
  • When a comment was highlighted as particularly important to address by the editor, but you don’t view it as feasible. Better to explain to the editor why you can’t do it and ask him or her up front if it’s a deal-breaker so you don’t spend time on all the other revisions only to be rejected.
  • When the required revisions are substantial, the suggestions are vague, and you want to run your revision plan by the editor before executing it.

Finally, some specific suggestions on how to address reviewers

  • Thank the reviewer at the beginning of your reply. They read your paper and provided comments!
  • Start the reply to each reviewer by outlining the key changes you have made in response to the editor and the other reviewers. Also note any major changes you made during the revision that didn’t stem from reviewer comments (e.g., because you thought of other beneficial changes yourself). Don’t expect reviewers to read the other reports and your replies to them (though they often do). Outlining changes made in response to the editor and other reviewers provides, among other things, insurance: if a reviewer dislikes a change, they are much less likely to go after you if the change was made in response to another reviewer’s suggestion.
  • Make it easy for reviewers and the editor to see exactly what was changed. Aim to minimize the number of times the reviewer has to flip back and forth between your reply and the paper. Put copies of new tables/figures into the reply. Always note the page/line numbers that have changed. If the change is short (e.g., you added or revised a couple of sentences or added a new paragraph), paste the new language into the response document. (Don’t paste entire sections or multiple paragraphs that have been edited though.)
  • If you decided that a comment is not feasible to address, provide an objective explanation as to why. Don’t just write something along the lines of “We decided it would be better to not implement this suggestion.” without an explanation.
  • Avoid sounding defensive. For example, instead of writing “Although this issue was essentially addressed in Table 1, we have now added additional analysis to our supplementary materials”, write simply “We have now added additional analysis to our supplementary materials.”
  • Be professional no matter what. In many cases, the reviewers know who you are, and you may be interacting with them for years to come (without knowing it!). The editor definitely knows who you are, and unprofessional behavior can cost you. Even if the reviewers are rude, do not stoop to their level.
  • Try to make the responses as short as possible (but not shorter). This means editing them like you might a manuscript.
  • Remind the editor that you’re open to alternative ways of implementing the suggestions. If something you did as part of the revision is only in the replies, note that you chose not to put it in the paper but would be open to doing so should the editor think it desirable. If you cut something to stay within the page/word limit, note that you’d be open to bringing it back if the editor prefers you to cut something else. You don’t need to state this for literally every single change, but a broad statement to that effect in the editor reply can only help you.

I know this sounds like a lot, but once you’ve used this approach a few times in R&Rs, it gets easier. Good luck!

Who is publishing in AER: Insights? An update

Over a year ago, I wrote a post tabulating the share of AER: Insights authors who have also published in a top-5 journal.* (The answer was 67%, significantly higher than for most other journals, except those that generally solicit papers, like the Journal of Economic Literature.)

Now that AER: Insights is in its second year of publishing and has 60 forthcoming/published articles, I decided to revisit this question, again using Academic Sequitur data. The graph below shows the percent of authors who (a) have published or are forthcoming in a given journal in 2018-2020 and (b) have had at least one top-5 article published since 2000. The journals shown are the top ten journals by that metric.
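For readers curious about the mechanics, here is a minimal pandas sketch of the kind of computation involved. The articles dataframe (one row per author-article pair, with journal, year, and author_id columns) and the function itself are illustrative placeholders, not the actual Academic Sequitur data structure or code.

```python
import pandas as pd

TOP5 = {
    "American Economic Review", "Econometrica", "Journal of Political Economy",
    "Quarterly Journal of Economics", "Review of Economic Studies",
}

def top5_author_share(articles: pd.DataFrame, journal: str) -> float:
    """Share of a journal's 2018-2020 authors with at least one top-5 article since 2000."""
    # Everyone who authored an article in `journal` published/forthcoming in 2018-2020
    recent = articles[(articles["journal"] == journal) & articles["year"].between(2018, 2020)]
    journal_authors = set(recent["author_id"])

    # Everyone with at least one top-5 article since 2000
    top5_authors = set(
        articles[articles["journal"].isin(TOP5) & (articles["year"] >= 2000)]["author_id"]
    )

    return len(journal_authors & top5_authors) / len(journal_authors)

# e.g., top5_author_share(articles, "American Economic Review: Insights")
```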

With a score of 66%, AER: Insights still has the highest share of top-5 authors among journals where submissions are not generally solicited.** The next-highest journal, Theoretical Economics, is five percentage points behind. (There is some indication that the share for AER: Insights is coming down: for articles accepted in 2020, the top-5 share was “only” 60%.)

What if we condition on having two or more top-5 publications? That actually causes AER: Insights to move up in the ranking, overtaking Brookings Papers on Economic Activity.

Whether this pattern exists because AER: Insights is extremely selective or because less-established scholars are reluctant to submit their work to a new-ish journal or for some other reason is impossible to know without submission data. But no matter how you look at it, the group currently publishing in AER: Insights is quite elite.

*Top 5 is defined as American Economic Review, Econometrica, Journal of Political Economy, Quarterly Journal of Economics, and Review of Economic Studies.

**AER: Insights would be even higher-ranked by this metric (#3) if we ignored top-5 publications in American Economic Review. Therefore, this pattern is not driven by the fact that both journals are published by the AEA.

What publishes in top-5 economics journals?

Part I: agricultural economics, lab experiments, field experiments & economics of education

Most of us have a sense that it is more difficult to get certain topics published in the top 5 economics journals (American Economic Review, Econometrica, Journal of Political Economy, Quarterly Journal of Economics, and Review of Economic Studies), but there is not much hard data on this. And if a particular topic appears infrequently in top journals, it may simply be because it’s a relatively rare topic overall.

To get more evidence on this issue, I used Academic Sequitur data, which covers the majority of widely-read journals in economics. The dataset I used contains articles from 139 economics journals and spans the years 2000-2019. On average, 6 percent of the papers in the dataset were published in a top 5 journal.

I classified papers into topics based on the presence of certain keywords in the abstract and title.* I chose the keywords carefully, aiming both to minimize the share of irrelevant articles and to minimize the omission of relevant ones. While there is certainly some measurement error, it should not bias the results. (That said, readers should think of this as a “fun-level” analysis rather than a rigorously peer-reviewed one.)
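As a rough illustration of how this kind of keyword tagging can work, here is a pandas sketch using small subsets of the keyword lists from the endnotes. The column names title and abstract are assumptions; the actual classification may have been implemented differently.

```python
import pandas as pd

# Illustrative subsets of the keyword lists in the endnotes; partial,
# case-insensitive substring matches count (so "farm" also catches "farming").
TOPIC_KEYWORDS = {
    "agriculture": ["farm", "crop insurance", "crop yield", "agribusiness", "soil quality"],
    "field_experiment": ["field experiment", "experiment in the field"],
    "lab_experiment": ["lab experiment", "laboratory experiment", "experimental data"],
    "education": ["returns to education", "teacher", "kindergarten", "school district"],
}

def tag_topics(df: pd.DataFrame) -> pd.DataFrame:
    """Add a 0/1 indicator column per topic; an article may fall into several topics."""
    text = (df["title"].fillna("") + " " + df["abstract"].fillna("")).str.lower()
    for topic, keywords in TOPIC_KEYWORDS.items():
        pattern = "|".join(keywords)  # match any keyword as a substring
        df[topic] = text.str.contains(pattern, regex=True).astype(int)
    return df
```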

I chose topics based on suggestions in response to an earlier Tweet of mine. To keep things manageable, I’m going to focus on a few topics at a time. To start off, I looked at agricultural economics (5.3% of articles in the dataset), field experiments (1.0% of articles), lab experiments (1.9% of articles), and education (1.8% of articles). I chose these to have some topic diversity and also because these topics were relatively easy to identify.** I then ran a simple OLS regression of a “top 5” indicator on each topic indicator (separately).***
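Concretely, each of those regressions amounts to something like the statsmodels sketch below, where df is assumed to contain a 0/1 top5 indicator along with the topic indicators constructed above (again, an illustration rather than the original code). Because each regressor is a single 0/1 indicator, the coefficient is simply the difference in top-5 publication rates between papers inside and outside the topic.

```python
import statsmodels.formula.api as smf

# df: one row per article, with a 0/1 `top5` column and 0/1 topic indicators
for topic in ["agriculture", "field_experiment", "lab_experiment", "education"]:
    fit = smf.ols(f"top5 ~ {topic}", data=df).fit(cov_type="HC1")  # heteroskedasticity-robust SEs
    print(f"{topic}: coefficient = {fit.params[topic]:.3f} (se = {fit.bse[topic]:.3f})")
```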

The results are plotted in the graph below. Field-experiment papers are much more likely than other papers to appear in a top 5 journal rather than in one of the other 134 journals (about 5 percentage points more likely!), while lab-experiment papers are much less likely. Education doesn’t seem to be favored one way or the other, while agriculture is penalized about as much as field experiments are rewarded. Moral of the story: if you want to publish an ag paper in a top 5, make it a field experiment!

Now you might be saying, “I can’t even name 139 economics journals, so maybe this isn’t the relevant sample on which to run this regression.” Fair point (though see here for a way, way longer list of econ journals). To address this, I restricted the set of journals to the 20 best-known general-interest journals—including the top 5—and regenerated the results.**** With the exception of lab experiments, the picture now looks quite different: both field experiments and education research are penalized by the top 5 journals, but agriculture is not.

Combining the two sets of results, we can conclude that the top 5 penalize agricultural economics research, but so do the other good general-interest journals. The top 5 journals also penalize field experiments relative to other good general-interest journals, but top general-interest journals as a whole reward field experiments relative to other journals. Finally, the top 5 penalize education relative to other good general-interest journals, but not relative to the field as a whole.

The second set of results is obviously sensitive to the set of journals considered. If I were to add field journals like the American Journal of Agricultural Economics, things would again look much worse for ag. And how much worse they look for a particular topic depends on how many articles the field journal publishes. So I prefer the most inclusive set of journals, but I welcome suggestions about which set of journals to use in future analyses! Would also love to hear everyone’s thoughts on this exercise in general, so please leave a comment.

——————————————————————————————————————–

Endnotes

*I did not use JEL codes because many journals do not require or publish these and we therefore do not collect them. JEL codes are also easier to select strategically than the words in the title and abstract.

** An article falls into the category of agricultural economics if it contains any of the following words/phrases in the abstract or title (not case-sensitive, partial word matches count): “farm”, “crop insurance”, “crop yield”, “cash crop”, “crop production”, “crops production”, “meat processing”, “dairy processing”, “grain market”, “crop management”, “agribusiness”, “beef”, “poultry”, “hog price”, “cattle industry”, “rice cultivation”, “wheat cultivation”, “grain cultivation”, “grain yield”, “crop diversity”, “soil conditions”, “dairy sector”, “hectare”, “sugar mill”, “corn seed”, “soybean seed”, “maize production”, “soil quality”, “agricultural chemical use”, “forest”. Field experiment: “field experiment”, “experiment in the field”. Lab experiment: “lab experiment”, “laboratory experiment”, “experimental data”, “randomized subject”, “online experiment”. Education: “return to education”, “returns to education”, “college graduate”, “schooling complet”, “teacher”, “kindergarten”, “preschool”, “community college”, “academic achievement”, “academic performance”, “postsecondary”, “educational spending”, “student performance”, “student achievement”, “student outcome”, “student learning”, “higher education”, “educational choice”, “student academic progress”, “public education”, “school facilit”, “education system”, “school voucher”, “private school”, “school district”, “education intervention”. Articles may fall into multiple categories.

*** Standard errors are heteroskedasticity-robust.

**** The 15 additional journals are (in alphabetical order): American Economic Journal: Applied Economics, American Economic Journal: Economic Policy, American Economic Journal: Macroeconomics, American Economic Journal: Microeconomics, American Economic Review: Insights, Economic Journal, Economic Policy, Economica, European Economic Review, Journal of the European Economic Association, Oxford Economic Papers, Quantitative Economics, RAND Journal of Economics, Review of Economics and Statistics, Scandinavian Journal of Economics.

Political Science Journal Rankings

How do we judge how good a journal is? Ideally by the quality of the articles it publishes. But the best systematic way of quantifying quality we’ve come up with so far is citation-based rankings. And these are far from perfect, as a simple Google search will reveal (here’s one such article).

I’ve been using Academic Sequitur data to experiment with an alternative way of ranking journals. The basic idea is to calculate what percent of authors who published in journal X have also published in a top journal for that discipline (journals can also be ranked relative to every other journal, but the result is more difficult to understand). As you might imagine, this ranking is also not perfect, but it has yielded very reasonable results in economics (see here).

Now it’s time to try this ranking out in a field outside my own: political science. As reference points, I took three top political science journals: American Political Science Review (APSR), American Journal of Political Science (AJPS), and Journal of Politics (JOP). I then calculated what percent of authors who published in each of 20 other journals since 2018 have also published a top-3 article at any point since 2000.
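Mechanically, this mirrors the AER: Insights computation sketched above, just with a top-3 reference set and the results sorted into a ranking. As before, the articles dataframe and its columns are illustrative placeholders rather than the actual Academic Sequitur schema.

```python
import pandas as pd

TOP3 = {
    "American Political Science Review",
    "American Journal of Political Science",
    "Journal of Politics",
}

def rank_by_top3_share(articles: pd.DataFrame, journals: list[str]) -> pd.Series:
    """Rank journals by the share of their 2018+ authors with a top-3 article since 2000."""
    top3_authors = set(
        articles[articles["journal"].isin(TOP3) & (articles["year"] >= 2000)]["author_id"]
    )
    shares = {}
    for j in journals:
        authors = set(
            articles[(articles["journal"] == j) & (articles["year"] >= 2018)]["author_id"]
        )
        shares[j] = len(authors & top3_authors) / len(authors) if authors else float("nan")
    return pd.Series(shares).sort_values(ascending=False)
```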

Here are the top 10 journals, according to this ranking (the above-mentioned stat is in the first column).


Quarterly Journal of Political Science and International Organization come out as the top 2. This is noteworthy because the alternative lists of top political science journals that were suggested to me included these two journals! Political Analysis comes in close behind, followed by a group of 5 journals with very similar percentages overall (suggesting similar quality).

Below is the next set of ten. Since this is not my research area, I’m hoping you can tell me in the comments whether these rankings are reasonable or not! Happy publishing.

Finally, here’s an Excel version of the full table, in case you want to re-sort it by another column. Note that if a journal is not listed, that means I did not rank it. Feel free to ask about other journals in the comments.