A new way of ranking journals 2.0 – journal connectedness

A few weeks ago, I proposed that one could rank journals based on what percent of a journal’s authors have also published in a top journal. I calculated this statistic for economics and for finance, using the top 5/top 3 journals as a reference point.

Of course, one does not have to give top journals such an out-sized influence. One beauty of this statistic is that it can be calculated for any pair of journals. That is, we can ask, what percent of authors that publish in journal X have also published in journal Y? This “journal connectedness” measure can also be used to infer quality. If you think journal X is good and you want to know whether Y or Z is better, you can see which of these two journals has a higher percentage of authors from X publishing there. Of course, with the additional flexibility of this ranking come more caveats. First, this metric is most relevant for comparing journals from the same field or general-interest journals. If X and Y are development journals and Z is a theory journal, then this metric will not be very informative. Additionally, it’s helpful to be sure that both Y and Z are worse than X. Otherwise, a low percentage in Z may just reflect more competition.
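
Concretely, the metric can be sketched in a few lines, assuming a simple table of (author, journal) publication pairs; the data and column names here are illustrative, not Academic Sequitur's actual schema:

```python
import pandas as pd

# Toy publication records: one row per (author, journal) pair
pubs = pd.DataFrame({
    "author":  ["a1", "a1", "a2", "a2", "a3", "a4"],
    "journal": ["X",  "Y",  "X",  "Z",  "X",  "Y"],
})

# Set of distinct authors for each journal
authors = pubs.groupby("journal")["author"].agg(set)

def connectedness(x, y):
    """Percent of journal x's authors who have also published in journal y."""
    return 100 * len(authors[x] & authors[y]) / len(authors[x])

print(connectedness("X", "Y"))  # roughly 33.3: one of X's three authors also appears in Y
```

Note that `connectedness("X", "Y")` and `connectedness("Y", "X")` generally differ, since the two percentages have different denominators.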

With those caveats out of the way, I again used Academic Sequitur's database and calculated this connectedness measure for 52 economics journals, using all articles since 2010. Posting the full matrix as data would be overkill (here's a csv if you're interested though), so I made a heat map. The square colors reflect what percent of authors who published in journal X have also published in journal Y. I omitted observations where X=Y to maximize the relevance of the scale.
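
For the curious, here is a self-contained sketch of how such a matrix might be assembled before plotting, with the X=Y diagonal masked as in the post; the toy data are made up for illustration:

```python
import numpy as np
import pandas as pd

# Toy publication records: one row per (author, journal) pair
pubs = pd.DataFrame({
    "author":  ["a1", "a1", "a2", "a2", "a3"],
    "journal": ["X",  "Y",  "X",  "Z",  "X"],
})
authors = pubs.groupby("journal")["author"].agg(set)
journals = sorted(authors.index)

# Cell (X, Y): percent of X's authors who also appear in Y
vals = np.array(
    [[100 * len(authors[x] & authors[y]) / len(authors[x]) for y in journals]
     for x in journals],
    dtype=float,
)
np.fill_diagonal(vals, np.nan)  # omit X=Y cells from the color scale

matrix = pd.DataFrame(vals, index=journals, columns=journals)
print(matrix.round(1))
```

The resulting frame could then be handed to any standard heat-map plotting routine.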

A few interesting patterns emerge. First, the overall percentages are generally low, mostly under 10 percent. The median value in the plot above is 3 percent and the average is 4.3 percent, but only 361 out of 2,652 squares are <0.5 percent. That means that a typical journal’s authors’ articles are dispersed across other journals rather than concentrated in some other journal. This makes sense if the typical journal is very disciplinary or if there are many equal-quality journals (eyeballing the raw matrix, it seems like a bit of both is going on, but I’ll let you explore that for yourself).

There are some notable exceptions. For example, 41% of those who have published in JAERE have published in JEEM, 54% of those who published in Theoretical Economics have published in JET, and 35% of those who have published in Quantitative Economics have published in the Journal of Econometrics. These relationships are highly asymmetric: only 13% of those who have published in JEEM have published in JAERE, only 16% of those who have published in JET have published in Theoretical Economics, and only 4% of those who have published in the Journal of Econometrics have published in Quantitative Economics.

There is also another important statistic contained in this map: horizontal lines with many green and light blue squares indicate journals that people seem to be systematically attracted to across the board. And then there’s that green cluster at the bottom left, with some yellows thrown in. Which journals are these?

I had the benefit of knowing what the data looked like before I made these heat maps, so I deliberately assigned IDs 1-5 to the top 5 journals (the rest are in alphabetical order). So one pattern this exercise reveals is that authors from across the board are flocking to the top 5s (an alternative interpretation is that people with top 5s are dominating other journals' publications). And people who publish in a top 5 tend to publish in other top 5s – that's the bottom left corner. In fact, if you omitted the top 5s, as the next graph does, the picture would look a lot less colorful.

But even without the top 5, we see some prominent light blue/green horizontal lines, indicating “attractive” journals. The most line-like of these are: Journal of Public Economics, Journal of the European Economic Association, Review of Economics and Statistics, Economics Letters, and JEBO. Although JEBO was a bit surprising to me, overall it looks like this giant correlation matrix can be used to identify good general-interest journals. By contrast, the AEJs don’t show the same general attractiveness.

Finally, this matrix illustrates why Academic Sequitur is so useful. Most authors’ articles are published in more than just a few journals. Thus, to really follow someone’s work, one needs to either constantly check their webpage/Google Scholar profile, go to lots of conferences, or subscribe to many journals’ ToCs and filter them for relevant articles. Some of these strategies are perfectly feasible if one wants to follow just a few people. But most of us can think of way more people than that whose work we’re interested in. Personally, I follow 132 authors (here’s a list if you’re interested), and I’m sure I’ll be continuing to add to this list. Without an information aggregator, this would be a daunting task, but Academic Sequitur makes it easy. Self-promotion over!

If you think of anything else that can be gleaned from this matrix, please comment.

Journal Rankings: Extensions and Robustness Checks

My recent post on a new way of ranking journals using data from Academic Sequitur (which you should check out, by the way!) was more popular than I expected. People pointed out important theory and macro journals I had missed (I’m clearly an applied micro person). So I added more journals. They also pointed out that making the top 5 the reference journals may mean that the ranking reflects who is in “the club” with these journals more than anything else. One thing I will do in the future is make a giant matrix of pairwise journal relationships, so if you don’t like using the top 5 as a reference, you can use a different journal. But for now, what I did is calculate what percent of authors in each journal have exactly one top 5 publication. This could plausibly make the ranking noisier (maybe these people just got lucky), but it should reduce the influence of those who live in the top 5 club (as opposed to guests!).
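
As a sketch, the "exactly one top 5" filter might look like this; the journal codes, column names, and toy data are all assumptions for illustration:

```python
import pandas as pd

# Illustrative codes for the top 5 journals
TOP5 = {"AER", "QJE", "JPE", "Econometrica", "ReStud"}

# Toy publication records: one row per (author, journal) pair
pubs = pd.DataFrame({
    "author":  ["a1",  "a1",  "a1",  "a2",  "a2",  "a3"],
    "journal": ["AER", "QJE", "JDE", "AER", "JDE", "JDE"],
})

# Count each author's top-5 publications
top5_counts = pubs[pubs["journal"].isin(TOP5)].groupby("author").size()

# Keep only authors with exactly one top-5 article ("guests", not "club members")
one_top5 = set(top5_counts[top5_counts == 1].index)

# Percent of a given journal's authors who clear the filter
jde_authors = set(pubs.loc[pubs["journal"] == "JDE", "author"])
pct = 100 * len(jde_authors & one_top5) / len(jde_authors)
print(pct)  # roughly 33.3: a2 qualifies, a1 has two top 5s, a3 has none
```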

Finally, someone pointed out that because AER and the AEJs are linked, using publication in AER as a metric for the quality of the AEJs may be misleading. So I calculated the percent publishing in the top 4, i.e., the top 5 excluding the AER. This metric is what the data below are sorted by.

So without any further ado, I give you the expanded and revised rankings! First, the “top 10”.


One thing worth pointing out here is that Quantitative Economics is linked to Econometrica, as is also evident from the high proportion of its authors who have published there. Theoretical Economics and Journal of Economic Theory were not originally in the set of journals I ranked, but they score high both with and without counting the AER. Overall, the rankings get re-shuffled a bit, but given how numerically close the original percentages were, I would call this broadly similar.

Next ten journals:

Next ten:

And here’s the final set:

How do the rankings with and without AER compare? Four journals rise by 5+ spots when AER is excluded: Quantitative Economics, Journal of Mathematical Economics, Review of Economic Dynamics, and Quantitative Marketing and Economics. And four journals fall by 5+ spots: AEJ: Micro, Journal of Human Resources, Journal of International Economics, and Journal of the Association of Environmental and Resource Economists (abbreviated as JAERE above). AEJ: Policy falls by four spots, AEJ: Macro falls by one spot, and AEJ: Applied stays in the same rank.

What if we only count authors who have just one top 5? That changes the rankings much more, actually, with 13 journals rising 5+ spots, including ReStat, JHR, JIE, JUE, and JPubEc. Nine journals fall by 5+ spots, including AEJ: Applied, JEEA, RAND, JEL, and IER. To me, that suggests that who we count matters much more for the ranking than which journals we count.

Bottom line is: stay tuned (you can subscribe to be notified when new posts appear on the bottom right). I plan to play around with these rankings a lot more in the next few months to figure out if/how they can be useful! If you want to play around with the data yourself, the full spreadsheet is here (let me know what you find).

Ranking finance journals

Last week, I tried out a new way of “ranking” economics journals, based on the percent of 2018-2019 authors who have also published in one of the top 5 economics journals anytime since 2000. This week, I decided to take a look at finance journals (political science is next in line, as well as some extensions and robustness checks for econ journals).

The top 3 finance journals are generally agreed to be Journal of Finance, Journal of Financial Economics, and Review of Financial Studies. How do other finance journals stack up against them according to this metric? For fun and fairness, I threw the top 5 econ journals into the mix, as well as Management Science.

Here are the “top 10” journals according to this metric (not counting the reference top 3, of course). The first numerical column gives the percent of authors that published in the journal specified in the row in 2018-2019 who have also published an article in any of the top 3 finance journals at some point since 2000. The next three columns give journal-specific percentages.

Because this is not my field, I have less to say about the reasonableness of this ranking, but perhaps finance readers can comment on whether this lines up with their perception of quality. Compared to the econ rankings, the raw percentage differences between journals appear larger, at least at the very top. And the overall frequency of publishing in the top 3 is lower. Management Science makes the top 5, but the top econ journals do not (JPE and ReStud do make the top 10). To me, this makes sense, since it’s pretty clear that this ranking picks up connectedness as well as quality. Anecdotally, finance departments seem to value Management Science and the top 5 econ journals no more, and perhaps less, than the top 3 finance journals.

Here are the rest of the journals I ranked (as before, if a journal is not on the list, it doesn’t mean it’s ranked lower, it means I didn’t rank it). Here, we can clearly see that not many people who publish in JF, JFE, and RFS publish in AER, QJE, or Econometrica.

If there’s another journal you’d like to see ranked in reference to the top 3 finance ones, please comment!

How good will AER: Insights be?

American Economic Review: Insights is a new journal by the American Economic Association. It’s intended to replace the short paper section of the AER, and the first issue will appear this summer. Naturally, I’ve had quite a few discussions with colleagues about its likely quality: will AER: Insights be a top-tier journal like the AER, somewhere in the middle of the pack, or a flop?

Obviously, many factors affect the success of a journal. But how it starts out surely matters. Publish some amazing articles and become the journal at the top of people’s minds when they think about where to submit, which will in turn make it easier to attract high-quality articles. Publish some questionable research, and risk aversion will kick in, prompting people to submit elsewhere first and leaving you mostly with articles that other journals decided against publishing.

So I again dove into the database of Academic Sequitur. We track forthcoming articles, so even though the first issue of AER: Insights has not been published yet, we have 26 to-be-published articles in our database (the first of which were added in November of 2018, by the way!). The question I asked was simple: what percent of authors whose work is scheduled to appear in AER: Insights have published a top 5 article any time since 2000?

The answer is a whopping 67% (61% if you exclude AER articles). 58% have published in the AER, 23% have published in Econometrica, 38% have published in QJE, and 39% have published in ReStud. The overall percentage is non-trivially higher than that of any other journal except for Journal of Economic Literature.

Perhaps these numbers are not surprising to you. In fact, it may very well be a strategy that AER: Insights is consciously employing to gain early traction. And these statistics could signal that it’s difficult to get published there unless you’re well-known, at least at this point (though we don’t know what the distribution of submissions looks like). But more information is better, and this certainly makes me more likely to try for AER: Insights in the future!