There are many ways of evaluating and ranking philosophy journals. This post details some of the most common metrics used to rank philosophy journals and explains how to use them to assess journal quality. Rankings covered include SJR, SNIP, % Cited, CiteScore, Impact Factor, H-Index (Google Scholar rankings), acceptance rates, number of submissions, the Leiter philosophy journal rankings, and other academic polls.

SJR

The SCImago Journal Rank indicator (SJR) is a commonly used citation-based journal ranking metric. The key feature of the SJR is that not all citations are given equal weighting; citations from highly ranked journals are worth more than those from lower-ranked journals. SCImago describes the metric as 'a measure of journal's impact, influence or prestige. It expresses the average number of weighted citations received in the selected year by the documents published in the journal in the three previous years.' You can find all SJR scores here and you can read more about the methodology here.

SNIP

Leiden University's Centre for Science and Technology Studies (CWTS) offers a source normalized impact per paper (SNIP) metric for journal ranking. This citation-based metric aims to allow for more accurate inter-subject comparisons by accounting for the different citation practices across disciplines. CWTS describes the metric as measuring 'the average citation impact of the publications of a journal. Unlike the well-known journal impact factor, SNIP corrects for differences in citation practices between scientific fields, thereby allowing for more accurate between-field comparisons of citation impact.' You can find all SNIP scores here and you can read more about the methodology here.

Impact Factors and CiteScore

The Impact Factor (IF) measures the average number of citations received by articles published in the journal over the last two years. To work it out, the number of citations received in a given year by papers published in the previous two years is divided by the total number of citeable papers published in those two years. For example, if papers a journal published in 2022 and 2023 received 300 citations in 2024, and the journal published 150 citeable papers across those two years, its 2024 Impact Factor would be 300 / 150 = 2.0. Similarly, the 5-year Impact Factor offers the same metric calculated over the previous five years. This is particularly helpful for disciplines where the publication process is slower, as it gives papers more time to make an impact. Elsevier's CiteScore is a similar metric but is calculated over the previous four years.

% Cited

SCOPUS data tracks the percentage of papers published in a journal that received at least one citation in the three years following publication. Unlike the SJR, SNIP and CiteScore, which are built on average citations per paper, the % Cited offers insight into what proportion of papers published by the journal go on to have any registered impact. You can find the % cited for all journals indexed in SCOPUS here.

H Index

Google Scholar ranks journals based on the H-5 index. The H-5 is the largest number h such that h articles published in the last five years have at least h citations each. Similarly, SJR tracks a journal's H index across all years that SJR has indexed (2010 - present). Because a journal's H index is capped by the number of papers it publishes, journals that publish a lot tend to do well in these rankings. For example, the Google Scholar top-ranked journal in philosophy (as of Jan 2024), Synthese, has an H-5 of 54 and published 2,778 papers in the last five years. The second-ranked journal, Nous, with an H-5 of 39, published 245 papers in the same period.
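To make the arithmetic behind the Impact Factor and the H-5 index concrete, here is a minimal sketch in Python. The citation counts are invented rather than drawn from any real journal, the function names are illustrative, and the real providers apply indexing rules (for example, which document types count as citeable) that this ignores.

```python
# Illustrative only: invented citation counts, simplified definitions.

def impact_factor(citations_to_prior_two_years: int, citeable_papers_prior_two_years: int) -> float:
    """Two-year Impact Factor: citations this year to papers from the
    previous two years, divided by citeable papers from those years."""
    return citations_to_prior_two_years / citeable_papers_prior_two_years

def h_index(citation_counts: list[int]) -> int:
    """Largest h such that h papers have at least h citations each.
    Applied to papers from the last five years, this is the H-5 index."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for i, c in enumerate(counts, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

print(impact_factor(300, 150))            # 2.0
print(h_index([10, 9, 7, 4, 4, 1, 0]))    # 4 (four papers with at least 4 citations)
```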
The Philosophical Review doesn't make the top 20, with an H-5 of 20, but it published only 40 papers in the last five years. You can find the Google Scholar rankings for the top 20 philosophy journals here and use their search function to find other journals.

Submission Statistics

One way of assessing journal quality is by acceptance rates. The thinking behind this is that standards for acceptance are higher at more selective journals. Whether the reasoning holds is up for debate, but given that acceptance rates are a popular metric for assessing journal quality, the rate is an important proxy for ranking journals. Rachel Herbert's 'Accept Me, Accept Me Not: What Do Journal Acceptance Rates Really Mean? [ICSR Perspectives]', in the International Center for the Study of Research Paper Series, offers a detailed overview of acceptance rates and their use as a measure of journal prestige.

Alternatively, the number of submissions a journal receives offers a way of assessing its perceived reputation. Here the thinking is that if more people are sending their papers to the journal, this is evidence that the journal is a desirable venue for publication. Many factors might influence the number of submissions a journal receives, so it is a blunt metric, but it still gives some indication of a journal's popularity within the academic community. The PJIP Operations Survey captures both of these statistics.

Intelligentsia Surveys

Leiter Rankings

Brian Leiter regularly runs polls to assess the 'best' generalist philosophy journals. His polls are broadly accepted as the de facto resource for gauging the opinions philosophers hold of the top generalist journals. His most recent 2022 poll was a pairwise comparison between 26 generalist journals. A pairwise comparison puts all the journals head-to-head in a series of votes, and the resulting score for each journal (the number in brackets) represents the probability of that journal winning if it were randomly compared to any other journal in the poll. The outcomes were as follows:
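(As an aside on the mechanics: the sketch below uses invented head-to-head vote counts for three fictional journals to show how pairwise votes can be turned into win-probability scores of this kind. It is not Leiter's data, and the polling platform he used may compute its scores differently.)

```python
# Illustrative only: invented head-to-head vote counts for three journals.
# wins[(a, b)] = (votes preferring a over b, votes preferring b over a).
journals = ["Journal A", "Journal B", "Journal C"]
wins = {
    ("Journal A", "Journal B"): (60, 40),
    ("Journal A", "Journal C"): (70, 30),
    ("Journal B", "Journal C"): (55, 45),
}

def pairwise_win_prob(j, k):
    """Estimated probability that j beats k in a head-to-head vote."""
    if (j, k) in wins:
        j_wins, k_wins = wins[(j, k)]
    else:
        k_wins, j_wins = wins[(k, j)]
    return j_wins / (j_wins + k_wins)

# A journal's overall score: its average chance of winning against a
# randomly chosen opponent from the rest of the field.
for j in journals:
    opponents = [k for k in journals if k != j]
    score = sum(pairwise_win_prob(j, k) for k in opponents) / len(opponents)
    print(f"{j}: {score:.2f}")
```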
de Bruin's Meta-Rankings

A 2023 paper titled 'Ranking philosophy journals: a meta-ranking and a new survey ranking' by Boudewijn de Bruin outlines several ways of ranking philosophy journals. The paper proposes a novel set of meta-rankings that aggregate many different rankings. As part of this study, de Bruin conducted a survey collecting information on perceived journal quality and awareness, the results of which are presented below.

Source: Table 5 in de Bruin, B. Ranking philosophy journals: a meta-ranking and a new survey ranking. Synthese 202, 188 (2023). Reused under CC BY 4.0 licence.
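To give a sense of what aggregating rankings involves, here is a minimal sketch of one simple approach: averaging each journal's normalized position across the rankings in which it appears. The journal names and positions are invented, and this is only an illustration; it is not necessarily the aggregation method de Bruin uses.

```python
# Illustrative rank aggregation with made-up positions. Each input ranking
# maps a journal to its position (1 = best); not every journal appears in
# every ranking.
rankings = {
    "SJR":    {"Journal A": 1, "Journal B": 3, "Journal C": 2},
    "H-5":    {"Journal A": 2, "Journal B": 1, "Journal C": 4, "Journal D": 3},
    "Survey": {"Journal A": 1, "Journal B": 2, "Journal D": 3},
}

def meta_ranking(rankings):
    """Average each journal's normalized position (0 = top, 1 = bottom)
    across the rankings that include it, then sort by that average."""
    scores = {}
    for ranking in rankings.values():
        n = len(ranking)
        for journal, position in ranking.items():
            # Normalize so rankings of different lengths are comparable.
            normalized = (position - 1) / (n - 1) if n > 1 else 0.0
            scores.setdefault(journal, []).append(normalized)
    averaged = {j: sum(v) / len(v) for j, v in scores.items()}
    return sorted(averaged.items(), key=lambda item: item[1])

for journal, score in meta_ranking(rankings):
    print(f"{journal}: {score:.2f}")
```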