EDITORIAL

The most influential journals: Impact Factor and Eigenfactor

Alan Fersht¹
Medical Research Council Centre for Protein Engineering, Cambridge CB2 0QH, United Kingdom
¹E-mail: arf25@cam.ac.uk.

Progress in science is driven by the publication of novel ideas and experiments, most usually in peer-reviewed journals, but nowadays increasingly just on the internet. We all have our own ideas of which are the most influential journals, but is there a simple statistical metric of the influence of a journal? Most scientists would immediately say Impact Factor (IF), which is published online in Journal Citation Reports® as part of the ISI Web of Knowledge℠ (www.thomsonreuters.com/products_services/scientific/Journal_Citation_Reports). The IF is the average number of citations in a year given to those papers in a journal published in the previous 2 years. But what, for example, is the most influential of the following 3 journals: A, which publishes just 1 paper a year and has a stellar IF of 100; B, which publishes 1,000,000 papers per year and has a dismal IF of 0.1 but 100,000 citations; or C, which publishes 5,000 papers a year with an IF of 10? Unless there is a very odd distribution of citations in B, or A has a paradigm-shifting paper like the Watson and Crick DNA structure, C is likely to be the most influential journal. Clearly, neither IF nor total number of citations is, per se, the metric of the overall influence of a journal.

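As a reading aid, the arithmetic behind the IF definition above can be sketched in a few lines of Python; the journal figures below loosely mirror the hypothetical journals A, B, and C and are illustrative assumptions, not data from the figure.

    # Minimal sketch of the IF arithmetic described above (illustrative numbers only).
    def impact_factor(citations_this_year, papers_previous_two_years):
        # IF = citations received this year by the papers a journal published
        # in the previous 2 years, divided by the number of those papers.
        return citations_this_year / papers_previous_two_years

    # Hypothetical journals loosely mirroring A, B, and C in the text.
    print(impact_factor(200, 2))              # A: 1 paper/year, IF ~ 100
    print(impact_factor(200_000, 2_000_000))  # B: 1,000,000 papers/year, IF 0.1
    print(impact_factor(100_000, 10_000))     # C: 5,000 papers/year, IF 10
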
Bibliometricians have introduced various scales for ranking journals, some based on publications alone and some also on usage, including internet usage analyzed with social-network methods. Bollen et al. (1) recently concluded that no single indicator adequately measures impact and that the IF lies at the periphery of the 39 scales they analyzed. But there is a new parameter, the Eigenfactor™, which attempts to rate the influence of journals (www.eigenfactor.org). The Eigenfactor™ ranks journals in a manner similar to that used by Google for ranking the importance of Web sites in a search. To quote from www.eigenfactor.org/methods.htm:

    The Eigenfactor™ algorithm corresponds to a simple model of research in which readers follow chains of citations as they move from journal to journal. Imagine that a researcher goes to the library and selects a journal article at random. After reading the article, the researcher selects at random one of the citations from the article. She then proceeds to the journal that was cited, reads a random article there, and selects a citation to direct her to her next journal volume. The researcher does this ad infinitum.

Fig. 1. Plot of the 2007 Eigenfactor rating against total number of citations listed in the Journal Citation Reports®.

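The random-reader model quoted above amounts to computing the stationary distribution of a random walk on the journal-to-journal citation graph, much as PageRank does for Web pages. The Python sketch below illustrates only that bare idea with a made-up 3-journal citation matrix; it is not the published Eigenfactor algorithm, which adds refinements such as damping (teleportation) and normalization by the number of articles each journal publishes.

    import numpy as np

    # citations[i, j] = citations from journal j to journal i
    # (tiny made-up example; real data would come from Journal Citation Reports).
    citations = np.array([[0.0, 5.0, 3.0],
                          [4.0, 0.0, 2.0],
                          [1.0, 2.0, 0.0]])

    # Column-normalize: probability that a reader currently in journal j
    # follows a citation to journal i.
    P = citations / citations.sum(axis=0)

    # Power iteration approximates the long-run fraction of time the
    # random reader spends at each journal (the "ad infinitum" walk above).
    v = np.full(3, 1.0 / 3.0)
    for _ in range(200):
        v = P @ v
        v /= v.sum()

    print(v)  # relative influence of the three hypothetical journals

In this reading, a journal scores highly when it is cited often by journals that are themselves visited often, which is why the Eigenfactor and raw citation counts can correlate strongly yet still diverge for particular journals.
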
The Eigenfactor™ is now listed by Journal Citation Reports®. In practice, there is a strong correlation between Eigenfactors and the total number of citations received by a journal (2). A plot of the 2007 Eigenfactors for the top 200 cited journals against the total number of citations shows some startling results (Fig. 1). Three journals have far and away the most overall influence on science: Nature, PNAS, and Science, closely followed by the Journal of Biological Chemistry. So, publish in PNAS with the full knowledge that you are contributing to one of the most influential drivers of scientific progress.

The terrible legacy of IF is that it is being used to evaluate scientists rather than journals, which has become of increasing concern to many of us. Judgment of individuals is, of course, best done by in-depth analysis by expert scholars in the subject area. But some bureaucrats want a simple metric. My experience of serving on international review committees is that more notice is taken of IF when the committee members do not have the knowledge to evaluate the science independently.

An extreme example of such behavior is an institute in the heart of the European Union that evaluates papers from its staff by assigning a weighting factor of 0 to all papers published in journals with IF < 5 and only a small one for 5 < IF < 10. So, publishing in the Journal of Molecular Biology counts for naught, despite its being at the top for areas such as protein folding.

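In code, that institute's rule amounts to something like the toy function below; the nonzero weight for the 5–10 band is an arbitrary placeholder, since the editorial says only that it is "small".

    # Sketch of the weighting scheme described above (placeholder values).
    def paper_weight(journal_if):
        if journal_if < 5:
            return 0.0   # e.g., Journal of Molecular Biology counts for naught
        elif journal_if < 10:
            return 0.2   # "just a small one" - placeholder, not a stated value
        else:
            return 1.0
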
All journals have a spread of citations, and even the best have some papers that are never cited, plus some fraudulent papers and some excruciatingly bad ones. So, it is ludicrous to judge an individual paper solely on the IF of the journal in which it is published.

Fortunately, PNAS has both a good IF and high reliability because of its access to so many expert National Academy of Sciences member–editors. If a paper has to be judged by a metric, then it should be by the citations to it and not to the journal. The least evil of the metrics for individual scientists is the h-index (3), which ranks the influence of a scientist by the number of citations to a significant number of his or her papers;
an h of 100 would mean that 100 of their publications have been cited at least 100 times each. In terms of a "usage" metric, Hirsch's h-index paper (3) is exceptional in its number of downloads (111,126 downloads versus 262 citations since it was published in November 2005).

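The h-index described above is straightforward to compute from a list of per-paper citation counts; here is a minimal Python sketch with illustrative numbers.

    # Minimal h-index computation (the definition paraphrased above).
    def h_index(citation_counts):
        # Largest h such that h papers have at least h citations each.
        counts = sorted(citation_counts, reverse=True)
        h = 0
        for rank, cites in enumerate(counts, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    print(h_index([10, 8, 5, 4, 3]))  # 4: four papers cited at least 4 times each
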
While new and emerging measures of scientific impact are being developed, it is important not to rely solely on one standard. After all, science is about progress, which is ultimately assessed by human judgment.

ACKNOWLEDGMENTS. I thank Philip Davis for pointing me toward the relevant literature.

1. Bollen J, Van de Sompel H, Hagberg A (2009) A principal component analysis of 39 scientific impact measures. e-Print archive, http://xxx.lanl.gov/abs/0902.2183.
2. Davis PM (2008) Eigenfactor: Does the principle of repeated improvement result in better estimates than raw citation counts? J Am Soc Inf Sci Technol 59:2186–2188.
3. Hirsch JE (2005) An index to quantify an individual's scientific research output. Proc Natl Acad Sci USA 102:16569–16572.

PNAS | April 28, 2009 | vol. 106 | no. 17 | 6883–6884 | www.pnas.org/cgi/doi/10.1073/pnas.0903307106