Peer review in sport and exercise science: What is it and what are the problems?

April 26, 2017

Professor Graeme L. Close and Professor Kevin Tipton

This week I decided to take a look at peer review. This is on the back of conversations with several practitioners who were not fully aware of the process and therefore of how less than credible papers can so easily slip into the literature. I am also delighted to say I have a co-author this week. Professor Kevin Tipton is not only a good friend of mine, but one of the most respected sport scientists in the world, and he shares many of my concerns with regards to the peer review process. I was therefore delighted when Kev agreed to pen this blog with me.

The perils of peer review

Love it or hate it, if you are going to publish your research you will have to engage in peer review. And perhaps just as importantly, if you are going to engage in evidence-based practice, it is important that you have some understanding of what peer review involves, the strengths of this approach, but perhaps most importantly its limitations. Having been a researcher publishing papers since 1999 (almost 100 papers to date and >23,000 publication reads), and having served for the last 3 years as a section editor for the European Journal of Sport Science, I feel qualified to give you some of my thoughts on the process. I must say I think there are some serious shortcomings in the system, and without change I fear for the future of sport science research. Over the last few weeks you may have seen me tweeting to ask how some papers "got through peer review" and expressing my serious concerns for our discipline. I therefore felt it would be useful to explain the process in more detail.

Above – My early research experiences in 1999 during my PhD. This paper, on free radicals and muscle soreness, was my first experience of peer review. I've been through the process about 100 times now and it does not get any easier.

 

What is peer review?

Quite simply, peer review is the cornerstone of maintaining academic rigour when it comes to the publication of original research and review articles in journals. Unlike writing books, when you read a journal article the idea is that you can be more confident in the data given that 2-3 “experts” in the field will have reviewed it prior to publication. With a book, all you need is a publisher willing to publish it and that is why there are some quite frankly horrendous sport nutrition books available that in all honesty should be filed in the fiction section of book shops.

 

How does peer review typically work?

OK, you have performed the research, written the manuscript, targeted a specific journal and then you are ready to upload the manuscript to the publisher’s submission site. You are about to place your blood, sweat and tears into the hands of the peer review system. You will (or at least should) have declared any conflicts of interest (COIs) when you submit the paper. A COI could be that you have received funding from a supplement company to perform the research, an honorarium to give advice to a company that will benefit from the publication of the paper, that you are the Editor-in-Chief of the journal in which your paper is published, or that you have a financial interest in the products on which the study is focused. I do not feel COIs make research less important; I just feel it is crucial that these COIs are clearly declared. In fact, without research funding from some companies, sport science research would never occur because, let’s be honest, major research funding is (correctly) targeted at disease prevention and cures rather than at how to make someone run faster.

When you submit a manuscript, many journals ask you to nominate a number of people you feel are suitably qualified to review your paper. Other journals do not ask for self-nomination of reviewers, for reasons I will come onto later. Once the journal receives the paper and it passes initial triage for formatting etc., the paper will be forwarded to the journal editor. This step really begins the peer review process. The following stages will then occur (this can vary slightly, or in fact a great deal, from journal to journal):

  1. The editor-in-chief will decide on the most suitable section editor to handle the paper. The section editor at this stage should perform a review of the manuscript and decide if it is suitable to be sent out for peer review. Personally, I do reject a few papers at this stage if there are any serious problems with the methodology or data interpretation. I do not think that all section editors do this, and there are good reasons for that which again I will come to later.
  2. The section editor will then send the paper to 2-3 appropriate reviewers. If the journal allows self-nomination of reviewers, one of these names often will be chosen along with one from the section editor’s own contacts. I am personally not sure of the exact rules on nominating your own reviewers, although I have always been trained that this should not be a close friend or anyone you have published with. This, however, will vary from journal to journal. Also, given the intricacies of some methodologies and the small number of scientific experts in a particular field, choosing reviewers who have not published together or do not consider themselves to be friends is often virtually impossible. It is incumbent upon the editor to trust the integrity of the reviewers. Most often that is not a problem, but sometimes it shapes the choice of reviewers.
  3. The reviewers invited by the editor will be sent the abstract and then the reviewer will decide if s/he would like to review the paper. In my experience as an editor it usually takes 4-5 invitations before I find an academic happy to review the paper.
  4. Once the editor has found 2-3 people willing to review the paper, the job is done for the editor for a few weeks. Typically, reviewers are asked to turn around papers in 2-4 weeks, but in reality this often takes longer. This situation illustrates one of the hypocrisies of academics. When we submit a paper, we moan at the length of time it takes to get comments back. However, I have lost count of the number of reviewers I have had to chase to get a review back! In some cases, as an editor it has taken me several weeks just to find 2 people willing to review the paper. This delay is one reason why it can take several months for a paper to be accepted for publication following submission. It is important to add that with most journals the review is blind, and the authors and reading public never know who performed the review.
  5. Once the reviews come back in from the 2-3 reviewers, the section editor has a decision to make. Reviewers are asked to provide comments as well as a decision. The decision options vary with each journal, but are typically some version of: Accept as submitted (incredibly rare), Accept with minor changes, Accept with major changes, Re-submit, or Reject. If the 3 reviewers all agree then this decision is easy. However, sometimes we may get one reviewer suggesting Minor, another Major, and a third Reject. In these cases, the section editor may either ask for another reviewer or make the decision themselves.
  6. If the decision is not “Reject”, the comments of the reviewers are sent back to the authors, alongside any comments from the editor, and the authors have the chance to reply with their amendments. This phase is when you will see the “Damn you reviewer 3” comments on Twitter. This is because you often get 2 glowing reviews and then a 3rd reviewer who wants a whole host of new studies performed.
  7. When the revised paper is resubmitted, the section editor will either send it back to the reviewers (if the comments were major) or may make a call themselves if the amendments were initially classed as minor. Once the editor and reviewers are happy, the paper is accepted and goes off to the publisher for type-setting and publication.

That whole process seems straightforward and rigorous, right? However, it is far from ideal.

 

Where does it go wrong?

There are several hot spots where this system can be less than ideal. First, it should be noted that the review process is not meant to be a barrier to publication, but a process that helps authors submit papers that offer the best contribution to knowledge in the field. Many reviewers forget this, and it is the editor’s job to make sure the process serves that purpose.

  1. Self-nominating reviewers. Whilst good section editors will look carefully at these nominations, it appears that not all do. It is therefore feasible that a paper could be reviewed by a “best friend” who will be more lenient than they perhaps should be. It has also been identified that some less than honest researchers create fake email addresses and names of academics to allow themselves to review their own papers! I could not believe this myself when I was told, but apparently this is why some journals have stopped asking for suggested referees, with one journal having to withdraw hundreds of manuscripts for this reason! Moreover, I was once invited to review my own paper. This invitation was, of course, a mistake made by the editor, who had seen vitamin D in the title and not realised I was a co-author. I promise you I did not review this paper and immediately pointed out the error to the editor.
  2. Getting the true experts to review the paper. As an editor, I often play the following game, and it is not a fun one. I will get a paper on, for example, protein, so I immediately go to the world leaders in this field. I fully expect them to say no as I know how busy they are. Often, they do say no but suggest capable alternates. These are typically junior colleagues in their group. However, these colleagues are sometimes also too busy, and they then recommend someone else to review the paper. Sometimes this can be a PhD student in their group. So, we started off inviting world leaders but end up with junior people reviewing professors’ papers. This situation does not sound great, does it? Personally, I do not think so. But what do you do when the 10 leading people in the world turn down the invitation to review the paper? Someone has to review it, and we need to get this process performed in a timely manner or the journal’s reputation crashes.
  3. Inexperienced reviewers do their best, but they may not be qualified to review the paper. It is possible that a reviewer knows a great deal about most of the paper and so agrees to review it. The paper, for example, could be about nutrition to prevent muscle soreness. In it, the authors may have used a novel assay, described as brilliant, to measure free radical involvement in the damage, and give what appears to be a suitable reference to support this decision. However, if the reviewer is not a redox biologist, they may not be able to give this section of the paper a thorough review, and consequently it slips through the reviewing net, despite the assay potentially being inappropriate and leading to incorrect interpretations in the paper. Another good example of this sort of problem is DXA for the measurement of muscle mass. Lots of sport scientists are now using this technique when in reality it has many major flaws that often go unacknowledged in papers.
  4. It is hard to say to someone “I don’t believe you”, especially if you are a young scientist reviewing the paper. Imagine the scenario. A paper is submitted showing that a new herb results in lean mass gain of 12kg in 6 weeks with no change in diet. The authors do a great job in explaining why this result may have happened and the figures presented clearly show this gain is true. The methods used appear solid. But we know the data cannot be true. It takes a very confident reviewer and editor to demand proof that these data are legitimate.
  5. The pressure to publish or perish and the difficulty of publishing negative data. I like to give people the benefit of the doubt, and I do believe most people are good. I therefore like to think that, rather than making up data, it is more likely that some people put an overly positive spin on negative data. Academics are under immense pressure to publish, and it can be really hard to publish negative data.
  6. The paper could have been rejected several times by world leaders, but the authors have simply moved on to another journal. Some papers are trashed by world experts, but the authors can simply move on to a lesser journal and hope the paper does not land back on the same reviewer’s desk. So, flawed science could be pointed out by one, or even many, journals but missed by another. It’s a bit like spinning a roulette wheel: eventually you get what you want. This cannot be right, can it? The one I find even stranger is when a journal tells you that the paper is not suitable for the main journal, but that they would consider it for their sister “Open Access” journal, where if you pay a few thousand pounds for open access there is a good chance it will get in.

Above – The paper I am most proud of. This was the first paper to take muscle biopsies from professional rugby league players. Unfortunately, it got a hard time during peer review because one reviewer did not see how it added to the literature, given that similar work had been done previously in football. You just never know what comments are going to come back from reviewers.

The end outcome is that poor science can often creep into the literature and is then used to sell some useless product. But the product has a “peer reviewed study” to confirm the claim, and therefore we are evidence-based practitioners. But are we? Surely as evidence-based practitioners we need the ability not just to take the research at face value but to critically appraise it?

 

Why is it so hard to find qualified reviewers?

Time pressures on senior academics. I could stop writing this section now – but I won’t. I can’t help but feel academics have been taken for a ride for a long time by this system, and I wonder if many are now making a stand by reducing their reviewing commitments. Think about it. As academics, we have to:

  1. Perform the study
  2. Get the income to pay for the analysis
  3. Write the paper
  4. In some cases, pay upfront to submit to the journal (many US-based journals have submission fees; for example, it costs $100 to submit a paper to MSSE). On many occasions, I have paid this submission fee from my personal finances.
  5. At least 2 more academics will review the paper for free
  6. The journal then asks you to sign over copyright of YOUR work that you paid to do
  7. You then get a nice bill to publish it (often about 1K), and this is not open access. If you need open access, this can cost several thousand. It is not unusual to submit a paper you are really proud of and then go into a blind panic when you realise how much it is going to cost you to publish it. And if you want a colour figure in the article, expect to pay an additional £1,000!
  8. People then pay to read it, either through subscribing to the journal or paying for the individual article. They don’t pay you, or your university, but the publisher.
  9. Just to reiterate the skewed system, the academic reviewing the paper does not get paid a penny to review the paper and the academic who published the work not only earns no money from publishing it, but pays to have it published. Yet…
  10. The publisher makes a huge profit, often several million dollars. As of 2015, the academic publishing market that Elsevier leads had an annual revenue of $25.2 billion. According to its 2013 financials, Elsevier had a higher profit margin than Apple, Inc.

And we have all played this game for years thinking it is a good and fair system. We must be mad. It is therefore no wonder that the more experienced academics do not want to review for the smaller sport science journals. And fair enough. I spend about 1 day per week reviewing and editing papers (and I do not get paid for being an editor). How many other jobs are there in which part of your role is to work for free for publishers so they can make huge profits? Not only that, but with modern university workload plans, editing a journal is not considered a contribution to your time, at least not directly. The more senior academics also have to review grants and conference proceedings and therefore often reserve their time for the higher profile journals.

 

Is there an answer to this?

I am not saying I have all the answers but I do wonder if we need to start thinking differently. A few suggestions that may work could be:

  • Use some of the huge profits made by the publishers to pay the reviewers. I think this way we could tempt some of the bigger names to perform timely and accurate reviews. This does not need to be huge money, but some reward for giving up half a day of your life to review for a publisher that will make considerable profit from your efforts only seems fair. Or am I wrong?
  • Perhaps we need to name the reviewers. This way, following a review, people could point out if an author’s spouse or best mate reviewed the paper! It may also take away the temptation to ask a friend to review your paper. It would also give the review more credibility if, for example, you had a paper on protein reviewed by Stu Phillips. Finally, it would hold the reviewers accountable, making every reviewer less likely to be unreasonable. The reviewers should be named only at the end of the process, so there would be less chance of personal conflicts during the review itself.
  • Maybe we need better post-publication critical review? You can write letters to the editor, but more timely post-publication discussions may be a good thing, with manuscripts withdrawn if the authors cannot adequately defend major problems.
  • Papers should be withdrawn if authors fail to declare reasonable COIs.
  • We certainly need better research funding in sport science. Often, we have to go to companies with commercial interests in what we are doing, because otherwise no one will fund our work. Who else other than a sport drink company would be willing to fund research on sport drinks? Wouldn’t it be great if more independent funding was available for sport science? There are some very rich organisations connected to sport; surely some of these could support research.
  • I think we need fewer journals. There are so many ‘predatory’ journals now (that is, journals that will publish almost anything for a fee – we get at least 10 invitations per week to submit to one of these journals). These days, it is so easy to just keep submitting until someone accepts the paper.
  • Should we have to submit previous comments and the names of the referees when we re-submit our paper? We would be able to explain what we have done to address the original concerns or why we believe the reviewers were off the mark. This would stop people continually submitting until they find an underqualified reviewer to judge their work.
  • Maybe we need a “3 strikes and you are out” system. If the paper has been rejected 3 times (so 3 editors and 6 reviewers have found major problems with it) should that mark the end of the paper and suggest that it is time to perform a new study or at least a substantial re-write? I often use this rule myself and I have many papers that are assigned to the “almost made it pile”. But, of course, there is no way to enforce this system.

 

Conclusions

Hopefully you can see that although peer review is essential to maintaining scientific integrity, there are many places where the system currently falls down. It is therefore essential that everyone not only looks at the research but actually does some due diligence on the studies that are cited. I will try to round this up with a few tips that may help you decide whether you should fully trust that paper being used to justify the next best thing in sport nutrition.

Tip 1 – Remember the old adage that if something looks too good to be true, it probably is. If there is a supplement that results in a 5kg increase in lean mass in a week, I think we can all be sure this increase is, to be generous, not quite right. Use your Spider-sense; chances are you will be correct.

Tip 2 – Judge the journal. If someone has game-changing research they would not publish it in the ‘Wigan Journal of Sport Nutrition’. If you find it is in this kind of journal, there is a fair chance it has been rejected from the more credible journals. After all, we all want to publish our work in the very best journals, since we are judged as academics by publishing in high impact journals. I am very sceptical of some of the low impact open access journals. It feels like you can buy your way into some of these. One way to triage journal quality is to see whether it is associated with a major scientific society. For example, the Journal of Physiology is associated with the Physiological Society – one of the most revered societies of physiologists.

Tip 3 – Look at the reputation of each of the authors and that of the group leader. Look for the last-named author on the paper; in our area of research, this person is often the brains of the operation. Does this author regularly get asked to speak at international conferences? Have they got a track record in the area? This is not to say new people cannot come along and change the field, but it is certainly worth some thought.

Tip 4 – Unfortunately, you need to read the full paper and scrutinise the methods. I used to criticise students for relying too heavily on an abstract, but these days people seem to go even further and rely on a tweet. It is by looking at the methods, particularly the limitations and assumptions involved, that we can begin to judge the quality of the science. I do, however, think that some authors need to be more diligent with their abstracts; after all, an abstract should be a concise and accurate reflection of the paper.

Tip 5 – Look for a COI statement. If there is one, ask yourself whether these data have been overplayed to help the product. As I said, I have no issues with COIs, but they need to be declared. If there is no COI, why not do a quick Google search of the authors and the product they are testing? It’s surprising how many times you find clear associations between authors and companies when you do this. If the authors do own or have shares in the company and there is no COI, personally I become very sceptical.

 

I hope this blog has given you some insights into peer review and some thoughts on some ways to navigate it slightly better. I must finish by saying that these are just my opinions and other senior academics will no doubt have very different views. I guess that’s what makes life interesting, if we all thought the same way life would be very dull.

Until the next close encounter of the nutritional kind

Good luck

Graeme (and Kev this week)

p.s. I thought I had better finish this with our own COIs;

 

Graeme:

GSSI – Received honorarium to speak at their expert meeting

Aliment Nutrition – currently funding research on probiotics in my lab

Science in Sport – funds research in our laboratory

GSK – funds research in our laboratory

Get Buzzing Bars – Help with new product development and scientific writing

Healthspan Elite – Help with new product development and scientific writing

Nutrition X – Help with new product development and scientific writing

Other sources of funding for my research include MRC, BBSRC, Everton FC, England Rugby and Warrington Wolves.

 

Kev:

GSSI – Received honorarium to speak at their expert meeting

GSK Nutritional Healthcare – funds research in our laboratory

Smartfish – funds research in our laboratory

 

 
