By Mariana Arroja, PhD student, University of Glasgow
“How was this paper published?” This is a question that every scientist has asked themselves at least once. Assessing specialised literature can be quite challenging, but as experience develops it becomes easier to identify the bad science that is published on a daily basis. Before a manuscript is accepted for publication, it will have been subjected to the scrutiny of a select group of scientists in a process called peer review. Despite this, poor quality research is still sometimes accepted for publication in peer reviewed journals, and for some of those starting a scientific career this reality can be, let’s say, a bit discouraging. So what exactly is happening? Is peer review failing?
I recently attended a workshop organised by Sense About Science: “Peer Review: The Nuts and Bolts”. Students and researchers from an array of scientific specialities were present, alongside a discussion panel consisting of a publisher, an editor, reviewers and public advisors. The workshop set out to discuss the good and bad aspects of peer review, what could be improved, and what measures are beginning to be implemented to improve the process. From the outset, the event generated a healthy, informal and engaging discussion. For those without peer review experience in particular, the panel members’ talks offered more in-depth knowledge of the process, of what is expected from scientists when submitting a manuscript, and of how to review a paper once the opportunity arises.
For me, the really exciting part was the discussion of existing issues and what could be done about them. Much of the focus fell on the flood of publications with poor experimental design. At the same time, which researcher does not know someone who could not publish due to a ‘lack of good results’? And what does this really mean? It could be argued that the pressure on academics to publish, and the bias towards so-called ‘positive results’, are detrimental to study design and may even lead some scientists to manipulate data, especially when research funding and tenured positions often depend on these publications. This, in turn, defeats the purpose of peer review.
I learnt that editors and publishers are debating the benefits of a shift towards rewarding well-designed experimental studies rather than novelty, and also that the process of peer review itself is improving and becoming more engaging. For instance, scholarly accreditation for reviewers is being discussed, and the ORCID identification system for manuscript authors is starting to be implemented by publishers.
Ultimately, peer review can be an effective process. It promotes the exchange of ideas between researchers (after all, it even helped Einstein), but in reality it still needs to be polished.
Overall, it was a great experience, but most importantly it encouraged me to continue ‘standing up for science.’
This workshop offers the opportunity to learn about this interesting topic, and I highly recommend it to others. For those who would like to find out more, the information is available here.
Further information and reading:
A publication, ‘Peer Review: The Nuts and Bolts’, can be downloaded from the Sense About Science website.
The inaugural Peer Review Week runs from 28 September until 2 October. It is a partnership between Wiley, ORCID, Sense About Science and ScienceOpen. It is a virtual event and includes:
- Daily posts about peer review on each organisation’s blog – Wiley, ORCID, Sense About Science and ScienceOpen.
- A Twitter campaign (#peerrevwk15)
- A webinar on trust and transparency in peer review, with Kent Anderson (AAAS), Verity Brown (University of St Andrews), Alexander Grossmann (ScienceOpen), Laure Haak (ORCID) and Andrew Preston (Publons)