As the month comes to an end, you realize how busy you’ve been reading, writing, publishing, doing research, and much more! But it’s also important for you to stay updated on the ongoing discussions in the scholarly publishing industry. To help you keep up with the latest topics of discussion in academia, we’ve curated this list of interesting posts and updates. While most of these posts are about journal publishing and academic life, we also bring you an update about one of the most significant industry events of the year – Peer Review Week 2017!
1. MECA – A new manuscript exchange initiative: The process of taking an article that has been submitted to one journal and transferring it to another journal after rejection is fraught with frustration and anxiety for researchers. They have to spend an inordinate amount of time and effort reformatting the article for resubmission. In this post, Charlie Rapple, co-founder of Kudos, which helps researchers, publishers, and institutions maximize research outreach and impact, writes about MECA, the Manuscript Exchange Common Approach. This interesting new initiative for manuscript exchange was launched at the Annual Meeting of the Society for Scholarly Publishing (SSP) to simplify manuscript transfer across publishers, and it aims to focus on recommended best practices. Clarivate Analytics (ScholarOne), Aries Systems (Editorial Manager), eJournal Press (eJPress), HighWire (BenchPress), and PLOS (Aperta) are among the organizations backing this initiative. Whether MECA will introduce standardized formatting across different journals remains to be seen.
2. Addressing the irreproducibility crisis with a new measure – the R-factor: A group of researchers led by Peter Grabitz has come up with a new solution to the irreproducibility crisis. They propose an approach “that yields a simple numerical measure of veracity, the R-factor, by summarizing the outcomes of already published studies that have attempted to test a claim.” The R-factor of a research claim is calculated by dividing the number of published reports that have verified the claim by the number of published attempts to test it. Building on this idea, the R-factor of a researcher, journal, or research institution can be calculated as the average of the R-factors of the claims they have reported. The R-factor stands for responsibility, robustness, and reputation. In this blog post, the author critically evaluates this approach and points out certain flaws in the R-factor: (i) it is too simplistic, (ii) it could be affected by publication biases, (iii) it does not add value to the measures we already have for ensuring reproducibility, and (iv) it does not go into detail about exactly how it should or could be used, only how it can be calculated. Overall, according to Neuroskeptic, “the R-factor might work in some fields, but I don’t think it’s adequate for any science that uses statistics – which includes the vast majority of psychology and neuroscience.”
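To make the arithmetic concrete, here is a minimal sketch of the R-factor calculation as described above. The function names and sample numbers are our own illustration, not part of the Grabitz et al. proposal:

```python
# Illustrative sketch of the R-factor arithmetic (hypothetical numbers).

def r_factor(confirmations: int, attempts: int) -> float:
    """R-factor of a single claim: confirming reports / published attempts to test it."""
    if attempts == 0:
        raise ValueError("R-factor is undefined when no test attempts have been published")
    return confirmations / attempts

def aggregate_r_factor(claim_r_factors: list[float]) -> float:
    """R-factor of a researcher, journal, or institution: average over their claims."""
    return sum(claim_r_factors) / len(claim_r_factors)

# Example: a claim confirmed by 6 of 8 published attempts has an R-factor of 0.75.
print(r_factor(6, 8))  # 0.75

# A journal that reported three claims with R-factors 0.75, 0.50, and 1.00:
print(aggregate_r_factor([0.75, 0.50, 1.00]))  # 0.75
```

Even this toy version hints at one of the criticisms above: a claim with no published replication attempts has no R-factor at all, and nothing in the bare ratio distinguishes a well-powered confirmation from a weak one.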