Overview
• … and the problems it solves
• Credit for peer review
• Anonymous post-publication review - pros, and mostly cons
• Is peer review effective?
• An alternative - publish then filter - utopian, but with seeds of the future
• Automated peer review - not ready for prime time - yet
• Who owns the review?
• Conclusions
1. Decisions are delivered quickly
2. Active scientists make all decisions
3. Revision requests are consolidated
4. Limited rounds of revision
5. Decisions and responses are available for all to read
• … data duplication in image data
• Anonymity tends to be a driver of poor discourse online
• Can find misconduct, but can also be used as a vehicle for a witch-hunt; I am very ambivalent about this at the moment. The key thing for me is how the system shapes the tone of the conversation, which tends towards adversarial over constructive.
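As an aside on the image-duplication point: in essence this kind of sleuthing compares figure panels for near-identical content. The snippet below is a minimal, hypothetical sketch using a simple perceptual (average) hash; the function names and threshold are my own, and real image-forensics work is considerably more sophisticated.

```python
# Hypothetical sketch: flag pairs of figure files with suspiciously similar content
# using a perceptual "average hash". Not a real forensics tool.
from PIL import Image

def average_hash(path, size=8):
    """Downscale to size x size greyscale, then threshold each pixel on the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    return tuple(p > mean for p in pixels)

def hamming(h1, h2):
    """Number of positions where two hashes differ."""
    return sum(a != b for a, b in zip(h1, h2))

def flag_duplicates(paths, threshold=4):
    """Return pairs of images whose hashes are within `threshold` bits of each other."""
    hashes = {p: average_hash(p) for p in paths}
    return [(p1, p2)
            for i, p1 in enumerate(paths)
            for p2 in paths[i + 1:]
            if hamming(hashes[p1], hashes[p2]) <= threshold]
```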
Smith R. Peer review: a flawed process at the heart of science and journals. J R Soc Med. 2006 Apr;99(4):178–182. doi:10.1258/jrsm.99.4.178
• Intentionally introduced errors are not discovered
• Already-published papers that are re-submitted with the institutions changed to less prestigious ones get rejected
• Blinding does not improve peer review outcomes
• Implies that peer review is not selecting for quality
• … measured via citations
• https://elifesciences.org/content/5/e13323#fig1s1
• Beyond the top 3% of percentile ranks, proposal effectiveness is statistically indistinguishable => ranking is mostly useless
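To make "statistically indistinguishable" concrete, here is a rough sketch of the kind of comparison behind such a figure: test whether citation outcomes differ between adjacent rank bins of funded proposals. The column names, bin edges, and choice of a Mann-Whitney test are my assumptions, not the published analysis.

```python
# Illustrative sketch, assuming a DataFrame with "percentile_rank" and "citations"
# columns. Large p-values between adjacent bins mean rank does not separate outcomes.
from scipy.stats import mannwhitneyu

def adjacent_bin_tests(df, edges=(3, 10, 20, 30, 40)):
    """Mann-Whitney tests between adjacent percentile-rank bins of funded proposals."""
    results = []
    for lo, mid, hi in zip(edges, edges[1:], edges[2:]):
        a = df.loc[df["percentile_rank"].between(lo, mid), "citations"]
        b = df.loc[df["percentile_rank"].between(mid, hi), "citations"]
        _, p = mannwhitneyu(a, b, alternative="two-sided")
        results.append({"bins": ((lo, mid), (mid, hi)), "p_value": p})
    return results
```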
The Winnower
• Fast to publish
• Interesting model
• Not really the first choice for most academics
• To become plausible, an entire field would have to flip to this model
• Even in physics, where preprints are the norm, peer review on submission is a requirement
Automated statistical and methodological review
A collaboration between Tim Houle (Wake Forest School of Medicine) and Chad Devoss (Next Digital Publishing) to investigate whether it is feasible to automate the statistical and methodological review of research. The programme, StatReviewer, uses iterative algorithms to look for critical elements in the manuscript, including CONSORT statement content and the appropriate use and reporting of p-values. It makes no judgement call about the quality or validity of the science, only about the reporting of the study (a minimal sketch of this kind of check follows the example report below).
Reviewer’s report
✓ Did you make any changes to your methods after the trial began (for example, to the eligibility criteria)? Why were these changed?
✓ Were there any unplanned changes to your study outcomes after the study began? Why were these changed?
✓ Please explain how your sample size was determined, including any calculations.
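To illustrate, here is a minimal sketch of the kind of reporting check such a system might run. StatReviewer's actual rules are not described in this talk, so the checklist items, regexes, and the `review` function below are illustrative assumptions.

```python
# Illustrative reporting checks in the spirit of StatReviewer: detect whether key
# CONSORT-style elements are mentioned, and flag implausible p-values.
# All items and patterns here are assumptions, not the real product's rules.
import re

CONSORT_CHECKS = {
    "sample size calculation": r"sample size was (determined|calculated)",
    "eligibility criteria":    r"(eligibility|inclusion|exclusion) criteria",
    "primary outcome":         r"primary (outcome|endpoint)",
}

P_VALUE = re.compile(r"p\s*([<>=])\s*(\d*\.?\d+)", re.IGNORECASE)

def review(text):
    """Checklist report: which reporting elements appear, plus implausible p-values."""
    report = {item: bool(re.search(pat, text, re.IGNORECASE))
              for item, pat in CONSORT_CHECKS.items()}
    # p-values of exactly zero or above one indicate sloppy reporting, not bad
    # science - which matches the tool's stated scope (reporting only).
    report["implausible p-values"] = [
        m.group(0) for m in P_VALUE.finditer(text)
        if float(m.group(2)) == 0.0 or float(m.group(2)) > 1.0
    ]
    return report
```

Run against a manuscript's text, the resulting dict maps directly onto a checklist report like the one above.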
• A first, very early pilot is taking place now
• Have run the program against ~5 manuscripts
• Potential to extend to other kinds of submissions
• Contact Daniel Shannahan - [email protected]
• Extract many signals from the manuscript, including disambiguated authors, affiliations, citation graph, and keywords
• Attempt to predict future citations of the submitted manuscript (a rough sketch of such a pipeline is below)
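The talk names no implementation, so the feature schema and model below are assumptions; the sketch just shows the shape of the idea: combine text features with simple metadata and regress on later citation counts.

```python
# Hedged sketch of a citation-prediction pipeline: manuscript signals in,
# predicted future citations out. Schema and model choice are assumptions.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import GradientBoostingRegressor

def build_features(manuscripts):
    """manuscripts: list of dicts with 'abstract', 'n_authors', 'n_references' keys
    (an assumed schema standing in for authors, affiliations, citation graph, etc.)."""
    abstracts = [m["abstract"] for m in manuscripts]
    text_features = TfidfVectorizer(max_features=500).fit_transform(abstracts).toarray()
    meta_features = np.array([[m["n_authors"], m["n_references"]]
                              for m in manuscripts], dtype=float)
    return np.hstack([text_features, meta_features])

# Fit on past submissions with known outcomes; log-transform citation counts
# since they are heavy-tailed:
#   X = build_features(past_submissions)
#   model = GradientBoostingRegressor().fit(X, np.log1p(citation_counts))
```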
• Editor feedback was uniformly negative
• A feeling of “don’t tell me what good science is”
• If reconfigured, it probably has potential for value
• There is a concern that the model might be overfitting: selecting not on the content of the paper but on the context of the authors (one way to test this is sketched below)
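The overfitting concern is testable with a simple ablation: retrain with the author-context features (authors, affiliations) removed and compare held-out performance. This is my suggestion of how one could check it, not something the talk reports was done.

```python
# Ablation sketch: how much predictive power comes from author context rather
# than the paper's content? A large gap suggests the model ranks authors, not papers.
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

def context_ablation_gap(X_full, X_content_only, y, cv=5):
    """Mean cross-validated score with all features, minus the score with
    author-context features removed."""
    full = cross_val_score(GradientBoostingRegressor(), X_full, y, cv=cv).mean()
    content = cross_val_score(GradientBoostingRegressor(), X_content_only, y, cv=cv).mean()
    return full - content
```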
• … of evidential support
• No “one size fits all” approach
• The social design of your system can have a massive impact on the effectiveness of the review process
• Broad trend towards transparency, which can take many forms
• Automated systems are not ready yet; possibly the best fit is requirements checking, augmenting the process