On January 13th the National Information Standards Organization (NISO) held a webinar exploring the landscape of retractions in scholarly communications. The webinar featured three experts on the subject: Ivan Oransky from Retraction Watch, Veronique Kiermer from PLOS, and Kirsty Meddings from CrossRef. Together they painted a picture of that landscape that, like many of the painted landscapes that stay with us, was thoroughly bleak but ultimately hopeful.

Oransky opened with a comical yet troubling prank played on the Journal of Computer Science and Information Technology with a well-meaning paper-generating algorithm (well-meaning at least judging from the article's title, "Robots No Longer Considered Harmful"). Some librarians and scholars have blogged that the journal is part of a predatory publisher group called the American Research Institute for Policy and Development (ARIPD), which solicits fees from authors for publication while failing to properly edit and review. The computer-generated article, essentially gibberish, was accepted into the journal under the aliases I.P. Freely, Oliver Clothesoff, Jacques Strap, Hugh Jazz and Amanda Huginkiss. Oransky humored those of us on the other end of the webinar and read the names aloud.

Retraction Watch, funded by the MacArthur Foundation, the Arnold Foundation and the Helmsley Trust, serves as a resource scholars can use to flag erroneous scholarship in science journals, and also runs a blog covering important retractions in the STEM community.

Kiermer highlighted a notable spike in retractions over the last few years: the number of retractions indexed in PubMed, for instance, grew from 40 in 2001 to 400 in 2010. While this spike appears ominous, Kiermer believes it is likely indicative of improved checks and balances in the scholarly community, as technology is leveraged to uncover errors in publications.

Kiermer offered a few remedies to the current situation: increased training in laboratory management, improved leadership, mentoring, and incentives for rigor. She concluded: "Journals have a large role to play, but at the end of the day this will take more than journals; it will be a global effort."

Kiermer noted that PLOS has adopted CrossMark, a feature developed by CrossRef. Meddings described it as a plugin that surfaces the retraction status of an article, allowing easy access to the current standing of the scholarly content that researchers and policy makers rely on to make decisions in their fields. Meddings emphasized the importance of taking the extra metadata step, when publishing, of tying the author's ORCID iD and the article's DOI together, noting that although a DOI can connote credibility, CrossRef does not vet the content behind one. The ORCID iD connects the article to its author, and in that way can help researchers better evaluate rigor.
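The status that CrossMark surfaces is also reachable programmatically through CrossRef's public REST API, which is how the metadata step Meddings described pays off downstream. Below is a minimal sketch, in Python with the requests library, of how one might look up a DOI's deposited metadata, the ORCID iDs attached to its authors, and any retraction or correction notices registered against it. The endpoint paths, JSON field names, and the "updates" filter reflect my reading of the CrossRef API documentation rather than anything stated in the webinar, and the DOI shown is a placeholder, so treat this as an assumption-laden illustration, not an official client.

```python
# Sketch: querying the public CrossRef REST API for a DOI's metadata
# and for any update notices (retractions, corrections) that point at it.
# Field names ("author", "ORCID", "update-to") are assumptions based on
# the CrossRef JSON schema, not verified against a live deposit.
import requests

API = "https://api.crossref.org"

def article_metadata(doi):
    """Fetch the CrossRef metadata record for a DOI."""
    resp = requests.get(f"{API}/works/{doi}", timeout=10)
    resp.raise_for_status()
    return resp.json()["message"]

def author_orcids(record):
    """List any ORCID iDs deposited alongside the article's authors."""
    return [a["ORCID"] for a in record.get("author", []) if "ORCID" in a]

def update_notices(doi):
    """Find works (e.g. retraction notices) registered as updates to this DOI."""
    resp = requests.get(f"{API}/works",
                        params={"filter": f"updates:{doi}"}, timeout=10)
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    return [
        (item["DOI"], update.get("type"))
        for item in items
        for update in item.get("update-to", [])
        if update.get("DOI", "").lower() == doi.lower()
    ]

if __name__ == "__main__":
    doi = "10.1371/journal.pone.0000000"  # placeholder DOI for illustration
    record = article_metadata(doi)
    print("Title:", record.get("title", ["?"])[0])
    print("Author ORCIDs:", author_orcids(record) or "none deposited")
    for notice_doi, update_type in update_notices(doi):
        print(f"Update notice {notice_doi} (type: {update_type})")
```

Of course, tooling like this is only as good as the deposited metadata, which is precisely Meddings' point: if publishers skip the ORCID and update deposits, there is nothing here to find.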

If the rise in retractions over the last decade is in fact the result of improved techniques for identifying erroneous scholarship through the application of metadata, APIs and tech-savvy watchdogs, we are naturally left concerned about the slew of articles that have historically misled our scientific community. But while we'll never know the full impact of those errors, we can at least endeavor to dutifully tag our current publications with the proper metadata, to utilize the digital tools at our disposal to ascertain the standing of relevant scholarship, and to move in a more transparent direction.