Frequently Asked Questions

Why is it important to make information about who did what machine-readable?

To visualize or quantify the contributions of one or more researchers across a set of articles, one needs comparable things to tally. While some might prefer to stick to free-text essays about one’s work, both for metascience and for researcher evaluation purposes the world needs things that can be counted. With a delineated taxonomy such as CRediT used in papers, this becomes possible. While AI can now read papers, some information remains hard to tally reliably and will not be taken up unless a common standard is involved. A minimal sketch of such a tally, using invented data, follows below.
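As a minimal sketch (the article data here is made up for illustration), once contributions are recorded as standardized CRediT role names, counting them across a researcher's articles is straightforward:

```python
from collections import Counter

# Hypothetical example: CRediT roles credited to one researcher
# across three articles, as extracted from article metadata.
articles = [
    ["Conceptualization", "Writing - original draft"],
    ["Data curation", "Formal analysis", "Writing - review & editing"],
    ["Formal analysis", "Writing - original draft"],
]

role_counts = Counter(role for roles in articles for role in roles)
print(role_counts.most_common())
# e.g. [('Writing - original draft', 2), ('Formal analysis', 2), ...]
```

The same tally is essentially impossible to automate over free-text contribution statements, which is the point of using a shared taxonomy.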

How is CRediT information made machine-readable by journals?

Most journals use a paper submission system provided by the journal's publisher. These submission systems are designed to collect the journal article metadata (such as the names and affiliations of the authors, and the abstract of the paper) from the corresponding author and store them in a machine-readable file. Most systems use the Journal Article Tag Suite (JATS) XML format to attach these metadata in an organized manner to the journal article published online. While the latest JATS 1.3 format allows author contributions to be recorded according to CRediT in the article metadata, not all submission systems collect authors' contributions. PLOS journals collect this information as metadata through a form.
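As a rough sketch of what "machine-readable" means here, the snippet below pulls CRediT roles out of a JATS-style XML fragment. The fragment is invented for illustration and uses the <role> element with vocab attributes in the way JATS 1.3 describes for CRediT; the XML actually produced by a given journal may differ in detail.

```python
import xml.etree.ElementTree as ET
from collections import defaultdict

# Invented JATS-style fragment for illustration; real journal XML may
# structure the <contrib-group> and its attributes differently.
jats_fragment = """
<contrib-group>
  <contrib contrib-type="author">
    <name><surname>Doe</surname><given-names>Jane</given-names></name>
    <role vocab="credit"
          vocab-identifier="https://credit.niso.org/"
          vocab-term="Conceptualization"
          vocab-term-identifier="https://credit.niso.org/contributor-roles/conceptualization/">Conceptualization</role>
    <role vocab="credit"
          vocab-identifier="https://credit.niso.org/"
          vocab-term="Formal analysis"
          vocab-term-identifier="https://credit.niso.org/contributor-roles/formal-analysis/">Formal analysis</role>
  </contrib>
</contrib-group>
"""

root = ET.fromstring(jats_fragment)
roles_by_author = defaultdict(list)
for contrib in root.findall("contrib"):
    surname = contrib.findtext("name/surname")
    for role in contrib.findall("role"):
        # Prefer the standardized vocab-term attribute over the element text.
        roles_by_author[surname].append(role.get("vocab-term", role.text))

print(dict(roles_by_author))
# {'Doe': ['Conceptualization', 'Formal analysis']}
```

Because the role names come from a fixed vocabulary with stable identifiers, output like this can be aggregated across articles and publishers without any text interpretation.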

The CRediT categories aren’t well-suited for my project; what should I do?

Other schemes exist, such as the Contributor Roles Ontology, which extends CRediT into more specific roles, and the Taxonomy of Digital Research Activities in the Humanities (TaDiRAH). Unfortunately, within science at least, these other schemes are not widely used. If you would like to see CRediT itself reformed, you may wish to contact NISO's CRediT committee.