The Azimuth Project
Academic publishing (Rev #12)

The idea

Academic publishing is in crisis, but this is also a time of great opportunity for the authorship and dissemination of knowledge. Researchers nowadays rarely have enough time, or reward, for quality refereeing; editors work largely for free for high-priced and often uncooperative commercial publishers; libraries cannot keep up with rising journal subscription prices and are often trapped in package deals that make them dependent in the long term and force them to buy unwanted journals; and there is no guarantee that free services like the arXiv will stay free, or that older knowledge will be preserved. The internet and cheap e-readers are now widely available and open new possibilities; publishers also perceive them as a danger, since they make infringement of author and publisher rights easy. Here we also discuss academic evaluation practices, based on peer review and on the use and measurement of citation statistics, on assessing creative originality, and on detecting plagiarism.

This community is interested in reforms for a better publishing future, and in developing technologies to support those reforms. For more, see:

The initial version of this article (and some of the later additions) is compiled from the nLab-related private wiki zoranskoda:citations.

While in court it is easier to win if one has a prior registration of copyright with a copyright office, in principle most copyright and patent laws give advantage, in provable cases, to the factual priority of the work even when it is not registered. That is, every author's work is protected from the moment of creation; registration with a copyright office merely makes priority easier to prove in disputes.

According to some historians and anti-copyright activists, copyright law in the 19th and early 20th centuries mainly served the authors, whereas today it is structured in a way that protects mainly the publishers and the authors less so. In particular, authors often lose battles with their own publishers when they try to make parts of their work free, or to publish it in a form they prefer.

Citations and impact factors

  • Joint Committee on Quantitative Assessment of Research, Citation Statistics, a report from the International Mathematical Union (IMU) in cooperation with the International Council of Industrial and Applied Mathematics (ICIAM) and the Institute of Mathematical Statistics (IMS)

This is a report about the use and misuse of citation data in the assessment of scientific research. The idea that research assessment must be done using “simple and objective” methods is increasingly prevalent today. The “simple and objective” methods are broadly interpreted as bibliometrics, that is, citation data and the statistics derived from them. There is a belief that citation statistics are inherently more accurate because they substitute simple numbers for complex judgments, and hence overcome the possible subjectivity of peer review. But this belief is unfounded.
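To make concrete how much such "simple and objective" numbers compress a research record, here is a minimal sketch (with invented citation counts, purely for illustration) computing two common bibliometric summaries: the h-index and the mean number of citations per paper. Two very different publication records can easily produce identical values of either number.

```python
# Minimal sketch of two common bibliometric summaries.
# The citation counts below are invented for illustration only.

def h_index(citations):
    """Largest h such that at least h papers have h or more citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical citation counts for one author's seven papers.
citations = [50, 18, 6, 5, 4, 1, 0]

print("h-index:", h_index(citations))                       # prints 4
print("mean citations:", sum(citations) / len(citations))   # prints 12.0
```

Note that the single heavily cited paper dominates the mean while barely affecting the h-index; neither statistic reveals the shape of the distribution, which is part of the report's point about substituting simple numbers for complex judgments.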

The article by Arnold and Fowler also prompted further discussion.

The publisher’s game:

Carl Bergstrom and Ted Bergstrom have written several papers on game-theoretic aspects of publishing, which mathematicians might want to study:

and some of the other papers and information at

Software initiatives for academic publishing

Plagiarism, bad-science authors/editors, etc.

The science of peer review in science

  • Is it reliable? There is low agreement between reviewers (Daniel, Mittag, & Bornmann, 2007). Agreement is higher about which papers should be rejected than about which should be accepted.
  • Should reviewers be blind to the identity of authors? An initial study in the British Medical Journal found that blinding reviewers to the authors' identities improved the quality of the reviews (McNutt, Evans, Fletcher, & Fletcher, 1990). When this was extended to other journals, there was no evidence that it improved review quality (Justice, Cho, Winker, Berlin, & Rennie, 1998; van Rooyen, Godlee, Evans, Smith, & Black, 1998).
  • Should authors and the public be blind to the identity of reviewers? A study found that informing authors of their reviewers' identities (i.e., an open review process) did not change the quality of the reviews (van Rooyen, Godlee, Evans, Smith, & Black, 1999). In an unpublished study, the authors opened up the review process further by posting the identities of both reviewers and authors on the journal's website; this had no effect on the quality of the reviews (discussed in Smith, 2006).
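The "low agreement between reviewers" in such studies is typically quantified with a chance-corrected statistic such as Cohen's kappa. As a hedged illustration (the accept/reject decisions below are invented, not taken from any of the cited studies), here is a minimal sketch of how kappa is computed for two reviewers:

```python
# Cohen's kappa: agreement between two raters, corrected for the
# agreement expected by chance. The decisions below are invented.

def cohens_kappa(rater_a, rater_b):
    """Kappa for two equal-length lists of categorical decisions.

    Assumes the raters do not agree perfectly by chance alone
    (expected agreement < 1), so the denominator is nonzero.
    """
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    labels = set(rater_a) | set(rater_b)
    # Observed agreement: fraction of cases where the raters coincide.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement under independence of the two raters.
    expected = sum(
        (rater_a.count(lab) / n) * (rater_b.count(lab) / n) for lab in labels
    )
    return (observed - expected) / (1 - expected)

a = ["accept", "reject", "reject", "accept", "reject", "reject"]
b = ["reject", "reject", "reject", "accept", "accept", "reject"]
print(round(cohens_kappa(a, b), 2))  # prints 0.25
```

A kappa of 1 would mean perfect agreement and 0 no agreement beyond chance; values like the 0.25 above, in the range often reported for peer review, illustrate why the reliability question in the first bullet is taken seriously.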

category: publishing