As an independent scholar, I have anecdotal evidence in favor of Stevan’s prediction.  Two aspects stand out for me:


1.  In the evolving delivery of Massive Open Online Courses (MOOCs), some subjects do not lend themselves to machine grading, although multiple-choice quizzes are heavily used.  For substantial assignments, peer assessment is used instead, since it is the only workable approach among thousands of participants.  In some technical courses, peer assessment is central and the quality of assessments is taken seriously by the students.  At worst (and I am participating in one such course at the moment), peer assessments are largely ceremonial and the returned comments are superficial and banal, so the opportunity to learn and improve from the reviews and perspectives of others is lost.  One can, of course, learn a great deal from assessing the work of other students: it improves one's own critical understanding and provides practice at constructively appraising the work of others.  How well this spirit is instilled varies across the MOOCs I have been in.  Where it is taken seriously and guided appropriately, the peer assessment process is invaluable.

2.  Scholarly and scientific peer review is a different matter, with different drivers, including editorial constraints and the availability of qualified and interested reviewers.  (In the Coursera MOOCs, a student does not receive marks and appraisal of their own assignment without first providing blind peer assessments of at least five other students' work.  This is valuable so long as students do not game the system by simply giving their peers high marks and no feedback.  Some forget that review is supposed to be constructive rather than an ego trip, a failing not unique to MOOCs.)

Recent experience with EasyChair suggests that, on the whole, reviewers take their duties seriously and provide excellent observations.  Here there are constraints: limits on the length of submissions, norms of the community, the time available for review, and the fact that not all submissions, regardless of quality, can be accepted.  In my case, I can still take value from the review process for a rejected submission and, if I choose, self-publish the work on a site, such as arXiv, that accepts that level of contribution.  It won't have the imprimatur of inclusion in a conference proceedings or professional publication, yet I can place the work in public, where it will have benefited from the reviews obtained and from subsequent comments by those whose attention is drawn to it, although that will be by informal means.


For me, none of this is a bad thing.  It makes work available, it provides quality drivers even if they fall short of the peer-review standards of a given field, and sometimes it is the best way to have work preserved, accessible, and open to further review and discussion.  In MOOCs where peer assessment and the cultivation of Community Teaching Assistants (somewhat like trustees) are well nurtured, I think the long-term effect on learning may be profound.  And the historically revered edifices will remain at the pinnacle of all this.



-- Dennis E. Hamilton    +1-206-779-9430  PGP F96E 89FF D456 628A
    X.509 certs used and requested for signed e-mail






From: [] On Behalf Of Stevan Harnad
Sent: Thursday, August 21, 2014 12:19
To: ASIS&T Special Interest Group on Metrics
Cc: LibLicense-L Discussion Forum; Lib Serials list
Subject: [BOAI] Crowd-Sourced Peer Review: Substitute or Supplement?


Harnad, S. (2014) Crowd-Sourced Peer Review: Substitute or supplement for the current outdated system? LSE Impact Blog, 21 August.


[ … ]

My own prediction (based on nearly a quarter century of umpiring both classical peer review and open peer commentary) is that crowdsourcing will provide an excellent supplement to classical peer review but not a substitute for it. Radical implementations will simply end up re-inventing classical peer review, but on a much faster and more efficient PostGutenberg platform. We will not realize this, however, until all of the peer-reviewed literature has first been made open access. And for that it is not sufficient for Google merely to provide a platform on which authors can post their unrefereed papers, because most authors don't even deposit their refereed papers in their institutional repositories until it is mandated by their institutions and funders.