There are three basic types of peer review: open, single blind and double blind. Open peer review means everybody knows each other's names. This is normally discouraged, since people tend to read with a bias even if they don't know they do: good points by somebody you don't like are disregarded, bad points by a friend are accepted. Single blind protects one group or the other, depending on whether the papers or the reviewers are anonymised. If the authors are anonymous, the reviewers don't know whose work they are grading; if the reviewers are anonymous, the authors don't know who evaluated them. The latter is typically chosen when the topic can be expected to be severely contested and the repercussions could turn aggressive; that kind of single blind is very rarely used. Double blind is when the authors don't know the reviewers, and vice versa. This is the most common type, and what most journals and conferences practice, including DiGRA. DiGRA recently (2010) tightened the peer-review net: on top of the double-blind process there is also a meta review, carried out by reviewers whose only job is to read the reviews and look for problematic ones. In other words, DiGRA peer-reviews not only the articles but also the reviews.
Why use meta reviews? Because a double-blind peer-review process means that every review is weighted equally. If you happen to hate one particular type of research, you can hide behind this process and vote down all research of that kind without being questioned. This tends to happen to new, challenging and very critical research. So if you have a brilliant new idea, peer review may destroy that idea rather than give it a chance, because any list of reviewers will be dominated by those who hold the most common ideas about what is important. This is particularly visible when you want to introduce a challenging idea into a research community: for instance, the idea that ethnography is as valid for collecting knowledge about society as surveys, presented at a conference normally run by hard-data sociologists. The ethnographer is not likely to have the paper accepted. The same happens to a hard-data sociologist who thinks it is a good idea to run books through a statistical program to see how many times certain phrases show up, write about the quality of the books from counting rather than reading, and then submit the results to a literary theory conference. It is very likely to be stopped in the peer-review process. A regular, double-blind peer-review process is therefore very good at maintaining the status quo, but bad at inviting innovation. Using meta reviews is a way to try to counter this. By reading through the reviews looking for methodological or theoretical bias, the meta review gives a second chance to articles that have been voted down because of the bias of the readers. Meta reviews are not used to remove already accepted papers, unless the process uncovers hints of collusion between authors and reviewers.
How can one criticise a peer-review process? First of all, peer reviewing is a way to restrict and control what can be published. It maintains quality, but at the same time it frequently stops fresh, original and unusual ideas. Next, peer reviewing is extremely costly, and it makes it difficult to organise conferences on original, unusual topics. This means that scholars who want to study innovative topics, such as games, are punished in the academic system simply because there are very few relevant conferences to go to and few places to publish articles. No matter how good they are as scholars, they are handicapped in the general contest, just because they don't have the same options for scoring academic points as other scholars. If, say, a literature scholar can submit to 100 potentially relevant conferences a year and a game scholar can submit to five, getting the four accepted papers a year you need to impress your hiring board becomes very, very hard. And since universities currently focus more on academic "points" than on academic innovation, a game scholar will need to work a lot harder for the same recognition as the literature scholar. This is of course true of all new fields trying to gain traction in academia.
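To make the venue-scarcity point concrete, here is a toy calculation rather than anything from the post itself: assume, purely for illustration, that every submission has the same 30% chance of acceptance and that submissions are independent. The 5 versus 100 venue counts come from the paragraph above; the acceptance rate and the independence assumption are mine.

```python
# Toy illustration of the venue-scarcity argument.
# Assumptions (mine, not the post's): a flat 30% acceptance rate and
# independent submissions. The 5 vs. 100 venue counts come from the text.
from math import comb

def prob_at_least(k, n, p):
    """Probability of at least k acceptances from n submissions,
    each accepted independently with probability p (binomial tail)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

p_accept = 0.30   # assumed per-submission acceptance rate
target = 4        # accepted papers per year the hiring board expects

for venues in (5, 100):
    print(f"{venues} venues: P(at least {target} accepted) = "
          f"{prob_at_least(target, venues, p_accept):.3f}")
```

Under these made-up numbers the game scholar's chance of landing four acceptances is around 3%, while the literature scholar's is effectively certain; the exact figures don't matter, only how steeply the odds fall with the number of available venues.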
This means that if a conference wants to be open to a wide range of ideas, it needs to go easy on the peer reviewing. Many conferences do this by accepting abstracts rather than full papers. This is risky, because you then don't know what the quality will be, but at the same time it makes it easier for scholars who research genuinely new things to be heard. And since academia is also about actually learning new things, it is very important to keep some of these very open and welcoming conferences running; they are vital hubs for mixing the new and the old. Sadly, universities tend to refuse funding to scholars who wish to attend the less rigidly reviewed conferences, which is yet another obstacle to innovation.
DiGRA currently has two types of paper submissions: full papers and abstracts. Full papers are very rigidly peer-reviewed, with two to three reviewers and then meta reviewers. Abstracts are reviewed along the same lines, but the final full papers written from the accepted abstracts are not reviewed unless they are submitted to the journal, at which point they are reviewed again. This is to allow a mixture of the traditional and the new.
But all of this comes at a cost. A conference like DiGRA receives 200-300 papers each year, which means that scholars need to do 600-900 reviews, roughly three per paper. Each reviewer can be expected to review 10 papers at the most; nobody gets paid to do this kind of work, so it is all done in spare time next to very full schedules. DiGRA therefore needs 60-90 reviewers each year, and at least 10-20 meta reviewers. These reviewers need to be scholars who know what they are talking about, which means preferably assistant, associate and full professors. In a new field where there have not been all that many hires, finding 60 reviewers is not easy, particularly since you cannot expect more than a fraction of the existing professors to participate at any given time. It would be easier if there were 500 professors to choose from, harder if there are 100. This means that the double-blind peer-review process itself becomes a huge bottleneck for new fields of research, even though it is at the same time absolutely vital for ensuring academic quality and integrity. A dilemma well worth discussing.
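As a back-of-the-envelope check on the numbers above, here is a minimal sketch. The 200-300 submissions, roughly three reviews per paper and the cap of ten reviews per reviewer come from the text; the ratio of one meta reviewer per six ordinary reviewers is my own rough assumption, chosen only to land near the 10-20 mentioned above.

```python
# Back-of-the-envelope reviewer load for a DiGRA-sized conference.
# From the text: 200-300 submissions, ~3 reviews per paper, at most
# 10 reviews per reviewer. The 1-in-6 meta-reviewer ratio is an
# assumption of mine, not a DiGRA rule.
import math

def reviewer_load(submissions, reviews_per_paper=3,
                  max_reviews_per_reviewer=10, reviewers_per_meta=6):
    reviews = submissions * reviews_per_paper
    reviewers = math.ceil(reviews / max_reviews_per_reviewer)
    meta_reviewers = math.ceil(reviewers / reviewers_per_meta)
    return reviews, reviewers, meta_reviewers

for submissions in (200, 300):
    reviews, reviewers, meta = reviewer_load(submissions)
    print(f"{submissions} submissions -> {reviews} reviews, "
          f"{reviewers} reviewers, ~{meta} meta reviewers")
```

With 200-300 submissions this reproduces the 600-900 reviews and 60-90 reviewers above, and it makes the bottleneck visible: reviewer demand scales directly with submissions, while the pool of qualified professors in a new field grows much more slowly.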