Tuesday, June 17, 2003

Official Ratings Conference
In Sydney, 21-24 September, there is a conference discussing ratings of film and literature. The interesting part for me is that the keynotes and special papers include topics such as: "Effects of Playing Violent Video Games", "Ratings, Content, and Regulation: A View from the U.S. Video Game Industry", "The Diverse Worlds Project: Narrative, Style, Characters and Physical World in Popular Computer and Video Games", "Cross Platform Labelling and Filtering - Future National and International Challenges", "Are Computer Games Good for Young People?" and "What adaptations must be implemented so that a 'traditional' rating system (such as the ESRB's) can continue to offer accurate, consistent, and reliable ratings for online, wireless, and future content applications?"

I think it is great that these topics are being discussed. But where is the overlap between the different arenas where these things are discussed? I don't recognize any of the topics from academic papers, and I don't see any of the discussions from recent conferences on digital culture, on games, or on other academic discussions concerned with digital content online and offline. Perhaps it's just me, but I don't really recognize any of the names either.

In Norway, "serious" media researchers don't want to get involved in the "media is dangerous for children" debate, despite the fact that there is a lot of money available for funding if you wish to explore this field. The result is that we get totally separate spheres: in one, people discuss effect and influence; in the other, people discuss style, form and content. Judging from the list of names and topics for this conference, the same happens internationally.

Effect and influence is a very complicated area to address, and it is impossible to come to any real conclusion: yes or no. This has been used as an excuse to stay away from the rather rigid demand for answers which the effect-directed programs imply. But isn't it equally complicated to get a real answer within the field where we work? Isn't what we do to explore trends and indicate options for future development, register preferences and analyse anomalies? There seems to be a cultural divide within academia which makes one set of questions exclude another set, and this in itself is interesting - perhaps even tantalizing.
