Inside the Reviewers Studio

By Gary Hodges in Features, Misc Nonsense
Friday, December 5, 2008 at 3:19 pm
Perusing Mitch Krpata's always-interesting blog Insult Swordfighting, I came across a questionnaire he pulled from Shawn Elliot about, well, reviewing games - which Krpata then took the considerable time to fill out. I can say it was considerable because I just finished doing it myself, and it's a bear.

Still, I think it has value. It forces the respondent - assuming he's actually putting some effort into it - to seriously think about his job and how he does it. I'd love it if every reviewer filled it out, even if they didn't ultimately share their answers with readers (though I'd love to read everyone's)... I think it's a worthwhile exercise.

Anyway, I did it too (you can read it after the jump). And even if you aren't interested in my answers, you might be interested in the questions as readers, thinking about what you'd hope a reviewer would say.

Review Scores


Question 1: How much is on our minds before we begin playing any given game for review purposes? Will we imagine a range of probable scores that a heavily marketed, highly budgeted, and hugely anticipated game will get? What when the game is branded "budget" or is the work of a lesser-known, less-storied studio? If so, how closely have actual scores correlated with our assumptions?

I can easily say I've never approached an unplayed game with a score in mind. But being honest, I have to admit what I expect of a game might be affected by whether it's "big-budgeted" or "heavily marketed" or "budget" or "indie" or whatever - but maybe not in the way you'd think. 

Personally, I feel a tendency to be more critical of hugely anticipated games from big companies, because with their glut of talent and resources I expect a certain level of polish in their final product. When it comes to smaller titles from smaller companies (or even an impressively innovative title from a bigger company), I tend to want to recognize what they've accomplished rather than obsess over frayed edges that might've been attended to, had they a few more people/months/million dollars. Simply put: the baseline of adequacy is higher for games with a pedigree.

While some might see that as perfectly reasonable, I see it as a bias that could easily lead me astray: it colors my attitude about games in a way that's inappropriate for a useful, credible critic. So I often give myself an intellectual Pepsi Challenge... for example: Would I be more forgiving of Mirror's Edge's flaws if it came out of nowhere from a company I'd never heard of? Would I be as dazzled by Braid had it been machined out by EA? I think about this.

Sometimes I wish I could just get a game on an unmarked DVD sitting in a Ziploc bag, never having heard or seen a lick of it anywhere before and with no credit or hint in the opening screens about who made it. Maybe we'd all benefit from that.


Question 2: Ought reviewers settle on a score before, during, or after writing a review? How consistent are our practices with our prescriptions? Have we, for instance, revised a score after writing our reviews, even though we advocate against it, and if so, why?

The act of picking some symbol to represent my feelings about a game doesn't come naturally to me, so I usually come up with a score after the review is written and only after much hand-wringing.

Sometimes, though, when I'm a little blocked, I'll skip around to feel productive rather than just stare at the blank screen, filling in things I usually do last like the headline, the game's publisher and ESRB rating, and yes, even the score. Sometimes that score ends up being the same one the readers read, other times it'll change once, twice, or a dozen times before I submit it.

It really depends on how clear an opinion I have about the game, and when that opinion formed. If I sit down to write a review and already have a strong, clear feeling about the experience, then it's easy to pick a tone for the text and a number for the score and stick with it. But sometimes (usually with games that aren't bad enough to be bad yet not great enough to be great) the process of writing a review actually helps me find my opinion of a game. In those cases, a review's tone or score might vary quite a bit as I'm working through drafts and my opinion finally congeals.

But ultimately, I refuse to be a slave of a number (or in our case, a Blue Pig Ganon). If I finish a review and feel whatever score I came up with prematurely doesn't fit any more, the number gets changed. To me, the words are law; the score is mere interpretation of the words.


Question 3: When possible, do we look at the scores that other critics give to the games that we're reviewing, as we review them? If so, are groupthink or iconoclasty potential problems?

While I don't sequester myself, I do try and avoid reading other reviews at first. After a while, though - usually after I've played a game enough to have my own fully-formed opinion of it - I sometimes will look at other reviewers' takes in the spirit of dialogue: I want to see how my feelings compare to theirs. It's never changed my views, though; if anything it only solidifies my own.

As for whether there's a risk of groupthink or iconoclasty in reading other people's reviews: absolutely. But there's an equal risk in shutting yourself in a log cabin and tapping out manifestos about games irrespective of the outside world (a common mistake young writers make is making a point of not reading other people's work for fear it will contaminate their own). As with all things, there's a middle path.


Question 4: Oftentimes we will have repeatedly played and/or previewed games in development prior to reviewing them. Does this familiarity with a particular game's developmental process influence the scores that we assign to the final product in the way that a professor will take into consideration her students' limitations and proven potential when she evaluates papers at the end of the semester?

No. I find the story of a game's journey to shelves interesting, but it doesn't affect my opinion of a game one way or the other.


Question 5: Review writing carries real consequence, especially among members of the enthusiast press. Once-warm PR people and game producers can become cold upon our publication of undesirable review scores, diminishing or eliminating our ability to secure subsequent interviews and access. Postmortem discussions and exclusive looks at the publisher and/or developer's forthcoming products are less likely. Conversely, a few publishers will permit us to post reviews before competitors, provided our review scores are favorable. Do such pressures produce a subliminal background or even enter our thoughts as we write reviews and assign scores?

When I first started with Village Voice Media about three years ago, I resented the fact that we could barely get PR people and publishers to give us the time of day. Now - a smidgen older and wiser - I'm unequivocally thankful we've rarely had to deal with the poisonous horseshit described in the question above, and - thus far, knock on wood - we've never been obligated to.


Question 6: Is grade inflation an ongoing problem?

This is a sticky question as it quickly degrades into, essentially, someone arguing "yes" mainly because they didn't like a heavily-hyped major release as much as the mainstream enthusiast press did.

But yes.


Question 7: Do scores determine our tone? Can a "3" encourage us to explain an aspect of a game in clearly negative terms where our attitude is actually less decided? Example: Game X's camera obscures the action, combat is irritatingly difficult, and "save" stations are few and far between. In our reviews, is Game X's plot, which we're still thinking through, more likely to become miserable than plain?

Shouldn't tone determine the score, not vice versa?


Question 8: Do scores encourage our readers to conduct a sort of text-to-number calculus where the two obviously negative statements in an otherwise positive-sounding review necessarily translate into every point deducted from the "10" that the game didn't get? Does this make reviews with high marks more likely to overlook fault, and reviews with low marks less likely to celebrate accomplishment?

Well, as far as the first part goes: obviously, yes. Look at any messageboard after a major review is posted and you'll find gamers saying some variation of "the text makes it sound like an X but the score is a Y." Intended or not, unwarranted or not, a fool's errand or not: readers do try to figure out the algorithm by which text goes in and a number comes out. That being said, there's no point in giving a score if there's a discrepancy between it and the text (unless of course you're just being a stinker trying to make a point about the silliness of scores).

I'm not sure how other reviewers do their job - my impression is that it's a highly personal process - but my method is for the totality of the text to reflect my general take on a game, and the score to reflect the text. If I'm doing my job, there shouldn't be any discrepancy among my opinion, my copy, and my score.


Question 9: Which is more important to us, our scores or our copy? If the latter, have our responses revealed any inconsistencies between our attitudes and actions? Are we still convinced of the importance and power of scores?

My copy is far more important to me.

When I worked at Subway as a teenager, we wore plastic gloves when making sandwiches. When in the back, though - slicing meats, unpacking cheese, chopping onions - we never wore gloves, because we didn't have to (Arizona law didn't require it). As management explained to us: "We wear gloves out front as a courtesy to our customers."

I guess that's how I view scores: something we do as a courtesy to readers, because it's something - like gloves on a food handler - you want to see. But counterintuitively, giving an audience what it wants is sometimes a dead end. Sometimes, in fact, you need to starve an audience of what it wants and teach them to want something else.

Another topic for another time.



Have we ever submitted review scores to publishers prior to their publication? If so, why?

No, never. Can't imagine why I would.


Have we ever submitted review copy to publishers prior to its publication? If so, why?

No, never. Can't imagine why I would.


Have PR people suggested that specific critics review specific games? Have we complied with their suggestions?

That indeed happens, but - as far as I know - never with Chris or Anton or me.

A quick note: I would've liked to see more questions about pressure from editors in this section - for example, have you ever been asked to change a score by your editor, or have you ever had an angle or tone to take in a review given to you by an editor? - gwh


Reviews vs. Criticism

Question 1: What is the object of a review? What are the review writer's obligations?

Good lord, this is a rough question. I'm not going to drill down into it as deep as I could, because I want to actually finish this questionnaire, you know, today.

This is a contentious and, I think, ultimately personal topic. Here's my take:

There are two kinds of reviews: one is essentially a product review, describing what's good, what's bad, and ostensibly whether a reader should bother with it - i.e., a buyer's guide critique. The other is criticism, in the same vein as literary or film criticism: an appraisal of a game on a more esoteric level, be it the experience of playing it, its artistic merit, relevance, cultural context, or whatever else.

I think regardless of which sort of review the reviewer writes, he or she only has two inviolable obligations: 

1) Be truthful.

2) Communicate your ideas.


Question 2: If the purpose of a review is to suggest to consumers how they should spend their time and money, why do we avoid less-granular grading scales such as Buy, Try, or Avoid? Example: Giant Bomb founder and former Gamespot editorial director Jeff Gerstmann told MTV's Multiplayer blog that "'How can I save people money today?' is basically the kind of mentality that I tackle this stuff with." Under Gerstmann's directorship, Gamespot reviewed games on a hundred-point scale. Is a 9.6 different than a 9.7 when the wisdom of a purchase is what the reviewer wants to communicate?

It's a good question, and goes back to what I feel is the unnaturalness of review scores. As I've said before: if you're at a bar with a friend and he asks you what you thought of Castlevania: Order of Ecclesia, you wouldn't say "Well, I'd probably give it an 8.7... no wait, an 8.6." Or at least I hope you wouldn't. People don't think like that, and if they do it's only because we've taught them to. In the wild, people are "less-granular" and view games as bad, okay, or awesome.

And while I'll go along with a 5- or 10-point scale in the name of convention, I would flat out refuse to use a 100-point scale. Arguments over what makes a game a 79 rather than an 81 are either insane or ridiculous, take your pick.


Question 3: Actual sales rarely correlate with review scores in cases where games are not also heavily hyped and marketed. Increasingly, gamers pre-order games prior to the publication of reviews. Interactive demos allow our audiences to decide for themselves whether or not a game will be worth their dollars. In addition, word of mouth and message board discussions inform our potential audiences' purchasing decisions with an intimacy and directness that we cannot provide. Finally, review aggregation sites such as Metacritic mute the bias of individual reviewers and provide a bigger picture. Do these circumstances suggest that our self-perception is, well, delusional - a throwback to a time when magazines and websites were gaming's gatekeepers? If our audiences believe this, even if we do not, what are they really reading for?

Well firstly, I don't think affecting the success of a game (in either direction) should be a credible reviewer's ambition. I don't see my job as deciding how readers spend their own money; I see my job as giving readers the information to make that decision for themselves.

Because of all the factors named above, I think only two sources of reviews will carry more importance than the vast majority who will find themselves merely contributing to aggregates: 

1) The Big (IGN, Gamespot, 1up, Gametrailers, etc.), and

2) The Respected (people or outlets who may not hail from monolithic mainstream fixtures but have nevertheless earned a reputation for consistent, credible, thoughtful criticism).

The example I'd give for each when it comes to film criticism would be Entertainment Weekly versus Roger Ebert.


Question 4: Can criticism (concerned with telling our audiences what they're spending time and/or money playing as opposed to whether or not a game is worth spending time and/or money to play) coexist with reviews? Is a competent review also a critique -- as is so often the case where lit, movies, and music are concerned -- or should we separate the two?

It can, but to me it's a matter of the writer deciding on proportions. I find criticism more enjoyable to write (and read) than reviews, but not as many games bring enough to the table to really inspire or elicit thoughtful criticism from me (or, recognizing that perhaps that's my own failing: I'm not a skilled enough writer to always elevate myself to critic rather than mere reviewer).

My tendency is to be single-minded about execution... so personally, I don't see much value in trying to accomplish both an excellent critique and an excellent review at once in the same piece. I'd rather make a clean effort at doing one very well and not worry much about the other.


Question 5: What can (or should) such criticism take into account?

This may sound slippery, but I think the game itself informs that - that is, the direction your criticism will take. But maybe I don't understand the question.
