In my review of the Xbox 360, I briefly mentioned a problem with the jump in graphics quality:
The character models have this weird shine to them, like they’re made from plastic.
As long as the video game Tim Duncan didn’t actually look like the real NBA star Tim Duncan, there was no pretending the two were the same. Now the graphics force our brains to compare game and reality. The digital Duncan’s skin is too rigid; the minute movements of hair and muscle that we subconsciously register are replaced by Silly Putty stasis.
I meant to post some follow-up to that, but didn’t get around to it. Now the issue is getting some attention, with recent columns by James Surowiecki at Slate and Clive Thompson at Wired (Thompson had a similar piece in Slate last year, though obviously not framed around the 360). And it should get a lot of attention: Just as the movie industry has yet to figure out the CGI revolution, the video game world is still working out how to deal with leaps in technology that are challenging developers and gamers alike.
When I showed my friend Madden 06 on the 360 last month, he recoiled at the plastic skin I mention above. The players looked creepy, he said; he couldn’t get over how unreal the ultra-realistic graphics were. He knew there was a name for this, and indeed there is: “the uncanny valley.” In his Slate essay, Thompson describes the phenomenon, a term coined by Japanese roboticist Masahiro Mori, this way:
When an android, such as R2-D2 or C-3PO, barely looks human, we cut it a lot of slack. It seems cute. We don’t care that it’s only 50 percent humanlike. But when a robot becomes 99 percent lifelike—so close that it’s almost real—we focus on the missing 1 percent. We notice the slightly slack skin, the absence of a truly human glitter in the eyes. The once-cute robot now looks like an animated corpse. Our warm feelings, which had been rising the more vivid the robot became, abruptly plunge downward. Mori called this plunge “the Uncanny Valley,” the paradoxical point at which a simulation of life becomes so good it’s bad.
Movies have been visiting the uncanny valley for more than a decade. The T-1000 in Terminator 2 wowed us in 1991, and I saw Jurassic Park four times in the theaters when it came out in 1993. But the technology got too good, too fast — and still wasn’t good enough. George Lucas thought he was revolutionizing cinema again by putting an all-CGI character in Star Wars Episode I. Instead, Jar Jar Binks and all he represented ruined Star Wars. Jar Jar is hated mostly because he’s a racist Barney and Friends reject, but his shortcomings as a CGI experiment were worse than the way the character acted. That is, Jar Jar could have been excused if he only looked real. But he didn’t. He looked flat, almost two-dimensional at times, and seemed superimposed on the screen. He lacked the mass, physics and details of a living thing. His CGI brethren weren’t any better. The robot and Gungan armies moved too much in sync to be real. Even the spaceships — super smooth and super clean, with none of the nicks and dust that we subconsciously register in actual things — seemed fake.
With the Xbox 360, video games have reached this point. Characters no longer look like polygons clunked together, so our brains want to start looking at them as representations of people. But our brains know that the Barbie skin and Dr. Doom faces have nothing to do with humans. This suggests that for all the capabilities of the next-gen systems, game developers might be facing a future of diminishing returns when it comes to graphics.
With each new set of systems, the expectations for graphics go up. Now prices and costs are going up to meet those expectations. But what if those expectations, at least where games are concerned, are impossible to meet? What if EA keeps throwing $20 million at its biggest games, only to see all that intense work result in graphics that weird people out?
One solution is to stop working so hard to create realistic people in video games. Concentrate on other things that games do well — or not so well — and refine those. I’d much rather play a video game with an original script than one with a zillion-dollar graphics engine. Or one that presents a unique visual style that’s cool but not realistic, like Psychonauts. The Pixar movies are great in part because we don’t spend half the movie thinking about how unreal the CGI is; they’re supposed to be cartoons, not reality. Or come up with a new gameplay element; this is the hardest task, but if your game is just a retread or copycat, the only way to compete is graphics. As Surowiecki points out, Grand Theft Auto became a blockbuster in part because the game design and expansive world were so meticulous and innovative that they drew attention away from the relatively unremarkable graphics. Or don’t worry about graphics at all; Guitar Hero and Katamari Damacy have minimal visuals, but both are great. Or imagine how vast and creative an old-school 2-D platformer like Castlevania: Symphony of the Night or a 2-D RPG like Phantasy Star II could be if recast, still in 2-D, on the 360.
For video games to back away from the valley precipice, game companies and players will have to make a bargain. Companies will have to agree to try new and different things, concentrate on areas other than graphics, or redefine what’s acceptable for graphics in a next-gen game. Gamers will have to agree not to punish companies for experimenting and presenting visuals that aren’t necessarily cutting edge.
You can see this bargain play out in the success or buzz of Shadow of the Colossus, the Nintendo DS, Dance Dance Revolution/Karaoke Revolution, Guitar Hero, Katamari, Psychonauts, and Indigo Prophecy (a clichéd, boring game, but at least an attempt at something different). Nintendo’s entire approach to the upcoming Revolution is based on this bargain; Microsoft has started to address it with Xbox Live Arcade. This doesn’t mean companies should stop making God of War, Resident Evil 4, and other graphical showcases. They just need to realize that those games are so good because of the whole package, not just the graphics, and that there will only be a handful of those titles per system. For most other games, they should aim lower on graphics from the start and focus on the other aspects of the experience.
It won’t be easy to change the graphics-centric view of video games. But the industry won’t be able to sustain itself — and gamers will start yawning or shuddering — if it tries to live only on the cutting edge in the next generation.
— December 16, 2005