3D Graphics may NEVER achieve Human Realism? - The Masahiro Mori Uncanny Valley
An interesting postulate, although the
author overlooks the BIG difference in quality between pre-rendered
cut-scenes and the in-game engine. One day we'll have pre-rendered quality
in the game engine itself, and THAT will be the last 1%.
The screwiest part of this phenomenon is that game designers pride themselves
on the quality of their sepulchral human characters. It's part of the malaise that
currently affects game design, in which too many designers assume that crisper 3-D
graphics will make a game better. That may be true when it comes to scenery, explosions,
or fog. But with human faces and bodies, we're harder to fool. Neuroscientists
argue that our brains have evolved specific mechanisms for face recognition, because
being able to recognize something "wrong" in someone else's face has long been crucial
to survival. If that's true, then game designers may never be able to capture that
last 1 percent of realism. The more they plug away at it—the more high-resolution
their human characters become—the deeper they'll trudge into the Uncanny
Valley.
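Mori's valley is just a non-monotonic curve: viewer affinity rises with human likeness, plunges when a character gets close to real but not close enough, and only recovers at the far end. Here's a minimal Python sketch of that shape. The Gaussian dip and every parameter in it (center 0.85, width 0.05, depth 1.2) are invented purely for illustration; they don't come from Mori's paper.

```python
import numpy as np
import matplotlib.pyplot as plt

def affinity(likeness):
    """Hypothetical viewer affinity as a function of human likeness.

    Affinity rises roughly linearly with likeness, then a Gaussian
    dip near (but before) full realism models the uncanny valley.
    All constants are made-up illustrative values.
    """
    return likeness - 1.2 * np.exp(-((likeness - 0.85) ** 2) / (2 * 0.05 ** 2))

x = np.linspace(0.0, 1.0, 500)
plt.plot(x, affinity(x))
plt.xlabel("human likeness (0 = abstract, 1 = indistinguishable)")
plt.ylabel("viewer affinity")
plt.title("Sketch of Mori's uncanny valley (illustrative only)")
plt.show()
```

Run it and you see the argument in one picture: under this toy model a near-real character at 85% likeness scores far worse (about -0.35) than a stylized one at 40% likeness (about 0.4), which is exactly why piling on resolution can make things creepier, not better.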