> > I am not arguing that all software should be held to the
> > same standard wrt any given metric. For example, I expect
> > medical applications to be more thoroughly tested than,
> > say, a video game.
>
> Actually, in practice, it would surprise me if this were
> generally true. At least console video games get very
> hard testing for their domain; the general rule for
> allowing a release used to be 100 hours of testing without
> finding any flaw, *after* all flaws found in previous
> testing had been fixed. I suspect that medical software
> that isn't used in life-and-death situations sees
> significantly less quality assurance.

Of course, software for game consoles has several things going for it that make it easier to test well (and make thorough testing more vital as well):

1) It's a closed environment: there's no myriad of different hardware/software combinations it has to run on top of (and concurrently with).

2) It's distributed on a medium that makes updating it to fix bugs impossible, or nearly so. ROM cartridges and CDs/DVDs need to be physically replaced for every customer; you can't just send them a patch file as a download (modern consoles with embedded hard disks make this somewhat possible, but they're a very recent development).
Medical software, by contrast (at least most of it), runs on PCs, each of which potentially has different hardware and other software installed (including different operating system patch levels). At the same time, sending users a patch via email or a download link is easy and cheap.
It is therefore economically more important for the manufacturer, despite the seemingly more important problem domain in medicine, to get game console software correct out of the box than to get medical software correct (as long as no patient dies, of course, in which case the liability claims can run into astronomical figures).