Intel Blog: The Right Mindset and Toolset for Testing

I have a two-part series (one, two) on testing posted on my Software Evangelist blog on the Intel Developer Zone. It is a long piece where I get back to the interesting question of how you test things, and to the fact that testing is not just the same as development. I call the posts "Mindset" and "Toolset".

Continue reading “Intel Blog: The Right Mindset and Toolset for Testing”

When is Redundancy Cheaper?

I find the subject of fault tolerance and resiliency in computers quite interesting. It is also very interesting to look into what kinds of faults actually do happen in the real world, and what impact they have. I recently found a couple of good sources on this. First of all, a paper from Super Computing 2012 by Fiala et al., called "Detection and Correction of Silent Data Corruption for Large-Scale High-Performance Computing" (ACM Digital Library). One of its references was to a 2011 talk by Al Geist, "What is the Monster in the Closet", which provided some more data on how common faults are.

Continue reading “When is Redundancy Cheaper?”

Why DO Computers Fail?

I just found and read an old text in the computer systems field, "Why Do Computers Fail and What Can Be Done About It?", written by Jim Gray at Tandem Computers in 1985. It is a really nice overview of the issues that Tandem had encountered in their customer base back in the early 1980s. The report is really a classic in the field, but I did not read it until now. Tandem was an early manufacturer of explicitly fault-tolerant, highly reliable, and highly available computers. In this technical report, Jim Gray describes the basic principles of fault tolerance, and what kinds of faults happen in the field and need to be tolerated.

Continue reading “Why DO Computers Fail?”