Wednesday, April 20, 2011

The Application of Six Sigma to Software Production - Part 1

I am a firm believer in continuous process improvement.  My first computer testing position was with the General Electric Process Control Division (1964; later Honeywell), and though we had no formal improvement process, when something failed we usually performed root-cause analysis (RCA) to discover what mistakes we had made so that we could avoid making those same mistakes in the future.

In later years (c. 1998?), I was again working for GE and had the opportunity to learn about and apply Six Sigma continuous process improvement techniques to a software development environment.  I saw it work.

I hope to publish a series of posts that will describe the Six Sigma process as I used it, the arguments against using it for improvement of the software development process, the myths concerning Six Sigma circulating in the software development community, and, perhaps, an occasional rant about the failure to apply engineering discipline to software development.

Here is my RCA for why software has such a rotten quality reputation - and an economically justified solution:

You can bet that the cause of mistakes resides in the culture in which the mistakes were made. Cultures are generally created by the founding fathers and propagated by successive generations; in businesses, by generations of management. Cultures have a lot of momentum; see NASA, for example. Is saying that we can have a better culture tantamount to saying that our current culture is not good enough? Is it wrong? Is it not satisfying?

This is enough to test our very belief systems.

In the most generic sense, I suppose the problem is that mistakes cost money, and if we can avoid the mistakes, we can save money.  This is a problem we can address.

One method of addressing this problem is the Six Sigma process.

Tuesday, April 19, 2011

Qualitative Analysis and Software Testing Instincts

I think there is value in subjective, qualitative software analysis.  I think that is a guiding principle of Exploratory Software Testing (ET): "Should I look here or over there?"  Our well-trained instincts guide us.  So what trains us, and what are those instincts?  Ideas worth looking into.

I rely on the "One Roach Conjecture":  "If you see one roach . . ."

I frequently use this principle while performing ET.  For example, I look to see if there is a simple parsing problem; if there is, then I think, "Ah HA! These programmers don't know how to parse.  I wonder where else parsing is critical to the function?"  (Examples of simple parsing test cases: 0.0.0, --1, -0.  I found that Acrobat Reader had a lot of trouble with these. Ergo . . .)
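As a sketch of this probing style, the snippet below feeds edge-case strings to a parser and reports which ones it mishandles.  Python's built-in `float` stands in for the application's own parser, and the extra cases beyond the three named above are my additions:

```python
# A minimal sketch of the "one roach" probe: feed a parser a batch of
# edge-case numeric strings and record which are accepted and which
# are rejected.  float() is a stand-in for the parser under test.

EDGE_CASES = ["0.0.0", "--1", "-0", "1e", ".", "+-1"]

def probe(parse, cases):
    """Return {case: parsed-value-or-rejection-message} for each input."""
    results = {}
    for case in cases:
        try:
            results[case] = parse(case)
        except ValueError as err:
            results[case] = f"rejected: {err}"
    return results

if __name__ == "__main__":
    for case, outcome in probe(float, EDGE_CASES).items():
        print(f"{case!r:10} -> {outcome}")
```

A single mishandled case (one roach) is the cue to go looking for the rest of the nest: every other place that same parsing code is critical.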

The subjective led to the objective, so I think exploring subjective analysis has some bearing on our software testing craft.

(A slightly different version of this post appeared Oct 27, 2010 in the "Software-Testing" Yahoo group.)

Monday, April 18, 2011

Microsoft(tm) Calculator Challenge

Here is a challenge for those of you with some time on your hands (it has happened to me); it illustrates an interesting testing technique.

Background

I have been a nemesis to the calculator people at Microsoft.  One time in the late '90s, I had some time on my hands (See?  I told you.) while I was working.  I was supposed to be testing, but the system under test was not available (probably never happened to you), so I was looking for something to test.

My desktop at work was a PC running Windows NT and there it was: Windows Calculator.

So I began trying things.  It was definitely broken.  Mostly precision issues.  For a complete list, see my Ph.D. dissertation, "Appendix A - Calculator Anomalies."

Microsoft "fixed" most of these problems by adding "infinite" precision.  You can try this with the following string (without quotes) pasted into the scientific view (ALT-2) of MS Calculator: "1x9999s".  You can click on "Continue" as many times as you want. 

So I took up that challenge myself, since what I considered a really stupid fix made me somewhat unhappy with Microsoft.  I had developed a random test case generator; it generated a very long string that, pasted into Calculator as above, produced a catastrophic failure.  I spent some time reducing that test case and found that the following string (again, without quotes) caused the same symptom:

"(((((((((((((((0=)))))))))))))))"

During the test string reduction, I also discovered the following string that produced an interesting result:

"(0=)(2+2=)"

The problems with these test cases persisted in subsequent versions of Windows including 2000, XP, and Vista.  If you have one of those operating systems, try those test cases.

However, if you have Windows 7, as I now have, these cases no longer "fail."  Microsoft has once again "fixed" the problem by terminating the operation once the "=" sign is entered.  Thus:

"(((((((((((((((0="

is a complete and valid calculator operation.

The Challenge

I am incensed!  So you know what I did?  I removed the "=" from my random test case generator and generated another long string, and guess what?  It still fails catastrophically in Windows 7.  (I haven't tried it with other Windows versions, but I'll bet it fails.)

The original test case is 17,475 characters long.  As before, I have reduced it to a case that is only 376 characters long, as follows:

(1+(1+(1/(1/((1+((1+1)+1))+(1+1/((1+(1+(1+(1+(1+(1+(1+(1+(1+((1/(1+(1/(1/((1+(1+(1/((1+(1/(1)+(1/(1+(1+(((1))/1))/1))))/1))))))/1)))+(1+(1+(1+(1/(1+(1)/1))/1))+1)/1)/1)/1))+1))+1)/1)/1+(1/1)))))/1))/1)/1+(1+(1+((1/(1+((1/(1/(1/(1+(1/(1/(1+(1+((1+1)+1)/1)+(1+(1/(1+(1/(1+(1+1+1)+1)/1))/1)+1)))+1/(1/(1)/(1/((1)/1)/1)/1)+1)))/1)/1)/1)))/1)/1)+(1/(1/(1+((1+(1)/1)/1))))/1)/1)/1)/1
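For illustration, here is a hypothetical sketch of a random keystroke generator in the spirit of the one described; the alphabet, function name, and use of a seed are my assumptions, since the original generator is not published here:

```python
import random

# A hypothetical sketch of a random calculator-input generator.  It
# emits long keystroke strings over the alphabet visible in the
# reduced 376-character case above (digits, +, /, parentheses).

ALPHABET = "01()+/"

def random_keystrokes(length, seed=None):
    """Generate a random calculator input string of the given length."""
    rng = random.Random(seed)
    return "".join(rng.choice(ALPHABET) for _ in range(length))

if __name__ == "__main__":
    sample = random_keystrokes(17475, seed=42)
    print(len(sample))  # same length as the original failing case
```

Strings like this are pasted into Calculator in one shot; the generator's job is only to produce syntactically plausible chaos, not valid arithmetic.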

So the challenge is this:  What is the minimum length string that will cause this failure?

An interesting note: If you remove that last "/1" from the test case, it will not fail.

Hint:  I save the string in MS Notepad, make a change reducing the length, copy it, and paste it into Calculator.  If it fails, I save it under a new name; if not, I restore the previously failed version.  The reduced case you see above is TC18.txt.  Be sure there are no line breaks in the string if you copy it from here.
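The save/restore loop in that hint can be automated.  Below is a sketch of a greedy chunk-deletion reducer (in the spirit of delta debugging); the `fails` predicate is a hypothetical stand-in for "paste the string into Calculator and watch for the catastrophic failure," which cannot be scripted here, so the demo uses a trivial substring oracle just to show the loop running:

```python
# Automates the manual reduction: try deleting a chunk; if the shorter
# string still fails, keep it (save under a new name); otherwise put
# the chunk back (restore the previous version) and move on.

def reduce_case(case, fails):
    """Greedily delete chunks, halving chunk size, keeping failures."""
    chunk = len(case) // 2
    while chunk >= 1:
        i = 0
        while i < len(case):
            candidate = case[:i] + case[i + chunk:]
            if fails(candidate):
                case = candidate      # shorter case still fails: keep it
            else:
                i += chunk            # restore; try the next chunk
        chunk //= 2
    return case

if __name__ == "__main__":
    demo_fails = lambda s: "0=" in s  # stand-in failure oracle
    print(reduce_case("(((((((((((((((0=)))))))))))))))", demo_fails))
    # prints "0=" -- the shortest string the demo oracle still flags
```

Against the real Calculator oracle, each `fails` call is a manual paste, so the human-in-the-loop version in the hint is the practical one; the code just makes the bookkeeping explicit.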

I am also interested in intermediate results, different results, different symptoms, and failing results with a different OS or application.

Getting Started - - - Again

I once started a Blogspot blog before Google took it over; I didn't take advantage of the account conversion offer, so I had to create this new account.

Though my primary thrust here is to relate to software testing, I do reserve the right to comment on anything that comes to mind.

I have a particular interest in exploratory software testing, test automation, and static code review, and I plan to publish here tools that I have developed over the years.