30 October 2006

My first bug

I found my first bug on my first day at the Giant Fruit Company. It was a crash bug.

I mention this in interviews a lot, as I think it sounds cool. At the time, it made me think that I could get into this "testing thing", but there is a better lesson to be learned from how and why I found it.

As I mentioned, I started working at the Giant Fruit Company testing the porting of existing software onto their range of laptops. My first day, after introductions, I was given the task of testing Microsoft Word on the new laptop. For this kind of system, we had two guidelines. One was for testing the things that all applications were supposed to do on the Giant Fruit Company's operating system, in other words the GUI consistency. The second was for testing the behaviour of the particular application itself. In current jargon, they were test oracles. Altogether we spent ten hours on each application, five or six hours following the guidelines, the rest of the time on exploratory testing, where we just used the application to complete a task it would normally be used for.

My mentor left for a meeting, leaving me armed with the guidelines, a prototype laptop with correct installation, and some excitement at My Very First Testing Attempt.

Twenty minutes later I had crashed the laptop. I had a moment of panic, before remembering that in my new career this was actually a Good Thing. I carefully wrote down what I had done, then rebooted and checked whether I could do it again. I could! But what if I was doing it wrong?

My mentor returned from his meeting, and I launched into my preamble: I had got to this point in the first guideline, double-clicked on the menu bar, and the result wasn't what it was supposed to be.

"Oh yeah," he started. "That test is out of date, they got rid of that functionality a couple of releases back. Just skip it."

"But," I told him, anxiously. And showed him how I had crashed the laptop by carrying on double-clicking on the menu bar.

For each bug, we had a set regression path to follow, up through previous laptop incarnations until we found where the bug was introduced. This one had been with the range since its very start. It remained unknown until I, as a complete newbie, stumbled on it.

From this, I got two lessons:
One: fresh eyes and ignorance are good things
Two: when bored, break things


Okay, I'm cheating a bit. I broke it because I was confused, there was a bug in my test oracle, and repeating the test until it worked was the only thing I could think of doing. But the idea of carrying on doing unexpected things was valuable, and I've found a lot of bugs on that principle alone. It's something I do when I feel bored, when dutifully verifying that x variation of test dimension y becomes too dull. There are always bugs like that to find.

Sadly, a couple of months later I learned a third lesson from that bug:
Three: large companies aren't necessarily structured so that bugs get fixed

When I left the Giant Fruit Company, that bug was still unfixed. The test team had one point of contact outside the test organisation, somebody called the Quality Lead. It became obvious that the Quality Lead was being evaluated on the number of bugs open in his project domain in the database. By labelling a bug "Third party problem", he moved it out of his domain and made it somebody else's problem. And since our bugs were found with third party software, his rule of thumb was that they were third party problems - even though this one didn't occur outside this range of laptops.

Can you see where this is leading? We spent a large part of our time testing how third party applications behaved on our new laptop. Any bugs we found were deemed to be third party problems and never fixed, even when they clearly weren't. Morale was pretty low. Bug reports started getting abusive, people were fired for abusive bug reports, change was promised but never quite happened.

Most of the problem? We never even met the Quality Lead, never mind the people actually responsible for developing the new laptop. They were based on the US West Coast, we were in Ireland, and we had phone conferences with other testers but that was about it. This did actually begin to change before I left, but it was too little too late.

2 comments:

Anonymous said...

Well, that is the first thing I learned: clearly determine the scope of your tests. That way you won't waste valuable time testing things that are going to be labelled as outside your field. This works two ways. First, if something is tested under that denominator, it's within the scope and area of interest, which validates any claim that it belongs there. Second, it then has to be fixed if possible, in order for the tested software to comply with the standards for passing the tests you determined those denominators for. So it's all in the test setup, to prevent frustrations later on.

Fionna said...

Hi Quindana

You're right, although it is not always so easy to create a clearly defined scope. I can also think of reasons not to try to have a clearly defined scope, but that would be a whole entry in itself.

One of the most frustrating things in that job was that the organisation was structured leaving the testers with no real power. The person with the power to push for fixes was the Quality Lead, and as I said he seemed to have incentives not to do so.