History of Software Bugs and Debuggers

A computer bug is a mistake made by a developer during the development of hardware or software. You will not find any hardware or software without bugs; uncovering them is just a matter of time. The majority of bugs occur due to oversights by developers.

History of the computer bug

The term 'bug' has been part of engineering jargon for many decades. It was originally used in hardware engineering to describe mechanical malfunctions or problems.

Problems with radar electronics during World War II were referred to as bugs (or glitches), and there is evidence that the usage of the term dates back much earlier. It appears in an 1878 letter from Edison to an associate:

It has been just so in all of my inventions. The first step is an intuition, and comes with a burst, then difficulties arise — this thing gives out and [it is] then that “Bugs” — as such little faults and difficulties are called — show themselves and months of intense watching, study and labor are requisite before commercial success or failure is certainly reached.

Source: Edison to Puskas, 13 November 1878, Edison papers, Edison National Laboratory, U.S. National Park Service, West Orange, N.J.; cited in Thomas P. Hughes, American Genesis: A History of the American Genius for Invention, Penguin Books, 1989, p. 75.

The term 'computer bug' is popularly credited to Grace Murray Hopper. During her work on the Mark II and Mark III machines, the Mark II ran into a problem; operators investigated and found a moth (an actual insect) trapped in a relay, which was causing the malfunction. This computer bug was removed and taped into the log book at 3:45 p.m. on September 9, 1947, with the note "First actual case of bug being found." Grace Murray Hopper is also known as the mother of COBOL.

The word went out that they had "debugged" the machine, and the term "debugging a computer program" was born.

Ancient methods of software debugging

At the beginning of the computer era, debugging was a hit-or-miss procedure for quite a few years. Early debugging efforts mostly centered on data dumps of the system, or used output devices such as printers and display lights to indicate when an error occurred. The programmer would then step through the code line by line until they could determine the location of the problem.

As computer programs moved past the punch-card stage, and as paper line printers and finally CRT terminals became available, debugging techniques improved along with them. Programs themselves were changing at the same time: from batch-oriented jobs that ran overnight or over successive days, they were becoming interactive, requiring input from the user at startup or even during the run.

Program output could now be seen as the program ran, rather than having to wait until the program had finished. This was considered by many to be a major leap forward in computer programming.

The next evolution in debugging came with the advent of command-line debuggers. These simple programs were an amazing step forward: the programmer no longer needed to guess what values the memory addresses in the program contained, because the debugger could dump the values at given memory locations.

This allowed a programmer working in assembler to look directly at the registers and at the memory blocks that held the program's local variables. It turned debugging from a hit-or-miss proposition into a reproducible process, and reproducibility is the first step toward making something a scientific or engineering discipline.
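In the syntax of a much later command-line debugger, GDB, this kind of raw inspection might look like the sketch below (./program is a placeholder name; the debuggers of that era used their own, far cruder commands):

    $ gdb ./program
    (gdb) start                   # run to the beginning of main and stop
    (gdb) info registers          # look directly at the CPU registers
    (gdb) x/8xw $sp               # dump eight words of raw memory at the stack pointer

Note that at this stage everything is still expressed in terms of raw addresses and registers; there are no variable names anywhere in sight.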

As software projects grew bigger, the techniques that worked well for small projects no longer worked once a program reached a certain size. Scalability was an issue even at the beginning of the software complexity curve.

Compiler vendors discovered that the information they gathered while parsing high-level languages such as C, FORTRAN, and COBOL could be kept in a separate file, called a symbol map, which mapped the variable names in the program to the actual memory addresses they would occupy at runtime. The ability to look up variable names and map them to memory addresses allowed the programmer to dump memory by name. These debuggers were called "symbolic debuggers".
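As a minimal modern sketch of the idea, assume a hypothetical C file demo.c with a global variable named total (this little program is reused in the breakpoint examples below), compiled and inspected with the GNU toolchain, which of course postdates the first symbolic debuggers:

    /* demo.c -- a tiny program used in the debugger sketches below */
    #include <stdio.h>

    int total = 0;

    int main(void) {
        for (int i = 1; i <= 10; i++) {
            total += i;        /* line 8: updated on every iteration */
        }
        printf("total = %d\n", total);
        return 0;
    }

    $ cc -g -o demo demo.c        # -g writes the symbol information alongside the code
    $ gdb ./demo
    (gdb) start                   # run to main and stop
    (gdb) info address total      # show the memory address the name 'total' maps to
    (gdb) print total             # dump the variable by name instead of by raw address

The -g option is what produces the symbol information: without it, the debugger sees only addresses; with it, names like total become usable.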

The next big thing in debugging was the ability to set breakpoints. The term means the ability to "break" into the code and stop program execution at a given point. The program remained loaded in memory, but it stopped running when certain critical areas of code were reached, allowing the programmer to dump the contents of variables (symbols) before the program crashed or before continuing execution.

Before breakpoints, the programmer had access to only two states: the initial state of the application before it ran and its final state. With breakpoints, programmers could inspect state whenever they wanted, although guessing where to set a breakpoint was still difficult (even now, in larger projects, it remains hard).
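Continuing the hypothetical demo.c sketch from above, a breakpoint session in GDB might look like this (line 8 is the loop body):

    $ gdb ./demo
    (gdb) break demo.c:8          # stop every time the loop body is reached
    (gdb) run
    (gdb) print total             # inspect intermediate state, not just the final result
    (gdb) continue                # resume until the breakpoint is hit again

Each stop exposes a state somewhere between the initial and final ones, which is exactly what was impossible before breakpoints existed.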

As software moved ahead faster, debugging kept improving as well. The ability to see the original lines of source code in order to set breakpoints was added, along with better ways to dump memory and watch for changes in it. Conditional breakpoints were also added to many debuggers.

A conditional breakpoint allowed you to attach a condition, such as a given variable becoming equal to zero, under which the program would stop as if you had set a breakpoint at that particular line (even now, not many programmers use conditional breakpoints).
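Using the same hypothetical demo.c, a conditional breakpoint (and a related watchpoint, which covers the "watch for changes in memory" idea above) might be set like this in GDB:

    (gdb) break demo.c:8 if i == 5    # stop at line 8 only on the fifth iteration
    (gdb) run
    (gdb) print total                 # value accumulated when the condition first holds
    (gdb) watch total                 # also stop whenever the memory behind 'total' changes
    (gdb) continue

The condition saves you from stopping and continuing by hand on every iteration until the interesting one comes around.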

Modern methods of software debugging

The next big thing in modern debugging was Turbo Pascal, introduced by Borland with an IDE (integrated development environment) in which you could edit, compile, link, and debug code in the same system. There was no need to run separate programs, load a special symbol table, or use special compiler options. Turbo Pascal signified the dawn of the modern age of debugging.

Next came the truly multi-threaded, multi-tasking UNIX operating systems, where using debuggers on multi-threaded code was very hard. Although Microsoft Windows applications were not truly multitasking at the beginning, they emulated the concept well. Earlier application debuggers were text based, but debugging GUI-based applications required additional thought. Existing debuggers did not support this: you had to flip back and forth between the application screen and the debugger screen, which was of little help when it was your own application that "painted" the screen. If the application was not running, the screen would be blank.
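As a small sketch of how a modern command-line debugger such as GDB exposes threads in a multi-threaded program (the thread number here is illustrative):

    (gdb) info threads            # list every thread and where each one is stopped
    (gdb) thread 2                # switch the debugger's focus to thread 2
    (gdb) backtrace               # show the call stack of the currently selected thread

Commands like print and break then operate in the context of whichever thread is selected, which is what makes multi-threaded debugging tractable at all.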

Widely used modern debuggers and IDEs include VC++ (visual), Eclipse (visual), GDB (command line), and DDD (a graphical front end for GDB and other command-line debuggers).

Thanks to these IDEs and debuggers, we are able to debug large projects. But there is still a lot of room for innovation in debugging, since programmers spend more than 50% of their time on it.

Software bugs, or errors, are so prevalent and so detrimental that they cost the U.S. economy an estimated $59.5 billion annually, or about 0.6 percent of the gross domestic product.

Source: U.S. Department of Commerce's National Institute of Standards and Technology, June 28, 2002.