In the last few years, the software industry has gotten a bad reputation for producing buggy, unreliable products. Many software projects run late and over budget. A few spectacular software-related failures have caught the media's attention, such as the crash of an Ariane rocket and the problems with the Denver Airport's baggage-handling system. There are currently many popular methodologies for improving the quality and reliability of software, including Design Patterns, Object Modeling, Extreme Programming, and others. They all have their good features, but there is no "silver bullet" for quality.
In my opinion, the biggest problem has been the reluctance of many companies to provide sufficient resources for debugging and testing software. For example, years ago I was part of a large team working on a huge project that was plagued with bugs. Project managers mandated "code coverage" testing similar to that used in avionics and other safety-critical systems. This sounded like a good idea, but under financial and time pressure, the project's leadership was unwilling to extend the deadline enough for the testing to be properly completed. (This may have been one of the reasons the project was eventually scrapped.) The moral of the story? You get what you pay for. Or worse yet, if you design an elaborate, feature-rich system but scrimp on quality, you get much less than what you pay for.
I'm personally not committed to any particular software ideology. Open source is an interesting trend that has produced a lot of good software, much of it much better than conventional wisdom would expect. On the other hand, I don't claim that open source is inherently better than proprietary systems, and I don't expect the latter to go away, either. Good programming methods are independent of these philosophical considerations. I can say what's worked for me personally, and I'd like to offer a few commonsense suggestions.
- Simplicity - Keep everything as simple as possible - specifications, feature lists, user interfaces, and the overall design. The final product will be easier to use and more reliable. One way to gauge complexity is through the project's specifications and other documentation. My rule of thumb is, if it's difficult to explain, it's too darned complicated.
- Component Structure - Divide projects into small, manageable pieces. This allows more realistic schedules, reduces debugging costs and makes it easier to measure progress. When I estimate a project, I try to list every task, broken down small enough so that each one should take a single developer a day or less to complete. This practice helps eliminate unpleasant surprises.
- Well-Documented Code - Keep classes and methods small, and choose names to indicate function. Use comments to explain algorithms and relationships between components. Many companies have standards that mandate the inclusion of comments of a certain style in particular places. Unfortunately, it's difficult to mandate that comments are useful. Comments should not state the obvious, but any functionality that's not obvious should definitely be explained.
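As a sketch of the distinction above, here is a hypothetical function (not from any particular project) where the comments explain the non-obvious edge case rather than restating what each line does:

```python
def normalize_scores(scores):
    """Scale a non-empty list of numbers into the 0-1 range."""
    lo, hi = min(scores), max(scores)
    if hi == lo:
        # Non-obvious case worth a comment: a flat list has no spread
        # to scale, so every element maps to the midpoint by convention
        # here, which also avoids a division by zero below.
        return [0.5] * len(scores)
    return [(s - lo) / (hi - lo) for s in scores]
```

A comment like `# subtract lo from s` on the last line would state the obvious; the comment on the flat-list branch earns its place because the behavior is a design decision a reader could not infer from the code alone.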
- Code Recycling - Reuse existing code whenever possible. The information contained in existing code is a valuable resource. Perhaps due to ego (or a desire for job security), many developers are tempted to "reinvent the wheel" with each software project. A better attitude is "enlightened laziness" - reusing existing code allows more time for creating software that is truly original. Also, the original coders may have already found problems and workarounds that would need to be rediscovered if the code were rewritten.
- Expect the Unexpected - No project can be specified completely. Make allowances for research and prototyping, and schedule time for unforeseen problems. Some project managers want to design every detail before writing any code, but creating a prototype - even if that code is thrown away - can often save much time and expense. Debug time is just as important. Another rule of thumb I use is that the time to debug a particular piece of code is about equal to the time it took to write it in the first place.
- Continuous Testing - Test code as it is written, integrate frequently and test further, and test the system as a whole. Reliable systems are built from well-tested components. When schedules are tight, it's tempting to hack away and save testing for the end; after all, every change requires additional testing. But bugs are much easier to find when they're confined to a small piece of software. Inevitably, there will be plenty of integration, installation, and timing problems anyway. These will become totally unmanageable if the simple problems aren't discovered and fixed early.
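To illustrate "test code as it is written," here is a minimal sketch (the function and its test are hypothetical, not from the article's project): the test lives alongside the component and exercises both the normal path and a failure path, so a bug is caught while it is still confined to this one function.

```python
def parse_version(text):
    """Parse a 'major.minor.patch' string into a tuple of ints."""
    parts = text.split(".")
    if len(parts) != 3 or not all(p.isdigit() for p in parts):
        raise ValueError(f"bad version string: {text!r}")
    return tuple(int(p) for p in parts)

def test_parse_version():
    # Written at the same time as the function, not saved for the end.
    assert parse_version("1.2.3") == (1, 2, 3)
    assert parse_version("0.10.2") == (0, 10, 2)
    try:
        parse_version("1.2")          # malformed input must fail loudly
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for '1.2'")
```

Running such tests on every change keeps the "plenty of integration problems" mentioned above from piling on top of simple, local bugs.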
- Diagnostics Facilities - Implement trace code and other diagnostics to keep track of what the system is doing. Don't optimize code too soon. I view trace code as a necessary evil; the problem is that it changes the timing of the system. If done improperly, the system's diagnostics can consume excessive resources and cause problems of their own. Still, there's no substitute for a log of what code is running. Debuggers can only do so much; many systems must be run at (or near) full speed for their problems to appear.
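One common way to keep trace code from consuming excessive resources is to make it cheap when disabled. A minimal sketch using Python's standard `logging` module (the `process` function and the logger name are illustrative assumptions):

```python
import logging

log = logging.getLogger("worker")   # hypothetical component name

def process(item):
    # %-style arguments are deferred: the message string is only
    # built if DEBUG is enabled, so disabled trace stays cheap and
    # perturbs the system's timing as little as possible.
    log.debug("processing item %r", item)
    result = item * 2               # placeholder for real work
    log.debug("finished item %r -> %r", item, result)
    return result
```

In production the logger can run at `WARNING` level, leaving the trace calls as near no-ops; when a timing-sensitive problem appears, enabling `DEBUG` yields the log of what code is running without attaching a debugger.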
This page Copyright 2002, Nakota Software, Inc.