Monday, May 16, 2011

Common Testing Pitfalls

Poor estimation: Developers underestimate the effort and resources required for testing. Consequently, they miss deadlines or deliver a partially tested system to the client.

Untestable requirements: Describing requirements ambiguously renders them impossible or difficult to test.

Insufficient test coverage: Too few test cases leave parts of the system's functionality untested.

Inadequate test data: The test data fails to cover the range of all possible data values—that is, it omits boundary values.
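
As a minimal sketch of what "covering the boundaries" looks like in practice, the pytest example below uses a hypothetical validate_quantity rule (quantities 1 to 100 are valid); the function and its limits are assumptions made purely for illustration.

def validate_quantity(qty: int) -> bool:
    """Hypothetical rule: an order quantity must be between 1 and 100."""
    return 1 <= qty <= 100


import pytest

# Boundary values (0, 1, 100, 101) are tested explicitly,
# not just a "typical" value such as 50.
@pytest.mark.parametrize(
    "qty, expected",
    [
        (0, False),    # just below the lower boundary
        (1, True),     # lower boundary
        (50, True),    # typical value
        (100, True),   # upper boundary
        (101, False),  # just above the upper boundary
    ],
)
def test_quantity_boundaries(qty, expected):
    assert validate_quantity(qty) is expected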

False assumptions: Developers sometimes make claims about the system based on assumptions about the underlying hardware or software. Watch out for statements that begin “the system should” rather than “the system [actually] does.”

Testing too late: Testing too late during the development process leaves little time to maneuver when tests find major defects.

“Stress-easy” testing: When testing does not place the system under sufficiently high levels of stress, it fails to investigate system breakpoints, which therefore remain unknown.
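
One way to make the breakpoint a measured number rather than a guess is to ramp up load until failures appear. The sketch below assumes a hypothetical handle_request() entry point standing in for the real system call; everything else is plain Python.

from concurrent.futures import ThreadPoolExecutor


def handle_request(i: int) -> bool:
    """Placeholder for the real system call under test."""
    return True


def run_load(concurrency: int, requests: int = 1000) -> float:
    """Return the failure rate observed at the given concurrency level."""
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        results = list(pool.map(handle_request, range(requests)))
    return results.count(False) / len(results)


if __name__ == "__main__":
    # Keep increasing the load; the breakpoint is the first level
    # at which the failure rate becomes unacceptable.
    for level in (10, 50, 100, 200, 400):
        print(level, run_load(level))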

Environmental mismatch: Testing the system in an environment that differs from the one in which it will be installed reveals little about how it will behave in the real world. Such mismatched tests make little or no attempt to replicate the mix of peripherals, hardware, and applications present in the installation environment.

Ignoring exceptions: Developers sometimes erroneously slant testing toward normal or regular cases. Such testing often ignores system exceptions, leading to a system that works most of the time, but not all the time.
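
The exception path deserves its own test cases, not just the happy path. Below is a minimal pytest sketch built around a hypothetical charge_card() call and PaymentError exception, both invented here for illustration.

import pytest


class PaymentError(Exception):
    pass


def charge_card(amount: float, gateway_up: bool = True) -> str:
    """Hypothetical payment call used only to illustrate the exception path."""
    if not gateway_up:
        raise PaymentError("gateway unreachable")
    return "charged"


def test_normal_case():
    # The "regular" case most test suites already cover.
    assert charge_card(10.0) == "charged"


def test_gateway_down_is_reported():
    # The exception path that is often forgotten: the failure must be
    # raised and handled deliberately, not silently swallowed.
    with pytest.raises(PaymentError):
        charge_card(10.0, gateway_up=False)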

Configuration mismanagement: In some cases, a software release contains components that have not been tested at all or were not tested with the released versions of the other components. Developers can't ensure that such components will work as intended.

Testing overkill: Over-testing relatively risk-free areas of the system diverts precious resources from its higher-risk (and sometimes harder-to-test) areas.

No contingency planning: There is no contingency in the test plan to deal with significant defects discovered during testing.

Non-independent testing: When the development team carries out testing, it can lack the objectivity of an independent testing team.


Source: Article “Testing E-Commerce Systems: A Practical Guide” by Wing Lam in IT Pro magazine (March/April 2001).
