Tuesday, May 24, 2011

Performance & Security Testing Checklist

Creating checklists for performance and security testing is extremely important. A checklist like this one helps teams define performance and security requirements more precisely; in the absence of properly defined requirements, teams can spend a great deal of time on things that probably do not matter much.

LOAD

  • Many users requesting a certain page at the same time or using the site simultaneously.
  • Increase the number of users and keep the data constant.
  • Does the home page load quickly, say within 8 seconds? (A minimal load-check sketch appears after this list.)
  • Is load time appropriate to content, even on a slow dial-in connection?
  • Can the site sustain long periods of usage by multiple users?
  • Can the site sustain long periods of continuous usage by 1 user?
  • Is page loading performance acceptable over modems of different speeds?
  • Does the system meet its goals for response time, throughput, and availability?
  • Have you defined standards for response time (e.g. all screens should paint within 10 seconds)?
  • Does the system operate in the same way across different computer and network configurations, platforms and environments, with different mixes of other applications?
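
To make items like the 8-second home-page check repeatable, a small script can simulate simultaneous users. Below is a minimal sketch in Python using only the standard library; the URL, user count, and time budget are assumptions to replace with your own site and requirements.

    # Minimal load-check sketch: N simulated users fetch the home page
    # concurrently, and each response time is compared to a budget.
    # URL, USERS, and BUDGET_SECONDS are placeholders, not real values.
    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL = "http://www.example.com/"   # hypothetical page under test
    USERS = 20                        # simulated simultaneous users
    BUDGET_SECONDS = 8.0              # response-time goal from the checklist

    def fetch_once(_):
        start = time.monotonic()
        with urllib.request.urlopen(URL, timeout=30) as response:
            response.read()           # download the full page body
        return time.monotonic() - start

    with ThreadPoolExecutor(max_workers=USERS) as pool:
        timings = list(pool.map(fetch_once, range(USERS)))

    slow = [t for t in timings if t > BUDGET_SECONDS]
    print("max=%.2fs avg=%.2fs over-budget=%d/%d"
          % (max(timings), sum(timings) / len(timings), len(slow), len(timings)))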
VOLUME
  • Increase the data by having constant users.
  • Will the site allow for large orders without locking out inventory if the transaction is invalid?
  • Can the site sustain large transactions without crashing?
STRESS
  • Increase both the number of users and the data. (A ramp sketch covering load, volume, and stress appears after this list.)
  • Performance of memory, CPU, file handling etc.
  • Errors in software and hardware; memory errors (leaks, overwrites, or bad pointers).
  • Is the application, or are certain features, going to be used only during certain periods of time, or will it be used continuously 24 hours a day, 7 days a week? Test that the application is able to perform under those conditions. Will downtime be allowed, or is that out of the question?
  • Verify that the application is able to meet the requirements and does not run out of memory or disk space.
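
The load/volume/stress distinction above can be made concrete with a ramp that grows one or both axes until the system misbehaves. This is a hedged sketch: run_scenario is a stand-in you would replace with a real request driver, and the step sizes and capacity limit are invented.

    # Stress-ramp sketch: volume testing grows the data at constant users,
    # load testing grows the users at constant data, and stress testing
    # grows both until failures appear. run_scenario is a hypothetical
    # stand-in; here it simulates a system with a fixed capacity.
    import itertools

    def run_scenario(users, data_rows):
        # Replace with a real driver that hits the application and
        # reports success; this simulation fails past a capacity limit.
        return users * data_rows <= 5_000_000

    USER_STEPS = [10, 50, 100, 500]        # assumed ramp for concurrent users
    DATA_STEPS = [1_000, 10_000, 100_000]  # assumed ramp for record counts

    for users, rows in itertools.product(USER_STEPS, DATA_STEPS):
        ok = run_scenario(users, rows)
        print("users=%4d rows=%7d -> %s" % (users, rows, "PASS" if ok else "FAIL"))
        if not ok:
            break  # the first failure marks the approximate breakpoint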
SECURITY
  • Is confidentiality/user privacy protected?
  • Does the site prompt for user name and password?
  • Are there Digital Certificates, both at server and client?
  • Have you verified where encryption begins and ends?
  • Are concurrent log-ons permitted?
  • Does the application include time-outs due to inactivity?
  • Is bookmarking disabled on secure pages?
  • Does the key/lock display on status bar for insecure/secure pages?
  • Is Right Click, View, Source disabled?
  • Are you prevented from doing direct searches by editing content in the URL?
  • If using Digital Certificates, test the browser cache by enrolling for the certificate and completing all of the required security information. After completing the application and installing the certificate, use the browser's Back button to see if that security information is still residing in the cache. If it is, then any user could walk up to the PC and access highly sensitive Digital Certificate security information.
  • Is there an alternative way to access secure pages for browsers under version 3.0, since SSL is not compatible with those browsers?
  • Do your users know when they are entering or leaving secure portions of your site?
  • Does your server lock out an individual who has tried to access your site multiple times with invalid login/password information?
  • Test both valid and invalid login names and passwords. Are they case sensitive? Is there a limit to how many tries are allowed? Can the login be bypassed by typing the URL of an internal page directly into the browser? (A lockout-check sketch appears after this list.)
  • What happens when timeout is exceeded? Are users still able to navigate through the site?
  • Verify that relevant information is written to the log files and that the information is traceable.
  • For SSL, verify that the encryption is done correctly and check the integrity of the information.
  • Verify that it is not possible to plant or edit scripts on the server without authorization.
  • Have you tested the impact of Secure Proxy Server?
  • Test should be done to ensure that the Load Balancing Server is taking the session information of Server A and pooling it to Server B when A goes down.
  • Have you verified the use of 128-bit Encryption?
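
As one example of automating an item from this list, here is a hedged sketch of the lockout check: repeated invalid logins should eventually be rejected or locked out. The URL, form field names, and five-attempt threshold are all assumptions about a hypothetical login form.

    # Lockout-check sketch: submit repeated invalid logins and verify
    # that the server never accepts them and eventually locks further
    # attempts. URL, field names, and MAX_ATTEMPTS are assumptions.
    # Note: a real form may return 200 with an error page; adapt the check.
    import urllib.error
    import urllib.parse
    import urllib.request

    LOGIN_URL = "http://www.example.com/login"  # hypothetical endpoint
    MAX_ATTEMPTS = 5                            # assumed lockout threshold

    def try_login(user, password):
        data = urllib.parse.urlencode({"user": user, "pass": password}).encode()
        try:
            with urllib.request.urlopen(LOGIN_URL, data=data, timeout=10) as r:
                return r.status
        except urllib.error.HTTPError as err:
            return err.code  # 401/403/423 etc. all count as rejections

    statuses = [try_login("alice", "wrong-password") for _ in range(MAX_ATTEMPTS + 1)]
    # Invalid credentials must never produce a success response, and the
    # final attempt should show a lockout (e.g. 423) or keep rejecting.
    assert 200 not in statuses, "invalid credentials were accepted!"
    print("responses:", statuses)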

Reference - http://www.testinggeek.com/performance-security-testing-checklist

Saturday, May 21, 2011

6 Basic Criteria For Testing Requirements

There are six basic criteria that must be used during the static testing of requirement specifications. They require that each requirement be consistent with the principles of completeness, unambiguity, consistency, traceability, practicability, and testability.

Completeness: A set of requirements is considered complete if all its constituent parts are represented and each part is specified in full. When testing a set of requirements for completeness, pay special attention to the following points:

  • Requirements should not contain expressions such as “and so on” or “and the like.”
  • Requirements must not refer to non-existent background information, for example a non-existent specification.
  • A requirement should not rely on functionality that has not been defined.

Unambiguity: Each requirement must be clearly and precisely formulated, allowing only one interpretation, and it must be legible and understandable. If a requirement is particularly complex, auxiliary material such as diagrams or tables can be used to facilitate understanding. If the author resorts to expressions like “it’s obvious” or “self-evident,” it is quite possible that he is trying to divert your attention from an ambiguous statement.

Consistency: Requirements should not contradict each other or current standards. If requirements conflict with each other, priorities must be introduced to resolve such conflicts. Detecting conflicts among requirements calls for a good knowledge of the document containing the requirements and familiarity with existing standards and other external specifications.

Traceability: Each requirement must have a unique identifier, which allows you to trace its development throughout the life cycle. In work products that appear at later stages of the life cycle, such as the test plan, every reference to a system property must be traceable back to its definition in the requirements specification. (A minimal traceability-matrix sketch follows.)
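
As a small illustration, a traceability matrix can be kept as a simple mapping from requirement IDs to the tests that cover them; all IDs below are invented for the example.

    # Traceability-matrix sketch: map each requirement ID to the test
    # cases that cover it, then flag requirements with no coverage.
    traceability = {
        "REQ-001": ["TC-101", "TC-102"],  # e.g. a login requirement
        "REQ-002": ["TC-201"],
        "REQ-003": [],                    # no test yet: must be flagged
    }

    untraced = [req for req, tests in traceability.items() if not tests]
    if untraced:
        print("Requirements without tests:", ", ".join(untraced))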

Practicability: Each requirement must be feasible to develop and maintain with the resources available. If a customer places unrealistic demands on the system in terms of the time and funds required to develop various functions, or requires functions that would be unreliable or dangerous to use, you must identify the risks and take appropriate action. In short, the developed system must be economically feasible, reliable, easy to use, and maintainable.

Testability: We should be able to develop economical and easy-to-use tests for each requirement in order to demonstrate that the software product possesses the required functionality and performance and complies with current standards. This means that each requirement must be measurable or quantifiable, and that testing must be performed under acceptable conditions. (A minimal sketch of a quantified requirement follows.)
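
As one example of what a testable, quantified requirement looks like in practice, here is a hedged sketch: a hypothetical requirement, "REQ-042: search completes within 2 seconds," expressed as an automated check. The search function, its data, and the threshold are all invented.

    # Testability sketch: a measurable requirement ("search completes
    # within 2 seconds") expressed as an automated check. The search
    # implementation and the 2-second limit are invented placeholders.
    import time

    def search(query):
        # Hypothetical system under test; replace with the real call.
        return [item for item in ("apple", "apricot", "banana") if query in item]

    def test_req_042_search_response_time():
        start = time.monotonic()
        results = search("ap")
        elapsed = time.monotonic() - start
        assert elapsed < 2.0, "REQ-042 violated: search took %.2fs" % elapsed
        assert results, "search returned no results for a known query"

    test_req_042_search_response_time()
    print("REQ-042 check passed")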

Reference - http://blog.qatestlab.com/2011/05/07/6-basic-criteria-for-testing-requirements/

Monday, May 16, 2011

Common Testing Pitfalls

Poor estimation: Developers underestimate the effort and resources required for testing. Consequently, they miss deadlines or deliver a partially tested system to the client.

Untestable requirements: Describing requirements ambiguously renders them impossible or difficult to test.

Insufficient test coverage: An insufficient number of test cases cannot test the full functionality of the system.

Inadequate test data: The test data fails to cover the range of all possible data values; that is, it omits boundary values. (A small boundary-value example follows.)
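
For instance, if a field accepts quantities from 1 to 100, adequate test data exercises the values on and just outside both boundaries. A hedged sketch, with an invented validator and range:

    # Boundary-value sketch: for an accepted range of 1..100, test the
    # values on and just outside each boundary. The rule is invented.
    def accepts_quantity(n):
        return 1 <= n <= 100  # hypothetical business rule

    boundary_cases = {0: False, 1: True, 2: True, 99: True, 100: True, 101: False}
    for value, expected in boundary_cases.items():
        assert accepts_quantity(value) == expected, "boundary bug at %d" % value
    print("all boundary cases pass")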

False assumptions: Developers sometimes make claims about the system based on assumptions about the underlying hardware or software. Watch out for statements that begin “the system should” rather than “the system [actually] does.”

Testing too late: Testing too late during the development process leaves little time to maneuver when tests find major defects.

“Stress-easy” testing: When testing does not place the system under sufficiently high levels of stress, it fails to investigate system breakpoints, which therefore remain unknown.

Environmental mismatch: Testing the system in an environment that is not the same as the environment on which it will be installed doesn’t tell you about how it will work in the real world. Such mismatched tests make little or no attempt to replicate the mix of peripherals, hardware, and applications present in the installation environment.

Ignoring exceptions: Developers sometimes erroneously slant testing toward normal or regular cases. Such testing often ignores system exceptions, leading to a system that works most of the time, but not all the time.

Configuration mismanagement: In some cases, a software release contains components that have not been tested at all or were not tested with the released versions of the other components. Developers can’t ensure that the component will work as intended.

Testing overkill: Over-testing relatively risk-free areas of the system diverts precious resources from the system's higher-risk (and sometimes more difficult to test) areas.

No contingency planning: There is no contingency in the test plan to deal with significant defects discovered during testing.

Non-independent testing: When the development team carries out testing, it can lack the objectivity of an independent testing team.


Source: Article “Testing E-Commerce Systems: A Practical Guide” by Wing Lam in IT Pro (March/April 2001) magazine.

Monday, May 9, 2011

How to decide the priority of execution of Test Cases

After building and validating the testing models, many test cases are generated. The next big task is to decide the priority of their execution using some systematic procedure.

The process begins with the identification of "Static Test Cases" and "Dynamic Test Runs", a brief introduction of which follows.

Test Case: A collection of items and corresponding information that enables a test to be executed, i.e., a test run to be performed.

Test Run: The dynamic part of the specific testing activities in the overall sequence of testing on some specific test object.

Every time we invoke a static test case, we perform an individual dynamic test run. Hence every test case can correspond to several test runs. (A small sketch of this relationship follows.)
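
A hedged sketch of this one-to-many relationship in code; the class and field names are invented for illustration.

    # Test case / test run sketch: one static test case description can
    # yield many dynamic test runs. All names below are invented.
    from dataclasses import dataclass, field
    from datetime import datetime

    @dataclass
    class TestRun:
        executed_at: datetime
        passed: bool

    @dataclass
    class TestCase:
        identifier: str
        description: str
        runs: list = field(default_factory=list)  # grows with each execution

        def execute(self, passed):
            self.runs.append(TestRun(datetime.now(), passed))

    tc = TestCase("TC-101", "valid login succeeds")
    tc.execute(passed=True)   # first test run
    tc.execute(passed=False)  # second test run of the same test case
    print("%s: %d runs" % (tc.identifier, len(tc.runs)))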

Why & how do we prioritize?
Out of the large cluster of test cases in hand, we need to decide their execution priorities systematically, based on rational, non-arbitrary criteria. We carry out the prioritization activity with the objective of reducing the overall number of test cases in the total testing effort.
There are risks associated with prioritization: some application features may not undergo testing at all.

During prioritization we work out plans addressing following two key concepts:
Concept – 1: Identify the essential features that must be tested in any case.
Concept – 2: Identify the risk or consequences of not testing some of the features.
The decision on which test cases to select is based largely on an assessment of the risk first.
The objective of the test case prioritization exercise is to build confidence among the testers and the project leaders that the tests identified for execution are adequate from different angles.
The list of test cases selected for execution can be subjected to any number of reviews when doubts or risks are associated with any of the omitted tests.

Following four schemes are quite common for prioritizing the test cases.
All these methods are independent of each other and are aimed at optimizing the number of test cases. It is difficult to brand any one method as better than another. A method can be used as a standalone scheme or in conjunction with another. When different prioritization schemes produce similar results, the level of confidence increases.
Scheme – 1: Categorization of Priority.
Scheme – 2: Risk analysis.
Scheme – 3: Brainstorming to dig out the problematic areas.
Scheme – 4: Combination of different schemes.

Let us discuss the priority categorization scheme in greater detail here.
The easiest method of categorizing our tests is to assign a priority code directly to every test description, i.e., a unique code to each and every test description.
A popular three-level priority categorization scheme is described below:
Priority - 1: Allocated to all tests that must be executed in any case.
Priority - 2: Allocated to the tests which can be executed, only when time permits.
Priority - 3: Allocated to the tests, which even if not executed, will not cause big upsets.
After assigning the priority codes, the tester estimates the amount of time required to execute the tests selected in each category. If the estimated time lies within the allotted schedule, the tests have been successfully identified and the partitioning exercise is complete. If the time plan is exceeded, the partitioning exercise is carried out further. (A small sketch of this check follows.)
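
A hedged sketch of this budget check; the test names, priorities, estimates, and schedule are invented numbers.

    # Partitioning sketch: sum the estimated execution time of the
    # must-run (priority 1) tests and compare against the schedule.
    tests = [  # (test description, priority code, estimated hours)
        ("valid login succeeds",   1, 2.0),
        ("password reset email",   1, 3.0),
        ("profile photo upload",   2, 4.0),
        ("seasonal theme renders", 3, 1.5),
    ]

    ALLOTTED_HOURS = 8.0  # assumed schedule

    must_run_hours = sum(h for _, prio, h in tests if prio == 1)
    if must_run_hours <= ALLOTTED_HOURS:
        print("priority-1 tests fit: %.1fh of %.1fh" % (must_run_hours, ALLOTTED_HOURS))
    else:
        print("schedule exceeded; partition further (e.g. the five-level scale)")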

There is an extension to the above scheme: a five-level scale with which we can classify test priorities further.
The five-level priority scheme is as follows:
Priority-1a: Allocated to the tests, which must pass, otherwise the delivery date will be affected.
Priority-2a: Allocated to the tests, which must be executed before the final delivery.
Priority-3a: Allocated to the tests which can be executed, only when time permits.
Priority-4a: Allocated to the tests, which can wait & can be executed even after the delivery date.
Priority-5a: Allocated to the tests, which have remote probability of execution ever.
Testers divide the tests among these categories. For instance, tests from priority 2 may be further divided among priority levels 3a, 4a, and 5a. Likewise, any test can be downgraded or upgraded.


Other considerations used while prioritizing or sequencing the test cases:
01. Relative dependencies: Some test cases can run only after others, because one is used to set up another. This applies especially to continuously operating systems, where a test run starts from the state created by the previous one. (A dependency-ordering sketch appears after this list.)
02. Timing of defect detection: Applies to cases where problems can be detected only after many other problems have been found and fixed. For example, it applies to integration testing involving many components, each with its own problems at the individual component level.
03. Damage or accidents: Applies to cases where acute problems or even severe damage can occur during testing unless certain critical areas have been checked before the present test run. For example, in embedded software involving safety-critical systems, testers prefer not to start testing the safety features before first testing the other related functions.
04. Difficulty levels: One of the most natural and commonly used sequences is to move from simple, easy test cases to difficult, complicated ones. This applies to scenarios where complicated problems are expected; testers prefer to execute comparatively simpler test cases first to narrow down the problem areas.
05. Combining test cases: Applies to the majority of large-scale software testing exercises, which involve interleaving and parallel testing to accelerate the testing process.
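
As a hedged illustration of sequencing by relative dependencies (point 01), here is a small topological-ordering sketch using Kahn's algorithm; the test names and dependency edges are invented.

    # Dependency-ordering sketch: run a test only after the tests that
    # set up its state, via a simple topological sort (Kahn's algorithm).
    from collections import deque

    # test -> tests that must run first (invented examples)
    depends_on = {
        "create_account": [],
        "login": ["create_account"],
        "place_order": ["login"],
        "cancel_order": ["place_order"],
    }

    indegree = {t: len(deps) for t, deps in depends_on.items()}
    followers = {t: [] for t in depends_on}
    for test, deps in depends_on.items():
        for dep in deps:
            followers[dep].append(test)

    ready = deque(t for t, d in indegree.items() if d == 0)
    order = []
    while ready:
        test = ready.popleft()
        order.append(test)
        for nxt in followers[test]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)

    print("execution order:", " -> ".join(order))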

Reference - http://www.softwaretestinggenius.com/articalDetails.php?qry=715

Monday, May 2, 2011

Product Based Testing versus Project Based Testing

Before we delve into the differences, for better clarity, I would like to explain what a software product and a software project are.

Software product: A software application that is developed by a company with its own budget. The requirements are driven by market surveys. The developed product is then sold to different customers under licenses. Examples of software products: Tally (Tally Solutions), Acrobat Reader/Writer (Adobe), Internet Explorer (Microsoft), Finacle (Infosys), Windows (Microsoft), QTP (HP), etc.

Software project: A software application that is developed by a company with the budget coming from a customer. The customer places an order to develop software that helps in his business, so the requirements come from the customer. Example of a software project: an application ordered by a manufacturing company to maintain its office inventory. This application is used only by that company and no one else.

With the above insights, we will now discuss the differences.

For the individual tester, there is not much difference between testing a product and testing a project: it is test scenarios, test cases, and requirements everywhere. However, below are some differences between a product and a project from a testing perspective:

01. For a project, a test plan is a must, and all related documents have to be prepared. For a product, the test plan would have been made long ago; at most it is updated.
02. In a project, the client has to approve the test plan and test cases and sign them off. For a product, this is not necessary.
03. In a project, the tester interacts directly with the client. For a product, the tester interacts with the FD team or business analysts.
04. In a project, the deadlines are not flexible. In product testing, deadlines are flexible.
05. In a project, the client holds authority over the developed code. For a product, the client does not hold ultimate authority over the code.
06. In project development, the budget is given by the customer. For a product, the budget comes from the company itself.
07. The features in a project are always new. In a product, however, the basic features remain the same; only a few new features are added, or a few existing features are modified.
08. Because of point #07, more regression testing needs to be done for a product and less for a project.
09. Since a product runs for years, test automation saves a lot of effort, whereas for projects it depends on the duration of the project.
10. Usually, a project conforms to a small environment, as specified by the client, so testing only on the specified environment is sufficient. A product can be installed on a number of operating systems and other environment configurations, depending on the end user, so usually more compatibility testing needs to be done for a product.
11. A project is used by a specific client for a specific purpose only; hence the tester needs to test only from that end user's perspective. The same product, in contrast, can be used by a variety of clients (for example, the same Enterprise Incentive Management application can be used by pharma clients, insurance clients, etc.), so the tester needs to consider all types of potential users and their usage and test accordingly.
12. Licensing comes into the picture for a product, so scenarios related to license types, registration, expiry, etc. need to be tested. Licensing does not exist for a project.
13. Test planning depends on the software development life cycle, which will usually be different for a project and a product.
14. The chances of getting onsite opportunities are very high for a tester working on a project and much lower for a tester working on a product.
15. An economic recession hits a software project hard: the customer may halt or stop the project, in which case the test engineer may sometimes lose the job. For a product, a short-term (up to a year) recession may not hit the engineer, as the company keeps adding new requirements. In fact, the test engineer may get more work if the company tries to add innovative requirements to lure customers into buying the product.
16. For a project, competitors do not come into the picture, except at the senior management level. For a product, the tester should also consider competing products while testing: sometimes the tester needs to evaluate performance against competitors' products, the behavior of the product with competitors' products coexisting on the same machine needs to be considered, and the tester needs to check for any violation of copyrights.

Reference - http://ravilandu.blogspot.com/2010/01/product-vs-project-testing.html
