Tuesday, May 24, 2011

Performance & Security Testing Checklist

Creating checklists for performance & security testing is extremely important. A checklist like this helps teams define performance and security requirements more precisely. In the absence of properly defined performance & security testing requirements, teams can spend a great deal of time on things that probably do not matter much.

LOAD

  • Many users requesting a certain page at the same time or using the site simultaneously (see the sketch after this list).
  • Increase the number of users while keeping the data constant.
  • Does the home page load quickly (within 8 seconds)?
  • Is load time appropriate to content, even on a slow dial-in connection?
  • Can the site sustain long periods of usage by multiple users?
  • Can the site sustain long periods of continuous usage by 1 user?
  • Is page loading performance acceptable over modems of different speeds?
  • Does the system meet its goals for response time, throughput, and availability?
  • Have you defined standards for response time (e.g. all screens should paint within 10 seconds)?
  • Does the system operate in the same way across different computer and network configurations, platforms and environments, with different mixes of other applications?
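
As a rough illustration of the first two items, here is a minimal load-test sketch in Python. The target URL, user count, and 8-second threshold are assumptions for illustration only; serious load testing would normally use a dedicated tool such as JMeter or LoadRunner.

    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL = "http://www.example.com/"   # hypothetical page under test
    USERS = 20                        # simulated concurrent users
    TIMEOUT = 8                       # checklist target: page loads within 8 seconds

    def fetch(_):
        # One simulated user requesting the page; returns the response time in seconds.
        start = time.time()
        with urllib.request.urlopen(URL, timeout=TIMEOUT) as response:
            response.read()
        return time.time() - start

    if __name__ == "__main__":
        with ThreadPoolExecutor(max_workers=USERS) as pool:
            timings = list(pool.map(fetch, range(USERS)))
        print("max response time: %.2fs" % max(timings))
        print("avg response time: %.2fs" % (sum(timings) / len(timings)))
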
VOLUME
  • Increase the volume of data while keeping the number of users constant.
  • Will the site allow for large orders without locking out inventory if the transaction is invalid?
  • Can the site sustain large transactions without crashing?
STRESS
  • Increase both the number of users and the volume of data.
  • Check the performance of memory, CPU, file handling, etc.
  • Look for errors in software and hardware, and for memory errors (leaks, overwrites or bad pointers).
  • Is the application or certain features going to be used only during certain periods of time or will it be used continuously 24 hours a day 7 days a week? Test that the application is able to perform during those conditions. Will downtime be allowed or is that out of the question?
  • Verify that the application is able to meet the requirements and does not run out of memory or disk space.
SECURITY
  • Is confidentiality/user privacy protected?
  • Does the site prompt for user name and password?
  • Are there Digital Certificates, both at server and client?
  • Have you verified where encryption begins and ends?
  • Are concurrent log-ons permitted?
  • Does the application include time-outs due to inactivity?
  • Is bookmarking disabled on secure pages?
  • Does the key/lock display on status bar for insecure/secure pages?
  • Is Right Click, View, Source disabled?
  • Are you prevented from doing direct searches by editing content in the URL?
  • If using Digital Certificates, test the browser Cache by enrolling for the Certificate and completing all of the required security information. After completing the application and installation of the certificate, try using the <-- Backspace key to see if that security information is still residing in Cache. If it is, then any user could walk up to the PC and access highly sensitive Digital Certificate security information.
  • Is there an alternative way to access secure pages for browsers under version 3.0, since SSL is not compatible with those browsers?
  • Do your users know when they are entering or leaving secure portions of your site?
  • Does your server lock out an individual who has tried to access your site multiple times with invalid login/password information?
  • Test both valid and invalid login names and passwords. Are they case sensitive? Is there a limit to how many tries are allowed? Can the login be bypassed by typing the URL of an internal page directly into the browser?
  • What happens when timeout is exceeded? Are users still able to navigate through the site?
  • Verify that relevant information is written to the log files and that the information is traceable.
  • For SSL, verify that the encryption is done correctly and check the integrity of the information.
  • Verify that scripts on the server cannot be planted or edited without authorization.
  • Have you tested the impact of Secure Proxy Server?
  • Test that the load balancing server passes the session information from Server A over to Server B when A goes down.
  • Have you verified the use of 128-bit Encryption?

Reference - http://www.testinggeek.com/performance-security-testing-checklist

Saturday, May 21, 2011

6 Basic Criteria For Testing Requirements

There are six basic criteria that should be used during static testing of requirements specifications. They require that each requirement be consistent with the principles of completeness, unambiguity, consistency, traceability, practicability and testability.

Completeness: A set of requirements is considered complete if all its constituent parts are represented and each part is specified in full. When testing a set of requirements for completeness, pay special attention to the following points:

  • Requirements should not contain expressions such as “and so on” or “and the like”.
  • Requirements must not refer to non-existent background information, such as a non-existent specification.
  • A requirement should not rely on functionality that has not been defined.

Unambiguity: Each requirement must be clearly and precisely formulated; it should allow only one interpretation. The requirement must be legible and understandable. If a requirement is particularly complex, auxiliary material such as diagrams or tables can be used to facilitate understanding. If the author leans on expressions like “it’s obvious” or “self-evident”, it is quite possible that they are trying to divert your attention from an ambiguous statement.

Consistency: Requirements should not contradict each other or current standards. If the requirements conflict with each other, we must introduce priorities in order to resolve such conflicts. The ability to detect damages resulting from violation of the requirements, involves a good knowledge of the document containing the requirements and familiarity with the existing standards or other external specifications.

Traceability: Each requirement must have a unique identifier, which allows you to trace its development throughout the life cycle. In work products that appear at later stages of the life cycle, such as the test plan, every reference to a system property must be traceable back to its definition in the requirements specification.

Practicability: Each requirement should describe system behavior that is practical to develop and maintain. If a customer places unrealistic demands on the system in terms of the time and funds needed to develop various functions, or requires the development of functions that would be unreliable or dangerous to use, you must identify the risks and take appropriate action. In short, the developed system must be economically feasible, reliable, easy to use and maintainable.

Testability: We should be able to develop economically viable and easy-to-use tests for each requirement in order to demonstrate that the tested software product has the necessary functionality and performance and complies with current standards. This means that each requirement must be measurable or quantifiable and that testing should be performed under acceptable conditions.

Reference- http://blog.qatestlab.com/2011/05/07/6-basic-criteria-for-testing-requirements/

Monday, May 16, 2011

Common Testing Pitfalls

Poor estimation: Developers underestimate the effort and resources required for testing. Consequently, they miss deadlines or deliver a partially tested system to the client.

Untestable requirements: Describing requirements ambiguously renders them impossible or difficult to test.

Insufficient test coverage: An insufficient number of test cases cannot test the full functionality of the system.

Inadequate test data: The test data fails to cover the range of all possible data values—that is, it omits boundary values.

False assumptions: Developers sometimes make claims about the system based on assumptions about the underlying hardware or software. Watch out for statements that begin “the system should” rather than “the system [actually] does.”

Testing too late: Testing too late during the development process leaves little time to maneuver when tests find major defects.

“Stress-easy” testing: When testing does not place the system under sufficiently high levels of stress, it fails to investigate system breakpoints, which therefore remain unknown.

Environmental mismatch: Testing the system in an environment that is not the same as the environment on which it will be installed doesn’t tell you about how it will work in the real world. Such mismatched tests make little or no attempt to replicate the mix of peripherals, hardware, and applications present in the installation environment.

Ignoring exceptions: Developers sometimes erroneously slant testing toward normal or regular cases. Such testing often ignores system exceptions, leading to a system that works most of the time, but not all the time.

Configuration mismanagement: In some cases, a software release contains components that have not been tested at all or were not tested with the released versions of the other components. Developers can’t ensure that the component will work as intended.

Testing overkill: Over-testing relatively risk-free areas of the system diverts precious resources from the system’s more high risk (and sometimes difficult to test) areas.

No contingency planning: There is no contingency in the test plan to deal with significant defects discovered during testing.

Non-independent testing: When the development team carries out testing, it can lack the objectivity of an independent testing team.


Source: Article “Testing E-Commerce Systems: A Practical Guide” by Wing Lam in IT Pro (March/April 2001) magazine.

Monday, May 9, 2011

How to decide the priority of execution of Test Cases

After building & validating the testing models, several test cases are generated. The next big task is to decide the priority for executing them, using some systematic procedure.

The process begins with the identification of "Static Test Cases" and "Dynamic Test Runs", a brief introduction to which follows.

Test Case: It is a collection of several items and corresponding information, which enables a test to be executed or a test run to be performed.

Test Run: It is a dynamic part of the specific testing activities in the overall sequence of testing on some specific testing object.

Every time we invoke a static test case, we in turn perform an individual dynamic test run. Hence we can say that every test case can correspond to several test runs.

Why & how do we prioritize?
Out of the large cluster of test cases in our hands, we need to decide their priorities of execution systematically, based on rational, non-arbitrary criteria. We carry out the prioritization activity with the objective of reducing the overall number of test cases in the total testing effort.
There are risks associated with prioritizing test cases: for example, some of the application features may not undergo testing at all.

During prioritization we work out plans addressing following two key concepts:
Concept – 1: Identify the essential features that must be tested in any case.
Concept – 2: Identify the risk or consequences of not testing some of the features.
The decision making in selecting the test cases is largely based upon an assessment of the risk first.
The objective of the test case prioritization exercise is to build confidence among the testers and the project leaders that the tests identified for execution are adequate from different angles.
The list of test cases decided for execution can be subjected to any number of reviews in case of doubts or risks associated with any of the omitted tests.

Following four schemes are quite common for prioritizing the test cases.
All these methods are independent of each other and are aimed at optimizing the number of test cases. It is difficult to brand any one method as better than the others. We can use any one method as a standalone scheme, or it can be used in conjunction with another. When different prioritization schemes give similar results, the level of confidence increases.
Scheme – 1: Categorization of Priority.
Scheme – 2: Risk analysis.
Scheme – 3: Brainstorming to dig out the problematic areas.
Scheme – 4: Combination of different schemes.

Let us discuss the priority categorization scheme in greater detail here.
The easiest of all methods for categorizing our tests is to assign a priority code directly to every test description. This involves assigning one of the priority codes below to each and every test description.
A popular three-level priority categorization scheme is described below:
Priority - 1: Allocated to all tests that must be executed in any case.
Priority - 2: Allocated to the tests which can be executed, only when time permits.
Priority - 3: Allocated to the tests, which even if not executed, will not cause big upsets.
After assigning priority codes, the tester estimates the amount of time required to execute the tests selected in each category. If the estimated time lies within the allotted schedule, the tests have been identified successfully and the partitioning exercise is complete. If the time plan is exceeded, the partitioning exercise is carried out further (see the sketch below).
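
A minimal sketch of this partitioning step, assuming made-up test identifiers, priority codes and per-test time estimates:

    # Hypothetical test descriptions with assigned priority codes (1-3)
    # and estimated execution time in hours.
    tests = [
        {"id": "TC-01", "priority": 1, "hours": 2.0},
        {"id": "TC-02", "priority": 1, "hours": 1.5},
        {"id": "TC-03", "priority": 2, "hours": 3.0},
        {"id": "TC-04", "priority": 3, "hours": 0.5},
    ]

    ALLOTTED_HOURS = 5.0  # assumed schedule for this test cycle

    # Estimate the time required for each priority category.
    for level in (1, 2, 3):
        total = sum(t["hours"] for t in tests if t["priority"] == level)
        print("Priority %d: %.1f hours" % (level, total))

    # If even the must-run tests (priority 1) exceed the schedule,
    # the partitioning exercise has to be carried out further.
    must_run = sum(t["hours"] for t in tests if t["priority"] == 1)
    print("Fits schedule" if must_run <= ALLOTTED_HOURS else "Partition further")
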

There is an extension to the above scheme: a five-level scale with which we can classify test priorities further.
The five-level priority scheme is as follows:
Priority-1a: Allocated to the tests, which must pass, otherwise the delivery date will be affected.
Priority-2a: Allocated to the tests, which must be executed before the final delivery.
Priority-3a: Allocated to the tests which can be executed, only when time permits.
Priority-4a: Allocated to the tests, which can wait & can be executed even after the delivery date.
Priority-5a: Allocated to the tests, which have remote probability of execution ever.
Testers can divide the tests further among these categories; for instance, tests from priority 2 may be redistributed among priority levels 3a, 4a and 5a. Likewise, any test can be downgraded or upgraded.


Other considerations used while prioritizing or sequencing the test cases -
01. Relative dependencies: Some test cases can run only after others, because one test is used to set up another. This applies especially to continuously operating systems, where a test run must start from a state created by the previous one.
02. Timings of defect detection: Applies to cases wherein some problems can be detected only after many other problems have been found and fixed. For example, this applies to integration testing involving many components that have their own problems at the individual component level.
03. Damage or accidents: Applies to cases wherein acute problems or even severe damage can occur during testing unless certain critical areas have been checked before the present test run. For example, this applies to embedded software in safety-critical systems, where testers prefer not to start testing the safety features before first testing the other related functions.
04. Difficulty levels: This is one of the most natural & commonly used sequences to execute the test cases involving moving from simple & easy test cases to difficult and complicated ones. This applies to scenarios where complicated problems can be expected. Here the testers prefer to execute comparatively simpler test cases first to narrow down the problematic areas.
05. Combining the test cases: Applies to majority of cases in large-scale software testing exercises involving interleaving and parallel testing to accelerate the testing process.

Reference - http://www.softwaretestinggenius.com/articalDetails.php?qry=715

Monday, May 2, 2011

Product Based Testing versus Project Based Testing

Before we delve into the differences, for better clarity, I would like to explain what a software product and a software project are.

Software product: A software application that is developed by a company with its own budget. The requirements are driven by market surveys. The developed product is then sold to different customers with licenses. Examples of software products: Tally (Tally Solutions), Acrobat Reader/Writer (Adobe), Internet Explorer (MS), Finacle (Infosys), Windows (MS), QTP (HP), etc.

Software project: A software application that is developed by a company with a budget provided by a customer. In fact, the customer places an order to develop software that helps in his business. Here, the requirements come from the customer. Example of a software project: a separate application ordered by a manufacturing company to maintain its office inventory. This application is used only by that company and no one else.

With the above insights, we will now discuss the differences.

As an end tester, there will not be much difference in testing either a product or a project; in both cases it comes down to requirements, test scenarios and test cases. However, below are some differences between a product and a project, including differences from a testing perspective:

01. For a project, a test plan is a must, and all the documents related to it have to be prepared. For a product, the test plan would have been made long ago; at most it is updated.
02. In a project, the client has to approve the test plan and test cases and sign them off. For a product this is not necessary.
03. In a project, the tester directly interacts with the client. For a product, the tester interacts with the FD team or business analysts.
04. In a project, the deadlines are not flexible. In product testing, deadlines are more flexible.
05. In a project, the client holds the authority over the developed code. For a product, the client doesn't hold the ultimate authority over the code.
06. The budget for developing a project is given by the customer. For a product, the budget comes from the company itself.
07. The features in a project are always new. For a product, the basic features remain the same; only a few new features are added, or a few existing features are modified.
08. Because of point #7, more regression testing needs to be done for a product and less for a project.
09. Since a product runs for years, test automation saves a lot of effort, whereas for projects it depends on the duration of the project.
10. Usually, a project is confined to a small environment, as specified by the client, so testing only on the specified environment is sufficient. A product can be installed on a number of operating systems and other environment configurations, depending on the end user, so usually more compatibility testing needs to be done for a product.
11. A project is used by the specific client for a specific purpose only, so the tester needs to test only from that end user’s perspective. The same product, however, can be used by a variety of clients (for example, the same Enterprise Incentive Management application can be used by pharma clients, insurance clients, etc.), so the tester needs to consider all types of potential users and their usage and test accordingly.
12. Licensing comes into the picture for a product. Thus scenarios related to license types, their registration, expiry, etc. need to be tested for a product. Licensing does not exist for a project.
13. Test planning depends on the software development life cycle. Usually, it will be different for a project and a product.
14. The chances of getting onsite opportunities are very high for a tester working on a project, and much lower for a tester working on a product.
15. An economic recession hits a software project badly. The customer may halt or stop the project, in which case the test engineer may sometimes lose the job. For a product, a short-term (0-1 year) recession may not hit the engineer, as the company keeps adding new requirements. In fact, the test engineer may get more work if the company tries to add innovative requirements to lure customers into buying the product.
16. For a project, competitors do not come into the picture, except at the senior management level. For a product, the tester should also consider competing products while testing. Sometimes the tester needs to evaluate performance against competitors' products. The behavior of the product when competitors' products coexist on the same machine also needs to be considered. The tester also needs to check for any violation of copyrights.

Reference - http://ravilandu.blogspot.com/2010/01/product-vs-project-testing.html

Thursday, April 21, 2011

Integration Testing: Why? What? & How?

Introduction:
There are various levels of testing:
Unit Testing, Integration Testing, System Testing, Acceptance Testing.
Each level of testing builds on the previous level.

“Unit testing” focuses on testing a unit of the code.
“Integration testing” is the next level of testing. This ‘level of testing’ focuses on testing the integration of “units of code” or components.

How does Integration Testing fit into the Software Development Life Cycle?
Even if a software component is successfully unit tested, in an enterprise n-tier distributed application it is of little or no value if the component cannot be successfully integrated with the rest of the application. Once unit-tested components are delivered, we integrate them together. These “integrated” components are tested to weed out errors and bugs caused by the integration. This is a very important step in the Software Development Life Cycle.

It is possible that different programmers developed different components.
A lot of bugs emerge during the integration step.
In most cases a dedicated testing team focuses on Integration Testing.

Prerequisites for Integration Testing:
Before we begin Integration Testing it is important that all the components have been successfully unit tested.

Integration Testing Steps:
Integration Testing typically involves the following Steps:
Step 1: Create a Test Plan
Step 2: Create Test Cases and Test Data
Step 3: If applicable create scripts to run test cases
Step 4: Once the components have been integrated execute the test cases
Step 5: Fix the bugs, if any, and retest the code
Step 6: Repeat the test cycle until the components have been successfully integrated

What is an ‘Integration Test Plan’?
As you may have read in the other articles in the series, this document typically describes one or more of the following:
  • How the tests will be carried out
  • The list of things to be Tested
  • Roles and Responsibilities
  • Prerequisites to begin Testing
  • Test Environment
  • Assumptions
  • What to do after a test is successfully carried out
  • What to do if test fails
  • Glossary

How to write an Integration Test Case?
Simply put, a Test Case describes exactly how the test should be carried out.
The Integration test cases specifically focus on the flow of data/information/control from one component to the other.
So the Integration Test cases should typically focus on scenarios where one component is being called from another. Also the overall application functionality should be tested to make sure the app works when the different components are brought together.
The various Integration Test Cases clubbed together form an Integration Test Suite. Each suite may have a particular focus. In other words different Test Suites may be created to focus on different areas of the application.
As mentioned before, a dedicated Testing Team may be created to execute the Integration test cases. Therefore the Integration Test Cases should be as detailed as possible.

Sample Test Case Table:
Test Case ID | Test Case Description | Input Data | Expected Result | Actual Result | Pass/Fail | Remarks

Additionally the following information may also be captured:
  • Test Suite Name
  • Tested By
  • Date
  • Test Iteration (One or more iterations of Integration testing may be performed)
Working towards Effective Integration Testing:
There are various factors that affect Software Integration and hence Integration Testing:

1) Software Configuration Management: Since Integration Testing focuses on Integration of components and components can be built by different developers and even different development teams, it is important the right versions of components are tested. This may sound very basic, but the biggest problem faced in n-tier development is integrating the right version of components. Integration testing may run through several iterations and to fix bugs components may undergo changes. Hence it is important that a good Software Configuration Management (SCM) policy is in place. We should be able to track the components and their versions. So each time we integrate the application components we know exactly what versions go into the build process.
2) Automate Build Process where Necessary: A lot of errors occur because the wrong versions of components were sent for the build or because components are missing. If possible, write a script to integrate and deploy the components; this helps reduce manual errors (a minimal sketch follows).
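
A minimal sketch of such a verification step, assuming a hypothetical manifest of approved component checksums; a real project would tie this into its SCM and deployment tooling.

    import hashlib
    import sys

    # Hypothetical manifest of approved components for this build: each entry maps
    # a component file to the SHA-256 checksum of the version that should be shipped
    # (the checksums below are placeholders for illustration).
    MANIFEST = {
        "orders-service.jar": "0" * 64,
        "billing-service.jar": "1" * 64,
    }

    def sha256(path):
        with open(path, "rb") as handle:
            return hashlib.sha256(handle.read()).hexdigest()

    def verify_components():
        ok = True
        for component, expected in MANIFEST.items():
            try:
                actual = sha256(component)
            except FileNotFoundError:
                print("MISSING:", component)
                ok = False
                continue
            if actual != expected:
                print("VERSION MISMATCH:", component)
                ok = False
        return ok

    if __name__ == "__main__":
        # Fail the build if any component is missing or is not the approved version.
        sys.exit(0 if verify_components() else 1)
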
3) Document: Document the Integration process/build process to help eliminate the errors of omission or oversight. It is possible that the person responsible for integrating the components forgets to run a required script and the Integration Testing will not yield correct results.
4) Defect Tracking: Integration Testing will lose its edge if the defects are not tracked correctly. Each defect should be documented and tracked. Information should be captured as to how the defect was fixed. This is valuable information. It can help in future integration and deployment processes.

Summary:
Integration testing is one of the most crucial steps in the Software Development Life Cycle. Different components are integrated together and tested. This can be a daunting task in enterprise applications where diverse teams build different modules and components.

Reference -
http://www.exforsys.com/tutorials/testing/integration-testing-whywhathow.html

Tuesday, April 19, 2011

Web Testing Checklist about Usability

Navigation -
  1. Is terminology consistent?
  2. Are navigation buttons consistently located?
  3. Is navigation to the correct/intended destination?
  4. Is the flow to destination (page to page) logical?
  5. Is the flow within each page top to bottom, left to right?
  6. Is there a logical way to return?
  7. Are the business steps within the process clear or mapped?
  8. Are navigation standards followed?
Ease of Use -
  1. Are help facilities provided as appropriate?
  2. Are selection options clear?
  3. Are ADA standards followed?
  4. Is the terminology appropriate to the intended audience?
  5. Is there minimal scrolling and resizable screens?
  6. Do menus load first?
  7. Do graphics have reasonable load times?
  8. Are there multiple paths through site (search options) that are user chosen?
  9. Are messages understandable?
  10. Are confirmation messages available as appropriate?
Presentation of Information -
  1. Are fonts consistent within functionality?
  2. Are the company display standards followed?
  3. Are standards followed for:
    - Logos
    - Font size
    - Colors
    - Scrolling
    - Object use
  4. Are legal requirements met?
  5. Is content sequenced properly?
  6. Are web-based colors used?
  7. Is there appropriate use of white space?
  8. Are tools provided (as needed) in order to access the information?
  9. Are attachments provided in a static format?
  10. Is spelling and grammar correct?
  11. Are alternative presentation options available (for limited browsers or performance issues)?
How to interpret/Use Info -
  1. Is terminology appropriate to the intended audience?
  2. Are clear instructions provided?
  3. Are there help facilities?
  4. Are there appropriate external links?
  5. Is expanded information provided on services and products? (why and how)
  6. Are multiple views/layouts available?

Reference - http://testinglink.in/topics/web-testing-checklist-about-usability

Sunday, April 17, 2011

An approach for Security Testing of Web Applications

As more and more vital data is stored in web applications and the number of transactions on the web increases, proper security testing of web applications is becoming very important. Security testing is the process that determines that confidential data stays confidential (i.e. it is not exposed to individuals/ entities for which it is not meant) and users can perform only those tasks that they are authorized to perform (e.g. a user should not be able to deny the functionality of the web site to other users, a user should not be able to change the functionality of the web application in an unintended way etc.).

Some key terms used in security testing -

Before we go further, it will be useful to be aware of a few terms that are frequently used in web application security testing:

What is “Vulnerability”?
This is a weakness in the web application. The cause of such a “weakness” can be bugs in the application, an injection (SQL/ script code) or the presence of viruses.
What is “URL manipulation”?
Some web applications communicate additional information between the client (browser) and the server in the URL. Changing some information in the URL may sometimes lead to unintended behavior by the server.
What is “SQL injection”?
This is the process of inserting SQL statements through the web application user interface into some query that is then executed by the server.
What is “XSS (Cross Site Scripting)”?
When a user inserts HTML/ client-side script in the user interface of a web application and this insertion is visible to other users, it is called XSS.
What is “Spoofing”?
The creation of hoax look-alike websites or emails is called spoofing.

Security testing approach:

In order to perform a useful security test of a web application, the security tester should have good knowledge of the HTTP protocol. It is important to have an understanding of how the client (browser) and the server communicate using HTTP. Additionally, the tester should at least know the basics of SQL injection and XSS. Hopefully, the number of security defects present in the web application will not be high. However, being able to accurately describe the security defects with all the required details to all concerned will definitely help.

1. Password cracking:
The security testing of a web application can be kicked off with “password cracking”. In order to log in to the private areas of the application, one can either guess a username/password or use a password cracker tool. Lists of common usernames and passwords are available along with open source password crackers. If the web application does not enforce a complex password (e.g. one with letters, numbers and special characters, and of at least a required length), it may not take very long to crack the username and password.
If the username or password is stored in cookies without encryption, an attacker can use various methods to steal the cookies and then the information stored in them, such as the username and password. For more details see the article on "Website cookie testing".
2. URL manipulation through HTTP GET methods:
The tester should check if the application passes important information in the querystring. This happens when the application uses the HTTP GET method to pass information between the client and the server. The information is passed in parameters in the querystring. The tester can modify a parameter value in the querystring to check if the server accepts it.
Via an HTTP GET request, user information is passed to the server for authentication or for fetching data. An attacker can manipulate every input variable passed in such a GET request in order to obtain restricted information or to corrupt the data. In such conditions, any unusual behavior by the application or web server is a doorway for the attacker to get into the application (a minimal probe sketch follows).
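
A minimal sketch of such a check, assuming a hypothetical page and 'userid' parameter; the idea is simply to resend the request with tampered parameter values and inspect how the server responds.

    import urllib.error
    import urllib.request

    # Hypothetical GET request whose 'userid' parameter we tamper with.
    BASE = "http://www.examplesite.com/account?userid="

    def probe(userid):
        try:
            with urllib.request.urlopen(BASE + userid, timeout=10) as resp:
                return resp.status, len(resp.read())
        except urllib.error.HTTPError as err:
            return err.code, 0

    if __name__ == "__main__":
        # Compare the response for our own id with responses for manipulated ids.
        for value in ["123", "124", "0", "-1", "999999"]:
            status, size = probe(value)
            print("userid=%s -> HTTP %s, %d bytes" % (value, status, size))
        # A 200 response with real content for an id we should not be able to see
        # indicates a potential URL manipulation vulnerability.
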
3. SQL Injection:
The next thing that should be checked is SQL injection. Entering a single quote (‘) in any textbox should be rejected by the application. Instead, if the tester encounters a database error, it means that the user input is inserted in some query which is then executed by the application. In such a case, the application is vulnerable to SQL injection.
SQL injection attacks are very critical, as an attacker can get vital information from the server database. To check for SQL injection entry points into your web application, find the places in your code base where direct MySQL queries are executed against the database using user input.
If user input data is placed directly into SQL queries, an attacker can inject SQL statements or fragments of SQL statements as user input to extract vital information from the database. Even if the attacker only succeeds in crashing the application, the SQL error shown in the browser may give the attacker the information they are looking for. Special characters in user input should be handled/escaped properly in such cases (a minimal first-pass check is sketched below).
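
A minimal sketch of that first-pass check, assuming a hypothetical search form endpoint and field name; it simply submits input containing a single quote and looks for tell-tale database error text in the response.

    import urllib.parse
    import urllib.request

    # Hypothetical form endpoint and field name, used only for illustration.
    FORM_URL = "http://www.examplesite.com/search"
    ERROR_SIGNS = ["sql syntax", "mysql", "odbc", "unclosed quotation mark"]

    def submit(value):
        # Post the value as form data and return the response body in lower case.
        data = urllib.parse.urlencode({"query": value}).encode()
        with urllib.request.urlopen(FORM_URL, data=data, timeout=10) as resp:
            return resp.read().decode(errors="ignore").lower()

    if __name__ == "__main__":
        body = submit("O'Brien'")   # input containing single quotes
        if any(sign in body for sign in ERROR_SIGNS):
            print("Possible SQL injection: database error text leaked to the browser")
        else:
            print("No obvious database error; deeper manual testing is still needed")
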
4. Cross Site Scripting (XSS):
The tester should additionally check the web application for XSS (cross-site scripting). Any HTML (e.g. <HTML>) or script (e.g. <SCRIPT>) should not be accepted by the application. If it is, the application can be prone to an attack by cross-site scripting.
An attacker can use this method to execute a malicious script or URL on the victim’s browser. Using cross-site scripting, an attacker can use scripts such as JavaScript to steal user cookies and the information stored in them.
Many web applications accept some user information and pass it in variables between different pages.
E.g.: http://www.examplesite.com/index.php?userid=123&query=xyz
An attacker can easily pass malicious input or a <script> tag as the ‘query’ parameter, which can expose important user/server data in the browser (a minimal reflected-XSS check is sketched below).
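
A minimal sketch of such a reflected-XSS check, reusing the hypothetical page and parameters from the example URL above; it sends a harmless marker script and checks whether it comes back unescaped.

    import urllib.parse
    import urllib.request

    # Page and parameters taken from the example URL above; the payload is a harmless marker.
    PAGE = "http://www.examplesite.com/index.php"
    PAYLOAD = "<script>alert('xss-test')</script>"

    def reflected(payload):
        query = urllib.parse.urlencode({"userid": "123", "query": payload})
        with urllib.request.urlopen(PAGE + "?" + query, timeout=10) as resp:
            return payload in resp.read().decode(errors="ignore")

    if __name__ == "__main__":
        if reflected(PAYLOAD):
            print("Payload reflected unescaped: the page may be vulnerable to XSS")
        else:
            print("Payload not reflected verbatim: it is escaped, filtered or not echoed")
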
Important: During security testing, the tester should be very careful not to modify any of the following:
• Configuration of the application or the server
• Services running on the server
• Existing user or customer data hosted by the application
Additionally, a security test should be avoided on a production system.
The purpose of the security test is to discover the vulnerabilities of the web application so that the developers can then remove these vulnerabilities from the application and make the web application and data safe from unauthorized actions.

Reference - http://www.softwaretestinghelp.com/security-testing-of-web-applications/

Friday, April 15, 2011

Difference between various Specifications Documents – For Test Design, Test Cases & Test Procedures

IEEE 829 standard prescribes many specifications related documents. Three such documents are -
1. Test Design Specifications
2. Test Case Specifications
3. Test Procedure Specifications

Let us go a bit deeper into the salient features of each of these documents, all of which are crucially important in any testing effort.

1. Test Design Specification:

“A document specifying the test conditions (coverage items) for a test item, the detailed test approach and identifying the associated high level test cases.”

The objective of compiling a test design specification is to identify the set of features, or combination of features, to be tested and to identify the group of test cases that will adequately test those features. In addition, it contains any refinements made to the approach described in the test plan.

The test design specification consists of the following essential parts:

  1. Test design specification identifier: A unique identifier is to be allocated so that the test design specification document can be distinguished from all other documents.
  2. Features to be tested: It describes the test items and the features that are the object of this test design specification.
  3. Approach refinements: It describes the test techniques to be adopted for this test design.
  4. Test identification: It describes a comprehensive list of test cases associated with this test design. It provides a unique identifier and a short description for every test case.
  5. Acceptance criteria: It describes the criteria to confirm as to whether each feature has passed or failed during testing.
2. Test Case Specification:

“A document specifying a set of test cases (objective, inputs, test actions, expected results, and execution preconditions) for a test item.”

The objective of compiling the test case specifications is to specify in detail each test case listed in the test design specification.

The test case specification consists of the following essential parts:

  1. Test case specification identifier: A unique identifier so that this document can be distinguished from all other documents.
  2. Test items: Identifies the items and features to be tested by the particular test case.
  3. Input specifications: It describes details of each & every input required by the particular test case.
  4. Output specifications: It describes each output expected after executing the particular test case.
  5. Environmental needs: It describes any special hardware, software, facilities, etc. required for the execution of the particular test case that were not listed in its associated test design specification.
  6. Special procedural requirements: It describes any special setup, execution, or cleanup procedures unique to the particular test case.
  7. Inter-case dependencies: It describes a comprehensive list of all test cases that must be executed prior to the particular test case.
3. Test Procedure Specification:

“A document specifying a sequence of actions for the execution of a test. Also known as test script or manual test script.”

The objective of compiling the test procedure specifications is to specify the steps for executing a test case and the process for determining whether the software passed or failed the test.

The test procedure specification consists of the following essential parts:

  1. Test procedure specification identifier: A unique identifier is to be allocated so that the test procedure specification document can be distinguished from all other documents.
  2. Objective: It describes the objective of the test procedure and its corresponding test cases.
  3. Special requirements: It describes a comprehensive list of all special requirements for the execution of the particular test procedure.
  4. Procedure steps: It describes a comprehensive list of all steps of the procedure. Possible steps may consist of the following:
    • Set up
    • Start
    • Proceed
    • Measure
    • Shut Down
    • Restart
    • Stop
    • Wind up

Reference -
ISTQB Glossary of Testing Terms 2.1
http://www.softwaretestinggenius.com/articaldetails.php?mode=details&qry=764&parent=125

Wednesday, April 13, 2011

Six functional tests to ensure software quality

Six types of functional testing can be used to ensure the quality of the end product. Understand these testing types and scale the execution to match the risk to the project.

1. Ensure every line of code executes properly with Unit Testing.

Unit testing is the process of testing each unit of code in a single component. This form of testing is carried out by the developer as the component is being developed. The developer is responsible for ensuring that each detail of the implementation is logically correct. Unit tests are normally discussed in terms of the type of coverage they provide:
Function coverage: Each function/method executed by at least one test case.
Statement coverage: Each line of code covered by at least one test case (need more test cases than above).
Path coverage: Every possible path through code covered by at least one test case (need plenty of test cases).
Unit tests allow developers to continually ensure that a unit of code does what is intended even as associated units change. As the software evolves, unit tests are modified, serving as an up-to-date form of documentation.
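
As a small illustration of these coverage levels, consider a hypothetical discount function and a unit test for it (the names and rules are made up). Hitting both branches gives statement coverage; path coverage would require exercising every combination of conditions.

    import unittest

    def discount(order_total, is_member):
        # Hypothetical unit under test: members get 10% off orders of 100 or more.
        if order_total >= 100 and is_member:
            return order_total * 0.9
        return order_total

    class DiscountTest(unittest.TestCase):
        def test_member_large_order(self):
            self.assertAlmostEqual(discount(200, True), 180)   # discounted branch

        def test_non_member(self):
            self.assertEqual(discount(200, False), 200)        # undiscounted branch

        def test_small_order(self):
            self.assertEqual(discount(50, True), 50)           # below the 100 boundary

    if __name__ == "__main__":
        unittest.main()
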

2. Ensure every function produces its expected outcome with Functional Testing.

Functional testing addresses concerns surrounding the correct implementation of functional requirements. Commonly referred to as black box testing, this type of testing requires no knowledge of the underlying implementation.
Functional test suites are created from requirement use cases, with each scenario becoming a functional test. As a component is implemented, the respective functional test is applied to it after it has been unit tested.
For many projects, it is unreasonable to test every functional aspect of the software. Instead, define functional testing goals that are appropriate for the project. Prioritize critical and widely used functions and include other functions as time and resources permit.
For detailed information on how to correctly develop use cases to support functional testing, refer to the Info-Tech Advisor research note, "Use Cases: Steer Clear of the Pitfalls."

3. Ensure all functions combine to deliver the desired business result with System Testing.

System testing executes end-to-end functional tests that cross software units, helping to realize the goal of ensuring that components combine to deliver the desired business result. In defining the project's system testing goals, focus on those scenarios that require critical units to integrate.
Also, consider whether all subsystems should be tested first or if all layers of a single subsystem should be tested before being combined with another subsystem.
Combining the various components together in one swift move should be avoided. The issue with this approach is the difficulty in localizing error. Components should be integrated incrementally after each has been tested in isolation.

4. Ensure new changes did not adversely affect other parts of the system with Regression Testing.

Regression testing ensures code modifications have not inadvertently introduced bugs into the system or changed existing functionality. Goals for regression testing should include re-running tests from the original unit, functional and system test phases to demonstrate that existing functionality behaves as intended.
Determining when regression testing is sufficient can be difficult. Although it is not desirable to test the entire system again, critical functionality should be tested regardless of where the modification occurred. Regression testing should be done frequently to ensure a baseline software quality is maintained.

5. Ensure the system integrates with and does not adversely affect other enterprise systems with System Integration Testing.

System integration testing is a process that assesses the software's interoperability and cooperation with other applications. Define testing goals that will exercise required communication. (It is fruitless to test interaction between systems that will not collaborate once the developed system is installed.) This is done using process flows that encapsulate the entire system.
The need for a developed system to coexist with existing enterprise applications necessitates developing testing goals that can uncover faults in their integration. In the case that the new system is standalone software and there is no requirement for compatibility with any other enterprise system, system integration testing can be ignored.

6. Ensure the customer is satisfied with the system with Acceptance Testing.

Acceptance testing aims to test how well users interact with the system, that it does what they expect and is easy to use. Although it is the final phase of testing before software deployment, the tests themselves should be defined as early as possible in the SDLC. Early definition ensures customer expectations are set appropriately and confirms for designers that what they are building will satisfy the end user's requirements. To that end, acceptance test cases are developed from user requirements and are validated in conjunction with actual end users of the system. The process results in acceptance or rejection of the final product.

Reference - http://searchsoftwarequality.techtarget.com/report/Six-functional-tests-to-ensure-software-quality

Sunday, April 10, 2011

How a Good Bug Hunter Prioritizes his Bug Hunting Activities in Software Testing

Let us first understand what a bug hunter is.

A bug hunter is an experienced & enthusiastic exploratory tester. Good bug hunters usually do the following:
  1. Do initial exploratory testing of a suspect area, to develop ideas for more detailed attacks that can be performed by less experienced testers.
  2. Explore an area that is allegedly low risk - can he quickly find bugs that would lead to reassessment of the risk?
  3. Troubleshoot key areas that seem prone to irreproducible bugs.
  4. Find critical bugs that will convince the project manager to slip a (premature) release date.

How to prioritize the bug hunting activities?

Generally the mission of a good bug hunter is finding bugs that are important (as opposed to insignificant) and finding them quickly. If so, what does this mean in terms of the tests that are run?
You can use following suggestions to prioritize bug hunting in your software testing effort.
  1. Test things that are changed before things that are the same. Fixes and updates mean fresh risk.
  2. Test core functions before contributing functions. Test the critical and the popular things that the product does. Test the functions that make the product what it is.
  3. Test capability before reliability. Test whether each function can work at all before going deep into the examination of how any one function performs under many different conditions.
  4. Test common situations before esoteric situations. Use popular data and scenarios of use.
  5. Test common threats before exotic threats. Test with the most likely stress and error situations.
  6. Test for high-impact problems before low-impact problems. Test the parts of the product that would do a lot of damage in case of failure.
  7. Test the most wanted areas before areas not requested. Test any areas and for any problems that are of special interest to someone else on the team.
Conclusion: You will also find important problems sooner if you know more about the product, the software and hardware it must interact with, and the people who will use it. Study these thoroughly.


Reference - http://www.softwaretestinggenius.com/articalDetails.php?qry=960

Tuesday, April 5, 2011

Software Test Process / STLC

I) Test Planning

(Primary Role: Test Lead/Team Lead)

Input:/Reference:
  1. Requirements specification
  2. Test Strategy
  3. Project plan
  4. Use cases/design docs/Prototype screen
  5. Process Guidelines docs
Templates:
  1. Review Report
  2. Test Plan
Roles:
  1. Test Lead/Team Lead: Test Planning
  2. Test Engineers: Contribution to Test plan
  3. BA: Clarifications on Requirements
Tasks:
  1. Understanding & Analyzing the Requirements
  2. Test Strategy Implementation
  3. Test Estimations (Time, Resources-Environmental, Human, Budget)
  4. Risk Analysis
  5. Team formation
  6. Configuration management plan
  7. Test Plan Documentation
  8. Defining the Test Environment set-up
Output:
          Test Plan Document


II) Test Design



Input:/Reference:
  1. Requirements specification
  2. Test Plan
  3. Use cases/design docs/Prototype screen
  4. Process Guidelines docs
Templates:
  1. Test scenarios
  2. Test case
  3. Test data
Roles:
  1. Test Engineers: Test case documentation
  2. Test Lead/Team Lead: Guidance, monitoring & Control
  3. BA: Clarifications on Requirements
Tasks:
  1. Creating Test scenarios
  2. Test case documentation
  3. Test data collection
Output:
  1. Test case Documents
  2. Test Data


III) Test Execution



Input:/Reference:
  1. Requirements specification
  2. Test Plan
  3. Test Case docs
  4. Test data
  5. Test Environment
Templates:
  1. Defect Report
  2. Test Report
Roles:
  1. Test engineers: Test execution
  2. Test Lead: Guidance, monitoring & Control
  3. BA: Clarifications on Requirements
  4. System Administrator/Network Administration: Test Environment set-up
Tasks:
  1. Forming Test Batches
  2. Verifying Test Environment set-up
  3. Test Execution
  4. Test reporting
  5. Defect Reporting
  6. Regression Testing
Output:
  1. Test Reports
  2. Opened/Closed Defect Reports


IV) Test Closure



Input:/Reference:
  1. Requirements
  2. Test Plan
  3. Test Reports
  4. Opened/Closed Defect Reports
Templates:
          Test Summary Report

Roles:
  1. Test Lead: Deciding when to stop testing & creating the Test Summary Report
  2. Testers: Contribution
Tasks:
  1. Evaluating Exit criteria
  2. Collecting all facts from Testing activities
  3. Sending Test deliverables to the Customer
  4. Improvement suggestions for future projects
Output:
  1. Test Summary Report
  2. Test Deliverables (Test Plan, Test scenarios, Test cases, Test Data, Test Reports, Opened/Closed defect reports, Test Summary Report)


Reference - http://www.gcreddy.com/2010/01/synchronization.html

Monday, March 28, 2011

Top 20 practical software testing tips you should read before testing any application.

01. Learn to analyze your test results thoroughly. Do not ignore the test result. The final test result may be ‘pass’ or ‘fail’ but troubleshooting the root cause of ‘fail’ will lead you to the solution of the problem. Testers will be respected if they not only log the bugs but also provide solutions.

02. Learn to maximize the test coverage every time you test any application. Though 100 percent test coverage might not be possible, you can always try to get close to it.

03. To ensure maximum test coverage, break your application under test (AUT) into smaller functional modules. Write test cases for such individual unit modules. Also, if possible, break these modules into smaller parts.
E.g.: Let’s assume you have divided your website application into modules and ‘accepting user information’ is one of them. You can break this ‘User information’ screen into smaller parts for writing test cases: parts like UI testing, security testing, functional testing of the ‘User information’ form, etc. Apply all form field type and size tests, and negative and validation tests, on the input fields, and write all such test cases for maximum coverage.

04. While writing test cases, write test cases for the intended functionality first, i.e. for valid conditions according to the requirements. Then write test cases for invalid conditions. This will cover expected as well as unexpected behavior of the application under test.

05. Think positive. Start testing the application with the intent of finding bugs/errors. Don’t assume beforehand that there will not be any bugs in the application. If you test the application with the intention of finding bugs, you will definitely succeed in finding even the subtle ones.

06. Write your test cases in the requirement analysis and design phase itself. This way you can ensure all the requirements are testable.

07. Make your test cases available to developers prior to coding. Don’t hold your test cases back until the final application release, thinking that you can log more bugs that way. Let developers analyze your test cases thoroughly to develop a quality application. This will also save re-work time.

08. If possible, identify and group your test cases for regression testing. This will ensure quick and effective manual regression testing.

09. Applications requiring critical response times should be thoroughly tested for performance. Performance testing is a critical part of many applications, but in manual testing it is the part most often ignored by testers, due to the lack of the large data volumes that performance testing requires. Find ways to test your application for performance. If it is not possible to create test data manually, write some basic scripts to create test data for the performance test, or ask the developers to write one for you (a minimal sketch follows).
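
A minimal sketch of such a data-generation script, assuming a hypothetical customer CSV format that the application under test can import:

    import csv
    import random

    # Generate a large volume of hypothetical customer rows for a performance test.
    ROWS = 100000
    FIRST_NAMES = ["Asha", "John", "Mei", "Olu", "Sara"]
    CITIES = ["Pune", "London", "Austin", "Nairobi", "Osaka"]

    with open("perf_test_customers.csv", "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["customer_id", "name", "city", "balance"])
        for i in range(1, ROWS + 1):
            writer.writerow([
                i,
                random.choice(FIRST_NAMES) + " Tester",
                random.choice(CITIES),
                round(random.uniform(0, 10000), 2),  # random account balance
            ])
    print("Wrote", ROWS, "rows of test data")
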

10. Programmers should not test their own code. As discussed in our previous post, basic unit testing of the developed application should be enough for developers before releasing the application to testers. But you (testers) should not force developers to release the product for testing; let them take their own time. Everyone from lead to manager knows when the module/update is released for testing and can estimate the testing time accordingly. This is a typical situation in an agile project environment.

11. Go beyond requirement testing. Test the application for what it is not supposed to do.

12. While doing regression testing, use the previous bug graph (bug graph – number of bugs found against time for different modules). This module-wise bug graph can be useful for predicting the most bug-prone parts of the application.

13. Note down the new terms and concepts you learn while testing. Keep a text file open while testing an application and note down the testing progress and your observations in it. Use these notes while preparing the final test release report. This good habit will help you provide a complete, unambiguous test report and release details.

14. Many times testers or developers make changes in the code base of the application under test. This is a required step in a development or testing environment to avoid execution of live transaction processing, as in banking projects. Note down all such code changes made for testing purposes, and at the time of final release make sure you have removed all of them from the final client-side deployment files.

15. Keep developers away from the test environment. This is a required step for detecting any configuration changes missing from the release or deployment document. Sometimes developers make system or application configuration changes but forget to mention them in the deployment steps. If developers don’t have access to the testing environment, they will not make any such changes accidentally on the test environment, and these missing steps can be caught in the right place.

16. It’s a good practice to involve testers right from the software requirement and design phase. This way testers can gain knowledge of the application's dependencies, resulting in detailed test coverage. If you are not being asked to be part of this development cycle, request your lead or manager to involve your testing team in all decision-making processes or meetings.

17. Testing teams should share best testing practices and experience with other teams in their organization.

18. Increase your conversations with developers to learn more about the product. Whenever possible, communicate face-to-face to resolve disputes quickly and to avoid misunderstandings. But once you understand the requirement or resolve a dispute, make sure to confirm the same in writing, e.g. by email. Do not keep anything only verbal.

19. Don’t run out of time to do high priority testing tasks. Prioritize your testing work from high to low priority and plan your work accordingly. Analyze all associated risks to prioritize your work.

20. Write clear, descriptive, unambiguous bug report. Do not only provide the bug symptoms but also provide the effect of the bug and all possible solutions.

Reference -
http://ezinearticles.com/?The-Top-20-Practical-Software-Testing-Tips&id=5888971

Sunday, March 27, 2011

Top 10 Negative Test Cases

01. Embedded Single Quote –
Most SQL-based database systems have issues when users store information that contains a single quote (e.g. John's car). For each screen that accepts alphanumeric data entry, try entering text that contains one or more single quotes.

02. Required Data Entry –
Your functional specification should clearly indicate fields that require data entry on screens. Test each field on the screen that has been indicated as being required to ensure it forces you to enter data in the field.

03. Field Type Test –
Your functional specification should clearly indicate fields that require specific data entry requirements (date fields, numeric fields, phone numbers, zip codes, etc). Test each field on the screen that has been indicated as having special types to ensure it forces you to enter data in the correct format based on the field type (numeric fields should not allow alphabetic or special characters, date fields should require a valid date, etc)

04. Field Size Test –
Your functional specification should clearly indicate the number of characters you can enter into a field (for example, the first name must be 50 characters or fewer). Write test cases to ensure that you can only enter the specified number of characters. Preventing the user from entering more characters than allowed is more elegant than giving an error message after they have already entered too many characters.

05. Numeric Bounds Test –
For numeric fields, it is important to test the lower and upper bounds. For example, if you are calculating interest charged to an account, a negative interest amount should never be applied to an account that earns interest, so try testing the field with a negative number. Likewise, if your functional specification requires that a field be in a specific range (e.g. from 10 to 50), try entering 9 or 51; the entry should fail with a graceful message.
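
A minimal sketch of the classic boundary values for a 10-to-50 range follows; validate_range here is only a stand-in for the real field or API under test.

# Minimal sketch: boundary-value cases for a field specified as 10..50.
# Replace validate_range with a call into the real field/API under test.
def validate_range(value, low=10, high=50):
    return low <= value <= high

boundary_cases = {
    9:  False,   # just below the lower bound - should be rejected gracefully
    10: True,    # lower bound
    50: True,    # upper bound
    51: False,   # just above the upper bound
    -1: False,   # negative value, e.g. a negative interest amount
}

for value, expected in boundary_cases.items():
    assert validate_range(value) == expected, f"bound check failed for {value}"
print("All boundary cases behaved as specified.")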

06. Numeric Limits Test –
Most database systems and programming languages allow numeric items to be declared as integers or long integers. Normally, a 16-bit integer has a range of -32,768 to 32,767 and a long (32-bit) integer ranges from -2,147,483,648 to 2,147,483,647. For numeric fields that do not have specified bounds, test around these limits to ensure the application does not hit a numeric overflow error.
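
The sketch below simply generates these limit values (plus one-past-the-limit probes) so they can be fed into the field or API under test; the exact expected behaviour depends on your specification.

# Minimal sketch: generating the classic 16-bit and 32-bit signed limits to feed
# into a numeric field that has no explicit bounds in the specification.
INT16_MIN, INT16_MAX = -2**15, 2**15 - 1     # -32,768 .. 32,767
INT32_MIN, INT32_MAX = -2**31, 2**31 - 1     # -2,147,483,648 .. 2,147,483,647

overflow_probes = [
    INT16_MIN, INT16_MAX, INT16_MAX + 1,
    INT32_MIN, INT32_MAX, INT32_MAX + 1,
]

for probe in overflow_probes:
    # In a real test, submit each probe to the field/API and assert that the
    # application either stores it correctly or rejects it with a clear message.
    print(f"probe value: {probe:>12}")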

07. Date Bounds Test –
For date fields, it is important to test for lower and upper bounds. For example, if you are checking a birth date field, it is probably a good bet that the person's birth date is no older than 150 years ago. Likewise, their birth date should not be a date in the future.

08. Date Validity –
For date fields, it is important to ensure that invalid dates are not allowed (04/31/2007 is an invalid date). Your test cases should also check leap-year handling (a year divisible by 4 is a leap year, except that century years are leap years only when divisible by 400, so 2000 was a leap year but 1900 was not).
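
A minimal sketch using only Python's standard library: strptime rejects impossible dates such as 04/31/2007, and calendar.isleap applies the full Gregorian leap-year rule.

# Minimal sketch: invalid-date and leap-year probes using only the standard library.
import calendar
from datetime import datetime

def is_valid_date(text, fmt="%m/%d/%Y"):
    try:
        datetime.strptime(text, fmt)
        return True
    except ValueError:
        return False

print(is_valid_date("04/31/2007"))   # False - April has only 30 days
print(is_valid_date("02/29/2000"))   # True  - 2000 is divisible by 400
print(is_valid_date("02/29/1900"))   # False - 1900 is a century year not divisible by 400
print(calendar.isleap(2012), calendar.isleap(2100))   # True, False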

09. Web Session Testing –
Many web applications rely on the browser session to keep track of the person logged in, settings for the application, etc. Most screens in a web application are not designed to be launched without first logging in. Create test cases to launch web pages within the application without first logging in. The web application should ensure it has a valid logged in session before rendering pages within the application.
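
A minimal sketch of such a test, assuming the third-party requests package; the deep-link URL and the expected responses (a redirect to the login page, or 401/403) are assumptions about the application under test.

# Minimal sketch: requesting a protected page with no session.
# Assumes the third-party "requests" package; the URL and the expected behaviour
# (redirect to a login page, or 401/403) are assumptions about your application.
import requests

PROTECTED_URL = "https://example.com/account/settings"   # hypothetical deep link

response = requests.get(PROTECTED_URL, allow_redirects=False, timeout=10)

assert response.status_code in (301, 302, 401, 403), (
    f"Expected a redirect to login or an auth error, got {response.status_code}"
)
print("Unauthenticated access was blocked:", response.status_code,
      response.headers.get("Location", ""))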

10. Performance Changes –
As you release new versions of your product, you should have a set of performance tests that identify the speed of your screens (screens that list information, screens that add/update/delete data, etc). Your test suite should include test cases that compare the prior release's performance statistics to the current release. This can aid in identifying performance problems introduced by code changes in the current release.
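
As a sketch of the comparison step, assuming the timings have already been collected and a 20% slowdown is the (illustrative) threshold for flagging a regression:

# Minimal sketch: comparing current screen timings against a saved baseline.
# The timings and the 20% regression threshold are illustrative assumptions.
baseline_ms = {"list_orders": 850, "add_order": 430, "delete_order": 310}
current_ms  = {"list_orders": 910, "add_order": 980, "delete_order": 300}

THRESHOLD = 1.20   # flag anything more than 20% slower than the prior release

for screen, old in baseline_ms.items():
    new = current_ms[screen]
    if new > old * THRESHOLD:
        print(f"REGRESSION: {screen} went from {old} ms to {new} ms")
    else:
        print(f"ok: {screen} {old} ms -> {new} ms")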

Reference - http://www.qaguild.com/weekly_archives.php?UID=21

Twenty Essential Firefox Add-ons For Testing

01. Firesizer 1.2
Provides a menu and status bar to resize the window dimensions to a specific size. Unlike other similar extensions, this one sets the size of the *entire window*, not just the HTML area. This add-on is extremely useful if you want to test how your application will look in different size windows.

02. W3C Page Validator 3.0.0
Validates a page using the W3C Markup Validation Service. Adds an option to the right-click context menu and to the Tools menu to allow for easy validation of the current page. Opens the results in a new tab. This is a simple extension that will work only for online pages. Depending on your context, if your organization is committed to creating W3C-compliant web applications, this might be very handy.

03. SQL Injection 1.2
This is an excellent tool to help developers identify SQL injection vulnerabilities. This add-on transforms checkboxes, radio buttons and select elements into text input boxes, leaving all form fields free for you to edit their values.

04. QuickRestart 1.1.6
If you need to restart Firefox to test cookies or sessions, or for any other reason, this little button in your toolbar will do it with a mouse click. This simple extension adds a "Restart Firefox" item to the "File" menu. You can also use the Ctrl+Alt+R keyboard shortcut, or the included toolbar button. To use the toolbar button: right-click on the toolbar -> Customize... then drag the Restart button to the toolbar. This is a very simple utility, but it can be extremely useful.

05. Firebug 1.7.0
Firebug integrates with Firefox to put a wealth of web development tools at your fingertips while you browse. You can edit, debug, and monitor CSS, HTML, and JavaScript live in any web page. Firebug is a must-have if you are working in the web application testing domain.

06. Regular Expressions Tester 3.2.11
This small add-on allows you to test your regular expressions. The tool includes options such as case-sensitive, global and multiline search, color highlighting of matches and of special characters, a replacement function including backreferences, auto-closing of brackets, testing as you type, and saving and managing of expressions.

07. HttpFox 0.8.8
It is an HTTP analyzer add-on for Firefox. This add-on is an amazing tool if you want to dig deeper into HTTP requests / responses and analyze all the traffic. Using this add-on, you can get information about request / response headers, sent and received cookies, query string parameters, POST parameters and the response body.
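
If you also want to script the same kind of inspection outside the browser, a minimal sketch with the third-party requests package might look like this (the URL and parameters are purely illustrative):

# Minimal sketch: inspecting request/response details outside the browser.
# Assumes the third-party "requests" package; URL and parameters are illustrative.
import requests

response = requests.get("https://example.com/search",
                        params={"q": "testing"}, timeout=10)

print("Status:", response.status_code)
print("Request headers:", dict(response.request.headers))
print("Response headers:", dict(response.headers))
print("Cookies received:", response.cookies.get_dict())
print("First 200 bytes of body:", response.text[:200])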

08. HackBar 1.6.0
It is a simple security audit / penetration test tool, useful for probing SQL injection and XSS holes and for checking site security.

09. Web Developer 1.1.9
The Web Developer extension adds various web developer tools to the browser. This one, along with Firebug, is a must-have extension if you are working in the web application domain. These extensions are complete toolsets and probably require separate articles to explain their capabilities.

10. Accessibar 0.7.8
It is an amazing add-on for testing the accessibility of any site. Using this add-on, you can change font size and line spacing, hide all images / Flash and so on. It also integrates a text-to-speech reader that reads the text under the mouse or the focused element.

11. Add N Edit Cookies 0.2.1.3
Very few applications do without cookies these days. This little add-on allows you to add / edit session and saved cookies and test how your application responds to changes in cookie settings.

12. LinkChecker 0.6.7
This is a very simple tool to check if there are any broken links on the web page. It highlights all the links in various colors to show if they are broken or not.

13. MeasureIt 0.4.7
If you have a design team that is very particular about the size of every element, then this little add-on can be used to test the height and width of all elements. It draws a ruler around any element and shows the element's height / width.

14. YSlow 2.1.0
YSlow analyzes web pages and tells you why they are slow, based on Yahoo!'s rules for high-performance websites.

15. User Agent Switcher 0.7.3
If your application behaves differently based on the user agent, or if it records this information, then this add-on can be very handy. It simply switches the user agent of the browser. Please note that changing the user agent will not make Firefox render pages like IE.

16. FireShot 0.88
FireShot can be handy when you want to take a screenshot of your web application. It is a Firefox extension that creates screenshots of web pages (entirely or just the visible part). It even allows you to write comments, highlight specific elements of your application and so on.

17. URLParams 2.2.0
This little add-on makes it convenient to analyze the GET and POST parameters of the current page in the sidebar. You can even change their values, add new parameters, etc.

18. Tamper Data 11.0.1
Tamper Data allows you to view and modify HTTP/HTTPS headers and post parameters. You can use it to test security of your web applications by modifying POST data and analyzing how your application will respond to those changes.

19. View Dependencies 0.3.3.2
View Dependencies adds a tab to the Page Info window, in which it lists all the files that were loaded to show the current page. This is an extremely useful add-on if you want to figure out which files / images etc. are downloaded for each page, how long they take, and other such details.

20. Flash Switcher 2.0.2
If you are involved in testing Flash applications, then this little add-on is extremely useful for testing them against various versions of the Flash player. This extension allows you to easily switch from one Flash player plug-in version to another, or to remove or save the currently installed plug-in (perhaps for testing the express install).




Reference - http://www.testinggeek.com/index.php/testing-articles/176-twenty-essential-firefox-addons-for-testing

Friday, March 25, 2011

Web Standards Checklist

The term web standards can mean different things to different people. For some, it is 'table-free sites', for others it is 'using valid code'. However, web standards are much broader than that. A site built to web standards should adhere to standards (HTML, XHTML, XML, CSS, XSLT, DOM, MathML, SVG etc) and pursue best practices (valid code, accessible code, semantically correct code, user-friendly URLs etc).

In other words, a site built to web standards should ideally be lean, clean, CSS-based, accessible, usable and search engine friendly.

About the checklist-

This is not an uber-checklist. There are probably many items that could be added. More importantly, it should not be seen as a list of items that must be addressed on every site that you develop. It is simply a guide that can be used:

• To show the breadth of web standards
• As a handy tool for developers during the production phase of websites
• As an aid for developers who are interested in moving towards web standards

The checklist:




* Quality of code :
01. Does the site use a correct Doctype?
02. Does the site use a Character set?
03. Does the site use Valid (X)HTML?
04. Does the site use Valid CSS?
05. Does the site use any CSS hacks?
06. Does the site use unnecessary classes or ids?
07. Is the code well structured?
08. Does the site have any broken links?
09. How does the site perform in terms of speed/page size?
10. Does the site have JavaScript errors?

* Degree of separation between content and presentation :
01. Does the site use CSS for all presentation aspects (fonts, colour, padding, borders etc)?
02. Are all decorative images in the CSS, or do they appear in the (X)HTML?

* Accessibility for users :
01. Are "alt" attributes used for all descriptive images?
02. Does the site use relative units rather than absolute units for text size?
03. Do any aspects of the layout break if font size is increased?
04. Does the site use visible skip menus?
05. Does the site use accessible forms?
06. Does the site use accessible tables?
07. Is there sufficient colour brightness/contrasts?
08. Is colour alone used for critical information?
09. Is there delayed responsiveness for dropdown menus (for users with reduced motor skills)?
10. Are all links descriptive (for blind users)?

* Accessibility for devices :
01. Does the site work acceptably across modern and older browsers?
02. Is the content accessible with CSS switched off or not supported?
03. Is the content accessible with images switched off or not supported?
04. Does the site work in text browsers such as Lynx?
05. Does the site work well when printed?
06. Does the site work well in Hand Held devices?
07. Does the site include detailed metadata?
08. Does the site work well in a range of browser window sizes?

* Basic Usability :
01. Is there a clear visual hierarchy?
02. Are heading levels easy to distinguish?
03. Does the site have easy to understand navigation?
04. Does the site use consistent navigation?
05. Are links underlined?
06. Does the site use consistent and appropriate language?
07. Do you have a sitemap page and contact page? Are they easy to find?
08. For large sites, is there a search tool?
09. Is there a link to the home page on every page in the site?
10. Are visited links clearly defined with a unique color?

* Site management :
01. Does the site have a meaningful and helpful 404 error page that works from any depth in the site?
02. Does the site use friendly URLs?
03. Do your URLs work without "www"?
04. Does the site have a favicon?

01. Quality of code

1.1 Does the site use a correct Doctype? :
A doctype (short for 'document type declaration') informs the validator which version of (X)HTML you're using, and must appear at the very top of every web page. Doctypes are a key component of compliant web pages: your markup and CSS won't validate without them.

1.2 Does the site use a Character set? :
If a user agent (eg. a browser) is unable to detect the character encoding used in a Web document, the user may be presented with unreadable text. This information is particularly important for those maintaining and extending a multilingual site, but declaring the character encoding of the document is important for anyone producing XHTML/HTML or CSS.

1.3 Does the site use Valid (X)HTML? :
Valid code will render faster than code with errors. Valid code will render better than invalid code. Browsers are becoming more standards compliant, and it is becoming increasingly necessary to write valid and standards compliant HTML.

1.4 Does the site use Valid CSS? :
You need to make sure that there aren't any errors in either your HTML or your CSS, since mistakes in either place can result in botched document appearance.

1.5 Does the site use any CSS hacks? :
Basically, hacks come down to personal choice, the amount of knowledge you have of workarounds, and the specific design you are trying to achieve.

1.6 Does the site use unnecessary classes or ids? :
I've noticed that developers learning new skills often end up with good CSS but poor XHTML. Specifically, the HTML code tends to be full of unnecessary divs and ids. This results in fairly meaningless HTML and bloated style sheets.

1.7 Is the code well structured? :
Semantically correct markup uses html elements for their given purpose. Well structured HTML has semantic meaning for a wide range of user agents (browsers without style sheets, text browsers, PDAs, search engines etc.)

1.8 Does the site have any broken links? :
Broken links can frustrate users and potentially drive customers away. Broken links can also keep search engines from properly indexing your site.
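
A minimal single-page sketch of such a check, assuming the third-party requests package and an illustrative start URL; a real crawl would also need deduplication, throttling, and handling of servers that reject HEAD requests.

# Minimal sketch of a broken-link check: collect hrefs from one page and
# verify each responds without a client/server error.
from html.parser import HTMLParser
from urllib.parse import urljoin
import requests

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

start_url = "https://example.com/"          # illustrative starting page
page = requests.get(start_url, timeout=10)
collector = LinkCollector()
collector.feed(page.text)

for href in collector.links:
    url = urljoin(start_url, href)
    status = requests.head(url, allow_redirects=True, timeout=10).status_code
    if status >= 400:
        print("BROKEN:", url, status)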

1.9 How does the site perform in terms of speed/page size? :
Don't make me wait... That's the message users give us in survey after survey. Even broadband users can suffer the slow-loading blues.

1.10 Does the site have JavaScript errors? :
Internet Explorer for Windows allows you to turn on a debugger that will pop up a new window and let you know there are JavaScript errors on your site. This is available under 'Internet Options' on the Advanced tab. Uncheck 'Disable script debugging'.

02. Degree of separation between content and presentation

2.1 Does the site use CSS for all presentation aspects (fonts, colour, padding, borders etc)?
Use style sheets to control layout and presentation.

2.2 Are all decorative images in the CSS, or do they appear in the (X)HTML?
The aim for web developers is to remove all presentation from the html code, leaving it clean and semantically correct.

03. Accessibility for users

3.1 Are "alt" attributes used for all descriptive images?
Provide a text equivalent for every non-text element

3.2 Does the site use relative units rather than absolute units for text size?
Use relative rather than absolute units in markup language attribute values and style sheet property values.

3.3 Do any aspects of the layout break if font size is increased?
Try this simple test. Look at your website in a browser that supports easy incrementation of font size. Now increase your browser's font size. And again. And again... Look at your site. Does the page layout still hold together? It is dangerous for developers to assume that everyone browses using default font sizes.

3.4 Does the site use visible skip menus?
A method shall be provided that permits users to skip repetitive navigation links.
Group related links, identify the group (for user agents), and, until user agents do so, provide a way to bypass the group. Blind visitors are not the only ones inconvenienced by too many links in a navigation area. Recall that a mobility-impaired person with poor adaptive technology might be stuck tabbing through that morass.

3.5 Does the site use accessible forms?
Forms aren't the easiest of things to use for people with disabilities. Navigating around a page with written content is one thing, hopping between form fields and inputting information is another.

3.6 Does the site use accessible tables?
For data tables, identify row and column headers... For data tables that have two or more logical levels of row or column headers, use markup to associate data cells and header cells.

3.7 Is there sufficient colour brightness/contrasts?
Ensure that foreground and background colour combinations provide sufficient contrast when viewed by someone having colour deficits.

3.8 Is colour alone used for critical information?
Ensure that all information conveyed with colour is also available without colour, for example from context or markup.
There are basically three types of colour deficiency: Deuteranope (a form of red/green colour deficit), Protanope (another form of red/green colour deficit) and Tritanope (a blue/yellow deficit - very rare).

3.9 Is there delayed responsiveness for dropdown menus?
Users with reduced motor skills may find dropdown menus hard to use if responsiveness is set too fast.

3.10 Are all links descriptive?
Link text should be meaningful enough to make sense when read out of context - either on its own or as part of a sequence of links. Link text should also be terse.

4. Accessibility for devices.

4.1 Does the site work acceptably across modern and older browsers?
Before starting to build a CSS-based layout, you should decide which browsers to support and to what level you intend to support them.

4.2 Is the content accessible with CSS switched off or not supported?
Some people may visit your site with either a browser that does not support CSS or a browser with CSS switched off. If content is structured well, this will not be an issue.

4.3 Is the content accessible with images switched off or not supported?
Some people browse websites with images switched off - especially people on very slow connections. Content should still be accessible for these people.

4.4 Does the site work in text browsers such as Lynx?
This is like a combination of images and CSS switched off. A text-based browser will rely on well structured content to provide meaning.

4.5 Does the site work well when printed?
You can take any (X)HTML document and simply style it for print, without having to touch the markup.

4.6 Does the site work well in Hand Held devices?
This is a hard one to deal with until hand held devices consistently support their correct media type. However, some layouts work better in current hand-held devices. The importance of supporting hand held devices will depend on target audiences.

4.7 Does the site include detailed metadata?
Metadata is machine-understandable information for the web.
Metadata is structured information that is created specifically to describe another resource. In other words, metadata is 'data about data'.

4.8 Does the site work well in a range of browser window sizes?
It is a common assumption amongst developers that average screen sizes are increasing. Some developers assume that the average screen size is now 1024px wide. But what about users with smaller screens and users with hand held devices? Are they part of your target audience and are they being disadvantaged?

05. Basic Usability

5.1 Is there a clear visual hierarchy?
Organise and prioritise the contents of a page by using size, prominence and content relationships.

5.2 Are heading levels easy to distinguish?
Use header elements to convey document structure and use them according to specification.

5.3 Is the site's navigation easy to understand?
Your navigation system should give your visitor a clue as to what page of the site they are currently on and where they can go next.

5.4 Is the site's navigation consistent?
If each page on your site has a consistent style of presentation, visitors will find it easier to navigate between pages and find information

5.5 Does the site use consistent and appropriate language?
The use of clear and simple language promotes effective communication. Overly elaborate language can be as hard to read as poor grammar, especially if the language used isn't the visitor's primary language.

5.6 Does the site have a sitemap page and contact page? Are they easy to find?
Most site maps fail to convey multiple levels of the site's information architecture. In usability tests, users often overlook site maps or can't find them. Complexity is also a problem: a map should be a map, not a navigational challenge of its own.

5.7 For large sites, is there a search tool?
While search tools are not needed on smaller sites, and some people will not ever use them, site-specific search tools allow users a choice of navigation options.

5.8 Is there a link to the home page on every page in the site?
Some users like to go back to a site's home page after navigating to content within a site. The home page becomes a base camp for these users, allowing them to regroup before exploring new content.

5.9 Are links underlined?
To maximise the perceived affordance of clickability, colour and underline the link text. Users shouldn't have to guess or scrub the page to find out where they can click.

5.10 Are visited links clearly defined?
Most important, knowing which pages they've already visited frees users from unintentionally revisiting the same pages over and over again.

06. Site management

6.1 Does the site have a meaningful and helpful 404 error page that works from any depth in the site?
You've requested a page - either by typing a URL directly into the address bar or by clicking on an out-of-date link - and you've found yourself in the middle of nowhere in cyberspace. A user-friendly website will give you a helping hand, while many others will simply do nothing, relying on the browser's built-in ability to explain what the problem is.

6.2 Does the site use friendly URLs?
Most search engines (with a few exceptions - namely Google) will not index any pages that have a question mark or other character (like an ampersand or equals sign) in the URL... what good is a site if no one can find it?
One of the worst elements of the web from a user interface standpoint is the URL. However, if they're short, logical, and self-correcting, URLs can be acceptably usable.

6.3 Does the site's URL work without "www"?
While this is not critical, and in some cases is not even possible, it is always good to give people the choice of both options. If a user types your domain name without the www and gets no site, this could disadvantage both the user and you.

6.4 Does the site have a favicon?
A Favicon is a multi-resolution image included on nearly all professionally developed sites. The Favicon allows the webmaster to further promote their site, and to create a more customized appearance within a visitor's browser.

Favicons are definitely not critical. However, if they are not present, they can cause 404 errors in your logs (site statistics). Browsers like IE will request them from the server when a site is bookmarked. If a favicon isn't available, a 404 error may be generated. Therefore, having a favicon could cut down on favicon specific 404 errors. The same is true of a 'robots.txt' file.


Reference - http://www.maxdesign.com.au/articles/checklist/

Friday, March 4, 2011

35 Useful Test Cases for Testing User Interfaces

1. Required Fields
If the screen requires data entry on a specific field, designers should identify the required fields with a red asterisk and generate a friendly warning if the data is left blank.

2. Data Type Errors
If the screen contains dates, numeric, currency or other specific data types, ensure that only valid data can be entered.

3. Field Widths
If the screen contains text boxes that allow data entry, ensure that the width of data entered does not exceed the width of the table field (e.g. a title that is limited to 100 characters in the database should not allow more than 100 characters to be entered from the user interface).

4. Onscreen Instructions
Any screen that is not self-explanatory to the casual user should contain onscreen instructions that aid the user.

5. Keep Onscreen Instructions Brief
While onscreen instructions are great, keep the wording informative, in layman’s terms, but concise.

6. Progress Bars
If the screen takes more than 5 seconds to render results, it should contain a progress bar so that the user understands the processing is continuing.

7. Same Document Opened Multiple Times
If the application opens the same document multiple times, it should append a unique number to the open document to keep one document from overwriting another. For example, if the application opens a document named Minutes.txt and it opens the same document for the same user again, consider having it append the time to the document or sequentially number it (Minutes2.txt or Minutes_032321.txt).
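
A minimal sketch of that naming rule; the timestamp suffix and the set of open documents are illustrative stand-ins for however the application tracks open files.

# Minimal sketch of the naming rule described above: append a timestamp (or a
# sequence number) when a document with the same name is already open.
import os
from datetime import datetime

def unique_name(filename, existing):
    if filename not in existing:
        return filename
    stem, ext = os.path.splitext(filename)
    return f"{stem}_{datetime.now():%H%M%S}{ext}"   # e.g. Minutes_032321.txt

open_documents = {"Minutes.txt"}
print(unique_name("Minutes.txt", open_documents))   # e.g. Minutes_143207.txt (varies)
print(unique_name("Agenda.txt", open_documents))    # Agenda.txt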

8. Cosmetic Inconsistencies
The screen look, feel, and design should match the other screens in your application. Creating and using a style guide is a great way to ensure consistency throughout your application.

9. Abbreviation Inconsistencies
If the screens contain abbreviations (e.g. Nbr for number, Amt for amount, etc), the abbreviations should be consistent for all screens in your application. Again, the style guide is key for ensuring this.

10. Save Confirmations
If the screen allows changing of data without saving, it should prompt users to save if they move to another record or screen.

11. Delete Confirmations
If a person deletes an item, it is a good idea to confirm the delete. However, if the user interface allows deleting several records in a row, in some cases developers should consider allowing them to ignore the confirmation as it might get frustrating to click the confirmation over and over again.

12. Type Ahead
If the user interface uses combo boxes (drop down lists), be sure to include type ahead (if there are hundreds of items in a list, users should be able to skip to the first item that begins with that letter when they type in the first letter).

13. Grammar and Spelling
Ensure the test cases look for grammar or spelling errors.

14. Table Scrolling
If the application lists information in table format and the data in the table extends past one page, the scrolling should scroll the data but leave the table headers intact.

15. Error Logging
If fatal errors occur as users use your application, ensure that the application writes those errors to a log file, the event viewer, or a database table for later review. Log the routine the error was in, the person logged on, and the date/time of the error.
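
A minimal sketch of such logging using Python's standard logging module; the file name and the extra fields are assumptions for illustration, not a prescribed format.

# Minimal sketch: logging fatal errors with the routine, the logged-on user and
# a timestamp, as the checklist item suggests.
import getpass
import logging

logging.basicConfig(
    filename="application_errors.log",
    level=logging.ERROR,
    format="%(asctime)s %(levelname)s user=%(user)s routine=%(funcName)s %(message)s",
)
log = logging.getLogger(__name__)

def post_order(order):
    try:
        raise ValueError("inventory record locked")     # simulated fatal error
    except ValueError:
        # logging.exception records the message plus the full traceback
        log.exception("order could not be posted", extra={"user": getpass.getuser()})

post_order({"id": 42})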

16. Error Messages
Ensure that error messages are informative, grammatically correct, and not condescending.

17. Shortcuts
If the application allows short cut keys (like CTRL+S to save), test each shortcut to ensure it works in all different browsers (if the application is web based).

18. Invalid Choices
Do not include instructions for choices not available at the time. For example, if a screen cannot be printed due to the state of the data, the screen should not have a Print button.

19. Invalid Menu Items
Do not show menu items that are not available for the context users are currently in.

20. Dialog Box Consistency
Use a style guide to document what choices are available for dialog boxes. Designers should not have Save/Cancel dialog on one screen and an OK/Cancel on another. This is inconsistent.

21. Screen Font Type
Ensure that the screen font family matches from screen to screen. Mismatching fonts within the same sentence and overuse of different fonts can detract from the professionalism of your software user interface.

22. Screen Font Sizes
Ensure that the screen font sizes match from screen to screen. A good user interface will have an accompanying style guide that explicitly defines the font type and size for headers, body text, footers, etc.

23. Colors
Ensure that screens do not use different color sets in a way that causes an inconsistent and poorly thought-out user interface design. Your style guide should define header colors, body background colors, footer colors, etc.

24. Icons
Ensure that icons are consistent throughout your application by using a common icon set. For example, a BACK link that contains an icon next to it should not have a different icon on one screen versus another. Avoid free clip-art icons; opt for professionally designed icons that complement the overall look and feel of your screen design.

25. Narrative Text
Having narrative text (screen instructions) is a great way to communicate how to use a specific screen. Ensure that narrative text appears at the same location on the screen on all screens.

26. Brevity
Ensure that narrative text, error messages and other instructions are presented in laymen’s terms but are brief and to-the-point.

27. Dialog Box Consistency
Use a style guide to document what choices are available for dialog boxes. You should not have a Save/Cancel dialog on one screen and an OK/Cancel on another; this is inconsistent.

28. Links
If your application has links on the screen (e.g. Save as Spreadsheet, Export, Print, Email, etc.), ensure that the links have consistent spacing between them and other links, that the links appear in the same order from screen to screen, and that the color of the links is consistent.

29. Menus
If your application has menu items, ensure that menu items that are not applicable for the specific screen are disabled and the order in which each menu item appears is consistent from screen to screen.

30. Buttons
If your application has buttons (e.g. Submit, OK, Cancel, etc), ensure that the buttons appear in a consistent order from screen to screen (e.g. Submit then Cancel).

31. Abbreviation Inconsistencies
If your screens contain abbreviations (e.g. Nbr for number, Amt for amount, etc), the abbreviations should be consistent for all screens in your application. Again, the style guide is key for ensuring this.

32. Delete Confirmations
It is a good practice to ask the user to confirm before deleting an item. Create test cases to ensure that all delete operations require the confirmation. Taking this a step further, it would also be great to allow clients to turn off specific confirmations if they decide to do this.

33. Save Confirmations
It is good practice to ask the user to confirm an update if updates are made and they navigate to another item before explicitly saving. Create test cases to ensure that all record movement operations require the confirmation when updates are made. Taking this a step further, it would also be great to allow clients to turn off specific confirmations if they decide to do this.

34. Grammar and Spelling
Ensure that you have test cases that look for grammar or spelling errors.

35. Shortcuts
If your application allows shortcut keys (like CTRL+S to save), ensure that all screens support the same shortcuts consistently.

Reference - http://www.vietnamesetestingboard.org/zbxe/?document_srl=194413
