Software Testing
Software testing is any activity aimed at evaluating an attribute or capability of a program or system and determining that it meets its required results. Although crucial to software quality and widely deployed by programmers and testers, software testing still remains an art, due to limited understanding of the principles of software.
The difficulty in software testing stems from the complexity of software: we cannot completely test a program of even moderate complexity. Testing is more than just debugging. The purpose of testing can be quality assurance, verification and validation, or reliability estimation.
Testing can also serve as a generic metric of software quality. Correctness testing and reliability testing are two major areas of testing. Software testing is a trade-off between budget, time and quality.
Software testing is not a "silver bullet" that can guarantee the production of high-quality software systems. While a "correct" correctness proof demonstrates that a software system (which exactly meets its specification) will always operate in a given manner, software testing that is not fully exhaustive can only suggest the presence of flaws and cannot prove their absence.
Software Engineering
Software:
* Computer programs that, when executed, provide function and performance.
* Data structures that enable the programs to adequately manipulate information.
* Documents that describe the operation and use of the programs.
* Software Engineering: The establishment and use of sound engineering principles in order to obtain, economically, software that is reliable and works efficiently and effectively on real machines.
* System engineering defines the role of software within an enterprise, with the aim of improving business growth and functionality.
* System engineering is a way of solving problems that takes long-term needs into consideration.
* System engineering defines the real-world problem and allocates functions to computer-based system elements such as
* Software
* Hardware
* People
* Database
* Documentation
* When system engineering focuses on a business enterprise to define the context for software, it is called Information engineering.
* When system engineering focuses on building a product that caters to common business functions across enterprises, it is treated as product engineering.
* In general, software engineering has three phases:
* Definition phase
* Development phase
* Maintenance phase
* The definition phase focuses on what to build, i.e., what is the problem:
* System/Information Engineering
* Requirement Analysis
* Software Project Planning
* The development phase focuses on how to build the product and how to implement its functionality:
* Software Program Design
* Software Program Implementation
* Software Testing
* The maintenance phase focuses on change management. The changes are:
* Correction
* Adaptation
* Enhancement
* Prevention
* Corrective maintenance is done when end users report bugs in the shipped product.
* Adaptation is needed when external entities on which the software depends, such as the operating system or database, have undergone changes.
* Enhancements are done when the existing software needs additional functionalities because of the changed business needs.
* Preventive maintenance is similar to business process reengineering: it concentrates on redefining some or all of the software's functionality in order to avoid further corrections, adaptations and enhancements.
Black-box Testing
Black-box testing treats the system as a "black box", so it does not use knowledge of the internal structure explicitly. It is usually described as focusing on testing functional requirements. Synonyms for black-box include: behavioral, functional, opaque-box, and closed-box. Black-box testing is testing without knowledge of the internal workings of the item being tested. For example, when black-box testing is applied to software engineering, the tester would only know the "legal" inputs and what the expected outputs should be, but has no idea how the program actually arrives at those outputs.
It is because of this that black-box testing can be considered testing with respect to the specifications; no other knowledge of the program is necessary. For this reason, the tester and the programmer can be independent of one another, avoiding programmer bias toward their own work.
So-called "functionality testing" is central to most testing exercises. Its primary objective is to assess whether the program does what it is supposed to do, i.e., whether it meets user-specified requirements. There are different approaches to functionality testing. One is to test each program feature or function in sequence. The other is to test module by module, exercising each function where it is first called.
Advantages:
* More effective on larger units of code than glass box testing
* Tester needs no knowledge of implementation, including specific programming languages
* Tester and programmer are independent of each other
* Tests are done from a user's point of view
* Test cases can be designed as soon as the specifications are complete
Disadvantages:
* Only a small number of possible inputs can actually be tested; exhaustively testing every possible input stream would take nearly forever
* Without clear and concise specifications, test cases are hard to design
* There may be unnecessary repetition of test inputs if the tester is not informed of test cases the programmer has already tried
* May leave many program paths untested
* Cannot be directed toward specific segments of code which may be very complex (and therefore more error prone)
* Most testing related research has been directed toward glass box testing
1) Techniques in BlackBox
Black box testing attempts to derive sets of inputs that will fully exercise all the functional requirements of a system. It is not an alternative to white box testing. This type of testing attempts to find errors in the following categories:
* incorrect or missing functions,
* interface errors,
* errors in data structures or external database access,
* performance errors, and
* initialization and termination errors.
Tests are designed to answer the following questions:
* How is the function's validity tested?
* What classes of input will make good test cases?
* Is the system particularly sensitive to certain input values?
* How are the boundaries of a data class isolated?
* What data rates and data volume can the system tolerate?
* What effect will specific combinations of data have on system operation?
White box testing should be performed early in the testing process, while black box testing tends to be applied during later stages. Test cases should be derived which
* reduce the number of additional test cases that must be designed to achieve reasonable testing, and
* tell us something about the presence or absence of classes of errors, rather than an error associated only with the specific test at hand.
Equivalence Partitioning
This method divides the input domain of a program into classes of data from which test cases can be derived. Equivalence partitioning strives to define a test case that uncovers classes of errors and thereby reduces the number of test cases needed. It is based on an evaluation of equivalence classes for an input condition. An equivalence class represents a set of valid or invalid states for input conditions.
Equivalence classes may be defined according to the following guidelines:
* If an input condition specifies a range, one valid and two invalid equivalence classes are defined.
* If an input condition requires a specific value, then one valid and two invalid equivalence classes are defined.
* If an input condition specifies a member of a set, then one valid and one invalid equivalence class are defined.
* If an input condition is boolean, then one valid and one invalid equivalence class are defined.
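To make the guidelines concrete, here is a minimal sketch in Python/pytest (the input field, its 18-60 range and the function name are assumptions invented for illustration): the range rule yields one valid and two invalid classes, each represented by a single test value.

```python
import pytest

def is_valid_age(age):
    """Hypothetical rule under test: accept ages in the range 18..60 inclusive."""
    return 18 <= age <= 60

# One representative value per equivalence class:
#   valid class:      18 <= age <= 60
#   invalid class 1:  age < 18
#   invalid class 2:  age > 60
@pytest.mark.parametrize("age,expected", [
    (35, True),    # valid class
    (10, False),   # invalid class below the range
    (75, False),   # invalid class above the range
])
def test_age_equivalence_classes(age, expected):
    assert is_valid_age(age) == expected
```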
Boundary Value Analysis
This method leads to a selection of test cases that exercise boundary values. It complements equivalence partitioning since it selects test cases at the edges of a class. Rather than focusing on input conditions solely, BVA derives test cases from the output domain also. BVA guidelines include:
* For input ranges bounded by a and b, test cases should include values a and b and just above and just below a and b respectively.
* If an input condition specifies a number of values, test cases should be developed to exercise the minimum and maximum numbers and values just above and below these limits.
* Apply guidelines 1 and 2 to the output.
* If internal data structures have prescribed boundaries, a test case should be designed to exercise the data structure at its boundary.
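Continuing the same hypothetical 18-60 age field, a boundary value analysis sketch adds test values at and immediately around the edges a = 18 and b = 60:

```python
import pytest

def is_valid_age(age):
    """Same hypothetical rule as above: accept ages 18..60 inclusive."""
    return 18 <= age <= 60

# Boundary value analysis: the boundaries themselves plus values just outside them.
@pytest.mark.parametrize("age,expected", [
    (17, False),  # just below lower bound a
    (18, True),   # lower bound a
    (19, True),   # just above lower bound a
    (59, True),   # just below upper bound b
    (60, True),   # upper bound b
    (61, False),  # just above upper bound b
])
def test_age_boundary_values(age, expected):
    assert is_valid_age(age) == expected
```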
Cause-Effect Graphing Techniques
Cause-effect graphing is a technique that provides a concise representation of logical conditions and corresponding actions. There are four steps:
* Causes (input conditions) and effects (actions) are listed for a module and an identifier is assigned to each.
* A cause-effect graph is developed.
* The graph is converted to a decision table.
* Decision table rules are converted to test cases.
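A tiny worked illustration (the login rules here are assumptions made up for the sketch): take causes C1 = "user id is registered" and C2 = "password is correct", and effects E1 = "access granted" and E2 = "error shown". Each rule of the resulting decision table converts directly into one test case.

```python
import pytest

def login(user_registered, password_correct):
    """Hypothetical module under test: access is granted only when both causes hold."""
    return "access granted" if (user_registered and password_correct) else "error"

# Decision table: one rule per combination of causes, with its expected effect.
DECISION_TABLE = [
    # (C1: registered, C2: password ok, expected effect)
    (True,  True,  "access granted"),  # rule 1 -> E1
    (True,  False, "error"),           # rule 2 -> E2
    (False, True,  "error"),           # rule 3 -> E2
    (False, False, "error"),           # rule 4 -> E2
]

@pytest.mark.parametrize("registered,password_ok,expected", DECISION_TABLE)
def test_login_decision_table(registered, password_ok, expected):
    assert login(registered, password_ok) == expected
```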
2) Manual Testing :
BlackBox Manual Testing
SQA team members, upon receipt of the development builds, walk through the GUI and either update the existing hard copy of the product roadmaps or create new hard copy. This is then passed on to the tools engineer to automate for new builds and regression testing. Defects are entered into the bug-tracking database for investigation and resolution.
Features & Functions - SQA test engineers, working from the team's feature definitions, exercise the product features and functions accordingly. Defects in feature/function capability are entered into the defect tracking system and are communicated to the team. Features are expected to perform as specified, and their functionality should be oriented toward ease of use and clarity of objective.
Tests are planned around new features, and regression tests are exercised to validate that existing features and functions are enabled and performing in a manner consistent with prior releases. SQA first tests manually using the exploratory testing method, then plans more exhaustive testing and automation. Regression tests consist of running developed test cases against the product to validate field input, boundary conditions and so on. Automated tests developed for prior releases are also used for regression testing.
Installation - The product is installed on each of the supported operating systems, in either the default flat-file configuration or with one of the supported databases. Every operating system and database supported by the product is tested, though not in all possible combinations. SQA is committed to executing, during the development life cycle, the combinations most frequently used by the customers. Clean and upgrade installations are the minimum requirements.
Documentation - All documentation, which is reviewed by Development prior to Alpha, is reviewed by the SQA team prior to Beta. SQA not only verifies technical accuracy, clarity and completeness, but also provides editorial input on consistency, style and typographical errors.
1) Functionality Testing
Functional testing is validating that an application or web site conforms to its specifications and correctly performs all its required functions.
This entails a series of tests which perform a feature by feature validation of behavior, using a wide range of normal and erroneous input data. This can involve testing of the product's user interface, database management, security, installation, networking, etc.
The purpose of functionality testing is to reveal issues concerning the product's functionality and its conformance to user requirements.
The first step in functionality testing is to become familiar with the program itself and with the program's desired behavior. For this, the tester should have a clear idea of the documentation, such as the program's functional specification or user manual. Once a program's expected functionality has been defined, test cases or test procedures can be created that exercise the program in order to compare actual behavior against expected behavior. Testing the program's functionality then involves the execution of any test cases that have been created. Whether portions of a functionality testing effort can be automated depends on several factors and should be discussed with a qualified engineer. Typical functionality checks include:
1. Range checking - minimum and maximum values should not be exceeded, and invalid values should not be accepted (see the range-check sketch after this list)
2. Check whether numeric fields accept only numeric values
3. Check ‘online Help’ feature (including buttons to open Help feature)
4. Check ‘Print’ feature
5. Check ‘Open file’ feature (must open correct file extensions and incorrect file type should give error messages)
6. Check ‘Graph’ features
7. If there are logins, enter invalid login information for each field
8. Check error messages for clarity and whether they appear when they are supposed to.
9. In the presence of a database, check that all connections through the application are valid when accessing data (error messages like "could not connect to database" should not appear).
10. Modify data files (like add extra special characters) to make sure the application gives correct error messages
11. For administrative features make sure only administrators of application may access the features
12. Check by adding duplicate records
13. Delete all records to check whether such an action does not crash the application
14. Check for compatibility with MS Office applications (e.g., copy and paste)
15. Click all buttons to make sure all of them are functioning appropriately
16. Check the 'Save' feature (it should not overwrite an existing file without permission, should save to the correct directory, and must create the correct extension)
17. Check options/settings
18. Check international units are converted correctly
19. Make sure there are no spelling errors
20. Check for valid date formats
21. Make sure windows are properly minimized, maximized and resized
22. Check whether keyboard shortcuts are working properly
23. Check that right mouse clicks show correct pop up menus
24. If hardware/software keys are present, check whether the application works as intended both with and without them
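As a minimal sketch of check 1 above (the quantity field, its 1-100 range and the function name are assumptions invented for illustration), a range check can be automated like this:

```python
import pytest

def validate_quantity(value):
    """Hypothetical input validator: accepts integers 1..100, rejects anything else."""
    if not isinstance(value, int) or not (1 <= value <= 100):
        raise ValueError("quantity must be an integer between 1 and 100")
    return value

@pytest.mark.parametrize("bad_value", [0, 101, -5, "abc", 3.5])
def test_invalid_quantities_are_rejected(bad_value):
    with pytest.raises(ValueError):
        validate_quantity(bad_value)

def test_valid_quantity_is_accepted():
    assert validate_quantity(50) == 50
```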
2) Compatibility Testing
Compatibility testing ensures that an application or Web site works with different browsers, operating systems and hardware platforms. Different versions, configurations, display resolutions, and Internet connection speeds can all impact the behavior of the product and introduce costly and embarrassing bugs. We test for compatibility using real test environments, that is, by testing how the system performs in a particular software, hardware or network environment. Compatibility testing can be performed manually or can be driven by an automated functional or regression test suite.
The purpose of compatibility testing is to reveal issues related to the product’s interaction with other software as well as hardware. The product compatibility is evaluated by first identifying the hardware/software/browser components that the product is designed to support. Then a hardware/software/browser matrix is designed that indicates the configurations on which the product will be tested. Then, with input from the client, a testing script is designed that will be sufficient to evaluate compatibility between the product and the hardware/software/browser matrix. Finally, the script is executed against the matrix, and any anomalies are investigated to determine exactly where the incompatibility lies.
Some typical compatibility tests include testing your application:
* On various client hardware configurations
* Using different memory sizes and hard drive space
* On various Operating Systems
* In different network environments
* With different printers and peripherals (e.g., zip drives, USB devices, etc.)
3) Regression Testing
Regression testing is testing the module in which a bug was identified earlier along with the impacted areas to ensure that this fix has not introduced any further defects.
The purpose of regression testing is to ensure that previously detected and fixed issues really are fixed, they do not reappear, and new issues are not introduced into the program as a result of the changes made to fix the issues.
Regression testing, also referred to as verification testing, is initiated after a programmer has attempted to fix a recognized problem or has added source code to a program that may have inadvertently introduced errors. It is a quality control measure to ensure that the newly modified code still complies with its specified requirements and that unmodified code has not been affected by the maintenance activity.
Regression testing is, in general, a black-box strategy in which previously written test cases that have exposed bugs are re-executed to check whether previously fixed faults have re-emerged. The test suite collects all the tests that have exposed bugs, and they are re-run whenever changes are made to the program to fix any bug. Doing this by hand is tedious, since retesting every test case after every build is impractical, so regression testing is usually automated using testing tools.
Typically, regression testing should be performed on a daily basis. Once an issue in the defect tracking database has been fixed, it is reassigned back to the tester for final resolution, who either reopens the issue if it has not been satisfactorily addressed or closes it if it has, indeed, been fixed.
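A minimal sketch of how such an automated regression suite can be organized with pytest (the function, its logic and the defect numbers are hypothetical): every defect that has been reported and fixed gets a permanent test, and the whole suite is re-run on each new build.

```python
# regression/test_fixed_defects.py
# Each test pins down a previously reported and fixed defect, so the whole
# directory can be re-run on every build, e.g. with `pytest regression/`.

def apply_discount(price, quantity, discount=0.0):
    """Stand-in for the real code under test (hypothetical logic)."""
    total = price * quantity * (1.0 - discount)
    return max(total, 0.0)

def test_defect_101_zero_quantity_gives_zero_total():
    # Defect #101 (hypothetical): a zero quantity once caused a crash.
    assert apply_discount(price=100.0, quantity=0) == 0.0

def test_defect_117_total_is_never_negative():
    # Defect #117 (hypothetical): large discounts once produced negative totals.
    assert apply_discount(price=10.0, quantity=1, discount=0.99) >= 0.0
```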
4) Performance Testing
Performance testing is a rigorous evaluation of a working system under realistic conditions to identify performance problems and to compare measures such as response time, throughput and resource utilization against requirements.
The goal of performance testing is not to find bugs, but to eliminate bottlenecks and establish a baseline for future regression testing.
To conduct performance testing is to engage in a carefully controlled process of measurement and analysis. Ideally, the software under test is already stable enough so that this process can proceed smoothly.
A clearly defined set of expectations is essential for meaningful performance testing.
For example, for a Web application, you need to know at least two things:
* expected load in terms of concurrent users or HTTP connections
* acceptable response time
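A minimal sketch of such a measurement using only the Python standard library (the URL, the number of concurrent users and the acceptable response time are assumptions; a real project would normally use a dedicated tool such as LoadRunner or JMeter):

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://example.com/app/login"   # hypothetical endpoint under test
CONCURRENT_USERS = 50                  # assumed expected load
MAX_ACCEPTABLE_SECONDS = 2.0           # assumed acceptable response time

def timed_request(_):
    """Issue one request and return its response time in seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=30) as response:
        response.read()
    return time.perf_counter() - start

# Emulate the expected number of concurrent users and collect response times.
with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    durations = list(pool.map(timed_request, range(CONCURRENT_USERS)))

print(f"average: {sum(durations) / len(durations):.2f}s  worst: {max(durations):.2f}s")
print("PASS" if max(durations) <= MAX_ACCEPTABLE_SECONDS else "FAIL")
```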
Load testing:
Load testing is usually defined as the process of exercising the system under test by feeding it the largest tasks it can operate with. Load testing is sometimes called volume testing or longevity/endurance testing.
Examples of volume testing:
* testing a word processor by editing a very large document
* testing a printer by sending it a very large job
* testing a mail server with thousands of user mailboxes
Examples of longevity/endurance testing:
* testing a client-server application by running the client in a loop against the server over an extended period of time
Goals of load testing:
* expose bugs that do not surface in cursory testing, such as memory management bugs, memory leaks, buffer overflows, etc.
* ensure that the application meets the performance baseline established during performance testing. This is done by running regression tests against the application at a specified maximum load.
Although performance testing and load testing can seem similar, their goals are different. Performance testing uses load-testing techniques and tools for measurement and benchmarking purposes at various load levels, whereas load testing operates at a predefined load level, usually the highest load the system can accept while still functioning properly.
Stress testing:
Stress testing is a form of testing that is used to determine the stability of a given system or entity. This is designed to test the software with abnormal situations. Stress testing attempts to find the limits at which the system will fail through abnormal quantity or frequency of inputs.
Stress testing tries to break the system under test by overwhelming its resources or by taking resources away from it (in which case it is sometimes called negative testing). The main purpose is to make sure that the system fails and recovers gracefully; this quality is known as recoverability.
The point is not to break the system for its own sake, but to observe how the system reacts to failure. Stress testing looks for the following:
* Does it save its state or does it crash suddenly?
* Does it just hang and freeze or does it fail gracefully?
* Is it able to recover from the last good state on restart? Etc.
Web Testing
When testing web sites, the following areas should be considered:
* Functionality
* Performance
* Usability
* Server side interface
* Client side compatibility
* Security
Functionality:
In testing the functionality of a web site, the following should be tested:
* Links
+ Internal links
+ External links
+ Mail links
+ Broken links (see the link-checker sketch after this list)
* Forms
+ Field validation
+ Functional chart
+ Error message for wrong input
+ Optional and mandatory fields
* Database
+ Testing will be done on the database integrity.
* Cookies
+ Testing will be done on the client system side, on the temporary internet files.
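A minimal sketch of an automated link check for a single page (the start URL is a placeholder, and the third-party requests and beautifulsoup4 packages are assumed to be installed): collect every anchor on the page and report links that do not return a successful status.

```python
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

START_URL = "http://example.com/"  # placeholder page under test

page = requests.get(START_URL, timeout=10)
soup = BeautifulSoup(page.text, "html.parser")

for anchor in soup.find_all("a", href=True):
    link = urljoin(START_URL, anchor["href"])
    if link.startswith("mailto:"):
        continue  # mail links need a different kind of check
    try:
        status = requests.head(link, allow_redirects=True, timeout=10).status_code
    except requests.RequestException:
        status = None
    if status is None or status >= 400:
        print(f"BROKEN: {link} (status {status})")
```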
Performance:
Performance testing can be applied to understand the web site's scalability, or to benchmark the performance in the environment of third party products such as servers and middleware for potential purchase.
* Connection speed:
o Tested over various networks such as dial-up, ISDN, etc.
* Load
o What is the number of users per unit time?
o Check for peak loads and how the system behaves.
o Check large amounts of data accessed by users.
* Stress
o Continuous load
o Performance of memory, CPU, file handling, etc.
Usability :
Usability testing is the process by which the human-computer interaction characteristics of a system are measured, and weaknesses are identified for correction.
Usability can be defined as the degree to which a given piece of software assists the person sitting at the keyboard to accomplish a task, as opposed to becoming an additional impediment to such accomplishment. The broad goal of usable systems is often assessed using several criteria:
* Ease of learning
* Navigation
* Subjective user satisfaction
* General appearance
Server side interface:
In web testing, the server-side interface should be tested. This is done by verifying that communication is performed properly and that the server is compatible with the software, hardware, network and database.
Client-side compatibility is also tested on various platforms, using various browsers, etc.
Security:
The primary reason for testing the security of a web site is to identify potential vulnerabilities and subsequently repair them.
The following types of testing are described in this section:
* Network Scanning
* Vulnerability Scanning
* Password Cracking
* Log Review
* Integrity Checkers
* Virus Detection
3) Testing Skills
BlackBox Testing Skills
Essential Testing Skills needed for Testers:
Test Planning : Analyzing a project to determine the kinds of testing needed, the kinds of people needed, the scope of testing (including what should and should not be tested), the time available for testing activities, the initiation criteria for testing, the completion criteria and the critical success factors of testing.
Test Tool Usage : Knowing which tools are most appropriate in a given testing situation, how to apply the tools to solve testing problems effectively, how to organize automated testing, and how to integrate test tools into an organization
Test Execution : Performing various kinds of tests, such as unit testing, system testing, UAT, stress testing and regression testing. This can also include how to determine which conditions to test and how to evaluate whether the system under test passes or fails. Test execution can often be dependent on your unique environment and project needs, although basic testing principles can be adopted to test most projects
Defect Management : Understanding the nature of defects, how to report defects, how to track defects and how to use the information gained from defects to improve the development and testing processes
Risk analysis: Understanding the nature of risk, how to assess project and software risks, how to use the results of a risk assessment to prioritize and plan testing, and how to use risk analysis to prevent defects and project failure.
Test Measurement: Knowing what to measure during a test, how to use the measurements to reach meaningful conclusions and how to use measurements to improve the testing and development processes
4) Test Approach
BlackBox Test Approach
Design Validation
Statements regarding coverage of the feature design, including both specification and development documents. Will testing review the design? Is design an issue on this release? How much concern does testing have regarding design, etc.?
Data Validation
What types of data will require validation? What parts of the feature will use what types of data? What are the data types that test cases will address? Etc.
API Testing
What level of API testing will be performed? What is the justification for this approach (required only if no API testing is being done)?
Content Testing
Is your area/feature/product content based? What is the nature of the content? What strategies will be employed in your feature/area to address content related issues?
Low-Resource Testing
What resources does your feature use? Which are used most, and are most likely to cause problems? What tools/methods will be used in testing to cover low resource (memory, disk, etc.) issues?
Setup Testing
How is your feature affected by setup? What are the necessary requirements for a successful setup of your feature? What is the testing approach that will be employed to confirm valid setup of the feature?
Modes and Runtime Options
What are the different run time modes the program can be in? Are there views that can be turned off and on? Controls that toggle visibility states? Are there options a user can set which will affect the run of the program? List here the different run time states and options the program has available. It may be worthwhile to indicate here which ones demonstrate a need for more testing focus.
Interoperability
How will this product interact with other products? What level of knowledge does it need to have about other programs -- “good neighbor”, program cognizant, program interaction, fundamental system changes? What methods will be used to verify these capabilities?
Integration Testing
Go through each area in the product and determine how it might interact with other aspects of the project. Start with the ones that are obviously connected, but try every area to some degree. There may be subtle connections you do not think about until you start using the features together. The test cases created with this approach may duplicate the modes and objects approaches, but there are some areas which do not fit in those categories and might be missed if you do not check each area.
Compatibility: Clients
Is your feature a server based component that interacts with clients? Is there a standard protocol that many clients are expected to use? How many and which clients are expected to use your feature? How will you approach testing client compatibility? Is your server suited to handle ill-behaved clients? Are there subtleties in the interpretation of standard protocols that might cause incompatibilities? Are there non-standard, but widely practiced use of your protocols that might cause incompatibilities?
Compatibility: Servers
Is your feature a client based component that interacts with servers? Is there a standard protocol supported by many servers that your client speaks? How many different servers will your client program need to support? How will you approach testing server compatibility? Is your client suited to handle ill-behaved or non-standard servers? Are there subtleties in the interpretation of standard protocols that might cause incompatibilities? Are there non-standard, but widely practiced use of protocols that might cause incompatibilities?
Beta Testing
What is the beta schedule? What is the distribution scale of the beta? What is the entry criteria for beta? How is testing planning on utilizing the beta for feedback on this feature? What problems do you anticipate discovering in the beta? Who is coordinating the beta, and how?
Environment/System - General
Are there issues regarding the environment, system, or platform that should get special attention in the test plan? What are the run time modes and options in the environment that may cause difference in the feature? List the components of critical concern here. Are there platform or system specific compliance issues that must be maintained?
Configuration
Are there configuration issues regarding hardware and software in the environment that may get special attention in the test plan? Some of the classical issues are machine and bios types, printers, modems, video cards and drivers, special or popular TSR’s, memory managers, networks, etc. List those types of configurations that will need special attention.
User Interface
List the items in the feature that explicitly require a user interface. Is the user interface designed such that a user will be able to use the feature satisfactorily? Which part of the user interface is most likely to have bugs? How will the interface testing be approached?
Performance & Capacity Testing
How fast and how much can the feature do? Does it do enough fast enough? What testing methodology will be used to determine this information? What criterion will be used to indicate acceptable performance? If modifications of an existing product, what are the current metrics? What are the expected major bottlenecks and performance problem areas on this feature?
Scalability
Is the ability to scale and expand this feature a major requirement? What parts of the feature are most likely to have scalability problems? What approach will testing use to define the scalability issues in the feature?
Stress Testing
How does the feature do when pushed beyond its performance and capacity limits? How is its recovery? What is its breakpoint? What is the user experience when this occurs? What is the expected behavior when the client reaches stress levels? What testing methodology will be used to determine this information? What area is expected to have the most stress related problems?
Volume Testing
Volume testing differs from performance and stress testing in so much as it focuses on doing volumes of work in realistic environments, durations, and configurations. Run the software as expected user will - with certain other components running, or for so many hours, or with data sets of a certain size, or with certain expected number of repetitions.
International Issues
Confirm localized functionality, that strings are localized and that code pages are mapped properly. Ensure the program works properly on localized builds, and that international settings in the program and environment do not break functionality. How are localization and internationalization being done on this project? List those parts of the feature that are most likely to be affected by localization. State the methodology used to verify international sufficiency and localization.
Robustness
How stable is the code base? Does it break easily? Are there memory leaks? Are there portions of code prone to crash, save failure, or data corruption? How good is the program’s recovery when these problems occur? How is the user affected when the program behaves incorrectly? What is the testing approach to find these problem areas? What is the overall robustness goal and criteria?
Error Testing
How does the program handle error conditions? List the possible error conditions. What testing methodology will be used to evoke and determine proper behavior for error conditions? What feedback mechanism is being given to the user, and is it sufficient? What criteria will be used to define sufficient error recovery?
Usability
What are the major usability issues on the feature? What is testing’s approach to discover more problems? What sorts of usability tests and studies have been performed, or will be performed? What is the usability goal and criteria for this feature?
Accessibility
Is the feature designed in compliance with accessibility guidelines? Could a user with special accessibility requirements still be able to utilize this feature? What is the criteria for acceptance on accessibility issues on this feature? What is the testing approach to discover problems and issues? Are there particular parts of the feature that are more problematic than others?
User Scenarios
What real-world user activities are you going to try to mimic? What classes of users (e.g., secretaries, artists, writers, animators, construction workers, airline pilots, shoemakers, etc.) are expected to use this program, and doing which activities? How will you attempt to mimic these key scenarios? Are there special niche markets that your product is aimed at (intentionally or unintentionally) where mimicking real user scenarios is critical?
Boundaries and Limits
Are there particular boundaries and limits inherent in the feature or area that deserve special mention here? What is the testing methodology to discover problems handling these boundaries and limits?
Operational Issues
If your program is being deployed in a data center, or as part of a customer's operational facility, then testing must, at the very least, mimic the user scenario of performing basic operational tasks with the software.
Backup
Identify all files representing data and machine state, and indicate how those will be backed up. If it is imperative that service remain running, determine whether or not it is possible to backup the data and still keep services or code running.
Recovery
If the program goes down, or must be shut down, are there steps and procedures that will restore program state and get the program or service operational again? Are there holes in this process that may make a service or state deficient? Are there holes that could cause loss of data? Mimic as many loss-of-service states as are likely to happen, and go through the process of successfully restoring service.
Archiving
Archival is different from backup. Backup is when data is saved in order to restore service or program state. Archive is when data is saved for retrieval later. Most archival and backup systems piggy-back on each other's processes.
Is archival of data going to be considered a crucial operational issue on your feature? If so, is it possible to archive the data without taking the service down? Is the data, once archived, readily accessible?
Monitoring
Does the service have adequate monitoring messages to indicate status, performance, or error conditions? When something goes wrong, are messages sufficient for operational staff to know what to do to restore proper functionality? Are there "heartbeat" counters that indicate whether or not the program or service is working? Attempt to mimic the scenario of an operational staff trying to keep a service up and running.
Upgrade
Does the customer likely have a previous version of your software, or some other software? Will they be performing an upgrade? Can the upgrade take place without interrupting service? Will anything be lost (functionality, state, data) in the upgrade? Does it take unreasonably long to upgrade the service?
Migration
Is there data, script, code or other artifacts from previous versions that will need to be migrated to a new version? Testing should create an example of installation with an old version, and migrate that example to the new version, moving all data and scripts into the new format.
List here all data files, formats, or code that would be affected by migration, the solution for migration, and how testing will approach each.
Special Code Profiling and Other Metrics
How much focus will be placed on code coverage? What tools and methods will be used to measure the degree to which testing coverage is sufficiently addressing all of the code?
5) Test Metrics
BlackBox Test Metrics
A Metric is a quantitative measure of the degree to which a system, component or process possesses a given attribute. Software metrics are measures that are used to quantify the software, software development resources and software development process. A metric is defined to be the name of a mathematical function used to measure some attribute of a product or process. The actual numerical value produced by a metric is a measure. For example, cyclomatic complexity is a metric; when applied to program code, the number yielded by the formula is the cyclomatic complexity measure.
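To make the example concrete: cyclomatic complexity is computed from the program's flow graph as V(G) = E - N + 2P (edges minus nodes plus twice the number of connected components). For an illustrative module whose flow graph has 9 edges, 7 nodes and 1 connected component, the measure is V(G) = 9 - 7 + 2 = 4, i.e. four linearly independent paths that testing should cover.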
Two general classes of metrics include the following:
* Management metrics , which assist in the management of the software development process.
* Quality metrics , which are predictors or indicators of the product qualities.
Metrics related to software error detection ("testing" in the broad sense) are grouped into the following categories:
* General metrics that may be captured and analysed throughout the product life cycle;
* Software Requirements metrics, which may give early warning of quality problems in requirements specifications;
* Software Design metrics, which may be used to assess the status of software designs;
* Code metrics, which reveal properties of the program source code;
* Test metrics, which can be used to control the testing process, to assess its effectiveness, and to set improvement targets;
* Software Installation metrics, which are applicable during the installation process;
* Software Operation and Maintenance metrics, including those used in providing software product support.
Test Metrics
The following are the metrics collected in the testing process.
1. Defect age.
Defect age is the time from when a defect is introduced to when it is detected (or fixed). Assign the numbers 1 through 6 to each of the software development activities from software requirements to software operation and maintenance. The defect age is computed as shown.
Average Defect Age = Sum over all defects of (Activity Detected - Activity Introduced) / Number of Defects
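For example (illustrative figures), with activities numbered 1 (software requirements) through 6 (operation and maintenance), a defect introduced in activity 2 and detected in activity 4 has an age of 4 - 2 = 2. Three defects with ages 2, 3 and 1 give an average defect age of (2 + 3 + 1) / 3 = 2.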
2. Defect response time.
This measure is the time from when a defect is detected to when it is fixed or closed.
3. Defect cost ($d).
The cost of a defect may be computed as:
$d = (cost to analyse the defect) + (cost to fix it) + (cost of failures already incurred due to it)
4. Defect removal efficiency (DRE).
The DRE is the percentage of defects that have been removed during an activity, computed with the equation below. The DRE can also be computed for each software development activity and plotted on a bar graph to show the relative defect removal efficiencies for each activity. Or, the DRE may be computed for a specific task or technique (e.g., design inspection, code walkthrough, unit test, 6 month operation, etc.). [SQE]
DRE = (Number of Defects Removed / Number of Defects at Start of Process) * 100
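For instance (illustrative figures), if 40 defects were present at the start of system testing and 36 of them were removed during that activity, DRE = (36 / 40) * 100 = 90%.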
5. Mean time to failure (MTTF).
Gives an estimate of the mean time to the next failure, obtained by accurately recording the failure times t_i (the elapsed time between the (i-1)th and ith failures) and computing the average of all the failure times. This metric is the basic parameter required by most software reliability models. High values imply good reliability.
MTTF should be corrected by a weighted scheme similar to that used for computing fault density (see below).
6. Fault density (FD).
This measure is computed by dividing the number of faults by the size (usually in KLOC, thousands of lines of code).
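For example (illustrative figures), if the recorded times between successive failures are 10, 12 and 14 hours, MTTF = (10 + 12 + 14) / 3 = 12 hours; and a component with 30 known faults in 12,000 lines of code has a fault density of 30 / 12 = 2.5 faults per KLOC.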
6) Test Plan
BlackBox Test Plan
Test planning is one of the keys to successful software testing. A test plan can be defined as a document that describes the scope, approach, resources and schedule of intended test activities. The main purpose of preparing a test plan is to ensure that everyone concerned with the project is synchronized with regard to the scope, deliverables, deadlines and responsibilities of the project.
The complete document will help the people outside the test group understand the “WHY” and “HOW” of the product validation.
Test planning can and should occur at several levels. The first plan to consider is the Master Test Plan. The purpose of the Master Test Plan is to orchestrate testing at all levels (unit, integration, system, acceptance, beta, etc.). The Master Test Plan is to testing what the Project Plan is to the entire development effort.
The goal of test planning is not to create a long list of test cases, but rather to deal with the important issues of testing strategy, resource utilization, responsibilities, risks, and priorities.
Contents of test plan:
Purpose:
This section should contain the purpose of preparing the test plan.
Scope:
This section should talk about the areas of the application which are to be tested by the QA team and specify those areas which are definitely out of the scope.
Test approach :
This contains details on how the testing is to be performed and whether any specific strategy is to be followed.
Entry criteria:
This section explains the various steps to be performed before the start of testing, i.e., the prerequisites.
E.g. environment setup, starting the web server/application server, successful deployment of the latest build, etc.
Resources:
This lists the people who will be involved in the project, their designations, etc.
Tasks and responsibilities:
This describes the tasks to be performed and the responsibilities assigned to the various members of the project.
Exit criteria:
This contains tasks like bringing down the system or server, restoring the system to the pre-test environment, database refresh, etc.
Schedules/ Milestones :
This section deals with the final delivery date and the various milestone dates to be met in the course of the project.
Hardware/ software requirements :
This section contains the details of the system/server required to install the application or perform the testing, the specific software that needs to be installed on the system to get the application running or to connect to the database, connectivity-related issues, etc.
Risks and mitigation process :
This section should list all the possible risks that can arise during the testing and the mitigation plans that the QA team intends to implement in case a risk actually becomes a reality.
Tools to be used :
This would list out the testing tools or utilities that are to be used in the project.
E.g. WinRunner, QTP, TestDirector, PCOM, etc.
Deliverables :
This section lists the various deliverables that are due to the client at various points in time, i.e., daily, weekly, at the start of the project, at the end of the project, etc. These could include test plans, test procedures, test matrices, status reports, test scripts, etc. Templates for all of these may also be attached.
Annexure :
This section contains embedded documents, or links to documents, which have been or will be used in the course of testing, e.g., templates used for reports, test cases, etc. Reference documents can also be attached here.
Sign off :
This section contains the mutual agreement between the client and QA team with both leads/ managers signing off their agreement on the test plan.
7) Types of Testing
SOFTWARE TESTING TYPES:
Acceptance Testing:
Testing conducted to enable a user/customer to determine whether to accept a software product. Normally performed to validate the software meets a set of agreed acceptance criteria.
Accessibility Testing:
Verifying that a product is accessible to people with disabilities (visual, hearing, cognitive, etc.).
Ad Hoc Testing :
Ad-hoc testing is the interactive testing process where developers invoke application units explicitly,
and individually compare execution results to expected results.
Agile Testing :
Testing practice for projects using agile methodologies, treating development as the customer of testing and emphasizing a test-first design paradigm. See also Test Driven Development.
Alpha Testing: Early testing of a software product conducted by selected customers.
Automated Testing :
• Testing employing software tools which execute tests without manual intervention. Can be applied in GUI, performance, API, etc. testing.
• The use of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and other test control and test reporting functions.
Basis Path Testing:
A white box test case design technique that uses the algorithmic flow of the program to design tests.
Beta Testing:
Testing of a pre-release version of a software product conducted by customers.
Binary Portability Testing:
Testing an executable application for portability across system platforms and environments, usually for conformation to an ABI specification.
Black Box Testing:
Testing based on an analysis of the specification of a piece of software without reference to its internal workings. The goal is to test how well the component conforms to the published requirements for the component.
Bottom Up Testing:
An approach to integration testing where the lowest level components are tested first, then used to facilitate the testing of higher level components. The process is repeated until the component at the top of the hierarchy is tested.
Boundary Testing:
Test which focus on the boundary or limit conditions of the software being tested. (Some of these tests are stress tests).
Branch Testing:
Testing in which all branches in the program source code are tested at least once.
Breadth Testing:
A test suite that exercises the full functionality of a product but does not test features in detail.
Compatibility Testing:
Testing whether software is compatible with other elements of a system with which it should operate, e.g. browsers, Operating Systems, or hardware.
Concurrency Testing:
Multi-user testing geared towards determining the effects of accessing the same application code, module or database records. Identifies and measures the level of locking, deadlocking and use of single thread code and locking semaphores.
Conversion Testing:
Testing of programs or procedures used to convert data from existing systems for use in replacement systems.
Data Driven Testing :
Testing in which the action of a test case is parameterized by externally defined data values, maintained as a file or spreadsheet. A common technique in Automated Testing.
Dependency Testing :
Examines an application's requirements for pre-existing software, initial states and configuration in order to maintain proper functionality.
Depth Testing :
A test that exercises a feature of a product in full detail.
Dynamic Testing :
Testing software through executing it. See also Static Testing.
Endurance Testing :
Checks for memory leaks or other problems that may occur with prolonged execution.
End-to-End testing :
Testing a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.
Exhaustive Testing :
Testing which covers all combinations of input values and preconditions for an element of the software under test.
Gorilla Testing :
Testing one particular module, functionality heavily.
Gray Box Testing :
A combination of Black Box and White Box testing methodologies: testing a piece of software against its specification but using some knowledge of its internal workings.
Integration Testing :
Testing of combined parts of an application to determine if they function together correctly. Usually performed after unit and functional testing. This type of testing is especially relevant to client/server and distributed systems.
Installation Testing :
Confirms that the application under test installs and uninstalls correctly on the supported configurations, and that it is left in a usable state after installation or upgrade.
Localization Testing :
This term refers to testing that software has been correctly adapted for a specific locale (language, regional formats, etc.).
Loop Testing :
A white box testing technique that exercises program loops.
Monkey Testing :
Testing a system or an application on the fly, i.e., just a few tests here and there to ensure the system or application does not crash.
Negative Testing :
Testing aimed at showing software does not work. Also known as "test to fail".
N+1 Testing :
A variation of Regression Testing. Testing conducted with multiple cycles in which errors found in test cycle N are resolved and the solution is retested in test cycle N+1. The cycles are typically repeated until the solution reaches a steady state and there are no errors.
Path Testing :
Testing in which all paths in the program source code are tested at least once.
Performance Testing :
Testing conducted to evaluate the compliance of a system or component with specified performance requirements. Often this is performed using an automated test tool to simulate a large number of users. Also known as "Load Testing".
Positive Testing :
Testing aimed at showing software works. Also known as "test to pass". See also Negative Testing.
Recovery Testing :
Confirms that the program recovers from expected or unexpected events without loss of data or functionality. Events can include shortage of disk space, unexpected loss of communication, or power out conditions.
Regression Testing :
Retesting a previously tested program following modification to ensure that faults have not been introduced or uncovered as a result of the changes made.
Sanity Testing :
A brief test of the major functional elements of a piece of software to determine if it is basically operational.
Scalability Testing :
Performance testing focused on ensuring the application under test gracefully handles increases in work load.
Security Testing :
Testing which confirms that the program can restrict access to authorized personnel and that the authorized personnel can access the functions available to their security level.
Smoke Testing :
A quick-and-dirty test that the major functions of a piece of software work.
Soak Testing :
Running a system at high load for a prolonged period of time. For example, running several times more transactions in an entire day (or night) than would be expected in a busy day, to identify any performance problems that appear after a large number of transactions have been executed.
Static Testing :
Analysis of a program carried out without executing the program.
Storage Testing :
Testing that verifies the program under test stores data files in the correct directories and that it reserves sufficient space to prevent unexpected termination resulting from lack of space. This is external storage as opposed to internal storage.
Stress Testing :
Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements to determine the load under which it fails and how. Often this is performance testing using a very high level of simulated load.
Structural Testing :
Testing based on an analysis of internal workings and structure of a piece of software.
System Testing :
Testing that attempts to discover defects that are properties of the entire system rather than of its individual components.
Thread Testing:
A variation of top-down testing where the progressive integration of components follows the implementation of subsets of the requirements, as opposed to the integration of components by successively lower levels.
Top Down Testing:
An approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested.
Usability Testing:
Testing the ease with which users can learn and use a product.
User Acceptance Testing:
A formal product evaluation performed by a customer as a condition of purchase.
Unit Testing:
The testing done to show whether a unit satisfies its functional specification or its implemented structure matches the intended design structure.
Volume Testing:
Testing which confirms that any values that may become large over time can be accommodated by the program and will not cause the program to stop working or degrade its operation in any manner.
White Box Testing:
Testing based on an analysis of internal workings and structure of a piece of software. Includes techniques such as Branch Testing and Path Testing. Also known as Structural Testing and Glass Box Testing .
Workflow Testing:
Scripted end-to-end testing which duplicates specific workflows which are expected to be utilized by the end-user.
8) Auto Tools
BlackBox Auto Tools
Nowadays, automated testing tools are used more often than ever before to ensure that applications work properly prior to deployment. That is particularly important today, because more applications are written for use on the Web, the most public of venues. If a browser-based application crashes or performs improperly, it can cause more problems than a smaller, local application. But for many IT and quality assurance managers, the decision of which testing tools to use can cause confusion.
The first decision is which category of tool to use: one that tests specific units of code before the application is fully combined, one that tests how well the code is working as envisioned, or one that tests how well the application performs under stress. And once that decision is made, the team must wade through a variety of choices in each category to determine which tool best meets its needs.
Functional-Testing Tools
Automated testing is automating the manual testing process currently in use. The real use and purpose of automated test tools is to automate regression testing. This means that you must have or must develop a database of detailed test cases that are repeatable, and this suite of tests is run every time there is a change to the application to ensure that the change does not produce unintended consequences.
At a functional level, they provide record/playback capabilities, which allow developers to record an existing application and modify scripts to meet changes in an upcoming release. Tools in this category include:
WinRunner provides a relatively simple way to design tests and build reusable scripts without extensive programming knowledge. WinRunner captures, verifies, and replays user interactions automatically, so you can identify defects and ensure that business processes work flawlessly upon deployment and remain reliable. WinRunner supports more than 30 environments, including Web, Java, and Visual Basic. It also provides solutions for leading ERP and Customer Relationship Management (CRM) applications.
Astra QuickTest: This functional-testing tool is built specifically to test Web-based applications. It helps ensure that objects, images, and text on Web pages function properly and can test multiple browsers. Astra QuickTest provides record and playback support for every ActiveX control in a Web browser and uses checkpoints to verify specific information during a test run.
SilkTest is specifically designed for regression and functionality testing. This tool tests both mainframe and client/server applications and is a leading functional testing product for e-business applications. It also provides facilities for rapid test customization and automated infrastructure development.
Rational Suite TestStudio is a full suite of testing tools. Its functional-testing component, called Rational Robot, uses automated regression as the first step in the functional testing process. The tool records and replays test scripts that recognize objects through point-and-click processes. The tool also tracks, reports, and charts information about the developer's quality assurance testing process and can view and edit test scripts during the recording process. It also enables the developer to use the same script to test an application on multiple platforms without modifications.
QACenter is a full suite of testing tools. One functional tool considered part of QACenter is QARun, which automates the creation and execution of test scripts, verifies tests, and analyzes test results. A second functional tool under the QACenter is TestPartner, which uses visual scripting and automatic wizards to help test applications based on Microsoft, Java, and Web-based technologies. TestPartner offers fast record and playback of application test scripts, and provides facilities for testers without much programming experience to create and execute tests.
1) LoadRunner
Mercury LoadRunner is a performance testing tool for predicting system behavior and performance. LoadRunner emulates an environment in which thousands of users work with a client/server system concurrently. To do this, LoadRunner replaces human users with virtual users (Vusers). Using limited hardware resources, LoadRunner emulates hundreds or thousands of concurrent users to put the application through the rigors of real-life user loads.
VuGen:
VuGen, also known as the Vuser Generator, enables you to develop Vuser scripts for a variety of application types and communication protocols. VuGen creates the script by recording the activity between the client and the server; for a database application, for example, it monitors the client end of the database and traces all the requests sent to, and received from, the database server.
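As an illustration only (the step names and URL below are placeholders, not taken from any real recording), a Web (HTTP/HTML) Vuser script recorded by VuGen is ANSI C code built from LoadRunner's protocol functions. A minimal sketch of a recorded Action might look like this:
Action()
{
    // Recorded step: request the home page (placeholder URL)
    web_url("home",
        "URL=http://www.example.com/",
        LAST);

    // Emulate user think time between steps (in seconds)
    lr_think_time(5);

    // Recorded step: follow a text link named "Login" (placeholder)
    web_link("login_link",
        "Text=Login",
        LAST);

    return 0;
}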
Vusers:
LoadRunner replaces the human users with virtual users or Vusers. The load on the system can be increased by increasing the number of Vusers.
Load testing process:
Step 1: Planning the test.
A clearly defined test plan ensures that the test scenarios you develop will accomplish the load-testing objectives.
Step 2: Creating Vusers.
Vuser scripts contain the tasks performed by each Vuser, the tasks performed by Vusers as a whole, and the tasks measured as transactions.
Step 3: Define the scenario.
A scenario describes the events that occur during a testing session. It includes a list of machines, scripts, and Vusers that run during the scenario. We create scenarios using LoadRunner Controller. We can create manual scenarios as well as goal-oriented scenarios. In manual scenarios, we define the number of Vusers, the load generator machines, and percentage of Vusers to be assigned to each script. For web tests, we may create a goal-oriented scenario where we define the goal that our test has to achieve. LoadRunner automatically builds a scenario for us.
Step 4: Running the scenario.
We emulate load on the server by instructing multiple Vusers to perform tasks simultaneously. Before the testing, we set the scenario configuration and scheduling. We can run the entire scenario, Vuser groups, or individual Vusers.
Step 5: Monitoring the scenario.
We monitor scenario execution using the LoadRunner online runtime, transaction, system resource, Web resource, Web server resource, Web application server resource, database server resource, network delay, streaming media resource, firewall server resource, ERP server resource, and Java performance monitors.
Step 6: Analyzing test results.
During scenario execution, LoadRunner records the performance of the application under different loads. We use LoadRunner’s graphs and reports to analyze the application’s performance.
A scenario defines the events that occur during each testing session. The actions that a Vuser performs during the scenario are described in a Vuser script. Vuser scripts include functions that measure and record the performance of your application's components.
The Controller reads a single scenario to coordinate several host machines, specifying different run-time settings, running different Vuser scripts, and storing results in different locations.
Vuser types:
Vuser types are divided into the following categories:
E-business: For Web (HTTP/HTML), COM/DCOM, CORBA-Java, General-Java, Java (GUI), Jolt, LDAP, POP3, and FTP protocols.
Middleware: For Jolt and Tuxedo (6.0, 6.3) protocols.
ERP: For SAP, Baan, Oracle NCA, and PeopleSoft (Tuxedo or Web) protocols.
Client/Server: For Informix, MS SQL Server, ODBC, Oracle (2-tier), Sybase CTlib, Sybase DBlib, and Windows Sockets protocols.
Legacy: For APPC and Terminal Emulation (RTE) protocols.
General: For C template, Java template, and Windows Sockets type scripts.
Creating the Vuser Scripts
Step-1: Record a basic Vuser script
Step-2: Enhance/edit the Vuser script
Step-3: Configure Run-Time settings
Step-4: Run the Vuser script in stand-alone mode
Step-5: Incorporate the Vuser script into a LoadRunner scenario
The process starts with recording a basic Vuser script; LoadRunner provides a number of tools for recording Vuser scripts. The basic script is then enhanced by adding control-flow structures and by inserting transactions and rendezvous points. Next, the run-time settings are configured; these include iteration, log, and timing information, and define how the Vuser will behave when it executes the script. To verify that the script runs correctly, run it in stand-alone mode. Once the script runs correctly, incorporate it into a LoadRunner scenario.
Vuser Script Sections
Each Vuser script contains at least three sections: vuser_init, one or more Actions, and vuser_end. Before and during recording, you can select the section of the script into which VuGen will insert the recorded functions.
The following table shows what to record into each section, and when each section is executed.
Script Section ...... Used when recording ...... Is executed when
vuser_init ...... A login to a server ...... The Vuser is initialized (loaded)
Actions ...... Client activity ...... The Vuser is in “Running” status
vuser_end ...... A logoff procedure ...... The Vuser finishes or is stopped
When you run multiple iterations of a Vuser script, only the Actions sections of the script are repeated—the vuser_init and vuser_end sections are not repeated.
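To make the three sections concrete, here is a bare-bones skeleton (a sketch only; the comments stand in for whatever VuGen actually records):
vuser_init()
{
    // Recorded once per Vuser, e.g. the login to the server
    return 0;
}

Action()
{
    // Recorded client activity; this section is repeated on every iteration
    return 0;
}

vuser_end()
{
    // Recorded once per Vuser, e.g. the logoff procedure
    return 0;
}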
LoadRunner Controller
The Controller window has four tabs, corresponding to four views.
Script view – Displays a list of all the Vuser scripts assigned to Vusers.
Host view – Displays the list of machines that can execute Vuser scripts during the scenario.
Vuser view – Displays the Vusers assigned to the scenario.
Online monitor view – Displays online monitor graphs showing transaction and server resource information.
Vuser view is the default view and contains information about the Vusers in the scenario. The tab is divided into two sections:
* Summary information.
* Detailed information.
Summary information:
The Status field of each Vuser group displays the current state of each Vuser in the group. The following table describes the possible Vuser states during a scenario.
Status .............. Description
DOWN .............. The Vuser is down.
PENDING .............. The Vuser is ready to be initialized and is waiting for an available load generator, or is transferring files to the load generator. The Vuser will run when the conditions set in its Scheduling attributes are met.
INITIALIZING .............. The Vuser is being initialized on the remote machine.
READY .............. The Vuser has already performed the init section of the script and is ready to run.
RUNNING .............. The Vuser is running. The Vuser script is being executed on a load generator.
RENDEZVOUS .............. The Vuser has arrived at the rendezvous and is waiting to be released by LoadRunner.
DONE.PASSED .............. The Vuser has finished running. The script passed.
DONE.FAILED .............. The Vuser has finished running. The script failed.
ERROR .............. A problem occurred with the Vuser. Check the Status field on the Vuser dialog box or the output window for a complete explanation of the error.
GRADUAL EXITING .............. The Vuser is completing the iteration or action it is running (as defined in Tools > Options > Run-Time Settings) before exiting.
EXITING .............. The Vuser has finished running or has been stopped, and is now exiting.
STOPPED .............. The Vuser stopped when the Stop command was invoked.
Detailed information:
This section provides the following information.
* ID: The Vuser ID.
* Status: The current Vuser status.
* Script: The script being executed by the Vuser.
* Host: The Vuser host.
* Elapsed: The time elapsed since the Vuser began executing the script.
Transactions:
Transactions are inserted into a Web Vuser script to enable the Controller to measure the performance of the Web server under various load conditions. Each transaction measures the time it takes for the server to respond to one or more tasks submitted by Vusers. LoadRunner allows you to create transactions to measure simple tasks, such as accessing a URL, or complex processes, such as submitting several queries and waiting for a response.
To define a transaction, just insert a Start Transaction and End Transaction icon into the Vuser script.
During a scenario execution, the Controller measures the time it takes to perform each transaction. After a scenario run, LoadRunner’s graphs and reports can be used to analyze the server’s performance.
To mark the start of a transaction while recording:
1. Click the Start Transaction button on the VuGen toolbar. The Start Transaction dialog box opens.
2. Type a transaction name in the Transaction Name box.
3. Click OK to accept the transaction name. VuGen inserts an "lr_start_transaction" statement in the Vuser script.
Example:
lr_start_transaction("transample");
To mark the end of a transaction while recording:
1. Click the End Transaction button on the VuGen toolbar. The End Transaction dialog box opens.
2. Click the arrow in the Transaction Name box to display a list of open transactions. Select the transaction to close.
3. Select the transaction status from the Transaction Status list. You can manually set the status of the transaction, or you can allow LoadRunner to detect it automatically.
* To manually set the status, you perform a manual check within the code of your script, evaluating the return code of a function. For the "succeed" return code, set the status to LR_PASS. For the "fail" return code, set the status to LR_FAIL.
* To instruct LoadRunner to automatically detect the status, specify LR_AUTO. LoadRunner returns the detected status to the Controller.
4. Click OK to accept the transaction name and status. VuGen inserts an "lr_end_transaction" statement in the Vuser script.
Rendezvous Points
A rendezvous point creates intense user load on the server and enables LoadRunner to measure server performance under load. Suppose there is a need to measure how a Web-based banking system performs when ten Vusers simultaneously check account information. In order to emulate the required user load on the server, all the Vusers are instructed to check account information at exactly the same time.
Ensure that multiple Vusers act simultaneously by creating a rendezvous point. When a Vuser arrives at a rendezvous point, it is held there by the Controller. The Controller releases the Vusers from the rendezvous either when the required number of Vusers arrives, or when a specified amount of time has passed.
To Insert a Rendezvous Point
1. While recording a Vuser script, click the Rendezvous button on the recording toolbar. The Rendezvous dialog box opens.
2. Type a name for the rendezvous point in the Rendezvous Name box.
3. Click OK to accept the rendezvous name. VuGen inserts an lr_rendezvous statement into the Vuser script.
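A sketch (with a hypothetical rendezvous name and placeholder URL) of how the recorded Action might hold every Vuser at the same point just before the step that must hit the server simultaneously:
// All Vusers wait here until the Controller releases them
lr_rendezvous("check_account");

// The step executed by all released Vusers at the same time (placeholder URL)
web_url("check_account_info",
    "URL=http://www.example.com/account/balance",
    LAST);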
Using Rendezvous Point
Using the Controller, the level of server load can be influenced by selecting which of the rendezvous points will be active during the scenario and how many Vusers will take part in each rendezvous.
Setting the Rendezvous Attributes
The following rendezvous attributes can be set from the Rendezvous Information dialog box:
• Timeout
• Rendezvous Policy
• Enabling and Disabling Rendezvous
• Enabling and Disabling Vusers
In addition, the dialog box displays general information about the rendezvous point: which script is associated with the rendezvous, and its release history.
Setting Timeout Behavior Attribute
The timeout determines the maximum time (in seconds) that LoadRunner waits for each Vuser to arrive at a rendezvous. After each Vuser arrives at the rendezvous, LoadRunner waits up to the timeout period for the next Vuser to arrive. If the next Vuser does not arrive within the timeout period, the Controller releases all the Vusers from the rendezvous. Each time a new Vuser arrives, the timer is reset to zero. The default timeout is thirty seconds.
Setting the Release Policy Attribute
The policy attribute determines how the Controller releases Vusers from the rendezvous. For each rendezvous the following Policies can be set:
All Arrived: Instructs the Controller to release the Vusers from the rendezvous only when all the Vusers included in the rendezvous arrive. All the Vusers are released simultaneously.
The default policy is All Arrived.
Quota: Sets the number of Vusers that must arrive at a rendezvous point before the Controller releases the Vusers. For instance, suppose that you are testing a scenario of fifty Vusers and that you want a particular operation to be executed simultaneously by ten Vusers. You can designate the entire scenario as participants in the rendezvous and set a quota of ten Vusers. Every time ten Vusers arrive at the rendezvous, they are released.
Disabling and Enabling Rendezvous Points
It is possible to temporarily disable a rendezvous and exclude it from the scenario. By disabling and enabling a rendezvous, you influence the level of server load. The Disable and Enable buttons on the Rendezvous Information dialog box are used to change the status of a rendezvous.
Disabling and Enabling Vusers at Rendezvous Points
In addition to disabling a rendezvous for all Vusers in a scenario, LoadRunner lets you disable it for specific Vusers. By disabling Vusers at a rendezvous, you temporarily exclude them from participating in the rendezvous. Enabling disabled Vusers returns them to the rendezvous. Use the Disable and Enable commands to specify which Vusers will take part in a rendezvous.
Monitoring a Scenario
We can monitor scenario execution using the LoadRunner online transaction and server resource monitors.
About online monitoring
LoadRunner provides the following online monitors:
Server resource
Vuser Status
Transaction
Web
The server resource monitor gauges the system resources used during a scenario. It is capable of measuring NT, UNIX, TUXEDO and SNMP resources gathered by custom monitors.
Setting monitor options
Before running a scenario, set the appropriate monitor options. The available options are:
* Sample rate
The sample rate is the period of time (in seconds) between consecutive samples.
* Error handling
Indicates how LoadRunner should behave when a monitor error occurs.
* Debug
* Transaction monitor
LoadRunner Analysis:
After running a scenario, the graphs and reports can be used to analyze the performance of the client/server system.
The results can be viewed in several ways.
* Vuser output file
* Controller output window
* Analysis graphs and reports
* Spreadsheet and raw data
9) Download links
WinRunner
Mercury WinRunner is an automated regression testing tool that allows a user to record and play back test scripts. Mercury WinRunner captures, verifies, and replays user interactions automatically. WinRunner facilitates easy test creation by recording your work on the application: as you point and click GUI objects in the application, WinRunner generates a test script in the C-like Test Script Language (TSL).
WinRunner provides checkpoints for text, GUI, bitmaps, URL links and the database, allowing testers to compare expected and actual outcomes and identify potential problems with numerous GUI objects and their functionality.
Download WinRunner 7.6 here:
http://www.mercury.com/us/products/quality-center/functional-testing/winrunner
LoadRunner
LoadRunner enables you to test your system under controlled and peak load conditions. To generate load, LoadRunner runs thousands of Virtual Users that are distributed over a network. Using a minimum of hardware resources, these Virtual Users provide consistent, repeatable, and measurable load to exercise your application just as real users would.
LoadRunner’s in-depth reports and graphs provide the information that you need to evaluate the performance of your application.
The advantage of LoadRunner is that it reduces personnel requirements by replacing human users with virtual users, or Vusers. These Vusers emulate the behavior of real users operating real applications.
LoadRunner automatically records the performance of the application during a test. Because LoadRunner tests are fully automated, you can easily repeat them as often as you need.
http://downloads.mercury.com/cgi-bin/portal/download/index.jsp
Download web performance testing tool
http://www.adventnet.com/products/qengine/web-performance-testing.html
Download web application testing tool
http://www.adventnet.com/products/qengine/web-application-testing.html
Quick Test Professional
Mercury QuickTest Professional provides a solution for functional and regression test automation, addressing every major software application and environment.
QuickTest Professional provides an interactive, visual environment for test development. We can create a test script by simply pressing a Record button and using an application to perform a typical business process.
QuickTest Professional is significantly easier for a non-technical person to adapt to, and to use to create working test cases.
Download QTP
http://downloads.mercury.com/cgi-bin/portal/download/index.jsp
WhiteBox Testing
A software testing approach that examines the program structure and derives test data from the program logic. White-box testing strategies include designing tests such that every line of source code is executed at least once, or requiring every function to be individually tested.
Test coverage is an important component of white-box testing. The goal is to try to execute (that is, test) all lines in an application at least once.
Because white-box testing tools can individually or collectively instrument source lines, it is straightforward to determine which lines in a host program have or have not been executed without modifying source.
Synonyms for white box testing:
* Glass Box testing
* Structural testing
* Clear Box testing
* Open Box Testing
The purpose of white box testing:
* Build quality throughout the life cycle of a software product or service.
* Provide a complementary function to black box testing.
* Perform complete coverage at the component level.
* Improve quality by optimizing performance.
Code Coverage Analysis:
Basis Path Testing
A testing mechanism that derives a logical complexity measure of a procedural design and uses this measure as a guide for defining a basis set of execution paths. Test cases that exercise the basis set will execute every statement at least once.
Flow Graph Notation: A notation for representing control flow, similar to flow charts and UML activity diagrams.
Cyclomatic Complexity: Cyclomatic complexity gives a quantitative measure of the logical complexity of a design. This value gives the number of independent paths in the basis set, and an upper bound for the number of tests needed to ensure that each statement is executed at least once. An independent path is any path through the program that introduces at least one new set of processing statements or a new condition. Cyclomatic complexity therefore provides an upper bound for the number of tests required to guarantee coverage of all program statements.
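As a small illustration (not taken from the text above), consider the following C function; its cyclomatic complexity equals the number of decision points plus one:
/* Classify an integer as negative, zero, or positive. */
int classify(int x)
{
    if (x < 0)            /* decision point 1 */
        return -1;
    else if (x == 0)      /* decision point 2 */
        return 0;
    else
        return 1;
}

/*
 * Two decision points give a cyclomatic complexity of 2 + 1 = 3, so the
 * basis set contains three independent paths. Three test cases, e.g.
 * classify(-5), classify(0) and classify(7), execute every statement and
 * every branch at least once.
 */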
Control Structure Testing:
Condition Testing
Condition testing aims to exercise all logical conditions in a program module. Conditions may take the following forms:
Relational expression: (E1 op E2), where E1 and E2 are arithmetic expressions.
Simple condition: A Boolean variable or relational expression, possibly preceded by a NOT operator.
Compound condition: Composed of two or more simple conditions, Boolean operators, and parentheses.
Boolean expression: A condition without relational expressions.
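For example (an illustrative sketch, not from the original text), a compound condition in C and the combinations a condition-testing strategy would exercise:
/* Compound condition: two simple conditions joined by && and || */
int may_drive(int age, int has_license, int has_permit)
{
    if (age >= 18 && (has_license || has_permit))
        return 1;
    return 0;
}

/*
 * Condition testing drives each simple condition to both true and false:
 *   age >= 18 true,  has_license true                 -> returns 1
 *   age >= 18 true,  has_license false, permit true   -> returns 1
 *   age >= 18 true,  has_license false, permit false  -> returns 0
 *   age >= 18 false                                   -> returns 0
 */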
Data Flow Testing
Selects test paths according to the locations of definitions and uses of variables in the program.
Loop Testing
Loops are fundamental to many algorithms. Loops can be classified as simple, concatenated, nested, or unstructured.
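A sketch of how simple-loop test cases are typically chosen, using a made-up C function where n is the requested number of passes and max is the maximum allowed:
/* Sum the first n elements of an array that holds max elements. */
int sum_first(const int *a, int n, int max)
{
    int total = 0;
    int i;
    for (i = 0; i < n && i < max; i++)   /* simple loop under test */
        total += a[i];
    return total;
}

/*
 * Typical simple-loop test cases for the loop bound n:
 *   n = 0                       skip the loop entirely
 *   n = 1 and n = 2             one pass, two passes
 *   n = a typical value         a representative number of passes
 *   n = max - 1, max, max + 1   at and around the maximum allowable passes
 */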
Advantages of White Box Testing
* Forces test developer to reason carefully about implementation
* Approximate the partitioning done by execution equivalence
* Reveals errors in "hidden" code
* Beneficent side-effects
Disadvantages of White Box Testing
* Expensive
* Cases omitted from the code (missing functionality) may go undetected, since tests are derived only from what has been implemented.
Black-box Testing
Black-box Testing treats the system as a "black-box", so it doesn't use knowledge of the internal structure explicitly. It is usually described as focusing on testing functional requirements. Synonyms for black-box include: behavioral, functional, opaque-box, and closed-box.Black Box Testing is testing without knowledge of the internal workings of the item being tested. For example, when black box testing is applied to software engineering, the tester would only know the "legal" inputs and what the expected outputs should be, but has no idea how the program actually arrives at those outputs.
It is because of this that black box testing can be considered testing with respect to the specifications; no other knowledge of the program is necessary. For this reason, the tester and the programmer can be independent of one another, avoiding programmer bias toward his own work.
So-called "functionality testing" is central to most testing exercises. Its primary objective is to assess whether the program does what it is supposed to do, i.e., whether it meets user-specified requirements. There are different approaches to functionality testing. One is to test each program feature or function in sequence. The other is to test module by module, i.e., each function where it is first called.
Advantages:
* More effective on larger units of code than glass box testing
* Tester needs no knowledge of implementation, including specific programming languages
* Tester and programmer are independent of each other
* Tests are done from a user's point of view
* Test cases can be designed as soon as the specifications are complete
Disadvantages
* Only a small number of possible inputs can actually be tested, to test every possible input stream would take nearly forever
* Without clear and concise specifications, test cases are hard to design
* There may be unnecessary repetition of test inputs if the tester is not informed of test cases the programmer has already tried
* May leave many program paths untested
* Cannot be directed toward specific segments of code which may be very complex (and therefore more error prone)
* Most testing related research has been directed toward glass box testing
1) Techniques in BlackBox
Black box testing attempts to derive sets of inputs that will fully exercise all the functional requirements of a system. It is not an alternative to white box testing. This type of testing attempts to find errors in the following categories:
* incorrect or missing functions,
* interface errors,
* errors in data structures or external database access,
* performance errors, and
* initialization and termination errors.
Tests are designed to answer the following questions:
* How is the function's validity tested?
* What classes of input will make good test cases?
* Is the system particularly sensitive to certain input values?
* How are the boundaries of a data class isolated?
* What data rates and data volume can the system tolerate?
* What effect will specific combinations of data have on system operation?
White box testing should be performed early in the testing process, while black box testing tends to be applied during later stages. Test cases should be derived which
* reduce the number of additional test cases that must be designed to achieve reasonable testing, and
* tell us something about the presence or absence of classes of errors, rather than an error associated only with the specific test at hand.
Equivalence Partitioning
This method divides the input domain of a program into classes of data from which test cases can be derived. Equivalence partitioning strives to define a test case that uncovers classes of errors and thereby reduces the number of test cases needed. It is based on an evaluation of equivalence classes for an input condition. An equivalence class represents a set of valid or invalid states for input conditions.
Equivalence classes may be defined according to the following guidelines:
* If an input condition specifies a range, one valid and two invalid equivalence classes are defined.
* If an input condition requires a specific value, then one valid and two invalid equivalence classes are defined.
* If an input condition specifies a member of a set, then one valid and one invalid equivalence class are defined.
* If an input condition is boolean, then one valid and one invalid equivalence class are defined.
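To illustrate the guidelines with made-up requirements, suppose an input field accepts an integer age and the valid range is 18 to 60; the sketch below shows the classes and one representative test per class:
/*
 * Hypothetical requirement: valid ages are 18..60 inclusive.
 * Equivalence classes for the range guideline:
 *   valid:    18 <= age <= 60   (representative test: age = 35)
 *   invalid:  age < 18          (representative test: age = 10)
 *   invalid:  age > 60          (representative test: age = 70)
 */
int is_valid_age(int age)
{
    return age >= 18 && age <= 60;
}

/* One test per class suffices for equivalence partitioning:
   is_valid_age(35) -> 1, is_valid_age(10) -> 0, is_valid_age(70) -> 0 */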
Boundary Value Analysis
This method leads to a selection of test cases that exercise boundary values. It complements equivalence partitioning since it selects test cases at the edges of a class. Rather than focusing on input conditions solely, BVA derives test cases from the output domain also. BVA guidelines include:
* For input ranges bounded by a and b, test cases should include values a and b and just above and just below a and b respectively.
* If an input condition specifies a number of values, test cases should be developed to exercise the minimum and maximum numbers and values just above and below these limits.
* Apply guidelines 1 and 2 to the output.
* If internal data structures have prescribed boundaries, a test case should be designed to exercise the data structure at its boundary.
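Continuing the same made-up age field (valid range 18 to 60), boundary value analysis picks test values at and immediately around each edge of the range; a small self-contained C sketch:
#include <stdio.h>

int main(void)
{
    /* Hypothetical valid range: 18..60 inclusive.
       Boundary values: 17, 18, 19 around the lower bound and
                        59, 60, 61 around the upper bound. */
    int boundary_values[] = { 17, 18, 19, 59, 60, 61 };
    int i;

    for (i = 0; i < 6; i++) {
        int age = boundary_values[i];
        int valid = (age >= 18 && age <= 60);
        printf("age %d -> %s\n", age, valid ? "valid" : "invalid");
    }
    return 0;
}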
Cause-Effect Graphing Techniques
Cause-effect graphing is a technique that provides a concise representation of logical conditions and corresponding actions. There are four steps:
* Causes (input conditions) and effects (actions) are listed for a module and an identifier is assigned to each.
* A cause-effect graph is developed.
* The graph is converted to a decision table.
* Decision table rules are converted to test cases.
2) Manual Testing :
BlackBox Manual Testing
SQA team members, upon receipt of the development builds, walk through the GUI and either update the existing hard copy of the product roadmaps or create new hard copy. This is then passed on to the Tools engineer to automate for new builds and regression testing. Defects are entered into the bug tracking database for investigation and resolution.
Features & Functions - SQA test engineers, swearing on the team definition, exercise the product features and functions accordingly. Defects in feature/function capability are entered into the defect tracking system and are communicated to the team. Features are expected to perform as expected and their functionality should be oriented toward ease of use and clarity of objective.
Tests are planned around new features, and regression tests are exercised to validate that existing features and functions are enabled and performing in a manner consistent with prior releases. SQA, using the exploratory testing method, manually tests and then plans more exhaustive testing and automation. Regression tests are exercised which consist of using developed test cases against the product to validate field input, boundary conditions, and so on. Automated tests developed for prior releases are also used for regression testing.
Installation - The product is installed on each of the supported operating systems, in either a default, flat-file configuration or with one of the supported databases. Every operating system and database supported by the product is tested, though not in all possible combinations. SQA is committed to executing, during the development life cycle, the combinations most frequently used by customers. Clean and upgrade installations are the minimum requirements.
Documentation - All documentation, which is reviewed by Development prior to Alpha, is reviewed by the SQA team prior to Beta. SQA not only verifies technical accuracy, clarity, and completeness, but also provides editorial input on consistency, style, and typographical errors.
1) Functionality Testing
Functional testing validates that an application or Web site conforms to its specifications and correctly performs all its required functions.
This entails a series of tests which perform a feature-by-feature validation of behavior, using a wide range of normal and erroneous input data. This can involve testing of the product's user interface, database management, security, installation, networking, etc.
The purpose of functionality testing is to reveal issues concerning the product's functionality and conformance to user requirements.
The first step in functionality testing is to become familiar with the program itself and with the program's desired behavior. For this, the tester should have a clear understanding of documentation such as the program's functional specification or user manual. Once a program's expected functionality has been defined, test cases or test procedures can be created that exercise the program in order to test actual behavior against expected behavior. Testing the program's functionality then involves executing any test cases that have been created. Certain portions of a functionality testing effort can also be automated; whether this makes sense depends on several factors and should be discussed with a qualified engineer.
1. Range checking- minimum and maximum values should not be exceeded (invalid values should not be accepted)
2. Check whether numeric fields accept only numeric values
3. Check ‘online Help’ feature (including buttons to open Help feature)
4. Check ‘Print’ feature
5. Check ‘Open file’ feature (must open correct file extensions and incorrect file type should give error messages)
6. Check ‘Graph’ features
7. If there are logins, enter invalid login information for each field
8. Check for error messages for clarity and whether they come up when they are supposed to.
9. In the presence of a database, check that all connections through the application are valid when accessing data (error messages like "could not connect to database" should not appear)
10. Modify data files (like add extra special characters) to make sure the application gives correct error messages
11. For administrative features make sure only administrators of application may access the features
12. Check by adding duplicate records
13. Delete all records to check whether such an action does not crash the application
14. Check for compatibility using MS Office application (like copy and paste)
15. Click all buttons to make sure all of them are functioning appropriately
16. Check 'Save' feature (should not be able to overwrite an existing file without permission, should save to the correct directory, and must create the correct extension)
17. Check options/settings
18. Check international units are converted correctly
19. Make sure no spellings are incorrect
20. Check for valid date formats
21. Make sure windows are properly minimized, maximized and resized
22. Check whether keyboard shortcuts are working properly
23. Check that right mouse clicks show correct pop up menus
24. If hardware/software keys are present check if the application works as intended with and without execution of keys
2) Compatibility Testing
Testing to ensure compatibility of an application or Web site with different browsers, operating systems, and hardware platforms. Different versions, configurations, display resolutions, and Internet connection speeds can all impact the behavior of the product and introduce costly and embarrassing bugs. We test for compatibility using real test environments: that is, testing how well the system performs in the particular software, hardware, or network environment. Compatibility testing can be performed manually or can be driven by an automated functional or regression test suite.
The purpose of compatibility testing is to reveal issues related to the product’s interaction with other software as well as hardware. The product compatibility is evaluated by first identifying the hardware/software/browser components that the product is designed to support. Then a hardware/software/browser matrix is designed that indicates the configurations on which the product will be tested. Then, with input from the client, a testing script is designed that will be sufficient to evaluate compatibility between the product and the hardware/software/browser matrix. Finally, the script is executed against the matrix, and any anomalies are investigated to determine exactly where the incompatibility lies.
Some typical compatibility tests include testing your application:
* On various client hardware configurations
* Using different memory sizes and hard drive space
* On various Operating Systems
* In different network environments
* With different printers and peripherals (e.g., Zip drives, USB devices, etc.)
3) Regression Testing
Regression testing is testing the module in which a bug was identified earlier along with the impacted areas to ensure that this fix has not introduced any further defects.
The purpose of regression testing is to ensure that previously detected and fixed issues really are fixed, they do not reappear, and new issues are not introduced into the program as a result of the changes made to fix the issues.
Regression testing also referred to as verification testing, is initiated after a programmer has attempted to fix a recognized problem or has added source code to a program that may have inadvertently introduced errors. It is a quality control measure to ensure that the newly modified code still complies with its specified requirements and that unmodified code has not been affected by the maintenance activity.
Regression testing is in general a black box testing strategy in which previously written test cases that have exposed bugs are re-executed to check whether previously fixed faults have re-emerged. All the tests that have exposed bugs are kept in a test suite and are re-run whenever changes are made to the program to fix a bug. This is a tedious process, because after every build it is difficult to go through the process of retesting all the test cases repeatedly; to make the process simpler, regression testing is automated using testing tools.
Typically, regression testing should be performed on a daily basis. Once an issue in the defect tracking database has been fixed, it is reassigned back to the tester for final resolution. The tester can then either reopen the issue, if it has not been satisfactorily addressed, or close the issue if it has, indeed, been fixed.
4) Performance Testing
Performance testing is a rigorous usability evaluation of a working system under realistic conditions to identify usability problems and to compare measures such as success rate, task time and user satisfaction with requirements.
The goal of performance testing is not to find bugs, but to eliminate bottlenecks and establish a baseline for future regression testing.
To conduct performance testing is to engage in a carefully controlled process of measurement and analysis. Ideally, the software under test is already stable enough so that this process can proceed smoothly.
A clearly defined set of expectations is essential for meaningful performance testing.
For example, for a Web application, you need to know at least two things:
* expected load in terms of concurrent users or HTTP connections
* acceptable response time
Load testing:
Load testing is usually defined as the process of exercising the system under test by feeding it the largest tasks it can operate with. Load testing is sometimes called volume testing, or longevity/endurance testing
Examples of volume testing:
* testing a word processor by editing a very large document
* testing a printer by sending it a very large job
* testing a mail server with thousands of user mailboxes
Examples of longevity/endurance testing:
* testing a client-server application by running the client in a loop against the server over an extended period of time
Goals of load testing:
* expose bugs that do not surface in cursory testing, such as memory management bugs, memory leaks, buffer overflows, etc.
* ensure that the application meets the performance baseline established during performance testing. This is done by running regression tests against the application at a specified maximum load.
Although performance testing and load testing may seem similar, their goals are different. Performance testing uses load-testing techniques and tools for measurement and benchmarking purposes and uses various load levels, whereas load testing operates at a predefined load level, usually the highest load that the system can accept while still functioning properly.
Stress testing:
Stress testing is a form of testing that is used to determine the stability of a given system or entity. This is designed to test the software with abnormal situations. Stress testing attempts to find the limits at which the system will fail through abnormal quantity or frequency of inputs.
Stress testing tries to break the system under test by overwhelming its resources or by taking resources away from it (in which case it is sometimes called negative testing). The main purpose behind this madness is to make sure that the system fails and recovers gracefully -- this quality is known as recoverability.
The point of stress testing is not simply to break the system, but to observe how the system reacts to failure. Stress testing looks at questions such as the following.
* Does it save its state or does it crash suddenly?
* Does it just hang and freeze or does it fail gracefully?
* Is it able to recover from the last good state on restart? Etc.
Web Testing
During testing of web sites the following scenarios should be considered:
* Functionality
* Performance
* Usability
* Server side interface
* Client side compatibility
* Security
Functionality:
In testing the functionality of the web sites the following should be tested.
* Links
+ Internal links
+ External links
+ Mail links
+ Broken links
* Forms
+ Field validation
+ Functional chart
+ Error message for wrong input
+ Optional and mandatory fields
* Database
+ Testing will be done on the database integrity.
* Cookies
+ Testing will be done on the client system side, on the temporary internet files.
Performance:
Performance testing can be applied to understand the web site's scalability, or to benchmark the performance in the environment of third party products such as servers and middleware for potential purchase.
* Connection speed:
o Tested over various Networks like Dial up, ISDN etc
* Load
o What is the number of users per unit of time?
o Check for peak loads & how system behaves.
o Large amount of data accessed by user.
* Stress
o Continuous load
o Performance of memory, CPU, file handling, etc.
Usability :
Usability testing is the process by which the human-computer interaction characteristics of a system are measured, and weaknesses are identified for correction.
Usability can be defined as the degree to which a given piece of software assists the person sitting at the keyboard to accomplish a task, as opposed to becoming an additional impediment to such accomplishment. The broad goal of usable systems is often assessed using several criteria:
* Ease of learning
* Navigation
* Subjective user satisfaction
* General appearance
Server side interface:
In web testing the server side interface should be tested. This is done by:
Verifying that communication is carried out properly.
Testing the compatibility of the server with software, hardware, network, and database.
Client side compatibility is also tested on various platforms, using various browsers, etc.
Security:
The primary reason for testing the security of a web site is to identify potential vulnerabilities and subsequently repair them.
The following types of testing are described in this section:
* Network Scanning
* Vulnerability Scanning
* Password Cracking
* Log Review
* Integrity Checkers
* Virus Detection
3) Testing Skills
BlackBox Testing Skills
Essential Testing Skills needed for Testers:
Test Planning: Analyzing a project to determine the kinds of testing needed, the kinds of people needed, the scope of testing (including what should and should not be tested), the time available for testing activities, the initiation criteria for testing, the completion criteria, and the critical success factors of testing.
Test Tool Usage : Knowing which tools are most appropriate in a given testing situation, how to apply the tools to solve testing problems effectively, how to organize automated testing, and how to integrate test tools into an organization
Test Execution : Performing various kinds of tests, such as unit testing, system testing, UAT, stress testing and regression testing. This can also include how to determine which conditions to test and how to evaluate whether the system under test passes or fails. Test execution can often be dependent on your unique environment and project needs, although basic testing principles can be adopted to test most projects
Defect Management : Understanding the nature of defects, how to report defects, how to track defects and how to use the information gained from defects to improve the development and testing processes
Risk analysis: Understanding the nature of risk, how to assess project and software risks, how to use the results of a risk assessment to prioritize and plan testing, and how to use risk analysis to prevent defects and project failure.
Test Measurement: Knowing what to measure during a test, how to use the measurements to reach meaningful conclusions and how to use measurements to improve the testing and development processes
4) Test Approach
BlackBox Test Approach
Design Validation
Statements regarding coverage of the feature design, including both specification and development documents. Will testing review the design? Is design an issue on this release? How much concern does testing have regarding design?
Data Validation
What types of data will require validation? What parts of the feature will use what types of data? What are the data types that test cases will address? Etc.
API Testing
What level of API testing will be performed? What is justification for taking this approach (only if none is being taken)?
Content Testing
Is your area/feature/product content based? What is the nature of the content? What strategies will be employed in your feature/area to address content related issues?
Low-Resource Testing
What resources does your feature use? Which are used most, and are most likely to cause problems? What tools/methods will be used in testing to cover low resource (memory, disk, etc.) issues?
Setup Testing
How is your feature affected by setup? What are the necessary requirements for a successful setup of your feature? What is the testing approach that will be employed to confirm valid setup of the feature?
Modes and Runtime Options
What are the different run time modes the program can be in? Are there views that can be turned off and on? Controls that toggle visibility states? Are there options a user can set which will affect the run of the program? List here the different run time states and options the program has available. It may be worthwhile to indicate here which ones demonstrate a need for more testing focus.
Interoperability
How will this product interact with other products? What level of knowledge does it need to have about other programs -- “good neighbor”, program cognizant, program interaction, fundamental system changes? What methods will be used to verify these capabilities?
Integration Testing
Go through each area in the product and determine how it might interact with other aspects of the project. Start with the ones that are obviously connected, but try every area to some degree. There may be subtle connections you do not think about until you start using the features together. The test cases created with this approach may duplicate the modes and objects approaches, but there are some areas which do not fit in those categories and might be missed if you do not check each area.
Compatibility: Clients
Is your feature a server based component that interacts with clients? Is there a standard protocol that many clients are expected to use? How many and which clients are expected to use your feature? How will you approach testing client compatibility? Is your server suited to handle ill-behaved clients? Are there subtleties in the interpretation of standard protocols that might cause incompatibilities? Are there non-standard, but widely practiced use of your protocols that might cause incompatibilities?
Compatibility: Servers
Is your feature a client based component that interacts with servers? Is there a standard protocol supported by many servers that your client speaks? How many different servers will your client program need to support? How will you approach testing server compatibility? Is your client suited to handle ill-behaved or non-standard servers? Are there subtleties in the interpretation of standard protocols that might cause incompatibilities? Are there non-standard, but widely practiced use of protocols that might cause incompatibilities?
Beta Testing
What is the beta schedule? What is the distribution scale of the beta? What is the entry criteria for beta? How is testing planning on utilizing the beta for feedback on this feature? What problems do you anticipate discovering in the beta? Who is coordinating the beta, and how?
Environment/System - General
Are there issues regarding the environment, system, or platform that should get special attention in the test plan? What are the run time modes and options in the environment that may cause difference in the feature? List the components of critical concern here. Are there platform or system specific compliance issues that must be maintained?
Configuration
Are there configuration issues regarding hardware and software in the environment that may get special attention in the test plan? Some of the classical issues are machine and bios types, printers, modems, video cards and drivers, special or popular TSR’s, memory managers, networks, etc. List those types of configurations that will need special attention.
User Interface
List the items in the feature that explicitly require a user interface. Is the user interface designed such that a user will be able to use the feature satisfactorily? Which part of the user interface is most likely to have bugs? How will the interface testing be approached?
Performance & Capacity Testing
How fast and how much can the feature do? Does it do enough fast enough? What testing methodology will be used to determine this information? What criterion will be used to indicate acceptable performance? If modifications of an existing product, what are the current metrics? What are the expected major bottlenecks and performance problem areas on this feature?
Scalability
Is the ability to scale and expand this feature a major requirement? What parts of the feature are most likely to have scalability problems? What approach will testing use to define the scalability issues in the feature?
Stress Testing
How does the feature do when pushed beyond its performance and capacity limits? How is its recovery? What is its breakpoint? What is the user experience when this occurs? What is the expected behavior when the client reaches stress levels? What testing methodology will be used to determine this information? What area is expected to have the most stress related problems?
Volume Testing
Volume testing differs from performance and stress testing in so much as it focuses on doing volumes of work in realistic environments, durations, and configurations. Run the software as expected user will - with certain other components running, or for so many hours, or with data sets of a certain size, or with certain expected number of repetitions.
International Issues
Confirm localized functionality, that strings are localized and that code pages are mapped properly. Assure program works properly on localized builds, and that international settings in the program and environment do not break functionality. How is localization and internationalization being done on this project? List those parts of the feature that are most likely to be affected by localization. State methodology used to verify International sufficiency and localization.
Robustness
How stable is the code base? Does it break easily? Are there memory leaks? Are there portions of code prone to crash, save failure, or data corruption? How good is the program’s recovery when these problems occur? How is the user affected when the program behaves incorrectly? What is the testing approach to find these problem areas? What is the overall robustness goal and criteria?
Error Testing
How does the program handle error conditions? List the possible error conditions. What testing methodology will be used to evoke and determine proper behavior for error conditions? What feedback mechanism is being given to the user, and is it sufficient? What criteria will be used to define sufficient error recovery?
Usability
What are the major usability issues on the feature? What is testing’s approach to discover more problems? What sorts of usability tests and studies have been performed, or will be performed? What is the usability goal and criteria for this feature?
Accessibility
Is the feature designed in compliance with accessibility guidelines? Could a user with special accessibility requirements still be able to utilize this feature? What is the criteria for acceptance on accessibility issues on this feature? What is the testing approach to discover problems and issues? Are there particular parts of the feature that are more problematic than others?
User Scenarios
What real-world user activities are you going to try to mimic? What classes of users (e.g., secretaries, artists, writers, animators, construction workers, airline pilots, shoemakers, etc.) are expected to use this program, and for which activities? How will you attempt to mimic these key scenarios? Are there special niche markets that your product is aimed at (intentionally or unintentionally) where mimicking real user scenarios is critical?
Boundaries and Limits
Are there particular boundaries and limits inherent in the feature or area that deserve special mention here? What is the testing methodology to discover problems handling these boundaries and limits?
Operational Issues
If your program is being deployed in a data center, or as part of a customer's operational facility, then testing must, in the very least, mimic the user scenario of performing basic operational tasks with the software.
Backup
Identify all files representing data and machine state, and indicate how those will be backed up. If it is imperative that service remain running, determine whether or not it is possible to backup the data and still keep services or code running.
Recovery
If the program goes down, or must be shut down, are there steps and procedures that will restore program state and get the program or service operational again? Are there holes in this process that may leave a service or state deficient? Are there holes that could cause loss of data? Mimic as many loss-of-service states as are likely to happen, and go through the process of successfully restoring service.
Archiving
Archival is different from backup. Backup is when data is saved in order to restore service or program state. Archive is when data is saved for retrieval later. Most archival and backup systems piggy-back on each other's processes.
Is archival of data going to be considered a crucial operational issue on your feature? If so, is it possible to archive the data without taking the service down? Is the data, once archived, readily accessible?
Monitoring
Does the service have adequate monitoring messages to indicate status, performance, or error conditions? When something goes wrong, are messages sufficient for operational staff to know what to do to restore proper functionality? Are there "heartbeat" counters that indicate whether or not the program or service is working? Attempt to mimic the scenario of an operational staff trying to keep a service up and running.
Upgrade
Does the customer likely have a previous version of your software, or some other software? Will they be performing an upgrade? Can the upgrade take place without interrupting service? Will anything be lost (functionality, state, data) in the upgrade? Does it take unreasonably long to upgrade the service?
Migration
Is there data, script, code or other artifacts from previous versions that will need to be migrated to a new version? Testing should create an example of installation with an old version, and migrate that example to the new version, moving all data and scripts into the new format.
List here all data files, formats, or code that would be affected by migration, the solution for migration, and how testing will approach each.
Special Code Profiling and Other Metrics
How much focus will be placed on code coverage? What tools and methods will be used to measure the degree to which testing coverage is sufficiently addressing all of the code?
5) Test Metrics
BlackBox Test Metrics
A Metric is a quantitative measure of the degree to which a system, component or process possesses a given attribute. Software metrics are measures that are used to quantify the software, software development resources and software development process. A metric is defined to be the name of a mathematical function used to measure some attribute of a product or process. The actual numerical value produced by a metric is a measure. For example, cyclomatic complexity is a metric; when applied to program code, the number yielded by the formula is the cyclomatic complexity measure.
Two general classes of metrics include the following:
* Management metrics, which assist in the management of the software development process.
* Quality metrics, which are predictors or indicators of the product qualities.
Metrics related to software error detection ("testing" in the broad sense) can be grouped into the following categories:
* General metrics that may be captured and analysed throughout the product life cycle;
* Software Requirements metrics, which may give early warning of quality problems in requirements specifications;
* Software Design metrics, which may be used to assess the status of software designs;
* Code metrics, which reveal properties of the program source code;
* Test metrics, which can be used to control the testing process, to assess its effectiveness, and to set improvement targets;
* Software Installation metrics, which are applicable during the installation process;
* Software Operation and Maintenance metrics, including those used in providing software product support.
Test Metrics
The following are the metrics collected in the testing process.
1. Defect age.
Defect age is the time from when a defect is introduced to when it is detected (or fixed). Assign the numbers 1 through 6 to each of the software development activities, from software requirements through software operation and maintenance. The defect age is then computed as:
Average Defect Age = Sum of (Activity Detected - Activity Introduced) / Number of Defects
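As a rough illustration (the defect records and activity numbers below are invented for the example, not taken from a real project), the average defect age could be computed along these lines in C:

#include <stdio.h>

/* Hypothetical defect records: activity numbers 1..6 as described above
   (1 = requirements, ..., 6 = operation and maintenance). */
struct defect { int introduced; int detected; };

int main(void)
{
    struct defect defects[] = { {1, 3}, {2, 2}, {1, 5}, {3, 4} };
    int n = sizeof(defects) / sizeof(defects[0]);
    int total_age = 0;

    for (int i = 0; i < n; i++)
        total_age += defects[i].detected - defects[i].introduced;

    /* Average Defect Age = sum of (Activity Detected - Activity Introduced)
       divided by the number of defects. */
    printf("Average defect age = %.2f\n", (double)total_age / n);
    return 0;
}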
2. Defect response time .
This measure is the time between when a defect is detected to when it is fixed or closed.
3. Defect cost ($d).
The cost of a defect may be computed as:
$d = (cost to analyse the defect) + (cost to fix it) + (cost of failures already incurred due to it)
4. Defect removal efficiency (DRE) .
The DRE is the percentage of defects that have been removed during an activity, computed with the equation below. The DRE can also be computed for each software development activity and plotted on a bar graph to show the relative defect removal efficiencies for each activity. Or, the DRE may be computed for a specific task or technique (e.g., design inspection, code walkthrough, unit test, 6 month operation, etc.). [SQE]
DRE = (Number Defects Removed / Number Defects At Start Of Process) * 100
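A minimal sketch of the same calculation in C, assuming invented counts for a single activity such as a design inspection:

#include <stdio.h>

int main(void)
{
    /* Hypothetical counts for one activity, e.g. a design inspection. */
    int defects_at_start = 120;  /* defects present when the activity began */
    int defects_removed  = 90;   /* defects removed by the activity         */

    /* DRE = (Number Defects Removed / Number Defects At Start Of Process) * 100 */
    double dre = (double)defects_removed / defects_at_start * 100.0;
    printf("Defect removal efficiency = %.1f%%\n", dre);
    return 0;
}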
5. Mean time to failure (MTTF).
Gives an estimate of the mean time to the next failure, by accurately recording the failure times t_i (the elapsed time between the (i-1)st and the ith failures) and computing the average of all the failure times. This metric is the basic parameter required by most software reliability models. High values imply good reliability.
MTTF should be corrected by a weighted scheme similar to that used for computing fault density (see Fault density, below).
6. Fault density (FD).
This measure is computed by dividing the number of faults by the size of the software (usually in KLOC, thousands of lines of code).
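A brief C sketch combining the last two metrics; the inter-failure times, fault count and code size below are invented purely for the illustration:

#include <stdio.h>

int main(void)
{
    /* Hypothetical inter-failure times t_i in hours, plus a hypothetical
       fault count and code size for the fault density calculation. */
    double failure_times[] = { 12.5, 30.0, 8.2, 44.1, 25.6 };
    int n = sizeof(failure_times) / sizeof(failure_times[0]);
    int faults_found = 37;
    double kloc = 52.0;

    double sum = 0.0;
    for (int i = 0; i < n; i++)
        sum += failure_times[i];

    /* MTTF = average of the recorded inter-failure times. */
    printf("MTTF = %.1f hours\n", sum / n);

    /* FD = number of faults / size in KLOC. */
    printf("Fault density = %.2f faults per KLOC\n", faults_found / kloc);
    return 0;
}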
6) Test Plan
BlackBox Test Plan
Test planning is one of the keys to successful software testing. A test plan can be defined as a document that describes the scope, approach, resources and schedule of intended test activities. The main purpose of preparing a test plan is that everyone concerned with the project is synchronized with regard to scope, deliverables, deadlines and responsibilities for the project.
The complete document will help the people outside the test group understand the “WHY” and “HOW” of the product validation.
Test planning can and should occur at several levels. The first plan to consider is the Master Test Plan. The purpose of the Master Test Plan is to orchestrate testing at all levels (unit, integration, system, acceptance, beta, etc.). The Master Test Plan is to testing what the Project Plan is to the entire development effort.
The goal of test planning is not to create a long list of test cases, but rather to deal with the important issues of testing strategy, resource utilization, responsibilities, risks, and priorities.
Contents of test plan:
Purpose:
This section should contain the purpose of preparing the test plan.
Scope:
This section should talk about the areas of the application which are to be tested by the QA team and specify those areas which are definitely out of the scope.
Test approach :
This would contain details on how the testing is to be performed and whether any specific strategy is to be followed.
Entry criteria:
This section explains the various steps to be performed before the start of testing, i.e. the prerequisites.
E.g. Environment setup, starting web server/ application server, successful implementation of latest build etc.
Resources:
This lists the people who will be involved in the project, their designations, etc.
Tasks and responsibilities:
This describes the tasks to be performed and the responsibilities assigned to the various members of the project.
Exit criteria:
This contains tasks like bringing down the system or server, restoring the system to the pre-test environment, database refresh, etc.
Schedules/ Milestones :
This section deals with the final delivery date and the various milestone dates to be met in the course of project.
Hardware/ software requirements :
This section contains the details of system/server required to install the application or perform the testing, specific s/w that needs to be installed on the system to get the application running or to connect to the database, connectivity related issues etc.
Risks and mitigation process :
This section should list all the possible risks that can arise during the testing and the mitigation plans that the QA team plans to implement in case a risk actually turns into reality.
Tools to be used :
This would list out the testing tools or utilities that are to be used in the project.
E.g. WinRunner, QTP, TestDirector, PCOM, etc.
Deliverables :
This section contains the various deliverables that are due to the client at various points in time, i.e. daily, weekly, at the start of the project, at the end of the project, etc. These could include test plans, test procedures, test matrices, status reports, test scripts, etc. Templates for all of these may also be attached.
Annexure :
This section contains the embedded documents, or links to documents, which have been or will be used in the course of testing, e.g. templates used for reports, test cases, etc. Reference documents can also be attached here.
Sign off :
This section contains the mutual agreement between the client and QA team with both leads/ managers signing off their agreement on the test plan.
7) Types of Testing
SOFTWARE TESTING TYPES:
Acceptance Testing:
Testing conducted to enable a user/customer to determine whether to accept a software product. Normally performed to validate that the software meets a set of agreed acceptance criteria.
Accessibility Testing:
Verifying that a product is accessible to people with disabilities (visual, hearing, cognitive, etc.).
Ad Hoc Testing :
Ad-hoc testing is the interactive testing process where developers invoke application units explicitly, and individually compare execution results to expected results.
Agile Testing :
Testing practice for projects using agile methodologies, treating development as the customer of testing and emphasizing a test-first design paradigm. See also Test Driven Development.
Alpha Testing: Early testing of a software product conducted by selected customers.
Automated Testing :
• Testing employing software tools which execute tests without manual intervention. Can be applied in GUI, performance, API, etc. testing.
• The use of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and other test control and test reporting functions.
Basis Path Testing:
A white box test case design technique that uses the algorithmic flow of the program to design tests.
Beta Testing:
Testing of a pre-release version of a software product conducted by customers.
Binary Portability Testing:
Testing an executable application for portability across system platforms and environments, usually for conformance to an ABI specification.
Black Box Testing:
Testing based on an analysis of the specification of a piece of software without reference to its internal workings. The goal is to test how well the component conforms to the published requirements for the component.
Bottom Up Testing:
An approach to integration testing where the lowest level components are tested first, then used to facilitate the testing of higher level components. The process is repeated until the component at the top of the hierarchy is tested.
Boundary Testing:
Tests which focus on the boundary or limit conditions of the software being tested. (Some of these tests are stress tests.)
Branch Testing:
Testing in which all branches in the program source code are tested at least once.
Breadth Testing:
A test suite that exercises the full functionality of a product but does not test features in detail.
Compatibility Testing:
Testing whether software is compatible with other elements of a system with which it should operate, e.g. browsers, Operating Systems, or hardware.
Concurrency Testing:
Multi-user testing geared towards determining the effects of accessing the same application code, module or database records. Identifies and measures the level of locking, deadlocking and use of single thread code and locking semaphores.
Conversion Testing:
Testing of programs or procedures used to convert data from existing systems for use in replacement systems.
Data Driven Testing :
Testing in which the action of a test case is parameterized by externally defined data values, maintained as a file or spreadsheet. A common technique in Automated Testing.
Dependency Testing :
Examines an application's requirements for pre-existing software, initial states and configuration in order to maintain proper functionality.
Depth Testing :
A test that exercises a feature of a product in full detail.
Dynamic Testing :
Testing software through executing it. See also Static Testing.
Endurance Testing :
Checks for memory leaks or other problems that may occur with prolonged execution.
End-to-End testing :
Testing a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.
Exhaustive Testing :
Testing which covers all combinations of input values and preconditions for an element of the software under test.
Gorilla Testing :
Testing one particular module or functionality heavily.
Gray Box Testing :
A combination of Black Box and White Box testing methodologies: testing a piece of software against its specification but using some knowledge of its internal workings.
Integration Testing :
Testing of combined parts of an application to determine if they function together correctly. Usually performed after unit and functional testing. This type of testing is especially relevant to client/server and distributed systems.
Installation Testing :
Confirms that the application under test installs and configures correctly on the supported platforms and environments, and that it is operational after installation.
Localization Testing :
This term refers to verifying that software has been correctly adapted for a specific locality (language, regional conventions, and so on).
Loop Testing :
A white box testing technique that exercises program loops.
Monkey Testing :
Testing a system or application on the fly, i.e. a few tests here and there to ensure the system or application does not crash.
Negative Testing :
Testing aimed at showing software does not work. Also known as "test to fail".
N+1 Testing :
A variation of Regression Testing. Testing conducted with multiple cycles in which errors found in test cycle N are resolved and the solution is retested in test cycle N+1. The cycles are typically repeated until the solution reaches a steady state and there are no errors.
Path Testing :
Testing in which all paths in the program source code are tested at least once.
Performance Testing :
Testing conducted to evaluate the compliance of a system or component with specified performance requirements. Often this is performed using an automated test tool to simulate a large number of users. Also known as "Load Testing".
Positive Testing :
Testing aimed at showing software works. Also known as "test to pass". See also Negative Testing.
Recovery Testing :
Confirms that the program recovers from expected or unexpected events without loss of data or functionality. Events can include shortage of disk space, unexpected loss of communication, or power out conditions.
Regression Testing :
Retesting a previously tested program following modification to ensure that faults have not been introduced or uncovered as a result of the changes made.
Sanity Testing :
Brief test of major functional elements of a piece of software to determine if it is basically operational.
Scalability Testing :
Performance testing focused on ensuring the application under test gracefully handles increases in work load.
Security Testing :
Testing which confirms that the program can restrict access to authorized personnel and that the authorized personnel can access the functions available to their security level.
Smoke Testing :
A quick-and-dirty test that the major functions of a piece of software work.
Soak Testing :
Running a system at high load for a prolonged period of time. For example, running several times more transactions in an entire day (or night) than would be expected in a busy day, to identify any performance problems that appear after a large number of transactions have been executed.
Static Testing :
Analysis of a program carried out without executing the program.
Storage Testing :
Testing that verifies the program under test stores data files in the correct directories and that it reserves sufficient space to prevent unexpected termination resulting from lack of space. This is external storage as opposed to internal storage.
Stress Testing :
Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements to determine the load under which it fails and how. Often this is performance testing using a very high level of simulated load.
Structural Testing :
Testing based on an analysis of internal workings and structure of a piece of software.
System Testing :
Testing that attempts to discover defects that are properties of the entire system rather than of its individual components.
Thread Testing:
A variation of top-down testing where the progressive integration of components follows the implementation of subsets of the requirements, as opposed to the integration of components by successively lower levels.
Top Down Testing:
An approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested.
Usability Testing:
Testing the ease with which users can learn and use a product.
User Acceptance Testing:
A formal product evaluation performed by a customer as a condition of purchase.
Unit Testing:
The testing done to show whether a unit satisfies its functional specification or its implemented structure matches the intended design structure.
Volume Testing:
Testing which confirms that any values that may become large over time can be accommodated by the program and will not cause the program to stop working or degrade its operation in any manner.
White Box Testing:
Testing based on an analysis of internal workings and structure of a piece of software. Includes techniques such as Branch Testing and Path Testing. Also known as Structural Testing and Glass Box Testing .
Workflow Testing:
Scripted end-to-end testing which duplicates specific workflows which are expected to be utilized by the end-user.
8) Auto Tools
BlackBox Auto Tools
Nowadays automated testing tools are used more often than ever before to ensure that applications are working properly prior to deployment. That's particularly important today, because more applications are written for use on the Web, the most public of venues. If a browser-based application crashes or performs improperly, it can cause more problems than a smaller, local application. But for many IT and quality assurance managers, the decision of which testing tools to use can cause confusion.
The first decision is which category of tool to use— one that tests specific units of code before the application is fully combined, one that tests how well the code is working as envisioned, or one that tests how well the application performs under stress. And once that decision is made, the team must wade through a variety of choices in each category to determine which tool best meets its needs.
Functional-Testing Tools
Automated Testing is automating the manual testing process currently in use. The real use and purpose of automated test tools is to automate regression testing . This means that you must have or must develop a database of detailed test cases that are repeatable , and this suite of tests is run every time there is a change to the application to ensure that the change does not produce unintended consequences.
At a functional level, they provide record/playback capabilities, which allow developers to record an existing application and modify scripts to meet changes in an upcoming release. Tools in this category include:
WinRunner provides a relatively simple way to design tests and build reusable scripts without extensive programming knowledge. WinRunner captures, verifies, and replays user interactions automatically, so you can identify defects and ensure that business processes work flawlessly upon deployment and remain reliable. WinRunner supports more than 30 environments, including Web, Java, and Visual Basic. It also provides solutions for leading ERP and Customer Relationship Management (CRM) applications.
Astra QuickTest: This functional-testing tool is built specifically to test Web-based applications. It helps ensure that objects, images, and text on Web pages function properly and can test multiple browsers. Astra QuickTest provides record and playback support for every ActiveX control in a Web browser and uses checkpoints to verify specific information during a test run.
SilkTest is specifically designed for regression and functionality testing. This tool tests both mainframe and client/server applications, and is a leading functional testing product for e-business applications. It also provides facilities for rapid test customization and automated infrastructure development.
Rational Suite TestStudio is a full suite of testing tools. Its functional-testing component, called Rational Robot uses automated regression as the first step in the functional testing process. The tool records and replays test scripts that recognize objects through point-and-click processes. The tool also tracks, reports, and charts information about the developer's quality assurance testing process and can view and edit test scripts during the recording process. It also enables the developer to use the same script to test an application in multiple platforms without modifications.
QACenter is a full suite of testing tools. One functional tool considered part of QACenter is QARun, which automates the creation and execution of test scripts, verifies tests, and analyzes test results. A second functional tool under the QACenter is TestPartner, which uses visual scripting and automatic wizards to help test applications based on Microsoft, Java, and Web-based technologies. TestPartner offers fast record and playback of application test scripts, and provides facilities for testers without much programming experience to create and execute tests.
1) Load Runner
LOAD RUNNER
Mercury LoadRunner is a performance testing tool for predicting system behavior and performance. LoadRunner emulates an environment in which thousands of users work with a client/server system concurrently. To do this, LoadRunner replaces human users with virtual users (Vusers). Using limited hardware resources, LoadRunner emulates hundreds or thousands of concurrent users to put the application through the rigors of real-life user loads.
Vugen:
VuGen, also known as the Vuser generator, enables you to develop Vuser scripts for a variety of application types and communication protocols. VuGen creates the script by recording the activity between the client and the server. It monitors the client end of the database and traces all the requests sent to, and received from, the database server.
Vusers:
LoadRunner replaces the human users with virtual users or Vusers. The load on the system can be increased by increasing the number of Vusers.
Load testing process:
Step 1: Planning the test.
A clearly defined test plan ensures that the test scenarios developed accomplish the load-testing objectives.
Step 2: Creating Vusers.
Vuser scripts contain the tasks performed by each Vuser, the tasks performed by Vusers as a whole, and the tasks measured as transactions.
Step 3: Define the scenario.
A scenario describes the events that occur during a testing session. It includes a list of machines, scripts, and Vusers that run during the scenario. We create scenarios using LoadRunner Controller. We can create manual scenarios as well as goal-oriented scenarios. In manual scenarios, we define the number of Vusers, the load generator machines, and percentage of Vusers to be assigned to each script. For web tests, we may create a goal-oriented scenario where we define the goal that our test has to achieve. LoadRunner automatically builds a scenario for us.
Step 4: Running the scenario.
We emulate load on the server by instructing multiple Vusers to perform tasks simultaneously. Before the testing, we set the scenario configuration and scheduling. We can run the entire scenario, Vuser groups, or individual Vusers.
Step 5: Monitoring the scenario.
We monitor scenario execution using the LoadRunner online runtime, transaction, system resource, Web resource, Web server resource, Web application server resource, database server resource, network delay, streaming media resource, firewall server resource, ERP server resource, and Java performance monitors.
Step 6: Analyzing test results.
During scenario execution, LoadRunner records the performance of the application under different loads. We use LoadRunner’s graphs and reports to analyze the application’s performance.
A scenario defines the events that occur during each testing session. The action that a Vuser performs during the scenario is described in Vuser Script. The Vuser scripts include functions that measure and record the performance of your application’s components.
The Controller reads a single scenario to coordinate several host machines, specifying the use of different run-time settings, running different Vuser scripts and storing results in different locations.
Vuser types:
The vuser types are divided into the following categories:
E-business: For Web (HTTP, HTML), COM/DCOM, Corba-Java, General-Java, Java (GUI), Jolt, LDAP, POP3, and FTP protocols.
Middleware: For Jolt, and Tuxedo(6.0, 6.3) protocols.
ERP: For SAP, Baan, Oracle NCA, and Peoplesoft (Tuxedo or Web) protocols.
Client/Server: For Informix, MSSQLServer, ODBC, Oracle (2-tier), Sybase Ctlib, Sybase Dblib, and Windows Sockets protocols.
Legacy: For APPC and Terminal Emulation (RTE).
General: For C template, Java template, and Windows Sockets type scripts.
Creating the Vuser Scripts
Step-1: Record a basic Vuser script
Step-2: Enhance/edit the Vuser script
Step-3: Configure Run-Time settings
Step-4: Run the Vuser script in stand-alone mode
Step-5: Incorporate the Vuser script into a LoadRunner scenario
The process starts with recording a basic Vuser script; LoadRunner provides a number of tools for recording Vuser scripts. The basic script is then enhanced by adding control-flow structures and by inserting transactions and rendezvous points into the script. Next, configure the run-time settings. The run-time settings include iteration, log, and timing information, and define how the Vuser will behave when it executes the Vuser script. To verify that the script runs correctly, run it in stand-alone mode. When the script runs correctly, incorporate it into a LoadRunner scenario.
Vuser Script Sections
Each Vuser script contains at least three sections: vuser_init, one or more Actions, and vuser_end. Before and during recording, you can select the section of the script into which VuGen will insert the recorded functions.
The following shows what to record into each section, and when each section is executed.
vuser_init: records a login to a server; executed when the Vuser is initialized (loaded).
Actions: records client activity; executed while the Vuser is in "Running" status.
vuser_end: records a logoff procedure; executed when the Vuser finishes or is stopped.
When you run multiple iterations of a Vuser script, only the Actions sections of the script are repeated—the vuser_init and vuser_end sections are not repeated.
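As a sketch, a minimal VuGen script skeleton might look as follows. This is not a standalone C program but an outline of the three sections; the bodies are placeholders, since a real script would contain the recorded protocol steps.

vuser_init()
{
    /* Recorded login steps would go here; runs once when the Vuser is loaded. */
    return 0;
}

Action()
{
    /* Recorded client activity goes here; repeated on every iteration. */
    return 0;
}

vuser_end()
{
    /* Recorded logoff steps would go here; runs once when the Vuser finishes or is stopped. */
    return 0;
}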
LoadRunner Controller
The Controller window has four tabs, corresponding to four views.
Script view – Displays a list of all the Vuser scripts that are assigned to Vusers.
Host view – Displays the list of machines that can execute Vuser script during the scenario.
Vuser view – Displays the Vuser assigned to the scenario
Online monitor view – Displays online monitor graph showing transactions and server resource information.
Vuser view is the default view and contains information about the Vusers in the scenario. The tab is divided into two sections.
* Summary information.
* Detailed information.
Summary information:
The Status field of each Vuser group displays the current state of each Vuser in the group. The following describes the possible Vuser states during a scenario.
DOWN: The Vuser is down.
PENDING: The Vuser is ready to be initialized and is waiting for an available load generator, or is transferring files to the load generator. The Vuser will run when the conditions set in its Scheduling attributes are met.
INITIALIZING: The Vuser is being initialized on the remote machine.
READY: The Vuser has already performed the init section of the script and is ready to run.
RUNNING: The Vuser is running; the Vuser script is being executed on a load generator.
RENDEZVOUS: The Vuser has arrived at the rendezvous and is waiting to be released by LoadRunner.
DONE.PASSED: The Vuser has finished running; the script passed.
DONE.FAILED: The Vuser has finished running; the script failed.
ERROR: A problem occurred with the Vuser. Check the Status field on the Vuser dialog box or the output window for a complete explanation of the error.
GRADUAL EXITING: The Vuser is completing the iteration or action it is running (as defined in Tools > Options > Run-Time Settings) before exiting.
EXITING: The Vuser has finished running or has been stopped, and is now exiting.
STOPPED: The Vuser stopped when the Stop command was invoked.
Detailed information:
This section provides the following information.
* ID: The Vuser ID
* Status : The current Vuser status
* Script: The script being executed by the Vuser.
* Host : The Vuser host.
* Elapsed: The time that has elapsed since the Vuser began executing the script.
Transactions:
Transactions are inserted into a Web Vuser script to enable the Controller to measure the performance of the Web server under various load conditions. Each transaction measures the time that it takes for the server to respond to one or more tasks submitted by Vusers. LoadRunner allows you to create transactions to measure simple tasks, such as accessing a URL, or complex processes, such as submitting several queries and waiting for a response.
To define a transaction, just insert a Start Transaction and End Transaction icon into the Vuser script.
During a scenario execution, the Controller measures the time it takes to perform each transaction. After a scenario run, LoadRunner’s graphs and reports can be used to analyze the server’s performance.
To mark the start of a transaction while recording:
1. Click the Start Transaction button on the VuGen toolbar. The Start Transaction dialog box opens.
2. Type a transaction name in the Transaction Name box.
3. Click OK to accept the transaction name. VuGen inserts an lr_start_transaction statement in the Vuser script.
Example:
lr_start_transaction("transample");
To mark the end of a transaction while recording:
1. Click the End Transaction button on the VuGen toolbar. The End Transaction dialog box opens.
2. Click the arrow in the Transaction Name box to display a list of open transactions. Select the transaction to close.
3. Select the transaction status from the Transaction Status list. You can manually set the status of the transaction, or you can allow LoadRunner to detect it automatically.
* To manually set the status, you perform a manual check within the code of your script, evaluating the return code of a function. For the "succeed" return code, set the status to LR_PASS. For the "fail" return code, set the status to LR_FAIL.
* To instruct LoadRunner to automatically detect the status, specify LR_AUTO. LoadRunner returns the detected status to the Controller.
4. Click OK to accept the transaction name and status. VuGen inserts an lr_end_transaction statement in the Vuser script.
Rendezvous Points
A rendezvous point creates intense user load on the server and enables LoadRunner to measure server performance under load. Suppose there is a need to measure how a web-based banking system performs when ten Vusers simultaneously check account information. In order to emulate the required user load on the server, all the Vusers are instructed to check account information at exactly the same time.
To ensure that multiple Vusers act simultaneously, create a rendezvous point. When a Vuser arrives at a rendezvous point, it is held there by the Controller. The Controller releases the Vusers from the rendezvous either when the required number of Vusers arrives, or when a specified amount of time has passed.
To Insert Rendezvous Point
1. While recording a Vuser script, click the Rendezvous button on the recording toolbar. The Rendezvous Dialog Box opens
2. Type a name for the rendezvous point in the rendezvous name box.
3. Click OK to accept the rendezvous name. VuGen inserts an lr_rendezvous statement into the Vuser script.
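As a sketch of the banking example above, the recorded Actions section might combine the rendezvous and transaction statements along these lines; the name "check_balance" and the omitted request steps are placeholders, not taken from a real script.

Action()
{
    /* All Vusers wait here until the Controller releases them together. */
    lr_rendezvous("check_balance");

    /* Measure how long the server takes to answer the simultaneous requests. */
    lr_start_transaction("check_balance");

    /* ... the recorded account-information request steps would appear here ... */

    /* LR_AUTO lets LoadRunner detect the pass/fail status automatically;
       LR_PASS or LR_FAIL could instead be set manually from a checked return code. */
    lr_end_transaction("check_balance", LR_AUTO);

    return 0;
}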
Using Rendezvous Point
Using the Controller, the level of server load can be influenced by selecting which of the rendezvous points will be active during the scenario and how many Vusers will take part in each rendezvous.
Setting the Rendezvous Attributes
The following rendezvous attributes can be set from the Rendezvous Information dialog box :
• Timeout
• Rendezvous Policy
• Enabling and Disabling Rendezvous
• Enabling and Disabling Vusers
In addition, the dialog box displays general information about the rendezvous point: which script is associated with the rendezvous, and its release history.
Setting Timeout Behavior Attribute
The timeout determines the maximum time (in seconds) that LoadRunner waits for each Vuser to arrive at a rendezvous. After each Vuser arrives at the rendezvous, LoadRunner waits up to timeout seconds for the next Vuser to arrive. If the next Vuser does not arrive within the timeout period, the Controller releases all the Vusers from the rendezvous. Each time a new Vuser arrives, the timer is reset to zero. The default timeout is thirty seconds.
Setting the Release Policy Attribute
The policy attribute determines how the Controller releases Vusers from the rendezvous. For each rendezvous the following Policies can be set:
All Arrived: Instructs the Controller to release the Vusers from the rendezvous only when all the Vusers included in the rendezvous arrive. All the Vusers are released simultaneously. This is the default policy.
Quota: Sets the number of Vusers that must arrive at a rendezvous point before the Controller releases the Vusers. For instance, suppose that you are testing a scenario of fifty Vusers and that you want a particular operation to be executed simultaneously by ten Vusers. You can designate the entire scenario as participants in the rendezvous and set a quota of ten Vusers. Every time ten Vusers arrive at the rendezvous, they are released.
Disabling and Enabling Rendezvous Points
It is possible to temporarily disable a rendezvous and exclude it from the scenario. By disabling and enabling a rendezvous, you influence the level of server load. The Disable and Enable buttons on the Rendezvous Information dialog box are used to change the status of a rendezvous.
Disabling and Enabling Vusers at Rendezvous Points
In addition to disabling a rendezvous for all Vusers in a scenario, LoadRunner lets you disable it for specific Vusers. By disabling Vusers at a rendezvous, they are temporarily excluded from participating in the rendezvous. Enabling disabled Vusers returns them to the rendezvous. Use the Disable and Enable commands to specify which Vusers will take part in a rendezvous.
Monitoring a Scenario
We can monitor scenario execution using the LoadRunner online transaction and server resource monitors.
About online monitoring
LoadRunner provides the following online monitors:
Server resource
Vuser Status
Transaction
Web
The server resource monitor gauges the system resources used during a scenario. It is capable of measuring NT, UNIX, TUXEDO and SNMP resources gathered by custom monitors.
Setting monitor options
Before running a scenario, set the appropriate monitor options. The options available are:
* Sample rate
The sample rate is the period of time (in seconds) between consecutive samples.
* Error handling
Indicates how LoadRunner should behave when a monitor error occurs.
* Debug
* Transaction monitor
LoadRunner Analysis:
After running a scenario, the graphs and reports can be used to analyze the performance of the client/server system.
The results can be viewed in several ways.
* Vuser output file
* Controller output window
* Analysis graphs and reports
* Spreadsheet and raw data
9) Download links
WinRunner
Mercury WinRunner is an automated regression testing tool that allows a user to record and play back test scripts. Mercury WinRunner captures, verifies, and replays user interactions automatically. WinRunner facilitates easy test creation by recording your work on the application. As you point and click GUI objects in the application, WinRunner generates a test script in the C-like Test Script Language (TSL).
WinRunner provides checkpoints for text, GUI, bitmaps, URL links and the database, allowing testers to compare expected and actual outcomes and identify potential problems with numerous GUI objects and their functionality.
Download WinRunner 7.6 here:
http://www.mercury.com/us/products/quality-center/functional-testing/winrunner
LoadRunner
LoadRunner enables you to test your system under controlled and peak load conditions. To generate load, LoadRunner runs thousands of Virtual Users that are distributed over a network. Using a minimum of hardware resources, these Virtual Users provide consistent, repeatable, and measurable load to exercise your application just as real users would.
LoadRunner’s in-depth reports and graphs provide the information that you need to evaluate the performance of your application.
The advantage of LoadRunner is that it reduces personnel requirements by replacing human users with virtual users or Vusers. These Vusers emulate the behavior of real users, operating real applications.
LoadRunner automatically records the performance of the application during a test. Because LoadRunner tests are fully automated, you can easily repeat them as often as you need.
http://downloads.mercury.com/cgi-bin/portal/download/index.jsp
Download web performance testing tool
http://www.adventnet.com/products/qengine/web-performance-testing.html
Download web application testing tool
http://www.adventnet.com/products/qengine/web-application-testing.html
Quick Test Professional
Mercury QuickTest Professional provides a solution for functional and regression test automation, addressing every major software application and environment.
QuickTest Professional provides an interactive, visual environment for test development. We can create a test script by simply pressing a Record button and using an application to perform a typical business process.
QuickTest Professional is significantly easier for a non-technical person to adapt to and to create working test cases with.
Download QTP
http://downloads.mercury.com/cgi-bin/portal/download/index.jsp
WhiteBox Testing
WhiteBox Testing
A software testing approach that examines the program structure and derives test data from the program logic. White-box testing strategies include designing tests such that every line of source code is executed at least once, or requiring every function to be individually tested.
Test coverage is an important component of white-box testing. The goal is to try to execute (that is, test) all lines in an application at least once.
Because white-box testing tools can individually or collectively instrument source lines, it is straightforward to determine which lines in a host program have or have not been executed without modifying source.
Synonyms for white box testing:
* Glass Box testing
* Structural testing
* Clear Box testing
* Open Box Testing
The purpose of white box testing :
Build quality throughout the life cycle of a software product or service.
Provide a complementary function to black box testing.
Perform complete coverage at the component level.
Improve quality by optimizing performance.
Code Coverage Analysis:
Basis Path Testing
A testing mechanism that derives a logical complexity measure of a procedural design and uses it as a guide for defining a basis set of execution paths. Test cases that exercise the basis set will execute every statement at least once.
Flow Graph Notation: A notation for representing control flow, similar to flow charts and UML activity diagrams.
Cyclomatic Complexity: The cyclomatic complexity gives a quantitative measure of the logical complexity. This value gives the number of independent paths in the basis set, and an upper bound for the number of tests required to ensure that each statement is executed at least once. An independent path is any path through the program that introduces at least one new set of processing statements or a new condition. Cyclomatic complexity therefore provides an upper bound on the number of tests required to guarantee coverage of all program statements.
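To make this concrete, here is a small C sketch; the function is invented purely for illustration. It contains three simple decisions, so its cyclomatic complexity is 3 + 1 = 4, and a basis set of four independent paths is enough to execute every statement at least once.

#include <stdio.h>

/* Small function used only to illustrate the metric. */
int classify(int values[], int n)
{
    int negatives = 0;

    for (int i = 0; i < n; i++) {      /* decision 1: loop condition */
        if (values[i] < 0)             /* decision 2: sign check     */
            negatives++;
    }

    if (negatives > n / 2)             /* decision 3: majority check */
        return -1;

    return negatives;
}

int main(void)
{
    int sample[] = { 3, -1, 4 };
    /* Basis-path test cases could be: n == 0, all positive, one negative,
       and mostly negative, giving the four independent paths. */
    printf("classify -> %d\n", classify(sample, 3));
    return 0;
}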
Control Structure testing:
Conditions Testing
Condition testing aims to exercise all logical conditions in a program module. Conditions may be defined as follows:
Relational expression: (E1 op E2), where E1 and E2 are arithmetic expressions and op is a relational operator.
Simple condition: a Boolean variable or relational expression, possibly preceded by a NOT operator.
Compound condition: composed of two or more simple conditions, Boolean operators and parentheses.
Boolean expression: a condition without relational expressions.
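As a small sketch (the function and values are invented for the example), condition testing of the compound condition below would choose inputs so that each simple condition takes both a true and a false outcome:

#include <stdio.h>

/* Compound condition built from two simple relational expressions. */
int eligible(int age, int score)
{
    if (age >= 18 && score > 60)   /* E1: age >= 18, E2: score > 60 */
        return 1;
    return 0;
}

int main(void)
{
    /* Condition coverage: each simple condition takes both outcomes.
       (20, 70) -> E1 true,  E2 true   (compound condition true)
       (20, 50) -> E1 true,  E2 false  (compound condition false)
       (15, 70) -> E1 false, E2 not evaluated because && short-circuits */
    printf("%d %d %d\n",
           eligible(20, 70), eligible(20, 50), eligible(15, 70));
    return 0;
}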
Data Flow Testing
Selects test paths according to the location of definitions and use of variables
Loop Testing
Loops are fundamental to many algorithms. Loops can be classified as simple, concatenated, nested, and unstructured.
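For a simple loop, a typical sketch is to exercise it with zero iterations, one iteration, a typical number, and the maximum; the function and the MAX_ITEMS limit below are invented for the illustration.

#include <stdio.h>

#define MAX_ITEMS 100

/* Simple loop: sums at most MAX_ITEMS values. */
int sum_items(const int items[], int n)
{
    int total = 0;
    for (int i = 0; i < n && i < MAX_ITEMS; i++)
        total += items[i];
    return total;
}

int main(void)
{
    int data[MAX_ITEMS] = { 0 };

    /* Classic simple-loop cases: skip the loop entirely, one pass,
       a typical count, one less than the maximum, and the maximum. */
    printf("%d %d %d %d %d\n",
           sum_items(data, 0),
           sum_items(data, 1),
           sum_items(data, 10),
           sum_items(data, MAX_ITEMS - 1),
           sum_items(data, MAX_ITEMS));
    return 0;
}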
Advantages of White Box Testing
* Forces test developer to reason carefully about implementation
* Approximates the partitioning done by execution equivalence
* Reveals errors in "hidden" code
* Beneficent side-effects
Disadvantages of White Box Testing
* Expensive
* Cases omitted in the code could be missed out.
Learning objectives / Levels of knowledge for ISEB Foundation Exam
Learning objectives / levels of knowledge
The following learning objectives are defined as applying to this syllabus. Each topic in the syllabus will be examined according to the learning objective for it.
Level 1: Remember (K1)
The candidate will recognise, remember and recall a term or concept.
Example
Can recognise the definition of "failure" as:
* "non-delivery of service to an end user or any other stakeholder" or
* "actual deviation of the component or system from its expected delivery, service or result".
Level 2: Understand (K2)
The candidate can select the reasons or explanations for statements related to the topic, and can summarise, compare, classify and give examples for the testing concept.
Examples
Can explain the reason why tests should be designed as early as possible:
* To find defects when they are cheaper to remove.
* To find the most important defects first.
Can explain the similarities and differences between integration and system testing:
* Similarities:
Testing more than one component, and can test non-functional aspects.
* Differences:
Integration testing concentrates on interfaces and interactions, and system testing concentrates on whole-system aspects, such as end to end processing.
Level 3: Apply (K3)
The candidate can select the correct application of a concept or technique and apply it to a given context.
Examples
* Can identify boundary values for valid and invalid partitions.
* Can select test cases from a given state transition diagram in order to cover all transactions.
Level 4: Analyse (K4)
The candidate can separate information related to a concept or technique into its constituent parts for better understanding, and can distinguish between facts and inferences.
Examples
* Can understand the various options available for risk identification.
* Can describe which portions of an incident report are factual and which are inferred from results.
Level 5: Synthesise (K5)
The candidate can identify and build patterns in facts and information related to a concept or technique, and can create new meaning or structure from parts of a concept.
Examples
* Can design a quality risk analysis process that includes both rigorous and informal elements.
* Can create a blended test strategy that uses a dynamic strategy to balance an analytical strategy.
* Can combine aspects of different review processes to form an effective process for their organisation.
Level 6: Evaluate (K6)
The candidate can judge the value of information and decide on its applicability in a given situation.
Examples
* Can determine the relative effectiveness and efficiency of different review processes or different testing techniques.
* Can determine the type of information that should be gathered for an incident report.