Testing plays a significant role in the present-day world, especially in the software industry. There are various stages in software development, namely analysis, design, coding and testing. Testing attains its significance through customer satisfaction, which in turn reflects on the company's reputation.
Bug Impact Levels
Minor impact
These are minor problems, such as failures at extreme boundary conditions that are unlikely to occur in normal use, or minor errors in layout or formatting. They do not impact use of the product in any substantive way.
Medium impact
This is a problem that (a) affects a more isolated piece of functionality, (b) occurs only at certain boundary conditions, (c) has a workaround, (d) occurs at only one or two customer sites, or (e) is very intermittent.
High impact
This should be used for only serious problems, affecting many sites, with no workaround. Frequent or reproducible crashes/core dumps/GPFs would fall in this category, as would major functionality not working.
Urgent impact
This should be reserved for only the most catastrophic of problems: data corruption, complete inability to use the product at almost any site, and so on. For released products, an urgent bug implies that shipping of the product should stop immediately until the problem is resolved.
Accessibility Testing: Verifying that a product is accessible to people with disabilities (for example, users who are deaf, blind, or cognitively impaired).
Ad Hoc Testing: A testing phase where the tester tries to break the system by randomly trying the system's functionality; it can include negative testing.
Agile Testing: Testing practice for projects using agile methodologies, treating development as the customer of testing and emphasizing a test-first design paradigm.
Application Binary Interface (ABI): A specification defining requirements for portability of applications in binary forms across different system platforms and environments.
Application Programming Interface (API): A formalized set of software calls and routines that can be referenced by an application program in order to access supporting system or network services.
Automated Software Quality (ASQ): The use of software tools, such as automated testing tools, to improve software quality.
Automated Testing: Testing that uses software tools to execute tests and compare actual outcomes with expected results, reducing or removing the need for manual intervention.
Basic Block: A sequence of one or more consecutive, executable statements containing no branches.
Basis Path Testing: A white box test case design technique that uses the algorithmic flow of the program to design tests.
Baseline: The point at which some deliverable produced during the software engineering process is put under formal change control.
Beta Testing: Testing of a pre-release version of a software product conducted by customers.
Binary Portability Testing: Testing an executable application for portability across system platforms and environments, usually for conformance to an ABI specification.
Black Box Testing: Testing based on an analysis of the specification of a piece of software without reference to its internal workings. The goal is to test how well the component conforms to the published requirements for the component.
Bottom Up Testing: An approach to integration testing where the lowest level components are tested first, then used to facilitate the testing of higher level components. The process is repeated until the component at the top of the hierarchy is tested.
Boundary Testing: Test which focus on the boundary or limit conditions of the software being tested.
Bug: A fault in a program which causes the program to perform in an unintended or unanticipated manner.
Boundary Value Analysis: BVA is similar to Equivalence Partitioning but focuses on "corner cases", values at or just outside the range defined by the specification. This means that if a function expects all values in the range of negative 100 to positive 1000, test inputs would include negative 101 and positive 1001.
Branch Testing: Testing in which all branches in the program source code are tested at least once.
Breadth Testing: A test suite that exercises the full functionality of a product but does not test features in detail.
CAST: Computer Aided Software Testing.
Capture/Replay Tool: A test tool that records test input as it is sent to the software under test. The input cases stored can then be used to reproduce the test at a later time. Most commonly applied to GUI test tools.
CMM: The Capability Maturity Model for Software (CMM or SW-CMM) is a model for judging the maturity of the software processes of an organization and for identifying the key practices that are required to increase the maturity of these processes.
Cause Effect Graph: A graphical representation of inputs and the associated outputs effects, which can be used to design test cases.
Code Complete: Phase of development where functionality is implemented in entirety; bug fixes are all that are left. All functions found in the Functional Specifications have been implemented.
Code Coverage: An analysis method that determines which parts of the software have been executed (covered) by the test case suite and which parts have not been executed and therefore may require additional attention.
Code Inspection: A formal testing technique where the programmer reviews source code with a group who ask questions analyzing the program logic, analyzing the code with respect to a checklist of historically common programming errors, and analyzing its compliance with coding standards.
Code Walkthrough: A formal testing technique where source code is traced by a group with a small set of test cases, while the state of program variables is manually monitored, to analyze the programmer's logic and assumptions.
Coding: The generation of source code.
Compatibility Testing: Testing whether a software is compatible with other elements of a system with which it should operate, e.g. browsers, Operating Systems, or hardware.
Component: A minimal software item for which a separate specification is available.
Concurrency Testing: Multi-user testing geared towards determining the effects of accessing the same application code, module or database records. Identifies and measures the level of locking, deadlocking and use of single-threaded code and locking semaphores.
Conformance Testing: The process of testing that an implementation conforms to the specification on which it is based. Usually applied to testing conformance to a formal standard.
Context Driven Testing: The context-driven school of software testing is a flavor of agile testing that continuously and creatively evaluates the testing opportunities in light of the potential information revealed and the value of that information to the organization right now.
Conversion Testing: Testing of programs or procedures used to convert data from existing systems for use in replacement systems.
Cyclomatic Complexity: A measure of the logical complexity of an algorithm, used in white-box testing.
Data Dictionary: A database that contains definitions of all data items defined during analysis.
Data Flow Diagram: A modeling notation that represents a functional decomposition of a system.
Data Driven Testing: Testing in which the action of a test case is parameterized by externally defined data values, maintained as a file or spreadsheet.
Debugging: The process of finding and removing the causes of software failures.
Defect: Nonconformance to requirements or functional / program specification
Dependency Testing: Examines an application's requirements for pre-existing software, initial states, and configuration in order to maintain proper functionality.
Depth Testing: A test that exercises a feature of a product in full detail.
Dynamic Testing: Testing software through executing it.
Emulator: A device, computer program, or system that accepts the same inputs and produces the same outputs as a given system.
Endurance Testing: Checks for memory leaks or other problems that may occur with prolonged execution.
End-to-End testing: Testing a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.
Equivalence Class: A portion of a component's input or output domains for which the component's behavior is assumed to be the same from the component's specification.
Equivalence Partitioning: A test case design technique for a component in which test cases are designed to execute representatives from equivalence classes.
Exhaustive Testing: Testing which covers all combinations of input values and preconditions for an element of the software under test.
Functional Decomposition: A technique used during planning, analysis, and design; creates a functional hierarchy for the software.
Functional Specification: A document that describes in detail the characteristics of the product with regard to its intended features.
Functional Testing: Testing the features and operational behavior of a product to ensure they correspond to its specifications. Testing that ignores the internal mechanism of a system or component and focuses solely on the outputs generated in response to selected inputs and execution conditions.
Gorilla Testing: Testing one particular module or piece of functionality heavily.
Gray Box Testing: A combination of Black Box and White Box testing methodologies: testing a piece of software against its specification but using some knowledge of its internal workings.
High Order Tests: Black-box tests conducted once the software has been integrated.
Independent Test Group (ITG): A group of people whose primary responsibility is software testing.
Inspection: A group review quality improvement process for written material. It consists of two aspects; product (document itself) improvement and process improvement (of both document production and inspection).
Integration Testing: Testing of combined parts of an application to determine if they function together correctly. Usually performed after unit and functional testing. This type of testing is especially relevant to client/server and distributed systems.
Installation Testing: Confirms that the application installs correctly under the supported platforms and configurations and works as expected after installation.
Localization Testing: Testing that verifies software has been correctly adapted for a specific locality, including language, formats, and regional conventions.
Loop Testing: A white box testing technique that exercises program loops.
Metric: A standard of measurement. Software metrics are the statistics describing the structure or content of a program. A metric should be a real objective measurement of something such as number of bugs per lines of code.
Monkey Testing: Testing a system or application on the fly, i.e. with just a few random tests here and there, to ensure that the system or application does not crash.
Negative Testing: Testing aimed at showing software does not work. Also known as "test to fail".
N+1 Testing: A variation of Regression Testing. Testing conducted with multiple cycles in which errors found in test cycle N are resolved and the solution is retested in test cycle N+1. The cycles are typically repeated until the solution reaches a steady state and there are no errors.
Path Testing: Testing in which all paths in the program source code are tested at least once.
Performance Testing: Testing conducted to evaluate the compliance of a system or component with specified performance requirements. Often this is performed using an automated test tool to simulate a large number of users. Also known as "Load Testing".
Positive Testing: Testing aimed at showing software works. Also known as "test to pass". See also Negative Testing.
Quality Assurance: All those planned or systematic actions necessary to provide adequate confidence that a product or service is of the type and quality needed and expected by the customer.
Quality Audit: A systematic and independent examination to determine whether quality activities and related results comply with planned arrangements and whether these arrangements are implemented effectively and are suitable to achieve objectives.
Quality Circle: A group of individuals with related interests that meet at regular intervals to consider problems or other matters related to the quality of outputs of a process and to the correction of problems or to the improvement of quality.
Quality Control: The operational techniques and the activities used to fulfill and verify requirements of quality.
Quality Management: That aspect of the overall management function that determines and implements the quality policy.
Quality Policy: The overall intentions and direction of an organization as regards quality as formally expressed by top management.
Quality System: The organizational structure, responsibilities, procedures, processes, and resources for implementing quality management.
Race Condition: A cause of concurrency problems. Multiple accesses to a shared resource, at least one of which is a write, with no mechanism used by either to moderate simultaneous access.
Ramp Testing: Continuously raising an input signal until the system breaks down.
Recovery Testing: Confirms that the program recovers from expected or unexpected events without loss of data or functionality. Events can include shortage of disk space, unexpected loss of communication, or power out conditions.
Regression Testing: Retesting a previously tested program following modification to ensure that faults have not been introduced or uncovered as a result of the changes made.
Release Candidate: A pre-release version, which contains the desired functionality of the final version, but which needs to be tested for bugs (which ideally should be removed before the final version is released).
Sanity Testing: Brief test of major functional elements of a piece of software to determine if it is basically operational.
Scalability Testing: Performance testing focused on ensuring the application under test gracefully handles increases in work load.
Security Testing: Testing which confirms that the program can restrict access to authorized personnel and that the authorized personnel can access the functions available to their security level.
Smoke Testing: A quick-and-dirty test that the major functions of a piece of software work. Originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch on fire.
Soak Testing: Running a system at high load for a prolonged period of time. For example, running several times more transactions in an entire day (or night) than would be expected in a busy day, to identify any performance problems that appear after a large number of transactions have been executed.
Software Requirements Specification: A deliverable that describes all data, functional and behavioral requirements, all constraints, and all validation requirements for software.
Software Testing: A set of activities conducted with the intent of finding errors in software.
Static Analysis: Analysis of a program carried out without executing the program.
Static Analyzer: A tool that carries out static analysis.
Static Testing: Analysis of a program carried out without executing the program.
Storage Testing: Testing that verifies the program under test stores data files in the correct directories and that it reserves sufficient space to prevent unexpected termination resulting from lack of space. This is external storage as opposed to internal storage.
Stress Testing: Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements to determine the load under which it fails and how. Is performance testing using a very high level of simulated load.
Structural Testing: Testing based on an analysis of the internal workings and structure of a piece of software.
System Testing: Testing that attempts to discover defects that are properties of the entire system rather than of its individual components.
Testability: The degree to which a system or component facilitates the establishment of test criteria and the performance of tests to determine whether those criteria have been met.
Testing: The process of exercising software with the intent of finding errors and verifying that it satisfies specified requirements.
Test Case: A set of inputs, execution preconditions, and expected results developed for a particular objective, such as exercising a particular program path or verifying compliance with a specific requirement.
Test Driver: A program or test tool used to execute tests. Also known as a Test Harness.
Test Environment: The hardware and software environment in which tests will be run, and any other software with which the software under test interacts when under test, including stubs and test drivers.
Test First Design: Test-first design is one of the mandatory practices of Extreme Programming (XP). It requires that programmers do not write any production code until they have first written a unit test.
Test Harness: A program or test tool used to execute tests. Also known as a Test Driver.
Test Plan: A document describing the scope, approach, resources, and schedule of intended testing activities. It identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning.
Test Procedure: A document providing detailed instructions for the execution of one or more test cases.
Test Script: Commonly used to refer to the instructions for a particular test that will be carried out by an automated test tool.
Test Specification: A document specifying the test approach for a software feature or combination or features and the inputs, predicted results and execution conditions for the associated tests.
Test Suite: A collection of tests used to validate the behavior of a product. The scope of a Test Suite varies from organization to organization. There may be several Test Suites for a particular product for example. In most cases however a Test Suite is a high level concept, grouping together hundreds or thousands of tests related by what they are intended to test.
Test Tools: Computer programs used in the testing of a system, a component of the system, or its documentation.
Top Down Testing: An approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested.
Total Quality Management: A company commitment to develop a process that achieves high quality product and customer satisfaction.
Traceability Matrix: A document showing the relationship between Test Requirements and Test Cases.
Usability Testing: Testing the ease with which users can learn and use a product.
Use Case: The specification of tests that are conducted from the end-user perspective. Use cases tend to focus on operating software as an end-user would conduct their day-to-day activities.
User Acceptance Testing: A formal product evaluation performed by a customer as a condition of purchase.
Unit Testing: Testing of individual software components.
Validation: The process of evaluating software at the end of the software development process to ensure compliance with software requirements. The techniques for validation are testing, inspection and reviewing.
Verification: The process of determining whether or not the products of a given phase of the software development cycle meet the implementation steps and can be traced to the incoming objectives established during the previous phase. The techniques for verification are testing, inspection and reviewing.
Volume Testing: Testing which confirms that any values that may become large over time (such as accumulated counts, logs, and data files), can be accommodated by the program and will not cause the program to stop working or degrade its operation in any manner.
Walkthrough: A review of requirements, designs or code characterized by the author of the material under review guiding the progression of the review.
White Box Testing: Testing based on an analysis of internal workings and structure of a piece of software. Includes techniques such as Branch Testing and Path Testing. Also known as Structural Testing and Glass Box Testing.
Workflow Testing: Scripted end-to-end testing which duplicates specific workflows which are expected to be utilized by the end-user.
Testing came into the limelight in the mid-1960s. IBM had spent a great deal of time and resources on product installation, sending staff out to implement systems, yet still encountered maintenance problems. Before releasing the IBM-360 to its customers, the company decided to test the product in all respects before release.
A meeting was held to decide who should lead the team, and the name chosen was Glenford J. Myers. He had a distinctive set of characteristics: he came to the office in informal wear despite the company's formal dress code, he thought differently from others, and he was the subject of various rumors about his after-hours antics. Myers agreed to lead the testing team on the condition that he could select people of his own choice, and so became the first test manager in software testing. He began to train the team and came forward with a set of principles.
1. Software testing is a creative and challenging task.
2. No one should test their own programs.
3. Some parts of a program have more scope for error than others.
4. There must be a difference between the psychology of testers and that of developers.
Software testing is a creative and challenging task
The intent of looking at a program is to find errors: if you don't find errors, your customers will find many of them. Creativity lies in varying your approach to exercising the code, with the purpose of finding errors in the application that have not yet been discovered. The challenge lies in tracing out the hidden errors before the application is released.
No one should test their own programs
Programming is the implementation of the design logic in a specified language on a specific platform. It is quite natural that one cannot easily spot one's own mistakes: to err is human, but to find one's own errors is quite difficult. A third-party team should do the testing, and this team should know the functionality of the application.
Some parts of a program have more scope for error than others
Programs are usually divided into modules, and the functionality modules are the main ones, apart from the input domain, error handling, user interface and so on. A program is largely successful if its basic functionality executes properly; this is the crucial part of programming and perhaps the most difficult part as well, so the scope for error is highest there. If this part is properly tackled, there is a high possibility of eliminating errors, and hence more effort has to be spent on it.
Difference in psychology between developer and tester
A developer thinks constructively: how to build better logic and reduce complexity in terms of time and memory. A tester's orientation is creative criticism: tracking down all the odd and negative possibilities where there could be a chance of error. This apparently destructive attitude is really constructive criticism aimed at improving the quality of the application.
TESTING
Testing means verification and validation of an Application or Product
Definition: Testing is the process of identifying defects, where a defect is any variance between actual and expected results.
TESTING means "quality control".
QUALITY CONTROL measures the quality of a product.
QUALITY ASSURANCE measures the quality of the processes used to create a quality product.
Verification: are we building the system right?
Validation: are we building the right system?
Application: Software prepared according to a specific customer's requirements is called an application; an application reflects the customer's choice.
For applications, alpha testing is done.
E.g. you go to a hotel and order an item of your choice, and it is made after you place the order.
Product: Software prepared according to the needs of the market in general is called a product.
For products, beta testing is done.
E.g. going to a sweet shop and buying sweets off the shelf is an example of a product.
The risk associated with a product is much higher.
An application that meets its requirements totally can be said to exhibit quality. Quality is not based on a subjective assessment but rather on a clearly demonstrable, and measurable, basis. Quality Assurance and Quality Control are not the same. Quality Control is a process directed at validating that a specific deliverable meets standards, is error free, and is the best deliverable that can be produced. It is a responsibility internal to the team. QA, on the other hand, is a review with a goal of improving the process as well as the deliverable. QA is often an external process. QA is an effective approach to producing a high quality product. One aspect is the process of objectively reviewing project deliverables and the processes that produce them (including testing), to identify defects, and then making recommendations for improvement based on the reviews. The end result is the assurance that the system and application is of high quality, and that the process is working. The achievement of quality goals is well within reach when organizational strategies are used in the testing process. From the client's perspective, an application's quality is high if it meets their expectations.
Development life cycle and Testing Life cycle
Analysis (BRS, SRS): verified by walkthroughs, inspections, review meetings, desk checking and a prototype
Design (high-level design, low-level design): verified by walkthroughs, inspections, review meetings and desk checking
Coding (programming): verified by white box testing
Testing: black box testing
FISH BONE DIAGRAM
Maintenance: test software changes
Software Testing Rules
1
2. Integrate the application development and testing life cycles. You'll get better results and you won't have to mediate between two camps.
3. Formalize a testing methodology; you'll test everything the same way and you'll get uniform results.
4. Develop a comprehensive test plan; it forms the basis for the testing methodology.
5. Use both static and dynamic testing.
6. Define your expected results.
7. Understand the business reason behind the application. You'll write a better application and better testing scripts.
8. Use multiple levels and types of testing (retesting, regression, systems, integration, stress and load).
9. Review and inspect the work; it will lower costs.
10. Don't let your programmers check their own work; they'll miss their own errors.
TESTING PROCESS MODELS
There are two popular testing process models
1. V Model
2. PET Model
V- MODEL
The V model is a model in which testing is done in parallel with development. The left side of the V reflects development activities, which provide input for the corresponding testing activities on the right. The V model was initially defined by the late Paul Rook in the late 1980s, and the V was included in the U.K.'s National Computing Centre publications in the 1990s with the aim of improving the efficiency and effectiveness of software development. It is accepted in Europe and the U.K. as a superior alternative to the waterfall model. The V shows the typical sequence of development activities on the left-hand (downhill) side and the corresponding sequence of test execution activities on the right-hand (uphill) side. In fact, the V model emerged in reaction to some waterfall models that showed testing as a single phase following the traditional development phases of requirements analysis, high-level design, detailed design and coding.
DEVELOPMENT (left side) ------------------ TESTING (right side)
Assessment of development plan ------------ Prepare test plan
Information gathering and analysis -------- Requirement phase testing
Design ------------------------------------ Design phase testing
Coding ------------------------------------ White box testing (program phase testing)
Install build ----------------------------- Functional testing (black box), testing management process, acceptance testing
Maintenance ------------------------------- Installation testing / port testing, test software changes
Test Efficiency
Defect Removal Efficiency (DRE)
DRE = A / (A+B)
Where A = defects found by tester
B = defects found by client
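As a rough worked example (the numbers are hypothetical, not taken from any project), if the testing team finds 90 defects and the client later finds 10, DRE = 90 / (90 + 10) = 0.90, i.e. 90% of the defects were removed before delivery. A minimal sketch of the calculation:

    # Hypothetical worked example of Defect Removal Efficiency (DRE).
    # A = defects found by the testing team, B = defects found by the client.
    def defect_removal_efficiency(a, b):
        return a / (a + b)

    a, b = 90, 10  # assumed figures, for illustration only
    print("DRE =", round(defect_removal_efficiency(a, b), 2))  # prints DRE = 0.9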
W MODEL
This testing model—the W-model—further clarifies the priority of the tasks and the dependence between the development and testing activities. Though as simple as the V-model, the W-model makes the importance of testing and the ordering of the individual testing activities clear. It also clarifies that testing and debugging are not the same thing.
PET MODEL (PROCESS EXPERTS TOOLS AND TECHNOLOGY)
This model was developed by quality analysts from HCL, Chennai; many companies such as Wipro, CTS and TCS use this model.
A separate team is used for testing. There are no separate analysis-level, design-level or coding-level testers; that testing is done by the development team, and only system testing is done by a separate team.
The process starts with the BRS (Business Requirement Specification), a non-technical document specified by the client. A business analyst turns it into the SRS, a technical document; this functional specification is used by both developers and testers, and is followed by high-level design, low-level design, coding and so on.
BRS/CRS/URS ----------------------------- Acceptance testing
SRS/Functional Specification -------------------- System testing
High Level Design -------------------------- Integration testing
Low level design -------------------------------- Unit testing
Coding
Acceptance testing is customer oriented, system testing is tester oriented and unit and integration testing is developer oriented.
CRS – Customer requirement specification, URS – User requirement specification
The client checks whether the product meets the requirements stated in the BRS at the time of acceptance testing. The tester's job is to analyze the functional requirements and write all possible test cases to ensure the overall quality of the product during system testing.
Coding is based on the low-level design (LLD), covering all submodules. After unit testing, the individual modules are combined and coupled, and integration testing is done at this stage.
Business logic → Functionality → Testers
Programming logic → Programmers
SYSTEM DEVELOPMENT LIFE CYCLE
System Development Life Cycle (SDLC) is the overall process of developing information systems through a multi-step process from investigation of initial requirements through analysis, design, coding/ implementation, testing, and maintenance. There are many different models and methodologies, but each generally consists of a series of defined steps.
Analysis → Design → Coding → Testing → Maintenance
The systems development life cycle (SDLC) is a conceptual model, used in project management that describes the stages involved in an information system development project, from an initial feasibility study through maintenance of the completed application. Various SDLC methodologies have been developed to guide the processes involved, including the waterfall model (which was the original SDLC method); rapid application development (RAD); joint application development (JAD); the fountain model; the spiral model; build and fix; and synchronize-and-stabilize. Frequently, several models are combined into some sort of hybrid methodology. Documentation is crucial regardless of the type of model chosen or devised for any application, and is usually done in parallel with the development process. Some methods work better for specific types of projects, but in the final analysis, the most important factor for the success of a project may be how closely the particular plan was followed.
In general, an SDLC methodology follows the following steps:
- The existing system is evaluated. Deficiencies are identified. This can be done by interviewing users of the system and consulting with support personnel.
- The new system requirements are defined. In particular, the deficiencies in the existing system must be addressed with specific proposals for improvement.
- The proposed system is designed. Plans are laid out concerning the physical construction, hardware, operating systems, programming, communications, and security issues.
- The new system is developed. The new components and programs must be obtained and installed. Users of the system must be trained in its use, and all aspects of performance must be tested. If necessary, adjustments must be made at this stage.
- The system is put into use. This can be done in various ways. The new system can be phased in, according to application or location, and the old system gradually replaced. In some cases, it may be more cost-effective to shut down the old system and implement the new system all at once.
- Once the new system is up and running for a while, it should be exhaustively evaluated. Maintenance must be kept up rigorously at all times. Users of the system should be kept up-to-date concerning the latest modifications and procedures.
A number of system development life cycle (SDLC) models have been created: waterfall, fountain, spiral, build and fix, rapid prototyping, incremental, and synchronize and stabilize.
The oldest of these, and the best known, is the waterfall: a sequence of stages in which the output of each stage becomes the input for the next. These stages can be characterized and divided up in different ways, including the following:
System Development Life Cycle Model (Waterfall model)
This is also known as Classic Life Cycle Model (or) Linear Sequential Model (or) Waterfall Method. This has the following activities.
1. System/Information Engineering and Modeling
2. Software Requirements Analysis
3. Design
4. Code Generation
5. Testing
6. Maintenance
System/Information Engineering and Modeling
As software is always part of a larger system (or business), work begins by establishing requirements for all system elements and then allocating some subset of these requirements to software. This system view is essential when software must interface with other elements such as hardware, people and other resources. The system is the basic and very critical requirement for the existence of software in any entity, so if the system is not in place, it should be engineered and put in place. In some cases, to extract the maximum output, the system should be re-engineered and spiced up. Once the ideal system is engineered or tuned up, the development team studies the software requirements for the system.
Software Requirements Analysis
This phase translates project goals into the defined functions and operation of the intended application, analyzes end-user information needs and focuses on customer needs; it is also known as the feasibility study. In this phase, the development team visits the customer and studies their system. They investigate the need for possible software automation in the given system. By the end of the feasibility study, the team furnishes a document that holds the specific recommendations for the candidate system. It also includes the personnel assignments, costs, project schedule and target dates. The requirements gathering process is then intensified and focused specifically on software. To understand the nature of the programs to be built, the system analyst must understand the information domain for the software, as well as the required function, behavior, performance and interfacing. The essential purpose of this phase is to define the problem that needs to be solved.
Design
In this phase, the software's overall structure and its nuances are defined. In terms of the client/server technology, the number of tiers needed for the package architecture, the database design, the data structure design etc are all defined in this phase. Analysis and Design are very crucial in the whole development cycle. Any glitch in the design phase could be very expensive to solve in the later stage of the software development. Much care is taken during this phase. The logical system of the product is developed in this phase.
Code Generation
The design must be translated into a machine-readable form. The code generation step performs this task. If the design has been performed in a detailed manner, code generation can be accomplished without much complication. Programming tools like compilers, interpreters, and debuggers are used to generate the code. Different high-level programming languages like C, C++, Pascal, and Java are used for coding. The right programming language is chosen with respect to the type of application.
Testing
Once the code is generated, program testing begins. Different testing methodologies are available to unravel the bugs that were committed during the previous phases. Different testing tools and methodologies are already available, and some companies build their own testing tools that are tailor-made for their own development operations.
Maintenance
Software will definitely undergo change once it is delivered to the customer. There are many reasons for change. Change could happen because of some unexpected input values into the system. In addition, changes in the system could directly affect the software's operation. The software should be developed to accommodate changes that could occur during the post-implementation period. Changes, updates, corrections, additions, and moves to a different computing environment are all done during maintenance.
Prototyping Model
This is a cyclic version of the linear model. In this model, once the requirement analysis is done and the design for a prototype is made, the development process gets started. Once the prototype is created, it is given to the customer for evaluation. The customer tests the package and gives his/her feed back to the developer who refines the product according to the customer's exact expectation. After a finite number of iterations, the final software package is given to the customer. In this methodology, the software is evolved as a result of periodic shuttling of information between the customer and developer. This is the most popular development model in the contemporary IT industry. Most of the successful software products have been developed using this model - as it is very difficult (even for a whiz kid!) to comprehend all the requirements of a customer in one shot. There are many variations of this model skewed with respect to the project management styles of the companies. New versions of software product evolve as a result of prototyping.
Rapid Application Development (RAD) Model
RAD is a linear sequential software development process that emphasizes an extremely short development cycle. In rapid prototyping, also called the rapid application development model, the initial emphasis is on creating a prototype that looks and acts like the desired product in order to test its usefulness. The prototype is an essential part of the requirements determination phase, and may be created using tools different from those used for the final product. Once the prototype is approved, it is discarded and the "real" software is written. The RAD model is a "high speed" adaptation of the linear sequential model in which rapid development is achieved by using a component-based construction approach. Used primarily for information systems applications, the RAD approach encompasses the following phases:
a. Business modeling
The information flow among business functions is modeled in a way that answers the following questions:
What information drives the business process?
What information is generated?
Who generates it?
Where does the information go?
Who processes it?
b. Data modeling
The information flow defined as part of the business modeling phase is refined into a set of data objects that are needed to support the business. The characteristics (called attributes) of each object are identified, and the relationships between these objects are defined.
c. Process modeling
The data objects defined in the data modeling phase are transformed to achieve the information flow necessary to implement a business function. Processing descriptions are created for adding, modifying, deleting, or retrieving a data object.
d. Application generation
RAD assumes the use of RAD tools like VB, VC++, Delphi, etc. rather than creating software using conventional third generation programming languages. RAD works to reuse existing program components (when possible) or create reusable components (when necessary). In all cases, automated tools are used to facilitate construction of the software.
e. Testing and turnover
Since the RAD process emphasizes reuse, many of the program components have already been tested. This minimizes the testing and development time.
Component Assembly Model
Object technologies provide the technical framework for a component-based process model for software engineering. The object oriented paradigm emphasizes the creation of classes that encapsulate both data and the algorithm that are used to manipulate the data. If properly designed and implemented, object oriented classes are reusable across different applications and computer based system architectures. Component Assembly Model leads to software reusability. The integration/assembly of the already existing software components accelerate the development process. Nowadays many component libraries are available on the Internet. If the right components are chosen, the integration aspect is made much simpler.
The waterfall model assumes that the only role for users is in specifying requirements, and that all requirements can be specified in advance. Unfortunately, requirements grow and change throughout the process and beyond, calling for considerable feedback and iterative consultation. Thus many other SDLC models have been developed.
The fountain model recognizes that although some activities can't start before others -- such as you need a design before you can start coding -- there's a considerable overlap of activities throughout the development cycle.
The spiral model emphasizes the need to go back and reiterate earlier stages a number of times as the project progresses. It's actually a series of short waterfall cycles, each producing an early prototype representing a part of the entire project. This approach helps demonstrate a proof of concept early in the cycle, and it more accurately reflects the disorderly, even chaotic evolution of technology.
Build and fix is the crudest of the methods: write some code, then keep modifying it until the customer is happy. Without planning, this is very open-ended and can be risky.
The incremental model divides the product into builds, where sections of the project are created and tested separately. This approach will likely find errors in user requirements quickly, since user feedback is solicited for each stage and because code is tested sooner after it's written.
Big Time, Real Time
The synchronize-and-stabilize method combines the advantages of the spiral model with technology for overseeing and managing source code. This method allows many teams to work efficiently in parallel. The approach was defined by David Yoffie of Harvard University and Michael Cusumano of MIT, who studied how Microsoft Corp. developed Internet Explorer and how Netscape Communications Corp. developed Communicator, finding common threads in the ways the two companies worked. For example, both companies did a nightly compilation (called a build) of the entire project, bringing together all the current components. They established release dates and expended considerable effort to stabilize the code before it was released. The companies did an alpha release for internal testing, one or more beta releases (usually feature-complete) for wider testing outside the company, and finally a release candidate leading to a gold master, which was released to manufacturing. At some point before each release, specifications would be frozen and the remaining time spent on fixing bugs.
Both Microsoft and Netscape managed millions of lines of code as specifications changed and evolved over time. Design reviews and strategy sessions were frequent, and everything was documented. Both companies built contingency time into their schedules, and when release deadlines got close, both chose to scale back product features rather than let milestone dates slip.
Conclusion
Different models have their own advantages and disadvantages, and in the commercial software development world a fusion of these methodologies is usually adopted. Timing is very crucial in software development: if a delay happens in the development phase, the market could be taken over by a competitor, but if a bug-filled product is launched quickly (sooner than the competitors'), it may damage the reputation of the company. So there has to be a tradeoff between development time and the quality of the product. Customers don't expect a bug-free product, but they do expect a user-friendly one; that is what results in Customer Ecstasy.
SOFTWARE DEVELOPMENT LIFE CYCLE
Software QA involves monitoring and improving the entire software development process, making use of standards and ensuring that agreed procedures are followed, so that problems are found early and, as far as possible, prevented.
Testing can be broadly classified into
1. Black box testing
2. White box testing
Black-box and white-box are test design methods.
Black-box test design treats the system as a "black box", so it does not explicitly use knowledge of the internal structure. Black-box test design is usually described as focusing on testing functional requirements. Black-box testing is also called behavioral, functional, opaque-box, and closed-box testing. One black box approach is to check the desired program functionality with tests that are randomly generated with a distribution corresponding to the expected usage of the program, and then derive a reliability estimate.
- Equivalence partitioning: divide the possible inputs into equivalence classes and run one test from each class (a small sketch follows this list)
- Boundary value analysis: look for test cases on the boundaries of the equivalence classes
- Exhaustive testing: run every possible input; this is infeasible in practice
- Functional testing: construct tests directly from the requirements document
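As a minimal sketch of equivalence partitioning (the month field and validator below are invented for illustration, not taken from this document), the input domain is split into classes and a single representative from each class is tested:

    # Equivalence partitioning sketch for a hypothetical 'month' input (valid range 1-12).
    def is_valid_month(m):
        return 1 <= m <= 12

    # One representative value per equivalence class, instead of every possible input.
    partitions = [
        ("valid (1-12)",       6,  True),
        ("invalid (below 1)",  0,  False),
        ("invalid (above 12)", 13, False),
    ]

    for name, representative, expected in partitions:
        actual = is_valid_month(representative)
        print(name, "-> input", representative, "gives", actual)
        assert actual == expected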
White-box test design allows one to peek inside the box, and it focuses specifically on using internal knowledge of the software to guide the selection of test data. White-box testing is also called structural, glass-box and clear-box testing. The intention of white box testing is to cover all executable statements at least once, exercising every statement in the code and ensuring that exceptions are handled. White box testing can be broadly classified into
1. Execution testing
- basis path testing
- loop coverage
- programming technique
2. Operations testing
Run on customer expected platforms
White box testing covers the following coverage measures (a small sketch follows this list):
- Adequacy based on control or data flow properties
- Statement coverage: percentage of statements executed during testing
- Branch coverage: percentage of decision alternatives exercised
- Condition coverage: percentage of decision constituents exercised
- Path coverage: percentage of program paths executed (usually infeasible to achieve completely)
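To make the statement and branch coverage measures concrete, here is a minimal sketch (the function and tests are invented for illustration): one test can execute every statement of a function yet still exercise only one outcome of a decision, which is why branch coverage is stricter than statement coverage.

    # Illustrative only: statement coverage versus branch coverage.
    def classify(amount):
        label = "normal"
        if amount > 1000:      # decision with two outcomes: True and False
            label = "large"
        return label

    # This single test executes every statement (100% statement coverage)
    # but exercises only the True outcome of the decision.
    assert classify(2000) == "large"

    # A second test exercises the False outcome as well, giving full
    # branch coverage of this small function.
    assert classify(500) == "normal"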
In practice, a mixture of black-box and white-box methods is used so that testing is not hindered by the limitations of any single approach. This is called "gray-box" or "translucent-box" test design.
UNIT TESTING
Unit testing is a white box testing technique. Unit tests are programs written to run in batches and test classes; each typically sends a class a fixed message and verifies that it returns the predicted answer. The Extreme Programming discipline leverages them to permit easy code changes: developers write tests for every class they produce, covering every aspect of the class that could conceivably not work. All the tests are combined into one large suite, and when developers go to release new code, they run all the unit tests, not just their own, on the integration machine. Unit tests depend on low-level design documents and focus on an object's behavior in its interactions with other objects.
Unit testing is the process of testing a program module by its developer. The developer does this by building a driver that calls the module under test with data sufficiently close to the actual data the module is likely to encounter in real use. Since the programmer knows the internals of the module under test, the module can be tested in such a way that all or most of the code is exercised. The purpose of unit testing is to establish the module's robustness and usability.
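A minimal sketch of such a developer-written unit test, acting as a driver for a single module (the discount function and its values are hypothetical, invented purely for illustration):

    import unittest

    # Hypothetical module under test: a simple discount calculation.
    def apply_discount(price, percent):
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return round(price * (1 - percent / 100.0), 2)

    class ApplyDiscountTest(unittest.TestCase):
        # The test class plays the role of the driver: it calls the module
        # with data close to what it would see in real use.
        def test_typical_value(self):
            self.assertEqual(apply_discount(200.0, 10), 180.0)

        def test_invalid_percent_is_rejected(self):
            with self.assertRaises(ValueError):
                apply_discount(200.0, 150)

    if __name__ == "__main__":
        unittest.main()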
INTEGRATION TESTING
Based on the high-level design documents, which specify the interactions between modules, the modules are coupled together by the development team. Integration testing is combination testing, i.e. one program activating another.
"The base state of one module is the start state of the other module."
If the main module is under construction and the submodules are already written, a driver is put in place; if a submodule is still under construction, a stub is used instead.
Stubs and drivers are temporary programs: a driver activates the connection to the submodules, while a stub sends control back to the main module.
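A minimal sketch of a stub and a driver, assuming a hypothetical main module that depends on a tax submodule which is not yet written; the stub stands in for the missing submodule (as in top-down integration), while the driver at the bottom plays the role of a missing caller (as in bottom-up integration):

    # Hypothetical example: the main module is ready, the tax submodule is not.

    def tax_stub(amount):
        # Stub: a temporary stand-in for the unfinished tax module; it simply
        # sends control (and a canned value) back to the main module.
        return 0.0

    def compute_invoice_total(amount, tax_fn=tax_stub):
        # Main module under integration; the dependency is passed in so the
        # stub can be swapped for the real tax module once it exists.
        return amount + tax_fn(amount)

    if __name__ == "__main__":
        # Driver: a throwaway program that activates the module under test.
        print(compute_invoice_total(100.0))  # prints 100.0 while the stub is in place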
Top down: test the main routine plus stubs, then replace a stub by code and integrate
Advantage: always have a complete system
Disadvantages: stub generation; meaningless output
Bottom up: test modules separately, then combine
Advantages: meaningful values generated
Disadvantages: requires harness, hides integration problems until late
Incremental: try to add one piece at a time so that you know what to concentrate on if a failure occurs
SYSTEM TESTING TECHNIQUES USED
USABILITY
1.User Interface testing
2. Manual Support testing
FUNCTIONALITY
1. Functionality testing
2. Error handling testing
3. Input domain testing
4. Recovery testing
5. Compatibility testing
6. Configuration testing
7. Intersystem testing
8. Installation testing
9. Parallel testing
10.Sanitation testing
PERFORMANCE
1. Scalability testing
2. Load testing
3. Stress testing
4. Volume testing
SECURITY
1. Authorization
2. Access Control
3. Encryption/Decryption
1. USER INTERFACE TESTING
- The first step of System testing
- Done to make user feel comfortable
- Ensure that, in a given application, label names and menu names are all understandable to the user and easy to operate
User interface testing concentrates on
1. Ease of use
Ease of use checks whether all the Microsoft six rules (or the rules followed by the company) have been implemented.
i. Controls must be well known to the user
E.g. "EOF" may not be understood by the user, so it should be expanded to "End of file" and stated clearly.
Well-known symbols can also be used, such as ">>" for Next and "<<" for Previous.
ii. Spell Checking
iii. System menu
iv. Controls must not overlap
v. Data displayed must be accurate
E.g. if an amount field simply shows 256, it is unclear whether that means $256 or Rs 256; specifying the currency symbol enhances clarity.
vi. Controls must be fully visible; a control that is cut off or hidden creates ambiguity
vii. Control labels must be in initcaps
2. Speed
The user should need fewer events (clicks and navigation steps) to complete a task.
E.g. in a web-based project that is used many times a day, instead of navigating Mail → Login → Inbox, we can design it so the user goes straight from Login → Inbox.
3. Look and feel (pleasantness)
This covers
- Colour
- Font
- Alignment
- Attractive screens, etc.
Some companies handle look and feel during development itself, through interaction between the developer and the tester.
2. MANUAL SUPPORT TESTING
- Manual support testing is done in the final days of the job.
- The help documents and the project are opened simultaneously; the project is operated while the help is read, and any mismatch between the two is reported to the development team.
- The testing team conducts this testing after finalisation of the master build, also called the golden build.
- After all other testing is over, the test engineer verifies the context sensitiveness of the help documents against the product, and then the product is released to the user.
- Help documents provide training to the customer.
E.g. when you purchase a TV, you are given a manual explaining how to operate it.
3. FUNCTIONALITY TESTING
Functionality testing involves following a set of procedures designed to ensure that localized software and on-line help operate in the same manner as the source version. We can either run test scripts originally designed for the source version, or we can develop new procedures reflecting our clients' specific needs.
- The most important testing of all is functionality testing
- If 100 days are available for testing, around 80 of them are concentrated on functionality testing
- Functionality testing is major part of black box testing
- The functionality is verified with respect to SRS
- Application functionality concentrates on customer requirement
- During this testing a test engineer executes the below subtests
a.) Object Properties Checking
b.) Error handling
c.) Input domain
d.) Calculation coverage
e.) Back end /database testing
E.g. perform insert, update, alter and delete operations from the front end and check whether they are reflected correctly in the back end. Back-end testing is easier using tools: operations are done from the front end, and a database checkpoint is used to see what alterations were made (the results can be viewed in spreadsheets). If 10.782 is entered in the front end but the back end holds 10.78, that is wrong: the value was trimmed, so the front end and the back end no longer match. (A small sketch follows this list.)
f.) URL testing and so on.
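A minimal sketch of the back-end check described in point e.) above, using an illustrative in-memory SQLite table and an invented front-end insert routine; the idea is simply to confirm that the value entered at the front end is exactly what landed in the database (catching trimming such as 10.782 becoming 10.78):

    import sqlite3

    # Illustrative front-end action: insert an amount entered by the user.
    def front_end_insert(conn, account, amount):
        conn.execute("INSERT INTO payments(account, amount) VALUES (?, ?)",
                     (account, amount))
        conn.commit()

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE payments(account TEXT, amount REAL)")

    front_end_insert(conn, "A-101", 10.782)

    # Back-end check: read the stored value and compare it with what was entered.
    stored = conn.execute("SELECT amount FROM payments WHERE account = ?",
                          ("A-101",)).fetchone()[0]
    assert stored == 10.782, "value was trimmed or altered: %r" % stored
    print("front end and back end agree:", stored)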
4. ERROR HANDLING TESTING
Error handling testing is part of functionality testing.
During this test, a test engineer concentrates on whether errors are handled with meaningful messages.
The intention of this testing is to find errors by performing negative navigation.
Here we check whether a proper message appears when incorrect, missing or empty values are supplied, and therefore it is also called negative testing.
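A minimal sketch of such negative tests, assuming a hypothetical age field that must be a whole number between 1 and 120; incorrect, missing and empty values are deliberately supplied and each must produce a meaningful message:

    # Hypothetical validator for an 'age' input field (1 to 120, whole numbers only).
    def validate_age(value):
        if value is None or value.strip() == "":
            return "Age is required"
        if not value.strip().isdigit():
            return "Age must be a whole number"
        if not 1 <= int(value) <= 120:
            return "Age must be between 1 and 120"
        return "OK"

    # Negative tests: every bad input should be rejected with a meaningful message.
    for bad_input in [None, "", "  ", "abc", "0", "200"]:
        message = validate_age(bad_input)
        assert message != "OK", "bad input was accepted: %r" % bad_input
        print(repr(bad_input), "->", message)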
5. INPUT DOMAIN TESTING
Input domain testing is part of functionality testing. During this testing, a test engineer concentrates on the input domain of each input object.
Boundary value analysis covers the size of the object
Equivalence class partition covers the type of object
Here we have two inputs user id and password.
Expected:
User id is alphanumeric only, 4 to 16 characters long.
Password is alphabets with 4 to 8 characters only.
Object 1 : user id
Boundary value analysis (4 to 16)
Range is 4 to 16, Minimum = 4, Maximum = 16
Minimum (4): Pass
Minimum + 1 (5): Pass
Minimum - 1 (3): Fail
Maximum (16): Pass
Maximum - 1 (15): Pass
Maximum + 1 (17): Fail
We then randomly check some other inputs such as 0, a three-digit number, negative numbers and decimals.
Equivalence Class Partitioning (Alphanumeric)
Valid   | A-Z, a-z, 0-9
Invalid | Blank, special characters, decimal
Object 2 : Password
Boundary Value Analysis (4 to 8)
Minimum    | 4 | Pass
Minimum+1  | 5 | Pass
Minimum-1  | 3 | Fail
Maximum    | 8 | Pass
Maximum-1  | 7 | Pass
Maximum+1  | 9 | Fail
Equivalence Class Partitioning (Alphabets)
Valid   | A-Z, a-z
Invalid | Blank, special characters, numeric, decimal
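The boundary values and partitions above can be generated mechanically. The sketch below does this for the assumed user id rules (alphanumeric, 4 to 16 characters); it only illustrates the technique and is not tied to any particular tool or project.

def boundary_values(minimum, maximum):
    # Classic six boundary checks: min, min+1, min-1, max, max-1, max+1.
    return {
        "min": minimum, "min+1": minimum + 1, "min-1": minimum - 1,
        "max": maximum, "max-1": maximum - 1, "max+1": maximum + 1,
    }

def user_id_is_valid(value):
    # Assumed rule: alphanumeric, 4 to 16 characters.
    return 4 <= len(value) <= 16 and value.isalnum()

# Boundary value analysis on the size of the user id.
for name, length in boundary_values(4, 16).items():
    candidate = "a" * length
    should_accept = 4 <= length <= 16          # expected result from the table above
    agrees = user_id_is_valid(candidate) == should_accept
    print(f"{name:6} len={length:2} expected={'pass' if should_accept else 'fail'} ok={agrees}")

# Equivalence class partitioning on the type of the user id.
print(user_id_is_valid("abc123"))                                    # valid class  -> True
print([user_id_is_valid(v) for v in ["", "ab cd", "ab@#", "10.5"]])  # invalid classes -> all False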
6. RECOVERY TESTING
Tests whether our application returns from an abnormal state to the normal state on its own.
Recovery can be done from backups, using recovery procedures to execute the programs.
E.g. power goes down, the router goes down due to an illegal operation, the server goes down.
NOTE: error handling deals with factors internal to the application, while recovery testing deals with accidental, external threats.
7. COMPATIBILITY TESTING
Tests whether our application runs on the customer's expected platforms or not.
Platforms mean operating systems such as Windows 98, 95, NT, 2000 etc. Take an application executing on Windows 98, change the platform and see whether it works on the other operating systems or not.
E.g. Java runs on all operating systems like UNIX, LINUX, DOS, Macintosh, Windows etc., whereas VB runs on Windows only.
Compatibility is of two types:
- Forward compatibility
- Backward compatibility
Forward compatibility is operating-system related.
Backward compatibility is application/project related.
Forward compatibility: e.g. a VB project on a UNIX operating system; UNIX does not support a GUI. There is no problem with forward compatibility because the developer knows the compatibility and it is mentioned in the SRS.
Backward compatibility: e.g. VB supports Windows NT. Here we check whether the build is technically supported or not and whether the project structure is good or not.
Backward compatibility is the degree to which a software update does not negatively
affect the user or client.
Example for forward compatibility
A system hangs while three tasks are executing. We press Ctrl+Alt+Del and the End Task window appears. Call the three tasks X, Y and Z. Click X and End Task: the deadlock is not removed. Click Y and End Task: the deadlock is not removed. When Z is ended the deadlock is removed, so Z may be the operating system process or the application process for which the deadlock occurred.
Example for backward compatibility
Problems during development: E.g. project is going on for four years. It is a three-tier design consisting of a Visual Basic client application running on Windows 95, a Windows NT middle tier developed in Visual C++ and Visual Basic, and an Oracle database for the back end. The project was initially developed under Visual Studio 5, and several minor updates were also released. For the next significant update, we migrated development to Visual Studio 6, and we have just released a minor patch under this environment.
Our first brush with backward compatibility problems came toward the end of our initial major release under Visual Studio 5. Our client requested a metric listing the number of lines of code in each software module we developed. One of our developers found a Visual Basic analysis program, so we ordered it, installed it, and found it quite helpful in creating a set of software metrics for our client. Afterward, however, we began experiencing intermittent failures on our test client. The developers would retest the software on their development machines, and all went well. Finally, someone would simply rebuild the application, and it would run without problems. After several iterations, we realized that software built on the machine with the analyzer program would not operate on our test client machine. Uninstalling the program did not resolve the problem, so, as we were at the end of the development cycle, we simply got a yellow Post-It note and labeled the machine "Do Not Use for Production Builds."
8. CONFIGURATION TESTING / HARDWARE TESTING
Tests whether our build co-exists with hardware devices of different technologies or not.
E.g. an application may support different operating systems but not a printer: previously a dot matrix printer was used, now a laser printer is used and the application does not support it. This is therefore also called hardware compatibility testing.
Environment & compatibility testing, also known as platform testing or configuration testing, verifies that an application functions the same, or in an appropriately similar manner, across all supported platforms or configurations.
For websites, environment & compatibility testing verifies that a website looks and functions the same across all supported Web browsers and browser versions / patch levels -- Netscape, Internet Explorer, AOL, and others. Other variables considered besides the Web browser, especially for multimedia and graphics-rich sites, include operating system, processor type, processor speed, installed RAM and video display & resolution settings.
Environment & compatibility testing for non-Web systems (client/server or standalone applications) verifies an application functions the same across all supported hardware and software configurations. The variables considered here usually include operating system, processor type and speed, memory - RAM and video display & resolution settings. Depending on the exact nature and architecture of the application, variables such as client database type, server database type and server hardware configuration might be included.
The typical environment & compatibility testing process is to:
- Determine what variables are relevant to the application or website and prioritize them according to risk.
- Identify the total number of configurations that could be tested (see the sketch after this list).
- Based on time, budget and risk considerations, determine how many configurations should and will be tested.
- Determine what application or website features and functions should be tested on each configuration. In most situations, a subset of the functionality is tested on each. In high-risk situations, a complete functional regression test is executed.
- Execute tests on each configuration, leveraging a test library of operating system and software configurations and automated configuration setup tools to keep setup time to a minimum and maximize testing time.
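As a rough sketch of the "identify the total number of configurations" step, the combinations below are built from an assumed set of browsers, operating systems and resolutions and then cut down to a prioritized subset; every variable value here is illustrative only.

from itertools import product

# Assumed environment variables for a web application.
browsers = ["Internet Explorer", "Netscape", "AOL"]
operating_systems = ["Windows 98", "Windows NT", "Windows 2000"]
resolutions = ["800x600", "1024x768"]

all_configs = list(product(browsers, operating_systems, resolutions))
print("total configurations:", len(all_configs))    # 3 * 3 * 2 = 18

# Risk-based selection: test every browser on the highest-risk OS and the
# default resolution only, and spot-check the remaining combinations.
high_risk = [(b, "Windows 98", "800x600") for b in browsers]
print("configurations actually tested:", len(high_risk))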
9. INTERSYSTEM TESTING / APPLICATION SOFTWARE COMPATIBILITY TESTING
Tests whether our application build properly co-exists with the other application software at the customer site or not, covering
- Data transmission
- Data retrieval
For an independent application there is no need for intersystem testing; intersystem testing is required when applications are clubbed together.
E.g. 1. Assume an e-seva project with four modules: income tax, water tax, electricity and telephone. Two applications are developed by one company and two by another company. For successful execution of e-seva these four modules must co-exist with each other.
E.g. 2. Take a web-based project which supports two browsers, Internet Explorer and Netscape Navigator; intersystem testing must be done between these two browsers.
10. INSTALLATION TESTING
Tests whether our application build is fully installed into the corresponding environment or not.
During this testing a test engineer concentrates on
1. Execution of the setup programs that activate the installation
2. Ease of use during installation
3. Occupied memory size after installation
(Installation testing: the build is installed from the developer's computer onto the tester's computer, which stands in for the general/customer system.)
During installation testing the occupied memory size becomes known. Some applications, like games, are easy to install; some applications require hardware engineers. While installing the application, the required system software is installed as well.
Problems faced in installation testing, e.g.: the customer expects 2 GB but our build is 2.5 GB, so the development team is informed to compress it. The build, device drivers and system software are then installed and compressed into 2 GB; during installation testing we have to check whether everything is properly zipped, properly installed and properly uninstalled.
11. PARALLEL TESTING
Parallel testing is done by a test engineer to estimate the competitiveness of two products by comparing their features. This is done by running both of them on a single computer simultaneously.
E.g. Rediff mail vs Yahoo mail: Login, Inbox, Sign out.
The time taken for each of these events is measured and compared.
12. SANITATION TESTING
This testing is used to find extra features/functionality in the application with respect to the SRS, i.e. extra-ware or garbage.
E.g. a Forgot Password feature that is not in the SRS is extra-ware; it need not be designed and delivered. If the client asks for it later, the client will pay the company for it.
13. PERFORMANCE TESTING
Execution of our application under predetermined levels of resources to estimate its performance is called performance testing.
Resources refer to RAM size, processor speed, cables etc.
A predetermined level of resources means the customer's expected configuration or environment ("performance estimation").
Our configuration has to be close to the customer environment and has to be created at our company as a simulation environment. This improves performance estimation and reduces the chance of installation failures when the change control board goes for port testing.
E.g. two statements, Select X and Select Y, executed individually take 5 seconds; if a combined statement Select X, Y executes in 4 seconds the performance has improved (a rough timing sketch follows). For performance testing, especially in web-based projects, tools like LoadRunner are used.
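A minimal timing sketch for the Select X / Select Y example, using an in-memory SQLite database as a stand-in for the real back end; the schema and data volume are assumptions made only for the illustration.

import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER, y INTEGER)")
conn.executemany("INSERT INTO t VALUES (?, ?)", [(i, i * 2) for i in range(100000)])

def timed(label, queries):
    # Run the given queries and report the elapsed wall-clock time.
    start = time.perf_counter()
    for q in queries:
        conn.execute(q).fetchall()
    print(label, round(time.perf_counter() - start, 4), "seconds")

# Two separate statements versus one combined statement.
timed("SELECT x; SELECT y:", ["SELECT x FROM t", "SELECT y FROM t"])
timed("SELECT x, y       :", ["SELECT x, y FROM t"])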
14. LOAD TESTING
Execution of our application under predetermined levels of resources and the customer-expected load to estimate performance is called load testing.
Load testing is subjecting a system to load. The two main reasons for doing so are software reliability testing and performance testing. The load is varied from the minimum to the maximum level defined by the user, so that the system can sustain it without running out of resources or suffering excessive, application-specific transaction delays. Load testing uses the highest expected transaction arrival rate of performance testing.
There is a practical problem in doing load testing manually. Suppose a tester wants to load test a website whose customer-expected load is 2000 users: the company developing the website would need 2000 systems, and all the users would have to log in at the same time, because once the site is on the world wide web different users may place requests at the same instant. Load testing is therefore difficult when done manually. Using a tool on a single system we can test for any number of users by creating virtual users. For example, LoadRunner has a rendezvous point where all the virtual users gather before firing, which ensures that the full load is applied, i.e. 2000 users clicking on the website and making a request at the same instant (multiple accesses at the same time). A small-scale sketch of the idea follows.
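The rendezvous-point idea can be imitated on a small scale with threads and a barrier: every "virtual user" waits until all of them are ready, and then all fire their request at the same instant. The target operation below is a stub, since the real site and the real tool are outside the scope of this sketch, and the user count is deliberately small.

import threading
import time

VIRTUAL_USERS = 50                            # stand-in for the expected 2000 users
barrier = threading.Barrier(VIRTUAL_USERS)    # the "rendezvous point"
results = []
lock = threading.Lock()

def place_request(user_id):
    barrier.wait()                            # all virtual users release together
    start = time.perf_counter()
    time.sleep(0.01)                          # stub for the real login/inbox request
    with lock:
        results.append((user_id, time.perf_counter() - start))

threads = [threading.Thread(target=place_request, args=(i,)) for i in range(VIRTUAL_USERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print("requests completed:", len(results))
print("slowest response  :", round(max(r[1] for r in results), 4), "seconds")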
15. STRESS TESTING
Execution of our application under predetermined levels of resources and beyond the maximum peak load, to find the point at which the system breaks.
E.g. if the customer asks us to test a load of 100 users, we test for 110, 120 and so on and determine the breaking point.
Stress testing is subjecting a system to an unreasonable load while denying it the resources (e.g., RAM, disc, MIPS, interrupts, etc.) needed to process that load. The idea is to stress a system to the breaking point in order to find bugs that will make that break potentially harmful. The system is not expected to process the overload without adequate resources, but to behave (e.g., fail) in a decent manner (e.g., not corrupting or losing data). Bugs and failure modes discovered under stress testing may or may not be repaired depending on the application, the failure mode, consequences, etc. The load (incoming transaction stream) in stress testing is often deliberately distorted so as to force the system into resource depletion.
16. VOLUME TESTING
Executing the application with a huge amount of resources and estimating the change in performance is called volume testing.
Threshold resources are increased, e.g. increasing RAM size, cable width or memory (at times more resources can also decrease performance). This testing is not usually done.
E.g. if the user has a P3 system, we also test on a P4 system.
E.g. MS Access has a maximum capacity of 2 GB only; beyond that it crashes.
17. SECURITY TESTING
Tests whether our application provides privacy for customer operations or not.
During this testing the test engineer concentrates on the login process (authorization of authorized users), access control, and encryption/decryption to prevent hacking.
Tester's duty: authorization and access control are checked during functionality testing.
Developer's duty: 1. Authorization
2. Access control
3. Encryption and decryption (developers require hacking knowledge)
4. Firewalls (levels of protection against unauthorized users)
E.g. open Rediff mail, use the inbox, compose etc., then log out. If the Back button is now clicked it should not go back to the inbox; it must ask for the login password again (see the sketch below).
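A tiny sketch of the access-control check behind the Back-button example: once the session is signed out, a request for the inbox must be refused and the user sent to the login page. The session handling here is a stub written only for illustration, not a real web framework.

# Stub session store, used only to illustrate the check.
sessions = {}

def login(user):
    sessions[user] = {"authenticated": True}

def logout(user):
    sessions.pop(user, None)

def open_inbox(user):
    # Simulates pressing Back and requesting the inbox page again.
    if sessions.get(user, {}).get("authenticated"):
        return "INBOX"
    return "LOGIN PAGE"                       # expected after sign-out

login("ramu")
assert open_inbox("ramu") == "INBOX"
logout("ramu")
assert open_inbox("ramu") == "LOGIN PAGE"     # Back button must not reopen the inbox
print("security check passed")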
TESTING TERMINOLOGY
1. MONKEY TESTING / GORILLA TESTING
Covering only the basic functionality of our application during testing is called monkey or chimpanzee testing.
It covers the important activities only. Many projects have failed due to monkey testing: if testing requires 2 months but for some reason only 10 days are available, monkey testing is done and only the main activities are checked; 95% of applications that failed did so due to monkey testing.
This results in
- lack of quality
- no proper standards
- improper structure
- very complex maintenance
E.g. in a web-based project, testing only inbox, login, mail and reply.
2. SANITY TESTING
Testing whether the build received is stable enough to start testing is called sanity testing, also known as tester acceptance testing.
Sanity testing is cursory testing; it is performed whenever cursory testing is sufficient to prove that the application is functioning according to specifications. This level of testing is a subset of regression testing. It includes a set of core tests of basic GUI functionality to demonstrate connectivity to the database, application servers, printers, etc.
Received build from development → sanity testing → system testing.
(Overall flow: unit testing → integration testing → sanity testing → system testing.)
After receiving the build, the preliminary testing done to inspect the functioning or stability of the build is sanity testing.
E.g. after purchasing an iron box, we first check whether power is coming or not, i.e. its basic stability is verified.
3. SMOKE TESTING
Estimating the possibility of finding defects while covering the basic functionality is called smoke testing.
Smoke testing is non-exhaustive software testing, ascertaining that the most crucial functions of a program work, without bothering with finer details.
E.g. we have not yet sent a mail, but we already know the mail will not go because there is a defect in opening it.
4. EXPLORATORY TESTING
Covering the functionality level by level during test execution is called exploratory testing.
Exploratory testing is test design and test execution at the same time. This is the opposite of scripted testing, where test procedures, whether manual or automated, are defined in advance and carried out precisely according to plan; exploratory tests are not.
(Test cases are executed against the build level by level: level 1 test cases first, then level 2, and so on.)
E.g. exploratory testing is like a chess game against a computer: you revise your plans after seeing your opponent's move, and all your plans can change after one unpredictable move by your opponent.
5. BEBUGGING / ERROR SEEDING
Deliberately inserting defects into a program to improve and measure the defect detection capacity of the test engineers is called bebugging.
The build released by the development team is practiced on by testers or programmers during the training period to improve their skill set; the defects are purposely kept by the developers to increase the efficiency of the testers.
6. MUTATION TESTING
Mutation means change. A white box tester performs these kinds of changes on the current program to estimate the completeness of testing. Mutation testing involves modifying actual statements of the program. Mutation analysis is a technique for detecting errors in a program and for determining the thoroughness with which the program has been tested. It measures test data adequacy, i.e. the ability of the data to ensure that certain errors are not present in the program under test. The method entails studying the behavior of a large collection of programs that have been systematically derived from the original program.
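A tiny hand-made illustration of the idea: the mutant below changes a single operator in the original function, and a test set that is thorough enough will "kill" the mutant by failing on it. Real mutation tools generate such variants automatically; the discount function here is invented only for the sketch.

def price_with_discount(amount):
    # Original: 10% discount for amounts over 100.
    return amount * 0.9 if amount > 100 else amount

def price_with_discount_mutant(amount):
    # Mutant: the comparison operator was changed from > to >=.
    return amount * 0.9 if amount >= 100 else amount

def test_data_passes(fn):
    # The boundary test at exactly 100 is what distinguishes the two versions.
    return fn(50) == 50 and fn(100) == 100 and fn(200) == 180

print("original passes the tests:", test_data_passes(price_with_discount))              # True
print("mutant killed by the tests:", not test_data_passes(price_with_discount_mutant))  # True -> data is adequate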
7. MANUAL TESTING
Conducting a test without the help of any software tool is called manual testing.
Manual testing covers everything from test analysis and test planning to test design, test cases and test execution.
8. AUTOMATED TESTING
Conducting a test with the help of software tools is called automation testing.
Automation testing is not a substitute for manual testing; manual testing is a must. Automation is a facility that increases speed and saves time.
Automated testing is done for impact and critical test cases.
Impact indicates repetition or frequency.
Criticality indicates the complexity of executing the case manually.
In an ideal world, software testing would be 100% automated. Usually about 60% manual testing and 40% automation are done; this is called selective automation.
Test automation alleviates the tedium of manual testing by automatically executing a battery of tests using an automated testing tool. The tool acts just as a user would, interacting with an application to input data and verify expected responses. An automated regression battery can be run unattended and overnight, freeing up testers to concentrate on testing new features and functionality.
Test automation can be particularly useful in regression testing. Regression testing involves executing a predefined battery of tests against successive builds of an application to verify that bugs are being fixed and features / functions that were working in the previous build haven't been broken. Regression testing is an essential part of testing, but is very repetitive and can become tedious when manually executed build after build after build.
Factors to select a tool
Select the right tool - for automated testing and regression tests of client/server and web applications, select the automated tool best suited to your current and future development efforts.
Reusable automation - An automated testing system that's flexible enough to adapt to changes in your application, not one that must be rewritten for each new build.
Rapid implementation and return on investment - Quickly roll out an automated testing solution for your application.
Avoid shelf ware - Many organizations spend thousands of dollars on automated tools without considering training issues, software release deadlines and other factors. Automation experts will ensure your tool investment is put to good use and will build automation skills in your organization through training and mentoring.
For small projects, the time needed to learn and implement them may not be worth it. For larger projects, or on-going long-term projects they can be valuable.
A common type of automated tool is the 'record/playback' type. For example, a tester could click through all combinations of menu choices, dialog box choices, buttons, etc. in an application GUI and have them 'recorded', with the results logged by the tool. The 'recording' is typically text based on a scripting language that is interpretable by the testing tool. If new buttons are added, or some underlying code in the application is changed, the application can then be retested by just 'playing back' the 'recorded' actions and comparing the logged results to check the effects of the changes (a toy sketch follows). The problem with such tools is that if there are continual changes to the system being tested, the 'recordings' may have to change so much that it becomes very time-consuming to keep the scripts up to date. Interpretation and analysis of the results (screens, data, logs, etc.) can also be a difficult task.
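The record/playback idea can be caricatured in a few lines: a "recording" is just a stored list of actions together with the responses that were observed, and playback re-executes those actions against the application and compares the results. The application under test here is a trivial stub; commercial tools record real GUI events into a script language instead.

# Stub "application under test", used only to illustrate playback.
def application(action, value):
    if action == "double":
        return value * 2
    if action == "negate":
        return -value
    raise ValueError(action)

# The "recording": actions plus the results observed on the previous build.
recording = [
    ("double", 4, 8),
    ("negate", 7, -7),
    ("double", 0, 0),
]

# Playback against the new build, logging any behaviour change.
for action, value, expected in recording:
    actual = application(action, value)
    status = "OK" if actual == expected else f"CHANGED (got {actual})"
    print(action, value, status)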
Automated tools can include:
Code analyzers - monitor code complexity, adherence to standards, etc.
Coverage analyzers - these tools check which parts of the code have been exercised by a test, and may be oriented to code statement coverage, condition coverage, path coverage, etc.
Memory analyzers - such as bounds-checkers and leak detectors.
Load/performance test tools - for testing client/server and web applications under various load levels.
Web test tools - to check that links are valid, HTML code usage is correct, client-side and server-side programs work, and a web site's interactions are secure.
Other tools - for test case management, documentation management, bug reporting, and configuration management.
Multiple stages of testing, done phase by phase, are together called formal testing:
Reviews → Unit testing → Integration testing → System testing → Acceptance testing → Testing during maintenance
9. INFORMAL TESTING / BIG BANG TESTING
A single stage of testing after completion of the entire coding is called big bang testing.
10. RETESTING
Re-executing our tests on the same application build with different input values is called retesting.
Retesting → same test → same build
Regression testing → same test → modified build
11. REGRESSION TESTING
Re-executing our tests on a modified form of the build is called regression testing.
Regression testing is a technique that detects spurious errors caused by software modifications or corrections. It is the selective retesting of a software system that has been modified, to ensure that any bugs have been fixed, that no previously working functions have failed as a result of the repairs, and that newly added features have not created problems with previous versions of the software. Also called verification testing, regression testing is initiated after a programmer has attempted to fix a recognized problem or has added source code that may have inadvertently introduced errors. It is a quality control measure to ensure that the newly modified code still complies with its specified requirements and that unmodified code has not been affected by the maintenance activity.
For retesting and regression testing, test automation is used extensively.
12. CONFORMANCE TESTING/COMPLIANCE TESTING
Conformance testing is the process of verifying whether a product meets the standard product specifications it was designed to meet. It identifies product bugs, and once those bugs are eliminated, conformance testing can verify that the fixes were successful and within the applicable standards. Conformance/compliance testing results prove to your customers that your product can co-exist in their complex environment.
Conformance testing can
- Reduce the possibility of interoperability problems
- Reduce the number of support calls and therefore your costs of product support
- Satisfy demands for an independent test of your product
- Increase sales by showing customers that your product has successfully passed testing conducted by an independent, unbiased, third-party testing lab
- Keep your product competitive with and even help it to surpass products offered by the competition
- Win trademark licenses and certification recognition for your product
13. ACCEPTANCE TESTING
Acceptance testing is often treated as a phase located between programming and implementation. Testing is an activity necessary in all phases of a development or maintenance project. Responsibility for testing at different points in the project has to be clearly assigned with adequate resources committed to the testing effort. Analysts and designers must work closely with users in the acceptance testing of a system to ensure a quality product.
14. ALPHA TESTING
Alpha test: informal, in-house, early
15. BETA TESTING
Beta test: formal, out-of-house, pre-release
TESTING HIERARCHY
Test Policy
Test Strategy
Test Methodology
Test Plan
Test case
Test Procedure
Test Script
Test Log
Defect Report
Need for automation
Initially, cent percent manual testing was done; then automation came into existence. 100% automation is not done, only selective automation: usually about 40% is automated, and only the impact and critical test cases.
Impact test cases are repeatable test cases where a lot of time can be saved by avoiding repetition.
Critical test cases are high-priority, crucial and important test cases which are tested again just before the release of the project.
Automation plays a very important role especially in retesting and regression testing.
Reasons for automation
1. Type of external interface
Interfaces can be categorized in two ways
(a) Character user interface
For character user interfaces only manual testing can be done.
E.g. C, C++, COBOL, UNIX and DOS
(b) Graphical user interface
Automation is suitable for graphics-based interfaces.
E.g. Java, VB, VC++, HTML, web based, SAP, Siebel, XML
2. Size of the interface
Fewer interfaces - no need for automation
More interfaces - automation is necessary
3. Number of expected releases
Expecting more releases - go for automation
Not expecting releases - no need for automation
4. Expected maturity of future releases
Expecting less maturity - automation is needed
Expecting more maturity - no need for automation
E.g. Microsoft Windows 95 requires automation.
The upcoming version Windows 98 → Windows 95 + extra features.
There is no need to conduct complete testing because
80% of the features are common;
only the 20% of features which are actually extra need to be tested.
E.g. if Windows XP is the upcoming version, then there is no need to automate Windows 95.
Note:
E.g. Java 1.2.2, i.e. X.Y.Z
X → Architectural design
Y → Enhancement / conceptual design
Z → Customisation (or) modification
Z is purely based on the testing team, involving
1. Testers' performance
2. Ability to fix bugs
5. Testing team effort
If the testing team has knowledge of automation → automation is done
If there is no knowledge of automation → train the team on automation and then go for automation
6. Support from senior management
As automation involves an economic factor, support has to be extended by senior management.
These are the factors considered when opting for automation.
Advantages of automation
1. A lot of time is saved
2. The quality and consistency of the testing are high
3. Effort is optimized
TEST POLICY
The test policy is a company-level document,
written by quality control and high-level management.
It defines milestones and goals.
E.g. it is like the BCCI for cricket.
Usually companies display their test policies in paragraph form.
In small companies the testing standard is 1 defect per 250 LOC.
In large companies like Microsoft the testing standard is 1 defect per 2500 LOC.
QAM (Quality Assessment Measurement)
- Project Level
- To ensure whether quality is reached or not
TMM (Test Management Measurement)
- Testers Level
- How many test cases are over
- How many test cases are pending
- How many test cases are going on (work in progress)
PCM (Process Capability Measurement)
- Process Level
- Where a process has failed and how to overcome it
- Monitoring old project failure and taking precautions in that area for the new project
- Defect Removal Efficiency
TEST STRATEGY
The test strategy covers all the tests required at project level; it covers the entire life cycle.
The need for testing in a particular phase is defined by the test strategy.
Test strategies are documented by the quality analyst.
The test strategy defines and concentrates on
1. Objectives (testing standards and goals)
2. Timelines (deadlines for delivery, release date)
3. Approach to testing (how to follow it)
The test strategy is prepared to reach the company's goals; it effectively defines the test process.
TRM – Test Responsibility Matrix
The mapping of the various phases of testing against the test factors is called the test responsibility matrix.
Test factors \ Phases of testing | Analysis | Design | Coding | Testing
Ease of use                      |          |        |        |
Access control                   |          |        |        |
Authorization                    |          |        |        |
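In a lightweight test-management script, a TRM can be held as a mapping from test factor to the phases where it applies; the entries below are placeholders, since the actual ticks in the matrix depend on the project.

# Illustrative TRM: which test factors are addressed in which phase (placeholder data).
PHASES = ["Analysis", "Design", "Coding", "Testing"]

trm = {
    "Ease of use":    {"Design", "Testing"},
    "Access control": {"Coding", "Testing"},
    "Authorization":  {"Coding", "Testing"},
}

# Print the matrix row by row.
for factor, phases in trm.items():
    row = ["x" if p in phases else "-" for p in PHASES]
    print(f"{factor:14}", " ".join(row))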
The test strategy consists of:
1. Scope and objective
The purpose of testing and its importance are emphasized.
2. Business issues
The cost and time budget,
e.g. how much for development and how much for testing.
It is usually decided by the quality analyst and the C.E.O. of the company.
In the U.S. a common ratio is
33% → testing
66% → development and maintenance
3. Testing approach
E.g. the Microsoft six rules; ease of use is implemented in the programs by the programmers (the interface).
4. Test environment / test bed
The quality analyst lists the required testing documents, such as the test plan, test cases, test script, test log and defect report.
5. Test automation
The need for automation in the current project is analyzed, considering factors like
- more versions
- more screens
- GUI
- maturity
- type of external interfaces
- test efficiency and so on
6. Test metrics
QAM, TMM and PCM (as defined above under Test Policy)
7. Roles and responsibilities
Communication between the test manager, test lead and testers.
Reporting levels differ from company to company.
8. Defect tracking
Specifies the severity of defects:
1. High
Not able to move on to the next piece of work (unable to continue),
e.g. a function is missing, the system hangs, input is not accepted.
2. Medium
A defect is there but tasks can still be done,
e.g. 1 to 20 should be accepted as input but only 1 to 10 is taken, a boundary value is exceeded, etc.
3. Low
User interface issues, alignment, etc.
9. Communication and status reporting
Communication between people and the hierarchy between roles.
It could be via mail, phone, fax, in person and so on;
it varies from company to company.
10. Risk and mitigation plan
A risk is an expected future failure; mitigation is planning how to avoid it.
The quality analyst predicts risks and plans how to avoid and recover from them.
E.g. 2 months of work has to be done in 1 month - what has to be done?
11. Change and configuration management
Updations (changes in version).
Enhancements (changes in configuration, based on changes in customer requirements).
12. Automation and testing tools
Test level and tool selection: based on the requirements of the project we go for automation, e.g.
WinRunner / Rational Robot / Silk - functionality
LoadRunner / Rational Manager - performance
TestDirector - management
13. Training plan
Who has to give training about the project?
There are two types:
1. Internal
2. External
E.g. internal training is given by the functional lead / developers of the project.
E.g. external training: for a banking project, call a banker (a domain expert) to give training.
Test Factors
1. Authorization
- Prevents unauthorized users; for security purposes, e.g. a password.
- Keeping a login for any type of software is a must.
2. Access control
- Authorization for specific users.
- Here authorized users are further categorized into end users and administrators.
- An end user can use only some services.
- An administrator can use all services.
3. Audit trail
- Maintaining metadata (data about data).
- It gives data such as connection time, connected data and so on.
- E.g. the Yahoo administrator wants to know how many people connected on a particular day; the details can be obtained.
4. Ease of use
- User friendliness.
- GUI-based technology.
- Companies providing a convenient solution for the user.
5. Ease of operation
- Whether the interface is usable during operation.
- For installation, uninstallation, dumping, maintenance.
- Easy operation of the software in all ways, e.g. copying the software from one disk to another and so on.
6. Portable
- Run on customer expected platforms
7. Performance
- Speed of processing to complete the tasks
8. File Integrity
- Internal back up creation
- Whether back ups are properly created or not
- Internal files are properly created or not
9. Reliability
- Recovery from abnormal situations like a crash.
- Whether the required backup is used or not.
- E.g. running a C program creates a .obj file and a .bak file (file integrity);
reliability depends on file integrity.
10. Correctness
- If time is short, we go for basic functionality testing, called correctness.
- Validation of functionality.
- Is the output obtained or not?
- Is the project working?
11. Coupling
- Intersystem testing.
- Co-existence with other software.
12. Continuity of processing
- Inter-process communication (process-to-process communication).
- A project is a combination of programs; all programs must synchronize with each
other to avoid hanging and other problems.
13. Service level
- The order or hierarchy of services in the application.
- E.g. reply, compose, login, sign out is the wrong order, as first we have to log in
and only then do the rest.
14. Methodology
- Following standards.
- Weekly reports, weekend reviews.
- Filling in sheets in the given format and submitting them.
15. Maintainability
- The project should remain serviceable to the customer in the long term: customer size,
how the customer runs the software, and the updations and enhancements made.
- E.g. after 1 year the software should not be outdated or fail to satisfy the customer;
if there is no scope for the project beyond that point, it should not be done.
- Development should consider future needs and build the project with those in mind.
Factors 1 to 13 → tester oriented
Factors 14 and 15 → management oriented
These factors are chosen by the quality analyst and mapped into the test responsibility matrix, which gives a clear idea to the testing team.
TEST METHODOLOGY
The test strategy is at company level;
the test methodology is at project level
(and is in turn derived from the test strategy).
Def: a test methodology is a refined form of the test strategy for the current project.
E.g. in one company (such as Infosys) the test strategy is the same for every project and each project derives its own test methodology from it; in another company (such as Wipro) the test strategies themselves differ from project to project.
To develop the methodology for the current project, the QA follows the approach below.
Step 1. Acquire the test strategy.
Step 2. Determine the type of project:
1. Traditional
The entire work from analysis to maintenance is done in the company.
2. Off the shelf / offshore
Outsourcing (doing work for another company which does not have the resources), also called body shopping; the company is involved in only some stages of the other company's project.
3. Maintenance
E.g. IBM mainframes have given a 15-year maintenance contract to our company;
reverse engineering and re-engineering are done.
PROJECT TYPE   | ANALYSIS | DESIGN | CODING | TESTING | MAINTENANCE
Traditional    | yes      | yes    | yes    | yes     | yes
Off the shelf  | yes      | yes    |        |         |
Maintenance    |          |        |        |         | yes
Step 3. Determine the type of application and the requirements.
Type of application means client/server, web based, ERP, embedded software.
E.g. a web-based application with no login needs no access control or authorization factor.
E.g. if it runs on Windows NT only, there is no need for the portability factor; in this way the rows of the TRM change.
Based on the requirements, the quality analyst decreases the number of rows in the TRM.
Step 4. Determine the scope of the application.
E.g. we are creating a shopping website without login and then releasing it.
After 15 days the website gets a good response and the customer asks us to add credit cards and other enhancements. All such things must be anticipated beforehand by the quality analyst, with a view to the future.
Step 5. Identify tactical risks.
Risks are hindrances to the work in progress, so risks have to be identified. Risk management is a vast area. Risks can be categorized into known risks and unknown risks, also called predictable and unpredictable risks.
Here we can also classify factors into testable and non-testable with respect to the environment.
E.g. 1. You are in the middle of the project and the server crashes - an unknown risk.
E.g. 2. You have only 25 systems and no tool but have to test for 10,000 users - a known risk.
Known risks should be predicted in advance and steps taken to handle them to the maximum level possible.
In E.g. 2, use LoadRunner to create a virtual environment and test for 10,000 users.
A well-known application affected by a tactical risk is the intermediate results system developed by CMC: due to improper load testing the application is very slow.
Step 6. Finalize the TRM for the project.
The final TRM for the project is prepared.
Step 7. Prepare the system test plan.
The overall test plan is prepared here.
Step 8. Prepare the module test plans.
Divide the work and give it to the test engineers.
IEEE Standard for Software Test Documentation
(ANSI/IEEE Standard 829-1983)
This is a summary of the ANSI/IEEE Standard 829-1983. It describes a test plan as:
“A document describing the scope, approach, resources, and schedule of intended testing activities. It identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning.”
This standard specifies the following test plan outline:
Test Plan Identifier:
§ A unique identifier
Introduction
§ Summary of the items and features to be tested
§ Need for and history of each item (optional)
§ References to related documents such as project authorization, project plan, QA plan, configuration management plan, relevant policies, relevant standards
§ References to lower level test plans
Test Items
§ Test items and their version
§ Characteristics of their transmittal media
§ References to related documents such as requirements specification, design specification, users guide, operations guide, installation guide
§ References to bug reports related to test items
§ Items which are specifically not going to be tested (optional)
Features to be Tested
§ All software features and combinations of features to be tested
§ References to test-design specifications associated with each feature and combination of features
Features Not to Be Tested
§ All features and significant combinations of features which will not be tested
§ The reasons these features won’t be tested
Approach
§ Overall approach to testing
§ For each major group of features or combination of features, specify the approach
§ Specify major activities, techniques, and tools which are to be used to test the groups
§ Specify a minimum degree of comprehensiveness required
§ Identify which techniques will be used to judge comprehensiveness
§ Specify any additional completion criteria
§ Specify techniques which are to be used to trace requirements
§ Identify significant constraints on testing, such as test-item availability, testing-resource availability, and deadline
Item Pass/Fail Criteria
§ Specify the criteria to be used to determine whether each test item has passed or failed testing
Suspension Criteria and Resumption Requirements
§ Specify criteria to be used to suspend the testing activity
§ Specify testing activities which must be redone when testing is resumed
Test Deliverables
§ Identify the deliverable documents: test plan, test design specifications, test case specifications, test procedure specifications, test item transmittal reports, test logs, test incident reports, test summary reports
§ Identify test input and output data
§ Identify test tools (optional)
Testing Tasks
§ Identify tasks necessary to prepare for and perform testing
§ Identify all task interdependencies
§ Identify any special skills required
Environmental Needs
§ Specify necessary and desired properties of the test environment: physical characteristics of the facilities including hardware, communications and system software, the mode of usage (i.e., stand-alone), and any other software or supplies needed
§ Specify the level of security required
§ Identify special test tools needed
§ Identify any other testing needs
§ Identify the source for all needs which are not currently available
Responsibilities
§ Identify groups responsible for managing, designing, preparing, executing, witnessing, checking and resolving
§ Identify groups responsible for providing the test items identified in the Test Items section
§ Identify groups responsible for providing the environmental needs identified in the Environmental Needs section
Staffing and Training Needs
§ Specify staffing needs by skill level
§ Identify training options for providing necessary skills
Schedule
§ Specify test milestones
§ Specify all item transmittal events
§ Estimate time required to do each testing task
§ Schedule all testing tasks and test milestones
§ For each testing resource, specify its periods of use
Risks and Contingencies
§ Identify the high-risk assumptions of the test plan
§ Specify contingency plans for each
Approvals
§ Specify the names and titles of all persons who must approve the plan
§ Provide space for signatures and dates
DEFECT REPORT CONTAINS
1. Report number
Unique number given to a bug.
2. Program / module being tested
The name of the program or module that is being tested.
3. Version & release number
The version of the product that you are testing.
4. Problem summary
A one-line data entry field, precise about what the problem is.
5. Report Type
Describes the type of problem found, for example, it could be software or hardware bug.
6. Severity
Normally, how you view the bug.
Various levels of severity: Low - Medium - High - Urgent
7. Environment
Environment in which the bug is found.
8. Detailed Description
Detailed description of the bug that is found
9. How to reproduce
Detailed description of how to reproduce the bug.
10. Reported by
The name of person who writes the report.
11. Assigned to developer
The name of the developer who is assigned to fix the bug.
12. Status
Open:
The status of the bug when it is entered.
Fixed / feedback:
The status of the bug when it is fixed.
Closed:
The status of the bug when the fix is verified.
(A bug can only be closed by a QA person; usually the problem is closed by the QA manager.)
Deferred:
The status of the bug when it is postponed.
User error:
The status of the bug when the user made an error.
Not a bug:
The status of the bug when it is not a bug.
13. Priority
Assigned by the project manager who asks the programmers to fix bugs in priority order.
14. Resolution
Defines the current status of the problem. There are four types of resolution, such as deferred.
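For teams that track defects in a lightweight script or a spreadsheet export, the fields listed above map naturally onto a small record type; the field names below simply mirror that list and are not tied to any particular bug-tracking tool.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class DefectReport:
    # Field names mirror the defect report contents described above.
    report_number: int
    module: str
    version: str
    summary: str
    report_type: str = "software"
    severity: str = "Medium"            # Low / Medium / High / Urgent
    environment: str = ""
    description: str = ""
    how_to_reproduce: str = ""
    reported_by: str = ""
    assigned_to: str = ""
    status: str = "Open"                # Open / Fixed / Closed / Deferred / ...
    priority: int = 3
    resolution: str = ""
    reported_on: date = field(default_factory=date.today)

bug = DefectReport(101, "Login", "1.2.2", "Password field accepts 3 characters",
                   severity="High", reported_by="tester1")
print(bug.report_number, bug.summary, bug.status)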
The test case specifications should be developed from the test plan and are the second phase of the test development life cycle. The test specification should explain "how" to implement the test cases described in the test plan.
TEST SPECIFICATION ITEMS
Each test specification should contain the following items:
1. Case No.: The test case number should be a three digit identifier of the following form: c.s.t, where: c- is the chapter number, s- is the section number, and t- is the test case number.
2. Title: is the title of the test.
3. Program Name: is the program name containing the test.
4. Author: is the person who wrote the test specification.
5. Date: is the date of the last revision to the test case.
6. Background: (Objectives, Assumptions, References, Success Criteria): Describes in words how to conduct the test.
7. Expected Errors: Describes any errors expected.
8. References: Lists the reference documentation used to design the specification.
9. Data: Describes the data flows between the Implementation Under Test (IUT) and the test engine.
10. Script: (Pseudo Code for Coding Tests): Pseudo code (or real code) used to conduct the test.
Example Test Specification
Test Specification
Case No. 7.6.3 Title: Invalid Sequence Number (TC)
Program Name: UTEP221 Author: B.C.G. Date: 07/06/2000
Background: (Objectives, Assumptions, References, Success Criteria)
Validate that the IUT will reject a normal flow PIU with a transmission header that has an invalid sequence number.
Expected Sense Code: $2001, Sequence Number Error
Reference - SNA Format and Protocols Appendix G/p. 380
Data: (Tx Data, Predicted Rx Data)
IUT
<-------- DATA FIS, OIC, DR1 SNF=20
<-------- DATA LIS, SNF=20
--------> -RSP $2001
Script: (Pseudo Code for Coding Tests)
SEND_PIU FIS, OIC, DR1, DRI SNF=20
SEND_PIU LIS, SNF=20
R_RSP $2001
TEST RESULTS ANALYSIS REPORT
The Test Results Analysis Report is an analysis of the results of running tests. The results analysis provides management and the development team with a read out of the product quality.
The following sections should be included in the results analysis:
- Management Summary
- Test Results Analysis
- Test Logs/Traces
BUG IMPACT
Low impact
This is for Minor problems, such as failures at extreme boundary conditions that are unlikely to occur in normal use, or minor errors in layout/formatting. These problems do not impact use of the product in any substantive way.
Medium impact
This is a problem that (a) Effects a more isolated piece of functionality (b) Occurs only at certain boundary conditions. (c) Has a work around (d) Occurs only at one or two customers or (e) Is very intermittent.
High impact
This should be used for only serious problems, affecting many sites, with no workaround. Frequent or reproducible crashes/core dumps/GPFs would fall in this category, as would major functionality not working.
Urgent impact
This should be reserved for only the most catastrophic of problems. Data corruption, complete inability to use the product at almost any site, etc. For released products, an urgent bug would imply that shipping of the product should stop immediately, until the problem is resolved
Problems encountered with test cases
1. Many test cases are used to test a complex software system. But most of the test cases are useless, because they duplicate the code test coverage result obtained already.
2. After testing, the testers know the total test result without knowing the test contribution of each test case and the relationship between test cases and code, so that after the code is changed, testers do not know what test cases should be used to re-test it.
3. Software engineers need to modify code to add new features or to fix bugs, but software systems are hard to change: a) when users want to change a module, they must remember in which file that module is written and in which directory the corresponding file is stored; b) changing one module often introduces inconsistencies in other modules.
4. There are three levels of code modification: branch level, class / function level, and system level. They should be handled in different ways efficiently. But in current software re-testing practices, they are handled in the same way inefficiently.
5. Existing play-back tools are blind in case of automation; they play-back almost everything, without knowing the efficiency of each test case and the efficiency of the combination of the test cases.
6. Software retesting after code modification is very time-consuming and expensive.
TESTING TOOLS
Tool: a tool is a piece of software or equipment that provides an easy environment in which to complete a task. A company manufactures a tool.
E.g. a screwdriver: to repair a chair, a screwdriver is needed to finish the task in an effective and simpler way.
Testing tool: a testing tool is used for testing an application.
It facilitates the tester and the developer, but not the client directly.
WINRUNNER 7.0
WinRunner is developed by Mercury Interactive and is a functionality testing tool.
- Released in 2002; the previous version was 6.0
- Developed by Mercury Interactive
- Functionality testing tool
- Runs on Windows
- Supports client/server and web technologies like VB, ActiveX, D2K, VC++, PowerBuilder, Delphi, HTML, XML, DHTML, JavaScript, Siebel
- WinRunner supports Windows only (95, 98, 2000, XP, NT)
- Records business operations in TSL (Test Script Language), which is like the C language
Navigation
Program Files → WinRunner → WinRunner
WinRunner testing process
i. Learning
ii. Recording
iii. Edit script
iv. Run script
v. Analyze results
1. Learning
Recognition of the objects and windows in our build by the testing tool is called learning; it introduces the tool to the project.
2. Recording
A test engineer records the business process in WinRunner.
There are two recording modes in WinRunner:
1. Context Sensitive mode
2. Analog mode
To switch from Context Sensitive mode to Analog mode or vice versa, press F2.
There are four types of checkpoints in WinRunner:
1. GUI checkpoint
2. Bitmap checkpoint
3. Database checkpoint
4. Text checkpoint
Acceptance Testing: Testing conducted to enable a user/customer to determine whether to accept a software product. Performed to validate the software meets a set of agreed acceptance criteria.
Accessibility Testing: Verifying a product is accessible to the people having disabilities (deaf, blind, mentally disabled etc.).
Ad Hoc Testing: A testing phase where the tester tries to break the system by randomly trying the system's functionality; it can include negative testing.
Agile Testing: Testing practice for projects using agile methodologies, treating development as the customer of testing and emphasizing a test-first design paradigm.
Application Binary Interface (ABI): A specification defining requirements for portability of applications in binary forms across different system platforms and environments.
Application Programming Interface (API): A formalized set of software calls and routines that can be referenced by an application program in order to access supporting system or network services.
Automated Software Quality (ASQ): The use of software tools, such as automated testing tools, to improve software quality.
Automated Testing:
- Testing employing software tools which execute tests without manual intervention. Can be applied in GUI, performance, API, etc. testing.
- The use of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and other test control and test reporting functions.
Basic Block: A sequence of one or more consecutive, executable statements containing no branches.
Basis Path Testing: A white box test case design technique that uses the algorithmic flow of the program to design tests.
Baseline: The point at which some deliverable produced during the software engineering process is put under formal change control.
Beta Testing: Testing of a pre-release version of a software product, conducted by customers.
Binary Portability Testing: Testing an executable application for portability across system platforms and environments, usually for confirmation to an ABI specification.
Black Box Testing: Testing based on an analysis of the specification of a piece of software without reference to its internal workings. The goal is to test how well the component conforms to the published requirements for the component.
Bottom Up Testing: An approach to integration testing where the lowest level components are tested first, then used to facilitate the testing of higher level components. The process is repeated until the component at the top of the hierarchy is tested.
Boundary Testing: Tests which focus on the boundary or limit conditions of the software being tested.
Bug: A fault in a program which causes the program to perform in an unintended or unanticipated manner.
Boundary Value Analysis: BVA is similar to Equivalence Partitioning but focuses on "corner cases" or values that are usually just out of the range defined by the specification. This means that if a function expects all values in the range of negative 100 to positive 1000, test inputs would include negative 101 and positive 1001.
Branch Testing: Testing in which all branches in the program source code are tested at least once.
Breadth Testing: A test suite that exercises the full functionality of a product but does not test features in detail.
CAST: Computer Aided Software Testing.
Capture/Replay Tool: A test tool that records test input as it is sent to the software under test. The input cases stored can then be used to reproduce the test at a later time. Most commonly applied to GUI test tools.
CMM: The Capability Maturity Model for Software (CMM or SW-CMM) is a model for judging the maturity of the software processes of an organization and for identifying the key practices that are required to increase the maturity of these processes.
Cause Effect Graph: A graphical representation of inputs and the associated outputs effects, which can be used to design test cases.
Code Complete: Phase of development where functionality is implemented in entirety; bug fixes are all that are left. All functions found in the Functional Specifications have been implemented.
Code Coverage: An analysis method that determines which parts of the software have been executed (covered) by the test case suite and which parts have not been executed and therefore may require additional attention.
Code Inspection: A formal testing technique where the programmer reviews source code with a group who ask questions analyzing the program logic, analyzing the code with respect to a checklist of historically common programming errors, and analyzing its compliance with coding standards.
Code Walkthrough: A formal testing technique where source code is traced by a group with a small set of test cases, while the state of program variables is manually monitored, to analyze the programmer's logic and assumptions.
Coding: The generation of source code.
Compatibility Testing: Testing whether a software is compatible with other elements of a system with which it should operate, e.g. browsers, Operating Systems, or hardware.
Component: A minimal software item for which a separate specification is available.
Concurrency Testing: Multi-user testing geared towards determining the effects of accessing the same application code, module or database records. Identifies and measures the level of locking, deadlocking and use of single-threaded code and locking semaphores.
Conformance Testing: The process of testing that an implementation conforms to the specification on which it is based. Usually applied to testing conformance to a formal standard.
Context Driven Testing: The context-driven school of software testing is a flavor of agile testing that continuously and creatively evaluates the testing opportunities in light of the potential information revealed and the value of that information to the organization right now.
Conversion Testing: Testing of programs or procedures used to convert data from existing systems for use in replacement systems.
Cyclomatic Complexity: A measure of the logical complexity of an algorithm, used in white-box testing.
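As a small illustration (the function is hypothetical), cyclomatic complexity can be estimated with the common shortcut V(G) = number of decision points + 1:

def classify(n):
    if n < 0:          # decision 1
        return "negative"
    elif n == 0:       # decision 2
        return "zero"
    else:
        return "positive"

# Two decision points give V(G) = 3, so there are three independent paths
# through the code and at least three test cases are needed to cover them.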
Data Dictionary: A database that contains definitions of all data items defined during analysis.
Data Flow Diagram: A modeling notation that represents a functional decomposition of a system.
Data Driven Testing: Testing in which the action of a test case is parameterized by externally defined data values, maintained as a file or spreadsheet.
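For illustration, a minimal data-driven sketch in Python; the rows are kept inline here, but in practice they would be maintained in an external file or spreadsheet, and login() is a hypothetical function under test:

import unittest

def login(user, password):
    # Hypothetical function under test.
    return user == "admin" and password == "secret"

TEST_DATA = [
    # (user, password, expected_result)
    ("admin", "secret", True),
    ("admin", "wrong", False),
    ("guest", "secret", False),
]

class DataDrivenLoginTests(unittest.TestCase):
    def test_login_cases(self):
        for user, password, expected in TEST_DATA:
            with self.subTest(user=user):
                self.assertEqual(login(user, password), expected)

if __name__ == "__main__":
    unittest.main()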
Debugging: The process of finding and removing the causes of software failures.
Defect: Nonconformance to requirements or functional / program specification.
Dependency Testing: Examines an application's requirements for pre-existing software, initial states, and configuration in order to maintain proper functionality.
Depth Testing: A test that exercises a feature of a product in full detail.
Dynamic Testing: Testing software through executing it.
Emulator: A device, computer program, or system that accepts the same inputs and produces the same outputs as a given system.
Endurance Testing: Checks for memory leaks or other problems that may occur with prolonged execution.
End-to-End testing: Testing a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.
Equivalence Class: A portion of a component's input or output domains for which the component's behavior is assumed to be the same, based on the component's specification.
Equivalence Partitioning: A test case design technique for a component in which test cases are designed to execute representatives from equivalence classes.
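For illustration, a minimal equivalence partitioning sketch, assuming a hypothetical classify_age() function whose specification defines three classes: invalid (below 0), minor (0-17) and adult (18 and above). One representative value is tested per class:

import unittest

def classify_age(age):
    # Hypothetical function under test.
    if age < 0:
        return "invalid"
    return "minor" if age < 18 else "adult"

class EquivalencePartitionTests(unittest.TestCase):
    def test_one_representative_per_class(self):
        self.assertEqual(classify_age(-5), "invalid")  # invalid class
        self.assertEqual(classify_age(10), "minor")    # minor class
        self.assertEqual(classify_age(30), "adult")    # adult class

if __name__ == "__main__":
    unittest.main()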
Exhaustive Testing: Testing which covers all combinations of input values and preconditions for an element of the software under test.
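Exhaustive testing is only feasible for very small input domains. For illustration, every combination of two boolean inputs to a hypothetical xor_gate() function can be checked:

import itertools
import unittest

def xor_gate(a, b):
    # Hypothetical function under test.
    return a != b

class ExhaustiveXorTests(unittest.TestCase):
    def test_all_input_combinations(self):
        for a, b in itertools.product([False, True], repeat=2):
            with self.subTest(a=a, b=b):
                self.assertEqual(xor_gate(a, b), (a and not b) or (b and not a))

if __name__ == "__main__":
    unittest.main()

For anything beyond a handful of inputs the number of combinations explodes, which is why techniques such as equivalence partitioning are used instead.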
Functional Decomposition: A technique used during planning, analysis, and design; creates a functional hierarchy for the software.
Functional Specification: A document that describes in detail the characteristics of the product with regard to its intended features.
Functional Testing: Testing the features and operational behavior of a product to ensure they correspond to its specifications. Testing that ignores the internal mechanism of a system or component and focuses solely on the outputs generated in response to selected inputs and execution conditions.
Gorilla Testing: Testing one particular module or piece of functionality heavily.
Gray Box Testing: A combination of Black Box and White Box testing methodologies: testing a piece of software against its specification but using some knowledge of its internal workings.
High Order Tests: Black-box tests conducted once the software has been integrated.
Independent Test Group (ITG): A group of people whose primary responsibility is software testing.
Inspection: A group review quality improvement process for written material. It consists of two aspects: product (document itself) improvement and process improvement (of both document production and inspection).
Integration Testing: Testing of combined parts of an application to determine if they function together correctly. Usually performed after unit and functional testing. This type of testing is especially relevant to client/server and distributed systems.
Installation Testing: Confirms that the application under test installs correctly and works as expected under different conditions, such as a new installation, an upgrade, and a complete or custom installation.
Localization Testing: Testing that verifies software has been correctly adapted for a specific locale, for example translated text and locale-specific formats for dates, numbers and currency.
Loop Testing: A white box testing technique that exercises program loops.
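For illustration, a minimal loop-testing sketch: the hypothetical total() function is exercised with zero, one, a typical number, and many loop iterations:

import unittest

def total(values):
    result = 0
    for v in values:   # loop under test
        result += v
    return result

class LoopTests(unittest.TestCase):
    def test_loop_iteration_counts(self):
        self.assertEqual(total([]), 0)                # zero iterations
        self.assertEqual(total([7]), 7)               # one iteration
        self.assertEqual(total([1, 2, 3, 4]), 10)     # typical case
        self.assertEqual(total(range(1000)), 499500)  # many iterations

if __name__ == "__main__":
    unittest.main()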
Metric: A standard of measurement. Software metrics are the statistics describing the structure or content of a program. A metric should be a real objective measurement of something such as number of bugs per lines of code.
Monkey Testing: Testing a system or an application on the fly, i.e. just a few tests here and there to ensure the system or application does not crash.
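For illustration, a minimal monkey-testing sketch: random strings are thrown at a hypothetical parse_command() function and the only check is that it never crashes:

import random
import string

def parse_command(text):
    # Hypothetical function under test.
    parts = text.split()
    return parts[0].lower() if parts else ""

def monkey_test(iterations=1000):
    for _ in range(iterations):
        length = random.randint(0, 20)
        text = "".join(random.choice(string.printable) for _ in range(length))
        parse_command(text)   # no assertion: a crash is the only failure

if __name__ == "__main__":
    monkey_test()
    print("no crashes observed")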
Negative Testing: Testing aimed at showing software does not work. Also known as "test to fail".
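For illustration, a minimal negative-testing sketch: invalid input is fed to a hypothetical parse_age() function and the tests assert that it is rejected:

import unittest

def parse_age(text):
    # Hypothetical function under test.
    value = int(text)   # raises ValueError for non-numeric input
    if value < 0:
        raise ValueError("age cannot be negative")
    return value

class NegativeParseAgeTests(unittest.TestCase):
    def test_rejects_non_numeric_input(self):
        with self.assertRaises(ValueError):
            parse_age("abc")

    def test_rejects_negative_age(self):
        with self.assertRaises(ValueError):
            parse_age("-1")

if __name__ == "__main__":
    unittest.main()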
N+1 Testing: A variation of Regression Testing. Testing conducted with multiple cycles in which errors found in test cycle N are resolved and the solution is retested in test cycle N+1. The cycles are typically repeated until the solution reaches a steady state and there are no errors.
Path Testing: Testing in which all paths in the program source code are tested at least once.
Performance Testing: Testing conducted to evaluate the compliance of a system or component with specified performance requirements. Often this is performed using an automated test tool to simulate a large number of users. Also known as "Load Testing".
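Dedicated tools are normally used, but for illustration here is a minimal Python sketch in which simulated users call a hypothetical handle_request() function concurrently and the test checks the average response time against an assumed 50 ms budget:

import time
import unittest
from concurrent.futures import ThreadPoolExecutor

def handle_request():
    # Stand-in for the real operation under test.
    time.sleep(0.01)
    return "ok"

class LoadTests(unittest.TestCase):
    def test_average_latency_under_budget(self):
        def timed_call(_):
            start = time.perf_counter()
            handle_request()
            return time.perf_counter() - start

        with ThreadPoolExecutor(max_workers=20) as pool:
            latencies = list(pool.map(timed_call, range(200)))

        self.assertLess(sum(latencies) / len(latencies), 0.05)  # assumed budget

if __name__ == "__main__":
    unittest.main()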
Positive Testing: Testing aimed at showing software works. Also known as "test to pass". See also Negative Testing.
Quality Assurance: All those planned or systematic actions necessary to provide adequate confidence that a product or service is of the type and quality needed and expected by the customer.
Quality Audit: A systematic and independent examination to determine whether quality activities and related results comply with planned arrangements and whether these arrangements are implemented effectively and are suitable to achieve objectives.
Quality Circle: A group of individuals with related interests that meet at regular intervals to consider problems or other matters related to the quality of outputs of a process and to the correction of problems or to the improvement of quality.
Quality Control: The operational techniques and the activities used to fulfill and verify requirements of quality.
Quality Management: That aspect of the overall management function that determines and implements the quality policy.
Quality Policy: The overall intentions and direction of an organization as regards quality as formally expressed by top management.
Quality System: The organizational structure, responsibilities, procedures, processes, and resources for implementing quality management.
Race Condition: A cause of concurrency problems. Multiple accesses to a shared resource, at least one of which is a write, with no mechanism used by either to moderate simultaneous access.
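For illustration, a minimal Python sketch of a race condition: two threads increment a shared counter with no lock, so updates can be lost; guarding the increment with a threading.Lock removes the race:

import threading

counter = 0

def increment_many(times):
    global counter
    for _ in range(times):
        counter += 1   # unsynchronized read-modify-write of shared state

threads = [threading.Thread(target=increment_many, args=(100000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Expected 200000, but lost updates may produce a smaller value.
print("counter =", counter)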
Ramp Testing: Continuously raising an input signal until the system breaks down.
Recovery Testing: Confirms that the program recovers from expected or unexpected events without loss of data or functionality. Events can include shortage of disk space, unexpected loss of communication, or power out conditions.
Regression Testing: Retesting a previously tested program following modification to ensure that faults have not been introduced or uncovered as a result of the changes made.
Release Candidate: A pre-release version, which contains the desired functionality of the final version, but which needs to be tested for bugs (which ideally should be removed before the final version is released).
Sanity Testing: Brief test of major functional elements of a piece of software to determine whether it is basically operational.
Scalability Testing: Performance testing focused on ensuring the application under test gracefully handles increases in work load.
Security Testing: Testing which confirms that the program can restrict access to authorized personnel and that the authorized personnel can access the functions available to their security level.
Smoke Testing: A quick-and-dirty test that the major functions of a piece of software work. Originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch on fire.
Soak Testing: Running a system at high load for a prolonged period of time. For example, running several times more transactions in an entire day (or night) than would be expected in a busy day, to identify any performance problems that appear after a large number of transactions have been executed.
Software Requirements Specification: A deliverable that describes all data, functional and behavioral requirements, all constraints, and all validation requirements for software.
Software Testing: A set of activities conducted with the intent of finding errors in software.
Static Analysis: Analysis of a program carried out without executing the program.
Static Analyzer: A tool that carries out static analysis.
Static Testing: Analysis of a program carried out without executing the program.
Storage Testing: Testing that verifies the program under test stores data files in the correct directories and that it reserves sufficient space to prevent unexpected termination resulting from lack of space. This is external storage as opposed to internal storage.
Stress Testing: Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements to determine the load under which it fails and how. It is performance testing using a very high level of simulated load.
Structural Testing: Testing based on an analysis of internal workings and structure of a piece of software.
System Testing: Testing that attempts to discover defects that are properties of the entire system rather than of its individual components.
Testability: The degree to which a system or component facilitates the establishment of test criteria and the performance of tests to determine whether those criteria have been met.
Testing:
- The process of exercising software to verify that it satisfies specified requirements and to detect errors.
- The process of analyzing a software item to detect the differences between existing and required conditions (that is, bugs) and to evaluate the features of the software item.
- The process of operating a system or component under specified conditions, observing or recording the results, and making an evaluation of some aspect of the system or component.
Test Bed: An execution environment configured for testing. May consist of specific hardware, OS, network topology, configuration of the product under test, other application or system software, etc. The Test Plan for a project should enumerate the test bed(s) to be used.
Test Case:
- Test Case is a commonly used term for a specific test. This is usually the smallest unit of testing. A Test Case will consist of information such as requirements testing, test steps, verification steps, prerequisites, outputs, test environment, etc.
- A set of inputs, execution preconditions, and expected outcomes developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement.
Test Driven Development: Testing methodology associated with Agile Programming in which every chunk of code is covered by unit tests, which must all pass all the time, in an effort to eliminate unit-level and regression bugs during development. Practitioners of TDD write a lot of tests, often roughly as many lines of test code as production code.
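For illustration, a minimal TDD-style sketch: the unit test below would be written first, and the hypothetical add() function is then implemented just far enough to make it pass:

import unittest

def add(a, b):
    # Simplest implementation that satisfies the test written first.
    return a + b

class AddTests(unittest.TestCase):
    def test_add_two_numbers(self):
        self.assertEqual(add(2, 3), 5)

if __name__ == "__main__":
    unittest.main()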
Test Driver: A program or test tool used to execute tests. Also known as a Test Harness.
Test Environment: The hardware and software environment in which tests will be run, and any other software with which the software under test interacts when under test including stubs and test drivers.
Test First Design: Test-first design is one of the mandatory practices of Extreme Programming (XP). It requires that programmers do not write any production code until they have first written a unit test.
Test Harness: A program or test tool used to execute tests. Also known as a Test Driver.
Test Plan: A document describing the scope, approach, resources, and schedule of intended testing activities. It identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning.
Test Procedure: A document providing detailed instructions for the execution of one or more test cases.
Test Script: Commonly used to refer to the instructions for a particular test that will be carried out by an automated test tool.
Test Specification: A document specifying the test approach for a software feature or combination of features and the inputs, predicted results and execution conditions for the associated tests.
Test Suite: A collection of tests used to validate the behavior of a product. The scope of a Test Suite varies from organization to organization; there may be several Test Suites for a particular product, for example. In most cases, however, a Test Suite is a high level concept, grouping together hundreds or thousands of tests related by what they are intended to test.
Test Tools: Computer programs used in the testing of a system, a component of the system, or its documentation.
Top Down Testing: An approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested.
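For illustration, a minimal top-down sketch: the high-level generate_report() component is tested first while its lower-level dependency fetch_sales_data() is replaced by a stub (both names are hypothetical):

import unittest
from unittest import mock

def fetch_sales_data():
    raise NotImplementedError("lower-level component not integrated yet")

def generate_report():
    data = fetch_sales_data()
    return "Total sales: {}".format(sum(data))

class TopDownReportTests(unittest.TestCase):
    def test_report_with_stubbed_lower_level(self):
        # Stub out the lower-level component until it is ready for integration.
        with mock.patch(__name__ + ".fetch_sales_data", return_value=[10, 20, 30]):
            self.assertEqual(generate_report(), "Total sales: 60")

if __name__ == "__main__":
    unittest.main()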
Traceability Matrix: A document showing the relationship between Test Requirements and Test Cases.
Usability Testing: Testing the ease with which users can learn and use a product.
Use Case: The specification of tests that are conducted from the end-user perspective. Use cases tend to focus on operating the software the way an end-user would during day-to-day activities.
User Acceptance Testing: A formal product evaluation performed by a customer as a condition of purchase.
Unit Testing: Testing of individual software components.
Validation: The process of evaluating software at the end of the software development process to ensure compliance with software requirements. The techniques for validation are testing, inspection and reviewing.
Verification: The process of determining whether or not the products of a given phase of the software development cycle meet the implementation steps and can be traced to the incoming objectives established during the previous phase. The techniques for verification are testing, inspection and reviewing.
Volume Testing: Testing which confirms that any values that may become large over time (such as accumulated counts, logs, and data files), can be accommodated by the program and will not cause the program to stop working or degrade its operation in any manner.
Walkthrough: A review of requirements, designs or code characterized by the author of the material under review guiding the progression of the review.
White Box Testing: Testing based on an analysis of internal workings and structure of a piece of software. Includes techniques such as Branch Testing and Path Testing. Also known as Structural Testing and Glass Box Testing.
Workflow Testing: Scripted end-to-end testing which duplicates specific workflows which are expected to be utilized by the end-user.