
Thursday, March 1, 2012

Testing Stuff

Sample test cases link: http://www.softwaretestinghelp.com/sample-test-cases-testing-web-desktop-applications/

Software Development: the process in which a software product is developed to fulfill the needs of a customer and delivered within a specified cost and time period.

Process: a particular method of doing something in a series of steps.

Software Development Life Cycle: the software development life cycle is a problem-solving process; the software life cycle begins when an application is first conceived and ends when it is no longer in use.

Identify and define the problem to solve by consulting with the client.
Formalize the description into a specification.
Implement that specification.

The phases of the life cycle are:
Feasibility study
Analysis
Design
Coding
Testing
Installation & Maintenance

Feasibility Study: the analyst conducts an initial study of the problem and gathers information from the client. The feasibility report covers application areas, cost estimation, system requirements, time scale for implementation, expected benefits, etc.

Analysis: the process of investigating a business to determine how best to manage the various procedures and information-processing tasks it involves. System analysis produces the Business Requirements Specification (BRS) document, the Functional Requirements Specification (FRS) document, use cases (user action and system response), system flow charts, data flow diagrams, and organization charts. Based on the feasibility study, the business analyst prepares the SRS document, and use cases are prepared from the SRS.

Design: planning the structure of the information system to be implemented; design determines how it should be done. The High-Level Design and Low-Level Design are prepared here. The design report covers user interface design, design of output reports, input screens, database tables, files, system security, backups, validation, passwords, and the test plan.

Coding: translating the design of the system into code in a given programming language. This phase produces all the programs, functions, and reports related to the system.

Testing: Testing is the process of executing a program with the intention of finding defects.

Software testing is the process of evaluating a system by manual or automated means to verify that it satisfies the customer requirements, or to identify differences between expected and actual results.

Installation & Maintenance: includes file conversion, system testing, staff training, and corrective, perfective, and adaptive maintenance.

Software Development Life Cycle Models:
1. Waterfall Model: the simplest software process model; its phases are organized in a linear order. It is well suited for routine projects where all the requirements are understood up front.
The project begins with a feasibility analysis, and project planning follows requirement analysis. Design starts after requirement analysis is complete, and coding starts after design is complete. Once coding is finished, the code is integrated and testing starts. On successful completion of testing the system is installed, and operation and maintenance follow.

Limitations:
Real projects do not follow a sequential flow.
It takes a long time, and the product is released only at the end.
This model is suitable only where all the requirements are known before the design starts.
If anything changes in one phase, all the phases have to be rechecked.

2. Prototype Model: used for new systems where the requirements are hard to determine, and where it is not clear that the constraints can be met or that algorithms can be developed to implement the requirements. Prototyping begins with requirements gathering. The developer and customer meet and identify the areas that are not clear; then a quick throwaway prototype is built and released. Feedback is collected: what is correct, areas to be modified, features to be added, etc. Based on this feedback the prototype is modified. This cycle continues until no more changes are required. It is an excellent technique for reducing the risks associated with a project.

3. Iterative Model: useful for product development in which the developers themselves provide the specifications. The first version of the product is released with minimal, essential features. Based on feedback and experience with this version, a list of additional requirements is drawn up. This model is useful if the core of the application is well understood and increments can be easily defined and negotiated.

4. Spiral Model: a version-by-version development model in which each cycle adds features on top of, or replaces, the older functionality. It is best suited for product development; it is less suitable for application development.

5. V Model: a verification and validation model that focuses testing effort at each stage of the development cycle. Development and testing go on simultaneously in this model.

Software testing: the process of evaluating a system by manual or automated means to verify that it satisfies the customer requirements, or to identify the differences between expected and actual results.

Why software testing: testing is important because software that is not tested properly may cause mission failure and hurt operational performance and reliability. If testing is done poorly, it leads to high maintenance costs and user dissatisfaction. Testing is done:
To discover defects
To avoid user detecting problems
To prove that the software has no faults
To ensure that product works as user expects
To learn about the reliability of the software
To detect defects early, which helps in reducing the cost of defect fixing.

Objective of Testing: to find defects in the requirements, design, and coding phases as early as possible.

Objective of Tester: to find the defects as early as possible and make sure they get fixed.

Quality: Quality means customer satisfaction: a bug-free product, delivered on time and within budget, that meets the customer's requirements and expectations and is maintainable.

Why Quality: Quality is the most important factor affecting an organization's long-term performance. It is the solution to the problem. Quality saves money; it doesn't cost.

Testing Life Cycle:
System study
Scope/approach/estimation
Test plan design
Test case design
Test case review
Test case execution
Defect handling
Gap analysis
Deliverables

System Study:
Domain knowledge: used to learn about the client's business
Software: front end and back end
Hardware: Internet, intranet, and servers
Function points: ten lines of code = 1 function point
No. of pages in the documents
No. of resources: programmers, designers, managers
No. of days: time to complete the project
No. of modules
Priority: high, medium, or low importance for each module

Scope: what to be tested, what not to be tested.
Approach: testing life cycle
Estimation: time for test plan design, test case design, test case review, and test case execution.

Test plan design:
About the client and company
Reference document (BRS, FRS, and UI etc.,)
Scope
Overview of application
Testing approach (Test strategy)
Testing definition
Testing technique
Start criteria
Stop criteria
Resources and their roles and responsibilities
Defect definition
Risk/mitigation plan
Training required
Schedule
Deliverables

Test Case Design:
Test case number
Pre condition
Test case description
Expected results
Actual results/ Status (pass/fail)
Remarks
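
For illustration, here is a hypothetical test case in the above format (the application and values are invented, not taken from any real project):

Test case number: TC_Login_01
Pre condition: a registered user account exists
Test case description: enter a valid user name and a valid password, then click Login
Expected results: the home page is displayed with the user's name
Actual results/ Status: home page displayed / Pass
Remarks: none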

Test Case Review:
Peer to peer review
Team lead review
Team manager review

Test Case Execution:
No. of test cases executed
No. of defects found
Screenshots of successful and failed executions taken
Time taken for execution
Time wasted due to unavailability of the system

Defect Handling:
Report the defects to testers/QA engineers, developers, technical support, and end users.

Gap Analysis:
BRS vs. SRS
SRS vs. TC
TC vs. Defects

TESTING LEVELS:
Unit testing
Integration testing
System testing
Acceptance testing

Unit testing: in this testing each module is tested independently. It checks date formats, input conditions, combo boxes, list boxes, option buttons, and checkboxes. It uses the boundary value analysis, equivalence partitioning, and error guessing techniques.
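
For illustration only, a minimal unit test sketch in Python's unittest framework; the function under test (validate_age) and its 18-to-60 rule are invented for this example:

    import unittest

    def validate_age(age):
        # Invented module under test: accepts ages 18 to 60 inclusive.
        return 18 <= age <= 60

    class TestValidateAge(unittest.TestCase):
        def test_valid_age(self):
            self.assertTrue(validate_age(30))

        def test_below_lower_boundary(self):
            # Boundary value analysis: just outside the valid range.
            self.assertFalse(validate_age(17))

        def test_non_numeric_input(self):
            # Error guessing: a tester suspects non-numeric input may break it.
            with self.assertRaises(TypeError):
                validate_age("thirty")

    if __name__ == "__main__":
        unittest.main()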

Integration testing: in this, the software units of an application are combined and tested for communication across the interfaces between them. There are three types of integration.

1. Big bang testing: in this testing every module is first unit tested; after that, all the modules are combined and tested at once.

2. Bottom-up testing: in this, modules are integrated from the lower levels of the hierarchy to the higher levels. The lowest-level modules are tested first, then the next-level modules. The test harness used here is called a 'driver'.
Low-level components are combined into clusters that perform a specific software subfunction. Each cluster is tested using a driver; the drivers are then removed and the clusters are combined, moving upward in the program structure.

3. Top-down testing: in this testing, modules are integrated from the higher levels of the hierarchy to the lower levels. The highest-level module is tested first, then the next-level modules. The placeholder used here is called a 'stub'.
The main control module is used as a test driver, and stubs are substituted for all components directly subordinate to it. Depending on the integration approach selected, subordinate stubs are replaced one at a time with actual components, and tests are conducted as each component is integrated. On completion of each set of tests, another stub is replaced with a real component.
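
A minimal sketch of this idea in Python; the order module and payment component here are invented for illustration. The stub stands in for a subordinate component that is not ready yet, and the driver is the harness that exercises the module under test:

    # Stub: a placeholder for the real payment component, returning a canned response.
    def payment_stub(amount):
        return "APPROVED"

    # Main control module under test, wired to the stub instead of the real component.
    def place_order(amount, pay=payment_stub):
        status = pay(amount)
        return "Order confirmed" if status == "APPROVED" else "Order failed"

    # Driver: a simple harness that calls the module under test and checks the result.
    assert place_order(250) == "Order confirmed"
    print("top-down integration check passed with the stub in place")

When the real payment component is ready, it replaces payment_stub and the same test is run again.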

System testing: in this testing all the modules are combined and the whole system is tested at once. The goal is to see whether the software meets its requirements. This is also called end-to-end testing.
The following tests can be categorized under system testing:
Recovery testing
Security testing
Stress testing
Performance testing

Acceptance testing: this testing is performed using the client's real data to demonstrate that the software is satisfactory. It occurs just before the software product is released. The main objective of this testing is to get acceptance from the client.

Testing Types (or) Testing Methods:
White box testing
Black box testing
Grey box testing
Incremental testing
Thread testing

White box testing: tests the logical functionality of the application. The tester has full knowledge of how the software works and full access to the code. This is also called structural testing or glass box testing. In it, all statements and conditions are executed at least once. This technique is usually used by the development team.

Black box testing: black box testing methods focus on the functional requirements of the software. It finds incorrect or missing functions, errors in data structures, performance errors, and initialization and termination errors. In this testing we check the correctness of the functionality through inputs and outputs; the tester needs no knowledge of the software's code.

Grey box testing: this is the combination of both white box and black box testing. The tester should have knowledge of both the internals and externals of the function. Grey box testing is especially important for web and Internet applications, because the Internet is built around loosely integrated components that connect via relatively well-defined interfaces.

Incremental testing: it involves adding unit-tested programs to a given module one by one, testing each resulting combination. There are two types of incremental testing:

Top-down approach
Bottom-up approach

Thread testing: a combination of two or more modules tested together to perform functional testing.

Testing Techniques:

White box testing techniques:
Statement coverage: in this, each and every statement is executed at least once.
Decision coverage: in this, every decision takes both a true and a false outcome at least once.
Condition coverage: in this, each condition within every decision takes all possible outcomes at least once.
Multiple condition coverage: in this, every point of entry is invoked at least once and all possible combinations of condition outcomes within each decision are executed at least once (see the example below).
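
To see the difference between these coverage levels, consider this small invented Python function with one decision made of two conditions:

    def classify(a, b):
        # One decision ('if') built from two conditions (a > 0, b > 0).
        if a > 0 and b > 0:
            return "both positive"
        return "not both positive"

    # Statement coverage: (1, 1) and (-1, -1) together execute every statement.
    # Decision coverage: the same two cases make the whole 'if' true and false.
    # Condition coverage: each condition must be true and false at least once,
    # e.g. (1, -1) and (-1, 1).
    # Multiple condition coverage: all four combinations of the two conditions:
    for a, b in [(1, 1), (1, -1), (-1, 1), (-1, -1)]:
        print(a, b, classify(a, b))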

Black box testing techniques:
Equivalence partitioning: for each piece of the specification, generate one or more equivalence classes.
Label the classes as valid or invalid.
Generate one test case for each invalid class.
Generate test cases that cover as many valid classes as possible.

For example, if the valid range is 100 to 1000:
<100 = invalid (-ve)
>1000 = invalid (-ve)
100 to 1000 = valid (+ve)
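
As a small Python sketch of these steps, assuming a hypothetical field that accepts values from 100 to 1000:

    LOW, HIGH = 100, 1000

    def is_valid(value):
        # Invented validation rule for the field under test.
        return LOW <= value <= HIGH

    # One representative test value per equivalence class.
    partitions = {
        "below range (invalid)": 50,
        "inside range (valid)": 500,
        "above range (invalid)": 1500,
    }

    for name, value in partitions.items():
        print(name, "->", value, is_valid(value))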

Boundary value analysis:
Generate test cases for the boundary values:
minimum value, minimum value+1, minimum value-1
maximum value, maximum value+1, maximum value-1

For example, if the valid range is 100 to 1000:
Lower boundary = 99 (-ve), 100 (+ve), 101 (+ve)
On the boundary = 100 to 1000 (+ve)
Upper boundary = 999 (+ve), 1000 (+ve), 1001 (-ve)
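
The same hypothetical 100-to-1000 field expressed as boundary value test data in Python (is_valid is the invented rule from the previous sketch):

    LOW, HIGH = 100, 1000

    def is_valid(value):
        return LOW <= value <= HIGH

    # (test value, expected result) pairs around both boundaries.
    boundary_cases = [
        (LOW - 1, False), (LOW, True), (LOW + 1, True),      # lower boundary
        (HIGH - 1, True), (HIGH, True), (HIGH + 1, False),   # upper boundary
    ]

    for value, expected in boundary_cases:
        assert is_valid(value) == expected, value
    print("all boundary value checks passed")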

Error guessing: using experience and intuition to generate test cases for inputs the specification is likely to mishandle.

When should we start writing test cases?
The V-model gives the most suitable answer:

Based on the BRS document, acceptance test cases are written for acceptance testing.
Based on the software requirements (SRS) document, system test cases are written for system testing.
Based on the design documents, integration test cases are written for integration testing.
Based on the code, unit test cases are written for unit testing.
  
Test plan: a test plan is a document which describes the objectives, scope, approach, and focus of a testing effort. Some of the items included in a test plan are: title; ID and version number of the software; objective of the testing effort; software product overview; requirement and design documents; overall software project organization and responsibilities; project risk analysis; testing priorities; test environment; database setup requirements; test tools, test scripts, and problem-tracking tools; metrics; security and licensing issues; test team; test strategy; test risk analysis; test schedule; communication approach; and the functionality list of test cases.

Test case: a description of what is to be tested, what data is to be given, and what actions are to be done to check the actual result against the expected result. It contains the test case ID, precondition, description, expected values, status, and remarks.
Test cases are reusable: test cases developed for functionality testing can be used for integration, system, regression, and performance testing with few modifications.

Characteristics of a good test case
Test case should start with 'what you are testing'
Test case should be independent
Test case should not contain 'if' statements
Test case should be uniform
All the test cases should be traceable

Baseline Documents
Construction of an application and its testing are done using certain documents. These documents are written in sequence, each derived from the previous one (e.g., BRS, then SRS, then design documents, then test cases).

Defect Handling:
A defect is a coding error in a computer program. If we find any defect, error, or mismatch during testing, we report the defect details to the team lead, the developers, or the project manager.

Types of defects
Cosmetic flaw, system crash, data corruption, slow performance, data loss, missing feature, documentation issue, installation problem, incorrect operation, unexpected behavior, unfriendly behavior.

Severity: how seriously the bug affects the product. Severity levels are high, medium, and low.

The general rule for fixing defects depends on severity: all high-severity defects should be fixed first.

Priority: how soon the bug should be fixed.

 When to stop testing
Release deadlines, testing deadlines
Test cases completed with certain percentage passed
Test budget depleted
Bug rate falls below a certain level
Beta or alpha testing period ends
Coverage of code/functionality/requirements reaches a specified point.

What if there isn’t enough time for thorough testing:
Which functionality is most important to the project?
Which functionality is most visible to the user?
Which functionality has the largest safety impact?
Which functionality has the largest financial impact?
Which parts of the code are most complex?
Which parts of the requirements and design are unclear?
Which aspects of similar/related previous projects caused problems?
What kinds of tests could easily cover multiple functionalities?
What tests will have the best high-risk coverage to time required ratio?
Take the help of the customer in understanding what is most important to him.

Testing limitations
We can only test against the system requirements.
Testing may not detect errors in the requirements.
Incomplete requirements may lead to inadequate or incorrect testing.
Exhaustive testing is impossible.
Testing is a compromise between thoroughness and budget.

Testing principles
Testing cannot show the absence of defects, only their presence
The earlier an error is made, the costlier it is.
The later an error is detected, the costlier it is.

Why software has bugs:
Different members at different levels develop a software application.
Miscommunication or no communication
Software complexity
Programming errors
Changing requirements
Time pressure
Egos
Poorly documented code.

Common problems in the software development process:
Poor requirements
Unrealistic schedule
Inadequate testing
Miscommunication

Each test case has three stages
It drives the application from the initial state to the state you want to test
It verifies that the actual state matches the expected state. This stage is the heart of the test case.
It cleans up the application, in preparation for the next test case.
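
In a unit testing framework such as Python's unittest, these three stages map onto setUp, the test method itself, and tearDown. A sketch, with an invented file-writing feature as the application under test:

    import os
    import tempfile
    import unittest

    class TestSavedFile(unittest.TestCase):
        def setUp(self):
            # Stage 1: drive the application to the state you want to test.
            self.path = os.path.join(tempfile.mkdtemp(), "out.txt")
            with open(self.path, "w") as f:
                f.write("hello")

        def test_file_contents(self):
            # Stage 2: verify that the actual state matches the expected state.
            with open(self.path) as f:
                self.assertEqual(f.read(), "hello")

        def tearDown(self):
            # Stage 3: clean up in preparation for the next test case.
            os.remove(self.path)
            os.rmdir(os.path.dirname(self.path))

    if __name__ == "__main__":
        unittest.main()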

Life cycle of Automation
Analyze the application
Select the tool
Identify the scenarios
Design/record test scripts
Modify the test scripts
Run the test scripts
Viewing results
Reporting defects

Automation: using a software program to test another software program; this is referred to as automated software testing.

Why Automation
Avoid the errors that humans make when they get tired.
Each future test cycle takes less time and requires less human intervention.
Required for Regression testing.
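
In practice a regression run often just re-executes the existing automated suite against the new build; with Python's unittest, for example, the command "python -m unittest discover" re-runs every test it can find (assuming the tests live where discovery looks by default).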

Will automated testing tools make testing easier?
It depends: for small projects, the time needed to learn and implement them may not be worth it. For larger projects or ongoing long-term projects, they can be valuable.
  
Benefits of test automation
Allows more testing to happen
Tightens the test life cycle
Testing is consistent, repeatable.
Useful when new patches released.
Makes configuration testing easier
Makes regression testing easier
Test battery can be continuously improved.

False benefits
Fewer tests will be needed
Testing will be easier if it is automated
Compensate for poor design
No more manual testing

Different automated tools
Rational Robot
WinRunner
SilkTest
QARun
WebFT

Tester responsibilities
Follow the test plans, scripts etc as documented.
Report faults objectively and factually
Check tests are correct before reporting defects
Assess risk objectively
Prioritize what you report
Communicate the truth
Maintain good relationships with developers and QA staff

Different types of testing
Alpha testing: testing of a software product at the developer's site by the end user.
Ad hoc testing: testing of a software application without using test case techniques.
Adding value: adding something that the customer wants that was not there before.
Application: a single software product that may or may not fully support a business function.
Beta testing: testing of a developed software product at the end user's site by the end user.
Batch test: the execution of more than one test script at a time.
Boundary value testing: a test case design technique in which test cases are designed using the boundary values.
Back end testing: testing how values are saved, retrieved, or deleted in the database.

Bug tracking tools: used to track the bugs found in software applications. When a bug is found, the tester records it and sends it to the developer. The developer fixes it and sends it back to the tester as fixed. The tester checks that the bug has really been fixed and then closes it.

Benchmarking: comparing our product to the best products of the best competitors.

Baseline: quantitative measure of the current level of performance.

Benefits realization testing: testing conducted after an application is moved into production to determine whether it is likely to meet the originating business case.

Compatibility testing: testing how well the software works in a particular hardware/software/operating system/network environment.

Condition testing: a kind of path testing that aims to exercise each of the logical conditions in the program.

Component testing: testing of individual software components of a system.

Conversion testing: testing of programs or procedures used to convert data from existing systems for use in replacement systems.

Configuration testing: a system-level test to determine the ideal and minimum configurations on which the software works well.

Data flow testing: a technique in which test cases are designed based on variable usage within the code.

Dirty testing: testing whose aim is to show that the software does not work.

Documentation testing: testing the application against the requirement documents.

Design phase testing: reviews, inspections, and prototypes.

Digital signature testing: testing the bitmaps in the developed software.

Entry point: the first executable statement within a component.

Exit point: the last executable statement within a component.

Equivalence partition testing: a test case design technique for a component in which test cases are designed to execute representatives of each equivalence class.

Exhaustive testing: executing the program with all possible combination of values for program variables.

Exploratory testing: testing the application without using a recognized test case technique.

Functional testing: testing the application against its specified functional requirements without regard to the final program structure; also known as black box testing.

Hybrid Testing: combination of Top-Down and Bottom-up Testing.

Interface Testing: Testing the interfaces between the components of software.

Inspection: the subject of an inspection is typically a document, such as a requirements specification or a test plan. The purpose of an inspection is to find problems and see what is missing, not to fix anything. Most problems are found during the inspection itself. The result of the inspection meeting should be a written report.

Install / uninstall testing: testing of full, partial or upgrade install / uninstall processes.

Input domain: the set of all possible inputs.

Known bugs: bugs the developers already know about; if the delivery date has arrived, the project is delivered to the customer anyway and the bugs are rectified afterwards.

Loop testing: aims to expose the bugs that typically occur in loop structures.

Life cycle testing: testing at all development stages, i.e., the requirements, analysis, design, and coding phases.

Maintainability Testing: Testing whether the system meets its specified objectives for maintainability.

Mutation Testing: small changes are made to the program, and the existing test cases are re-executed to check whether they catch the change.

Monkey testing: covering only the main activities of the application. It is used when there is no time left and the release date is near.

Memory leakage: improper allocation and deallocation of memory during execution time.

Negative Testing: testing aim is to show that the software does not work.

Non-functional requirements testing: testing of those requirements that do not relate to functionality. i.e., performance, usability.

Output domain: the set of all possible outputs.

Operational testing: testing conducted to evaluate a system or component in its operational environment.

Performance testing: testing aim is to determine the actual performance of the system against the performance objectives under peak and normal conditions.

Portability testing: testing aimed at showing that the developed software can be ported to the specified hardware/software platforms.

Progressive testing: testing of new features after regression testing of previous features.

Program phase testing: is also called white box testing.

Recovery testing: testing how well a system recovers from crashes, hardware failures, and other problems. It is a system-level test.

Regression testing: execution of test cases on a new or modified version of the application, to verify that the modifications have not caused unintended adverse effects; retesting of the modified application.

Requirement phase testing: walkthroughs, reviews, and inspections.

Security testing: testing how well the system protects against unauthorized internal or external access or testing whether the system meets its specified security objectives.

Static testing: testing of an object without execution on a computer.

Storage testing: testing whether the system meets its specified storage objectives

Stress testing: used to find the bugs that appear when an application is working at maximum volumes of resources.

Sanity Testing: an initial test to determine whether the software is stable enough to accept for a major testing effort.

Smoke testing: a quick, shallow test of the major functions to check that the build is stable enough for further testing.

Software Testing: a set of activities conducted with the intent of finding errors in software.

Scalability testing: objective is to find the maximum number of users system can handle.

Stub: a software module implemented for a special purpose; it is used to develop or test a component.

Six Sigma Quality: 99.99966% perfect; only 3.4 defects per million opportunities.

Transaction flow testing: aims to test the transactions done by the software.

User-interface Testing: testing interface issues such as ease of use, navigation, clarity, and screen layout.

Usability Testing: testing the user-friendliness of the application, using techniques like user interviews, surveys, and video recording of user sessions.

Verification: checks whether we are building the product right; i.e., checking the application against the corresponding development documents.

Verification typically involves reviews and meetings to evaluate documents, plans, code, requirements, and specifications. It determines the consistency, correctness, and completeness of a program at each stage.

Validation: checks whether we are building the right product; i.e., checking that the application's functionality matches the customer requirements.

Validation typically involves testing, and takes place after verification is completed. It determines the correctness of the final program with respect to its requirements.

Version control: tracking the maturity of the new version compared to the previous version.

Walkthrough: an informal meeting for evaluation or informational purposes. Little or no preparation is usually required.
