A TESTER'S OATH
----------------------
I shall develop a negative attitude
I shall strive to look for issues in everything I see, hear and touch
I shall look at the world through my customer's eyes
I shall strive to look at everything I do through my customer's eyes; I shall strive to empathize with my customer
I shall develop a curious and questioning mind
I shall assume nothing, nor take anything for granted; I will constantly question, reason and understand
*************************************************************************************
A place where you can find daily updates on testing: openings, job details, workshop details, and everything about testing and its importance
'''Software testing''' is the process used to measure the [[software quality|quality]] of developed [[computer software]]. Usually, quality is constrained to such topics as [[correctness]], completeness, and [[computer security audit|security]], but it can also include more technical requirements, as described under the [[International Organization for Standardization|ISO]] standard [[ISO 9126]], such as capability, [[reliability]], [[algorithmic efficiency|efficiency]], [[Porting|portability]], [[maintainability]], compatibility, and [[usability]]. Testing is a process of technical investigation, performed on behalf of stakeholders, that is intended to reveal quality-related information about the product with respect to the context in which it is intended to operate. This includes, but is not limited to, the process of executing a program or application with the intent of finding errors. Quality is not an absolute; it is value to some person. With that in mind, testing can never completely establish the correctness of arbitrary computer software; testing furnishes a '''criticism''' or '''comparison''' that compares the state and behaviour of the product against a specification. An important point is that ''software testing'' should be distinguished from the separate discipline of ''[[Software Quality Assurance]]'' ([[SQA]]), which encompasses all business process areas, not just testing.
There are many approaches to software testing, but effective testing of complex products is essentially a process of investigation, not merely a matter of creating and following a routine procedure. One definition of testing is "the process of questioning a product in order to evaluate it", where the "questions" are operations the tester attempts to execute with the product, and the product answers with its behavior in reaction to the probing of the tester. Although most of the intellectual processes of testing are nearly identical to those of review or inspection, the word "testing" also connotes the dynamic analysis of the product—putting the product through its paces. Sometimes one therefore refers to reviews, walkthroughs or inspections as "static testing", whereas actually running the program with a given set of test cases in a given development stage is often referred to as "dynamic testing", to emphasise the fact that formal review processes form part of the overall testing scope. A good test is sometimes described as one which reveals an error; however, more recent thinking suggests that a good test is one which reveals information of interest to someone who matters within the project community: since a test measures quality, it may well find no error at all, simply because the software works the way it was specified.
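To make the "questioning a product" definition concrete, here is a minimal sketch of dynamic testing in Python; the parse_age function is hypothetical. Each test method is a "question" put to the product, and each assertion checks the product's "answer" against an expectation:

<source lang="python">
import unittest

def parse_age(text):
    """Hypothetical function under test: convert user input to an age."""
    value = int(text)
    if value < 0:
        raise ValueError("age cannot be negative")
    return value

class TestParseAge(unittest.TestCase):
    # Each test method is a "question" put to the product; each
    # assertion checks the product's "answer" against an expectation.
    def test_valid_age(self):
        self.assertEqual(parse_age("42"), 42)

    def test_negative_age_is_rejected(self):
        with self.assertRaises(ValueError):
            parse_age("-1")

if __name__ == "__main__":
    unittest.main()
</source>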
== Introduction ==
In general, [[software engineering|software engineers]] distinguish software [[Fault (technology)|fault]]s from software [[failure]]s. In the case of a failure, the software does not do what the user expects. A fault is a programming error that may or may not actually manifest as a failure. A fault can also be described as an error in the [[correctness]] of the [[semantics]] of a computer program. A fault will become a failure if the exact computation conditions are met, one of them being that the faulty portion of the [[software]] actually executes on the [[Central processing unit|CPU]]. A fault can also turn into a failure when the software is ported to a different hardware platform or a different compiler, or when the software is extended.
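A small sketch (hypothetical function, in Python) may make the distinction concrete: the fault is a property of the code and is always present, while the failure only appears when the conditions that exercise the faulty path are met.

<source lang="python">
def average(values):
    """Hypothetical example containing a latent fault: there is no
    guard against an empty list. The fault is always present in the
    code, but it only manifests as a failure when the faulty path
    actually executes."""
    return sum(values) / len(values)  # ZeroDivisionError when values is empty

print(average([2, 4, 6]))  # 4.0 -- the fault exists, but no failure occurs

# The exact computation condition (an empty list) is now met, so the
# latent fault becomes an observable failure.
print(average([]))         # raises ZeroDivisionError
</source>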
Software testing may be viewed as a sub-field of [[Software Quality Assurance]], but it typically exists independently (and in some companies there may be no SQA function at all). In SQA, software process specialists and auditors take a broader view of software and its development: they examine and change the software engineering process itself in order to reduce the number of faults that end up in the code, or to deliver software faster.
Regardless of the methods used or the level of formality involved, the desired result of testing is a level of [[confidence]] in the software: the organization can be confident that the software has an acceptable defect rate. What constitutes an acceptable defect rate depends on the nature of the software. An arcade video game designed to simulate flying an airplane would presumably have a much higher tolerance for defects than software used to control an actual airliner.
A problem with software testing is that the number of [[Software bug|defect]]s in a software product can be very large, and the number of [[computer configuration|configuration]]s of the product larger still. Bugs that occur infrequently are difficult to find in testing. A [[rule of thumb]] is that a system that is expected to function without faults for a certain length of time must have already been tested for at least that length of time. This has severe consequences for projects that aim to write long-lived, reliable software, since it is not usually commercially viable to test over the proposed length of time unless this is a relatively short period. A few days or a week would normally be acceptable, but any longer period would usually have to be simulated according to carefully prescribed start and end conditions.
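The configuration explosion is easy to underestimate. A back-of-the-envelope count, using purely illustrative (assumed) numbers, shows how quickly even a modest product outgrows exhaustive testing:

<source lang="python">
from itertools import product

# Purely illustrative dimensions for a hypothetical desktop product.
operating_systems = ["Windows 10", "Windows 11", "macOS", "Linux"]
locales = ["en", "de", "fr", "ja", "hi"]
browsers = ["Chrome", "Firefox", "Edge", "Safari"]
databases = ["SQLite", "PostgreSQL", "MySQL"]

configurations = list(product(operating_systems, locales, browsers, databases))
print(len(configurations))            # 4 * 5 * 4 * 3 = 240 configurations

# Even a ten-minute smoke test per configuration costs a full working
# week, before a single in-depth functional test has been run.
print(len(configurations) * 10 / 60)  # 40.0 hours
</source>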
A common practice of software testing is that it is performed by an independent group of testers after the functionality is developed but before it is shipped to the customer. This practice often results in the testing phase being used as a project buffer to compensate for project delays, thereby compromising the time devoted to testing. Another practice is to start software testing at the same moment the project starts and to continue it as an ongoing process until the project finishes.
This is highly problematic in terms of controlling changes to software: if faults or failures are found part way into the project, the decision to correct the software needs to be taken on the basis of whether or not these defects will delay the remainder of the project. If the software does need correction, this needs to be rigorously controlled using a version numbering system; software testers need to know precisely which version they are testing, and will need to re-test the part of the software in which the defects were found. The correct start point needs to be identified for retesting. There are added risks in that new defects may be introduced as part of the corrections, and the original requirement can also change part way through, in which case previous successful tests may no longer meet the requirement and will need to be re-specified and redone (part of [[regression testing]]). Clearly, the possibilities for projects being delayed and running over budget are significant.
Another common practice is for test suites to be developed during technical support escalation procedures. Such tests are then maintained in [[regression test]]ing suites to ensure that future updates to the software don't repeat any of the known mistakes.
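As a sketch of how such an escalation-driven test might look (the function, the defect, and the scenario are all hypothetical): the customer's report is distilled into a small reproducing test, which then lives permanently in the regression suite so the known mistake cannot silently return.

<source lang="python">
import unittest

def format_invoice_total(amount):
    """Hypothetical production code. The reported defect: credit
    notes (negative totals) were rendered without their minus sign."""
    sign = "-" if amount < 0 else ""
    return f"{sign}${abs(amount):.2f}"

class TestEscalationRegressions(unittest.TestCase):
    def test_credit_note_keeps_its_sign(self):
        # Distilled from a (hypothetical) support escalation; kept in
        # the regression suite so the known mistake cannot come back.
        self.assertEqual(format_invoice_total(-12.5), "-$12.50")

if __name__ == "__main__":
    unittest.main()
</source>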
It is commonly believed that the earlier a defect is found the cheaper it is to fix it. This is reasonable based on the risk of any given defect contributing to or being confused with further defects later in the system or process. In particular, if a defect erroneously changes the state of the data on which the software is operating, that data is no longer reliable and therefore any testing after that point cannot be relied on even if there are no further actual software defects.
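A short sketch of why a state-corrupting defect poisons everything downstream, using a hypothetical in-memory ledger: once the buggy operation has silently damaged the data, even a defect-free check performed afterwards delivers a misleading verdict.

<source lang="python">
balances = {"alice": 100, "bob": 50}

def transfer(src, dst, amount):
    """Contains a state-corrupting fault: the debit uses the wrong
    sign, so money is created instead of moved."""
    balances[src] += amount  # fault: should be "-= amount"
    balances[dst] += amount

transfer("alice", "bob", 30)

# This later check is itself defect-free: total money should be
# conserved (100 + 50 = 150). It fails anyway, because the data it
# reads was corrupted earlier -- and every subsequent test that
# touches `balances` is equally untrustworthy.
assert sum(balances.values()) == 150, balances
</source>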