Thursday, June 26, 2008

Software testing Questions

Test Automation job interview questions:

1. What automated testing tools are you familiar with?
2. How did you use automated testing tools in your job?
3. Describe a problem you had with an automated testing tool.
4. How do you plan test automation?
5. Can test automation improve test effectiveness?
6. What is data-driven automation?
7. What are the main attributes of test automation?
8. Does automation replace manual testing?
9. How will you choose a tool for test automation?
10. How will you evaluate a tool for test automation?
11. What are the main benefits of test automation?
12. What could go wrong with test automation?
13. How would you describe testing activities?
14. What testing activities might you want to automate?
15. Describe common problems of test automation.
16. What types of scripting techniques for test automation do you know?
17. What are the principles of good testing scripts for automation?
18. What tools are available to support testing during the software development life cycle?
19. Can the activities of test case design be automated?
20. What are the limitations of automating software testing?
21. What skills are needed to be a good software test automator?
22. How do you find out whether a tool works well with your existing system?
23. What testing activities might you want to automate in a project?
24. What are some common misconceptions when implementing an automated testing tool for the first time?
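Question 6 above ("What is data-driven automation?") is easiest to answer with a sketch: the test logic is written once, and the inputs and expected outputs live in a separate data table, so a new test case is a new data row rather than a new script. A minimal illustration in Python, where the function under test (`parse_price`) and the data table are hypothetical examples rather than part of any particular tool:

```python
# Data-driven test sketch: one test procedure, many data rows.
# The function under test and the data table are illustrative only.

def parse_price(text):
    """Convert a price string like '$1,234.50' to a float."""
    return float(text.replace("$", "").replace(",", ""))

# Test data is kept separate from test logic; in a real framework this
# table would typically live in a CSV file or spreadsheet.
TEST_DATA = [
    ("$0.99", 0.99),
    ("$1,234.50", 1234.50),
    ("100", 100.0),
]

def run_data_driven_tests():
    """Run every data row through the same test procedure."""
    failures = []
    for raw, expected in TEST_DATA:
        actual = parse_price(raw)
        if actual != expected:
            failures.append((raw, expected, actual))
    return failures

if __name__ == "__main__":
    print(run_data_driven_tests())  # an empty list means every row passed
```

The point interviewers usually look for is the separation of concerns: testers or analysts can extend coverage by editing the data table without touching the automation code.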

Basic SQA testing questions
1. What is the difference between project and product testing? What differences have you observed while testing client/server applications versus web server applications?
2. What are the differences between interface and integration testing? Are system specification and functional specification the same? What are the differences between system and functional testing?
3. What is multi-unit testing?
4. What are the different types, methodologies, approaches, and methods in software testing?
5. What is the difference between test techniques and test methodology?

Interview questions on WinRunner

1. How have you used WinRunner in your projects? - I have been using WinRunner to create automated scripts for GUI, functional, and regression testing of the AUT (application under test).
2. Explain the WinRunner testing process. - The WinRunner testing process involves six main stages:
o Create a GUI map file so that WinRunner can recognize the GUI objects in the application being tested.
o Create test scripts by recording, programming, or a combination of both. While recording tests, insert checkpoints where you want to check the response of the application being tested.

WinRunner interview questions
1. Describe the process of planning a test in WinRunner.
2. How do you record a new script? Can you e-mail a WinRunner script? How can a person run a previously
saved WinRunner script?
3. How can you synchronize WinRunner scripts?
4. What is a GUI map? How does it work?
5. How can you verify application behavior?
Other
1. What programming language are you using?
2. What C++ libraries are you proficient with?
3. Which algorithm do you like the most? Why?
4. How do you debug SSH?
5. What is the QA process?
6. How do you train another QA engineer?
7. What bug tracking tools have you used? Have you used any free tools?
8. How do you start your QA if there are no system requirements?
9. Have you used MSVC? What do you think of it?
10. There are 3 lights in one room and 3 switches in another room, one for each light. You may enter the light room only once. How can you find out which switch corresponds to which light?
11. What is your weakness?
12. Why do you think you are suited for this job?
13. If there is a day, when you find yourself not fitting in our team, what will you do?
14. What makes you think you are qualified for this job?
15. Do you like music? Which composers are your favourite?
16. What kind of PC games do you like most? Why?
17. Are you familiar with collaboration tools? Which communication method do you prefer: talk, email, or chat?
18. When will you be available to start work?
19. What security tools have you used?
20. Tell me about yourself.
21. Tell me about your experience with this type of work
22. What do you like and dislike about our company?
23. Why do you want to work for us?
24. Why should we hire you? What can you do for us? What can you do that others cannot?
25. What is the job’s most attractive and least attractive factor?
26. What do you look for in a job?
27. Please give me your definition of a software test engineer.
28. How long would it take you to make a meaningful contribution to our firm?
29. How long would you stay with us?
30. Are you thinking of going back to school or college?
31. What kind of programs/machines or equipment have you worked with?
32. You may be overqualified for the position we have to offer.
33. Give me an example of a project you handled from start to finish.
34. What was your last employer’s opinion of you?
35. Can you work under pressure, deadlines, etc.?
36. Do you have any questions?
37. What is it you liked and disliked about your last job?

Monday, December 03, 2007

Looking for testing jobs

Hello, Testers!

We have started a new service for people who are searching for jobs!

If you are looking for a testing job, send us an email with your phone number (optional) and we will help you find a better future.

Email us at: comments@allaboutads.in; website: www.allaboutads.in

Friday, June 22, 2007

Today's take

Testers,

Here you will get to know everything you wanted to know about Software Testing. Most importantly, you will get to know this information from the practitioners, from the people working in the field, from people like you and me.

Today's testing picks are given below:

1. http://testinggeek.com
2. http://testerqa.com

Visit again for the latest information about testing and related articles.
If you want anything related to testing, please post a comment so that we can help you.

Thursday, June 14, 2007

Test -- the rest!


A TESTER'S OATH
----------------------

I shall develop a negative attitude.
I shall strive to look for issues in everything I see, hear, and touch.

I shall look at the world through my customer's eyes.
I shall strive to look at anything I do from my customer's perspective; I shall strive to empathize with my customer.

I shall develop a curious and questioning mind.
I shall assume nothing, nor take anything for granted; I will constantly question, reason, and understand.
*************************************************************************************

A place where you can find daily updates on testing - openings, job details, workshop details, everything about testing and its importance


'''Software testing''' is the process used to measure the [[software quality|quality]] of developed [[computer software]]. Usually, quality is constrained to such topics as [[correctness]], completeness, and [[computer security audit|security]], but can also include more technical requirements as described under the [[International Organization for Standardization|ISO]] standard [[ISO 9126]], such as capability, [[reliability]], [[algorithmic efficiency|efficiency]], [[Porting|portability]], [[maintainability]], compatibility, and [[usability]]. Testing is a process of technical investigation, performed on behalf of stakeholders, that is intended to reveal quality-related information about the product with respect to the context in which it is intended to operate. This includes, but is not limited to, the process of executing a program or application with the intent of finding errors. Quality is not an absolute; it is value to some person. With that in mind, testing can never completely establish the correctness of arbitrary computer software; testing furnishes a '''criticism''' or '''comparison''' that compares the state and behaviour of the product against a specification. An important point is that ''software testing'' should be distinguished from the separate discipline of ''[[Software Quality Assurance]]'' ([[SQA]]), which encompasses all business process areas, not just testing.

There are many approaches to software testing, but effective testing of complex products is essentially a process of investigation, not merely a matter of creating and following routine procedures. One definition of testing is "the process of questioning a product in order to evaluate it", where the "questions" are operations the tester attempts to execute with the product, and the product answers with its behavior in reaction to the tester's probing. Although most of the intellectual processes of testing are nearly identical to those of review or inspection, the word testing also connotes the dynamic analysis of the product: putting the product through its paces. Reviews, walkthroughs, and inspections are therefore sometimes referred to as "static testing", whereas actually running the program with a given set of test cases in a given development stage is referred to as "dynamic testing"; this terminology emphasises that formal review processes form part of the overall testing scope. A good test is sometimes described as one which reveals an error; more recent thinking suggests that a good test is one which reveals information of interest to someone who matters within the project community. Since a test measures quality, it may well find no error at all, because the software works the way it was specified.

== Introduction ==
In general, [[software engineering|software engineers]] distinguish software [[Fault (technology)|fault]]s from software [[failure]]s. In case of a failure, the software does not do what the user expects. A fault is a programming error that may or may not actually manifest as a failure. A fault can also be described as an error in the [[correctness]] of the [[semantic]]s of a computer program. A fault will become a failure if the exact computation conditions are met, one of them being that the faulty portion of the computer [[software]] executes on the [[Central processing unit|CPU]]. A fault can also turn into a failure when the software is ported to a different hardware platform or a different compiler, or when the software is extended.
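The fault/failure distinction can be made concrete with a toy example (the code below is hypothetical, not from the article): the fault is a wrong boundary comparison, but it only becomes a visible failure when an input drives execution through the faulty comparison in a way that contradicts the specification.

```python
# A fault that only manifests as a failure for some inputs.
# Intended behaviour (the "specification"): return True for ages 0..120 inclusive.

def is_valid_age(age):
    # FAULT: '>' should be '>='; the boundary value 120 is handled wrongly.
    return 0 <= age and 120 > age

# For most inputs the fault produces the specified result, so no failure occurs:
assert is_valid_age(30) is True
assert is_valid_age(-1) is False

# Only the boundary input turns the latent fault into an observable failure:
print(is_valid_age(120))  # False, but the specification says True
```

This is why boundary-value inputs matter in test design: the fault was present for every call, but only one input made it observable as a failure.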

Software testing may be viewed as a sub-field of [[Software Quality Assurance]], but it typically exists independently (and there may be no SQA function in some companies). In SQA, software process specialists and auditors take a broader view of software and its development. They examine and change the software engineering process itself to reduce the number of faults that end up in the code or to deliver faster.

Regardless of the methods used or level of formality involved, the desired result of testing is a level of [[confidence]] in the software so that the organization is confident that the software has an acceptable defect rate. What constitutes an acceptable defect rate depends on the nature of the software. An arcade video game designed to simulate flying an airplane would presumably have a much higher tolerance for defects than software used to control an actual airliner.

A problem with software testing is that the number of [[Software bug|defect]]s in a software product can be very large, and the number of [[computer configuration|configuration]]s of the product larger still. Bugs that occur infrequently are difficult to find in testing. A [[rule of thumb]] is that a system that is expected to function without faults for a certain length of time must have already been tested for at least that length of time. This has severe consequences for projects to write long-lived reliable software, since it is not usually commercially viable to test over the proposed length of time unless this is a relatively short period. A few days or a week would normally be acceptable, but any longer period would usually have to be simulated according to carefully prescribed start and end conditions.

A common practice of software testing is that it is performed by an independent group of testers after the functionality is developed but before it is shipped to the customer. This practice often results in the testing phase being used as a project buffer to compensate for project delays, thereby compromising the time devoted to testing. Another practice is to start software testing at the same moment the project starts and to continue it until the project finishes.

This is highly problematic in terms of controlling changes to software: if faults or failures are found part way into the project, the decision to correct the software needs to be taken on the basis of whether or not these defects will delay the remainder of the project. If the software does need correction, this needs to be rigorously controlled using a version numbering system, and software testers need to be accurate in knowing that they are testing the correct version, and will need to re-test the part of the software wherein the defects were found. The correct start point needs to be identified for retesting. There are added risks in that new defects may be introduced as part of the corrections, and the original requirement can also change part way through, in which instance previous successful tests may no longer meet the requirement and will need to be re-specified and redone (part of [[regression testing]]). Clearly the possibilities for projects being delayed and running over budget are significant.

Another common practice is for test suites to be developed during technical support escalation procedures. Such tests are then maintained in [[regression test]]ing suites to ensure that future updates to the software don't repeat any of the known mistakes.
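The practice described above can be sketched in code: each mistake fixed during a support escalation is distilled into a permanent check, so future changes cannot silently reintroduce it. Everything here is hypothetical for illustration, including the function and the ticket identifiers.

```python
# Regression suite sketch: tests distilled from past support escalations.
# The function under test and the ticket numbers are illustrative only.

def normalize_username(name):
    # Fix for (hypothetical) ticket-1042: leading/trailing whitespace broke logins.
    # Fix for (hypothetical) ticket-1107: mixed case created duplicate accounts.
    return name.strip().lower()

# Each known past mistake stays in the suite permanently: input that once
# failed, paired with the output the fix is supposed to produce.
REGRESSION_CASES = {
    "ticket-1042": ("  alice ", "alice"),
    "ticket-1107": ("Bob", "bob"),
}

def run_regression_suite():
    """Return the ticket ids whose old bug has reappeared."""
    return [ticket for ticket, (raw, expected) in REGRESSION_CASES.items()
            if normalize_username(raw) != expected]

if __name__ == "__main__":
    print(run_regression_suite())  # an empty list means no known bug has returned
```

Because the suite grows with every escalation, it encodes the product's real failure history rather than a tester's guesses about where bugs might appear.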

It is commonly believed that the earlier a defect is found the cheaper it is to fix it. This is reasonable based on the risk of any given defect contributing to or being confused with further defects later in the system or process. In particular, if a defect erroneously changes the state of the data on which the software is operating, that data is no longer reliable and therefore any testing after that point cannot be relied on even if there are no further actual software defects.