Manual Testing

Manual Testing: An In-Depth Overview

In the world of software development, ensuring the quality of software products is of paramount importance. Software bugs, errors, and inconsistencies can lead to costly mistakes, poor user experience, and reputational damage. This is where software testing comes into play, serving as a critical phase in the Software Development Life Cycle (SDLC). Among the various testing techniques available, Manual Testing remains a fundamental approach, even in the era of automation.

What is Manual Testing?

Manual Testing is the process of manually executing test cases without the use of any automated tools. Testers perform the role of an end-user by interacting with the application to identify any defects, inconsistencies, or unexpected behaviors. This testing technique is performed throughout the software development lifecycle, especially during the early stages, to ensure the application meets the specified requirements and works as intended.

Unlike automated testing, where scripts and tools are used to perform repetitive tasks, manual testing relies entirely on the skill, experience, and observation of the tester. It is particularly useful for applications with dynamic requirements, frequently changing interfaces, or where human observation is crucial.

Objectives of Manual Testing

The primary goals of manual testing include:

  1. Identifying Defects: The foremost objective is to uncover errors, bugs, or defects in the software application that could affect its functionality, usability, or performance.

  2. Ensuring Quality: Manual testing ensures the software meets the quality standards defined by the organization or project.

  3. Validating Functionality: It helps verify that the application behaves as expected according to the requirements specification.

  4. Enhancing User Experience: By simulating end-user scenarios, manual testing helps ensure that the application is user-friendly and intuitive.

  5. Providing Feedback: Testers provide critical feedback to developers, enabling them to fix issues and improve the software’s overall quality.

Types of Manual Testing

Manual testing encompasses several types, each serving a specific purpose. Some common types include:

  1. Black Box Testing: Testers examine the functionality of an application without knowing its internal code structure. It focuses on input and output validation.

  2. White Box Testing: Also known as structural testing, it involves understanding the internal logic and code structure of the application to design test cases.

  3. Functional Testing: Ensures that the application’s features function as per the requirements. Testers validate individual functions and workflows.

  4. Non-Functional Testing: This tests aspects such as performance, usability, reliability, and security rather than specific functionalities.

  5. Regression Testing: Conducted to ensure that recent code changes have not adversely affected existing functionalities.

  6. Sanity and Smoke Testing: Quick checks on a build; smoke testing verifies that a new build is stable enough for further testing, while sanity testing confirms that specific functionality works after minor changes.

  7. Acceptance Testing: Performed to determine whether the software meets business requirements and is ready for deployment.

  8. Exploratory Testing: Testers actively explore the application without predefined test cases, discovering potential issues through intuition and experience.

Manual Testing Process

Manual testing is a structured process that ensures systematic evaluation of the software. The typical steps include:

  1. Requirement Analysis: Testers study the project requirements and specifications to understand the expected behavior of the software.

  2. Test Planning: This phase involves defining the scope, objectives, resources, timelines, and strategies for testing.

  3. Test Case Design: Testers create detailed test cases, including preconditions, inputs, expected results, and postconditions.

  4. Test Environment Setup: The necessary hardware, software, network configurations, and databases are prepared to replicate the real user environment.

  5. Test Execution: Testers manually execute the test cases, comparing actual outcomes with expected results.

  6. Defect Reporting: Any discrepancies or defects identified during testing are logged in a defect tracking tool for developers to fix.

  7. Retesting and Regression Testing: After defects are fixed, testers re-execute the test cases and perform regression testing to ensure no new issues are introduced.

  8. Test Closure: Once testing is complete, a summary report is prepared, including metrics, defects, lessons learned, and recommendations.

Advantages of Manual Testing

Manual testing has several advantages that make it indispensable in software quality assurance:

  1. Human Observation: Testers can detect subtle issues, such as UI inconsistencies or user experience problems, which automated tools might overlook.

  2. Flexibility: Testers can adapt to changes in requirements and explore new functionalities without waiting for automation scripts to be updated.

  3. Early Bug Detection: Manual testing can start early in the SDLC, helping identify defects in the initial stages of development.

  4. Cost-Effective for Small Projects: For small-scale applications with limited functionality, manual testing is often more practical than investing in automation.

  5. Exploratory Testing: Allows testers to discover unexpected defects by exploring the software creatively.

Challenges of Manual Testing

Despite its advantages, manual testing also has limitations:

  1. Time-Consuming: Manual execution of test cases, especially for large and complex applications, can be slow and labor-intensive.

  2. Human Error: Testers may overlook defects due to fatigue or lack of attention.

  3. Repetitive Nature: Repeated execution of test cases for regression testing can become monotonous.

  4. Scalability Issues: Manual testing may not be feasible for large-scale projects with frequent releases and extensive test coverage.

  5. Limited Coverage: Some scenarios, especially performance or load testing, are difficult to achieve manually.

Manual Testing Tools

While manual testing itself does not involve automation, several tools can assist testers in managing test cases and tracking defects:

  1. Test Management Tools: Tools like TestRail, Quality Center (ALM), and Zephyr help organize test cases and plan test execution.

  2. Bug Tracking Tools: Tools like JIRA, Bugzilla, and Mantis allow testers to log, track, and manage defects efficiently.

  3. Documentation Tools: Tools such as Confluence help in creating test documentation and sharing knowledge with the team.

Best Practices in Manual Testing

To maximize the effectiveness of manual testing, the following best practices are recommended:

  1. Understand Requirements Thoroughly: Testers should have a clear understanding of business and functional requirements before designing test cases.

  2. Prioritize Test Cases: Focus on high-risk areas and critical functionalities first.

  3. Maintain Test Documentation: Keep test cases, scenarios, and defect reports well-documented for future reference.

  4. Regular Communication: Collaborate closely with developers, business analysts, and stakeholders.

  5. Continuous Learning: Stay updated with industry trends, new testing techniques, and tools.

Manual Testing vs Automated Testing

While automated testing uses scripts and tools to perform repetitive tasks efficiently, manual testing remains essential for areas where human judgment is critical. Automation is ideal for regression, performance, and repetitive tests, whereas manual testing excels in exploratory testing, UI/UX validation, and scenarios with frequent changes. Often, a hybrid approach combining manual and automated testing provides the best results.

Fresher Interview Questions

1. What is Software Testing?

Answer:
Software Testing is the process of evaluating a software application to identify defects, ensure it meets the requirements, and verify that it works as intended. The goal is to deliver a quality product.

Key Points:

  • Testing can be manual or automated.

  • Ensures correctness, completeness, and reliability of software.


2. What is Manual Testing?

Answer:
Manual Testing is the process of manually executing test cases without using any automated tools. Testers follow predefined steps to check if the software behaves as expected.

Advantages:

  • No need for programming knowledge.

  • Helps find UI/UX issues.

  • Flexible for exploratory testing.

Disadvantages:

  • Time-consuming for large projects.

  • Prone to human error.


3. What are the different types of Manual Testing?

Answer:

  1. Functional Testing – Verifies software against functional requirements.

    • Example: Checking login functionality.

  2. Non-Functional Testing – Focuses on performance, usability, and security.

    • Example: Testing load time of a webpage.

  3. Regression Testing – Ensures new changes don’t break existing functionality.

  4. Smoke Testing – Basic tests to check if the build is stable enough for further testing.

  5. Sanity Testing – Checks specific functionality after minor changes.

  6. Integration Testing – Tests interactions between modules.

  7. System Testing – Tests the complete system as a whole.

  8. User Acceptance Testing (UAT) – Performed by the end-users to verify the software meets their needs.


4. What is a Test Case?

Answer:
A test case is a documented set of conditions, inputs, and expected results used to verify if a software application works correctly.

Components of a Test Case:

  • Test Case ID

  • Test Description

  • Pre-conditions

  • Test Steps

  • Expected Result

  • Actual Result

  • Status (Pass/Fail)


5. What is a Test Plan?

Answer:
A Test Plan is a formal document describing the strategy, scope, resources, and schedule of testing activities. It defines the objectives and deliverables of the testing process.

Key Components:

  • Test objectives

  • Scope of testing

  • Test resources

  • Test environment

  • Test schedule

  • Risks and mitigation


6. What is the difference between Verification and Validation?

Verification | Validation
Checks if the product is built correctly | Checks if the right product is built
Static process (reviews, inspections) | Dynamic process (executing the software)
Done during development | Done after development

7. What is the difference between a Bug, Defect, and Error?

  • Error: Mistake made by a developer during coding.

  • Defect/Bug: Flaw in the software that causes it to fail.

  • Failure: When the software does not perform as expected due to a bug.


8. What is the Software Testing Life Cycle (STLC)?

Answer:
STLC is the series of steps followed to ensure the quality of software.

Phases:

  1. Requirement Analysis – Understand what needs to be tested.

  2. Test Planning – Define testing strategy and resources.

  3. Test Case Development – Write detailed test cases.

  4. Test Environment Setup – Prepare hardware, software, and network.

  5. Test Execution – Run the tests and record results.

  6. Defect Reporting – Log any defects found.

  7. Test Closure – Summarize testing and lessons learned.


9. What is a Bug Life Cycle?

Answer:
The Bug Life Cycle is the journey of a defect from identification to closure.

Stages:

  1. New – Bug is reported.

  2. Assigned – Developer is assigned.

  3. Open – Developer starts analyzing.

  4. Fixed – Developer fixes it.

  5. Retest – Tester verifies the fix.

  6. Closed – Bug is resolved.

  7. Reopen – If not fixed properly.


10. What is the difference between Severity and Priority?

Severity | Priority
Indicates the impact of the bug on the system | Indicates the urgency to fix the bug
Critical, Major, Minor | High, Medium, Low
Example: Application crashes on login | Example: Typo on homepage

11. What is Exploratory Testing?

Answer:
Exploratory Testing is an unscripted testing technique where testers explore the application without predefined test cases to find defects. It relies on experience and intuition.


12. What is Ad-hoc Testing?

Answer:
Ad-hoc Testing is informal testing without planning or documentation. It is done to quickly find defects and is often used when time is limited.


13. What is Black Box, White Box, and Gray Box Testing?

  • Black Box Testing: Tester tests functionality without knowing the internal code.

  • White Box Testing: Tester checks internal code, logic, and structure.

  • Gray Box Testing: Tester has partial knowledge of the code and tests both functionality and logic.


14. What is Regression Testing and When is it done?

Answer:
Regression Testing is performed to ensure that new code changes do not affect existing functionality.

When:

  • After bug fixes

  • After new feature implementation

  • After changes in configuration


15. What is the difference between Smoke Testing and Sanity Testing?

Smoke Testing | Sanity Testing
Shallow and wide testing | Narrow and deep testing
Performed on the initial build | Performed after minor changes
Checks the stability of the build | Checks functionality after changes

16. What is Test Data?

Answer:
Test Data is the set of input values used to execute test cases. It can be valid or invalid data to check application behavior.


17. What is the difference between Static Testing and Dynamic Testing?

Static Testing | Dynamic Testing
Done without executing code (reviews, walkthroughs) | Done by executing the software
Finds defects early | Finds functional defects
Faster and cost-effective | Time-consuming

18. What is Boundary Value Analysis (BVA) and Equivalence Partitioning?

  • Boundary Value Analysis: Tests values at the edges of input ranges.

    • Example: If input range is 1–10, test 0, 1, 10, 11.

  • Equivalence Partitioning: Divides input data into valid and invalid partitions to reduce test cases.

    • Example: For ages 18–60, partitions: <18 (invalid), 18–60 (valid), >60 (invalid).
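
These two techniques translate directly into a small test-data table. Below is a minimal sketch in Python with pytest; the is_valid_age function is a hypothetical stand-in for whatever validation the application under test performs.

```python
# Sketch only: is_valid_age is an illustrative validation function, not from a real application.
import pytest

def is_valid_age(age: int) -> bool:
    return 18 <= age <= 60

# Equivalence partitions: <18 (invalid), 18-60 (valid), >60 (invalid)
# Boundary values: 17, 18, 60, 61
@pytest.mark.parametrize("age, expected", [
    (17, False),  # just below the lower boundary (invalid partition)
    (18, True),   # lower boundary (valid partition)
    (30, True),   # representative value inside the valid partition
    (60, True),   # upper boundary (valid partition)
    (61, False),  # just above the upper boundary (invalid partition)
])
def test_age_validation(age, expected):
    assert is_valid_age(age) == expected
```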


19. What is the difference between Alpha and Beta Testing?

Alpha Testing | Beta Testing
Done by internal employees | Done by real users
Performed before release | Performed after alpha testing, as a limited release to real users
Focuses on finding bugs | Focuses on usability and feedback

20. Common Manual Testing Tools for Freshers

Although manual testing is performed largely without automation, some tools help manage it efficiently:

  • JIRA: Bug tracking and test management.

  • TestLink: Test case management.

  • Bugzilla: Bug tracking.

  • Quality Center/ALM: Test planning and execution.


21. How do you test a login page manually?

Answer:
Testing a login page involves checking both functional and non-functional aspects. Steps:

  1. Valid Input Testing:

    • Enter valid username and password → should log in successfully.

  2. Invalid Input Testing:

    • Wrong username → show error.

    • Wrong password → show error.

    • Empty fields → show validation message.

  3. Boundary Testing:

    • Max/min length of username/password fields.

  4. Special Characters:

    • Check if special characters are allowed/disallowed.

  5. Security Testing:

    • SQL injection, XSS, or password masking.

  6. UI Testing:

    • Check alignment, font, buttons, and links.
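
The same valid, invalid, and empty-field cases can be written down as a quick sketch. The example below uses Python's requests library against a hypothetical /login endpoint; the URL, field names, and expected status codes are assumptions made purely for illustration.

```python
# Illustrative only: the URL, field names, and expected outcomes are assumptions.
import requests

LOGIN_URL = "https://example.com/login"  # hypothetical login endpoint

def submit_login(username: str, password: str) -> int:
    """Submit the login form and return the HTTP status code."""
    response = requests.post(LOGIN_URL, data={"username": username, "password": password})
    return response.status_code

# The same cases a manual tester walks through, expressed as checks:
print(submit_login("valid_user", "Correct@123"))     # expect success (e.g., 200 after redirect)
print(submit_login("valid_user", "wrong_password"))  # expect rejection (e.g., 401)
print(submit_login("", ""))                          # expect a validation error (e.g., 400)
```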


22. What is the difference between Test Scenario and Test Case?

Test Scenario | Test Case
High-level functionality to be tested | Step-by-step instructions to test a functionality
Derived from requirements | Derived from test scenarios
Example: "Test login functionality" | Example: "Enter valid username/password → Click login → Verify dashboard"

23. What is the difference between Priority and Severity with Example?

Answer:

  • Severity: Impact of the bug on the system.

  • Priority: Urgency to fix the bug.

Example:

  • Bug: Application crashes on login → Severity: Critical, Priority: High

  • Bug: Typo on homepage → Severity: Minor, Priority: Medium


24. What are the common challenges in Manual Testing?

Answer:

  • Time-consuming for large projects.

  • Difficult to test repetitive tasks.

  • Human error and missing defects.

  • Requires good understanding of requirements.

  • Hard to maintain large test cases manually.


25. What is the difference between Functional and Non-Functional Testing?

Functional Testing | Non-Functional Testing
Checks what the system does | Checks how the system performs
Example: Login, forms, CRUD operations | Example: Performance, usability, security, compatibility

26. Explain the difference between System Testing and Integration Testing.

System Testing | Integration Testing
Tests the complete system | Tests combined modules
Performed after integration testing | Performed after unit/module testing
Checks end-to-end functionality | Checks data flow and interaction between modules

27. What is the difference between Smoke Testing and Sanity Testing with Example?

Smoke Testing:

  • Performed on a new build to check basic functionality.

  • Example: After build deployment, verify login, homepage, and navigation work.

Sanity Testing:

  • Performed after minor changes to ensure specific functionality works.

  • Example: If a bug in login is fixed, test only login and related pages.


28. What is a Test Summary Report?

Answer:
A Test Summary Report is a document that provides overall testing results, coverage, and quality metrics after the testing phase.

Contents:

  • Total test cases executed

  • Passed/Failed test cases

  • Defects summary (open/closed/reopened)

  • Testing coverage

  • Risks and observations


29. What is the difference between Verification and Validation with Example?

Verification | Validation
Checks if the product is built correctly | Checks if the right product is built
Static process (reviews, walkthroughs) | Dynamic process (executing the software)
Example: Review of the SRS document | Example: Running test cases on the application

30. What is a Defect Life Cycle?

Answer:
A defect moves through multiple stages from discovery to closure.

Stages:

  1. New: Bug reported.

  2. Assigned: Developer assigned.

  3. Open: Developer starts analyzing.

  4. Fixed: Bug is fixed.

  5. Retest: Tester verifies the fix.

  6. Closed: Bug resolved.

  7. Reopened: If the bug persists after retest.


31. What is the difference between Alpha and Beta Testing?

Alpha Testing | Beta Testing
Conducted by the internal team | Conducted by end users
Done in the development environment | Done in a real-world environment
Focus on finding bugs | Focus on usability and feedback

32. What is the difference between Black Box, White Box, and Gray Box Testing?

Type | Definition | Who Performs
Black Box | Tests functionality without knowing the internal code | Manual tester
White Box | Tests internal code logic | Developer / QA
Gray Box | Partial knowledge of the code; tests functionality and internal logic | QA with coding knowledge

33. What is Boundary Value Analysis (BVA) with Example?

Answer:
Boundary Value Analysis is a technique where test cases are designed to include boundary values.

Example:

  • Input range: 1–100

  • Test values: 0, 1, 100, 101


34. What is Equivalence Partitioning with Example?

Answer:
Equivalence Partitioning divides input data into valid and invalid partitions to reduce the number of test cases.

Example:

  • Age input field: 18–60

  • Partitions: <18 (invalid), 18–60 (valid), >60 (invalid)


35. What are some common Manual Testing Tools?

Even in manual testing, tools help manage and track testing efficiently:

  • JIRA: Bug tracking and reporting

  • Bugzilla: Bug tracking

  • TestLink: Test case management

  • Quality Center/ALM: Test planning and execution


36. Scenario-Based Question: How will you test a calculator application?

Answer:

  1. Functional Testing:

    • Addition, subtraction, multiplication, division

    • Negative numbers, decimal numbers, zero

  2. Boundary Testing:

    • Max/min input values

  3. Error Handling:

    • Divide by zero → show error

    • Invalid characters → show error

  4. UI Testing:

    • Check buttons, layout, font, and alignment

  5. Usability Testing:

    • Ease of use for end users
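
For the functional and error-handling cases above, a small sketch like the one below (Python with pytest; the calc_divide helper is hypothetical, not a real calculator API) shows how divide-by-zero and basic arithmetic would be checked:

```python
# Minimal sketch of the functional and error-handling checks; calc_divide is illustrative.
import pytest

def calc_divide(a: float, b: float) -> float:
    if b == 0:
        raise ZeroDivisionError("Cannot divide by zero")
    return a / b

def test_basic_division():
    assert calc_divide(10, 2) == 5

def test_negative_and_decimal_numbers():
    assert calc_divide(-7.5, 2.5) == -3.0

def test_divide_by_zero_shows_error():
    with pytest.raises(ZeroDivisionError):
        calc_divide(5, 0)
```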


37. How do you prioritize test cases?

Answer:

  • High Priority: Critical business functionality, core features

  • Medium Priority: Features not critical but used frequently

  • Low Priority: Minor features, rarely used modules


38. How do you handle incomplete or changing requirements?

Answer:

  • Communicate with stakeholders to clarify requirements.

  • Maintain flexible test cases to accommodate changes.

  • Perform exploratory testing when requirements are unclear.

  • Update test documentation regularly.


39. What is Ad-hoc Testing?

Answer:
Ad-hoc Testing is informal testing without planning or documentation. It is done to quickly find defects and is often used when time is limited.


40. What is Exploratory Testing?

Answer:
Exploratory Testing is unscripted and simultaneous learning, test design, and execution. Testers use intuition and experience to explore the application.


41. What is the difference between Static Testing and Dynamic Testing?

Static Testing | Dynamic Testing
Done without executing the code | Done by executing the code
Examples: Reviews, walkthroughs, inspections | Examples: Functional testing, performance testing
Early defect detection | Detects runtime errors
Cost-effective and faster | Time-consuming

42. What is a Test Environment?

Answer:
A Test Environment is a setup consisting of hardware, software, network configurations, and tools required to execute test cases.

Example:

  • Web server, database server, client machines, browsers, OS versions, and test data.


43. What is the difference between Load Testing and Stress Testing?

Load Testing | Stress Testing
Checks performance under the expected user load | Checks behavior under extreme load beyond capacity
Goal: Ensure the system can handle normal load | Goal: Identify the breaking point
Example: 1,000 users logging in simultaneously | Example: 5,000 users logging in simultaneously

44. What is the difference between Severity and Priority with examples?

Severity | Priority
Impact of the bug on the system | Urgency to fix the bug
Critical, Major, Minor | High, Medium, Low
Example: App crashes on login → Critical | Example: Typo on homepage → Medium

45. What are the different levels of Manual Testing?

  1. Unit Testing: Tests individual modules or functions (usually by developers).

  2. Integration Testing: Tests combined modules for correct interaction.

  3. System Testing: Tests complete application end-to-end.

  4. User Acceptance Testing (UAT): Conducted by end users to validate requirements.


46. What is End-to-End Testing?

Answer:
End-to-End Testing verifies the flow of an application from start to finish. It checks system integration, database, network, and interfaces to ensure everything works together.

Example:

  • In an e-commerce site: search → add to cart → payment → order confirmation → email notification.


47. How do you perform Negative Testing?

Answer:
Negative Testing ensures the system behaves correctly with invalid inputs or unexpected actions.

Example:

  • Enter letters in a numeric field → error message.

  • Enter invalid email format → validation message.

  • Leave mandatory fields empty → error message.
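
As a sketch, the invalid-email case can be captured with a simple validator and a set of deliberately bad inputs. The regex and the validate_email function are illustrative stand-ins, not taken from any real application.

```python
# Illustrative negative tests for an email field; validate_email is a stand-in function.
import re

EMAIL_PATTERN = re.compile(r"^[\w.+-]+@[\w-]+\.[\w.-]+$")

def validate_email(value: str) -> bool:
    return bool(EMAIL_PATTERN.match(value))

invalid_inputs = ["", "plainaddress", "user@", "@domain.com", "user@domain"]
for value in invalid_inputs:
    # Each invalid input should be rejected, i.e. trigger a validation message in the UI.
    assert not validate_email(value), f"Expected rejection for: {value!r}"

assert validate_email("user@example.com")  # a valid address should pass
```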


48. What is the difference between Retesting and Regression Testing?

Retesting | Regression Testing
Verify a specific defect after fixing | Ensure existing functionality is not broken
Done with the same test case that failed earlier | Done with a set of related test cases
Focused | Broad coverage

49. What is the difference between Test Strategy and Test Plan?

Test Strategy | Test Plan
High-level document defining the approach and goals | Detailed document defining scope, resources, schedule, and activities
Static and generic | Specific and project-dependent
Example: “We will use manual testing for functional testing” | Example: “We will execute 500 test cases in 2 weeks using Chrome and Firefox browsers”

50. What is Configuration Testing?

Answer:
Configuration Testing checks if the application works correctly in different environments, such as operating systems, browsers, devices, and network settings.

Example:

  • Test a web app on Windows, macOS, Android, iOS, and different browsers like Chrome, Firefox, Safari.


51. How do you prioritize test cases in Manual Testing?

Answer:
Test cases are prioritized based on business impact, critical functionality, and risk:

  • High Priority: Core features (login, payment, search).

  • Medium Priority: Important features but not critical (profile update, wishlist).

  • Low Priority: Minor features or rarely used functionality (help, FAQ pages).


52. What is the difference between Functional and Non-Functional Testing?

Functional Testing | Non-Functional Testing
Checks what the system does | Checks how the system performs
Examples: Login, registration, CRUD operations | Examples: Load testing, usability, security, compatibility

53. What is the difference between Alpha, Beta, and Pilot Testing?

Alpha Testing | Beta Testing | Pilot Testing
Conducted by the internal team | Conducted by end users | Conducted on a limited release to test deployment
Done in the development environment | Done in a real-world environment | Done in the real environment with limited users
Focus on defect identification | Focus on usability and feedback | Focus on feasibility and readiness

54. What is the difference between Test Data and Test Case?

Test Data | Test Case
Input values used to execute test cases | Step-by-step instructions for testing
Can be valid or invalid | Includes test steps, expected results, and status
Example: Username = “abc123”, Password = “Pass@123” | Example: Step 1: Open login page, Step 2: Enter credentials, Step 3: Click login, Step 4: Verify dashboard

55. How do you report a defect in Manual Testing?

Steps to Report a Defect:

  1. Log the defect in a defect tracking tool (e.g., JIRA, Bugzilla).

  2. Provide a clear title and description.

  3. Include steps to reproduce.

  4. Attach screenshots or logs if applicable.

  5. Specify Severity, Priority, Module, and Environment.

  6. Assign it to the responsible developer.


56. Scenario-Based Question: How will you test a dropdown menu?

Answer:

  • Check all options are displayed.

  • Verify selection works and the correct value is submitted.

  • Test boundary values if applicable.

  • Check default option.

  • Test with invalid inputs (if any).

  • Check UI consistency across browsers.


57. Scenario-Based Question: How will you test an e-commerce checkout process?

Answer:

  1. Functional Testing: Add to cart, remove items, apply coupons, make payment.

  2. Boundary Testing: Maximum quantity allowed, minimum order.

  3. Negative Testing: Invalid card number, expired coupons.

  4. Integration Testing: Payment gateway, email notifications.

  5. Usability Testing: Easy navigation and user-friendly interface.

  6. Security Testing: SSL, encryption of payment info.


58. What is the difference between Usability Testing and User Acceptance Testing (UAT)?

Usability Testing | User Acceptance Testing (UAT)
Focuses on user-friendliness | Focuses on requirements fulfillment
Done by QA testers | Done by end users or clients
Checks ease of use, navigation, layout | Checks workflow, business scenarios, and system behavior

59. What is Test Coverage?

Answer:
Test Coverage measures the percentage of application requirements or code that has been tested.

Example:

  • If there are 100 requirements and 80 are tested → Test Coverage = 80%.
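
The calculation itself is simple; a one-line sketch in Python (the function name is illustrative):

```python
def test_coverage(requirements_tested: int, total_requirements: int) -> float:
    """Percentage of requirements covered by at least one executed test case."""
    return requirements_tested / total_requirements * 100

print(test_coverage(80, 100))  # 80.0
```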


60. What is the difference between Monkey Testing and Ad-hoc Testing?

Monkey Testing | Ad-hoc Testing
Random testing without any planning | Informal, unscripted testing guided by the tester's experience
Goal: Crash the application | Goal: Find defects quickly
Mostly automated | Mostly manual


Experienced Interview Questions

1. What is Manual Testing? How is it different from Automation Testing?

Answer:

Manual Testing is the process of manually executing test cases without using any automated tools. The tester acts as an end-user to verify that the software behaves as expected.

Key Differences:

Aspect | Manual Testing | Automation Testing
Execution | By a human tester | By automated scripts/tools
Time Efficiency | Slower for large projects | Faster for repetitive tasks
Accuracy | Prone to human error | Highly accurate if scripts are correct
Cost | Low initial cost | High initial cost for tools/scripts
Best For | Exploratory, usability, ad-hoc testing | Regression, repetitive tests, large datasets

Example: Testing a signup form manually involves entering data, submitting, and verifying errors. Automated testing would use a script to input data and check validation messages.


2. What are the different types of Manual Testing?

Answer:

  1. Functional Testing: Verifies that the software functions according to requirements.

    • Example: Checking login functionality with valid/invalid credentials.

  2. Non-Functional Testing: Checks performance, usability, reliability, etc.

    • Example: Testing the website load time.

  3. Smoke Testing: Quick tests to ensure basic functionality works.

    • Example: Open the application and verify the main pages load.

  4. Sanity Testing: Ensures that a specific functionality works after changes.

    • Example: After fixing a bug in search, check search works.

  5. Regression Testing: Confirms that new changes don’t break existing functionality.

  6. User Acceptance Testing (UAT): Final testing by the client to ensure requirements are met.

  7. Exploratory Testing: Testing without predefined test cases to find defects.


3. What is a Test Case? How do you write an effective Test Case?

Answer:

A Test Case is a set of conditions, actions, and expected results to verify a feature of the application.

Steps to Write Effective Test Cases:

  1. Test case ID

  2. Test scenario

  3. Preconditions

  4. Test steps

  5. Test data

  6. Expected result

  7. Actual result

  8. Pass/Fail status

Example:

Test Case ID | TC_001
Test Scenario | Verify login with valid credentials
Preconditions | User must be registered
Test Steps | 1. Open login page  2. Enter valid username/password  3. Click login
Expected Result | User should be redirected to the dashboard
Actual Result | -
Status | Pass/Fail

4. Explain the Bug Life Cycle in detail.

Answer:

The Bug Life Cycle describes the stages a defect goes through:

  1. New: Bug is logged.

  2. Assigned: Assigned to a developer.

  3. Open: Developer starts analyzing/fixing the bug.

  4. Fixed: Developer fixes the bug.

  5. Retest: Tester verifies the fix.

  6. Reopen: If the bug persists, it’s reopened.

  7. Closed: Bug is resolved and verified.

  8. Deferred/Rejected: Bug may not be fixed due to low priority or invalid issues.

Tools: JIRA, Bugzilla, Quality Center.


5. What is the difference between Severity and Priority?

Aspect | Severity (Impact) | Priority (Urgency)
Definition | How serious the bug is | How soon it should be fixed
Example | Crash of the application (high severity) | Typo in the UI (low severity, but high priority if the client notices it)
Controlled By | QA team | Product Owner / Client

6. What are the different types of Testing Techniques?

Answer:

  1. Black Box Testing: Testing without knowledge of internal code.

    • Functional, Non-functional, Regression, UAT.

  2. White Box Testing: Testing with knowledge of internal code.

    • Code coverage, path testing, unit testing.

  3. Gray Box Testing: Combination of black box and white box.

    • Focus on testing from a user perspective but with limited code knowledge.


7. What is the difference between Verification and Validation?

Aspect | Verification | Validation
Definition | Checks if the software is built correctly | Checks if the software meets user needs
Performed By | QA team | QA team / end users
Type | Static (review, walkthrough, inspection) | Dynamic (executing the code)
Example | Reviewing requirement documents | Testing login functionality

8. What is Exploratory Testing? How is it done?

Answer:

Exploratory Testing is simultaneous learning, test design, and test execution. Testers explore the application without predefined test cases to find hidden bugs.

Steps:

  1. Understand the application.

  2. Define objectives.

  3. Test areas based on priority.

  4. Document defects.

  5. Report findings.

Example: Open a new feature in an application and try random combinations of actions to see if it breaks.


9. What is a Test Plan? What does it include?

Answer:

A Test Plan is a document outlining testing strategy, scope, resources, and schedule.

Contents:

  • Test Plan ID & Title

  • Introduction / Objectives

  • Scope (In-scope / Out-of-scope)

  • Testing Strategy (Functional / Non-functional)

  • Resource allocation

  • Risk and Mitigation

  • Deliverables

  • Entry/Exit criteria

  • Test Environment


10. What are the common challenges faced in Manual Testing?

Answer:

  • Repetitive tasks are time-consuming.

  • Human errors can lead to missed defects.

  • Limited coverage for complex applications.

  • Difficulty in regression testing for large systems.

  • Keeping up with frequent changes in requirements.


11. Explain Boundary Value Analysis (BVA) and Equivalence Partitioning (EP).

Answer:

  • Boundary Value Analysis (BVA): Focus on values at the edges of input ranges.
    Example: For input 1–100, test 0, 1, 2, 99, 100, 101.

  • Equivalence Partitioning (EP): Divides inputs into valid/invalid groups.
    Example: Input age 18–60, test one value from each group (valid: 18, 30, 60; invalid: 17, 61).


12. What is the difference between Retesting and Regression Testing?

Aspect | Retesting | Regression Testing
Purpose | Verify a specific defect is fixed | Ensure new changes didn't break existing functionality
Scope | Limited to the fixed defect | Broad; multiple areas may be affected
Test Cases | Same test case that failed earlier | New + existing test cases

13. What is a Test Scenario? How is it different from a Test Case?

Answer:

  • Test Scenario: High-level description of what to test.

    • Example: Verify login functionality.

  • Test Case: Step-by-step instructions to execute a scenario.

    • Example: Enter valid credentials → Click login → Verify dashboard.


14. What is the difference between Functional and Non-Functional Testing?

Aspect | Functional Testing | Non-Functional Testing
Purpose | Verify the software functions as expected | Verify software performance, usability, security
Example | Login, Signup, Search | Load testing, stress testing
Measurement | Pass/Fail | Metrics like response time, throughput

15. Tools used in Manual Testing:

Even though manual testing doesn’t require scripting, tools help in managing test cases, defects, and reporting:

  • Test Management Tools: TestRail, Quality Center, Zephyr

  • Bug Tracking Tools: JIRA, Bugzilla, Mantis

  • Documentation Tools: MS Excel, Confluence

  • Collaboration Tools: Slack, MS Teams


16. How do you handle high severity bugs in production?

Answer:

  1. Immediately inform the development team and stakeholders.

  2. Create a detailed bug report with steps to reproduce.

  3. Reproduce the issue in a test environment if possible.

  4. Prioritize fixing as production critical.

  5. Coordinate retesting once fixed.

  6. Document lessons learned to prevent recurrence.


17. What are the best practices for Manual Testing?

  • Understand requirements clearly before testing.

  • Create detailed test cases and scenarios.

  • Prioritize testing based on risk and impact.

  • Report defects with clear steps and screenshots.

  • Perform exploratory testing alongside structured testing.

  • Maintain proper documentation for audit and review.

  • Collaborate with developers to clarify requirements.


18. What is the difference between Alpha and Beta Testing?

Aspect | Alpha Testing | Beta Testing
Performed By | Internal QA/test team | Actual end users
Environment | Controlled (lab/test environment) | Real-world production environment
Purpose | Identify bugs before release | Collect feedback from real users
Timing | Before product release | After alpha testing, just before the final release

Example:
A software company releases a web application first internally (Alpha) and then provides it to a limited set of users outside the company (Beta) for feedback.


19. Explain Test Metrics and Types used in Manual Testing.

Answer:
Test metrics are quantitative measures to monitor and improve testing processes.

Common Types:

  1. Test Case Execution Metrics: Number of test cases executed, pass/fail percentage.

  2. Defect Metrics: Number of defects, severity distribution, defect density.

  3. Requirement Coverage Metrics: Percentage of requirements covered by test cases.

  4. Productivity Metrics: Number of test cases executed per day per tester.

  5. Test Progress Metrics: Percentage of test completion, open/closed defects.


20. What is the difference between Static and Dynamic Testing?

Aspect | Static Testing | Dynamic Testing
Definition | Testing without executing code | Testing by executing code
Performed By | QA team, developers | QA team
Techniques | Reviews, walkthroughs, inspections | Functional, regression, performance testing
Example | Reviewing requirement documents | Executing login functionality to check the output

21. What is a Traceability Matrix (RTM)?

Answer:
A Requirement Traceability Matrix (RTM) is a document that maps requirements to test cases, ensuring every requirement is tested.

Purpose:

  • Ensures all requirements are covered.

  • Helps in impact analysis for requirement changes.

  • Useful in audits.

Example:

Requirement ID | Requirement Description | Test Case ID | Status
RQ_01 | User should log in with valid credentials | TC_01 | Pass
RQ_02 | User should receive an error for invalid login | TC_02 | Fail

22. What is the difference between Ad-Hoc Testing and Exploratory Testing?

Aspect | Ad-Hoc Testing | Exploratory Testing
Documentation | Minimal or none | Can be documented based on findings
Purpose | Find defects quickly | Understand the application and find hidden defects
Planning | No formal planning | Requires testing experience and strategy
Example | Randomly click on features | Explore a new module systematically while noting defects

23. How do you perform Risk-Based Testing?

Answer:
Risk-Based Testing prioritizes testing based on risk of failure and impact on business.

Steps:

  1. Identify high-risk areas (critical functionality, frequently used features).

  2. Estimate probability and impact of defects.

  3. Allocate testing effort to high-risk areas first.

  4. Perform detailed testing on critical features, basic testing on low-risk areas.

Example:
In an online payment app, testing payment flow and security is high risk and tested thoroughly; minor UI color changes are low risk.


24. What is Severity vs. Priority Matrix?

Answer:
A matrix helps decide which bugs to fix first based on severity (impact) and priority (urgency).

Severity \ Priority | High Priority | Medium Priority | Low Priority
Critical | Fix immediately | Fix soon | Fix later
Major | Fix soon | Fix when convenient | Optional
Minor | Optional | Optional | Can defer

25. What is the difference between Positive and Negative Testing?

Aspect | Positive Testing | Negative Testing
Purpose | Checks the system works with valid inputs | Checks the system handles invalid inputs
Focus | Expected behavior | Unexpected behavior
Example | Enter a valid username/password for login | Enter an invalid username/password and check the error

26. What is Session-Based Testing?

Answer:
Session-Based Testing is a time-boxed exploratory testing technique where testers test a part of the application for a defined time, document findings, and then review.

Example:

  • 90-minute session for testing the checkout feature in an e-commerce app.

  • Tester explores all functionalities and reports defects with notes.


27. How do you handle changing requirements during testing?

Answer:

  1. Review new requirements and understand impact.

  2. Update test cases or create new ones.

  3. Re-prioritize testing based on risk and criticality.

  4. Communicate changes to stakeholders.

  5. Perform regression testing to ensure stability.


28. Difference between Load Testing, Stress Testing, and Performance Testing

Type | Definition | Example
Load Testing | Checks system performance under the expected load | 1,000 users accessing the website
Stress Testing | Checks the system under extreme load | 5,000 users, to see when the system crashes
Performance Testing | Checks responsiveness, speed, and stability | Measure page load time for 100 users

29. How do you prioritize test cases?

Factors to consider:

  • Criticality of the feature to the business.

  • Frequency of feature usage by end users.

  • Probability of defect occurrence.

  • Dependency on other modules.

  • Past defect history.

Example:
Payment module is high priority, profile update module is medium priority, color change in UI is low priority.


30. Scenario-Based Question:

Q: You found a critical bug in production, but the developer says it’s not reproducible. What will you do?

Answer:

  1. Reproduce the bug in test/staging environment.

  2. Collect detailed information: screenshots, logs, steps to reproduce.

  3. Communicate clearly with the developer.

  4. If still critical and blocking, escalate to project manager or team lead.

  5. Perform regression testing once fixed.


31. How do you perform Compatibility Testing?

Answer:
Compatibility Testing ensures the application works across different browsers, devices, OS, and resolutions.

Steps:

  1. Identify target platforms (Windows, Mac, iOS, Android).

  2. Identify browsers (Chrome, Edge, Safari, Firefox).

  3. Execute test cases on each platform.

  4. Log discrepancies as defects.

Example: A website may work in Chrome but UI breaks in Safari.


32. Explain the difference between Error, Defect, and Failure.

Term | Definition
Error | Mistake made by a developer in the code
Defect/Bug | Flaw in the software due to an error
Failure | Deviation from expected behavior during execution

Example:

  • Developer writes wrong SQL → Error

  • Application shows wrong data → Defect

  • User cannot retrieve records → Failure


33. What is the difference between Defect Density and Defect Leakage?

Answer:

  • Defect Density: Measures the number of defects relative to the size of the software (e.g., per 1,000 lines of code or per module).
    Formula: Defect Density = Total Defects Found / Size of the Software Module
    Example: If 20 defects are found in a module of 5,000 LOC, defect density = 20 / 5 = 4 defects per KLOC.

  • Defect Leakage: Defects that were missed during testing but found by end users or in production.
    Formula: Defect Leakage (%) = Defects Found Post-Release / (Defects Found Pre-Release + Defects Found Post-Release) × 100
    Example: If 2 of 50 total defects are found in production, leakage = (2 / 50) × 100 = 4%.
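
A quick sketch of the same arithmetic in Python (the function names are illustrative, not from any standard library):

```python
def defect_density(total_defects: int, kloc: float) -> float:
    """Defects per thousand lines of code (KLOC)."""
    return total_defects / kloc

def defect_leakage(post_release: int, pre_release: int) -> float:
    """Percentage of defects that escaped testing and were found in production."""
    return post_release / (pre_release + post_release) * 100

print(defect_density(20, 5))                            # 4.0 defects/KLOC
print(defect_leakage(post_release=2, pre_release=48))   # 4.0 %
```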


34. What are Test Artifacts? Give examples.

Answer:
Test Artifacts are documents or deliverables created during the testing process to support test execution, tracking, and reporting.

Examples:

  • Test Plan

  • Test Scenarios

  • Test Cases

  • Test Data

  • Traceability Matrix (RTM)

  • Defect Reports

  • Test Summary Reports

  • Checklists


35. What is the difference between Build Verification Testing (BVT) and Smoke Testing?

Aspect | Build Verification Testing (BVT) | Smoke Testing
Purpose | Ensure a new build is stable enough for detailed testing | Quick check of critical functionalities
Scope | Limited, but covers critical modules | Very limited, high-level only
Frequency | Every new build | Usually on every build
Outcome | Determines if detailed testing can proceed | Determines if the build is testable

Example:
After a nightly build, QA performs BVT to check login, dashboard, and key workflows.


36. What is Monkey Testing?

Answer:
Monkey Testing is random testing where inputs are generated without any predefined test cases. It is usually done to check application stability and crash scenarios.

Example:
Clicking random buttons, entering random values, or randomly navigating pages to check if the application crashes.

Types:

  • Smart Monkey Testing (some knowledge of application)

  • Dumb Monkey Testing (completely random inputs)
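
A "dumb monkey" can be sketched in a few lines: feed purely random input to a function and check only that nothing crashes. The process_input function below is a placeholder for the feature under test.

```python
# Minimal "dumb monkey" sketch: random inputs, only checking that nothing crashes.
import random
import string

def process_input(text: str) -> str:
    """Placeholder for the feature under test."""
    return text.strip().lower()

for _ in range(1000):
    length = random.randint(0, 50)
    random_text = "".join(random.choice(string.printable) for _ in range(length))
    try:
        process_input(random_text)
    except Exception as exc:  # any unhandled exception counts as a stability defect
        print(f"Crash on input {random_text!r}: {exc}")
```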


37. Difference between Verification and Validation with examples.

Aspect | Verification | Validation
Definition | Checks if the product is built correctly | Checks if the product meets user requirements
Activity Type | Static (reviewing, inspection) | Dynamic (executing the application)
Example | Reviewing requirement documents | Testing login functionality

38. Explain the difference between Requirement Analysis and Requirement Gathering.

Aspect | Requirement Gathering | Requirement Analysis
Purpose | Collect requirements from stakeholders | Understand and prioritize requirements
Activity | Meetings, interviews, questionnaires | Identify gaps, feasibility, risk assessment
Output | Raw requirement documents | Refined and validated requirements

39. How do you perform Regression Testing manually?

Steps:

  1. Identify impacted modules due to changes.

  2. Prioritize test cases based on critical functionality and risk.

  3. Execute existing test cases to ensure no breakage.

  4. Log defects if any.

  5. Re-execute failed test cases after fixes.

Example:
After fixing a bug in the checkout module of an e-commerce app, QA tests the entire checkout flow and payment options to ensure nothing else is broken.


40. What is the difference between Static and Dynamic Testing?

Aspect | Static Testing | Dynamic Testing
Definition | Testing without executing code | Testing by executing code
Techniques | Reviews, inspections, walkthroughs | Functional, regression, performance tests
Goal | Detect defects early in artifacts | Detect defects during execution
Example | Reviewing test cases | Executing a login test case

41. Explain End-to-End Testing with an example.

Answer:
End-to-End Testing verifies the complete flow of an application from start to finish, including integration with external systems.

Example:
In an e-commerce website:

  1. User logs in.

  2. Searches and selects a product.

  3. Adds product to cart.

  4. Makes payment via payment gateway.

  5. Receives confirmation email.

QA tests the entire flow to ensure seamless operation.


42. What is the difference between Usability Testing and User Acceptance Testing (UAT)?

Aspect | Usability Testing | UAT
Purpose | Checks the application's ease of use | Checks the application meets business requirements
Performed By | QA team or UX testers | End users or the client
Example | Check if navigation is intuitive | Client verifies the order placement workflow

43. Explain Test Estimation Techniques in Manual Testing.

Answer:
Test estimation predicts the effort required for testing.

Techniques:

  1. Expert Judgment: Based on experience of senior QA.

  2. Work Breakdown Approach: Break tasks and estimate each task.

  3. Test Point Analysis (TPA): Calculate points for each module based on complexity.

  4. Function Point Analysis (FPA): Based on requirements and functionality.

  5. Use Case Points: Estimation based on use cases.


44. What are Test Deliverables in Manual Testing?

Answer:
Test Deliverables are artifacts produced during the testing lifecycle.

Examples:

  • Test Plan Document

  • Test Cases and Test Scenarios

  • Traceability Matrix (RTM)

  • Defect Reports

  • Test Summary Report

  • Test Closure Report


45. How do you handle incomplete or ambiguous requirements?

Answer:

  1. Communicate with stakeholders for clarification.

  2. Document assumptions clearly for reference.

  3. Perform exploratory testing based on business knowledge.

  4. Highlight gaps in the requirement document.

  5. Update test cases after finalizing requirements.


46. Explain the difference between Severity, Priority, and Risk.

Aspect | Severity | Priority | Risk
Definition | Impact of a defect | Urgency to fix a defect | Probability of a defect occurring and its impact
Controlled By | QA team | Product Owner / Client | QA and Project Manager
Example | App crash (High) | Typo in a banner (High Priority) | Payment failure may occur (High Risk)

47. How do you perform Security Testing manually?

Answer:
Manual security testing identifies vulnerabilities without automation.

Steps:

  1. Validate input fields for SQL injection, XSS, CSRF.

  2. Check authentication and authorization.

  3. Verify password policies and data encryption.

  4. Ensure session management and logout functionality.

  5. Document security loopholes and report defects.

Example:
Entering ' OR '1'='1 in login fields to check for SQL injection vulnerability.
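
To see why that string is dangerous, consider a sketch of a query built by naive string concatenation versus a parameterized one. The table and column names are made up for illustration.

```python
# Illustrative only: shows why the injected string matters; names are made up.
import sqlite3

username = "' OR '1'='1"
password = "' OR '1'='1"

# Vulnerable: string concatenation lets the input rewrite the WHERE clause.
unsafe_query = (
    "SELECT * FROM users WHERE username = '" + username + "' "
    "AND password = '" + password + "'"
)
print(unsafe_query)
# SELECT * FROM users WHERE username = '' OR '1'='1' AND password = '' OR '1'='1'
# The trailing OR '1'='1' makes the condition always true, so the login check is bypassed.

# Safer: parameterized queries keep the input as data, not SQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
rows = conn.execute(
    "SELECT * FROM users WHERE username = ? AND password = ?", (username, password)
).fetchall()
print(rows)  # [] -- the injected text is treated literally and matches nothing
```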


48. Explain the difference between Black Box, White Box, and Gray Box Testing.

Type | Knowledge of Code | Focus | Example
Black Box | No | Functionality, UI | Login testing with valid/invalid credentials
White Box | Yes | Code logic, paths | Unit testing a function or algorithm
Gray Box | Partial | Functionality + code logic | Testing an API response based on DB knowledge

49. What is the difference between Load Testing, Stress Testing, and Spike Testing?

Testing Type | Definition | Example
Load Testing | Test the system under expected load | 1,000 users on a website
Stress Testing | Test the system beyond maximum capacity to check stability | 5,000 users; the system may crash
Spike Testing | Sudden increase/decrease in load | A sudden 5,000 users logging in within 1 minute

50. Scenario-Based Question:

Q: You are testing a new feature, and your test cases passed but a critical issue is reported by a user in production. What steps do you take?

Answer:

  1. Reproduce the issue in test/staging environment.

  2. Collect logs, screenshots, and test data.

  3. Report defect with full details in bug tracking tool.

  4. Communicate severity and impact to the team.

  5. Perform retesting and regression testing after fix.

  6. Update test cases if necessary.