Software Testing

Top Interview Questions

About Software Testing

 

Software Testing: Ensuring Quality in the Digital World

Software testing is a critical process in the field of software development that ensures applications work as intended, are reliable, and meet user expectations. In today’s digital era, software is everywhere—from mobile apps and web applications to embedded systems in cars and medical devices. As software becomes increasingly complex, testing has become essential to ensure functionality, security, and performance. Without proper testing, even a small error can lead to significant financial loss, security breaches, or damage to an organization’s reputation.

At its core, software testing is the process of evaluating a software application or system to identify defects, ensure quality, and verify that the software meets specified requirements. It is not just about finding bugs; it is also about validating that the software behaves as expected in different scenarios. Testing can be done manually by human testers or automatically using specialized software tools. Both approaches have advantages and are often used together to create a comprehensive testing strategy.

One of the main goals of software testing is defect detection. Defects, or bugs, are errors, flaws, or inconsistencies in a program that prevent it from functioning correctly. Defects can arise from mistakes in code, misunderstandings of requirements, or unforeseen interactions between different components. By detecting defects early in the development process, testing helps reduce the cost and effort required to fix problems. Research shows that fixing defects during the initial stages of development is significantly cheaper than addressing them after deployment.

Software testing can be broadly categorized into manual testing and automated testing. Manual testing involves human testers executing test cases without the help of scripts or automation tools. Testers interact with the software as end users would, checking for functionality, usability, and user experience. Manual testing is particularly effective for exploratory testing, where the tester examines the software to uncover unexpected behaviors or edge cases.

Automated testing, on the other hand, uses tools and scripts to perform predefined tests on the software automatically. Automation is highly efficient for repetitive tasks, regression testing, and large-scale applications where manual testing would be time-consuming. Popular automation tools include Selenium, JUnit, TestNG, and Cypress. Automated testing ensures consistency, increases speed, and allows tests to be run frequently, such as during continuous integration and delivery processes.

Software testing also includes several levels, each with a specific purpose. Unit testing is the first level, where individual components or modules of a program are tested independently. The goal is to verify that each module works correctly in isolation. Unit tests are often automated and written by developers during the coding process. Integration testing comes next, focusing on interactions between modules. It ensures that combined components work together as expected.
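
To make unit testing concrete, here is a minimal JUnit 5 sketch; the Calculator class is a hypothetical stand-in for the module under test:

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class Calculator {
    // Stand-in implementation so the sketch compiles.
    int multiply(int a, int b) { return a * b; }
}

class CalculatorTest {
    @Test
    void multiplyReturnsProduct() {
        // Verify the module in isolation: 6 * 7 must equal 42.
        assertEquals(42, new Calculator().multiply(6, 7));
    }
}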

System testing evaluates the complete application as a whole to verify that it meets the requirements. This level of testing checks both functional and non-functional aspects, including performance, security, and usability. Acceptance testing is the final level, usually performed by end users or clients, to ensure the software satisfies business requirements and is ready for deployment.

Another important aspect of software testing is the distinction between functional and non-functional testing. Functional testing examines whether the software performs its intended functions correctly. This includes testing features, user interfaces, APIs, and workflows. Non-functional testing, on the other hand, evaluates qualities such as performance, scalability, reliability, and security. For example, performance testing measures how the application behaves under heavy load, while security testing ensures that sensitive data is protected against unauthorized access.

Software testing methodologies also play a critical role in shaping the testing process. Black-box testing focuses on testing software without knowledge of the internal code. Testers provide inputs and observe outputs to verify functionality. White-box testing, in contrast, examines the internal structure of the software, including code logic, paths, and conditions. There is also gray-box testing, which combines both approaches, giving testers partial knowledge of the system to design more effective test cases.

One of the most widely adopted approaches in modern software testing is agile testing. In agile development, testing is integrated throughout the software development lifecycle rather than being a separate phase. Testers work alongside developers to perform continuous testing, ensuring that new features and updates do not introduce defects. Agile testing emphasizes collaboration, rapid feedback, and adaptability, which are essential in fast-paced development environments.

Software testing is closely linked to quality assurance (QA), but it is not the same. Quality assurance encompasses broader practices to ensure the overall quality of software, including processes, standards, documentation, and development practices. Testing is a part of QA that specifically focuses on detecting defects and validating functionality. Together, testing and QA ensure that software is reliable, maintainable, and meets user expectations.

With the rise of complex software systems, specialized testing has also become increasingly important. Security testing identifies vulnerabilities that could be exploited by hackers, such as SQL injection or cross-site scripting. Performance testing evaluates speed, responsiveness, and stability under various conditions. Usability testing ensures that software is easy to use and intuitive, while compatibility testing checks that applications work across different devices, operating systems, and browsers. Each type of testing addresses specific risks and improves the overall quality of the software.

In addition to technical benefits, software testing has economic and business advantages. High-quality software reduces customer complaints, enhances user satisfaction, and builds trust in the brand. Detecting defects early prevents costly post-release fixes, downtime, or loss of business. In competitive industries, reliable software can become a key differentiator, providing a better user experience and increasing market success.

Despite its importance, software testing can be challenging. It requires careful planning, comprehensive test coverage, and skilled testers who understand both technical and business requirements. Testers must anticipate edge cases, simulate real-world scenarios, and balance thoroughness with efficiency. They must also adapt to changing software requirements, new technologies, and evolving user expectations. However, these challenges are outweighed by the benefits of delivering high-quality, reliable, and secure software.

In conclusion, software testing is a vital component of software development that ensures applications function correctly, meet requirements, and provide a positive user experience. It involves defect detection, validation, and verification through manual and automated methods, across multiple testing levels and methodologies. Testing enhances software quality, reduces costs, improves security, and builds trust among users. In today’s fast-paced, technology-driven world, software testing is not just a technical activity—it is an essential practice that underpins the success of digital products.

Fresher Interview Questions

 

1. What is Software Testing?

Answer:
Software testing is the process of evaluating a software application to ensure that it meets the specified requirements and works as expected. It helps detect defects, improve quality, and ensure reliability.

Key Points for Freshers:

  • Detects bugs before the software is delivered.

  • Ensures the software meets functional and non-functional requirements.

  • Types of testing: Manual Testing and Automation Testing.

Example:
If a calculator app multiplies numbers incorrectly, testing will detect this defect before release.


2. What are the different levels of Software Testing?

Answer:
There are four main levels of testing:

  1. Unit Testing:

    • Tests individual modules or components.

    • Usually done by developers.

  2. Integration Testing:

    • Tests the interaction between integrated modules.

    • Ensures combined modules work correctly together.

  3. System Testing:

    • Tests the complete system as per requirements.

    • Done by testers.

  4. Acceptance Testing:

    • Checks if the software meets business requirements.

    • Done by the client or end-users.

Example:
For an online shopping website:

  • Unit Testing → Testing the login module separately.

  • Integration Testing → Login + Shopping Cart integration.

  • System Testing → Test the full website.

  • Acceptance Testing → End-users verify that the site meets their needs.


3. What are the different types of Software Testing?

Answer:

Functional Testing: Tests the software against functional requirements.

  • Examples: Unit testing, Integration testing, System testing, Acceptance testing.

Non-Functional Testing: Tests aspects like performance, usability, security.

  • Examples: Performance testing, Load testing, Security testing, Compatibility testing.

Key Difference:

  • Functional = “Does it do what it should?”

  • Non-Functional = “How well does it do it?”


4. What is the difference between Verification and Validation?

Aspect | Verification | Validation
Purpose | Ensures product is built correctly | Ensures correct product is built
Focus | Process | Product
Performed By | Developers/QA | Testers/Users
Example | Review of design docs | Running the application

Memory Tip:

  • Verification = “Are we building it right?”

  • Validation = “Are we building the right thing?”


5. What is a Test Case?

Answer:
A test case is a set of steps, input data, and expected results used to test a particular feature or functionality of the software.

Components of a Test Case:

  1. Test Case ID

  2. Description

  3. Precondition

  4. Test Steps

  5. Expected Result

  6. Actual Result

  7. Status (Pass/Fail)

Example:
Test Case: Verify login with valid credentials.

  • Steps: Enter username & password → Click login

  • Expected Result: User should be logged in successfully
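
A minimal Selenium WebDriver sketch of this test case in Java; the URL and element locators are hypothetical:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class LoginTest {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://example.com/login");                      // precondition: login page reachable
            driver.findElement(By.id("username")).sendKeys("testuser");   // step 1: enter username
            driver.findElement(By.id("password")).sendKeys("Secret123");  // step 2: enter password
            driver.findElement(By.id("login")).click();                   // step 3: click login
            // Expected result: the user lands on the dashboard.
            System.out.println(driver.getCurrentUrl().contains("dashboard") ? "PASS" : "FAIL");
        } finally {
            driver.quit();   // close the browser even if a step fails
        }
    }
}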


6. What is a Bug or Defect?

Answer:
A bug (or defect) is an error, flaw, or failure in software that causes it to produce incorrect or unexpected results or behave differently than intended.

Example:

  • Application crashes when the user enters a special character in a text field.

Severity vs Priority:

  • Severity: How serious the bug is (Critical, Major, Minor).

  • Priority: How soon it should be fixed (High, Medium, Low).


7. What is the difference between Manual and Automation Testing?

Aspect | Manual Testing | Automation Testing
Definition | Testing done manually without tools | Testing using automation tools/scripts
Time | Time-consuming | Faster execution
Best for | Exploratory testing, UI testing | Repetitive testing, regression testing
Examples | Functional testing | Selenium, QTP, JUnit

8. What is Smoke Testing and Sanity Testing?

Type | Definition
Smoke Testing | Checks basic functionality of the application. Done on the initial build. Also called “Build Verification Testing.”
Sanity Testing | Checks specific functionality after minor changes or bug fixes. Focused and narrow.

Example:

  • Smoke Testing → Open app, login, check main page loads.

  • Sanity Testing → After fixing login issue, just check login functionality.


9. What is Regression Testing?

Answer:
Regression testing ensures that recent changes or bug fixes haven’t affected existing functionality.

Example:
If a developer fixes a payment bug, regression testing ensures the login, cart, and checkout still work correctly.


10. What is the difference between Alpha and Beta Testing?

Type | Definition
Alpha Testing | Done by internal staff before releasing to real users.
Beta Testing | Done by real users in a real environment before final release.

11. What is Test Plan and its contents?

Answer:
A test plan is a document describing the scope, approach, resources, and schedule of testing activities.

Contents:

  1. Test Plan ID

  2. Scope

  3. Testing Strategy

  4. Testing Tools

  5. Resources & Roles

  6. Test Schedule

  7. Risks & Mitigation


12. What is the difference between Severity and Priority?

Aspect | Severity | Priority
Meaning | How serious the bug is | How soon it should be fixed
Decided By | Tester | Project Manager / Client
Example | App crash (High) | UI typo (Low priority)

13. Common Software Testing Tools for Freshers

Manual Testing:

  • TestRail, Jira, Bugzilla

Automation Testing:

  • Selenium, QTP/UFT, Katalon Studio

Performance Testing:

  • JMeter, LoadRunner


14. What is the difference between Black Box, White Box, and Grey Box Testing?

Type | Definition
Black Box | Test without knowledge of internal code
White Box | Test with full knowledge of internal code
Grey Box | Test with partial knowledge of internal workings

15. Key tips for freshers during interviews:

  • Be clear with terminology.

  • Explain concepts with real-life examples.

  • Always show understanding of why testing is important.

  • If asked about tools, mention any you’ve practiced or learned basics of.


16. What is Test Strategy? How is it different from Test Plan?

Answer:
A Test Strategy is a high-level document that defines the approach, objectives, and goals of testing across the project. It is static and not tied to a particular release, while a Test Plan is project/release-specific and more detailed.

Contents of Test Strategy:

  • Testing objectives

  • Testing types (functional, non-functional)

  • Testing approach (manual/automation)

  • Risk identification and mitigation

Difference Table:

Aspect | Test Strategy | Test Plan
Level | Organization/project-wide | Release or module-specific
Detail | High-level | Detailed steps & schedule
Purpose | Define “how to test” broadly | Plan “what, when, who” to test

17. What is Exploratory Testing?

Answer:
Exploratory testing is an unscripted testing approach where testers explore the application without predefined test cases. It relies on tester experience and intuition.

Key Points:

  • Helps find hidden defects.

  • Useful for applications with frequent changes.

  • No formal documentation required upfront.

Example:
Opening different screens, trying unusual input combinations, and checking app behavior.


18. What is the difference between Load, Stress, and Performance Testing?

Testing Type | Definition
Load Testing | Checks system behavior under expected workload.
Stress Testing | Checks system behavior under extreme conditions or overload.
Performance Testing | Measures speed, responsiveness, and stability under a workload.

Example:

  • Load: 500 users log in simultaneously.

  • Stress: 5000 users log in simultaneously to test crash.

  • Performance: Check response time of login page.


19. What is a Test Scenario? How is it different from a Test Case?

Answer:

  • Test Scenario: A high-level description of what to test.

  • Test Case: Step-by-step instructions to test a specific scenario.

Example:

  • Scenario → Test login functionality of the app.

  • Test Case → Enter username & password, click login, verify dashboard opens.


20. What is Test Data? How do you create it?

Answer:
Test data is the input data used to execute test cases and validate functionality.

Types:

  • Valid Data: Meets requirements.

  • Invalid Data: Violates requirements to check error handling.

Example:

  • Valid data: a registered username with the correct password; invalid data: a malformed email or wrong password.

Ways to create:

  • Manually by testers

  • Using automated scripts

  • Using database queries


21. What is Defect Life Cycle (Bug Life Cycle)?

Answer:
Defect Life Cycle is the journey of a defect from discovery to closure.

States:

  1. New – Bug is reported

  2. Assigned – Developer assigned

  3. Open – Developer starts fixing

  4. Fixed – Bug resolved

  5. Retest – Tester retests the fix

  6. Closed – Bug verified and closed

  7. Reopen – If issue persists

Example:
If a login bug is found → Assigned → Fixed → Retested → Closed.


22. What is Boundary Value Analysis and Equivalence Partitioning?

Answer:

  • Equivalence Partitioning (EP): Divides input data into valid and invalid partitions to reduce test cases.

  • Boundary Value Analysis (BVA): Focuses on edges or boundaries of input values.

Example:

  • Input range: 1–100

  • EP: Valid = 1–100, Invalid <1 or >100

  • BVA: Test values = 0, 1, 2, 99, 100, 101
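
A quick Java sketch of the BVA checks above; isValid is a hypothetical stand-in for the unit under test:

public class BoundaryValueCheck {
    // Stand-in for the real validation logic (accepts only 1-100).
    static boolean isValid(int n) {
        return n >= 1 && n <= 100;
    }

    public static void main(String[] args) {
        int[] inputs = {0, 1, 2, 99, 100, 101};                       // BVA test values
        boolean[] expected = {false, true, true, true, true, false};  // expected outcomes
        for (int i = 0; i < inputs.length; i++) {
            boolean actual = isValid(inputs[i]);
            System.out.printf("input=%d expected=%b actual=%b %s%n",
                    inputs[i], expected[i], actual,
                    actual == expected[i] ? "PASS" : "FAIL");
        }
    }
}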


23. What is the difference between Defect and Error?

Term | Definition
Error | Mistake made by developer in code or logic
Defect/Bug | Issue found in software during testing, caused by the error

Example:

  • Developer wrote wrong formula → Error

  • App shows wrong calculation → Defect


24. What is Usability Testing?

Answer:
Usability testing ensures the application is user-friendly. It evaluates:

  • Ease of use

  • Learnability

  • UI clarity

Example:

  • Check if a user can easily navigate the shopping cart and complete checkout.


25. What is Configuration Testing?

Answer:
Configuration testing checks the application’s behavior in different environments:

  • Operating systems

  • Browsers

  • Devices

Example:
A web app should work on Windows, Mac, and Android, and in browsers such as Chrome and Firefox.


26. What is Ad-hoc Testing? How is it different from Exploratory Testing?

Aspect | Ad-hoc Testing | Exploratory Testing
Approach | Random, informal | Structured but unscripted
Documentation | Usually none | Sometimes documented
Purpose | Find defects quickly | Learn app and find hidden defects

27. What is a Test Environment?

Answer:
A test environment is a setup where testing is executed, including:

  • Hardware

  • Software

  • Network

  • Database

Example:
Testing a mobile app on Android 13 with Chrome browser using test data in the staging database.


28. What is Cyclomatic Complexity?

Answer:

  • Cyclomatic Complexity is a metric used in white-box testing to measure the complexity of code.

  • Higher complexity → More test cases required.

Formula:
M = E – N + 2P

  • E = Number of edges

  • N = Number of nodes

  • P = Number of connected components

Example:
For a simple “if-else” code, complexity = 2.
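
For instance, this Java method contains a single if-else, so its control-flow graph has E = 4 edges, N = 4 nodes, and P = 1 connected component, giving M = 4 – 4 + 2 = 2:

public class GradeExample {
    // Two independent paths -> cyclomatic complexity M = 2.
    static String grade(int score) {
        if (score >= 50) {
            return "pass";   // path 1: score >= 50
        } else {
            return "fail";   // path 2: score < 50
        }
    }

    public static void main(String[] args) {
        // Two test cases, one per path.
        System.out.println(grade(75));   // pass
        System.out.println(grade(30));   // fail
    }
}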


29. Difference between Static Testing and Dynamic Testing

Aspect | Static Testing | Dynamic Testing
Definition | Testing without executing code | Testing by executing code
Example | Code review, walkthrough | Unit testing, functional testing
Objective | Find defects early | Validate functionality

30. Difference between Severity, Priority, and Risk

Aspect | Severity | Priority | Risk
Meaning | How serious the bug is | How soon to fix | Chance of failure impacting business
Example | App crash | Typo on homepage | Server downtime during peak hours

31. What is the difference between Verification and Validation?

We touched on this before, but here’s a deeper look:

Aspect | Verification | Validation
Purpose | Ensure product is being built correctly | Ensure product being built is what user wants
Type | Static process (reviews, walkthroughs) | Dynamic process (actual testing)
Question Answered | “Are we building it right?” | “Are we building the right product?”
Performed By | Developers/QA | Testers/Users
Example | Reviewing SRS, design documents | Executing test cases on the software

32. What is the difference between Severity and Priority?

Already explained, but here’s a memory-friendly summary:

  • Severity: Impact of a bug on the system. Example: App crashes = High severity.

  • Priority: How quickly the bug should be fixed. Example: A typo on the homepage = low severity, but high priority if it affects branding.

Tip for interview: They may ask, “High severity, low priority example?” → System crash in rarely used feature.


33. What is the difference between Functional Testing and Non-Functional Testing?

Aspect | Functional Testing | Non-Functional Testing
Focus | What the system does | How the system performs
Objective | Check features against requirements | Check performance, usability, reliability
Example | Login, payment processing | Load testing, security testing

34. What is the difference between Smoke Testing and Sanity Testing?

Aspect | Smoke Testing | Sanity Testing
Scope | Broad, covers major functionalities | Narrow, focuses on specific functionality
Purpose | Verify if build is stable | Verify if specific bugs are fixed
When Performed | Initial build verification | After minor changes or bug fixes

35. What is a Test Scenario vs Test Case?

  • Test Scenario: High-level idea of what to test.

  • Test Case: Step-by-step procedure with inputs, execution, and expected results.

Example:

  • Scenario → Check login functionality

  • Test Case → Enter username/password, click login, verify dashboard appears


36. What is Regression Testing? When is it done?

  • Definition: Ensures existing functionalities are not broken after changes like bug fixes or new features.

  • When to perform:

    1. After bug fixes

    2. After new feature implementation

    3. After performance optimizations

Example:
If a developer fixes a “forgot password” bug, regression testing ensures login, signup, and payment still work.


37. What is Alpha Testing vs Beta Testing?

Aspect | Alpha Testing | Beta Testing
Performed By | Internal team | Actual users
Environment | Controlled/testing environment | Real environment
Purpose | Detect bugs before external release | Get user feedback
Example | QA team tests a new messaging app | Users test app before official launch

38. What is the difference between Black Box, White Box, and Grey Box Testing?

Testing Type | Description
Black Box | Test without knowing code
White Box | Test with full knowledge of code
Grey Box | Test with partial knowledge of code

Example:

  • Black Box → Testing login form by entering valid/invalid data.

  • White Box → Testing all “if-else” paths in login function.

  • Grey Box → Test login with knowledge of session handling but not all code.


39. What is Ad-Hoc Testing?

  • Definition: Informal testing without documentation or planning.

  • Purpose: Find defects quickly.

  • Difference from Exploratory Testing: Exploratory testing is semi-structured, while ad-hoc testing is completely random.

Example:
Clicking random buttons or entering unusual data to see app response.


40. What is Boundary Value Analysis (BVA) and Equivalence Partitioning (EP)?

  • Equivalence Partitioning (EP): Divides input data into valid and invalid partitions.

  • Boundary Value Analysis (BVA): Focuses on edges of input values where defects often occur.

Example:

  • Input range: 1–100

  • EP → Valid: 1–100, Invalid: <1 or >100

  • BVA → Test values: 0,1,2,99,100,101


41. What is Test Data? How do you prepare it?

  • Definition: Data used to execute test cases and validate software.

  • Types:

    • Valid data → meets requirements

    • Invalid data → violates requirements

  • Preparation: Manually, from DB, or automated scripts.

Example: Valid data = a registered username with the correct password; invalid data = a malformed email address.


42. What is Test Environment?

  • Definition: Setup where testing is performed, including hardware, software, network, and database.

  • Example: Test a mobile app on Android 13 with Chrome browser and staging database.


43. What is Defect Life Cycle (Bug Life Cycle)?

  • Definition: Journey of a defect from discovery to closure.

States: New → Assigned → Open → Fixed → Retest → Closed → Reopen

Example:
Login bug → Assigned → Fixed → Retested → Closed


44. What is Usability Testing?

  • Definition: Checks if the application is user-friendly and intuitive.

  • Focus: Ease of use, UI clarity, learnability

  • Example: Users should easily navigate a shopping cart and complete checkout.


45. What is Configuration Testing?

  • Definition: Tests application behavior in different environments, OS, browsers, or devices.

  • Example: The web app works on Windows, Mac, and Android, and in Chrome and Firefox.


46. What is Cyclomatic Complexity?

  • Definition: A metric to measure code complexity in white-box testing.

  • Formula: M = E – N + 2P

    • E = Number of edges

    • N = Number of nodes

    • P = Number of connected components

  • Example: Simple if-else code → complexity = 2


47. Difference between Static and Dynamic Testing

Aspect | Static Testing | Dynamic Testing
Code Execution | Not required | Required
Purpose | Find defects early | Validate functionality
Example | Code review, walkthrough | Unit testing, functional testing

48. What is the difference between Test Plan and Test Strategy?

Aspect | Test Strategy | Test Plan
Level | High-level, organization/project-wide | Detailed, module or release specific
Focus | Approach, objectives, tools | Scope, schedule, resources
Document Type | Static | Dynamic

49. What is Risk-Based Testing?

  • Definition: Testing based on the risk of failure and its impact on business.

  • Steps:

    1. Identify risks

    2. Analyze risk severity

    3. Prioritize test cases based on risk

  • Example: Payment module is high-risk → tested first


50. What is Performance Testing? Difference from Load Testing?

Aspect | Performance Testing | Load Testing
Definition | Measures system speed, responsiveness, stability | Checks system behavior under expected load
Purpose | Identify bottlenecks | Ensure system can handle workload
Example | Page load time, server response | 500 users login simultaneously

 

Experienced Interview Questions

 

1. Explain the Software Testing Life Cycle (STLC).

Answer:
STLC defines the sequence of activities performed during the testing process.

Phases of STLC:

  1. Requirement Analysis

    • Understand functional & non-functional requirements

    • Identify testable requirements

  2. Test Planning

    • Prepare test plan, select testing types

    • Estimate resources, effort, and timelines

  3. Test Case Design / Test Development

    • Write test cases & prepare test data

    • Review and baseline test cases

  4. Test Environment Setup

    • Prepare servers, databases, browsers, devices

    • Confirm environment readiness

  5. Test Execution

    • Execute test cases

    • Log defects in a tracking tool (JIRA, Bugzilla)

  6. Test Cycle Closure

    • Prepare test summary report

    • Analyze defect density, test coverage, and lessons learned

Example:
If you are testing an e-commerce checkout flow:

  • Requirement Analysis → Understand payment flow

  • Test Planning → Decide to do functional + regression + performance testing

  • Test Case Design → Write cases for cart, coupon, payment

  • Environment Setup → Configure staging server, test cards

  • Test Execution → Execute, log defects

  • Test Closure → Share report with metrics


2. What is the difference between QA, QC, and Testing?

Aspect | QA (Quality Assurance) | QC (Quality Control) | Testing
Definition | Process-oriented | Product-oriented | Detect defects
Focus | Process improvement | Detect product defects | Validate functionality
Timing | Before development | During or after development | During or after development
Example | Define coding standards, guidelines | Review deliverables, perform inspections | Execute test cases

3. What is a Test Plan? What are its key contents?

Answer:
A Test Plan is a detailed document specifying how testing will be performed.

Contents:

  • Test Plan ID, Scope, Objectives

  • Test Strategy & Approach

  • Testing Tools & Environment

  • Resource Allocation & Roles

  • Schedule & Milestones

  • Risk Identification & Mitigation

  • Entry & Exit Criteria

Example:
For a login module, the test plan includes browsers to test, automation or manual approach, and roles like who will test, who will log defects.


4. What is the difference between Severity and Priority? Provide real-time examples.

  • Severity: Impact of defect on system.

  • Priority: Urgency to fix the defect.

Example:

  • High severity, Low priority: Rarely used feature crashes (critical but low priority).

  • Low severity, High priority: Typo on landing page (minor, but needs quick fix for client).


5. What is Regression Testing? How do you plan it for a project with frequent releases?

Answer:
Regression Testing ensures existing functionality is not broken after new changes.

Approach for frequent releases:

  1. Maintain a Regression Test Suite

  2. Automate critical business workflows using Selenium/TestNG

  3. Prioritize high-risk areas

  4. Execute regression after every sprint or release

Example:
In an e-commerce website, automate login, add to cart, checkout for regression testing.


6. Explain Automation Testing and its advantages.

Answer:
Automation Testing uses tools or scripts to execute test cases automatically.

Advantages:

  • Faster execution

  • Reusable test scripts

  • Higher accuracy

  • Ideal for regression testing

Tools you can mention: Selenium, QTP/UFT, Appium, Postman for API testing, JMeter for performance testing.


7. Explain Black Box, White Box, and Grey Box Testing with real-time examples.

Type | Definition & Example
Black Box | Test without knowing internal code. Example: Testing login page functionality by entering valid/invalid credentials
White Box | Test with full code knowledge. Example: Testing all conditional statements in login function
Grey Box | Partial knowledge of code. Example: Test login knowing session handling logic but not full code

8. What is Selenium WebDriver? How does it differ from Selenium IDE?

  • Selenium WebDriver: Code-based automation tool that supports multiple languages (Java, Python, C#) and browsers.

  • Selenium IDE: Record & playback tool, mostly for quick prototyping.

Difference:

Feature | WebDriver | IDE
Language Support | Java, Python, C# | Only JavaScript
Browser Support | Chrome, Firefox, Edge | Chrome, Firefox
Flexibility | High (can integrate CI/CD) | Low

9. What is Data-Driven and Keyword-Driven Testing?

  • Data-Driven: Test logic remains the same, but inputs come from external sources (Excel, CSV, DB).
    Example: Login test with multiple username/password combinations.

  • Keyword-Driven: Test actions are driven by keywords defined in external files.
    Example: Keyword “Click_Login_Button” triggers Selenium code for login.
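
A minimal keyword-driven sketch in Java with Selenium; in a real framework the keywords would be read from Excel or CSV, and the URL and locator here are hypothetical:

import java.util.List;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class KeywordRunner {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        // Keywords normally come from an external file; hard-coded here for brevity.
        List<String> keywords = List.of("Open_Login_Page", "Click_Login_Button");
        for (String keyword : keywords) {
            switch (keyword) {
                case "Open_Login_Page" -> driver.get("https://example.com/login");
                case "Click_Login_Button" -> driver.findElement(By.id("login")).click();
                default -> System.out.println("Unknown keyword: " + keyword);
            }
        }
        driver.quit();
    }
}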


10. What is the difference between API Testing and UI Testing?

Aspect | API Testing | UI Testing
Focus | Backend functionality, request/response | Front-end appearance & user interaction
Tools | Postman, SoapUI | Selenium, Cypress
Speed | Faster | Slower
Automation | Easy | Moderate

Example:
Checking that the user login API returns the correct token vs. checking login button functionality on the web page.


11. What is Continuous Integration (CI) and how is it linked with testing?

  • Definition: CI is the process of automatically building and testing software whenever code is committed.

  • Tools: Jenkins, GitLab CI, Bamboo

  • Testing Link: Automated test scripts run during CI to catch defects early.

Example:
Commit code → Jenkins triggers build → Selenium scripts run → defects reported automatically.


12. Explain TestNG Annotations and their uses.

  • @BeforeSuite / @AfterSuite: Execute before/after entire suite

  • @BeforeTest / @AfterTest: Execute before/after the <test> tag in the testng.xml suite file

  • @BeforeClass / @AfterClass: Execute before/after class

  • @BeforeMethod / @AfterMethod: Execute before/after each test method

  • @Test: Defines a test case

Example: Use @BeforeMethod to open browser, @AfterMethod to close browser after each test.
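
A minimal TestNG sketch of that per-method setup and teardown; the URL is hypothetical:

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.testng.Assert;
import org.testng.annotations.AfterMethod;
import org.testng.annotations.BeforeMethod;
import org.testng.annotations.Test;

public class LoginSuite {
    private WebDriver driver;

    @BeforeMethod
    public void openBrowser() {      // runs before each @Test method
        driver = new ChromeDriver();
    }

    @Test
    public void loginPageLoads() {
        driver.get("https://example.com/login");
        Assert.assertFalse(driver.getTitle().isEmpty());
    }

    @AfterMethod
    public void closeBrowser() {     // runs after each @Test method
        driver.quit();
    }
}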


13. How do you perform performance testing for web applications?

  1. Identify critical business transactions

  2. Use tools like JMeter/LoadRunner

  3. Define load scenarios (number of users, duration)

  4. Execute and monitor CPU, memory, response times

  5. Analyze results for bottlenecks

Example: Test 1000 users simultaneously logging in and placing orders.


14. How do you handle defects in production?

  • Log issue with priority, severity, and detailed steps

  • Communicate with developer and BA

  • Create hotfix or patch release if critical

  • Execute regression testing before deploying

Example: Payment gateway failure → High severity → Immediate fix → Regression before deployment


15. Scenario-Based Question: You found a defect, developer says “it’s not a bug, it’s expected behavior”. How do you handle it?

  • Recheck requirements & SRS documents

  • Reproduce defect with screenshots, steps, test data

  • Discuss with QA Lead or BA

  • If confirmed as bug → assign in defect tracking tool

  • Document decision if it’s truly expected behavior


16. Explain Agile Testing and your experience in Scrum.

  • Agile Testing happens alongside development in iterative sprints.

  • Roles: Tester participates in daily stand-ups, sprint planning, backlog refinement.

  • Key Points: Regression in every sprint, exploratory testing, automation integration.

Example: You tested login, cart, checkout in every sprint and automated regression using Selenium + Jenkins.


17. How do you ensure high-quality deliverables in Agile?

  • Early involvement in requirement discussions

  • Maintain automated regression suite

  • Continuous communication with developers

  • Perform risk-based and exploratory testing

  • Track metrics like defect leakage, test coverage


18. Explain Cross-Browser Testing.

  • Test application compatibility across multiple browsers and OS

  • Tools: Selenium Grid, BrowserStack, LambdaTest

  • Example: Application works on Chrome, Firefox, Safari, Edge


19. Explain Defect Life Cycle in real-time projects.

States: New → Assigned → Open → Fixed → Retest → Closed → Reopen → Deferred
Real-time Scenario:

  • User reports login crash → Assigned to dev → Fixed → Retest → Closed


20. Scenario-Based: How do you prioritize testing if release is delayed?

  • Prioritize critical business modules

  • Automate regression to save time

  • Perform smoke testing on all modules

  • Delay non-critical or low-risk modules


21. What is SQL Testing and how do you perform it?

Answer:
SQL Testing (Database Testing) verifies that the database operations, data integrity, and schema are working correctly.

Steps:

  1. Verify data retrieval with SELECT queries.

  2. Test data insertion, update, and deletion.

  3. Check constraints, triggers, and stored procedures.

  4. Validate data consistency between front-end and database.

Example:

  • Verify that after a user places an order, the orders table correctly reflects the transaction.

  • Test query: SELECT * FROM Orders WHERE OrderID=1234


22. What is API Testing? How is it different from UI Testing?

Answer:
API Testing validates backend endpoints without GUI interaction.

Differences:

Aspect | API Testing | UI Testing
Focus | Backend functionality | Front-end appearance & behavior
Speed | Fast | Slower
Tools | Postman, SoapUI, RestAssured | Selenium, Cypress
Automation | Easy | Moderate

Example:
Testing that the POST /login API returns a valid token vs. testing the login button on the web page.


23. How do you perform End-to-End Testing?

Answer:
End-to-End Testing ensures the complete application workflow works as expected.

Steps:

  1. Identify critical business scenarios.

  2. Prepare test cases covering UI + API + DB + integrations.

  3. Execute test cases in production-like environment.

  4. Verify expected results at every step.

Example:
For an e-commerce website: User logs in → adds items to cart → applies coupon → makes payment → receives email confirmation.


24. What is Continuous Integration (CI) and its importance in Testing?

Answer:
Continuous Integration (CI) automatically builds and tests the application whenever code is committed.

Importance:

  • Detect defects early

  • Reduces integration issues

  • Enables automated regression

Example Tools: Jenkins, GitLab CI, Bamboo

Scenario:

  • Developers push code → Jenkins triggers Selenium automation suite → Defects reported automatically


25. What is Test Automation Framework? Name some types.

Answer:
A Test Automation Framework is a set of guidelines, tools, and libraries for automated testing.

Types:

  1. Linear Scripting – Sequential execution (not reusable)

  2. Modular Testing – Reusable modules

  3. Data-Driven Framework – Test logic separate from test data

  4. Keyword-Driven Framework – Test steps driven by keywords

  5. Hybrid Framework – Combination of above

Example:
Login module automated using Data-Driven + Selenium + TestNG


26. What is Cross-Browser Testing?

Answer:
Cross-Browser Testing verifies the application works across different browsers and OS combinations.

Tools: Selenium Grid, BrowserStack, LambdaTest

Example:
Test an e-commerce site on Chrome, Firefox, Edge, Safari across Windows, Mac, Android, iOS.


27. What is Mobile Testing? How do you perform it?

Answer:
Mobile Testing verifies applications on mobile devices.

Types:

  • Functional Testing

  • UI Testing

  • Performance Testing (Battery, Memory, Network)

Tools: Appium, Robotium, Espresso

Example:
Test login, push notifications, responsiveness, and app crash scenarios on Android and iOS devices.


28. What is Performance Testing? Difference from Load Testing?

Aspect | Performance Testing | Load Testing
Definition | Measures responsiveness, speed, stability | Checks system under expected workload
Focus | Bottlenecks, response time | Capacity handling
Tools | JMeter, LoadRunner | JMeter, LoadRunner
Example | Page loads in <2 sec | 500 users logging in simultaneously

29. Explain Stress Testing and Spike Testing.

  • Stress Testing: Tests system under extreme conditions until it fails.
    Example: 5000 users login simultaneously.

  • Spike Testing: Tests system response to sudden spike in load.
    Example: 100 → 1000 users suddenly logging in.


30. What is Security Testing? Name common types.

Answer:
Security Testing ensures application is protected from vulnerabilities and attacks.

Types:

  • Penetration Testing

  • Vulnerability Scanning

  • Authentication & Authorization Testing

  • SQL Injection, XSS testing

Example:
Check login page is not vulnerable to SQL injection.


31. How do you perform Real-Time Defect Management?

  1. Log defect in defect tracking tool (JIRA, Bugzilla)

  2. Include steps, screenshots, severity, priority

  3. Communicate with developer and BA

  4. Retest after fix

  5. Close defect only after verification

Scenario:
Payment gateway fails → Log as high severity → Assign to developer → Retest → Close


32. Explain TestNG Features in Automation.

  • Supports annotations like @BeforeMethod, @AfterMethod, @Test

  • Parallel test execution

  • Grouping of tests

  • Data-driven testing via @DataProvider

  • Reporting with HTML/XML

Example:

  • @DataProvider feeds multiple username/password combinations for login automation.
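
A minimal @DataProvider sketch; the login helper is a hypothetical stand-in so the example stays self-contained:

import org.testng.Assert;
import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class LoginDataDrivenTest {

    @DataProvider(name = "credentials")
    public Object[][] credentials() {
        // Each row: username, password, expected outcome.
        return new Object[][] {
            {"validUser", "correctPass", true},
            {"validUser", "wrongPass", false},
            {"", "", false},
        };
    }

    @Test(dataProvider = "credentials")
    public void loginTest(String user, String pass, boolean shouldSucceed) {
        Assert.assertEquals(login(user, pass), shouldSucceed);
    }

    // Stand-in for the real login steps (UI or API).
    private boolean login(String user, String pass) {
        return "validUser".equals(user) && "correctPass".equals(pass);
    }
}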


33. What is CI/CD Pipeline Testing?

Answer:
CI/CD pipeline testing ensures automated build, deployment, and testing happen without manual intervention.

Steps:

  1. Developers commit code

  2. CI server builds project

  3. Automated unit, integration, and regression tests run

  4. Reports sent to QA/devs

Tools: Jenkins, GitLab CI, Bamboo


34. Explain SQL Joins in Testing with Example.

  • Inner Join: Returns matching records from two tables

  • Left Join: Returns all records from left table, matching from right

  • Right Join: Returns all from right table, matching from left

  • Full Join: Returns all records, matched or unmatched

Example:
Orders table + Users table → Verify user orders using SQL joins.
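
For instance, an inner join to pull a user's orders (table and column names are illustrative):

SELECT u.Name, o.OrderID, o.Amount
FROM Users u
INNER JOIN Orders o ON u.UserID = o.UserID
WHERE u.UserID = 1234;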


35. What is API Automation using RestAssured/Postman?

  • RestAssured: Java-based API automation framework

  • Postman: Manual and automated API testing using collections and Newman CLI

Example:

  • Automate login API → Verify token → Validate response code 200
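
A minimal RestAssured sketch of that flow; the base URI and JSON fields are hypothetical:

import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.notNullValue;

public class LoginApiTest {
    public static void main(String[] args) {
        given()
            .baseUri("https://api.example.com")              // hypothetical endpoint
            .contentType("application/json")
            .body("{\"username\":\"testuser\",\"password\":\"Secret123\"}")
        .when()
            .post("/login")
        .then()
            .statusCode(200)                                 // validate response code 200
            .body("token", notNullValue());                  // verify a token is returned
    }
}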


36. How do you handle Test Data Management in automation?

  • Maintain centralized test data in Excel, JSON, or DB

  • Use Data-Driven Testing to feed dynamic values

  • Mask sensitive data for security compliance

  • Ensure environment consistency


37. How do you perform Smoke Testing vs Sanity Testing in Agile?

  • Smoke Testing: Initial check of major functionalities in each sprint

  • Sanity Testing: Narrow testing for specific bug fixes or minor changes

Example:

  • Smoke: Login, add to cart, checkout

  • Sanity: Verify bug fix for coupon code


38. How do you handle testing in Agile/Scrum?

  • Participate in sprint planning and backlog grooming

  • Identify test scenarios before development

  • Perform continuous regression testing

  • Use automation scripts in CI/CD for faster delivery

  • Report defects and track metrics


39. What is Risk-Based Testing and how do you prioritize?

  • Identify high-risk modules impacting business

  • Focus testing efforts on critical features

  • Low-risk modules tested lightly or delayed

Example:
Payment and login modules → High risk → Tested thoroughly
UI color changes → Low risk → Minimal testing


40. Scenario-Based: How do you handle a production issue at night?

  • Assess severity and priority

  • Notify on-call developer and BA

  • Apply hotfix or rollback if critical

  • Perform regression testing in production or staging

  • Document issue and preventive measures


41. What is Defect Clustering? How do you handle it?

Answer:
Defect clustering is when most defects are concentrated in a few modules of the application.

Explanation:

  • According to the Pareto principle (80/20 rule), 80% of defects usually occur in 20% of modules.

Handling:

  • Focus more testing on modules with frequent defects

  • Perform code review and deeper functional testing

  • Update regression suite with test cases for that module

Example:
In an e-commerce app, 70% of defects are in the payment module → Prioritize testing there.


42. What is Test Coverage? How do you calculate it?

Answer:
Test coverage measures the percentage of application tested to ensure requirements are validated.

Types:

  • Requirement coverage: How many requirements have test cases

  • Code coverage: How much code is executed by test cases (statement, branch, path)

Formula:

Test Coverage (%) = (Number of requirements tested / Total requirements) × 100

Example:

  • Total 100 requirements, tested 90 → Test coverage = 90%


43. What is Defect Leakage? How do you prevent it?

Answer:
Defect leakage occurs when defects escape from one testing phase to the next, or even to production.

Prevention:

  • Review requirements and test cases thoroughly

  • Perform peer reviews and walkthroughs

  • Execute end-to-end and regression testing

  • Automate critical workflows

Example:
Login crash found in production → Missed during UAT → Defect leakage


44. What is Automation Testing Framework? Explain Hybrid Framework.

Answer:

  • Framework provides guidelines, libraries, and reusable code for automation.

Hybrid Framework:

  • Combines Data-Driven + Keyword-Driven + Modular testing

  • Maximizes reusability and flexibility

  • Supports CI/CD and multiple environments

Example:

  • Login module: Keywords define actions, data comes from Excel, modules reusable for checkout and payment.


45. How do you handle Database Testing in Automation?

Answer:

  • Connect automation scripts to the database

  • Use SQL queries to verify data integrity

  • Compare expected results with DB results

  • Example in Selenium + JDBC: Verify order details after checkout

Scenario:

  • Place an order → Fetch order ID from database → Compare with UI → Validate correctness
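
A minimal JDBC sketch of that scenario; the connection URL, credentials, and schema are hypothetical:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class OrderDbCheck {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:mysql://staging-db:3306/shop", "qa_user", "qa_pass");
             PreparedStatement ps = conn.prepareStatement(
                     "SELECT Status FROM Orders WHERE OrderID = ?")) {
            ps.setInt(1, 1234);                   // order ID captured from the UI
            try (ResultSet rs = ps.executeQuery()) {
                if (rs.next()) {
                    // Compare the DB value with what the UI displayed.
                    System.out.println("DB status: " + rs.getString("Status"));
                } else {
                    System.out.println("FAIL: order not found in DB");
                }
            }
        }
    }
}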


46. Explain Real-Time Scenario: API + UI Integration Testing.

Answer:

  • Verify API responses match UI behavior

  • Steps:

    1. Trigger UI action (e.g., add item to cart)

    2. Capture API request and response

    3. Validate API response (e.g., JSON data)

    4. Confirm UI reflects correct data

Tools: Selenium + RestAssured + Postman


47. How do you perform Cross-Platform Mobile Testing?

Answer:

  • Test application on iOS and Android devices

  • Check functionality, UI, performance, battery usage, network conditions

  • Tools: Appium, BrowserStack, Espresso

Example:

  • Verify login, push notifications, payments, and camera integration on Android and iOS


48. What is CI/CD Testing in Agile Projects?

Answer:

  • CI/CD ensures automated build, deployment, and testing whenever code is committed.

Steps:

  1. Developers push code → CI triggers build

  2. Automated unit, integration, and regression tests run

  3. Report generated → defects logged

  4. Code deployed to staging automatically

Tools: Jenkins, GitLab CI, Bamboo

Scenario:

  • Selenium regression suite runs automatically on Jenkins after every commit


49. Explain Data-Driven and Keyword-Driven Frameworks in Automation.

Framework | Definition | Example
Data-Driven | Test logic remains the same; test data comes from an external source (Excel, CSV, DB) | Login test with multiple usernames/passwords
Keyword-Driven | Actions driven by keywords defined externally | “Click_Login_Button” triggers Selenium code

50. How do you perform Performance Testing for Web Apps?

Steps:

  1. Identify critical transactions (login, checkout, search)

  2. Define scenarios, number of virtual users, duration

  3. Execute with JMeter / LoadRunner

  4. Monitor CPU, memory, response times, throughput

  5. Analyze results for bottlenecks

Example:

  • Test 1000 users placing orders simultaneously → Measure response time


51. What is Spike Testing?

Answer:

  • Tests system response to a sudden large increase in load

  • Checks if system can handle unexpected spikes without crashing

Example:

  • 100 → 1000 users suddenly accessing login page


52. Explain Smoke Testing vs Sanity Testing in Real-Time Projects.

Aspect | Smoke Testing | Sanity Testing
Scope | Broad, checks major functionalities | Narrow, checks specific changes
Timing | After every new build | After minor changes or bug fixes
Purpose | Verify build stability | Verify bug fixes
Example | Login, cart, checkout | Validate coupon code fix

53. What is Exploratory Testing? How is it done in Agile?

  • Definition: Testing without predefined test cases based on experience and intuition

  • In Agile: Used to validate new features quickly, especially in sprint demos or early builds

Example:

  • Open new feature → Explore different workflows, edge cases, and boundary values


54. What is Risk-Based Testing? How do you prioritize?

  • Definition: Focus testing on modules with high risk of failure

  • Steps:

    1. Identify critical features

    2. Evaluate risk impact and probability

    3. Prioritize test cases accordingly

Example:

  • Payment module → High risk → Test extensively

  • UI color changes → Low risk → Minimal testing


55. Scenario-Based: How do you handle production defect reported at night?

  1. Assess severity and priority

  2. Notify on-call developer and BA

  3. Apply hotfix or rollback if critical

  4. Perform regression testing

  5. Document defect and preventive measures


56. How do you manage Test Data in Automation for multiple environments?

  • Maintain centralized test data in Excel, JSON, or DB

  • Use environment-specific config files

  • Mask sensitive data (PCI/GDPR compliance)

  • Example: Test user credentials, API tokens, environment URLs


57. What is Continuous Testing? How is it implemented?

  • Definition: Running automated tests continuously during CI/CD pipeline

  • Implementation:

    1. Automated unit and integration tests triggered on commit

    2. Regression scripts run on staging

    3. Reports automatically generated

  • Tools: Jenkins, GitLab CI, Selenium, TestNG


58. How do you perform Security Testing in Web Applications?

  • Validate authentication & authorization

  • Test SQL injection, XSS, CSRF vulnerabilities

  • Verify SSL, encryption, and session management

  • Tools: OWASP ZAP, Burp Suite

Example:

  • Attempt SQL injection in login page → Ensure system rejects invalid input
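
A minimal Java sketch of that check using the standard java.net.http client; the endpoint and JSON fields are hypothetical:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SqlInjectionProbe {
    public static void main(String[] args) throws Exception {
        // Classic injection payload sent as the username.
        String payload = "{\"username\":\"' OR '1'='1\",\"password\":\"x\"}";
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.com/api/login"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(payload))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        // A secure app must reject this input (e.g., 400/401), never authenticate it.
        System.out.println("Status: " + response.statusCode());
    }
}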


59. Explain Real-Time Scenario: Selenium + TestNG + Jenkins Integration

  • Selenium scripts automated for regression

  • TestNG handles test execution and reporting

  • Jenkins triggers scripts in CI/CD pipeline after every commit

  • Reports emailed to QA and developers

  • Helps catch defects early


60. What are some common challenges in Automation and how do you overcome them?

Challenge | Solution
Dynamic elements in UI | Use XPath, CSS selectors, or wait strategies
Environment inconsistencies | Maintain config files per environment
Flaky tests | Use explicit waits and robust locators
Large regression suite | Parallel execution with Selenium Grid
Maintenance of scripts | Use modular/hybrid frameworks
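
To make the “flaky tests” row concrete, here is a minimal explicit-wait sketch (Selenium 4); the URL and locator are hypothetical:

import java.time.Duration;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class ExplicitWaitDemo {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://example.com/login");
            // Poll up to 10 seconds until the button is clickable,
            // instead of failing immediately on a slow or dynamic page.
            WebDriverWait wait = new WebDriverWait(driver, Duration.ofSeconds(10));
            WebElement loginButton = wait.until(
                    ExpectedConditions.elementToBeClickable(By.id("login")));
            loginButton.click();
        } finally {
            driver.quit();
        }
    }
}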