Top Interview Questions
Software testing is a critical process in the field of software development that ensures applications work as intended, are reliable, and meet user expectations. In today’s digital era, software is everywhere—from mobile apps and web applications to embedded systems in cars and medical devices. As software becomes increasingly complex, testing has become essential to ensure functionality, security, and performance. Without proper testing, even a small error can lead to significant financial loss, security breaches, or damage to an organization’s reputation.
At its core, software testing is the process of evaluating a software application or system to identify defects, ensure quality, and verify that the software meets specified requirements. It is not just about finding bugs; it is also about validating that the software behaves as expected in different scenarios. Testing can be done manually by human testers or automatically using specialized software tools. Both approaches have advantages and are often used together to create a comprehensive testing strategy.
One of the main goals of software testing is defect detection. Defects, or bugs, are errors, flaws, or inconsistencies in a program that prevent it from functioning correctly. Defects can arise from mistakes in code, misunderstandings of requirements, or unforeseen interactions between different components. By detecting defects early in the development process, testing helps reduce the cost and effort required to fix problems. Research shows that fixing defects during the initial stages of development is significantly cheaper than addressing them after deployment.
Software testing can be broadly categorized into manual testing and automated testing. Manual testing involves human testers executing test cases without the help of scripts or automation tools. Testers interact with the software as end users would, checking for functionality, usability, and user experience. Manual testing is particularly effective for exploratory testing, where the tester examines the software to uncover unexpected behaviors or edge cases.
Automated testing, on the other hand, uses tools and scripts to perform predefined tests on the software automatically. Automation is highly efficient for repetitive tasks, regression testing, and large-scale applications where manual testing would be time-consuming. Popular automation tools include Selenium, JUnit, TestNG, and Cypress. Automated testing ensures consistency, increases speed, and allows tests to be run frequently, such as during continuous integration and delivery processes.
Software testing also includes several levels, each with a specific purpose. Unit testing is the first level, where individual components or modules of a program are tested independently. The goal is to verify that each module works correctly in isolation. Unit tests are often automated and written by developers during the coding process. Integration testing comes next, focusing on interactions between modules. It ensures that combined components work together as expected.
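For example, a minimal JUnit 5 unit test for a hypothetical Calculator class could look like the sketch below (the class and method names are illustrative, not taken from any specific project):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// Hypothetical module under test: a tiny calculator.
class Calculator {
    int multiply(int a, int b) {
        return a * b;
    }
}

// Unit test verifying the module in isolation, with no other components involved.
class CalculatorTest {
    @Test
    void multiplyReturnsProductOfTwoNumbers() {
        assertEquals(42, new Calculator().multiply(6, 7));
    }
}
```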
System testing evaluates the complete application as a whole to verify that it meets the requirements. This level of testing checks both functional and non-functional aspects, including performance, security, and usability. Acceptance testing is the final level, usually performed by end users or clients, to ensure the software satisfies business requirements and is ready for deployment.
Another important aspect of software testing is the distinction between functional and non-functional testing. Functional testing examines whether the software performs its intended functions correctly. This includes testing features, user interfaces, APIs, and workflows. Non-functional testing, on the other hand, evaluates qualities such as performance, scalability, reliability, and security. For example, performance testing measures how the application behaves under heavy load, while security testing ensures that sensitive data is protected against unauthorized access.
Software testing methodologies also play a critical role in shaping the testing process. Black-box testing focuses on testing software without knowledge of the internal code. Testers provide inputs and observe outputs to verify functionality. White-box testing, in contrast, examines the internal structure of the software, including code logic, paths, and conditions. There is also gray-box testing, which combines both approaches, giving testers partial knowledge of the system to design more effective test cases.
One of the most widely adopted approaches in modern software testing is agile testing. In agile development, testing is integrated throughout the software development lifecycle rather than being a separate phase. Testers work alongside developers to perform continuous testing, ensuring that new features and updates do not introduce defects. Agile testing emphasizes collaboration, rapid feedback, and adaptability, which are essential in fast-paced development environments.
Software testing is closely linked to quality assurance (QA), but it is not the same. Quality assurance encompasses broader practices to ensure the overall quality of software, including processes, standards, documentation, and development practices. Testing is a part of QA that specifically focuses on detecting defects and validating functionality. Together, testing and QA ensure that software is reliable, maintainable, and meets user expectations.
With the rise of complex software systems, specialized testing has also become increasingly important. Security testing identifies vulnerabilities that could be exploited by hackers, such as SQL injection or cross-site scripting. Performance testing evaluates speed, responsiveness, and stability under various conditions. Usability testing ensures that software is easy to use and intuitive, while compatibility testing checks that applications work across different devices, operating systems, and browsers. Each type of testing addresses specific risks and improves the overall quality of the software.
In addition to technical benefits, software testing has economic and business advantages. High-quality software reduces customer complaints, enhances user satisfaction, and builds trust in the brand. Detecting defects early prevents costly post-release fixes, downtime, or loss of business. In competitive industries, reliable software can become a key differentiator, providing a better user experience and increasing market success.
Despite its importance, software testing can be challenging. It requires careful planning, comprehensive test coverage, and skilled testers who understand both technical and business requirements. Testers must anticipate edge cases, simulate real-world scenarios, and balance thoroughness with efficiency. They must also adapt to changing software requirements, new technologies, and evolving user expectations. However, these challenges are outweighed by the benefits of delivering high-quality, reliable, and secure software.
In conclusion, software testing is a vital component of software development that ensures applications function correctly, meet requirements, and provide a positive user experience. It involves defect detection, validation, and verification through manual and automated methods, across multiple testing levels and methodologies. Testing enhances software quality, reduces costs, improves security, and builds trust among users. In today’s fast-paced, technology-driven world, software testing is not just a technical activity—it is an essential practice that underpins the success of digital products.
Answer:
Software testing is the process of evaluating a software application to ensure that it meets the specified requirements and works as expected. It helps detect defects, improve quality, and ensure reliability.
Key Points for Freshers:
Detects bugs before the software is delivered.
Ensures the software meets functional and non-functional requirements.
Types of testing: Manual Testing and Automation Testing.
Example:
If a calculator app multiplies numbers incorrectly, testing will detect this defect before release.
Answer:
There are four main levels of testing:
Unit Testing:
Tests individual modules or components.
Usually done by developers.
Integration Testing:
Tests the interaction between integrated modules.
Ensures combined modules work correctly together.
System Testing:
Tests the complete system as per requirements.
Done by testers.
Acceptance Testing:
Checks if the software meets business requirements.
Done by the client or end-users.
Example:
For an online shopping website:
Unit Testing → Testing the login module separately.
Integration Testing → Login + Shopping Cart integration.
System Testing → Test the full website.
Acceptance Testing → End users verify that the site meets their needs.
Answer:
Functional Testing: Tests the software against functional requirements.
Examples: Unit testing, Integration testing, System testing, Acceptance testing.
Non-Functional Testing: Tests aspects like performance, usability, security.
Examples: Performance testing, Load testing, Security testing, Compatibility testing.
Key Difference:
Functional = “Does it do what it should?”
Non-Functional = “How well does it do it?”
| Aspect | Verification | Validation |
|---|---|---|
| Purpose | Ensures product is built correctly | Ensures correct product is built |
| Focus | Process | Product |
| Performed By | Developers/QA | Testers/Users |
| Example | Review of design docs | Running the application |
Memory Tip:
Verification = “Are we building it right?”
Validation = “Are we building the right thing?”
Answer:
A test case is a set of steps, input data, and expected results used to test a particular feature or functionality of the software.
Components of a Test Case:
Test Case ID
Description
Precondition
Test Steps
Expected Result
Actual Result
Status (Pass/Fail)
Example:
Test Case: Verify login with valid credentials.
Steps: Enter username & password → Click login
Expected Result: User should be logged in successfully
Answer:
A bug (or defect) is an error, flaw, or failure in software that causes it to produce incorrect or unexpected results or behave differently than intended.
Example:
Application crashes when the user enters a special character in a text field.
Severity vs Priority:
Severity: How serious the bug is (Critical, Major, Minor).
Priority: How soon it should be fixed (High, Medium, Low).
| Aspect | Manual Testing | Automation Testing |
|---|---|---|
| Definition | Testing done manually without tools | Testing using automation tools/scripts |
| Time | Time-consuming | Faster execution |
| Best for | Exploratory testing, UI testing | Repetitive testing, regression testing |
| Examples | Functional testing | Selenium, QTP, JUnit |
| Type | Definition |
|---|---|
| Smoke Testing | Checks the basic functionality of the application on the initial build. Also known as “Build Verification Testing.” |
| Sanity Testing | Checks specific functionality after minor changes or bug fixes. Focused and narrow. |
Example:
Smoke Testing → Open app, login, check main page loads.
Sanity Testing → After fixing login issue, just check login functionality.
Answer:
Regression testing ensures that recent changes or bug fixes haven’t affected existing functionality.
Example:
If a developer fixes a payment bug, regression testing ensures the login, cart, and checkout still work correctly.
| Type | Definition |
|---|---|
| Alpha Testing | Done by internal staff before releasing to real users. |
| Beta Testing | Done by real users in a real environment before final release. |
Answer:
A test plan is a document describing the scope, approach, resources, and schedule of testing activities.
Contents:
Test Plan ID
Scope
Testing Strategy
Testing Tools
Resources & Roles
Test Schedule
Risks & Mitigation
| Aspect | Severity | Priority |
|---|---|---|
| Meaning | How serious the bug is | How soon it should be fixed |
| Decided By | Tester | Project Manager / Client |
| Example | App crash (high severity) | UI typo (low priority) |
Manual Testing (test management & defect tracking):
TestRail, Jira, Bugzilla
Automation Testing:
Selenium, QTP/UFT, Katalon Studio
Performance Testing:
JMeter, LoadRunner
| Type | Definition |
|---|---|
| Black Box | Test without knowledge of internal code |
| White Box | Test with full knowledge of internal code |
| Grey Box | Test with partial knowledge of internal workings |
Be clear with terminology.
Explain examples from real-life.
Always show understanding of why testing is important.
If asked about tools, mention any you’ve practiced or learned basics of.
Answer:
A Test Strategy is a high-level document that defines the approach, objectives, and goals of testing across the project. It is static and not tied to a particular release, while a Test Plan is project/release-specific and more detailed.
Contents of Test Strategy:
Testing objectives
Testing types (functional, non-functional)
Testing approach (manual/automation)
Risk identification and mitigation
Difference Table:
| Aspect | Test Strategy | Test Plan |
|---|---|---|
| Level | Organization/project-wide | Release or module-specific |
| Detail | High-level | Detailed steps & schedule |
| Purpose | Define “how to test” broadly | Plan “what, when, who” to test |
Answer:
Exploratory testing is an unscripted testing approach where testers explore the application without predefined test cases. It relies on tester experience and intuition.
Key Points:
Helps find hidden defects.
Useful for applications with frequent changes.
No formal documentation required upfront.
Example:
Opening different screens, trying unusual input combinations, and checking app behavior.
| Testing Type | Definition |
|---|---|
| Load Testing | Checks system behavior under expected workload. |
| Stress Testing | Checks system behavior under extreme conditions or overload. |
| Performance Testing | Measures speed, responsiveness, and stability under a workload. |
Example:
Load: 500 users log in simultaneously.
Stress: 5000 users log in simultaneously to find the breaking point.
Performance: Check response time of login page.
Answer:
Test Scenario: A high-level description of what to test.
Test Case: Step-by-step instructions to test a specific scenario.
Example:
Scenario → Test login functionality of the app.
Test Case → Enter username & password, click login, verify dashboard opens.
Answer:
Test data is the input data used to execute test cases and validate functionality.
Types:
Valid Data: Meets requirements.
Invalid Data: Violates requirements to check error handling.
Example:
Valid: Email = “user@example.com”
Invalid: Email = “user@.com”
Ways to create:
Manually by testers
Using automated scripts
Using database queries
Answer:
Defect Life Cycle is the journey of a defect from discovery to closure.
States:
New – Bug is reported
Assigned – Developer assigned
Open – Developer starts fixing
Fixed – Bug resolved
Retest – Tester retests the fix
Closed – Bug verified and closed
Reopen – If issue persists
Example:
If a login bug is found → Assigned → Fixed → Retested → Closed.
Answer:
Equivalence Partitioning (EP): Divides input data into valid and invalid partitions to reduce test cases.
Boundary Value Analysis (BVA): Focuses on edges or boundaries of input values.
Example:
Input range: 1–100
EP: Valid = 1–100, Invalid <1 or >100
BVA: Test values = 0, 1, 2, 99, 100, 101
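As a hedged sketch, the BVA values above could be driven through a JUnit 5 parameterized test, assuming a hypothetical isInRange() check that accepts 1–100:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

class BoundaryValueTest {

    // Hypothetical validation under test: accepts values in the range 1-100.
    static boolean isInRange(int value) {
        return value >= 1 && value <= 100;
    }

    // One row per boundary value, with the expected outcome for each.
    @ParameterizedTest
    @CsvSource({"0,false", "1,true", "2,true", "99,true", "100,true", "101,false"})
    void boundaryValuesAreHandledCorrectly(int value, boolean expected) {
        assertEquals(expected, isInRange(value));
    }
}
```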
| Term | Definition |
|---|---|
| Error | Mistake made by developer in code or logic |
| Defect/Bug | Issue found in software during testing caused by the error |
Example:
Developer wrote wrong formula → Error
App shows wrong calculation → Defect
Answer:
Usability testing ensures the application is user-friendly. It evaluates:
Ease of use
Learnability
UI clarity
Example:
Check if a user can easily navigate the shopping cart and complete checkout.
Answer:
Configuration testing checks the application’s behavior in different environments:
Operating systems
Browsers
Devices
Example:
A web app should work on Windows, Mac, Android, Chrome, Firefox.
| Aspect | Ad-hoc Testing | Exploratory Testing |
|---|---|---|
| Approach | Random, informal | Structured but unscripted |
| Documentation | Usually none | Sometimes documented |
| Purpose | Find defects quickly | Learn app and find hidden defects |
Answer:
A test environment is a setup where testing is executed, including:
Hardware
Software
Network
Database
Example:
Testing a mobile app on Android 13 with Chrome browser using test data in the staging database.
Answer:
Cyclomatic Complexity is a metric used in white-box testing to measure the complexity of code.
Higher complexity → More test cases required.
Formula:
M = E – N + 2P
E = Number of edges
N = Number of nodes
P = Number of connected components
Example:
For a simple “if-else” code, complexity = 2.
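As an illustration (the method below is hypothetical), a single if-else gives one decision point, so two independent paths need to be tested:

```java
public class AgeClassifier {
    // Control-flow graph for this method: E = 4 edges, N = 4 nodes, P = 1 component,
    // so M = E - N + 2P = 4 - 4 + 2 = 2 -> at least two test cases, one per path.
    public static String classifyAge(int age) {
        if (age >= 18) {
            return "adult"; // path 1
        } else {
            return "minor"; // path 2
        }
    }
}
```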
| Aspect | Static Testing | Dynamic Testing |
|---|---|---|
| Definition | Testing without executing code | Testing by executing code |
| Example | Code review, walkthrough | Unit testing, functional testing |
| Objective | Find defects early | Validate functionality |
| Aspect | Severity | Priority | Risk |
|---|---|---|---|
| Meaning | How serious the bug is | How soon to fix | Chance of failure impacting business |
| Example | App crash | Typo on homepage | Server downtime during peak hours |
We touched on this before, but here’s a deeper look:
| Aspect | Verification | Validation |
|---|---|---|
| Purpose | Ensure product is being built correctly | Ensure product being built is what user wants |
| Type | Static process (reviews, walkthroughs) | Dynamic process (actual testing) |
| Question Answered | “Are we building it right?” | “Are we building the right product?” |
| Performed By | Developers/QA | Testers/User |
| Example | Reviewing SRS, Design Document | Executing test cases on the software |
Already explained, but here’s a memory-friendly summary:
Severity: Impact of a bug on the system. Example: App crashes = High severity.
Priority: How quickly the bug should be fixed. Example: Typo on the homepage = Low severity, but High priority if it affects branding.
Tip for interview: They may ask, “High severity, low priority example?” → System crash in rarely used feature.
| Aspect | Functional Testing | Non-Functional Testing |
|---|---|---|
| Focus | What the system does | How the system performs |
| Objective | Check features against requirements | Check performance, usability, reliability |
| Example | Login, payment processing | Load testing, security testing |
| Aspect | Smoke Testing | Sanity Testing |
|---|---|---|
| Scope | Broad, covers major functionalities | Narrow, focuses on specific functionality |
| Purpose | Verify if build is stable | Verify if specific bugs are fixed |
| When performed | Initial build verification | After minor changes or bug fixes |
Test Scenario: High-level idea of what to test.
Test Case: Step-by-step procedure with inputs, execution, and expected results.
Example:
Scenario → Check login functionality
Test Case → Enter username/password, click login, verify dashboard appears
Definition: Ensures existing functionalities are not broken after changes like bug fixes or new features.
When to perform:
After bug fixes
After new feature implementation
After performance optimizations
Example:
If a developer fixes “forgot password” bug, regression testing ensures login, signup, and payment are still working.
| Type | Alpha Testing | Beta Testing |
|---|---|---|
| Performed By | Internal team | Actual users |
| Environment | Controlled/testing environment | Real environment |
| Purpose | Detect bugs before external release | Get user feedback |
| Example | QA team tests a new messaging app | Users test app before official launch |
| Testing Type | Description |
|---|---|
| Black Box | Test without knowing code |
| White Box | Test with full knowledge of code |
| Grey Box | Test with partial knowledge of code |
Example:
Black Box → Testing login form by entering valid/invalid data.
White Box → Testing all “if-else” paths in login function.
Grey Box → Test login with knowledge of session handling but not all code.
Definition: Informal testing without documentation or planning.
Purpose: Find defects quickly.
Difference from Exploratory Testing: Exploratory testing is semi-structured, while ad-hoc testing is random.
Example:
Clicking random buttons or entering unusual data to see app response.
Equivalence Partitioning (EP): Divides input data into valid and invalid partitions.
Boundary Value Analysis (BVA): Focuses on edges of input values where defects often occur.
Example:
Input range: 1–100
EP → Valid: 1–100, Invalid: <1 or >100
BVA → Test values: 0,1,2,99,100,101
Definition: Data used to execute test cases and validate software.
Types:
Valid data → meets requirements
Invalid data → violates requirements
Preparation: Manually, from DB, or automated scripts.
Example:
Valid email: user@example.com
Invalid email: user@.com
Definition: Setup where testing is performed, including hardware, software, network, and database.
Example: Test a mobile app on Android 13 with Chrome browser and staging database.
Definition: Journey of a defect from discovery to closure.
States: New → Assigned → Open → Fixed → Retest → Closed → Reopen
Example:
Login bug → Assigned → Fixed → Retested → Closed
Definition: Checks if the application is user-friendly and intuitive.
Focus: Ease of use, UI clarity, learnability
Example: Users should easily navigate a shopping cart and complete checkout.
Definition: Tests application behavior in different environments, OS, browsers, or devices.
Example: Web app works on Windows, Mac, Android, Chrome, Firefox.
Definition: A metric to measure code complexity in white-box testing.
Formula: M = E – N + 2P
E = Number of edges
N = Number of nodes
P = Number of connected components
Example: Simple if-else code → complexity = 2
| Aspect | Static Testing | Dynamic Testing |
|---|---|---|
| Code Execution | Not required | Required |
| Purpose | Find defects early | Validate functionality |
| Example | Code review, walkthrough | Unit testing, functional testing |
| Aspect | Test Strategy | Test Plan |
|---|---|---|
| Level | High-level, organization/project-wide | Detailed, module or release specific |
| Focus | Approach, objectives, tools | Scope, schedule, resources |
| Document Type | Static | Dynamic |
Definition: Testing based on the risk of failure and its impact on business.
Steps:
Identify risks
Analyze risk severity
Prioritize test cases based on risk
Example: Payment module is high-risk → tested first
| Aspect | Performance Testing | Load Testing |
|---|---|---|
| Definition | Measures system speed, responsiveness, stability | Checks system behavior under expected load |
| Purpose | Identify bottlenecks | Ensure system can handle workload |
| Example | Page load time, server response | 500 users login simultaneously |
Answer:
STLC (Software Testing Life Cycle) defines the sequence of activities performed during the testing process.
Phases of STLC:
Requirement Analysis:
Understand functional & non-functional requirements
Identify testable requirements
Test Planning:
Prepare test plan, select testing types
Estimate resources, effort, and timelines
Test Case Design / Test Development:
Write test cases & prepare test data
Review and baseline test cases
Test Environment Setup:
Prepare servers, databases, browsers, devices
Confirm environment readiness
Test Execution:
Execute test cases
Log defects in a tracking tool (JIRA, Bugzilla)
Test Cycle Closure:
Prepare test summary report
Analyze defect density, test coverage, and lessons learned
Example:
If you are testing an e-commerce checkout flow:
Requirement Analysis → Understand payment flow
Test Planning → Decide to do functional + regression + performance testing
Test Case Design → Write cases for cart, coupon, payment
Environment Setup → Configure staging server, test cards
Test Execution → Execute, log defects
Test Closure → Share report with metrics
| Aspect | QA (Quality Assurance) | QC (Quality Control) | Testing |
|---|---|---|---|
| Definition | Process-oriented | Product-oriented | Detect defects |
| Focus | Process improvement | Detect product defects | Validate functionality |
| Timing | Throughout the development process | During or after development | During or after development |
| Example | Define coding standards, guidelines | Review deliverables, perform inspections | Execute test cases |
Answer:
A Test Plan is a detailed document specifying how testing will be performed.
Contents:
Test Plan ID, Scope, Objectives
Test Strategy & Approach
Testing Tools & Environment
Resource Allocation & Roles
Schedule & Milestones
Risk Identification & Mitigation
Entry & Exit Criteria
Example:
For a login module, the test plan includes the browsers to test, the manual or automation approach, and roles such as who will test and who will log defects.
Severity: Impact of defect on system.
Priority: Urgency to fix the defect.
Example:
High severity, Low priority: Rarely used feature crashes (critical but low priority).
Low severity, High priority: Typo on landing page (minor, but needs quick fix for client).
Answer:
Regression Testing ensures existing functionality is not broken after new changes.
Approach for frequent releases:
Maintain a Regression Test Suite
Automate critical business workflows using Selenium/TestNG
Prioritize high-risk areas
Execute regression after every sprint or release
Example:
In an e-commerce website, automate login, add to cart, checkout for regression testing.
Answer:
Automation Testing uses tools or scripts to execute test cases automatically.
Advantages:
Faster execution
Reusable test scripts
Higher accuracy
Ideal for regression testing
Tools you can mention: Selenium, QTP/UFT, Appium, Postman for API testing, JMeter for performance testing.
| Type | Definition & Example |
|---|---|
| Black Box | Test without knowing internal code. Example: Testing login page functionality by entering valid/invalid credentials |
| White Box | Test with full code knowledge. Example: Testing all conditional statements in login function |
| Grey Box | Partial knowledge of code. Example: Test login knowing session handling logic but not full code |
Selenium WebDriver: Code-based automation tool that supports multiple languages (Java, Python, C#) and browsers.
Selenium IDE: Record & playback tool, mostly for quick prototyping.
Difference:
| Feature | WebDriver | IDE |
|---|---|---|
| Language Support | Java, Python, C# | Only JavaScript |
| Browser Support | Chrome, Firefox, Edge | Chrome, Firefox |
| Flexibility | High (can integrate CI/CD) | Low |
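For reference, a minimal WebDriver-style script might look like the sketch below (the URL and locators are placeholders, and a local ChromeDriver installation is assumed):

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class LoginSmokeCheck {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();        // assumes chromedriver is available locally
        try {
            driver.get("https://example.com/login");  // placeholder URL
            driver.findElement(By.id("username")).sendKeys("testuser");  // placeholder locators
            driver.findElement(By.id("password")).sendKeys("secret");
            driver.findElement(By.id("loginButton")).click();
            System.out.println("Title after login: " + driver.getTitle());
        } finally {
            driver.quit();                            // always release the browser
        }
    }
}
```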
Data-Driven: Test logic remains same, but inputs come from external sources (Excel, CSV, DB).
Example: Login test with multiple username/password combinations.
Keyword-Driven: Test actions are driven by keywords defined in external files.
Example: Keyword “Click_Login_Button” triggers Selenium code for login.
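A minimal sketch of the keyword-driven idea, assuming the keywords normally come from an external file and each one maps to a reusable action (the keyword names and actions here are illustrative stubs):

```java
import java.util.List;
import java.util.Map;

public class KeywordRunner {

    // Each keyword maps to a reusable action; in a real framework these would wrap
    // Selenium calls (navigate, type text, click, verify).
    private static final Map<String, Runnable> ACTIONS = Map.of(
            "Open_Login_Page",    () -> System.out.println("navigate to the login page"),
            "Enter_Credentials",  () -> System.out.println("type username and password"),
            "Click_Login_Button", () -> System.out.println("click the login button"));

    public static void main(String[] args) {
        // In practice this sequence is read from Excel/CSV; hard-coded here for brevity.
        List<String> steps = List.of("Open_Login_Page", "Enter_Credentials", "Click_Login_Button");
        steps.forEach(keyword -> ACTIONS.get(keyword).run());
    }
}
```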
| Aspect | API Testing | UI Testing |
|---|---|---|
| Focus | Backend functionality, request/response | Front-end appearance & user interaction |
| Tools | Postman, SoapUI | Selenium, Cypress |
| Speed | Faster | Slower |
| Automation | Easy | Moderate |
Example:
Checking that the user login API returns the correct token vs. checking the login button functionality on the web page.
Definition: CI is the process of automatically building and testing software whenever code is committed.
Tools: Jenkins, GitLab CI, Bamboo
Testing Link: Automated test scripts run during CI to catch defects early.
Example:
Commit code → Jenkins triggers build → Selenium scripts run → defects reported automatically.
@BeforeSuite / @AfterSuite: Execute before/after entire suite
@BeforeTest / @AfterTest: Execute before/after test tag in XML
@BeforeClass / @AfterClass: Execute before/after class
@BeforeMethod / @AfterMethod: Execute before/after each test method
@Test: Defines a test case
Example: Use @BeforeMethod to open browser, @AfterMethod to close browser after each test.
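A hedged sketch of that lifecycle with Selenium WebDriver (the URL and the title check are placeholders):

```java
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.testng.Assert;
import org.testng.annotations.AfterMethod;
import org.testng.annotations.BeforeMethod;
import org.testng.annotations.Test;

public class LoginLifecycleTest {
    private WebDriver driver;

    @BeforeMethod
    public void openBrowser() {
        driver = new ChromeDriver();              // runs before every @Test method
    }

    @Test
    public void loginPageLoads() {
        driver.get("https://example.com/login");  // placeholder URL
        Assert.assertTrue(driver.getTitle().toLowerCase().contains("login"));
    }

    @AfterMethod
    public void closeBrowser() {
        driver.quit();                            // runs after every @Test method
    }
}
```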
Identify critical business transactions
Use tools like JMeter/LoadRunner
Define load scenarios (number of users, duration)
Execute and monitor CPU, memory, response times
Analyze results for bottlenecks
Example: Test 1000 users simultaneously logging in and placing orders.
Log issue with priority, severity, and detailed steps
Communicate with developer and BA
Create hotfix or patch release if critical
Execute regression testing before deploying
Example: Payment gateway failure → High severity → Immediate fix → Regression before deployment
Recheck requirements & SRS documents
Reproduce defect with screenshots, steps, test data
Discuss with QA Lead or BA
If confirmed as bug → assign in defect tracking tool
Document decision if it’s truly expected behavior
Agile Testing happens alongside development in iterative sprints.
Roles: Tester participates in daily stand-ups, sprint planning, backlog refinement.
Key Points: Regression in every sprint, exploratory testing, automation integration.
Example: You tested login, cart, checkout in every sprint and automated regression using Selenium + Jenkins.
Early involvement in requirement discussions
Maintain automated regression suite
Continuous communication with developers
Perform risk-based and exploratory testing
Track metrics like defect leakage, test coverage
Test application compatibility across multiple browsers and OS
Tools: Selenium Grid, BrowserStack, LambdaTest
Example: Application works on Chrome, Firefox, Safari, Edge
States: New → Assigned → Open → Fixed → Retest → Closed → Reopen → Deferred
Real-time Scenario:
User reports login crash → Assigned to dev → Fixed → Retest → Closed
Prioritize critical business modules
Automate regression to save time
Perform smoke testing on all modules
Delay non-critical or low-risk modules
Answer:
SQL Testing (Database Testing) verifies that the database operations, data integrity, and schema are working correctly.
Steps:
Verify data retrieval with SELECT queries.
Test data insertion, update, and deletion.
Check constraints, triggers, and stored procedures.
Validate data consistency between front-end and database.
Example:
Verify that after a user places an order, the orders table correctly reflects the transaction.
Test query: SELECT * FROM Orders WHERE OrderID=1234
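A hedged JDBC sketch of that verification (the connection string, credentials, and column names are assumptions):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class OrderDbCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details for a test/staging database.
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:mysql://localhost:3306/shopdb", "test_user", "test_password");
             PreparedStatement stmt = conn.prepareStatement(
                     "SELECT status, total_amount FROM Orders WHERE OrderID = ?")) {
            stmt.setInt(1, 1234);
            try (ResultSet rs = stmt.executeQuery()) {
                if (rs.next()) {
                    // Compare these values with what the application reported in the UI.
                    System.out.println("status=" + rs.getString("status")
                            + ", total=" + rs.getBigDecimal("total_amount"));
                } else {
                    System.out.println("Order 1234 not found - possible defect");
                }
            }
        }
    }
}
```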
Answer:
API Testing validates backend endpoints without GUI interaction.
Differences:
| Aspect | API Testing | UI Testing |
|---|---|---|
| Focus | Backend functionality | Front-end appearance & behavior |
| Speed | Fast | Slower |
| Tools | Postman, SoapUI, RestAssured | Selenium, Cypress |
| Automation | Easy | Moderate |
Example:
Testing that the POST /login API returns a valid token vs. testing the login button on the web page.
Answer:
End-to-End Testing ensures the complete application workflow works as expected.
Steps:
Identify critical business scenarios.
Prepare test cases covering UI + API + DB + integrations.
Execute test cases in production-like environment.
Verify expected results at every step.
Example:
For an e-commerce website: User logs in → adds items to cart → applies coupon → makes payment → receives email confirmation.
Answer:
Continuous Integration (CI) automatically builds and tests the application whenever code is committed.
Importance:
Detect defects early
Reduces integration issues
Enables automated regression
Example Tools: Jenkins, GitLab CI, Bamboo
Scenario:
Developers push code → Jenkins triggers Selenium automation suite → Defects reported automatically
Answer:
A Test Automation Framework is a set of guidelines, tools, and libraries for automated testing.
Types:
Linear Scripting – Sequential execution (not reusable)
Modular Testing – Reusable modules
Data-Driven Framework – Test logic separate from test data
Keyword-Driven Framework – Test steps driven by keywords
Hybrid Framework – Combination of above
Example:
Login module automated using Data-Driven + Selenium + TestNG
Answer:
Cross-Browser Testing verifies the application works across different browsers and OS combinations.
Tools: Selenium Grid, BrowserStack, LambdaTest
Example:
Test an e-commerce site on Chrome, Firefox, Edge, Safari across Windows, Mac, Android, iOS.
Answer:
Mobile Testing verifies applications on mobile devices.
Types:
Functional Testing
UI Testing
Performance Testing (Battery, Memory, Network)
Tools: Appium, Robotium, Espresso
Example:
Test login, push notifications, responsiveness, and app crash scenarios on Android and iOS devices.
| Aspect | Performance Testing | Load Testing |
|---|---|---|
| Definition | Measures responsiveness, speed, stability | Checks system under expected workload |
| Focus | Bottlenecks, response time | Capacity handling |
| Tools | JMeter, LoadRunner | JMeter, LoadRunner |
| Example | Page loads in <2 sec | 500 users logging in simultaneously |
Stress Testing: Tests system under extreme conditions until it fails.
Example: 5000 users login simultaneously.
Spike Testing: Tests system response to sudden spike in load.
Example: 100 → 1000 users suddenly logging in.
Answer:
Security Testing ensures application is protected from vulnerabilities and attacks.
Types:
Penetration Testing
Vulnerability Scanning
Authentication & Authorization Testing
SQL Injection, XSS testing
Example:
Check login page is not vulnerable to SQL injection.
Log defect in defect tracking tool (JIRA, Bugzilla)
Include steps, screenshots, severity, priority
Communicate with developer and BA
Retest after fix
Close defect only after verification
Scenario:
Payment gateway fails → Log as high severity → Assign to developer → Retest → Close
Supports annotations like @BeforeMethod, @AfterMethod, @Test
Parallel test execution
Grouping of tests
Data-driven testing via @DataProvider
Reporting with HTML/XML
Example:
@DataProvider feeds multiple username/password combinations for login automation.
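For instance, a minimal @DataProvider feeding several credential pairs to one test method might look like this (the credentials and the login check are illustrative stubs):

```java
import org.testng.Assert;
import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class LoginDataDrivenTest {

    // Each row is one test execution: username, password, expected outcome.
    @DataProvider(name = "credentials")
    public Object[][] credentials() {
        return new Object[][] {
                {"validUser", "correctPass", true},
                {"validUser", "wrongPass",   false},
                {"",          "anyPass",     false}
        };
    }

    @Test(dataProvider = "credentials")
    public void loginBehavesAsExpected(String user, String pass, boolean expected) {
        // Stand-in for the real login call (UI or API) so the sketch stays self-contained.
        boolean actual = "validUser".equals(user) && "correctPass".equals(pass);
        Assert.assertEquals(actual, expected);
    }
}
```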
Answer:
CI/CD pipeline testing ensures automated build, deployment, and testing happen without manual intervention.
Steps:
Developers commit code
CI server builds project
Automated unit, integration, and regression tests run
Reports sent to QA/devs
Tools: Jenkins, GitLab CI, Bamboo
Inner Join: Returns matching records from two tables
Left Join: Returns all records from left table, matching from right
Right Join: Returns all from right table, matching from left
Full Join: Returns all records, matched or unmatched
Example:
Orders table + Users table → Verify user orders using SQL joins.
RestAssured: Java-based API automation framework
Postman: Manual and automated API testing using collections and Newman CLI
Example:
Automate login API → Verify token → Validate response code 200
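A hedged REST Assured sketch of that flow (the base URL, endpoint, and JSON field names are assumptions):

```java
import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.notNullValue;

public class LoginApiCheck {
    public static void main(String[] args) {
        given()
            .baseUri("https://api.example.com")                      // placeholder base URL
            .contentType("application/json")
            .body("{\"username\":\"testuser\",\"password\":\"secret\"}")
        .when()
            .post("/login")                                          // assumed endpoint
        .then()
            .statusCode(200)                                         // validate response code 200
            .body("token", notNullValue());                          // verify a token is returned
    }
}
```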
Maintain centralized test data in Excel, JSON, or DB
Use Data-Driven Testing to feed dynamic values
Mask sensitive data for security compliance
Ensure environment consistency
Smoke Testing: Initial check of major functionalities in each sprint
Sanity Testing: Narrow testing for specific bug fixes or minor changes
Example:
Smoke: Login, add to cart, checkout
Sanity: Verify bug fix for coupon code
Participate in sprint planning and backlog grooming
Identify test scenarios before development
Perform continuous regression testing
Use automation scripts in CI/CD for faster delivery
Report defects and track metrics
Identify high-risk modules impacting business
Focus testing efforts on critical features
Low-risk modules tested lightly or delayed
Example:
Payment and login modules → High risk → Tested thoroughly
UI color changes → Low risk → Minimal testing
Assess severity and priority
Notify on-call developer and BA
Apply hotfix or rollback if critical
Perform regression testing in production or staging
Document issue and preventive measures
Answer:
Defect clustering is when most defects are concentrated in a few modules of the application.
Explanation:
According to the Pareto principle (80/20 rule), 80% of defects usually occur in 20% of modules.
Handling:
Focus more testing on modules with frequent defects
Perform code review and deeper functional testing
Update regression suite with test cases for that module
Example:
In an e-commerce app, 70% of defects are in the payment module → Prioritize testing there.
Answer:
Test coverage measures the percentage of application tested to ensure requirements are validated.
Types:
Requirement coverage: How many requirements have test cases
Code coverage: How much code is executed by test cases (statement, branch, path)
Formula:
Test Coverage (%) = (Number of requirements tested / Total requirements) × 100
Example:
Total 100 requirements, tested 90 → Test coverage = 90%
Answer:
Defect leakage occurs when defects escape from one testing phase to the next, or even to production.
Prevention:
Review requirements and test cases thoroughly
Perform peer reviews and walkthroughs
Execute end-to-end and regression testing
Automate critical workflows
Example:
Login crash found in production → Missed during UAT → Defect leakage
Answer:
Framework provides guidelines, libraries, and reusable code for automation.
Hybrid Framework:
Combines Data-Driven + Keyword-Driven + Modular testing
Maximizes reusability and flexibility
Supports CI/CD and multiple environments
Example:
Login module: Keywords define actions, data comes from Excel, modules reusable for checkout and payment.
Answer:
Connect automation scripts to the database
Use SQL queries to verify data integrity
Compare expected results with DB results
Example in Selenium + JDBC: Verify order details after checkout
Scenario:
Place an order → Fetch order ID from database → Compare with UI → Validate correctness
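A condensed sketch of that scenario combining Selenium and JDBC (the locator, connection details, and table name are assumptions):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.testng.Assert;

public class OrderUiVsDbCheck {
    public static void main(String[] args) throws Exception {
        WebDriver driver = new ChromeDriver();
        try {
            // 1. Read the order ID shown on the confirmation page (placeholder URL/locator).
            driver.get("https://example.com/order-confirmation");
            String uiOrderId = driver.findElement(By.id("orderId")).getText();

            // 2. Look up the same order in the staging database.
            try (Connection conn = DriverManager.getConnection(
                         "jdbc:mysql://localhost:3306/shopdb", "test_user", "test_password");
                 PreparedStatement stmt = conn.prepareStatement(
                         "SELECT OrderID FROM Orders WHERE OrderID = ?")) {
                stmt.setString(1, uiOrderId);
                try (ResultSet rs = stmt.executeQuery()) {
                    // 3. The order displayed in the UI must exist in the database.
                    Assert.assertTrue(rs.next(), "Order " + uiOrderId + " not found in DB");
                }
            }
        } finally {
            driver.quit();
        }
    }
}
```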
Answer:
Verify API responses match UI behavior
Steps:
Trigger UI action (e.g., add item to cart)
Capture API request and response
Validate API response (e.g., JSON data)
Confirm UI reflects correct data
Tools: Selenium + RestAssured + Postman
Answer:
Test application on iOS and Android devices
Check functionality, UI, performance, battery usage, network conditions
Tools: Appium, BrowserStack, Espresso
Example:
Verify login, push notifications, payments, and camera integration on Android and iOS
Answer:
CI/CD ensures automated build, deployment, and testing whenever code is committed.
Steps:
Developers push code → CI triggers build
Automated unit, integration, and regression tests run
Report generated → defects logged
Code deployed to staging automatically
Tools: Jenkins, GitLab CI, Bamboo
Scenario:
Selenium regression suite runs automatically on Jenkins after every commit
| Framework | Definition & Example |
|---|---|
| Data-Driven | Test logic remains the same; test data comes from an external source (Excel, CSV, DB). Example: login test with multiple usernames/passwords |
| Keyword-Driven | Actions are driven by keywords defined externally. Example: “Click_Login_Button” triggers Selenium code |
Steps:
Identify critical transactions (login, checkout, search)
Define scenarios, number of virtual users, duration
Execute with JMeter / LoadRunner
Monitor CPU, memory, response times, throughput
Analyze results for bottlenecks
Example:
Test 1000 users placing orders simultaneously → Measure response time
Answer:
Tests system response to a sudden large increase in load
Checks if system can handle unexpected spikes without crashing
Example:
100 → 1000 users suddenly accessing login page
| Aspect | Smoke Testing | Sanity Testing |
|---|---|---|
| Scope | Broad, checks major functionalities | Narrow, checks specific changes |
| Timing | After every new build | After minor changes or bug fixes |
| Purpose | Verify build stability | Verify bug fixes |
| Example | Login, cart, checkout | Validate coupon code fix |
Definition: Testing without predefined test cases based on experience and intuition
In Agile: Used to validate new features quickly, especially in sprint demos or early builds
Example:
Open new feature → Explore different workflows, edge cases, and boundary values
Definition: Focus testing on modules with high risk of failure
Steps:
Identify critical features
Evaluate risk impact and probability
Prioritize test cases accordingly
Example:
Payment module → High risk → Test extensively
UI color changes → Low risk → Minimal testing
Assess severity and priority
Notify on-call developer and BA
Apply hotfix or rollback if critical
Perform regression testing
Document defect and preventive measures
Maintain centralized test data in Excel, JSON, or DB
Use environment-specific config files
Mask sensitive data (PCI/GDPR compliance)
Example: Test user credentials, API tokens, environment URLs
Definition: Running automated tests continuously during CI/CD pipeline
Implementation:
Automated unit and integration tests triggered on commit
Regression scripts run on staging
Reports automatically generated
Tools: Jenkins, GitLab CI, Selenium, TestNG
Validate authentication & authorization
Test SQL injection, XSS, CSRF vulnerabilities
Verify SSL, encryption, and session management
Tools: OWASP ZAP, Burp Suite
Example:
Attempt SQL injection in login page → Ensure system rejects invalid input
Selenium scripts automated for regression
TestNG handles test execution and reporting
Jenkins triggers scripts in CI/CD pipeline after every commit
Reports emailed to QA and developers
Helps catch defects early
| Challenge | Solution |
|---|---|
| Dynamic elements in UI | Use XPath, CSS selectors, or wait strategies |
| Environment inconsistencies | Maintain config files per environment |
| Flaky tests | Use explicit waits and robust locators |
| Large regression suite | Parallel execution with Selenium Grid |
| Maintenance of scripts | Use modular/hybrid frameworks |
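For the flaky-tests row above, a brief sketch of an explicit wait with a robust locator (the URL and locator are placeholders):

```java
import java.time.Duration;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class ExplicitWaitExample {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://example.com/login");  // placeholder URL

            // Wait up to 10 seconds for the element to be clickable instead of using
            // fixed sleeps, which is what typically makes tests on dynamic pages flaky.
            WebDriverWait wait = new WebDriverWait(driver, Duration.ofSeconds(10));
            WebElement loginButton = wait.until(
                    ExpectedConditions.elementToBeClickable(By.cssSelector("button#loginButton")));
            loginButton.click();
        } finally {
            driver.quit();
        }
    }
}
```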