Top Interview Questions
In the world of software development, ensuring the quality of software products is of paramount importance. Software bugs, errors, and inconsistencies can lead to costly mistakes, poor user experience, and reputational damage. This is where software testing comes into play, serving as a critical phase in the Software Development Life Cycle (SDLC). Among the various testing techniques available, Manual Testing remains a fundamental approach, even in the era of automation.
Manual Testing is the process of manually executing test cases without the use of any automated tools. Testers perform the role of an end-user by interacting with the application to identify any defects, inconsistencies, or unexpected behaviors. This testing technique is performed throughout the software development lifecycle, especially during the early stages, to ensure the application meets the specified requirements and works as intended.
Unlike automated testing, where scripts and tools are used to perform repetitive tasks, manual testing relies entirely on the skill, experience, and observation of the tester. It is particularly useful for applications with dynamic requirements, frequently changing interfaces, or where human observation is crucial.
The primary goals of manual testing include:
Identifying Defects: The foremost objective is to uncover errors, bugs, or defects in the software application that could affect its functionality, usability, or performance.
Ensuring Quality: Manual testing ensures the software meets the quality standards defined by the organization or project.
Validating Functionality: It helps verify that the application behaves as expected according to the requirements specification.
Enhancing User Experience: By simulating end-user scenarios, manual testing helps ensure that the application is user-friendly and intuitive.
Providing Feedback: Testers provide critical feedback to developers, enabling them to fix issues and improve the software’s overall quality.
Manual testing encompasses several types, each serving a specific purpose. Some common types include:
Black Box Testing: Testers examine the functionality of an application without knowing its internal code structure. It focuses on input and output validation.
White Box Testing: Also known as structural testing, it involves understanding the internal logic and code structure of the application to design test cases.
Functional Testing: Ensures that the application’s features function as per the requirements. Testers validate individual functions and workflows.
Non-Functional Testing: This tests aspects such as performance, usability, reliability, and security rather than specific functionalities.
Regression Testing: Conducted to ensure that recent code changes have not adversely affected existing functionalities.
Sanity and Smoke Testing: Quick checks performed to verify whether the software build is stable enough for further testing.
Acceptance Testing: Performed to determine whether the software meets business requirements and is ready for deployment.
Exploratory Testing: Testers actively explore the application without predefined test cases, discovering potential issues through intuition and experience.
Manual testing is a structured process that ensures systematic evaluation of the software. The typical steps include:
Requirement Analysis: Testers study the project requirements and specifications to understand the expected behavior of the software.
Test Planning: This phase involves defining the scope, objectives, resources, timelines, and strategies for testing.
Test Case Design: Testers create detailed test cases, including preconditions, inputs, expected results, and postconditions.
Test Environment Setup: The necessary hardware, software, network configurations, and databases are prepared to replicate the real user environment.
Test Execution: Testers manually execute the test cases, comparing actual outcomes with expected results.
Defect Reporting: Any discrepancies or defects identified during testing are logged in a defect tracking tool for developers to fix.
Retesting and Regression Testing: After defects are fixed, testers re-execute the test cases and perform regression testing to ensure no new issues are introduced.
Test Closure: Once testing is complete, a summary report is prepared, including metrics, defects, lessons learned, and recommendations.
Manual testing has several advantages that make it indispensable in software quality assurance:
Human Observation: Testers can detect subtle issues, such as UI inconsistencies or user experience problems, which automated tools might overlook.
Flexibility: Testers can adapt to changes in requirements and explore new functionalities without waiting for automation scripts to be updated.
Early Bug Detection: Manual testing can start early in the SDLC, helping identify defects in the initial stages of development.
Cost-Effective for Small Projects: For small-scale applications with limited functionality, manual testing is often more practical than investing in automation.
Exploratory Testing: Allows testers to discover unexpected defects by exploring the software creatively.
Despite its advantages, manual testing also has limitations:
Time-Consuming: Manual execution of test cases, especially for large and complex applications, can be slow and labor-intensive.
Human Error: Testers may overlook defects due to fatigue or lack of attention.
Repetitive Nature: Repeated execution of test cases for regression testing can become monotonous.
Scalability Issues: Manual testing may not be feasible for large-scale projects with frequent releases and extensive test coverage.
Limited Coverage: Some scenarios, especially performance or load testing, are difficult to achieve manually.
While manual testing itself does not involve automation, several tools can assist testers in managing test cases and tracking defects:
Test Management Tools: Tools like TestRail, Quality Center (ALM), and Zephyr help organize test cases and plan test execution.
Bug Tracking Tools: Tools like JIRA, Bugzilla, and Mantis allow testers to log, track, and manage defects efficiently.
Documentation Tools: Tools such as Confluence help in creating test documentation and sharing knowledge with the team.
To maximize the effectiveness of manual testing, the following best practices are recommended:
Understand Requirements Thoroughly: Testers should have a clear understanding of business and functional requirements before designing test cases.
Prioritize Test Cases: Focus on high-risk areas and critical functionalities first.
Maintain Test Documentation: Keep test cases, scenarios, and defect reports well-documented for future reference.
Regular Communication: Collaborate closely with developers, business analysts, and stakeholders.
Continuous Learning: Stay updated with industry trends, new testing techniques, and tools.
While automated testing uses scripts and tools to perform repetitive tasks efficiently, manual testing remains essential for areas where human judgment is critical. Automation is ideal for regression, performance, and repetitive tests, whereas manual testing excels in exploratory testing, UI/UX validation, and scenarios with frequent changes. Often, a hybrid approach combining manual and automated testing provides the best results.
Answer:
Software Testing is the process of evaluating a software application to identify defects, ensure it meets the requirements, and verify that it works as intended. The goal is to deliver a quality product.
Key Points:
Testing can be manual or automated.
Ensures correctness, completeness, and reliability of software.
Answer:
Manual Testing is the process of manually executing test cases without using any automated tools. Testers follow predefined steps to check if the software behaves as expected.
Advantages:
No need for programming knowledge.
Helps find UI/UX issues.
Flexible for exploratory testing.
Disadvantages:
Time-consuming for large projects.
Prone to human error.
Answer:
Functional Testing – Verifies software against functional requirements.
Example: Checking login functionality.
Non-Functional Testing – Focuses on performance, usability, and security.
Example: Testing load time of a webpage.
Regression Testing – Ensures new changes don’t break existing functionality.
Smoke Testing – Basic tests to check if the build is stable enough for further testing.
Sanity Testing – Checks specific functionality after minor changes.
Integration Testing – Tests interactions between modules.
System Testing – Tests the complete system as a whole.
User Acceptance Testing (UAT) – Performed by the end-users to verify the software meets their needs.
Answer:
A test case is a documented set of conditions, inputs, and expected results used to verify if a software application works correctly.
Components of a Test Case:
Test Case ID
Test Description
Pre-conditions
Test Steps
Expected Result
Actual Result
Status (Pass/Fail)
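The same components can also be captured in a lightweight structured form, which makes test cases easier to review and reuse. A minimal sketch in Python, assuming a simple dataclass whose field names mirror the components listed above (the class and field names are illustrative, not a standard):

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    # Fields mirror the manual test case components listed above
    test_case_id: str
    description: str
    preconditions: list[str]
    steps: list[str]
    expected_result: str
    actual_result: str = ""
    status: str = "Not Run"   # Pass / Fail / Not Run

login_tc = TestCase(
    test_case_id="TC_001",
    description="Verify login with valid credentials",
    preconditions=["User is registered"],
    steps=["Open login page", "Enter valid username/password", "Click Login"],
    expected_result="User is redirected to the dashboard",
)

print(login_tc.test_case_id, "-", login_tc.status)
```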
Answer:
A Test Plan is a formal document describing the strategy, scope, resources, and schedule of testing activities. It defines the objectives and deliverables of the testing process.
Key Components:
Test objectives
Scope of testing
Test resources
Test environment
Test schedule
Risks and mitigation
| Verification | Validation |
|---|---|
| Checks if the product is built correctly | Checks if the right product is built |
| Static process (reviews, inspections) | Dynamic process (executing the software) |
| Done during development | Done after development |
Error: Mistake made by a developer during coding.
Defect/Bug: Flaw in the software that causes it to fail.
Failure: When the software does not perform as expected due to a bug.
Answer:
STLC is the series of steps followed to ensure the quality of software.
Phases:
Requirement Analysis – Understand what needs to be tested.
Test Planning – Define testing strategy and resources.
Test Case Development – Write detailed test cases.
Test Environment Setup – Prepare hardware, software, and network.
Test Execution – Run the tests and record results.
Defect Reporting – Log any defects found.
Test Closure – Summarize testing and lessons learned.
Answer:
The Bug Life Cycle is the journey of a defect from identification to closure.
Stages:
New – Bug is reported.
Assigned – Developer is assigned.
Open – Developer starts analyzing.
Fixed – Developer fixes it.
Retest – Tester verifies the fix.
Closed – Bug is resolved.
Reopen – If not fixed properly.
| Severity | Priority |
|---|---|
| Indicates the impact of the bug on the system | Indicates the urgency to fix the bug |
| Critical, Major, Minor | High, Medium, Low |
| Example: Application crash on login (high severity) | Example: Typo on the homepage (low severity, but may be high priority to fix) |
Answer:
Exploratory Testing is an unscripted testing technique where testers explore the application without predefined test cases to find defects. It relies on experience and intuition.
Answer:
Ad-hoc Testing is informal testing without planning or documentation. It is done to quickly find defects and is often used when time is limited.
Black Box Testing: Tester tests functionality without knowing the internal code.
White Box Testing: Tester checks internal code, logic, and structure.
Gray Box Testing: Tester has partial knowledge of the code and tests both functionality and logic.
Answer:
Regression Testing is performed to ensure that new code changes do not affect existing functionality.
When:
After bug fixes
After new feature implementation
After changes in configuration
| Smoke Testing | Sanity Testing |
|---|---|
| Shallow and wide testing | Narrow and deep testing |
| Performed on initial build | Performed after minor changes |
| Checks stability of build | Checks functionality after changes |
Answer:
Test Data is the set of input values used to execute test cases. It can be valid or invalid data to check application behavior.
| Static Testing | Dynamic Testing |
|---|---|
| Done without executing code (reviews, walkthroughs) | Done by executing the software |
| Finds defects early | Finds functional defects |
| Faster and cost-effective | Time-consuming |
Boundary Value Analysis: Tests values at the edges of input ranges.
Example: If input range is 1–10, test 0, 1, 10, 11.
Equivalence Partitioning: Divides input data into valid and invalid partitions to reduce test cases.
Example: For ages 18–60, partitions: <18 (invalid), 18–60 (valid), >60 (invalid).
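A hedged sketch of how both techniques translate into concrete test inputs, assuming a hypothetical is_valid_age function that accepts ages 18–60 (the function and values are illustrative):

```python
import pytest

def is_valid_age(age: int) -> bool:
    """Hypothetical validator: ages 18-60 inclusive are accepted."""
    return 18 <= age <= 60

# Boundary Value Analysis: values at and around the edges 18 and 60
@pytest.mark.parametrize("age,expected", [
    (17, False), (18, True), (19, True),
    (59, True), (60, True), (61, False),
])
def test_boundary_values(age, expected):
    assert is_valid_age(age) == expected

# Equivalence Partitioning: one representative value per partition
@pytest.mark.parametrize("age,expected", [
    (10, False),   # partition: < 18 (invalid)
    (35, True),    # partition: 18-60 (valid)
    (75, False),   # partition: > 60 (invalid)
])
def test_equivalence_partitions(age, expected):
    assert is_valid_age(age) == expected
```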
| Alpha Testing | Beta Testing |
|---|---|
| Done by internal employees | Done by real users |
| Performed before release, in a controlled environment | Performed after alpha testing, before the final release, with a limited set of real users |
| Focuses on finding bugs | Focuses on usability and feedback |
Even though manual testing is carried out largely without automation, some tools help manage testing efficiently:
JIRA: Bug tracking and test management.
TestLink: Test case management.
Bugzilla: Bug tracking.
Quality Center/ALM: Test planning and execution.
Answer:
Testing a login page involves checking both functional and non-functional aspects. Steps:
Valid Input Testing:
Enter valid username and password → should log in successfully.
Invalid Input Testing:
Wrong username → show error.
Wrong password → show error.
Empty fields → show validation message.
Boundary Testing:
Max/min length of username/password fields.
Special Characters:
Check if special characters are allowed/disallowed.
Security Testing:
SQL injection, XSS, or password masking.
UI Testing:
Check alignment, font, buttons, and links.
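A minimal sketch of how a few of these login checks could be captured as repeatable assertions, assuming a hypothetical login(username, password) helper that returns an object with success and error fields (everything here is illustrative; real tests would exercise the application under test):

```python
import pytest

class LoginResult:
    def __init__(self, success: bool, error: str = ""):
        self.success, self.error = success, error

def login(username: str, password: str) -> LoginResult:
    """Hypothetical stand-in for the real login call."""
    if not username or not password:
        return LoginResult(False, "Username and password are required")
    if username == "valid_user" and password == "Pass@123":
        return LoginResult(True)
    return LoginResult(False, "Invalid credentials")

def test_valid_login():
    assert login("valid_user", "Pass@123").success

@pytest.mark.parametrize("username,password", [
    ("wrong_user", "Pass@123"),    # wrong username
    ("valid_user", "wrong_pass"),  # wrong password
    ("", ""),                      # empty fields
])
def test_invalid_login_shows_error(username, password):
    result = login(username, password)
    assert not result.success and result.error
```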
| Test Scenario | Test Case |
|---|---|
| High-level functionality to be tested | Step-by-step instructions to test a functionality |
| Derived from requirements | Derived from test scenarios |
| Example: "Test login functionality" | Example: "Enter valid username/password → Click login → Verify dashboard" |
Answer:
Severity: Impact of the bug on the system.
Priority: Urgency to fix the bug.
Example:
Bug: Application crashes on login → Severity: Critical, Priority: High
Bug: Typo on homepage → Severity: Minor, Priority: Medium
Answer:
Time-consuming for large projects.
Difficult to test repetitive tasks.
Human error and missing defects.
Requires good understanding of requirements.
Hard to maintain large test cases manually.
| Functional Testing | Non-Functional Testing |
|---|---|
| Checks what the system does | Checks how the system performs |
| Example: Login, forms, CRUD operations | Example: Performance, usability, security, compatibility |
| System Testing | Integration Testing |
|---|---|
| Tests the complete system | Tests combined modules |
| Performed after integration | Performed after unit/module testing |
| Checks end-to-end functionality | Checks data flow and interaction between modules |
Smoke Testing:
Performed on a new build to check basic functionality.
Example: After build deployment, verify login, homepage, and navigation work.
Sanity Testing:
Performed after minor changes to ensure specific functionality works.
Example: If a bug in login is fixed, test only login and related pages.
Answer:
A Test Summary Report is a document that provides overall testing results, coverage, and quality metrics after the testing phase.
Contents:
Total test cases executed
Passed/Failed test cases
Defects summary (open/closed/reopened)
Testing coverage
Risks and observations
| Verification | Validation |
|---|---|
| Checks if the product is built correctly | Checks if the right product is built |
| Static process (reviews, walkthroughs) | Dynamic process (execute the software) |
| Example: Review of SRS document | Example: Running test cases on application |
Answer:
A defect moves through multiple stages from discovery to closure.
Stages:
New: Bug reported.
Assigned: Developer assigned.
Open: Developer starts analyzing.
Fixed: Bug is fixed.
Retest: Tester verifies the fix.
Closed: Bug resolved.
Reopened: If the bug persists after retest.
| Alpha Testing | Beta Testing |
|---|---|
| Conducted by internal team | Conducted by end users |
| Done in development environment | Done in real-world environment |
| Focus on finding bugs | Focus on usability and feedback |
| Type | Definition | Who Performs |
|---|---|---|
| Black Box | Tests functionality without knowing internal code | Manual tester |
| White Box | Tests internal code logic | Developer / QA |
| Gray Box | Partial knowledge of code; tests functional + internal logic | QA with coding knowledge |
Answer:
Boundary Value Analysis is a technique where test cases are designed to include boundary values.
Example:
Input range: 1–100
Test values: 0, 1, 100, 101
Answer:
Equivalence Partitioning divides input data into valid and invalid partitions to reduce the number of test cases.
Example:
Age input field: 18–60
Partitions: <18 (invalid), 18–60 (valid), >60 (invalid)
Even in manual testing, tools help manage and track testing efficiently:
JIRA: Bug tracking and reporting
Bugzilla: Bug tracking
TestLink: Test case management
Quality Center/ALM: Test planning and execution
Answer:
Functional Testing:
Addition, subtraction, multiplication, division
Negative numbers, decimal numbers, zero
Boundary Testing:
Max/min input values
Error Handling:
Divide by zero → show error
Invalid characters → show error
UI Testing:
Check buttons, layout, font, and alignment
Usability Testing:
Ease of use for end users
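A short sketch of the functional and error-handling checks above, assuming a hypothetical calculate(a, op, b) helper standing in for the calculator under test:

```python
import pytest

def calculate(a: float, op: str, b: float) -> float:
    """Hypothetical calculator under test."""
    if op == "+":
        return a + b
    if op == "-":
        return a - b
    if op == "*":
        return a * b
    if op == "/":
        if b == 0:
            raise ZeroDivisionError("Cannot divide by zero")
        return a / b
    raise ValueError(f"Unsupported operator: {op}")

@pytest.mark.parametrize("a,op,b,expected", [
    (2, "+", 3, 5),       # addition
    (-4, "-", 1, -5),     # negative numbers
    (2.5, "*", 2, 5.0),   # decimals
    (9, "/", 3, 3),       # division
])
def test_functional(a, op, b, expected):
    assert calculate(a, op, b) == expected

def test_divide_by_zero_is_rejected():
    with pytest.raises(ZeroDivisionError):
        calculate(5, "/", 0)
```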
Answer:
High Priority: Critical business functionality, core features
Medium Priority: Features not critical but used frequently
Low Priority: Minor features, rarely used modules
Answer:
Communicate with stakeholders to clarify requirements.
Maintain flexible test cases to accommodate changes.
Perform exploratory testing when requirements are unclear.
Update test documentation regularly.
Answer:
Ad-hoc Testing is informal testing without planning or documentation. It is done to quickly find defects and is often used when time is limited.
Answer:
Exploratory Testing is unscripted and simultaneous learning, test design, and execution. Testers use intuition and experience to explore the application.
| Static Testing | Dynamic Testing |
|---|---|
| Done without executing the code | Done by executing the code |
| Examples: Reviews, walkthroughs, inspections | Examples: Functional testing, performance testing |
| Early defect detection | Detects runtime errors |
| Cost-effective and faster | Time-consuming |
Answer:
A Test Environment is a setup consisting of hardware, software, network configurations, and tools required to execute test cases.
Example:
Web server, database server, client machines, browsers, OS versions, and test data.
| Load Testing | Stress Testing |
|---|---|
| Checks performance under expected user load | Checks behavior under extreme load beyond capacity |
| Goal: Ensure system can handle normal load | Goal: Identify breaking point |
| Example: 1000 users logging in simultaneously | Example: 5000 users logging in simultaneously |
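A rough sketch of the difference, assuming a hypothetical login() call and a thread pool to simulate concurrent users; real load and stress tests would normally use a dedicated tool such as JMeter or Locust:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def login(user_id: int) -> bool:
    """Hypothetical stand-in for a real login request."""
    time.sleep(0.01)   # simulate network/server latency
    return True

def simulate_users(user_count: int) -> float:
    """Fire user_count concurrent logins and return elapsed seconds."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=100) as pool:
        results = list(pool.map(login, range(user_count)))
    assert all(results), "some logins failed under load"
    return time.perf_counter() - start

print("Load test (expected load):", simulate_users(1000), "s")
print("Stress test (beyond capacity):", simulate_users(5000), "s")
```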
| Severity | Priority |
|---|---|
| Impact of the bug on the system | Urgency to fix the bug |
| Critical, Major, Minor | High, Medium, Low |
| Example: App crashes on login → Critical | Example: Typo on homepage → Medium |
Unit Testing: Tests individual modules or functions (usually by developers).
Integration Testing: Tests combined modules for correct interaction.
System Testing: Tests complete application end-to-end.
User Acceptance Testing (UAT): Conducted by end users to validate requirements.
Answer:
End-to-End Testing verifies the flow of an application from start to finish. It checks system integration, database, network, and interfaces to ensure everything works together.
Example:
In an e-commerce site: search → add to cart → payment → order confirmation → email notification.
Answer:
Negative Testing ensures the system behaves correctly with invalid inputs or unexpected actions.
Example:
Enter letters in a numeric field → error message.
Enter invalid email format → validation message.
Leave mandatory fields empty → error message.
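A small sketch of these negative checks, assuming a hypothetical validate_registration_form helper that returns a list of error messages:

```python
import re
import pytest

def validate_registration_form(age: str, email: str, name: str) -> list[str]:
    """Hypothetical validator returning a list of error messages."""
    errors = []
    if not age.isdigit():
        errors.append("Age must be numeric")
    if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", email):
        errors.append("Invalid email format")
    if not name.strip():
        errors.append("Name is mandatory")
    return errors

@pytest.mark.parametrize("age,email,name,expected_error", [
    ("abc", "a@b.com", "Asha", "Age must be numeric"),       # letters in numeric field
    ("25", "not-an-email", "Asha", "Invalid email format"),  # invalid email format
    ("25", "a@b.com", "", "Name is mandatory"),              # empty mandatory field
])
def test_negative_inputs(age, email, name, expected_error):
    assert expected_error in validate_registration_form(age, email, name)
```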
| Retesting | Regression Testing |
|---|---|
| Verify specific defect after fixing | Ensure existing functionality is not broken |
| Done with same test case that failed earlier | Done with a set of related test cases |
| Focused | Broad coverage |
| Test Strategy | Test Plan |
|---|---|
| High-level document defining approach and goals | Detailed document defining scope, resources, schedule, and activities |
| Static and generic | Specific and project-dependent |
| Example: “We will use manual testing for functional testing” | Example: “We will execute 500 test cases in 2 weeks using Chrome and Firefox browsers” |
Answer:
Configuration Testing checks if the application works correctly in different environments, such as operating systems, browsers, devices, and network settings.
Example:
Test a web app on Windows, macOS, Android, iOS, and different browsers like Chrome, Firefox, Safari.
Answer:
Test cases are prioritized based on business impact, critical functionality, and risk:
High Priority: Core features (login, payment, search).
Medium Priority: Important features but not critical (profile update, wishlist).
Low Priority: Minor features or rarely used functionality (help, FAQ pages).
| Functional Testing | Non-Functional Testing |
|---|---|
| Checks what the system does | Checks how the system performs |
| Examples: Login, registration, CRUD operations | Examples: Load testing, usability, security, compatibility |
| Alpha Testing | Beta Testing | Pilot Testing |
|---|---|---|
| Conducted by internal team | Conducted by end users | Conducted on a limited release to test deployment |
| Done in development environment | Done in real-world environment | Done in real environment with limited users |
| Focus on defect identification | Focus on usability and feedback | Focus on feasibility and readiness |
| Test Data | Test Case |
|---|---|
| Input values used to execute test cases | Step-by-step instructions for testing |
| Can be valid or invalid | Includes test steps, expected results, and status |
| Example: Username = “abc123”, Password = “Pass@123” | Example: Step 1: Open login page, Step 2: Enter credentials, Step 3: Click login, Step 4: Verify dashboard |
Steps to Report a Defect:
Log the defect in a defect tracking tool (JIRA, Bugzilla).
Provide a clear title and description.
Include steps to reproduce.
Attach screenshots or logs if applicable.
Specify Severity, Priority, Module, and Environment.
Assign it to the responsible developer.
Answer:
Check all options are displayed.
Verify selection works and the correct value is submitted.
Test boundary values if applicable.
Check default option.
Test with invalid inputs (if any).
Check UI consistency across browsers.
Answer:
Functional Testing: Add to cart, remove items, apply coupons, make payment.
Boundary Testing: Maximum quantity allowed, minimum order.
Negative Testing: Invalid card number, expired coupons.
Integration Testing: Payment gateway, email notifications.
Usability Testing: Easy navigation and user-friendly interface.
Security Testing: SSL, encryption of payment info.
| Usability Testing | User Acceptance Testing (UAT) |
|---|---|
| Focuses on user-friendliness | Focuses on requirements fulfillment |
| Done by QA testers | Done by end-users or clients |
| Checks ease of use, navigation, layout | Checks workflow, business scenarios, and system behavior |
Answer:
Test Coverage measures the percentage of application requirements or code that has been tested.
Example:
If there are 100 requirements and 80 are tested → Test Coverage = 80%.
| Monkey Testing | Ad-hoc Testing |
|---|---|
| Random testing without any planning | Informal testing without scripts but may have experience-based approach |
| Goal: Crash the application | Goal: Find defects quickly |
| Mostly automated | Mostly manual |
Answer:
Manual Testing is the process of manually executing test cases without using any automated tools. The tester acts as an end-user to verify that the software behaves as expected.
Key Differences:
| Aspect | Manual Testing | Automation Testing |
|---|---|---|
| Execution | By human tester | By automated scripts/tools |
| Time Efficiency | Slower for large projects | Faster for repetitive tasks |
| Accuracy | Prone to human error | Highly accurate if scripts are correct |
| Cost | Low initial cost | High initial cost for tools/scripts |
| Best For | Exploratory, usability, ad-hoc testing | Regression, repetitive, large datasets |
Example: Testing a signup form manually involves entering data, submitting, and verifying errors. Automated testing would use a script to input data and check validation messages.
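For contrast, a hedged sketch of the automated variant using Selenium WebDriver; the URL and element IDs are placeholders, not a real application:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()   # requires a local ChromeDriver setup
try:
    driver.get("https://example.test/signup")               # placeholder URL
    driver.find_element(By.ID, "email").send_keys("not-an-email")
    driver.find_element(By.ID, "password").send_keys("Pass@123")
    driver.find_element(By.ID, "submit").click()
    # Placeholder element ID for the validation message
    error = driver.find_element(By.ID, "error-message").text
    assert "invalid email" in error.lower()
finally:
    driver.quit()
```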
Answer:
Functional Testing: Verifies that the software functions according to requirements.
Example: Checking login functionality with valid/invalid credentials.
Non-Functional Testing: Checks performance, usability, reliability, etc.
Example: Testing the website load time.
Smoke Testing: Quick tests to ensure basic functionality works.
Example: Open the application and verify the main pages load.
Sanity Testing: Ensures that a specific functionality works after changes.
Example: After fixing a bug in search, check search works.
Regression Testing: Confirms that new changes don’t break existing functionality.
User Acceptance Testing (UAT): Final testing by the client to ensure requirements are met.
Exploratory Testing: Testing without predefined test cases to find defects.
Answer:
A Test Case is a set of conditions, actions, and expected results to verify a feature of the application.
Steps to Write Effective Test Cases:
Test case ID
Test scenario
Preconditions
Test steps
Test data
Expected result
Actual result
Pass/Fail status
Example:
| Test Case ID | TC_001 |
|---|---|
| Test Scenario | Verify login with valid credentials |
| Preconditions | User must be registered |
| Test Steps | 1. Open login page 2. Enter valid username/password 3. Click login |
| Expected Result | User should be redirected to the dashboard |
| Actual Result | - |
| Status | Pass/Fail |
Answer:
The Bug Life Cycle describes the stages a defect goes through:
New: Bug is logged.
Assigned: Assigned to a developer.
Open: Developer starts analyzing/fixing the bug.
Fixed: Developer fixes the bug.
Retest: Tester verifies the fix.
Reopen: If the bug persists, it’s reopened.
Closed: Bug is resolved and verified.
Deferred/Rejected: Bug may not be fixed due to low priority or invalid issues.
Tools: JIRA, Bugzilla, Quality Center.
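The life cycle can be viewed as a small state machine. A sketch of the transitions described above; the exact workflow is configurable in tools such as JIRA, so treat this as illustrative:

```python
# Allowed defect-status transitions, mirroring the stages listed above.
# Assumption: a reopened or deferred bug goes back to "Assigned".
BUG_TRANSITIONS = {
    "New":      {"Assigned"},
    "Assigned": {"Open"},
    "Open":     {"Fixed", "Deferred", "Rejected"},
    "Fixed":    {"Retest"},
    "Retest":   {"Closed", "Reopen"},
    "Reopen":   {"Assigned"},
    "Deferred": {"Assigned"},
    "Closed":   set(),
    "Rejected": set(),
}

def move(status: str, new_status: str) -> str:
    """Return the new status if the transition is valid, else raise."""
    if new_status not in BUG_TRANSITIONS.get(status, set()):
        raise ValueError(f"Invalid transition: {status} -> {new_status}")
    return new_status

status = "New"
for step in ["Assigned", "Open", "Fixed", "Retest", "Closed"]:
    status = move(status, step)
print("Final status:", status)   # Final status: Closed
```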
| Aspect | Severity (Impact) | Priority (Urgency) |
|---|---|---|
| Definition | How serious the bug is | How soon it should be fixed |
| Example | Crash of application (High severity) | Typo in UI (Low severity, High priority if client noticed) |
| Controlled By | QA Team | Product Owner / Client |
Answer:
Black Box Testing: Testing without knowledge of internal code.
Functional, Non-functional, Regression, UAT.
White Box Testing: Testing with knowledge of internal code.
Code coverage, path testing, unit testing.
Gray Box Testing: Combination of black box and white box.
Focus on testing from a user perspective but with limited code knowledge.
| Aspect | Verification | Validation |
|---|---|---|
| Definition | Checks if software is built correctly | Checks if the software meets user needs |
| Performed By | QA Team | QA Team / End Users |
| Type | Static (review, walkthrough, inspection) | Dynamic (executing the code) |
| Example | Reviewing requirement documents | Testing login functionality |
Answer:
Exploratory Testing is simultaneous learning, test design, and test execution. Testers explore the application without predefined test cases to find hidden bugs.
Steps:
Understand the application.
Define objectives.
Test areas based on priority.
Document defects.
Report findings.
Example: Open a new feature in an application and try random combinations of actions to see if it breaks.
Answer:
A Test Plan is a document outlining testing strategy, scope, resources, and schedule.
Contents:
Test Plan ID & Title
Introduction / Objectives
Scope (In-scope / Out-of-scope)
Testing Strategy (Functional / Non-functional)
Resource allocation
Risk and Mitigation
Deliverables
Entry/Exit criteria
Test Environment
Answer:
Repetitive tasks are time-consuming.
Human errors can lead to missed defects.
Limited coverage for complex applications.
Difficulty in regression testing for large systems.
Keeping up with frequent changes in requirements.
Answer:
Boundary Value Analysis (BVA): Focus on values at the edges of input ranges.
Example: For input 1–100, test 0, 1, 2, 99, 100, 101.
Equivalence Partitioning (EP): Divides inputs into valid/invalid groups.
Example: Input age 18–60, test one value from each group (valid: 18, 30, 60; invalid: 17, 61).
| Aspect | Retesting | Regression Testing |
|---|---|---|
| Purpose | Verify specific defect is fixed | Ensure new changes didn’t break existing functionality |
| Scope | Limited to fixed defect | Broad; multiple areas may be affected |
| Test Case | Same as failed defect | New + existing test cases |
Answer:
Test Scenario: High-level description of what to test.
Example: Verify login functionality.
Test Case: Step-by-step instructions to execute a scenario.
Example: Enter valid credentials → Click login → Verify dashboard.
| Aspect | Functional Testing | Non-Functional Testing |
|---|---|---|
| Purpose | Verify software functions as expected | Verify software performance, usability, security |
| Example | Login, Signup, Search | Load testing, stress testing |
| Measurement | Pass/Fail | Metrics like response time, throughput |
Even though manual testing doesn’t require scripting, tools help in managing test cases, defects, and reporting:
Test Management Tools: TestRail, Quality Center, Zephyr
Bug Tracking Tools: JIRA, Bugzilla, Mantis
Documentation Tools: MS Excel, Confluence
Collaboration Tools: Slack, MS Teams
Answer:
Immediately inform the development team and stakeholders.
Create a detailed bug report with steps to reproduce.
Reproduce the issue in a test environment if possible.
Prioritize the fix as a production-critical issue.
Coordinate retesting once fixed.
Document lessons learned to prevent recurrence.
Understand requirements clearly before testing.
Create detailed test cases and scenarios.
Prioritize testing based on risk and impact.
Report defects with clear steps and screenshots.
Perform exploratory testing alongside structured testing.
Maintain proper documentation for audit and review.
Collaborate with developers to clarify requirements.
| Aspect | Alpha Testing | Beta Testing |
|---|---|---|
| Performed By | Internal QA/Test team | Actual end users |
| Environment | Controlled (lab/test environment) | Real-time production environment |
| Purpose | Identify bugs before release | Collect feedback from real users |
| Timing | Before the product release | After alpha testing, just before the final release |
Example:
A software company releases a web application first internally (Alpha) and then provides it to a limited set of users outside the company (Beta) for feedback.
Answer:
Test metrics are quantitative measures to monitor and improve testing processes.
Common Types:
Test Case Execution Metrics: Number of test cases executed, pass/fail percentage.
Defect Metrics: Number of defects, severity distribution, defect density.
Requirement Coverage Metrics: Percentage of requirements covered by test cases.
Productivity Metrics: Number of test cases executed per day per tester.
Test Progress Metrics: Percentage of test completion, open/closed defects.
| Aspect | Static Testing | Dynamic Testing |
|---|---|---|
| Definition | Testing without executing code | Testing by executing code |
| Performed By | QA team, developers | QA team |
| Techniques | Reviews, walkthroughs, inspections | Functional, regression, performance testing |
| Example | Reviewing requirement documents | Executing login functionality to check output |
Answer:
A Requirement Traceability Matrix (RTM) is a document that maps requirements to test cases, ensuring every requirement is tested.
Purpose:
Ensures all requirements are covered.
Helps in impact analysis for requirement changes.
Useful in audits.
Example:
| Requirement ID | Requirement Description | Test Case ID | Status |
|---|---|---|---|
| RQ_01 | User should login with valid credentials | TC_01 | Pass |
| RQ_02 | User should receive error for invalid login | TC_02 | Fail |
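A small sketch of how an RTM can double as a coverage check, assuming requirements and test cases are kept as simple mappings (the IDs follow the example table above, plus a hypothetical uncovered requirement RQ_03):

```python
# Requirement -> test cases that cover it (illustrative IDs)
rtm = {
    "RQ_01": ["TC_01"],
    "RQ_02": ["TC_02"],
    "RQ_03": [],          # requirement with no test case yet
}

executed = {"TC_01": "Pass", "TC_02": "Fail"}

uncovered = [req for req, tcs in rtm.items() if not tcs]
coverage = (len(rtm) - len(uncovered)) / len(rtm) * 100

print(f"Requirement coverage: {coverage:.0f}%")   # Requirement coverage: 67%
print("Uncovered requirements:", uncovered)       # ['RQ_03']
for req, tcs in rtm.items():
    statuses = [executed.get(tc, "Not Run") for tc in tcs]
    print(req, "->", statuses or "NO TEST CASE")
```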
| Aspect | Ad-Hoc Testing | Exploratory Testing |
|---|---|---|
| Documentation | Minimal or none | Can be documented based on findings |
| Purpose | Find defects quickly | Understand application and find hidden defects |
| Planning | No formal planning | Requires testing experience & strategy |
| Example | Randomly click on features | Explore new module systematically while noting defects |
Answer:
Risk-Based Testing prioritizes testing based on risk of failure and impact on business.
Steps:
Identify high-risk areas (critical functionality, frequently used features).
Estimate probability and impact of defects.
Allocate testing effort to high-risk areas first.
Perform detailed testing on critical features, basic testing on low-risk areas.
Example:
In an online payment app, testing payment flow and security is high risk and tested thoroughly; minor UI color changes are low risk.
Answer:
A matrix helps decide which bugs to fix first based on severity (impact) and priority (urgency).
| Severity \ Priority | High Priority | Medium Priority | Low Priority |
|---|---|---|---|
| Critical | Fix immediately | Fix soon | Fix later |
| Major | Fix soon | Fix when convenient | Optional |
| Minor | Optional | Optional | Can defer |
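The same matrix expressed as a simple lookup table, which can be handy when triaging a defect backlog; the action labels mirror the matrix above:

```python
# (severity, priority) -> triage decision, mirroring the matrix above
TRIAGE = {
    ("Critical", "High"): "Fix immediately",
    ("Critical", "Medium"): "Fix soon",
    ("Critical", "Low"): "Fix later",
    ("Major", "High"): "Fix soon",
    ("Major", "Medium"): "Fix when convenient",
    ("Major", "Low"): "Optional",
    ("Minor", "High"): "Optional",
    ("Minor", "Medium"): "Optional",
    ("Minor", "Low"): "Can defer",
}

def triage(severity: str, priority: str) -> str:
    return TRIAGE.get((severity, priority), "Review manually")

print(triage("Critical", "High"))   # Fix immediately
print(triage("Minor", "Low"))       # Can defer
```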
| Aspect | Positive Testing | Negative Testing |
|---|---|---|
| Purpose | Checks system works with valid inputs | Checks system handles invalid inputs |
| Focus | Expected behavior | Unexpected behavior |
| Example | Enter valid username/password for login | Enter invalid username/password and check error |
Answer:
Session-Based Testing is a time-boxed exploratory testing technique where testers test a part of the application for a defined time, document findings, and then review.
Example:
90-minute session for testing the checkout feature in an e-commerce app.
Tester explores all functionalities and reports defects with notes.
Answer:
Review new requirements and understand impact.
Update test cases or create new ones.
Re-prioritize testing based on risk and criticality.
Communicate changes to stakeholders.
Perform regression testing to ensure stability.
| Type | Definition | Example |
|---|---|---|
| Load Testing | Checks system performance under expected load | 1000 users accessing the website |
| Stress Testing | Checks system under extreme load | 5000 users to see when system crashes |
| Performance Testing | Checks responsiveness, speed, stability | Measure page load time for 100 users |
Factors to consider:
Criticality of the feature to the business.
Frequency of feature usage by end users.
Probability of defect occurrence.
Dependency on other modules.
Past defect history.
Example:
Payment module is high priority, profile update module is medium priority, color change in UI is low priority.
Q: You found a critical bug in production, but the developer says it’s not reproducible. What will you do?
Answer:
Reproduce the bug in test/staging environment.
Collect detailed information: screenshots, logs, steps to reproduce.
Communicate clearly with the developer.
If still critical and blocking, escalate to project manager or team lead.
Perform regression testing once fixed.
Answer:
Compatibility Testing ensures the application works across different browsers, devices, OS, and resolutions.
Steps:
Identify target platforms (Windows, Mac, iOS, Android).
Identify browsers (Chrome, Edge, Safari, Firefox).
Execute test cases on each platform.
Log discrepancies as defects.
Example: A website may work in Chrome but UI breaks in Safari.
| Term | Definition |
|---|---|
| Error | Mistake made by developer in code |
| Defect/Bug | Flaw in software due to an error |
| Failure | Deviation from expected behavior during execution |
Example:
Developer writes wrong SQL → Error
Application shows wrong data → Defect
User cannot retrieve records → Failure
Answer:
Defect Density: Measures the number of defects per size of the software (e.g., per 1000 lines of code or per module).
Formula:
Defect Density = Total Defects Found / Size of the Software Module (e.g., in KLOC)
Example: If 20 defects are found in a module of 5000 LOC, defect density = 20/5 = 4 defects/KLOC.
Defect Leakage: Defects that were missed during testing but found by end users or in production.
Formula:
Defect Leakage = (Defects found post-release / Total defects found pre- and post-release) × 100
Example: If 2 defects are found in production out of 50 total defects, leakage = (2/50)*100 = 4%.
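Both metrics as small helper functions, using the same numbers as the examples above:

```python
def defect_density(total_defects: int, size_kloc: float) -> float:
    """Defects per thousand lines of code (KLOC)."""
    return total_defects / size_kloc

def defect_leakage(post_release: int, pre_release: int) -> float:
    """Percentage of defects that escaped to production."""
    return post_release / (pre_release + post_release) * 100

print(defect_density(20, 5))   # 4.0 defects/KLOC (20 defects in 5000 LOC)
print(defect_leakage(2, 48))   # 4.0% (2 of 50 total defects found post-release)
```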
Answer:
Test Artifacts are documents or deliverables created during the testing process to support test execution, tracking, and reporting.
Examples:
Test Plan
Test Scenarios
Test Cases
Test Data
Traceability Matrix (RTM)
Defect Reports
Test Summary Reports
Checklists
| Aspect | Build Verification Testing (BVT) | Smoke Testing |
|---|---|---|
| Purpose | Ensure new build is stable enough for detailed testing | Quick check of critical functionalities |
| Scope | Limited but covers critical modules | Very limited, high-level only |
| Frequency | Every new build | Usually on every build |
| Outcome | Determines if detailed testing can proceed | Determines if build is testable |
Example:
After a nightly build, QA performs BVT to check login, dashboard, and key workflows.
Answer:
Monkey Testing is random testing where inputs are generated without any predefined test cases. It is usually done to check application stability and crash scenarios.
Example:
Clicking random buttons, entering random values, or randomly navigating pages to check if the application crashes.
Types:
Smart Monkey Testing (some knowledge of application)
Dumb Monkey Testing (completely random inputs)
| Aspect | Verification | Validation |
|---|---|---|
| Definition | Checks if the product is built correctly | Checks if the product meets user requirements |
| Activity Type | Static (reviewing, inspection) | Dynamic (executing the application) |
| Example | Reviewing requirement documents | Testing login functionality |
| Aspect | Requirement Gathering | Requirement Analysis |
|---|---|---|
| Purpose | Collect requirements from stakeholders | Understand and prioritize requirements |
| Activity | Meetings, interviews, questionnaires | Identify gaps, feasibility, risk assessment |
| Output | Raw requirement documents | Refined and validated requirements |
Steps:
Identify impacted modules due to changes.
Prioritize test cases based on critical functionality and risk.
Execute existing test cases to ensure no breakage.
Log defects if any.
Re-execute failed test cases after fixes.
Example:
After fixing a bug in the checkout module of an e-commerce app, QA tests the entire checkout flow and payment options to ensure nothing else is broken.
| Aspect | Static Testing | Dynamic Testing |
|---|---|---|
| Definition | Testing without executing code | Testing by executing code |
| Techniques | Reviews, inspections, walkthroughs | Functional, regression, performance tests |
| Goal | Detect defects early in artifacts | Detect defects during execution |
| Example | Reviewing test cases | Executing login test case |
Answer:
End-to-End Testing verifies the complete flow of an application from start to finish, including integration with external systems.
Example:
In an e-commerce website:
User logs in.
Searches and selects a product.
Adds product to cart.
Makes payment via payment gateway.
Receives confirmation email.
QA tests the entire flow to ensure seamless operation.
| Aspect | Usability Testing | UAT |
|---|---|---|
| Purpose | Checks application ease of use | Checks application meets business requirements |
| Performed By | QA team or UX testers | End users or client |
| Example | Check if navigation is intuitive | Client verifies order placement workflow |
Answer:
Test estimation predicts the effort required for testing.
Techniques:
Expert Judgment: Based on experience of senior QA.
Work Breakdown Approach: Break tasks and estimate each task.
Test Point Analysis (TPA): Calculate points for each module based on complexity.
Function Point Analysis (FPA): Based on requirements and functionality.
Use Case Points: Estimation based on use cases.
Answer:
Test Deliverables are artifacts produced during the testing lifecycle.
Examples:
Test Plan Document
Test Cases and Test Scenarios
Traceability Matrix (RTM)
Defect Reports
Test Summary Report
Test Closure Report
Answer:
Communicate with stakeholders for clarification.
Document assumptions clearly for reference.
Perform exploratory testing based on business knowledge.
Highlight gaps in the requirement document.
Update test cases after finalizing requirements.
| Aspect | Severity | Priority | Risk |
|---|---|---|---|
| Definition | Impact of a defect | Urgency to fix a defect | Probability of defect occurrence and its impact |
| Controlled By | QA Team | Product Owner/Client | QA and Project Manager |
| Example | App crash (High) | Typo in banner (High Priority) | Payment failure may occur (High Risk) |
Answer:
Manual security testing identifies vulnerabilities without automation.
Steps:
Validate input fields for SQL injection, XSS, CSRF.
Check authentication and authorization.
Verify password policies and data encryption.
Ensure session management and logout functionality.
Document security loopholes and report defects.
Example:
Entering ' OR '1'='1 in login fields to check for SQL injection vulnerability.
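A hedged sketch of how such a manual probe could be scripted for repeatability, assuming a hypothetical login endpoint and form field names; real security testing should stay within an authorized scope:

```python
import requests

LOGIN_URL = "https://example.test/login"   # placeholder endpoint
PAYLOADS = ["' OR '1'='1", "'; DROP TABLE users; --"]

for payload in PAYLOADS:
    resp = requests.post(
        LOGIN_URL,
        data={"username": payload, "password": "anything"},  # placeholder field names
        timeout=10,
    )
    # A well-behaved application should reject these inputs with a normal
    # error message, not log the user in or expose a database error.
    suspicious = "welcome" in resp.text.lower() or "sql" in resp.text.lower()
    print(payload, "->", "SUSPICIOUS" if suspicious else "rejected")
```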
| Type | Knowledge of Code | Focus | Example |
|---|---|---|---|
| Black Box | No | Functionality, UI | Login testing with valid/invalid credentials |
| White Box | Yes | Code logic, paths | Unit testing a function or algorithm |
| Gray Box | Partial | Functionality + code logic | Testing API response based on DB knowledge |
| Testing Type | Definition | Example |
|---|---|---|
| Load Testing | Test system under expected load | 1000 users on a website |
| Stress Testing | Test system beyond maximum capacity to check stability | 5000 users, system may crash |
| Spike Testing | Sudden increase/decrease in load | Sudden 5000 users logging in within 1 min |
Q: You are testing a new feature, and your test cases passed but a critical issue is reported by a user in production. What steps do you take?
Answer:
Reproduce the issue in test/staging environment.
Collect logs, screenshots, and test data.
Report defect with full details in bug tracking tool.
Communicate severity and impact to the team.
Perform retesting and regression testing after fix.
Update test cases if necessary.