Koding Tree

Best Software Testing Training Institute in Bangalore

Software Training Institute


Our Programs

Advanced Selenium / Rest Assured with DevOps Tools

Koding Tree offers comprehensive training in Advanced Selenium, API Manual Testing, Rest Assured, DevOps, and Java, covering everything from core fundamentals to advanced automation and system-level concepts.

Advanced Selenium

💡 Why this matters: Most testers use only basic TestNG. Mastering @DataProvider, @Factory, and dependsOnMethods lets you run smarter, data-driven, and highly controlled test suites — skills that immediately impress in interviews.


Annotations & Execution Control

  • Full annotation lifecycle – @BeforeSuite, @BeforeTest, @BeforeClass, @BeforeMethod, @AfterMethod, @AfterClass, @AfterSuite
  • invocationCount – running a test method multiple times; constraints and valid use cases
  • priority – default value 0; negative values allowed; alphabetical tie-breaking
  • dependsOnMethods – precedence over priority when both are set
  • enabled=false vs alwaysRun=true – controlling test skipping within groups
  • Master suite XML – combining multiple testng.xml files into one run using <suite-files>

DataProvider – All Return Types

  • Object[] – single-parameter dataset; Object[][] – multi-row multi-column dataset
  • Iterator<Object[]> – lazy loading for large datasets without loading everything into memory
  • Why Map is not a valid @DataProvider return type
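The lazy-loading idea behind Iterator<Object[]> can be sketched without TestNG itself. This is a minimal stdlib illustration of how rows are produced one at a time rather than building the whole Object[][] up front; the "user/pass" row contents are invented for the example.

```java
import java.util.Iterator;

// Sketch of the lazy-loading idea behind an Iterator<Object[]> data provider:
// rows are produced on demand instead of loading the full dataset into memory.
public class LazyDataProviderDemo {

    // Generates 'count' rows lazily; a real provider might stream rows from a file or DB.
    static Iterator<Object[]> lazyRows(int count) {
        return new Iterator<Object[]>() {
            private int next = 0;

            @Override
            public boolean hasNext() {
                return next < count;
            }

            @Override
            public Object[] next() {
                next++;
                // Each row is one set of test-method parameters.
                return new Object[]{"user" + next, "pass" + next};
            }
        };
    }

    public static void main(String[] args) {
        Iterator<Object[]> rows = lazyRows(3);
        while (rows.hasNext()) {
            Object[] row = rows.next();
            System.out.println(row[0] + " / " + row[1]);
        }
    }
}
```

In TestNG, the same iterator would simply be returned from a method annotated with @DataProvider.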

Assert, Groups & @Factory

  • Assert vs SoftAssert – fail-fast vs collect-all-failures strategy
  • @Test(groups) – smoke, regression, sanity; running specific groups from testng.xml
  • @Parameters – passing values from testng.xml into test methods at runtime
  • @Factory – dynamically generating multiple test class instances from a single class
  • Running TestNG programmatically via TestNG and XmlSuite classes – no XML needed

💡 Why this matters: Dependency Injection is how enterprise frameworks avoid duplicating browser setup across dozens of test classes. Understanding java.lang.reflect makes you stand out in senior-level framework design interviews.


Topics Covered

  • What Dependency Injection means – providing objects to a class rather than letting it create them
  • Injecting WebDriver into test classes via constructor or method parameters
  • java.lang.Class, java.lang.reflect.Method – inspecting and invoking methods at runtime
  • Running test methods programmatically using reflection – no hardcoded test names
  • Writing test results back to an Excel file after each test via injected utility classes
  • DI vs static WebDriver – when to use each approach in real projects
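The reflection mechanics above can be shown in a few lines of plain Java. This sketch discovers and invokes methods at runtime by name prefix, the same mechanism test runners use internally; the class and method names are illustrative only.

```java
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Sketch: finding and invoking "test" methods at runtime via java.lang.reflect,
// with no hardcoded method names. Class/method names here are made up for the demo.
public class ReflectionDemo {

    public static class LoginTests {
        public String testValidLogin()   { return "valid login checked"; }
        public String testInvalidLogin() { return "invalid login checked"; }
        public String helper()           { return "not a test"; }
    }

    // Invoke every public method whose name starts with "test" and collect results.
    static List<String> runTests(Object instance) throws Exception {
        List<String> results = new ArrayList<>();
        for (Method m : instance.getClass().getDeclaredMethods()) {
            if (m.getName().startsWith("test")) {
                results.add((String) m.invoke(instance));
            }
        }
        Collections.sort(results); // getDeclaredMethods() order is not guaranteed
        return results;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runTests(new LoginTests()));
    }
}
```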

💡 Why this matters: Flaky tests cost teams hours of manual re-runs. Implementing IRetryAnalyzer and ITestListener at the framework level means your suite handles failures automatically — a hallmark of a production-ready framework.


Topics Covered

  • IRetryAnalyzer – implementing retry(ITestResult) to auto re-run failed tests up to a configurable limit
  • ITestListener – hooking into onTestFailure(), onTestStart(), onTestSuccess() lifecycle events
  • Taking a screenshot on failure using ITestResult + TakesScreenshot + FileUtils.copyFile()
  • Attaching failure screenshots automatically to ExtentReports via Base64 encoding
  • Reporter.log() – adding custom step messages visible in the TestNG HTML report
  • Registering listeners via @Listeners annotation or the <listeners> block in testng.xml

💡 Why this matters: Design patterns are asked in virtually every senior QA interview. Learning to implement and explain them with real Selenium examples puts you ahead of most candidates.


Singleton Pattern

  • Private constructor + static method = one WebDriver instance for the entire test run
  • Preventing multiple browser launches when multiple test classes initialize the driver
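A minimal Singleton sketch, using a plain class so it stays runnable without Selenium on the classpath. DriverManager here is a stand-in for the class that would hold the real WebDriver instance.

```java
// Singleton sketch: one shared instance for the whole run. 'DriverManager' stands in
// for a framework class that would hold the actual WebDriver.
public class SingletonDemo {

    static class DriverManager {
        private static DriverManager instance;   // the single shared instance
        private DriverManager() { }              // private constructor blocks 'new'

        static synchronized DriverManager getInstance() {
            if (instance == null) {
                instance = new DriverManager();  // created once, on first use
            }
            return instance;
        }
    }

    public static void main(String[] args) {
        DriverManager a = DriverManager.getInstance();
        DriverManager b = DriverManager.getInstance();
        System.out.println("same instance: " + (a == b));
    }
}
```

Because every test class calls getInstance() instead of a constructor, only one browser would ever be launched per run.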

Factory Pattern

  • DriverFactory class – returning ChromeDriver, FirefoxDriver, or EdgeDriver based on input parameter
  • Removing browser-specific if-else logic from individual test classes
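The Factory idea can be sketched with stand-in types, so the browser-selection switch lives in exactly one place. Driver and the browser classes below are placeholders for WebDriver, ChromeDriver, and FirefoxDriver.

```java
// Factory sketch: browser selection is centralized in createDriver(), so individual
// test classes never contain if/else browser logic. Types are stand-ins for Selenium's.
public class FactoryDemo {

    interface Driver { String name(); }
    static class ChromeDriver  implements Driver { public String name() { return "chrome"; } }
    static class FirefoxDriver implements Driver { public String name() { return "firefox"; } }

    static Driver createDriver(String browser) {
        switch (browser.toLowerCase()) {
            case "chrome":  return new ChromeDriver();
            case "firefox": return new FirefoxDriver();
            default: throw new IllegalArgumentException("Unsupported browser: " + browser);
        }
    }

    public static void main(String[] args) {
        System.out.println(createDriver("chrome").name());
        System.out.println(createDriver("firefox").name());
    }
}
```

The input string would typically come from a -Dbrowser system property or testng.xml parameter.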

Decorator Pattern

  • Selenium 3 – EventFiringWebDriver + implementing WebDriverEventListener
  • Selenium 4 – EventFiringDecorator wrapping WebDriver + WebDriverListener interface
  • Logging every click(), navigate().to(), and exception automatically through the listener
  • Choosing between extends AbstractWebDriverEventListener vs implements WebDriverListener

💡 Why this matters: Standard POM is just the beginning. Advanced annotations like @FindAll, @FindBys, and @CacheLookup make your page classes faster and more expressive — and interviewers notice the difference.


Topics Covered

  • @FindBy – declaring WebElement and List<WebElement> fields, initialized via PageFactory.initElements(driver, this)
  • @FindAll – locating elements matching any of multiple locators (OR logic)
  • @FindBys – locating elements matching all of multiple locators (AND / parent-child chaining)
  • @CacheLookup – caching the element reference after the first DOM lookup to improve performance
  • When @CacheLookup helps vs. when it breaks – dynamic elements, page refresh, AJAX updates
  • Comparing @FindAll vs @FindBys – logic difference and when to use each in real tests

💡 Why this matters: Knowing three different screenshot approaches — and when each is appropriate — shows depth of knowledge. Most testers only know one method, and interviewers notice.


Topics Covered

  • TakesScreenshot – casting WebDriver and calling getScreenshotAs(OutputType.FILE) for full-page capture
  • Saving screenshots with FileUtils.copyFile() using timestamped filenames to avoid overwriting
  • WebElement.getScreenshotAs() – capturing just a specific element on the page (Selenium 4)
  • AShot library – full-page screenshots including content below the visible fold
  • Robot class (java.awt.Robot) – OS-level desktop screenshot using createScreenCapture(new Rectangle(Toolkit.getDefaultToolkit().getScreenSize()))
  • Capturing screenshots on failure via ITestListener.onTestFailure(ITestResult) – no test-level code needed

💡 Why this matters: Some elements simply cannot be handled by standard Selenium — hidden inputs, custom widgets, OS-level dialogs. JavascriptExecutor and Robot are what professionals reach for when Selenium hits a wall.


JavascriptExecutor

  • Casting WebDriver to JavascriptExecutor and calling executeScript(String, Object...)
  • Scrolling by pixel: window.scrollBy(0, 500); scrolling to element: arguments[0].scrollIntoView()
  • Clicking elements that WebElement.click() cannot reach: arguments[0].click()
  • Setting input values via JS: arguments[0].value='text' for blocked input fields
  • Reading values: return arguments[0].value / return arguments[0].textContent
  • Limitations – bypasses user interaction validation; use carefully in real automation

Robot Class

  • Robot.mouseMove(x, y), mousePress(InputEvent.BUTTON1_DOWN_MASK), mouseRelease()
  • Robot.keyPress(KeyEvent.VK_ENTER) – keyboard simulation at OS level
  • Handling Windows file upload dialogs and OS pop-ups that Selenium cannot touch
  • Combining with StringSelection and Toolkit.getDefaultToolkit().getSystemClipboard() for clipboard operations

💡 Why this matters: Professional testers don’t just run tests — they produce shareable evidence. Rich HTML reports and structured log files let your team and stakeholders see exactly what was tested, what passed, and why something failed — with zero manual effort.


ExtentReports

  • Setting up ExtentSparkReporter and ExtentReports instance in a base class or listener
  • Creating ExtentTest nodes per test and logging pass(), fail(), skip() with messages
  • Attaching Base64-encoded screenshots to reports on failure
  • Integrating with ITestListener for automatic lifecycle-driven reporting
  • Flushing reports at end of suite with extent.flush()

Log4J & Reporter

  • log4j2.xml / log4j.properties – configuring ConsoleAppender, FileAppender, and log patterns
  • LogManager.getLogger(ClassName.class) – logging at DEBUG, INFO, WARN, ERROR levels
  • Reporter.log() – embedding step-level messages directly into the TestNG HTML report
  • Reporter.getOutput(ITestResult) – reading Reporter messages to attach to ExtentReport

Why this matters:
Real projects store test data in many different formats. Knowing how to read Excel, config files, CSVs, JSON, and databases makes your framework flexible and environment-agnostic — a major differentiator in job interviews.


Excel – Apache POI

  • XSSFWorkbook, XSSFSheet, XSSFRow, XSSFCell — reading and writing .xlsx files
  • Converting Excel rows to Map<String, String> for use in @DataProvider
  • Writing test results (Pass/Fail/timestamp) back to Excel after each run

Property Files & CSV

  • java.util.Properties — loading key-value config (base URL, credentials) from .properties files
  • Apache Commons CSV — parsing CSV files into row/column collections for test data
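Loading key-value config with java.util.Properties looks like this. A real framework would load from a .properties file with a FileInputStream; a StringReader is used here only to keep the sketch self-contained, and the keys are invented.

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

// Sketch: java.util.Properties for key-value config. The load() call is identical
// whether the source is a StringReader (here) or a FileInputStream in a real project.
public class ConfigDemo {

    static Properties load(String content) throws IOException {
        Properties props = new Properties();
        props.load(new StringReader(content));
        return props;
    }

    public static void main(String[] args) throws IOException {
        String config = "base.url=https://staging.example.com\nbrowser=chrome\n";
        Properties props = load(config);
        System.out.println(props.getProperty("base.url"));
        System.out.println(props.getProperty("timeout", "30")); // default when key is absent
    }
}
```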

JSON & Database

  • JSON data types — objects, arrays, strings, numbers, booleans, null
  • Parsing JSON with JSONObject / JSONArray or REST Assured’s JsonPath
  • Connecting to MySQL using java.sql.DriverManager.getConnection(url, user, pass)
  • Statement, PreparedStatement, ResultSet — querying DB and comparing values to UI output

Why this matters:
Jenkins and other CI tools trigger test runs from the command line. If you can only run tests inside an IDE, you cannot work in a real CI/CD pipeline. This module makes your framework pipeline-ready.


Topics Covered

  • Running mvn test from the terminal — no IDE open, no manual steps
  • Passing parameters via -D system properties:
    mvn test -Dbrowser=chrome -Denv=staging
  • Reading System.getProperty("browser") inside test code and inside testng.xml using ${browser}
  • Selecting specific TestNG suites via maven-surefire-plugin <suiteXmlFile> config
  • Running a single test class or method:
    mvn -Dtest=LoginTest#verifyLogin test
  • Full parameter chain: Jenkins build param → pom.xml → testng.xml → @Parameters → test method

Why this matters: Parallel execution across multiple browsers and machines is how enterprise teams cut test times from hours to minutes. Selenium Grid and cloud platforms are standard tools at any mature QA organization.


Selenium Grid

  • Standalone mode – single-node Grid using java -jar selenium-server.jar standalone
  • Hub & Node – registering remote machines to a central Hub for distributed execution
  • Connecting via RemoteWebDriver using Hub URL and ChromeOptions / FirefoxOptions
  • Parallel execution in testng.xml using parallel="tests" and thread-count
  • Thread-safe driver management using ThreadLocal<WebDriver> to avoid cross-thread interference
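The ThreadLocal pattern can be demonstrated with plain threads. A String stands in for WebDriver here so the sketch runs anywhere; the principle is identical — each thread sees only the driver it set itself.

```java
// Sketch of ThreadLocal-based driver management for parallel runs. The String value
// stands in for a WebDriver; each thread gets and sees only its own copy.
public class ThreadLocalDemo {

    static final ThreadLocal<String> driver = new ThreadLocal<>();

    static void setDriver(String name) { driver.set(name); }
    static String getDriver()          { return driver.get(); }

    public static void main(String[] args) throws InterruptedException {
        Runnable test = () -> {
            setDriver("driver-for-" + Thread.currentThread().getName());
            // Every call in this thread resolves to this thread's own value.
            System.out.println(getDriver());
        };
        Thread t1 = new Thread(test, "t1");
        Thread t2 = new Thread(test, "t2");
        t1.start(); t2.start();
        t1.join();  t2.join();
        System.out.println("main thread sees: " + getDriver()); // null, never set here
    }
}
```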

Cloud — SauceLabs

  • Configuring RemoteWebDriver with SauceLabs remote Hub URL and access credentials
  • Setting browser, OS version, and screen resolution via MutableCapabilities / sauce:options
  • Viewing live test execution and video recordings in the SauceLabs web dashboard

Why this matters: Docker eliminates the “it works on my machine” problem for good. Running Selenium inside Docker containers is now standard practice at companies that run tests in CI pipelines — and it’s a skill most testers don’t have.


Topics Covered

  • Docker core concepts — images, containers, ports, volumes; docker pull, docker run, docker ps
  • Pulling Selenium’s official images: selenium/standalone-chrome, selenium/standalone-firefox
  • Connecting RemoteWebDriver to a containerized browser on http://localhost:4444
  • Watching tests visually inside containers using VNC Viewer (port 5900)
  • Running Selenium Hub + Node in separate containers and linking them via Docker network
  • Running parallel browser containers for cross-browser execution without local installs
  • docker stop / docker rm — cleaning up environments after test runs

Why this matters: CI/CD integration is the difference between a framework that runs when someone remembers to trigger it, and one that runs automatically on every code change. Jenkins is still the most widely used CI tool in Java test projects.


Topics Covered

  • Installing Jenkins locally and configuring the Maven plugin and JDK settings
  • Creating a Freestyle job — pointing to a Git repo and triggering mvn test
  • Parameterized builds — adding String Parameter and Choice Parameter for browser, env, suite
  • Passing Jenkins build parameters into Maven via -Dbrowser=${browser} in the goals field
  • HTML Publisher plugin — publishing ExtentReports HTML inside Jenkins after each run
  • Post-build email notifications — configuring SMTP and dynamic report-link emails
  • Build triggers — manual, schedule (cron), and upstream job chaining

Why this matters: A Keyword Driven Framework separates test design from implementation code – so QA analysts without coding skills can write test cases in Excel and the framework executes them automatically. Building one is a sign of a senior framework developer.


Architecture & Concepts

  • What keyword-driven testing is – expressing test steps as action keywords plus data in Excel, kept separate from the Java implementation
  • Excel test case structure – columns for Keyword, Locator Type, Locator Value, Test Data
  • Building a KeywordExecutor / ActionEngine using reflection or a switch statement

Implementation

  • Reading test steps from Excel row by row using XSSFWorkbook / XSSFRow
  • Implementing WebUtil helper methods:
    clickElement(), typeText(), verifyText(), navigateTo()
  • Dispatching the correct method based on the keyword value read from Excel
  • Adding new test cases entirely in Excel – zero Java code changes required
  • Logging each keyword action to ExtentReports for full traceability
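The dispatch step above can be sketched with a switch over the keyword value. The actions here just return a description instead of driving a browser, and the method names mirror the WebUtil helpers listed; the step data is invented for the demo.

```java
import java.util.List;

// Sketch of a keyword dispatcher: each Excel row would supply a keyword plus data,
// and a switch routes it to the matching action. Actions only describe themselves
// here; in the real engine they would call Selenium via WebUtil helpers.
public class KeywordEngineDemo {

    static String execute(String keyword, String data) {
        switch (keyword) {
            case "navigateTo":   return "navigated to " + data;
            case "typeText":     return "typed '" + data + "'";
            case "clickElement": return "clicked " + data;
            case "verifyText":   return "verified '" + data + "'";
            default: throw new IllegalArgumentException("Unknown keyword: " + keyword);
        }
    }

    public static void main(String[] args) {
        // Stand-in for rows read from Excel: {keyword, test data}.
        List<String[]> steps = List.of(
            new String[]{"navigateTo", "https://example.com/login"},
            new String[]{"typeText", "admin"},
            new String[]{"clickElement", "loginButton"});
        for (String[] step : steps) {
            System.out.println(execute(step[0], step[1]));
        }
    }
}
```

Adding a new test case then means adding rows to Excel, with no change to this Java code.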

 

Why this matters: A working project on a real application is the difference between a resume and a portfolio. You leave this module with something you can demo, explain end-to-end, and be proud of in any senior QA interview.


Live Project

  • Automating a real Point-of-Sale (POS) web application – login, products, orders, billing
  • Applying all course components: TestNG, POM with @FindBy, ExtentReports, Log4j, Excel data, Jenkins CI/CD
  • Jira – creating test cases, logging bugs with screenshots, tracking sprint progress
  • Debugging real failures – identifying whether the issue is in the app, framework, or locator
  • Code review – refactoring for readability, removing duplication, improving structure

Interview Preparation

Design Patterns
Explain Singleton, Factory, Decorator with real code examples from your own project

Framework Walkthrough
Walk through your package structure, layer responsibilities, and why each design decision was made

CI/CD Pipeline
Describe the full chain: Jenkins → pom.xml → testng.xml → @Parameters → test method

Parallel Execution
Explain ThreadLocal<WebDriver> and how you prevented driver conflicts across parallel threads

Data Strategies
Compare Excel, JSON, Properties, MySQL — and justify where you used each in the project

Docker & Grid
Describe how you used Docker containers as browser nodes and ran tests without installing browsers locally

API Manual Testing

Why This Module Matters: Before you can test an API, you need to understand what it is and why it exists. This foundation shapes every decision you make as a tester — from what to test, to how to document bugs, to how you explain your work in interviews.


API Concepts & Architecture

  • API (Application Programming Interface) — a contract that lets two software systems exchange data; how apps talk to each other behind the scenes
  • REST (Representational State Transfer) — the most widely used architectural style for web APIs; stateless, resource-based communication
  • HTTP (HyperText Transfer Protocol) — the transport protocol used by every REST API; every request and response travels over HTTP
  • Client-server model — your test tool (Postman) is the client; the API server processes requests and returns responses
  • Monolithic architecture — one large application that handles everything; harder to scale and test independently
  • Microservices — small independent services, each with its own API; testing each service separately is faster and more targeted
  • API Gateway — the front door that receives all requests and routes them to the correct microservice

 

Why This Module Matters: Every single API test you write is based on an HTTP method. Knowing which method to use and what it means is the difference between writing meaningful tests and just clicking Send and hoping for the best.


HTTP Methods Mapped to CRUD

  • GET — Read/retrieve data; no body needed; safe and idempotent — sending it 100 times changes nothing
  • POST — Create a new resource; requires a request body with the data to save; not idempotent
  • PUT — Full update; replaces the entire resource with what you send; whatever you leave out gets removed
  • PATCH — Partial update; only the fields you specify are changed; everything else stays the same
  • DELETE — Remove a resource; typically returns 204 No Content — success with no data back
  • HEAD — Like GET but returns only headers; used for health checks without loading actual data
  • OPTIONS — Returns a list of HTTP methods the server supports for a given endpoint; used in CORS preflight
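The method-to-request mapping can be sketched with Java's built-in HTTP client (Java 11+). The URL is a placeholder, and the requests are only built, never sent, so no server is needed.

```java
import java.net.URI;
import java.net.http.HttpRequest;

// Sketch: building GET / POST / DELETE requests with java.net.http. Only the request
// objects are constructed; api.example.com is a placeholder, nothing is sent.
public class HttpMethodsDemo {

    static HttpRequest get(String url) {
        return HttpRequest.newBuilder(URI.create(url)).GET().build();   // read, no body
    }

    static HttpRequest post(String url, String jsonBody) {
        return HttpRequest.newBuilder(URI.create(url))
                .header("Content-Type", "application/json")             // body format
                .POST(HttpRequest.BodyPublishers.ofString(jsonBody))    // create needs a body
                .build();
    }

    static HttpRequest delete(String url) {
        return HttpRequest.newBuilder(URI.create(url)).DELETE().build(); // remove, no body
    }

    public static void main(String[] args) {
        System.out.println(get("https://api.example.com/users/42").method());
        System.out.println(post("https://api.example.com/users",
                "{\"name\":\"Bhanu\"}").method());
        System.out.println(delete("https://api.example.com/users/42").method());
    }
}
```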

Why This Module Matters: Status codes are the first thing you check in every test result. A tester who can read a three-digit code and instantly know whether the problem is in the request, the server, or the data is worth far more than one who just checks “did I get a response?”


2xx – Success & 3xx – Redirects

  • 200 OK — Request succeeded; result is in the body — typical for GET and PUT
  • 201 Created — New resource created successfully — typical for POST; often includes new resource URL in Location header
  • 202 Accepted — Request accepted for processing but not yet completed (asynchronous operations)
  • 204 No Content — Success but no data returned — typical for DELETE
  • 301 Moved Permanently / 302 Found — Resource has moved; client should follow the new URL

4xx – Client Errors & 5xx – Server Errors

  • 400 Bad Request — Your request has a problem: missing field, wrong format, invalid value — fix the request
  • 401 Unauthorized — Not authenticated; no token, expired token, or wrong credentials
  • 403 Forbidden — Authenticated but not permitted; you’re logged in but your role doesn’t allow this action
  • 404 Not Found — The resource you asked for doesn’t exist; check the ID or URL
  • 500 Internal Server Error — Server-side failure; the bug is in the application code, not your request
  • 502 Bad Gateway / 503 Service Unavailable — Infrastructure issues; server or upstream service is down

Why This Module Matters: A complete test validates more than just the response body. Headers, status lines, response time, and parameter types all matter — and knowing every part of a request and response means you catch bugs that less thorough testers miss entirely.


Request Anatomy

  • Endpoint — the full URL of a specific resource: https://api.example.com/v1/users/42
  • Path parameter — a variable embedded in the URL path: /users/{id} — identifies a specific record
  • Query parameter — appended after ? for filtering or searching: /products?category=books&limit=10
  • Request Headers — metadata sent with every request: Content-Type: application/json, Authorization: Bearer <token>
  • Request Body / Payload — the data you send to create or update (required for POST, PUT, PATCH)

Response Anatomy

  • Status line — HTTP version + code + phrase: HTTP/1.1 201 Created
  • Response Headers — server metadata: Content-Type, Cache-Control, X-Request-Id
  • Response Body — the data returned: JSON object, array, or empty for 204
  • Response time — how fast the server responded in milliseconds; a performance validation point

 

Why This Module Matters: Almost every modern API sends and receives JSON. If you can’t read and write JSON confidently, you can’t write meaningful request bodies or validate response data accurately. This module is the difference between surface-level testing and deep, reliable testing.


JSON Structure & Data Types

  • JSON Object — key-value pairs in curly braces:
    { "id": 1, "name": "Bhanu", "active": true }
  • JSON Array — ordered list in square brackets:
    ["admin", "viewer", "editor"], array of objects:
    [ { ... }, { ... } ]
  • Data types
    String (in double quotes), Number (no quotes), Boolean (true / false), null, nested Object, Array
  • Nested JSON — objects inside objects; the standard structure for real API responses
  • Common JSON mistakes that cause 400 Bad Request — missing commas, single quotes, trailing commas, wrong brackets
  • JSON vs XML — JSON is lighter and easier to read; XML uses tags like HTML; most REST APIs use JSON today

Why This Module Matters: Postman is the standard tool for API testing in the industry. Getting comfortable with its interface from day one means you spend your time testing — not fighting the tool. Most QA job descriptions now list Postman as a required skill.


Postman UI & Basic Requests

  • Installing Postman desktop app; signing in; Postman workspace overview
  • Sending a GET request — enter URL, click Send, read the response body, status, and time
  • Sending a POST request — select Body tab → raw → JSON → enter payload → Send
  • Response panel tabs — Body (Pretty / Raw / Preview), Headers, status code, response time, size
  • Saving requests to a Collection — naming, adding to a folder, adding description
  • Request history and duplicating requests for reuse

Why This Module Matters: No professional tester hardcodes URLs and tokens into every request. Collections keep your work organised and shareable; environments let the same test run against Dev, Staging, or Production with a single switch; variables make your tests dynamic and reusable.


Collections

  • Postman Collection — a named folder of related API requests; the shareable unit of API test work
  • Creating collections, adding folders per module or feature, organising by CRUD operation type
  • Exporting as JSON file for sharing; importing a colleague’s collection to run immediately

Environments & Variables

  • Environment — a named set of key-value pairs (Dev, Staging, Production); switch in one click, no request edits needed
  • Environment Variable — scoped to the selected environment: {{base_url}}, {{token}}
  • Collection Variable — scoped to the collection; shared across all requests within it
  • Global Variable — available everywhere in Postman across all collections and environments
  • Variable syntax in requests — {{variable_name}} in URL, header value, or body
  • Setting variables from a response —
    pm.environment.set("userId", pm.response.json().id)

Why This Module Matters: Most real-world APIs are secured. If you can’t handle authentication in Postman, you can’t test 80% of a real application. Understanding auth also helps you write better security-related negative test cases — which is something every interviewer asks about.


Authentication Types

  • Authentication vs Authorization — Authentication = who you are; Authorization = what you are allowed to do
  • API Key — a static key sent as a header: x-api-key: abc123 or as a query parameter
  • Bearer Token — sent in the Authorization header: Authorization: Bearer <jwt_token>
  • Basic Auth — username and password encoded as Base64 in the Authorization header
  • OAuth 2.0 — a flow where you get a temporary access token from an auth server and use it to call protected APIs
  • JWT (JSON Web Token) — a self-contained token with three sections: Header, Payload (claims), Signature

Handling Tokens in Postman

  • Storing the token in an environment variable {{access_token}} — never hardcode tokens in requests
  • Login flow: POST /login → extract token from response body →
    pm.environment.set("token", ...)
  • Token expiry — causes 401 Unauthorized; test for this scenario specifically
  • Setting Auth at the collection level so all requests inherit it automatically

Why This Module Matters: A tester who only checks the happy path misses half the bugs. APIs fail in predictable ways — bad inputs, missing auth, wrong permissions, nonexistent resources. Testing these “expected failure” scenarios is what makes a QA engineer genuinely valuable to a team.


Positive Test Scenarios — Happy Path

  • Valid GET with existing resource ID → expect 200 OK + correct data in body
  • Valid POST with all required fields → expect 201 Created + new resource in response
  • Valid PUT / PATCH with correct ID and body → expect 200 OK + updated data
  • Valid DELETE with existing resource ID → expect 204 No Content

Negative Test Scenarios — Error Path

  • Missing required field in POST body → expect 400 Bad Request with meaningful error message
  • Wrong data type (e.g. text where a number is expected) → expect 400 Bad Request
  • GET with a non-existent ID → expect 404 Not Found
  • No token in the request header → expect 401 Unauthorized
  • Valid token but accessing a resource the user has no permission for → expect 403 Forbidden
  • Boundary values — maximum field length, minimum numeric value, empty string, zero
  • Duplicate POST — creating a resource that already exists → expect 400 or 409 Conflict

Why This Module Matters: Manual checking — reading the response and deciding if it’s right — doesn’t scale. Test scripts run automatically after every request and give you a pass/fail result instantly. This is what separates a manual tester from a semi-automated API tester, and it’s a skill employers specifically look for.


Writing Tests in Postman

  • The Tests tab — JavaScript that Postman runs automatically after every response is received
  • pm.test("description", function () { ... }) — structure of every Postman test case
  • pm.response.to.have.status(201) — assert the exact status code returned
  • pm.response.to.have.header("Content-Type") — assert a specific header is present
  • pm.response.responseTime < 2000 — assert response is under 2 seconds

Response Body Assertions

  • const json = pm.response.json() — parse response body as a JavaScript object
  • pm.expect(json.name).to.eql("Bhanu") — assert exact field value
  • pm.expect(json).to.have.property("id") — assert a key exists in the response
  • pm.expect(json.id).to.be.a("number") — assert correct data type
  • pm.expect(json.items).to.be.an("array").that.is.not.empty — assert non-empty array
  • Chaining: .to.include, .to.not.be.null, .to.be.true, .to.be.above(0)

Why This Module Matters: Real test scenarios require setup — unique data, fresh tokens, IDs from previous responses. Pre-request scripts automate this setup so your test suite is self-sufficient and repeatable. Without this skill, you end up manually copying values between requests, which defeats the purpose of automation.


Dynamic Data & Request Chaining

  • Pre-request Script — JavaScript that runs BEFORE the request is sent; perfect for generating dynamic test data
  • Generating unique values:
    pm.environment.set("email", "user" + Date.now() + "@test.com")
  • Conditional token refresh — check if token exists, only call POST /login when it doesn’t
  • Request chaining — POST /users creates user → extract id → store as {{userId}} → use in GET /users/{{userId}}
  • Full workflow automation: Login → extract token → create record → retrieve it → update → delete — all chained automatically

Why This Module Matters: Running one request at a time isn’t testing — it’s clicking. Collection Runner lets you run your entire test suite in one go and see a complete pass/fail summary. Pairing it with a CSV data file means you can test 50 different inputs without writing 50 separate requests.


Collection Runner

  • Collection Runner — executes all requests in a collection sequentially in one automated run
  • Configuring iterations — how many times the entire collection runs; delay between requests in milliseconds
  • Selecting specific requests or folders to include in the run
  • Results panel — per-request pass/fail summary, assertion details, response time
  • Exporting run results as HTML or JSON report to share with the team

Data-Driven Testing with CSV

  • CSV data file — each row is one test iteration; column headers become variable names in Postman
  • Uploading CSV in Runner — select file, Postman automatically creates one iteration per row
  • Using {{column_name}} in URL, body, and headers — same request runs with different data each time
  • Example: test POST /users with 20 different name/email combinations from a single CSV — no code needed
  • JSON data file as alternative for complex nested test data structures

Why This Module Matters: Knowing Postman as a tool isn’t enough. Having a test strategy is a professional skill. Companies hire testers who can look at API documentation, build a complete test plan, organize their work, and plug it into a daily pipeline — not just testers who know how to click Send.


Swagger / OpenAPI & Test Planning

  • Swagger / OpenAPI — the industry-standard format for documenting REST APIs; your primary source for test cases
  • Reading Swagger UI — endpoints, request/response schemas, required fields, example payloads
  • Importing OpenAPI spec into Postman — auto-generates a collection with all endpoints ready to test
  • Deriving test cases from documentation — positive cases, negative cases, boundary values, security tests

Test Organization & CI/CD

  • Collection folder structure — organize by feature: Login folder, Users folder, Products folder, Orders folder
  • Happy Path folder + Error Scenarios folder within each feature — clear separation of test types
  • Environment strategy — separate Dev, QA, Staging, and Production environments for safe multi-environment testing
  • Regression suite — a curated collection of critical tests re-run after every release to catch regressions
  • Newman — Postman’s command-line runner:
    newman run collection.json -e environment.json
  • CI/CD integration — trigger Newman automatically via Jenkins, GitHub Actions, or any pipeline after each deployment

Interview Preparation

GET vs POST — GET retrieves with no body; POST creates with a body; GET is idempotent (safe to repeat), POST is not

401 vs 403 vs 404

  • 401 = not authenticated (missing or invalid token)
  • 403 = authenticated but not authorized (wrong role or permission)
  • 404 = resource not found (the ID or URL doesn’t exist)

PUT vs PATCH — PUT replaces the whole resource; PATCH updates only specified fields

What do you validate? — Status code, response body fields and types, required keys present, response time, content type header

Bearer Token Flow — POST /login → receive token → store → use in header:
Authorization: Bearer {{token}}

Data-driven testing — Upload CSV to Collection Runner; each row becomes one test iteration; {{column}} variables pull in each row’s data automatically


Exception Handling Mechanism

  • What is an exception – an unexpected event that disrupts normal program execution
  • Checked vs unchecked exceptions – compile-time vs runtime
  • try block – code that may throw an exception
  • catch (ExceptionType e) – handling a specific exception type
  • Multiple catch blocks – handling different exception types separately
  • finally block – always executes; used for resource cleanup (closing files, connections)
  • throw keyword – manually throwing an exception from your code
  • throws keyword – declaring that a method may throw a checked exception
  • Common exceptions:
    • NullPointerException
    • ArrayIndexOutOfBoundsException
    • NumberFormatException
    • ArithmeticException

List – Ordered, Allows Duplicates

  • ArrayList – dynamic array; fast random access; allows duplicates; maintains insertion order
  • LinkedList – doubly-linked list; fast add/remove at ends; slower random access
  • Common methods: add(), get(int), remove(), size(), contains(), set()
  • Iterating with for-each and Iterator

Set – No Duplicates

  • HashSet – no duplicates, no guaranteed order; backed by a hash table
  • LinkedHashSet – no duplicates, preserves insertion order
  • TreeSet – no duplicates, elements stored in natural sorted order

Queue & Stack

  • Queue interface – FIFO; offer(), poll(), peek()
  • Stack – LIFO; push(), pop(), peek()
  • Vector – legacy thread-safe version of ArrayList

Map – Key-Value Pairs

  • HashMap – key-value pairs, no guaranteed order, allows one null key
  • LinkedHashMap – key-value pairs, preserves insertion order
  • Common methods: put(), get(), remove(), containsKey(), keySet(), values()

Sorting & Iteration

  • Comparable interface – compareTo() for natural ordering inside the class
  • Comparator interface – compare() for custom ordering defined outside the class
  • Collections.sort(list) and Collections.sort(list, comparator)
  • IteratorhasNext() and next() for safe manual traversal

Rest Assured

Why This Module Matters: Before you can test an API, you need to understand what it is and why it exists. This foundation shapes every decision you make as a tester — from what to test, to how to document bugs, to how you explain your work in interviews.


API Concepts & Architecture

  • API (Application Programming Interface) — a contract that lets two software systems exchange data; how apps talk to each other behind the scenes
  • REST (Representational State Transfer) — the most widely used architectural style for web APIs; stateless, resource-based communication
  • HTTP (HyperText Transfer Protocol) — the transport protocol used by every REST API; every request and response travels over HTTP
  • Client-server model — your test tool (Postman) is the client; the API server processes requests and returns responses
  • Monolithic architecture — one large application that handles everything; harder to scale and test independently
  • Microservices — small independent services, each with its own API; testing each service separately is faster and more targeted
  • API Gateway — the front door that receives all requests and routes them to the correct microservice

Why This Module Matters: Every single API test you write is based on an HTTP method. Knowing which method to use and what it means is the difference between writing meaningful tests and just clicking Send and hoping for the best.


HTTP Methods Mapped to CRUD

  • GET — Read/retrieve data; no body needed; safe and idempotent — sending it 100 times changes nothing
  • POST — Create a new resource; requires a request body with the data to save; not idempotent
  • PUT — Full update; replaces the entire resource with what you send; whatever you leave out gets removed
  • PATCH — Partial update; only the fields you specify are changed; everything else stays the same
  • DELETE — Remove a resource; typically returns 204 No Content — success with no data back
  • HEAD — Like GET but returns only headers; used for health checks without loading actual data
  • OPTIONS — Returns a list of HTTP methods the server supports for a given endpoint; used in CORS preflight

Why This Module Matters: Status codes are the first thing you check in every test result. A tester who can read a three-digit code and instantly know whether the problem is in the request, the server, or the data is worth far more than one who just checks “did I get a response?”


2xx – Success & 3xx – Redirects

  • 200 OK — Request succeeded; result is in the body — typical for GET and PUT
  • 201 Created — New resource created successfully — typical for POST; often includes new resource URL in Location header
  • 202 Accepted — Request accepted for processing but not yet completed (asynchronous operations)
  • 204 No Content — Success but no data returned — typical for DELETE
  • 301 Moved Permanently / 302 Found — Resource has moved; client should follow the new URL

4xx – Client Errors & 5xx – Server Errors

  • 400 Bad Request — Your request has a problem: missing field, wrong format, invalid value — fix the request
  • 401 Unauthorized — Not authenticated; no token, expired token, or wrong credentials
  • 403 Forbidden — Authenticated but not permitted; you’re logged in but your role doesn’t allow this action
  • 404 Not Found — The resource you asked for doesn’t exist; check the ID or URL
  • 500 Internal Server Error — Server-side failure; the bug is in the application code, not your request
  • 502 Bad Gateway / 503 Service Unavailable — Infrastructure issues; server or upstream service is down

Why This Module Matters: A complete test validates more than just the response body. Headers, status lines, response time, and parameter types all matter — and knowing every part of a request and response means you catch bugs that less thorough testers miss entirely.


Request Anatomy

  • Endpoint — the full URL of a specific resource: https://api.example.com/v1/users/42
  • Path parameter — a variable embedded in the URL path: /users/{id} — identifies a specific record
  • Query parameter — appended after ? for filtering or searching: /products?category=books&limit=10
  • Request Headers — metadata sent with every request: Content-Type: application/json, Authorization: Bearer <token>
  • Request Body / Payload — the data you send to create or update (required for POST, PUT, PATCH)

Response Anatomy

  • Status line — HTTP version + code + phrase: HTTP/1.1 201 Created
  • Response Headers — server metadata: Content-Type, Cache-Control, X-Request-Id
  • Response Body — the data returned: JSON object, array, or empty for 204
  • Response time — how fast the server responded in milliseconds; a performance validation point

Why This Module Matters: Almost every modern API sends and receives JSON. If you can’t read and write JSON confidently, you can’t write meaningful request bodies or validate response data accurately. This module is the difference between surface-level testing and deep, reliable testing.


JSON Structure & Data Types

  • JSON Object — key-value pairs in curly braces:
    { "id": 1, "name": "Bhanu", "active": true }
  • JSON Array — ordered list in square brackets:
    ["admin", "viewer", "editor"], array of objects:
    [ { ... }, { ... } ]
  • Data types
    String (in double quotes), Number (no quotes), Boolean (true / false), null, nested Object, Array
  • Nested JSON — objects inside objects; the standard structure for real API responses
  • Common JSON mistakes that cause 400 Bad Request — missing commas, single quotes, trailing commas, wrong brackets
  • JSON vs XML — JSON is lighter and easier to read; XML uses tags like HTML; most REST APIs use JSON today

Why This Module Matters: Postman is the standard tool for API testing in the industry. Getting comfortable with its interface from day one means you spend your time testing — not fighting the tool. Most QA job descriptions now list Postman as a required skill.


Postman UI & Basic Requests

  • Installing Postman desktop app; signing in; Postman workspace overview
  • Sending a GET request — enter URL, click Send, read the response body, status, and time
  • Sending a POST request — select Body tab → raw → JSON → enter payload → Send
  • Response panel tabs — Body (Pretty / Raw / Preview), Headers, status code, response time, size
  • Saving requests to a Collection — naming, adding to a folder, adding description
  • Request history and duplicating requests for reuse

Why This Module Matters: No professional tester hardcodes URLs and tokens into every request. Collections keep your work organised and shareable; environments let the same test run against Dev, Staging, or Production with a single switch; variables make your tests dynamic and reusable.


Collections

  • Postman Collection — a named folder of related API requests; the shareable unit of API test work
  • Creating collections, adding folders per module or feature, organising by CRUD operation type
  • Exporting as JSON file for sharing; importing a colleague’s collection to run immediately

Environments & Variables

  • Environment — a named set of key-value pairs (Dev, Staging, Production); switch in one click, no request edits needed
  • Environment Variable — scoped to the selected environment: {{base_url}}, {{token}}
  • Collection Variable — scoped to the collection; shared across all requests within it
  • Global Variable — available everywhere in Postman across all collections and environments
  • Variable syntax in requests — {{variable_name}} in URL, header value, or body
  • Setting variables from a response —
    pm.environment.set("userId", pm.response.json().id)

Why This Module Matters: Most real-world APIs are secured. If you can’t handle authentication in Postman, you can’t test 80% of a real application. Understanding auth also helps you write better security-related negative test cases — which is something every interviewer asks about.


Authentication Types

  • Authentication vs Authorization — Authentication = who you are; Authorization = what you are allowed to do
  • API Key — a static key sent as a header: x-api-key: abc123 or as a query parameter
  • Bearer Token — sent in the Authorization header: Authorization: Bearer <jwt_token>
  • Basic Auth — username and password encoded as Base64 in the Authorization header
  • OAuth 2.0 — a flow where you get a temporary access token from an auth server and use it to call protected APIs
  • JWT (JSON Web Token) — a self-contained token with three sections: Header, Payload (claims), Signature

Handling Tokens in Postman

  • Storing the token in an environment variable {{access_token}} — never hardcode tokens in requests
  • Login flow: POST /login → extract token from response body →
    pm.environment.set("token", ...)
  • Token expiry — causes 401 Unauthorized; test for this scenario specifically
  • Setting Auth at the collection level so all requests inherit it automatically

Why This Module Matters: A tester who only checks the happy path misses half the bugs. APIs fail in predictable ways — bad inputs, missing auth, wrong permissions, nonexistent resources. Testing these “expected failure” scenarios is what makes a QA engineer genuinely valuable to a team.


Positive Test Scenarios — Happy Path

  • Valid GET with existing resource ID → expect 200 OK + correct data in body
  • Valid POST with all required fields → expect 201 Created + new resource in response
  • Valid PUT / PATCH with correct ID and body → expect 200 OK + updated data
  • Valid DELETE with existing resource ID → expect 204 No Content

Negative Test Scenarios — Error Path

  • Missing required field in POST body → expect 400 Bad Request with meaningful error message
  • Wrong data type (e.g. text where a number is expected) → expect 400 Bad Request
  • GET with a non-existent ID → expect 404 Not Found
  • No token in the request header → expect 401 Unauthorized
  • Valid token but accessing a resource the user has no permission for → expect 403 Forbidden
  • Boundary values — maximum field length, minimum numeric value, empty string, zero
  • Duplicate POST — creating a resource that already exists → expect 400 or 409 Conflict

Why This Module Matters: Manual checking — reading the response and deciding if it’s right — doesn’t scale. Test scripts run automatically after every request and give you a pass/fail result instantly. This is what separates a manual tester from a semi-automated API tester, and it’s a skill employers specifically look for.


Writing Tests in Postman

  • The Tests tab — JavaScript that Postman runs automatically after every response is received
  • pm.test("description", function () { ... }) — structure of every Postman test case
  • pm.response.to.have.status(201) — assert the exact status code returned
  • pm.response.to.have.header("Content-Type") — assert a specific header is present
  • pm.expect(pm.response.responseTime).to.be.below(2000) — assert the response arrives in under 2 seconds

Response Body Assertions

  • const json = pm.response.json() — parse response body as a JavaScript object
  • pm.expect(json.name).to.eql("Bhanu") — assert exact field value
  • pm.expect(json).to.have.property("id") — assert a key exists in the response
  • pm.expect(json.id).to.be.a("number") — assert correct data type
  • pm.expect(json.items).to.be.an("array").that.is.not.empty — assert non-empty array
  • Chaining: .to.include, .to.not.be.null, .to.be.true, .to.be.above(0)

Why This Module Matters: Real test scenarios require setup — unique data, fresh tokens, IDs from previous responses. Pre-request scripts automate this setup so your test suite is self-sufficient and repeatable. Without this skill, you end up manually copying values between requests, which defeats the purpose of automation.


Dynamic Data & Request Chaining

  • Pre-request Script — JavaScript that runs BEFORE the request is sent; perfect for generating dynamic test data
  • Generating unique values:
    pm.environment.set("email", "user" + Date.now() + "@test.com")
  • Conditional token refresh — check if token exists, only call POST /login when it doesn’t
  • Request chaining — POST /users creates user → extract id → store as {{userId}} → use in GET /users/{{userId}}
  • Full workflow automation: Login → extract token → create record → retrieve it → update → delete — all chained automatically

Why This Module Matters: Running one request at a time isn’t testing — it’s clicking. Collection Runner lets you run your entire test suite in one go and see a complete pass/fail summary. Pairing it with a CSV data file means you can test 50 different inputs without writing 50 separate requests.


Collection Runner

  • Collection Runner — executes all requests in a collection sequentially in one automated run
  • Configuring iterations — how many times the entire collection runs; delay between requests in milliseconds
  • Selecting specific requests or folders to include in the run
  • Results panel — per-request pass/fail summary, assertion details, response time
  • Exporting run results as HTML or JSON report to share with the team

Data-Driven Testing with CSV

  • CSV data file — each row is one test iteration; column headers become variable names in Postman
  • Uploading CSV in Runner — select file, Postman automatically creates one iteration per row
  • Using {{column_name}} in URL, body, and headers — same request runs with different data each time
  • Example: test POST /users with 20 different name/email combinations from a single CSV — no code needed
  • JSON data file as alternative for complex nested test data structures

Why This Module Matters: Knowing Postman as a tool isn’t enough. Having a test strategy is a professional skill. Companies hire testers who can look at API documentation, build a complete test plan, organize their work, and plug it into a daily pipeline — not just testers who know how to click Send.


Swagger / OpenAPI & Test Planning

  • Swagger / OpenAPI — the industry-standard format for documenting REST APIs; your primary source for test cases
  • Reading Swagger UI — endpoints, request/response schemas, required fields, example payloads
  • Importing OpenAPI spec into Postman — auto-generates a collection with all endpoints ready to test
  • Deriving test cases from documentation — positive cases, negative cases, boundary values, security tests

Test Organization & CI/CD

  • Collection folder structure — organize by feature: Login folder, Users folder, Products folder, Orders folder
  • Happy Path folder + Error Scenarios folder within each feature — clear separation of test types
  • Environment strategy — separate Dev, QA, Staging, and Production environments for safe multi-environment testing
  • Regression suite — a curated collection of critical tests re-run after every release to catch regressions
  • Newman — Postman’s command-line runner:
    newman run collection.json -e environment.json
  • CI/CD integration — trigger Newman automatically via Jenkins, GitHub Actions, or any pipeline after each deployment

Interview Preparation

GET vs POST — GET retrieves with no body; POST creates with a body; GET is idempotent (safe to repeat), POST is not

401 vs 403 vs 404

  • 401 = not authenticated (missing or invalid token)
  • 403 = authenticated but not authorized (wrong role or permission)
  • 404 = the resource does not exist (wrong ID or URL)

PUT vs PATCH — PUT replaces the whole resource; PATCH updates only specified fields

What do you validate? — Status code, response body fields and types, required keys present, response time, content type header

Bearer Token Flow — POST /login → receive token → store → use in header:
Authorization: Bearer {{token}}

Data-driven testing — Upload CSV to Collection Runner; each row becomes one test iteration; {{column}} variables pull in each row’s data automatically


Exception Handling Mechanism

  • What is an exception – an unexpected event that disrupts normal program execution
  • Checked vs unchecked exceptions – compile-time vs runtime
  • try block – code that may throw an exception
  • catch (ExceptionType e) – handling a specific exception type
  • Multiple catch blocks – handling different exception types separately
  • finally block – always executes; used for resource cleanup (closing files, connections)
  • throw keyword – manually throwing an exception from your code
  • throws keyword – declaring that a method may throw a checked exception
  • Common exceptions:
    • NullPointerException
    • ArrayIndexOutOfBoundsException
    • NumberFormatException
    • ArithmeticException
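
The mechanics above fit together in one short, runnable sketch (the class and method names here are illustrative, not from the course materials):

```java
public class ExceptionDemo {
    // Parses a non-negative integer; returns -1 if the input is not a number
    public static int parsePositive(String input) {
        try {
            int value = Integer.parseInt(input);       // may throw NumberFormatException
            if (value < 0) {
                // throw keyword: manually raising an exception from your own code
                throw new IllegalArgumentException("negative: " + value);
            }
            return value;
        } catch (NumberFormatException e) {            // catch: handles this specific type only
            return -1;
        } finally {
            // finally always executes - in real code this is where resources are closed
        }
    }

    public static void main(String[] args) {
        System.out.println(parsePositive("42"));   // 42
        System.out.println(parsePositive("abc"));  // -1, NumberFormatException was caught
    }
}
```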

List – Ordered, Allows Duplicates

  • ArrayList – dynamic array; fast random access; allows duplicates; maintains insertion order
  • LinkedList – doubly-linked list; fast add/remove at ends; slower random access
  • Common methods: add(), get(int), remove(), size(), contains(), set()
  • Iterating with for-each and Iterator
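
A minimal sketch of the List behaviors listed above (names are illustrative):

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.LinkedList;
import java.util.List;

public class ListDemo {
    public static List<String> build() {
        List<String> names = new ArrayList<>();
        names.add("smoke");
        names.add("regression");
        names.add("smoke");              // duplicates are allowed
        names.set(1, "sanity");          // replace by index
        return names;                    // insertion order is preserved
    }

    public static void main(String[] args) {
        List<String> names = build();
        System.out.println(names.get(0));           // fast random access by index
        System.out.println(names.contains("sanity"));

        // LinkedList shares the same List API; efficient adds/removes at the ends
        LinkedList<String> chain = new LinkedList<>(names);
        chain.addFirst("setup");

        // Manual traversal with an Iterator
        for (Iterator<String> it = chain.iterator(); it.hasNext(); ) {
            System.out.println(it.next());
        }
    }
}
```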

Set – No Duplicates

  • HashSet – no duplicates, no guaranteed order; backed by a hash table
  • LinkedHashSet – no duplicates, preserves insertion order
  • TreeSet – no duplicates, elements stored in natural sorted order
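
The three Set variants side by side, as a small sketch:

```java
import java.util.HashSet;
import java.util.LinkedHashSet;
import java.util.Set;
import java.util.TreeSet;

public class SetDemo {
    public static Set<String> sorted() {
        Set<String> tree = new TreeSet<>();
        tree.add("banana");
        tree.add("apple");
        tree.add("apple");               // duplicate is silently ignored
        tree.add("cherry");
        return tree;                     // natural sorted order: apple, banana, cherry
    }

    public static void main(String[] args) {
        System.out.println(sorted());                         // [apple, banana, cherry]
        Set<String> hash = new HashSet<>(sorted());           // no guaranteed order
        Set<String> linked = new LinkedHashSet<>(sorted());   // preserves insertion order
        System.out.println(hash.size());                      // 3 - duplicates never stored
        System.out.println(linked);
    }
}
```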

Queue & Stack

  • Queue interface – FIFO; offer(), poll(), peek()
  • Stack – LIFO; push(), pop(), peek()
  • Vector – legacy thread-safe version of ArrayList
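
FIFO vs LIFO in a few lines (a sketch; `ArrayDeque` is the usual modern Queue implementation):

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.Stack;

public class QueueStackDemo {
    public static String firstOut() {
        Queue<String> queue = new ArrayDeque<>();
        queue.offer("a");              // enqueue at the tail
        queue.offer("b");
        return queue.poll();           // FIFO: "a" leaves first
    }

    public static String lastOut() {
        Stack<String> stack = new Stack<>();
        stack.push("a");
        stack.push("b");
        return stack.pop();            // LIFO: "b" leaves first
    }

    public static void main(String[] args) {
        System.out.println(firstOut()); // a
        System.out.println(lastOut());  // b
    }
}
```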

Map – Key-Value Pairs

  • HashMap – key-value pairs, no guaranteed order, allows one null key
  • LinkedHashMap – key-value pairs, preserves insertion order
  • Common methods: put(), get(), remove(), containsKey(), keySet(), values()
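
The Map methods above in a runnable sketch (the key names are illustrative):

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

public class MapDemo {
    public static Map<String, Integer> statusCounts() {
        Map<String, Integer> counts = new LinkedHashMap<>(); // preserves insertion order
        counts.put("passed", 8);
        counts.put("failed", 2);
        counts.put("passed", 9);       // same key: value overwritten, size stays 2
        return counts;
    }

    public static void main(String[] args) {
        Map<String, Integer> counts = statusCounts();
        System.out.println(counts.get("passed"));          // 9
        System.out.println(counts.containsKey("failed"));  // true
        for (String key : counts.keySet()) {               // iterate over keys
            System.out.println(key + " = " + counts.get(key));
        }
        Map<String, Integer> hash = new HashMap<>(counts); // same API, no order guarantee
        System.out.println(hash.size());
    }
}
```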

Sorting & Iteration

  • Comparable interface – compareTo() for natural ordering inside the class
  • Comparator interface – compare() for custom ordering defined outside the class
  • Collections.sort(list) and Collections.sort(list, comparator)
  • Iterator – hasNext() and next() for safe manual traversal
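
Comparable vs Comparator in one sketch (`TestCase` and its fields are invented for illustration):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

public class SortDemo {
    // Comparable: the natural order lives INSIDE the class
    static class TestCase implements Comparable<TestCase> {
        final String name;
        final int durationMs;
        TestCase(String name, int durationMs) { this.name = name; this.durationMs = durationMs; }
        @Override public int compareTo(TestCase other) { return name.compareTo(other.name); }
    }

    public static List<String> namesSortedNaturally() {
        List<TestCase> tests = new ArrayList<>();
        tests.add(new TestCase("login", 300));
        tests.add(new TestCase("checkout", 900));
        Collections.sort(tests);                        // uses compareTo
        List<String> names = new ArrayList<>();
        for (TestCase t : tests) names.add(t.name);
        return names;
    }

    public static void main(String[] args) {
        System.out.println(namesSortedNaturally());     // [checkout, login]
        // Comparator: a custom order defined OUTSIDE the class
        List<TestCase> tests = new ArrayList<>();
        tests.add(new TestCase("login", 300));
        tests.add(new TestCase("checkout", 900));
        Collections.sort(tests, Comparator.comparingInt(t -> t.durationMs));
        System.out.println(tests.get(0).name);          // shortest duration first
    }
}
```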

DevOps

Maven

Git

Jenkins

Our Programs

Advanced Selenium / Rest Assured with DevOps Tools

Koding Tree offers comprehensive training in Advanced Selenium, API Manual Testing, Rest Assured, DevOps, and Java, covering everything from core fundamentals to advanced automation and system-level concepts.

Advanced Selenium

💡 Why this matters: Most testers use only basic TestNG. Mastering @DataProvider, @Factory, and dependsOnMethods lets you run smarter, data-driven, and highly controlled test suites — skills that immediately impress in interviews.


Annotations & Execution Control

  • Full annotation lifecycle – @BeforeSuite, @BeforeTest, @BeforeClass, @BeforeMethod, @AfterMethod, @AfterClass, @AfterSuite
  • invocationCount – running a test method multiple times; constraints and valid use cases
  • priority – default value 0; negative values allowed; alphabetical tie-breaking
  • dependsOnMethods – precedence over priority when both are set
  • enabled=false vs alwaysRun=true – controlling test skipping within groups
  • Master suite XML – combining multiple testng.xml files into one run using <suite-files>

DataProvider – All Return Types

  • Object[] – single-parameter dataset; Object[][] – multi-row multi-column dataset
  • Iterator<Object[]> – lazy loading for large datasets without loading everything into memory
  • Why Map is not a valid @DataProvider return type

Assert, Groups & @Factory

  • Assert vs SoftAssert – fail-fast vs collect-all-failures strategy
  • @Test(groups) – smoke, regression, sanity; running specific groups from testng.xml
  • @Parameters – passing values from testng.xml into test methods at runtime
  • @Factory – dynamically generating multiple test class instances from a single class
  • Running TestNG programmatically via TestNG and XmlSuite classes – no XML needed

💡 Why this matters: Dependency Injection is how enterprise frameworks avoid duplicating browser setup across dozens of test classes. Understanding java.lang.reflect makes you stand out in senior-level framework design interviews.


Topics Covered

  • What Dependency Injection means – providing objects to a class rather than letting it create them
  • Injecting WebDriver into test classes via constructor or method parameters
  • java.lang.Class, java.lang.reflect.Method – inspecting and invoking methods at runtime
  • Running test methods programmatically using reflection – no hardcoded test names
  • Writing test results back to an Excel file after each test via injected utility classes
  • DI vs static WebDriver – when to use each approach in real projects

💡 Why this matters: Flaky tests cost teams hours of manual re-runs. Implementing IRetryAnalyzer and ITestListener at the framework level means your suite handles failures automatically — a hallmark of a production-ready framework.


Topics Covered

  • IRetryAnalyzer – implementing retry(ITestResult) to auto re-run failed tests up to a configurable limit
  • ITestListener – hooking into onTestFailure(), onTestStart(), onTestSuccess() lifecycle events
  • Taking a screenshot on failure using ITestResult + TakesScreenshot + FileUtils.copyFile()
  • Attaching failure screenshots automatically to ExtentReports via Base64 encoding
  • Reporter.log() – adding custom step messages visible in the TestNG HTML report
  • Registering listeners via @Listeners annotation or the <listeners> block in testng.xml

💡 Why this matters: Design patterns are asked in virtually every senior QA interview. Learning to implement and explain them with real Selenium examples puts you ahead of most candidates.


Singleton Pattern

  • Private constructor + static method = one WebDriver instance for the entire test run
  • Preventing multiple browser launches when multiple test classes initialize the driver
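
A dependency-free sketch of the pattern — `Browser` stands in for the real WebDriver so the example runs without Selenium:

```java
public class SingletonDemo {
    // Placeholder for an expensive resource such as a WebDriver instance
    public static class Browser { }

    public static class DriverManager {
        private static Browser instance;       // the single shared instance
        private DriverManager() { }            // private constructor blocks 'new' from outside
        public static Browser getDriver() {
            if (instance == null) {            // created lazily, only on first request
                instance = new Browser();
            }
            return instance;
        }
    }

    public static void main(String[] args) {
        // Two test classes asking for the driver receive the SAME object
        System.out.println(DriverManager.getDriver() == DriverManager.getDriver()); // true
    }
}
```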

Factory Pattern

  • DriverFactory class – returning ChromeDriver, FirefoxDriver, or EdgeDriver based on input parameter
  • Removing browser-specific if-else logic from individual test classes
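
The same idea as a minimal sketch; `ChromeStub` and `FirefoxStub` are stand-ins for the real driver classes so it compiles without Selenium:

```java
public class FactoryDemo {
    public interface Driver { String name(); }
    public static class ChromeStub implements Driver { public String name() { return "chrome"; } }
    public static class FirefoxStub implements Driver { public String name() { return "firefox"; } }

    public static class DriverFactory {
        // One place holds the browser-selection logic instead of every test class
        public static Driver create(String browser) {
            switch (browser.toLowerCase()) {
                case "chrome":  return new ChromeStub();
                case "firefox": return new FirefoxStub();
                default: throw new IllegalArgumentException("Unsupported browser: " + browser);
            }
        }
    }

    public static void main(String[] args) {
        System.out.println(DriverFactory.create("chrome").name());  // chrome
    }
}
```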

Decorator Pattern

  • Selenium 3 – EventFiringWebDriver + implementing WebDriverEventListener
  • Selenium 4 – EventFiringDecorator wrapping WebDriver + WebDriverListener interface
  • Logging every click(), navigate().to(), and exception automatically through the listener
  • Choosing between extends AbstractWebDriverEventListener vs implements WebDriverListener

💡 Why this matters: Standard POM is just the beginning. Advanced annotations like @FindAll, @FindBys, and @CacheLookup make your page classes faster and more expressive — and interviewers notice the difference.


Topics Covered

  • @FindBy – declaring WebElement and List<WebElement> fields, initialized via PageFactory.initElements(driver, this)
  • @FindAll – locating elements matching any of multiple locators (OR logic)
  • @FindBys – locating elements matching all of multiple locators (AND / parent-child chaining)
  • @CacheLookup – caching the element reference after the first DOM lookup to improve performance
  • When @CacheLookup helps vs. when it breaks – dynamic elements, page refresh, AJAX updates
  • Comparing @FindAll vs @FindBys – logic difference and when to use each in real tests

💡 Why this matters: Knowing three different screenshot approaches — and when each is appropriate — shows depth of knowledge. Most testers only know one method, and interviewers notice.


Topics Covered

  • TakesScreenshot – casting WebDriver and calling getScreenshotAs(OutputType.FILE) for full-page capture
  • Saving screenshots with FileUtils.copyFile() using timestamped filenames to avoid overwriting
  • WebElement.getScreenshotAs() – capturing just a specific element on the page (Selenium 4)
  • AShot library – full-page screenshots including content below the visible fold
  • Robot class (java.awt.Robot) – OS-level desktop screenshot using createScreenCapture(new Rectangle(Toolkit.getDefaultToolkit().getScreenSize()))
  • Capturing screenshots on failure via ITestListener.onTestFailure(ITestResult) – no test-level code needed

💡 Why this matters: Some elements simply cannot be handled by standard Selenium — hidden inputs, custom widgets, OS-level dialogs. JavascriptExecutor and Robot are what professionals reach for when Selenium hits a wall.


JavascriptExecutor

  • Casting WebDriver to JavascriptExecutor and calling executeScript(String, Object...)
  • Scrolling by pixel: window.scrollBy(0, 500); scrolling to element: arguments[0].scrollIntoView()
  • Clicking elements that WebElement.click() cannot reach: arguments[0].click()
  • Setting input values via JS: arguments[0].value='text' for blocked input fields
  • Reading values: return arguments[0].value / return arguments[0].textContent
  • Limitations – bypasses user interaction validation; use carefully in real automation

Robot Class

  • Robot.mouseMove(x, y), mousePress(InputEvent.BUTTON1_DOWN_MASK), mouseRelease()
  • Robot.keyPress(KeyEvent.VK_ENTER) – keyboard simulation at OS level
  • Handling Windows file upload dialogs and OS pop-ups that Selenium cannot touch
  • Combining with StringSelection and Toolkit.getDefaultToolkit().getSystemClipboard() for clipboard operations

💡 Why this matters: Professional testers don’t just run tests — they produce shareable evidence. Rich HTML reports and structured log files let your team and stakeholders see exactly what was tested, what passed, and why something failed — with zero manual effort.


ExtentReports

  • Setting up ExtentSparkReporter and ExtentReports instance in a base class or listener
  • Creating ExtentTest nodes per test and logging pass(), fail(), skip() with messages
  • Attaching Base64-encoded screenshots to reports on failure
  • Integrating with ITestListener for automatic lifecycle-driven reporting
  • Flushing reports at end of suite with extent.flush()

Log4J & Reporter

  • log4j2.xml / log4j.properties – configuring ConsoleAppender, FileAppender, and log patterns
  • LogManager.getLogger(ClassName.class) – logging at DEBUG, INFO, WARN, ERROR levels
  • Reporter.log() – embedding step-level messages directly into the TestNG HTML report
  • Reporter.getOutput(ITestResult) – reading Reporter messages to attach to ExtentReport

Inheritance

  • extends keyword – child class inherits all non-private members of the parent
  • super keyword – calling the parent class constructor or methods
  • Types: single inheritance, multilevel inheritance, hierarchical inheritance
  • Java does not support multiple class inheritance – why, and how interfaces solve this

Encapsulation & Access Modifiers

  • private, protected, public, package-private – scope of each
  • Encapsulation pattern – private fields with public getter and setter methods
  • Why encapsulation prevents invalid state and makes code easier to maintain
  • Method overloading – same method name, different parameter signature

Method Overriding & Runtime Polymorphism

  • Method overriding – child class provides its own implementation of a parent method
  • @Override annotation – compile-time check that overriding is valid
  • Runtime (dynamic) polymorphism – method resolved at runtime based on actual object type
  • Overloading vs overriding – compile-time vs runtime; key differences interviewers test
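
Overloading, overriding, and runtime polymorphism together in one sketch (class names invented for illustration):

```java
public class PolymorphismDemo {
    public static class Report {
        public String format() { return "plain"; }
        // Overloading: same name, different parameter list - resolved at COMPILE time
        public String format(String prefix) { return prefix + ":" + format(); }
    }

    public static class HtmlReport extends Report {
        @Override                      // compile-time check that the signature matches
        public String format() { return "html"; }
    }

    public static String render(Report r) {
        // Runtime polymorphism: the ACTUAL object type decides which format() runs
        return r.format();
    }

    public static void main(String[] args) {
        System.out.println(render(new Report()));           // plain
        System.out.println(render(new HtmlReport()));       // html
        System.out.println(new HtmlReport().format("log")); // log:html - inherited overload
    }
}
```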

Object Type Casting

  • Upcasting – implicit; Animal a = new Dog() – always safe
  • Downcasting – explicit; Dog d = (Dog) a – requires instanceof check
  • instanceof operator – safely checking type before casting
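
The classic Animal/Dog example, with the instanceof guard shown in place:

```java
public class CastingDemo {
    public static class Animal { public String speak() { return "..."; } }
    public static class Dog extends Animal {
        @Override public String speak() { return "woof"; }
        public String fetch() { return "ball"; }   // Dog-only behavior
    }

    public static String fetchIfDog(Animal a) {
        if (a instanceof Dog) {        // guard BEFORE the explicit downcast
            Dog d = (Dog) a;
            return d.fetch();
        }
        return "cannot fetch";
    }

    public static void main(String[] args) {
        Animal a = new Dog();          // upcast: implicit, always safe
        System.out.println(a.speak());                 // woof - overridden method still wins
        System.out.println(fetchIfDog(a));             // ball
        System.out.println(fetchIfDog(new Animal()));  // cannot fetch
    }
}
```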

Interfaces

  • interface keyword – defines a contract of method signatures without implementation
  • implements keyword – a class agrees to provide all interface methods
  • Interface methods are implicitly public abstract, fields are public static final
  • A class can implement multiple interfaces – Java’s answer to multiple inheritance

Abstract Classes

  • abstract class – cannot be instantiated; must be subclassed
  • abstract method – declared without a body; subclass must provide implementation
  • Can contain both abstract methods and fully implemented (concrete) methods
  • Interface vs abstract class – when to use each; key differences for interviews
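
A sketch contrasting the two: the interface is a pure contract, while the abstract class mixes one unimplemented step with shared concrete code (all names here are invented):

```java
public class ContractDemo {
    public interface Executable { String run(); }   // methods are implicitly public abstract

    public abstract static class BaseTest {
        public abstract String steps();             // subclass MUST implement this
        public String execute() {                   // shared concrete behavior
            return "setup -> " + steps() + " -> teardown";
        }
    }

    // A class may extend ONE class but implement MANY interfaces
    public static class LoginTest extends BaseTest implements Executable {
        @Override public String steps() { return "login"; }
        @Override public String run() { return execute(); }
    }

    public static void main(String[] args) {
        // new BaseTest() would not compile - abstract classes cannot be instantiated
        System.out.println(new LoginTest().run()); // setup -> login -> teardown
    }
}
```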

String Class & Key Methods

  • String is immutable – every operation creates a new object in the String Pool
  • length(), charAt(int), indexOf(String), lastIndexOf()
  • substring(int start), substring(int start, int end)
  • toUpperCase(), toLowerCase(), trim()
  • replace(), contains(), startsWith(), endsWith()
  • equals() vs equalsIgnoreCase() vs == – why == compares references, not content
  • split(String regex), join(), toCharArray()
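
Most of the methods above, exercised in one place (the `clean` helper is invented for illustration):

```java
public class StringDemo {
    // Illustrative helper: every call returns NEW strings - String itself never changes
    public static String clean(String raw) {
        return raw.trim().toUpperCase();
    }

    public static void main(String[] args) {
        String name = "  Koding Tree  ".trim();
        System.out.println(name.length());               // 11
        System.out.println(name.substring(0, 6));        // Koding
        System.out.println(name.indexOf("Tree"));        // 7
        System.out.println(name.replace("Tree", "Lab")); // Koding Lab
        System.out.println(name.startsWith("Koding"));   // true

        String a = "test";
        String b = new String("test");
        System.out.println(a == b);                      // false - different references
        System.out.println(a.equals(b));                 // true  - same characters
        System.out.println(a.equalsIgnoreCase("TEST"));  // true

        System.out.println("smoke,sanity".split(",").length); // 2
        System.out.println(clean("  qa  "));             // QA
    }
}
```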

StringBuilder & StringBuffer

  • StringBuilder – mutable, not thread-safe, faster for single-threaded use
  • append(), insert(), delete(), reverse(), toString()
  • StringBuffer – mutable, thread-safe, used in multi-threaded programs
  • String vs StringBuilder vs StringBuffer – performance and thread-safety trade-offs
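
A short sketch of mutable string building (method name is illustrative):

```java
public class BuilderDemo {
    public static String buildReportLine() {
        StringBuilder sb = new StringBuilder("result"); // mutable - edits happen in place
        sb.append(": PASS");
        sb.insert(0, "[log] ");
        return sb.toString();                           // "[log] result: PASS"
    }

    public static void main(String[] args) {
        System.out.println(buildReportLine());
        System.out.println(new StringBuilder("abc").reverse()); // cba

        // StringBuffer offers the same API with synchronized (thread-safe) methods
        StringBuffer sharedLog = new StringBuffer();
        sharedLog.append("thread-safe");
        System.out.println(sharedLog);
    }
}
```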

List – Ordered, Allows Duplicates

  • ArrayList – dynamic array; fast random access; allows duplicates; maintains insertion order
  • LinkedList – doubly-linked list; fast add/remove at ends; slower random access
  • Common methods: add(), get(int), remove(), size(), contains(), set()
  • Iterating with for-each and Iterator

Set – No Duplicates

  • HashSet – no duplicates, no guaranteed order; backed by a hash table
  • LinkedHashSet – no duplicates, preserves insertion order
  • TreeSet – no duplicates, elements stored in natural sorted order

Queue & Stack

  • Queue interface – FIFO; offer(), poll(), peek()
  • Stack – LIFO; push(), pop(), peek()
  • Vector – legacy thread-safe version of ArrayList

Map – Key-Value Pairs

  • HashMap – key-value pairs, no guaranteed order, allows one null key
  • LinkedHashMap – key-value pairs, preserves insertion order
  • Common methods: put(), get(), remove(), containsKey(), keySet(), values()

Sorting & Iteration

  • Comparable interface – compareTo() for natural ordering inside the class
  • Comparator interface – compare() for custom ordering defined outside the class
  • Collections.sort(list) and Collections.sort(list, comparator)
  • IteratorhasNext() and next() for safe manual traversal

Why this matters: A Keyword Driven Framework separates test logic from test data – so QA analysts without coding skills can write test cases in Excel and the framework executes them automatically. This is a sign of a senior framework developer.


Architecture & Concepts

  • What keyword-driven testing is – separating action keywords from test data in an Excel file
  • Excel test case structure – columns for Keyword, Locator Type, Locator Value, Test Data
  • Building a KeywordExecutor / ActionEngine using reflection or a switch statement

Implementation

  • Reading test steps from Excel row by row using XSSFWorkbook / XSSFRow
  • Implementing WebUtil helper methods:
    clickElement(), typeText(), verifyText(), navigateTo()
  • Dispatching the correct method based on the keyword value read from Excel
  • Adding new test cases entirely in Excel – zero Java code changes required
  • Logging each keyword action to ExtentReports for full traceability
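
The dispatch step above can be sketched without Excel or a browser: a map from keyword to action plays the role of the switch- or reflection-based engine. `ActionEngine`, `execute()`, and the recorded log are illustrative names only; a real framework would delegate to WebUtil methods such as clickElement() and typeText().

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.BiConsumer;

// Keyword dispatcher sketch: each action records what it would do, so the
// dispatch logic runs without a browser or an Excel file.
public class ActionEngine {
    private final List<String> log = new ArrayList<>();
    private final Map<String, BiConsumer<String, String>> actions = new HashMap<>();

    public ActionEngine() {
        // One entry per supported keyword, matching the Keyword column in Excel
        actions.put("CLICK",    (locator, data) -> log.add("click " + locator));
        actions.put("TYPE",     (locator, data) -> log.add("type '" + data + "' into " + locator));
        actions.put("NAVIGATE", (locator, data) -> log.add("open " + data));
    }

    // One Excel row = keyword + locator + test data
    public void execute(String keyword, String locator, String data) {
        BiConsumer<String, String> action = actions.get(keyword.toUpperCase());
        if (action == null) {
            throw new IllegalArgumentException("Unknown keyword: " + keyword);
        }
        action.accept(locator, data);
    }

    public List<String> getLog() { return log; }
}
```

Adding a new keyword means adding one map entry; adding a new test case means adding rows in Excel, with no other Java changes.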

 

💡 Why this matters: A working project on a real application is the difference between a resume and a portfolio. You leave this module with something you can demo, explain end-to-end, and be proud of in any senior QA interview.


Live Project

  • Automating a real Point-of-Sale (POS) web application – login, products, orders, billing
  • Applying all course components: TestNG, POM with @FindBy, ExtentReports, Log4j, Excel data, Jenkins CI/CD
  • Jira – creating test cases, logging bugs with screenshots, tracking sprint progress
  • Debugging real failures – identifying whether the issue is in the app, framework, or locator
  • Code review – refactoring for readability, removing duplication, improving structure

Interview Preparation

Design Patterns
Explain Singleton, Factory, Decorator with real code examples from your own project

Framework Walkthrough
Walk through your package structure, layer responsibilities, and why each design decision was made

CI/CD Pipeline
Describe the full chain: Jenkins → pom.xml → testng.xml → @Parameters → test method

Parallel Execution
Explain ThreadLocal<WebDriver> and how you prevented driver conflicts across parallel threads
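
The pattern can be sketched with a String standing in for WebDriver, so it runs without a browser; `DriverManager` and its method names are illustrative. The point is that each parallel TestNG thread sees its own driver slot and can never clobber another thread's session.

```java
public class DriverManager {
    // One driver slot per thread. In a real framework the type parameter
    // would be WebDriver; a String stands in here.
    private static final ThreadLocal<String> DRIVER = new ThreadLocal<>();

    public static void set(String driver) { DRIVER.set(driver); }
    public static String get() { return DRIVER.get(); }
    public static void remove() { DRIVER.remove(); } // call in @AfterMethod to avoid leaks

    public static void main(String[] args) throws InterruptedException {
        set("chrome-main");
        Thread worker = new Thread(() -> {
            set("firefox-worker");
            System.out.println("worker sees: " + get()); // firefox-worker
        });
        worker.start();
        worker.join();
        System.out.println("main sees: " + get());        // still chrome-main
    }
}
```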

Data Strategies
Compare Excel, JSON, Properties, MySQL — and justify where you used each in the project

Docker & Grid
Describe how you used Docker containers as browser nodes and ran tests without installing browsers locally

API Manual Testing

Tools & Setup

  • Installing Java JDK, Eclipse IDE, and configuring environment variables
  • Creating a Maven project and understanding pom.xml structure
  • Adding Selenium WebDriver dependency via Maven Central
  • Setting up ChromeDriver, GeckoDriver (Firefox), and EdgeDriver

WebDriver Fundamentals

  • Understanding the Selenium architecture – how WebDriver communicates with browsers
  • Instantiating WebDriver for Chrome, Firefox, and Edge browsers
  • Writing and running your first automation script end-to-end
  • Understanding the difference between Selenium 3 and Selenium 4

WebDriver Browser Commands

  • driver.get() and driver.navigate().to() – loading URLs
  • navigate().back(), navigate().forward(), navigate().refresh()
  • driver.manage().window().maximize(), setSize(), setPosition()
  • driver.getTitle() and driver.getCurrentUrl() for page verification
  • driver.quit() vs driver.close() – when to use each

💡 Locators are how your script finds elements on a page. Mastering all 8 strategies means you can automate any website, no matter how it is built.


All 8 Locator Types

  • By.id – fastest and most reliable when available
  • By.name – using the HTML name attribute
  • By.className – single CSS class targeting
  • By.tagName – selecting by HTML element type
  • By.linkText – exact anchor text matching
  • By.partialLinkText – partial anchor text matching
  • By.cssSelector – powerful CSS-based targeting
  • By.xpath – most flexible, works anywhere in the DOM

💡 CSS Selectors and XPath are the two most powerful locator types. Knowing both in depth allows you to find any element — even in the most complex web pages.


CSS Selector Techniques

  • Tag, ID, class, and attribute-based CSS selectors
  • Combining multiple attributes: input[type='text'][name='email']
  • Child and descendant selectors, sibling selectors
  • :nth-child(), ^=, $=, *= – starts-with, ends-with, contains patterns

XPath Techniques

  • Absolute XPath vs Relative XPath – differences and when to use each
  • XPath with attributes: //tag[@attribute='value']
  • contains(), starts-with(), text() functions
  • XPath axes: parent, child, following-sibling, preceding-sibling, ancestor
  • Grouping and index-based XPath: (//tag)[2]
  • AND / OR logical operators in XPath expressions

💡 Once you find an element, you need to interact with it – type, click, read values. These are the core actions every automation script performs.


WebElement Action Methods

  • sendKeys() – typing into text fields and input boxes
  • click() – clicking buttons, links, checkboxes, and radio buttons
  • clear() – clearing existing text from input fields
  • submit() – submitting forms directly

WebElement Property Methods

  • isSelected() – checking state of checkboxes and radio buttons
  • isEnabled() – verifying if a field or button is active
  • isDisplayed() – checking if an element is visible on screen
  • getText() – reading the visible text of any element
  • getAttribute() – reading any HTML attribute value
  • getCssValue() – reading applied CSS property values

💡 Many modern websites require mouse gestures and keyboard combos that a simple click cannot handle. The Actions class and screenshot tools cover exactly these scenarios.


Actions Class – Advanced Mouse & Keyboard

  • moveToElement() – hovering over menus and tooltips
  • doubleClick() and contextClick() (right-click)
  • dragAndDrop() and dragAndDropBy()
  • Keyboard actions: keyDown(Keys.SHIFT), keyUp(), sendKeys(Keys.ENTER)
  • Scroll actions using scrollToElement() and scrollByAmount() (Selenium 4)
  • build().perform() – chaining and executing action sequences

Capturing Screenshots

  • TakesScreenshot interface – full-page screenshot on test failure
  • Element-level screenshot using WebElement.getScreenshotAs()
  • AShot library – for full-page scrolling screenshots
  • JavaScript Executor-based screenshot approach

💡 Real applications open pop-ups, new tabs, and embed content in frames. Knowing how to switch between them keeps your scripts from getting stuck.


Window & Tab Handling

  • driver.getWindowHandle() – capturing the current window reference
  • driver.getWindowHandles() – getting all open window/tab references
  • driver.switchTo().window(handle) – switching between windows and tabs
  • Opening a new tab using Selenium 4’s newWindow(WindowType.TAB)
  • Closing child windows and returning to the parent window
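
The usual child-window loop can be sketched with plain strings, since window handles are just strings. `findChildHandle()` is an illustrative helper; the real switch call it would feed is noted in a comment.

```java
import java.util.Set;

public class WindowSwitchDemo {
    // Compare every open handle with the parent handle captured before the
    // click that opened the new window; the first non-matching one is the child.
    public static String findChildHandle(String parent, Set<String> allHandles) {
        for (String handle : allHandles) {
            if (!handle.equals(parent)) {
                return handle; // real code: driver.switchTo().window(handle)
            }
        }
        return parent; // no child window was opened
    }
}
```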

Frame & Alert Handling

  • driver.switchTo().frame() – switching by index, name, or WebElement
  • driver.switchTo().defaultContent() – returning to the main page
  • driver.switchTo().alert() – handling JavaScript alerts and confirms
  • alert.accept(), alert.dismiss(), and alert.sendKeys()

Classes & Objects

  • Class as a blueprint – instance variables (fields) and instance methods
  • Creating objects: ClassName obj = new ClassName()
  • Default constructor – auto-provided by Java when no constructor is defined
  • Parameterised constructor – initialising objects with values at creation time
  • this keyword – resolving naming conflicts between parameters and instance fields
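
The points above fit in one small class (`User` and its fields are illustrative): a delegating no-arg constructor, a parameterised constructor, and `this.` resolving the field-vs-parameter name clash.

```java
public class User {
    // Instance fields: each object gets its own copy
    private String name;
    private int age;

    // Java auto-provides a default constructor only when no constructor is
    // written; once we define one, we must declare the no-arg version ourselves.
    public User() { this("guest", 0); }

    // Parameterised constructor: 'this.' picks the field over the parameter
    public User(String name, int age) {
        this.name = name;
        this.age = age;
    }

    public String describe() { return name + " (" + age + ")"; }

    public static void main(String[] args) {
        User u1 = new User();              // ClassName obj = new ClassName()
        User u2 = new User("Asha", 28);
        System.out.println(u1.describe()); // guest (0)
        System.out.println(u2.describe()); // Asha (28)
    }
}
```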

static & final Members

  • static variable – one copy shared across all instances of a class
  • static method – belongs to the class, not to any object
  • final variable – constant; must be assigned exactly once
  • final method – cannot be overridden by a subclass
  • final class – cannot be extended (e.g., String is a final class)
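
A minimal sketch of static vs final (class and field names are illustrative): the static counter is shared by every instance, while each object's final id is assigned exactly once.

```java
public class Counter {
    private static int created = 0;    // one copy shared by all Counter objects
    public static final int MAX = 100; // class-level constant

    public final int id;               // final: assigned exactly once, here in the constructor

    public Counter() { id = ++created; }

    // static method: called on the class itself, no object needed
    public static int getCreated() { return created; }
}
```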

💡 Timing issues cause most automation failures. Proper synchronization makes your scripts reliable — even on slow networks or dynamic pages that load asynchronously.


Wait Types

  • Thread.sleep() – static wait (why it should be avoided in real scripts)
  • driver.manage().timeouts().implicitlyWait() – global element search timeout
  • WebDriverWait with ExpectedConditions – explicit conditional waits
  • Key conditions:
    • visibilityOfElementLocated
    • elementToBeClickable
    • textToBePresentInElement
    • alertIsPresent
  • FluentWait – polling interval, custom timeout, ignoring exceptions
  • Choosing the right wait strategy for different application behaviours
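
FluentWait's core idea – re-check a condition at a polling interval until it holds or a timeout expires – can be sketched in plain Java. `PollingWait` is an illustrative name; Selenium's real FluentWait adds configurable ignored exceptions on top of exactly this loop.

```java
import java.util.function.Supplier;

public class PollingWait {
    // Poll the condition every pollMillis until it returns true or
    // timeoutMillis elapses; returns whether the condition was met in time.
    public static boolean until(Supplier<Boolean> condition, long timeoutMillis, long pollMillis) {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            if (Boolean.TRUE.equals(condition.get())) return true;
            try {
                Thread.sleep(pollMillis);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return false;
    }
}
```

Contrast this with Thread.sleep(): the poll returns as soon as the condition holds, instead of always burning the full wait time.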

💡 POM is the industry-standard way to organise automation code. It separates locators from test logic — making scripts easier to maintain when the UI changes.


POM Architecture

  • Why POM exists – separation of page logic from test logic
  • Creating Page classes with locators and action methods
  • @FindBy annotation – declaring locators declaratively
  • PageFactory.initElements(driver, this) – initialising page elements
  • Structuring a multi-page project: LoginPage, HomePage, CartPage, etc.
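
The POM conventions can be sketched without a browser; every class name and locator value below is hypothetical. Locators live in the page class (with PageFactory they would be @FindBy fields), action methods hide the WebDriver calls, and a navigation action returns the page object the user lands on, so tests read as a flow.

```java
public class LoginPage {
    // Hypothetical locator values; in real Selenium these become @FindBy fields
    static final String USERNAME_ID = "username";
    static final String PASSWORD_ID = "password";

    public HomePage loginAs(String user, String password) {
        // real code: driver.findElement(By.id(USERNAME_ID)).sendKeys(user); ...click Login
        return new HomePage(user); // navigation action returns the next page object
    }
}

class HomePage {
    private final String user;
    HomePage(String user) { this.user = user; }
    public String welcomeText() { return "Welcome, " + user; }
}
```

A test then reads: new LoginPage().loginAs("anu", "secret").welcomeText() – no locators in the test class at all.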

Common Issues & Fixes

  • Understanding and handling StaleElementReferenceException
  • Lazy initialisation vs eager initialisation of page elements
  • Reusing page objects across multiple test classes

💡 TestNG is the test management backbone of most Selenium projects. It controls the order, grouping, and reporting of your tests — making your suite production-quality.


TestNG Annotations & Configuration

  • Core annotations: @Test, @BeforeMethod, @AfterMethod, @BeforeClass, @AfterClass, @BeforeSuite, @AfterSuite
  • Test grouping with groups attribute – running smoke vs regression sets
  • Parameterisation using @Parameters and testng.xml configuration
  • Data-driven testing using @DataProvider
  • Parallel test execution – configuring thread count in testng.xml
  • Test priority, dependency (dependsOnMethods), and enabled flag
  • Custom logging using Reporter.log() inside test methods
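
An Iterator<Object[]> data provider produces rows on demand instead of materialising the full Object[][] up front. The shape can be sketched in plain Java; `userRows()` is a hypothetical generator, and in TestNG the method would carry @DataProvider and feed a @Test method.

```java
import java.util.Iterator;

public class LazyData {
    // Rows are generated one at a time inside hasNext()/next(), so a huge
    // dataset never sits in memory all at once.
    public static Iterator<Object[]> userRows(int count) {
        return new Iterator<Object[]>() {
            private int i = 0;
            public boolean hasNext() { return i < count; }
            public Object[] next() { return new Object[]{"user" + (++i), "pass" + i}; }
        };
    }
}
```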

Assertions

  • Hard assertions with Assert – test stops immediately on failure
  • Soft assertions with SoftAssert – collect all failures, report at end
  • Common assertion methods: assertEquals, assertTrue, assertFalse, assertNull, assertNotNull
  • Choosing between hard and soft assertions based on test scenario
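
SoftAssert's collect-then-report behaviour fits in a few lines. `SoftCheck` is an illustrative stand-in: failures are recorded instead of thrown immediately, and the final call reports them all at once, mirroring what SoftAssert.assertAll() does in TestNG.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Objects;

public class SoftCheck {
    private final List<String> failures = new ArrayList<>();

    // Record a mismatch instead of failing the test on the spot
    public void checkEquals(Object actual, Object expected, String message) {
        if (!Objects.equals(actual, expected)) {
            failures.add(message + ": expected " + expected + " but got " + actual);
        }
    }

    // Fail once, at the end, listing every recorded mismatch
    public void assertAll() {
        if (!failures.isEmpty()) {
            throw new AssertionError(failures.size() + " check(s) failed: " + String.join("; ", failures));
        }
    }
}
```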

💡 Data-driven testing lets one script run across hundreds of input combinations stored in an Excel sheet — a standard practice in enterprise QA teams.


Apache POI – Excel Integration

  • Adding Apache POI dependency to pom.xml for .xlsx support
  • XSSFWorkbook, XSSFSheet, XSSFRow, XSSFCell – reading the Excel object hierarchy
  • Reading test data row-by-row from an Excel file into test scripts
  • Handling different cell types: String, Numeric, Boolean, Formula
  • Writing test results (Pass/Fail) back into the Excel sheet
  • Building a reusable ExcelUtils utility class for the framework

 

💡 This module brings everything together into a production-level framework — the kind you will actually work with inside a software company.


Framework Design (AFW)

  • Recommended folder structure: src/main for utilities, src/test for test scripts
  • Building a BaseTest class – centralised driver initialisation and teardown
  • Integrating POM page classes, Excel utilities, and TestNG configuration
  • Maven Surefire Plugin – triggering test suites from command line
  • Version control with GitHub – pushing the project, branching basics

Selenium Grid & Cross-Browser Testing

  • Selenium Grid architecture – Hub and Node setup for distributed execution
  • RemoteWebDriver – running tests on remote machines and browsers
  • Cloud-based grid execution using Sauce Labs
  • Configuring browser capabilities for parallel cross-browser runs

Jenkins CI/CD Integration

  • Installing and configuring Jenkins for a Java/Maven project
  • Creating a Jenkins job linked to a GitHub repository
  • Scheduling automated test runs using cron-style build triggers
  • Viewing TestNG reports and build history inside Jenkins dashboard

Rest Assured


DevOps

Maven

Git

Jenkins

Live Projects You Will Build

Flight Booking API

Microservices-based airline reservation backend.

E-Commerce Platform

Full-featured Amazon clone with Cart & Payment Gateway.

Smart Banking System

Secure transaction portal with Spring Security & JWT.

Hospital Management

Patient records & doctor booking system.