How to Create Your First Dynamic Performance Test in Apache JMeter

by SkillAiNest

As a QA engineer, I have always found performance testing to be one of the most interesting and impactful parts of software testing. Yes, functional testing is important, but it’s of little use if users have to wait 5 seconds for each page to load.

For me personally, there is a deep satisfaction in watching a product come alive under load and seeing how it will actually perform in production when thousands of users hit it at once.

Performance testing is about discovering how your system behaves under real-world stress in terms of load, concurrency, and throughput. An important aspect of performance testing is ensuring that your APIs can withstand the expected load. You can do this using tools like Apache JMeter and k6.

In this tutorial, we will explore how to create your first end-to-end performance test in Apache JMeter. You will learn to build a test suite that is dynamic (it can run with any test data) and supports one-click execution (through the GUI as well as the CLI).

Table of Contents

  1. Prerequisites

  2. Introduction to Apache JMeter

  3. Conclusion

Prerequisites

Before you begin, make sure you have Java and Apache JMeter installed on your machine.

You can check if JMeter is installed by running the command below:

jmeter -v

Note: This tutorial uses the JSONPlaceholder public API. You will learn how to fetch a post by its post_id and use the response in a chained request to get user details.

Let’s begin.

Introduction to Apache JMeter

Apache JMeter is an open-source load and stress testing tool. It is a powerful testing tool that supports a wide range of protocols, including HTTP, HTTPS, FTP, JDBC, SOAP, and REST.

JMeter helps you answer critical questions about your APIs, such as:

  • How does my API perform under heavy load?

  • What is the maximum number of users before failure?

  • Which endpoints are slowing things down?

Let’s go through the step-by-step process of creating a dynamic load testing suite with JMeter.

Step 1: Create a new test plan

Once JMeter opens, you will see a blank test plan. Think of it as your central workspace, containing everything: test configuration, users, requests, assertions, and results.

Right-click Test Plan → Add → Threads (Users) → Thread Group to add a thread group. A thread group is essentially a test suite that contains your test cases.

Add a thread group

Step 2: Create a thread group

To create a thread group, fill in the following input fields:

Setting                      Value  Description
Number of Threads (users)    5      The number of concurrent users; here, 5.
Ramp-Up Period (seconds)     10     The time JMeter takes to start all the threads.
Loop Count                   2      How many times each thread repeats the test.

You have now created a small, controlled load test of 10 total requests (5 users × 2 loops).
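The load profile above can be sketched with a little shell arithmetic (the numbers mirror the table; dividing the ramp-up period by the thread count gives the approximate interval at which new threads start):

```shell
# Hypothetical numbers matching the thread group above
threads=5    # Number of Threads (users)
rampup=10    # Ramp-Up Period in seconds
loops=2      # Loop Count

total=$((threads * loops))
echo "Total requests: $total"
# A new thread starts roughly every rampup/threads seconds
echo "New thread every $((rampup / threads)) seconds"
```

Keeping this arithmetic in mind helps you size bigger tests later: doubling either threads or loops doubles the total number of requests.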

Thread group

Step 3: Add HTTP Request Defaults

When you’re building a suite with hundreds of APIs, you don’t want to repeat your server details in every HTTP sampler. JMeter lets you set them globally using a config element called HTTP Request Defaults. To add this element, follow the steps below:

  1. Right click Thread Group → Add → Configuration Element → HTTP Request Defaults.

  2. Enter the following:

    • Protocol: https

    • Server Name or IP: jsonplaceholder.typicode.com

This means that all requests in this test plan will automatically use this base URL.

Step 4: Add a CSV Data Set Config (Dynamic Input)

In real projects, APIs rarely use static inputs. Take, for example, a login API that you want to run for 100 concurrent users. In a real-world scenario, each login request would use a different username and password.

To replicate this in JMeter, you need to run your test with 100 different sets of login credentials. In other words, your test should be data-driven. You can create a data-driven test in JMeter using a CSV file:

  1. Create a file named data.csv with the following content:

     post_id
     1
     2
     3
     4
     5
    
  2. Save it in your JMeter project folder.

  3. In JMeter, right-click Thread Group → Add → Configuration Element → CSV Data Set Config.

    Add CSV data set config

  4. Fill in the following fields:

    • Filename: data.csv

    • Variable Names: post_id

Now each thread picks a fresh post_id from the CSV file on every iteration.
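If you need more rows than you would want to type by hand, a short shell loop can generate the file. This is just a convenience sketch; the filename data.csv and the post_id header match the steps above:

```shell
# Generate data.csv with a post_id header and IDs 1-5
echo "post_id" > data.csv
for i in 1 2 3 4 5; do
  echo "$i" >> data.csv
done

# 6 lines total: 1 header + 5 ids
wc -l data.csv
```

For a larger test you could replace the hard-coded list with `$(seq 1 100)` to get 100 IDs.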

Step 5: Add an HTTP Request Sampler

Now let’s add the actual API call that we will test under load. To do this, follow the steps below:

  1. Right click Thread Group → Add → Sampler → HTTP Request.

    Add an HTTP request

  2. Rename it to Get Post Data.

  3. Set the following fields:

    • Method: GET

    • Path: /posts/${post_id}

Here, ${post_id} is dynamically fetched from your CSV file. The Protocol and Server Name fields are automatically inherited from the HTTP Request Defaults config element we added in Step 3.

Add a GET request

Step 6: Extract Data with a JSON Extractor

When the API responds, we can extract a value (for example, userId) from the response and use it later. This is how you implement an end-to-end flow where data fetched from one API (with GET) is passed on to the next POST/DELETE request.

For our API, below is an example response:

{
  "userId": 1,
  "id": 3,
  "title": "fugiat veniam minus",
  "body": "This is an example post body"
}

To extract userId:

  1. Right-click Get Post Data → Add → Post Processors → JSON Extractor.

    Add a JSON extractor

  2. Set the following fields in the JSON Extractor:

    • Names of created variables: user_id

    • JSON Path expressions: $.userId

JSON Extractor

Now you can use ${user_id} in the next request, making your test fully dynamic.
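Inside JMeter the JSON Extractor applies the JSONPath expression for you; as a quick sanity check outside JMeter, you can pull the same value out of the sample response with grep (a rough command-line approximation, not what JMeter does internally):

```shell
# Sample response body from the tutorial
response='{"userId": 1, "id": 3, "title": "fugiat veniam minus"}'

# Rough command-line equivalent of the JSON Extractor's $.userId
user_id=$(echo "$response" | grep -o '"userId": *[0-9]*' | grep -o '[0-9]*$')
echo "user_id=$user_id"
```

For real scripting against JSON, a proper parser such as jq is the safer choice; the grep pipeline is only meant to illustrate what value the extractor captures.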

Step 7: Add an Assertion

Assertions help you verify that your API responds correctly even under load. You can assert on the response code, the response time, or even the response payload. To add an assertion, follow the steps below:

  1. Right click Get Post Data → Add → Assertions → Response Assertion.

Add a response assertion

  2. Configure it, for example, to check that the response text contains the word fugiat.

Response assertion

This makes JMeter mark the request as successful only if the word fugiat appears in the response.
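The check the Response Assertion performs amounts to a simple substring match, which you can mimic in the shell (the body variable below reuses the sample response from earlier):

```shell
# What the Response Assertion checks, as a shell one-liner
body='{"userId": 1, "id": 3, "title": "fugiat veniam minus"}'

if echo "$body" | grep -q "fugiat"; then
  echo "assertion passed"
else
  echo "assertion failed"
fi
```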

Step 8: Add listeners

We will add listeners to display the results in different formats, both visual and summarized. Let’s add two essentials:

  1. View Results Tree: to view and debug individual requests.

  2. Summary Report: to view performance metrics such as response time, error rate, and throughput.

Add them through Thread Group → Add → Listener → (select listeners)

Add listeners in JMeter

Step 9: Run your test

Hit the green Start button at the top. JMeter will start sending requests to your API using the post IDs read from your CSV file.

As the test runs:

  • Green check marks in View Results Tree indicate successful responses.

  • Assertion failures will be shown in red.

  • The Summary Report collects key metrics.

JMeter View Results Tree

JMeter Summary Report

Step 10: Chain Another Request (Optional)

Let’s take it a step further: we’ll use the extracted user_id from the first response to fetch user details from the /users endpoint. To do this, follow the steps below:

  1. Right click Thread Group → Add → Sampler → HTTP Request.

  2. Rename it to Get User Details.

  3. Set the following fields:

    • Method: GET

    • Path: /users/${user_id}

      Get users API

Test execution in JMeter

Step 11: Analyze the results

Once the test is complete, open the Summary Report. You will see:

Metric        Description
Sample Count  Total number of requests sent
Average       Mean response time per request
Min/Max       Fastest and slowest response times
Error %       Percentage of failed requests
Throughput    Number of requests handled per second

If your error percentage is 0% and throughput is stable, your system handled the load well.
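You can compute the same error percentage yourself from a results file. The snippet below fabricates a tiny, simplified results file (real JTL files have many more columns) and derives Error % with awk:

```shell
# A simplified results file (real JTL files have more columns)
cat > results.jtl <<'EOF'
label,elapsed,success
Get Post Data,120,true
Get Post Data,135,true
Get Post Data,98,true
Get Post Data,250,false
EOF

# Error % = failed samples / total samples * 100
awk -F, 'NR > 1 { total++; if ($3 == "false") failed++ }
         END { printf "error %%: %.1f\n", failed * 100 / total }' results.jtl
```

With 1 failure out of 4 samples, this prints an error rate of 25.0, which is the same figure the Summary Report would show for that run.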

Pro tips

  • Parameterize everything. Use multiple CSVs for realistic test flows (users, IDs, tokens).

  • Add a timer (like a Constant Timer) to simulate think time between user actions.

  • Use assertions wisely. Don’t pile on unnecessary assertions; focus on key validations like response time and API status codes.

  • Generate HTML reports using the command below:

      jmeter -n -t test-plan.jmx -l results.jtl -e -o report
    

Example folder structure:

Follow the folder structure below for an organized test suite.

performance-test/
├── data.csv
├── test-plan.jmx
└── results/
    ├── summary.csv
    └── report.html
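To tie the layout and the CLI command together, you could keep a small wrapper script in the project folder. The script below (run.sh is a hypothetical name; the flags match the jmeter command shown in the pro tips) only builds and prints the command, since the dashboard generation fails if the output directory already contains files:

```shell
# run.sh - hypothetical wrapper for the layout above
PLAN="test-plan.jmx"
RESULTS="results"

# JMeter refuses to write the HTML dashboard into a non-empty
# directory, so clear previous output first
rm -rf "$RESULTS"
mkdir -p "$RESULTS"

# Build the non-GUI run command (-n), with results and HTML report
CMD="jmeter -n -t $PLAN -l $RESULTS/summary.csv -e -o $RESULTS/report"
echo "Running: $CMD"
# Uncomment to actually execute the test:
# $CMD
```

Checking the script into version control alongside test-plan.jmx and data.csv gives teammates the one-click CLI execution mentioned in the introduction.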

Conclusion

Performance testing is an essential part of the production-readiness checklist for any product. It helps you ensure that your product can gracefully handle the expected user load and scale.

This guide is your first step toward writing end-to-end performance test cases and bridging the gap between being a functional test engineer and a full-stack QA engineer who understands both quality and scalability.

I hope you found this tutorial helpful. If you want to stay connected or learn more about performance testing, follow me on LinkedIn.
