# Nginx
Nginx is a high-performance open-source web server, reverse proxy, and load balancer. In Virtual Client, the Nginx workload measures web server performance by serving static content of configurable sizes over HTTPS and measuring throughput and latency under sustained load. The HTTP load is generated by the Wrk or Wrk2 client tool.
The workload supports two deployment topologies:
- Client-Server (two-node) — A dedicated client machine runs wrk/wrk2 against a dedicated server machine running Nginx. This separates resource consumption between the web server and the load generator for accurate measurements.
- Reverse Proxy (three-node) — A client sends requests to an Nginx reverse-proxy instance, which forwards them to a backend Nginx server. This topology is used to benchmark Nginx reverse-proxy performance.
The NginxServerExecutor manages the Nginx server lifecycle on the server (and reverse-proxy) instances. It handles configuration generation, SSL setup, static content creation of configurable file sizes, and server start/stop/reset operations via the NginxCommand enum (Start, Stop, GetVersion, GetConfig).
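To illustrate the kind of configuration the executor generates, the sketch below shows a minimal Nginx server block for serving static content over HTTPS. This is an illustrative sketch only — the certificate paths, document root, and connection limit are hypothetical; the executor produces the real configuration at runtime.

```nginx
# Illustrative sketch; the NginxServerExecutor generates the actual config.
worker_processes auto;            # corresponds to the Workers parameter (0 = auto)

events {
    worker_connections 10240;     # must cover the largest wrk connection count
}

http {
    server {
        listen 443 ssl;
        ssl_certificate     /etc/nginx/ssl/server.crt;   # hypothetical paths
        ssl_certificate_key /etc/nginx/ssl/server.key;
        root /var/www/html;       # static file of FileSizeInKB served from here
    }
}
```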
## Deployment Modes
The Nginx workload requires a multi-VM layout and does not support single-VM mode, because the NginxServerExecutor depends on separate client and server instances connected via an environment layout file.
- Client-Server (two-node) — A dedicated client machine runs wrk/wrk2, and a dedicated server machine runs Nginx.
- Reverse Proxy (three-node) — A client machine, a reverse-proxy machine, and a backend server machine.
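The layout file maps each instance to its role. The fragment below is a sketch of a three-node layout; the instance names and IP addresses are hypothetical, and the exact schema should be confirmed against the Virtual Client environment-layout documentation.

```json
{
  "clients": [
    { "name": "Client01", "role": "Client", "ipAddress": "10.0.0.4" },
    { "name": "Proxy01", "role": "ReverseProxy", "ipAddress": "10.0.0.5" },
    { "name": "Server01", "role": "Server", "ipAddress": "10.0.0.6" }
  ]
}
```

For the two-node topology, the `ReverseProxy` entry is simply omitted.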
## What is Being Measured?
The Nginx workload measures the throughput and latency of an Nginx web server serving static files of a configurable size. The Wrk (or Wrk2) client tool generates concurrent HTTP/HTTPS requests across different connection counts and thread configurations to characterize server performance at varying load levels.
The following scenarios are included in the standard profile:
| Scenario | Connections | Description |
|---|---|---|
| Latency_100_Connections | 100 | Low-concurrency baseline measurement at full thread count. |
| Latency_1K_Connections | 1,000 | Medium-concurrency measurement at full thread count. |
| Latency_5K_Connections | 5,000 | High-concurrency measurement at full thread count. |
| Latency_10K_Connections | 10,000 | Very high-concurrency measurement at full thread count. |
| Latency_100_Connections_Thread/2 | 100 | Low-concurrency measurement at half the logical core count. |
| Latency_1K_Connections_Thread/2 | 1,000 | Medium-concurrency measurement at half the logical core count. |
| Latency_5K_Connections_Thread/2 | 5,000 | High-concurrency measurement at half the logical core count. |
| Latency_10K_Connections_Thread/2 | 10,000 | Very high-concurrency measurement at half the logical core count. |
| Latency_100_Connections_Thread/4 | 100 | Low-concurrency measurement at one quarter of the logical core count. |
| Latency_1K_Connections_Thread/4 | 1,000 | Medium-concurrency measurement at one quarter of the logical core count. |
| Latency_5K_Connections_Thread/4 | 5,000 | High-concurrency measurement at one quarter of the logical core count. |
| Latency_10K_Connections_Thread/4 | 10,000 | Very high-concurrency measurement at one quarter of the logical core count. |
| Latency_100_Connections_Thread/8 | 100 | Low-concurrency measurement at one eighth of the logical core count. |
| Latency_1K_Connections_Thread/8 | 1,000 | Medium-concurrency measurement at one eighth of the logical core count. |
| Latency_5K_Connections_Thread/8 | 5,000 | High-concurrency measurement at one eighth of the logical core count. |
| Latency_10K_Connections_Thread/8 | 10,000 | Very high-concurrency measurement at one eighth of the logical core count. |
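The thread counts for the Thread/2, Thread/4, and Thread/8 variants follow from the logical core count of the client machine. A minimal sketch of that relationship (the divisor scheme comes from the table above; the helper name and the floor of one thread are our assumptions):

```python
# Sketch: derive the wrk thread count for a Thread/<divisor> scenario.
# Assumes threads = logical cores // divisor, with a floor of 1 thread.

def wrk_threads(logical_cores: int, divisor: int) -> int:
    """Threads wrk would use for a given scenario divisor."""
    return max(1, logical_cores // divisor)

# Example: a 16-core client across the four variants in the table.
for divisor in (1, 2, 4, 8):
    print(f"Thread/{divisor}: {wrk_threads(16, divisor)} threads")
```

On a 16-core client this yields 16, 8, 4, and 2 threads respectively, so each scenario pairs every connection count with a progressively smaller thread pool.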
## Workload Metrics
The following metrics are examples of those captured by the Virtual Client when running the Nginx workload. Latency values are normalized to milliseconds by the parser regardless of the unit reported by Wrk (nanoseconds, microseconds, milliseconds, or seconds).
| Tool Name | Metric Name | Example Value | Unit |
|---|---|---|---|
| Wrk | latency_p50 | 1.234 | milliseconds |
| Wrk | latency_p75 | 2.456 | milliseconds |
| Wrk | latency_p90 | 3.678 | milliseconds |
| Wrk | latency_p99 | 8.901 | milliseconds |
| Wrk | latency_p99_9 | 15.432 | milliseconds |
| Wrk | latency_p99_99 | 20.987 | milliseconds |
| Wrk | latency_p99_999 | 30.123 | milliseconds |
| Wrk | latency_p100 | 45.678 | milliseconds |
| Wrk | requests/sec | 25432.56 | requests/sec |
| Wrk | transfers/sec | 312.45 | megabytes/sec |
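The unit normalization described above can be sketched as a simple conversion table. This is a sketch of the idea, not the parser's actual implementation; the function and table names are ours.

```python
# Sketch: normalize a wrk latency sample to milliseconds, as described above.
# Conversion factors from each unit wrk may report into milliseconds.

UNIT_TO_MS = {"ns": 1e-6, "us": 1e-3, "ms": 1.0, "s": 1e3}

def to_milliseconds(value: float, unit: str) -> float:
    """Convert a latency value in the given unit to milliseconds."""
    return value * UNIT_TO_MS[unit]

print(to_milliseconds(1.234, "ms"))   # 1.234
print(to_milliseconds(2.5, "s"))      # 2500.0
print(to_milliseconds(850.0, "us"))   # ≈ 0.85
```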
When wrk2 is used, an additional set of uncorrected latency metrics is emitted (e.g., uncorrected_latency_p50, uncorrected_latency_p99).
See the Wrk/Wrk2 documentation for the complete list of wrk metrics.
## Profiles
The following profiles are available for the Nginx workload.
| Profile Name | Description | Client Tool | Topology | Platforms |
|---|---|---|---|---|
| PERF-WEB-NGINX-WRK.json | Nginx web server benchmark using wrk across multiple connection and thread counts. | WrkExecutor | Client → Server | linux-x64, linux-arm64 |
| PERF-WEB-NGINX-WRK2.json | Nginx web server benchmark using wrk2 with constant request rate and corrected latency. | Wrk2Executor | Client → Server | linux-x64 |
| PERF-WEB-NGINX-WRK-RP.json | Nginx reverse-proxy benchmark using wrk across a three-node layout. | WrkExecutor | Client → RP → Server | linux-x64, linux-arm64 |
| PERF-WEB-NGINX-WRK2-RP.json | Nginx reverse-proxy benchmark using wrk2 across a three-node layout. | Wrk2Executor | Client → RP → Server | linux-x64 |
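A profile is selected on the Virtual Client command line. The invocation below is a sketch only — confirm the exact flags and layout-file schema against the Virtual Client command-line documentation before use.

```bash
# Sketch: run the two-node wrk profile on the client instance.
# The layout path and client ID are placeholders.
./VirtualClient --profile=PERF-WEB-NGINX-WRK.json \
    --layoutPath=/path/to/layout.json \
    --clientId=Client01 \
    --timeout=1440
```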
## Server Parameters
The following table describes the key parameters supported by the NginxServerExecutor.
| Parameter | Description | Default |
|---|---|---|
| PackageName | The name of the Nginx dependency package. | required |
| Role | The role of the current instance: Server or ReverseProxy. | required |
| Workers | Number of Nginx worker processes. Set to 0 or omit to use all cores. | 0 (auto) |
| FileSizeInKB | Size of the static file to serve (in kilobytes). | 1 |
| Timeout | Maximum time to keep the server online before resetting. | 30 minutes |
| pollingInterval | Interval between server state polling cycles. | 60 seconds |
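In a profile, these parameters appear on the server-side component. The fragment below sketches how they might be set; the field names mirror the table above, but the surrounding profile schema is an assumption and should be checked against an actual PERF-WEB-NGINX profile.

```json
{
  "Type": "NginxServerExecutor",
  "Parameters": {
    "PackageName": "nginx",
    "Role": "Server",
    "Workers": 0,
    "FileSizeInKB": 1
  }
}
```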