

Note that ab runs on a single thread: the -c value tells ab how many file descriptors to allocate at a time for TCP connections, not how many HTTP requests to send simultaneously. Still, the -c flag does allow ab to complete its tests in less time, and it simulates a higher number of concurrent connections. The two timeseries graphs below, for example, show the number of concurrent connections as well as requests per second for the command:

Memory utilization by one invocation of the ab command, as seen in Datadog's Live Processes view.

Other constraints include the fact that ApacheBench's send-everything-at-once approach may not reflect the way your application handles requests over time, and that the distance between the server and the ab client (e.g., the same host or different availability zones) will affect latency scores.
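As a rough sketch of how these flags fit together (the URL and counts here are hypothetical, not from the graphs above), an invocation that keeps up to 100 connections open at a time until 10,000 requests have completed might look like:

```shell
# Hypothetical example: send 10,000 requests total (-n),
# allocating up to 100 file descriptors for TCP connections
# at a time (-c). ab still runs on a single thread.
ab -n 10000 -c 100 http://localhost:8080/
```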

If you’re using both the -t and -n flags, note that -t should always come first; otherwise, ApacheBench will override the value of -n and assign it the default of 50,000 requests.
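For example (with a hypothetical local URL), to run for at most 60 seconds or 5,000 requests, whichever comes first, place -t before -n:

```shell
# Correct: -t comes first, so the custom -n value of 5000 is preserved.
ab -t 60 -n 5000 http://localhost:8080/

# Incorrect: because -t comes after -n here, ApacheBench resets -n
# to its default of 50,000 requests.
ab -n 5000 -t 60 http://localhost:8080/
```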

ApacheBench (ab) is a benchmarking tool that measures the performance of a web server by inundating it with HTTP requests and recording metrics for latency and success. ApacheBench can help you determine how much traffic your HTTP server can sustain before performance degrades, and set baselines for typical response times. While ApacheBench was designed to benchmark the Apache web server, it is a fully fledged HTTP client that benchmarks actual connections, and you can use it to test the performance of any backend that processes HTTP requests.

In this post, we will explain how ApacheBench works, and walk you through the ApacheBench metrics that can help you tune your web servers and optimize your applications, including:

- Metrics for request and connection status, including HTTP status codes and request failures.
- Metrics for request latency, which range from percentile breakdowns of all requests to durations of specific phases within TCP connections.
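To make the latency percentiles concrete, here is a minimal sketch (not ab's own code, and with hypothetical durations) of how a percentile breakdown can be computed from per-request times, similar in spirit to the "Percentage of the requests served within a certain time" table that ab prints:

```python
# Minimal sketch of a latency percentile breakdown, assuming we have
# already collected per-request durations (hypothetical values, in ms).
def percentile_breakdown(durations_ms, percentiles=(50, 90, 99)):
    """Return {percentile: duration} using the nearest-rank method."""
    ordered = sorted(durations_ms)
    breakdown = {}
    for p in percentiles:
        # Nearest-rank: the smallest duration covering at least p% of requests.
        rank = max(1, round(p / 100 * len(ordered)))
        breakdown[p] = ordered[rank - 1]
    return breakdown

durations = [12, 15, 11, 90, 14, 13, 250, 16, 12, 18]
print(percentile_breakdown(durations))  # → {50: 14, 90: 90, 99: 250}
```

A breakdown like this shows at a glance that most requests are fast while a small tail is much slower, which is exactly the kind of signal ab's percentile table surfaces.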

When developing web services and tuning the infrastructure that runs them, you’ll want to make sure that they handle requests quickly enough, and at a high enough volume, to meet your requirements.
