
Static File Benchmark on Windows: Nginx vs Apache vs Caddy vs Gruxi


Serving static files is one of the most fundamental jobs a web server has, and performance here has a direct impact on user experience. In this article, we benchmark four popular web servers (Nginx, Apache, Caddy, and Gruxi) serving static files on a Windows platform using dedicated hardware.

We compare how these web servers behave in one specific use case: serving static files. We focus on key metrics such as latency, the latency curve, throughput, and stability under load to give a clearer picture of how each server handles static content.

This comparison was done in May 2026.

TL;DR

  • Caddy and Gruxi both performed well when serving static files. Caddy showed slightly better latency and throughput, but also used more memory and started dropping requests at higher concurrency levels. Gruxi remained competitive, maintained a tight latency curve, and avoided dropped requests in these tests, though Caddy had an edge in gzip compression performance.

  • Gruxi's file cache is enabled by default, but we disabled it in the baseline tests to keep the comparison fair. With the cache enabled, Gruxi performed exceptionally well, with very low latency and high throughput. That shows how much caching can matter when serving static files.

  • Apache performed decently at lower concurrency levels but struggled significantly as concurrency increased, with high latency spikes and a growing number of dropped requests.

  • Nginx had the weakest results in this benchmark, with significantly higher latency and lower throughput than the other servers, along with a high number of dropped requests at higher concurrency levels.

About the Web Servers

  • Nginx 1.29.8: A well-known web server used in many scenarios, Nginx is widely used for serving static content and as a reverse proxy.
  • Apache 2.4.67: A long-standing web server with a rich feature set, Apache is versatile but can be complex to configure and manage.
  • Caddy 2.11.2: A modern Go-based web server with automatic HTTPS, simple configuration, and integrated reverse proxy capabilities.
  • Gruxi 1.0.1: A newer Rust web server focused on ease of use and performance, Gruxi is designed to be a lightweight and efficient option for serving static files.

Why Performance Matters

The performance of a web server when serving static files can directly affect the user experience, especially for websites that rely heavily on static content such as images, CSS, and JavaScript files, as is the case for most sites. Faster response times can lead to improved user satisfaction, better SEO rankings, and increased engagement. Additionally, efficient handling of static files can reduce server load and improve scalability.

For most people, static file performance is a critical factor when choosing a web server because it directly affects the speed and responsiveness of their websites. At the same time, it is not the only factor. Ease of configuration, stability, security features, and support for dynamic content also matter.

Benchmark Setup

Each web server was configured to serve a directory of static files, and we used the Oha benchmarking tool to simulate concurrent requests and measure performance metrics. The tests were conducted on a Windows platform using dedicated hardware to ensure consistent and reliable results.

We used as much of the default configuration as possible for each server, with only minimal tuning to keep the comparison fair. However, we disabled features such as SendFile and the Gruxi file cache, which are designed to improve performance in production but can skew a benchmark focused on raw static file serving performance. The goal was to measure the baseline behavior of each server without optimizations that may not be relevant in every use case.

The specific configuration used for each test will be documented at the end of the article to provide transparency and allow for reproducibility of the results.
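To make the procedure concrete, here is a minimal Python sketch of how such a concurrency sweep can be scripted around the oha command lines shown later in this article. The script itself is an illustration we wrote for this explanation, not part of the published methodology; only the flags (`-c`, `-n`, `-u ms`, `--no-tui`) come from the actual benchmark commands.

```python
import subprocess

# Path to the oha binary, assumed to sit in the working directory.
OHA = r".\oha-windows-amd64.exe"

def oha_cmd(concurrency: int, requests: int, url: str) -> list[str]:
    """Build the oha command line used for one benchmark run."""
    return [OHA, "-c", str(concurrency), "-n", str(requests),
            "-u", "ms", "--no-tui", url]

def sweep(url: str, requests: int, levels=(100, 500, 1000, 1500)):
    """Run oha once per concurrency level and collect the raw text reports."""
    reports = {}
    for c in levels:
        result = subprocess.run(oha_cmd(c, requests, url),
                                capture_output=True, text=True)
        reports[c] = result.stdout
    return reports

# Example: the command for the first test scenario at 100 connections.
print(" ".join(oha_cmd(100, 1_000_000, "http://localhost/")))
```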

Test environment

  • CPU: AMD Ryzen 9 9950X3D 4300MHz
  • Memory: DDR5 4800 MHz
  • Storage: Samsung 9100 PRO 4TB
  • Operating system: Windows 11
  • Benchmark tool: Oha (https://github.com/hatoo/oha)

What We Are Looking For in the Results

We are primarily interested in these key performance metrics:

  • Latency: The time it takes for a server to respond to a request. Lower latency indicates faster response times.
  • Throughput: The number of requests a server can handle per second. Higher throughput indicates better performance under load.
  • Stability: Consistency of performance under heavy loads
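For readers unfamiliar with percentile latencies: the 95% and 99% figures reported below can be computed from raw latency samples roughly as follows. This is a nearest-rank sketch for illustration; oha's exact percentile method may differ.

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile: the smallest sample such that at least
    p percent of all samples are less than or equal to it."""
    ordered = sorted(samples)
    rank = math.ceil(p / 100 * len(ordered))  # 1-based rank
    return ordered[rank - 1]

# Ten example latency samples in milliseconds.
latencies_ms = [8.1, 8.3, 8.4, 8.7, 9.0, 9.5, 10.2, 10.6, 11.7, 14.0]
print(percentile(latencies_ms, 95), percentile(latencies_ms, 99))
```

A tight latency curve means the 95th- and 99th-percentile values stay close to the average; a wide one means some users wait far longer than the typical case.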

Our expectation was that Apache might show higher latency and more variability due to its process-based architecture, while Nginx and Caddy would perform well in terms of latency and throughput. Gruxi, being a server written in Rust, was expected to deliver competitive latency and throughput, particularly in higher-concurrency scenarios.

Benchmark Results

We ran two test scenarios:

  • Small static HTML file, 46 bytes, no logging, no caching
  • Medium static JS file, 110 KB, no logging or caching, gzip enabled

Each was tested at concurrency levels of 100, 500, 1,000, and 1,500.

Small static HTML file, 46 bytes, no logging, no caching

We start with 100 concurrent connections.

Command used for the benchmark:

```ps
.\oha-windows-amd64.exe -c 100 -n 1000000 -u ms --no-tui http://localhost/
```

| Server | Latency (avg ms) | Throughput (req/s) | 95% latency (ms) | 99% latency (ms) | Stability |
| --- | --- | --- | --- | --- | --- |
| Caddy* | 8.171 | 12,234 | 9.977 | 10.757 | OK |
| Apache | 8.723 | 11,455 | 8.934 | 14.001 | OK |
| Nginx | 8.535 | 11,712 | 10.621 | 11.729 | OK |
| Gruxi | 8.470 | 11,828 | 10.252 | 10.616 | OK |

* Caddy does not have an option to disable SendFile/TransmitFile kernel optimizations, so this skews the results slightly in its favor. We were trying to compare user-space performance without OS-level optimizations, but that is not possible with Caddy.

Looking at the results, we can see that all servers performed well in terms of latency and throughput, with Caddy showing the lowest average latency and highest throughput. Apache had slightly higher latency and lower throughput compared to Nginx and Gruxi, which performed similarly.

All four also performed well in terms of stability, with no significant degradation under load at 100 concurrent connections.

Now let's increase the concurrency level to 500 concurrent connections.

Command used for the benchmark:

```ps
.\oha-windows-amd64.exe -c 500 -n 1000000 -u ms --no-tui http://localhost/
```

| Server | Latency (avg ms) | Throughput (req/s) | 95% latency (ms) | 99% latency (ms) | Stability |
| --- | --- | --- | --- | --- | --- |
| Caddy* | 40.807 | 12,249 | 49.528 | 50.61 | OK |
| Apache | 42.212 | 11,803 | 8.762 | 1,863 | MIXED |
| Nginx | 43.11 | 11,558 | 52.954 | 110.979 | OK |
| Gruxi | 42.1 | 11,872 | 51.143 | 52.305 | OK |

* See the previous note about Caddy and SendFile, which slightly skews the results in Caddy's favor.

Performance is still reasonably strong across the board, but differences are starting to appear.

Apache's latency increased significantly, and its 99th-percentile latency is very high, indicating that it struggled to handle the load effectively.

Nginx and Gruxi maintained results similar to the previous test, with slightly higher latency but still good throughput. That said, Nginx has started to show some variability in latency, with a significant increase in the 99th-percentile latency. We also saw some dropped requests from Nginx at this level, which is a sign of instability under load.

Gruxi's performance is particularly notable compared to Nginx, as it maintained low average latency and high throughput even under increased load. That remains true even at the 99th-percentile latency metric, which suggests that it is handling the load efficiently and consistently.

Let us see what happens at 1,000 concurrent connections.

Command used for the benchmark:

```ps
.\oha-windows-amd64.exe -c 1000 -n 1000000 -u ms --no-tui http://localhost/
```

| Server | Latency (avg ms) | Throughput (req/s) | 95% latency (ms) | 99% latency (ms) | Stability |
| --- | --- | --- | --- | --- | --- |
| Caddy* | 84.627 | 11,812 | 100.447 | 102.318 | OK |
| Apache | 59.17 | 11,356 | 9.219 | 2,235 | NOT GOOD |
| Nginx | 88.178 | 11,036 | 100.7 | 106.034 | MIXED |
| Gruxi | 81.241 | 12,303 | 100.184 | 102.63 | OK |

* See the previous note about Caddy and SendFile, which slightly skews the results in Caddy's favor.

Here we see a significant increase in latency for Apache, with an average latency of 59.17 ms and a 99th-percentile latency of 2,235 ms, indicating that it is struggling to handle the load effectively and consistently. This is a clear degradation compared to the previous tests, and it suggests that Apache may not be suitable for high-concurrency static file workloads. In this test, it dropped 14,460 requests out of 1,000,000, which is around 1.4% of requests. For a web server, that is a significant failure rate.

Nginx is also showing problems at this level. Throughput and latency still look okay compared to the others, but this is the point where it started dropping requests: about 1,200 out of 1,000,000, or roughly 0.12%.
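The failure rates quoted throughout this article are simple ratios of dropped requests to total requests. A small helper makes the arithmetic explicit, using the Apache and Nginx numbers from this test as examples:

```python
def drop_rate_percent(dropped: int, total: int) -> float:
    """Fraction of requests that never received a successful response, in percent."""
    return 100.0 * dropped / total

# Apache at 1,000 connections: 14,460 dropped out of 1,000,000 requests.
print(round(drop_rate_percent(14_460, 1_000_000), 2))   # 1.45
# Nginx at the same level: about 1,200 dropped out of 1,000,000.
print(round(drop_rate_percent(1_200, 1_000_000), 2))    # 0.12
```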

Gruxi and Caddy both maintained good performance, with Gruxi showing slightly lower average latency and higher throughput than Caddy. Both handled the increased load efficiently, with no significant degradation in performance or stability.

But again, this is not a completely fair comparison, as Caddy's performance is helped by the fact that it cannot disable SendFile optimizations, which can significantly improve static file performance. Even with that caveat, Gruxi still compared well in this scenario.

Finally, we test at 1,500 concurrent connections.

Command used for the benchmark:

```ps
.\oha-windows-amd64.exe -c 1500 -n 1000000 -u ms --no-tui http://localhost/
```

| Server | Latency (avg ms) | Throughput (req/s) | 95% latency (ms) | 99% latency (ms) | Stability |
| --- | --- | --- | --- | --- | --- |
| Caddy* | 119.159 | 12,524 | 124.757 | 613.508 | OK |
| Apache | 61.786 | 11,452 | 9.489 | 2,219 | NOT GOOD |
| Nginx | 95.27 | 11,684 | 96.673 | 107.502 | NOT GOOD |
| Gruxi | 121.033 | 12,378 | 149.198 | 152.847 | OK |

* See the previous note about Caddy and SendFile, which slightly skews the results in Caddy's favor.

At this concurrency level, Apache's performance continues to degrade significantly, with 34,876 dropped requests out of 1,000,000, or around 3.48% of requests. For most production use cases, that would be difficult to accept.

Nginx also shows significant degradation, now dropping 14,400 requests out of 1,000,000, or around 1.44% of requests. That is also a meaningful failure rate.

Gruxi and Caddy maintained good performance, with no dropped requests and consistent latency and throughput. We do see a significant increase in the 99th-percentile latency for Caddy, which suggests that it is starting to struggle under the increased load, even though it still maintained good average latency and high throughput. Gruxi maintained a tight latency curve, with a 95th-percentile latency of 149.198 ms and a 99th-percentile latency of 152.847 ms. That suggests it is handling the load consistently even at this high concurrency level.

Now let's move on to serving a more realistic static file: a medium-sized JavaScript file of 110 KB, with gzip enabled but still no logging or caching. This should also force Caddy to go through user-space code instead of SendFile optimizations, which should make the comparison fairer.

Medium static file, 110 KB, with gzip compression enabled

This time we request a 110 KB JavaScript file and enable gzip compression on all servers, which should be a more realistic scenario for serving static files on the web, since many static files are compressed to reduce bandwidth usage and improve performance.

We include two Gruxi configurations, one with the file cache disabled and one with it enabled (the default), to show the difference caching can make when serving static files.

Let's start with a concurrency level of 100 concurrent connections.

Command used for the benchmark:

```ps
.\oha-windows-amd64.exe -c 100 -n 100000 -u ms --no-tui http://localhost/test.js
```

| Server | Latency (avg ms) | Throughput (req/s) | 95% latency (ms) | 99% latency (ms) | Stability |
| --- | --- | --- | --- | --- | --- |
| Caddy | 11.208 | 8,912 | 21.642 | 27.239 | OK |
| Apache | 15.000 | 6,660 | 17.047 | 31.905 | OK |
| Nginx | 114.788 | 871 | 119.749 | 201.945 | OK |
| Gruxi no cache | 35.34 | 2,827 | 47.67 | 54.33 | OK |
| Gruxi with cache | 0.677 | 145,000 | 1.11 | 1.387 | OK |

Caddy and Apache performed in a similar range, with Caddy showing slightly better latency and higher throughput compared to Apache. Nginx had significantly higher latency and lower throughput compared to Caddy and Apache. That was not expected for Nginx.

Gruxi performed fairly well, with a tight latency curve, but trailed Caddy and Apache by a sizable margin. This could indicate that gzip compression is not as optimized in Gruxi as it is in Caddy and Apache.

However, when we enabled the file cache in Gruxi, we saw a dramatic improvement in performance, with an average latency of 0.677 ms (677 µs) and a throughput of 145,000 req/s. That is because gzip only has to be done once and can then be served from memory.
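The effect the file cache exploits can be sketched in a few lines: compress each file once, keep the gzipped bytes in memory, and serve every later request straight from that buffer. This is a simplified illustration of the idea, not Gruxi's actual implementation; the in-memory "filesystem" is a stand-in for real disk reads.

```python
import gzip

class GzipCache:
    """Compress each file once and serve the compressed bytes from memory."""
    def __init__(self):
        self._store = {}

    def get(self, path: str, read_file) -> bytes:
        if path not in self._store:              # first request: read, compress, cache
            self._store[path] = gzip.compress(read_file(path))
        return self._store[path]                 # every later request: memory lookup

cache = GzipCache()
fake_fs = {"/test.js": b"console.log('hello');" * 1000}  # stand-in for the disk
body = cache.get("/test.js", fake_fs.__getitem__)
assert gzip.decompress(body) == fake_fs["/test.js"]
print(len(fake_fs["/test.js"]), "->", len(body), "bytes")
```

After the first request, neither the filesystem nor the compressor is touched again, which is why the cached numbers in the table are an order of magnitude better.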

Now let us increase the concurrency level to 500 concurrent connections.

Command used for the benchmark:

```ps
.\oha-windows-amd64.exe -c 500 -n 100000 -u ms --no-tui http://localhost/test.js
```

| Server | Latency (avg ms) | Throughput (req/s) | 95% latency (ms) | 99% latency (ms) | Stability |
| --- | --- | --- | --- | --- | --- |
| Caddy | 56.255 | 8,860 | 73.614 | 230.273 | OK |
| Apache | 75.75 | 6,575 | 17.48 | 3,228 | NOT GOOD |
| Nginx | 582.836 | 2,031 | 524.079 | 533.697 | MIXED |
| Gruxi no cache | 245.767 | 2,031 | 317.86 | 354.677 | OK |
| Gruxi with cache | 3.736 | 95,907 | 5.212 | 8.11 | OK |

Caddy maintained the best performance in this test, with an average latency of 56.255 ms and a throughput of 8,860 req/s. However, we see a significant increase in the 99th-percentile latency for Caddy, which indicates that it is starting to struggle under the increased load.

Apache's average latency held up well, but again we see a significant increase in its 99th-percentile latency, up to about 3.2 seconds, which indicates that it is struggling under the increased load. It can handle a certain load effectively, but it cannot maintain consistent performance as concurrency increases.

Nginx's performance degraded significantly, with an average latency of 582.836 ms and a throughput of 2,031 req/s. We also started seeing 588 dropped requests out of 100,000, which is around 0.588% of the requests. If this already happens at 500 concurrent connections, it is likely that the performance will degrade even further at higher concurrency levels.

Gruxi's performance also degraded, though it kept a nice, tight latency curve. The average latency of 245.767 ms and throughput of 2,031 req/s are not ideal. As in the previous test, this points to gzip compression as a likely optimization target, since the slowdown is consistent even without dropped requests.

However, when we enabled the file cache (which is enabled by default) in Gruxi, we saw a significant improvement in performance, with an average latency of 3.736 ms and a throughput of 95,907 req/s. This shows that caching can significantly improve performance when serving static files, especially when gzip compression is involved.

Let us increase the concurrency level to 1,000 concurrent connections.

Command used for the benchmark:

```ps
.\oha-windows-amd64.exe -c 1000 -n 100000 -u ms --no-tui http://localhost/test.js
```

| Server | Latency (avg ms) | Throughput (req/s) | 95% latency (ms) | 99% latency (ms) | Stability |
| --- | --- | --- | --- | --- | --- |
| Caddy | 79.144 | 8,550 | 74.809 | 244.989 | MIXED |
| Apache | 97.95 | 6,641 | 17.83 | 3,947 | BAD |
| Nginx | 589.215 | 1,094 | 459.779 | 1,123 | VERY BAD |
| Gruxi no cache | 519.813 | 1,917 | 651.441 | 737.498 | OK |
| Gruxi with cache | 8.304 | 90,884 | 10.408 | 18.746 | OK |

At this concurrency level, we start seeing cracks in Caddy's performance, with the 99th-percentile latency rising to about 245 ms. That suggests it is starting to struggle under the increased load. Even so, it still maintained good average latency and high throughput. This is also the point where it started dropping requests, with 1,693 out of 100,000, or about 1.7%.

Same story with Apache. It did well in terms of average latency and throughput, but the 99th-percentile latency increased significantly to about 3.9 seconds, which indicates that it is struggling under the increased load. This is not the latency curve you want, as users might be served in 17 ms or in 4 seconds. It also started dropping requests at this level, with 2,698 out of 100,000, or about 2.7%.

Nginx deteriorated sharply at this concurrency level, with an average latency of 589.215 ms and a throughput of 1,094 req/s. We also see 21,792 dropped requests out of 100,000, which is around 21.8% of requests. That is a very high failure rate.

Gruxi's performance also degraded, with an average latency of 519.813 ms and a throughput of 1,917 req/s. That is not ideal, and again it suggests that gzip compression could be optimized, since the slowdown is consistent even without dropped requests. The latency curve remains tight, which suggests that it is still handling the load efficiently and consistently even if the per-request performance is not ideal.

When we enabled the file cache in Gruxi, we saw a significant improvement in performance, with an average latency of 8.304 ms and a throughput of 90,884 req/s. That is around 4 GB of compressed data per second for 1,000 concurrent connections, which is very good performance for serving static files with gzip compression at this concurrency level.
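As a rough sanity check on that bandwidth figure: the compressed response size was not recorded in this article, so the calculation below assumes the 110 KB JavaScript file gzips down to about 45 KB, a plausible ratio that is consistent with the article's own estimate.

```python
req_per_s = 90_884                 # measured Gruxi-with-cache throughput at 1,000 connections
compressed_bytes = 45 * 1024       # ASSUMED gzip output size for the 110 KB file
gb_per_s = req_per_s * compressed_bytes / 1e9
print(round(gb_per_s, 1))          # prints 4.2
```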

Finally, let us test at 1,500 concurrent connections.

Command used for the benchmark:

```ps
.\oha-windows-amd64.exe -c 1500 -n 100000 -u ms --no-tui http://localhost/test.js
```

| Server | Latency (avg ms) | Throughput (req/s) | 95% latency (ms) | 99% latency (ms) | Stability |
| --- | --- | --- | --- | --- | --- |
| Caddy | 96.56 | 8,896 | 82.699 | 538.849 | MIXED |
| Apache | 110.207 | 6,799 | 18.404 | 4,026 | BAD |
| Nginx | 562.648 | 1,360 | 429.615 | 464.432 | VERY BAD |
| Gruxi no cache | 789.817 | 1,890 | 985.604 | 1,120 | OK |
| Gruxi with cache | 13.746 | 87,485 | 13.746 | 25.215 | OK |

At 1,500 concurrent connections, things get rough for the web servers, and we see significant performance degradation across the board.

They struggle in different ways, though. Caddy dropped 3,437 of 100,000 requests (about 3.4%), Apache dropped 5,447 (about 5.4%), and Nginx dropped 36,398 (about 36.4%). Gruxi was the only one of the four that did not drop requests at this level.

Caddy maintained good average latency and high throughput, but we also see a significant increase in the 99th-percentile latency to about 538 ms, along with dropped requests.

Apache actually did well in terms of average latency and throughput, but the 99th-percentile latency increased significantly to about 4 seconds, and so did the dropped requests.

Nginx deteriorated further at this concurrency level, which follows the pattern already visible at 1,000 concurrent connections.

Gruxi's performance had the same problem as at previous concurrency levels, with a good, tight latency curve but less-than-ideal per-request performance, indicating that compression should be optimized. Gruxi had no dropped requests even at this level, which suggests that the request/response handling is efficient and consistent, even under heavy load.

When we enabled the file cache in Gruxi, we saw a significant improvement in performance, with an average latency of 13.746 ms, a throughput of 87,485 req/s, and no problems with dropped requests.

Analysis of results

Conclusions on the first test scenario with the small static file:

All web servers performed well at lower concurrency levels, but we already saw Nginx starting to drop some requests at 500 concurrent connections and Apache struggling significantly at 1,000 concurrent connections. That may be surprising to some readers, as Nginx is often praised for static file performance, but in this specific Windows-based test environment it did not scale as well as expected.

Caddy did very well overall, but its performance in this test is skewed by the fact that it cannot disable SendFile optimizations, which can significantly improve performance when serving static files because the data does not need to go through user-space code. Even with that caveat, it handled the workload well, with a good latency curve, though it started to show some strain at the highest concurrency level.

Gruxi performed competitively with Caddy, showing good latency and throughput even at high concurrency levels while maintaining a tight latency curve. That suggests the server is handling load in a consistent way.

Conclusions on the second test scenario with the medium static file and gzip compression:

Compression with gzip forces user-space handling of files, which is a more realistic scenario for serving static files on the web, since many static assets are compressed to reduce bandwidth usage and improve performance. In this test, we saw a significant performance drop for all servers compared to the first test scenario, which is expected because compression adds overhead.

Caddy did pretty well overall, both in terms of latency curve and throughput. But already at 1,000 concurrent connections it started dropping requests. It also showed significantly higher memory usage than the other servers, which could be a concern in production environments: 2 to 4 times the memory usage of Gruxi, which was the second highest. Overall, it was a strong result, but the dropped requests and higher memory usage at higher concurrency levels are worth noting.

Apache also did pretty well overall, but we saw a significant increase, 3 to 4 seconds, in the 99th-percentile latency at concurrency levels above 500, which indicates that it is struggling to maintain consistent performance under load. It also started dropping requests at 1,000 concurrent connections, which is not ideal.

Nginx had the weakest results in this test, with significantly higher latency and lower throughput compared to Caddy and Apache. It also started dropping requests at 500 concurrent connections, and the failure rate increased significantly at higher concurrency levels. That was not the outcome many readers would expect from Nginx, but it may reflect the fact that this benchmark was run on Windows, whereas Nginx is primarily developed and tested on Unix-like systems.

Gruxi performed well in this test, with a tight latency curve, but it was not nearly as fast as Caddy or Apache. That could indicate that gzip compression in Gruxi is not yet as CPU-efficient as it is in Caddy or Apache.

When we enabled the file cache in Gruxi, we saw a dramatic improvement in performance across all concurrency levels, which shows that caching can significantly improve performance when serving static files, especially when gzip compression is involved.

Conclusion

Caddy and Gruxi were the strongest overall performers in these two test scenarios. Gruxi completed all tests with a tighter latency curve and no dropped requests, which suggests that it serves static files in a consistent and stable manner. Caddy was strongest when it came to efficient and fast gzip handling. At the same time, Caddy also showed wider latency spread, dropped requests at higher concurrency, and significantly higher memory usage, 2 to 4 times Gruxi's usage, which could matter in production.

Apache performed decently in both test scenarios, but it showed significant performance degradation at higher concurrency levels, with increased latency, a very wide latency curve, and dropped requests. A system serving users in 17 ms one moment and 4 seconds the next is hard to call predictable.

Nginx had the weakest results in both test scenarios, with significantly higher latency and lower throughput than the other servers, along with a high failure rate at higher concurrency levels.

Configurations used for the benchmark

Configurations for the first test scenario with the small static file

Caddyfile for Caddy:

http://localhost

root public_html
file_server

httpd.conf for Apache:

Define SRVROOT "D:/dev/performance-http-test/GruxiPerfCompare/httpd-2.4.67-260504-Win64-VS18/Apache24"

ServerRoot "${SRVROOT}"

Listen 80

LoadModule access_compat_module modules/mod_access_compat.so
LoadModule auth_basic_module modules/mod_auth_basic.so
LoadModule authn_core_module modules/mod_authn_core.so
LoadModule authn_file_module modules/mod_authn_file.so
LoadModule authz_core_module modules/mod_authz_core.so
LoadModule authz_groupfile_module modules/mod_authz_groupfile.so
LoadModule authz_host_module modules/mod_authz_host.so
LoadModule authz_user_module modules/mod_authz_user.so
LoadModule filter_module modules/mod_filter.so
LoadModule reqtimeout_module modules/mod_reqtimeout.so
LoadModule autoindex_module modules/mod_autoindex.so
LoadModule mime_module modules/mod_mime.so
LoadModule log_config_module modules/mod_log_config.so
LoadModule alias_module modules/mod_alias.so
LoadModule dir_module modules/mod_dir.so
LoadModule env_module modules/mod_env.so
LoadModule headers_module modules/mod_headers.so
LoadModule setenvif_module modules/mod_setenvif.so
LoadModule version_module modules/mod_version.so

<IfModule unixd_module>
User daemon
Group daemon

</IfModule>

ServerAdmin you@example.com

<Directory />
    AllowOverride none
    Require all denied
</Directory>

DocumentRoot "${SRVROOT}/htdocs"
<Directory "${SRVROOT}/htdocs">
    Options Indexes FollowSymLinks

    AllowOverride None

    Require all granted
</Directory>

<IfModule dir_module>
    DirectoryIndex index.html
</IfModule>

<Files ".ht*">
    Require all denied
</Files>

ErrorLog "logs/error_log"

LogLevel warn

<IfModule log_config_module>
    LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
    LogFormat "%h %l %u %t \"%r\" %>s %b" common

    <IfModule logio_module>
      LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %I %O" combinedio
    </IfModule>

</IfModule>

<IfModule alias_module>

    ScriptAlias /cgi-bin/ "${SRVROOT}/cgi-bin/"

</IfModule>

<IfModule cgid_module>
</IfModule>

<Directory "${SRVROOT}/cgi-bin">
    AllowOverride None
    Options None
    Require all granted
</Directory>

<IfModule headers_module>
    RequestHeader unset Proxy early
</IfModule>

<IfModule mime_module>
    TypesConfig conf/mime.types

    AddType application/x-compress .Z
    AddType application/x-gzip .gz .tgz

</IfModule>

<IfModule proxy_html_module>
Include conf/extra/proxy-html.conf
</IfModule>

<IfModule ssl_module>
SSLRandomSeed startup builtin
SSLRandomSeed connect builtin
</IfModule>

nginx.conf for Nginx:

worker_processes  auto;

events {
    worker_connections  1024;
	multi_accept on;
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    keepalive_timeout  65;

    server {
        listen       80;
        server_name  localhost;

        access_log  off;

        location / {
            root   html;
            index  index.html;
        }

        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }

    }

}

Exported configuration from Gruxi in JSON format:

{
  "version": 8,
  "bindings": [
    {
      "id": "31788a62-c3c6-44ac-91e0-455b5460bb9a",
      "ip": "0.0.0.0",
      "port": 80,
      "is_admin": false,
      "is_telemetry": false,
      "is_tls": false
    },
    {
      "id": "1fb457c7-6133-45c6-9273-54991c00ae11",
      "ip": "0.0.0.0",
      "port": 443,
      "is_admin": false,
      "is_telemetry": false,
      "is_tls": true
    }
  ],
  "sites": [
    {
      "id": "f7711b80-f34d-4741-a5f7-ecef2264937f",
      "hostnames": [
        "*"
      ],
      "is_default": true,
      "is_enabled": true,
      "tls_automatic_enabled": false,
      "tls_cert_path": "D:\\dev\\gruxi\\target\\release\\certs/68ed9330-7f69-466d-b7b2-a75324f90e9a.crt.pem",
      "tls_cert_content": "",
      "tls_key_path": "D:\\dev\\gruxi\\target\\release\\certs/68ed9330-7f69-466d-b7b2-a75324f90e9a.key.pem",
      "tls_key_content": "",
      "rewrite_functions": [],
      "request_handlers": [
        "18d0e179-2a51-4c1f-866b-ff4c8239ce0e"
      ],
      "extra_headers": [],
      "access_log_enabled": false,
      "access_log_file": "",
      "force_tls": false,
      "force_tls_port": 443,
      "canonical_host": ""
    }
  ],
  "binding_sites": [
    {
      "binding_id": "31788a62-c3c6-44ac-91e0-455b5460bb9a",
      "site_id": "f7711b80-f34d-4741-a5f7-ecef2264937f"
    },
    {
      "binding_id": "1fb457c7-6133-45c6-9273-54991c00ae11",
      "site_id": "f7711b80-f34d-4741-a5f7-ecef2264937f"
    }
  ],
  "core": {
    "file_cache": {
      "is_enabled": false,
      "cache_item_size": 1000,
      "cache_max_size_per_file": 1048576,
      "cache_update_thread_interval": 30,
      "max_item_lifetime": 60,
      "forced_eviction_threshold": 80
    },
    "gzip": {
      "is_enabled": false,
      "compressible_content_types": [
        "text/",
        "application/javascript",
        "application/json",
        "application/xml",
        "application/xhtml+xml",
        "application/x-javascript",
        "application/x-yaml",
        "image/svg+xml",
        "application/font-woff",
        "application/font-woff2"
      ]
    },
    "server_settings": {
      "max_connection_duration_seconds": 90,
      "max_body_size": 10485760,
      "blocked_file_patterns": [
        ".tmp",
        ".config",
        ".php",
        ".sql",
        ".bak",
        ".old",
        ".orig",
        ".conf",
        ".ini",
        ".log",
        ".key",
        ".pem"
      ]
    },
    "admin_portal": {
      "is_enabled": true,
      "domain_name": "",
      "tls_automatic_enabled": false,
      "tls_certificate_path": "D:\\dev\\gruxi\\target\\release\\certs/f3121af6-b84b-4e14-88c3-2d315aae4eb7.crt.pem",
      "tls_key_path": "D:\\dev\\gruxi\\target\\release\\certs/f3121af6-b84b-4e14-88c3-2d315aae4eb7.key.pem"
    },
    "telemetry": {
      "bearer_token": null
    },
    "tls_settings": {
      "account_email": "",
      "use_staging_server": false
    },
    "http_caching": {
      "enabled_caching": true,
      "enable_header_etag": true,
      "enable_header_last_modified": true,
      "enable_header_expires": true,
      "enable_header_cache_control": true
    },
    "logging": {
      "log_rotation_enabled": true,
      "rotate_by_size": true,
      "max_log_file_size_mb": 100,
      "rotate_by_time": false,
      "log_time_rotation_type": "daily",
      "delete_old_logs": false,
      "max_log_age_days": 30
    }
  },
  "request_handlers": [
    {
      "id": "18d0e179-2a51-4c1f-866b-ff4c8239ce0e",
      "is_enabled": true,
      "name": "Static File Handler",
      "processor_type": "static",
      "processor_id": "bdfb6c82-0f59-4498-823c-db2f240818f8",
      "url_match": [
        "*"
      ]
    }
  ],
  "static_file_processors": [
    {
      "id": "bdfb6c82-0f59-4498-823c-db2f240818f8",
      "web_root": "D:/dev/gruxi/target/release/www-default",
      "web_root_index_file_list": [
        "index.html"
      ]
    }
  ],
  "php_processors": [],
  "proxy_processors": [],
  "php_cgi_handlers": []
}

Configurations for the second test scenario with the gzipped static file

Caddyfile for Caddy:

http://localhost

encode gzip
root public_html
file_server

httpd.conf for Apache:

Define SRVROOT "D:/dev/performance-http-test/GruxiPerfCompare/httpd-2.4.67-260504-Win64-VS18/Apache24"

ServerRoot "${SRVROOT}"

Listen 80

LoadModule access_compat_module modules/mod_access_compat.so
LoadModule auth_basic_module modules/mod_auth_basic.so
LoadModule authn_core_module modules/mod_authn_core.so
LoadModule authn_file_module modules/mod_authn_file.so
LoadModule authz_core_module modules/mod_authz_core.so
LoadModule authz_groupfile_module modules/mod_authz_groupfile.so
LoadModule authz_host_module modules/mod_authz_host.so
LoadModule authz_user_module modules/mod_authz_user.so
LoadModule deflate_module modules/mod_deflate.so
LoadModule filter_module modules/mod_filter.so
LoadModule reqtimeout_module modules/mod_reqtimeout.so
LoadModule autoindex_module modules/mod_autoindex.so
LoadModule mime_module modules/mod_mime.so
LoadModule log_config_module modules/mod_log_config.so
LoadModule alias_module modules/mod_alias.so
LoadModule dir_module modules/mod_dir.so
LoadModule env_module modules/mod_env.so
LoadModule headers_module modules/mod_headers.so
LoadModule setenvif_module modules/mod_setenvif.so
LoadModule version_module modules/mod_version.so

<IfModule unixd_module>
User daemon
Group daemon

</IfModule>

ServerAdmin you@example.com

<Directory />
    AllowOverride none
    Require all denied
</Directory>

SetOutputFilter DEFLATE

DocumentRoot "${SRVROOT}/htdocs"
<Directory "${SRVROOT}/htdocs">
    Options Indexes FollowSymLinks

    AllowOverride All

    AddOutputFilterByType DEFLATE text/html text/css application/javascript

    Require all granted
</Directory>

<IfModule dir_module>
    DirectoryIndex index.html
</IfModule>

<Files ".ht*">
    Require all denied
</Files>

ErrorLog "logs/error_log"

LogLevel warn

<IfModule log_config_module>
    LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
    LogFormat "%h %l %u %t \"%r\" %>s %b" common

    <IfModule logio_module>
      LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %I %O" combinedio
    </IfModule>

</IfModule>

<IfModule alias_module>

    ScriptAlias /cgi-bin/ "${SRVROOT}/cgi-bin/"

</IfModule>

<IfModule cgid_module>
</IfModule>

<Directory "${SRVROOT}/cgi-bin">
    AllowOverride None
    Options None
    Require all granted
</Directory>

<IfModule headers_module>
    RequestHeader unset Proxy early
</IfModule>

<IfModule mime_module>
    TypesConfig conf/mime.types

    AddType application/x-compress .Z
    AddType application/x-gzip .gz .tgz

</IfModule>

<IfModule proxy_html_module>
Include conf/extra/proxy-html.conf
</IfModule>

<IfModule ssl_module>
SSLRandomSeed startup builtin
SSLRandomSeed connect builtin
</IfModule>

nginx.conf for Nginx:

worker_processes  auto;

events {
    worker_connections  1024;
    multi_accept on;
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    keepalive_timeout  65;

    gzip  on;
    gzip_types      application/javascript;

    server {
        listen       80;
        server_name  localhost;

        access_log  off;

        location / {
            root   html;
            index  index.html;
        }

        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }

    }

}

Exported configuration from Gruxi in JSON format:

{
  "version": 8,
  "bindings": [
    {
      "id": "31788a62-c3c6-44ac-91e0-455b5460bb9a",
      "ip": "0.0.0.0",
      "port": 80,
      "is_admin": false,
      "is_telemetry": false,
      "is_tls": false
    },
    {
      "id": "1fb457c7-6133-45c6-9273-54991c00ae11",
      "ip": "0.0.0.0",
      "port": 443,
      "is_admin": false,
      "is_telemetry": false,
      "is_tls": true
    }
  ],
  "sites": [
    {
      "id": "f7711b80-f34d-4741-a5f7-ecef2264937f",
      "hostnames": [
        "*"
      ],
      "is_default": true,
      "is_enabled": true,
      "tls_automatic_enabled": false,
      "tls_cert_path": "D:\\dev\\gruxi\\target\\release\\certs/68ed9330-7f69-466d-b7b2-a75324f90e9a.crt.pem",
      "tls_cert_content": "",
      "tls_key_path": "D:\\dev\\gruxi\\target\\release\\certs/68ed9330-7f69-466d-b7b2-a75324f90e9a.key.pem",
      "tls_key_content": "",
      "rewrite_functions": [],
      "request_handlers": [
        "18d0e179-2a51-4c1f-866b-ff4c8239ce0e"
      ],
      "extra_headers": [],
      "access_log_enabled": false,
      "access_log_file": "",
      "force_tls": false,
      "force_tls_port": 443,
      "canonical_host": ""
    }
  ],
  "binding_sites": [
    {
      "binding_id": "31788a62-c3c6-44ac-91e0-455b5460bb9a",
      "site_id": "f7711b80-f34d-4741-a5f7-ecef2264937f"
    },
    {
      "binding_id": "1fb457c7-6133-45c6-9273-54991c00ae11",
      "site_id": "f7711b80-f34d-4741-a5f7-ecef2264937f"
    }
  ],
  "core": {
    "file_cache": {
      "is_enabled": false,
      "cache_item_size": 1000,
      "cache_max_size_per_file": 1048576,
      "cache_update_thread_interval": 30,
      "max_item_lifetime": 60,
      "forced_eviction_threshold": 80
    },
    "gzip": {
      "is_enabled": true,
      "compressible_content_types": [
        "text/",
        "application/javascript",
        "application/json",
        "application/xml",
        "application/xhtml+xml",
        "application/x-javascript",
        "application/x-yaml",
        "image/svg+xml",
        "application/font-woff",
        "application/font-woff2"
      ]
    },
    "server_settings": {
      "max_connection_duration_seconds": 90,
      "max_body_size": 10485760,
      "blocked_file_patterns": [
        ".tmp",
        ".config",
        ".php",
        ".sql",
        ".bak",
        ".old",
        ".orig",
        ".conf",
        ".ini",
        ".log",
        ".key",
        ".pem"
      ]
    },
    "admin_portal": {
      "is_enabled": true,
      "domain_name": "",
      "tls_automatic_enabled": false,
      "tls_certificate_path": "D:\\dev\\gruxi\\target\\release\\certs/f3121af6-b84b-4e14-88c3-2d315aae4eb7.crt.pem",
      "tls_key_path": "D:\\dev\\gruxi\\target\\release\\certs/f3121af6-b84b-4e14-88c3-2d315aae4eb7.key.pem"
    },
    "telemetry": {
      "bearer_token": null
    },
    "tls_settings": {
      "account_email": "",
      "use_staging_server": false
    },
    "http_caching": {
      "enabled_caching": true,
      "enable_header_etag": true,
      "enable_header_last_modified": true,
      "enable_header_expires": true,
      "enable_header_cache_control": true
    },
    "logging": {
      "log_rotation_enabled": true,
      "rotate_by_size": true,
      "max_log_file_size_mb": 100,
      "rotate_by_time": false,
      "log_time_rotation_type": "daily",
      "delete_old_logs": false,
      "max_log_age_days": 30
    }
  },
  "request_handlers": [
    {
      "id": "18d0e179-2a51-4c1f-866b-ff4c8239ce0e",
      "is_enabled": true,
      "name": "Static File Handler",
      "processor_type": "static",
      "processor_id": "bdfb6c82-0f59-4498-823c-db2f240818f8",
      "url_match": [
        "*"
      ]
    }
  ],
  "static_file_processors": [
    {
      "id": "bdfb6c82-0f59-4498-823c-db2f240818f8",
      "web_root": "D:/dev/gruxi/target/release/www-default",
      "web_root_index_file_list": [
        "index.html"
      ]
    }
  ],
  "php_processors": [],
  "proxy_processors": [],
  "php_cgi_handlers": []
}
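The `compressible_content_types` list in the Gruxi configuration above mixes full MIME types with the bare prefix `"text/"`. A sketch of how such a list could be evaluated is shown below; the prefix-matching semantics are an assumption for illustration, not documented Gruxi internals, and `is_compressible` is a hypothetical helper.

```python
# Hypothetical sketch of prefix-based content-type matching, assuming entries
# like "text/" act as prefixes covering text/html, text/css, and so on.
# Subset of the `compressible_content_types` list from the exported config.
COMPRESSIBLE_PREFIXES = [
    "text/",
    "application/javascript",
    "application/json",
    "application/xml",
    "image/svg+xml",
]

def is_compressible(content_type: str) -> bool:
    """Strip any parameters (e.g. charset), then test against the prefix list."""
    mime = content_type.split(";")[0].strip().lower()
    return any(mime.startswith(prefix) for prefix in COMPRESSIBLE_PREFIXES)
```

Under this reading, `text/html; charset=utf-8` would be compressed while `image/png` would be passed through unmodified, which matches the usual practice of not re-compressing already-compressed image formats.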