Kestrel: The High-Performance Web Server Powering ASP.NET Core

Introduction

When it comes to hosting ASP.NET Core applications, developers have no shortage of web server options to choose from. But standing out from the pack is Kestrel, a lightweight, cross-platform web server that comes bundled with the ASP.NET Core runtime.

As a Linux and proxy server expert, I've had the opportunity to work with Kestrel in a variety of production scenarios. In this article, I'll share an in-depth look at what makes Kestrel tick, how it stacks up against other web servers in terms of features and performance, and tips for getting the most out of Kestrel in your ASP.NET Core applications.

Kestrel's Architecture

At its core, Kestrel is designed to be a bare-bones, high-performance web server. It doesn't try to be everything to everyone, but rather focuses on doing one thing really well: hosting ASP.NET Core applications.

Kestrel's architecture was originally built on the libuv library, a multi-platform support library with a focus on asynchronous I/O that was first developed for Node.js. Since ASP.NET Core 2.1, however, the default transport has been a managed socket transport, and the libuv transport was deprecated and eventually removed in .NET 5.

Either way, the design goal is the same: highly efficient, non-blocking I/O across different operating systems. This is key to Kestrel's ability to scale and handle large volumes of concurrent requests without getting bogged down.

Kestrel's request processing pipeline is divided into two main components:

  1. Transport: This is the lowest level of the pipeline and is responsible for accepting incoming connections and reading/writing bytes to/from those connections. The default transport is the managed Sockets transport; on older runtimes, libuv filled this role.

  2. Middleware: Sitting above the transport (and Kestrel's HTTP protocol parsing) is ASP.NET Core's middleware pipeline. This is where the majority of application logic lives, including routing, authentication, logging, error handling, etc.

Middleware components are chained together, with each one having the opportunity to process the request before passing it on to the next component in the chain. This model provides a high degree of flexibility and extensibility.
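The chaining model can be sketched with two inline middleware components registered in a Startup class, matching the style used later in this article. This is a minimal illustration, not a production pipeline; the `/health` path and the log format are arbitrary choices for the example:

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;
using System;
using System.Diagnostics;

public class Startup
{
    public void Configure(IApplicationBuilder app)
    {
        // First component: measures how long the rest of the pipeline takes.
        app.Use(async (context, next) =>
        {
            var sw = Stopwatch.StartNew();
            await next();   // hand off to the next component in the chain
            Console.WriteLine(
                $"{context.Request.Path} -> {context.Response.StatusCode} " +
                $"in {sw.ElapsedMilliseconds} ms");
        });

        // Second component: short-circuits health probes so they never
        // reach the application's real handlers.
        app.Use(async (context, next) =>
        {
            if (context.Request.Path == "/health")
            {
                await context.Response.WriteAsync("OK");
                return;     // no call to next(): the chain stops here
            }
            await next();
        });

        // Terminal component: handles everything not short-circuited above.
        app.Run(async context => await context.Response.WriteAsync("Hello from Kestrel"));
    }
}
```

Note how each component decides whether to call `next()`; skipping that call is what makes short-circuiting (for health checks, auth failures, cached responses, etc.) possible.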

Kestrel Performance

One of Kestrel's claims to fame is its high performance. But just how fast is it?

To put some numbers behind this, let's look at some results from the TechEmpower web framework benchmarks. These benchmarks measure the performance of different web frameworks and servers under a variety of workloads.

In the plaintext benchmark, which measures the ability to serve a small, static payload as quickly as possible, Kestrel ranked in the top 10 across all tested frameworks and servers:

TechEmpower Plaintext Benchmark Results

Source: https://www.techempower.com/benchmarks/#section=data-r20&hw=ph&test=plaintext

In the JSON serialization benchmark, Kestrel also performed well, ranking 13th overall:

TechEmpower JSON Benchmark Results

Source: https://www.techempower.com/benchmarks/#section=data-r20&hw=ph&test=json

These benchmarks demonstrate that Kestrel is capable of delivering excellent performance across different types of workloads.

It's worth noting that while these benchmark results are impressive, they don't necessarily translate directly to real-world application performance. Factors like application architecture, database access patterns, network latency, etc. can all have a significant impact on overall performance.

That said, Kestrel provides a strong foundation to build high-performance applications on top of. Its efficient use of system resources and optimized request processing pipeline eliminate many of the bottlenecks and inefficiencies found in other web servers.

Configuring Kestrel for Performance

Out of the box, Kestrel is configured with reasonable defaults that will work well for many applications. However, to get the most out of Kestrel, it's important to understand some of the key configuration options and how they impact performance.

Threading

Contrary to a common misconception, Kestrel does not process requests on a single dedicated thread; application code runs on the shared .NET thread pool. The older libuv transport did expose a ThreadCount option, but that option no longer exists in current versions. For the default socket transport, the closest tuning knob is IOQueueCount, which controls how many I/O queues are used to service connections.

This can be set through the UseSockets extension on the web host builder:

public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
        .ConfigureWebHostDefaults(webBuilder =>
        {
            // Configures the default socket transport
            // (Microsoft.AspNetCore.Server.Kestrel.Transport.Sockets).
            webBuilder.UseSockets(options =>
            {
                options.IOQueueCount = 4;
            });
            webBuilder.UseStartup<Startup>();
        });

Here we're setting the I/O queue count to 4. The optimal number will depend on your specific workload and hardware, but the default, which is derived from the number of logical processors on the machine, is a sensible starting point for most applications.

Connection Limits

Another important configuration option is the maximum number of concurrent connections Kestrel will accept. By default there is no limit: the property defaults to null, meaning unlimited.

If you want to cap the number of concurrent connections your application will accept, for example to protect a resource-constrained host from being overwhelmed, you can set the MaxConcurrentConnections property:

public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
        .ConfigureWebHostDefaults(webBuilder =>
        {
            webBuilder.UseKestrel(options =>
            {
                options.Limits.MaxConcurrentConnections = 1000;
            });
            webBuilder.UseStartup<Startup>();
        });

Here we're capping concurrent connections at 1000. The right number will depend on your specific workload and hardware capabilities. Setting this number too high (or leaving it unlimited) can actually degrade performance under heavy load, as excessive concurrency leads to context switching and resource contention.

It's also worth noting that the MaxConcurrentConnections limit is per Kestrel instance. If you're scaling out your application across multiple processes or machines, each instance will have its own connection limit.

Using Kestrel with a Reverse Proxy

While Kestrel can be used as an edge (Internet-facing) server, it's more commonly deployed behind a reverse proxy such as IIS, Nginx, or Apache.

A reverse proxy can offload responsibilities like SSL termination, static file caching, request filtering and routing, etc., allowing Kestrel to focus on its primary role of serving dynamic application content.
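One practical consequence of this setup: because the proxy terminates the client connection, the application would otherwise see the proxy's IP address and scheme rather than the client's. ASP.NET Core ships middleware to restore the original values from the X-Forwarded-* headers the proxy sets. A minimal sketch, to be registered early in Startup.Configure (the method name here is just for illustration):

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.HttpOverrides;

public class ProxySetup
{
    // Call this at the start of the middleware pipeline, before anything
    // that inspects the request's client IP or scheme.
    public static void UseProxyHeaders(IApplicationBuilder app)
    {
        app.UseForwardedHeaders(new ForwardedHeadersOptions
        {
            // Trust the X-Forwarded-For / X-Forwarded-Proto headers set by
            // the reverse proxy so HttpContext reflects the original client.
            ForwardedHeaders = ForwardedHeaders.XForwardedFor |
                               ForwardedHeaders.XForwardedProto
        });
    }
}
```

Without this, features like HTTPS redirection and IP-based rate limiting will operate on the proxy's address instead of the real client's.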

From a Linux perspective, Nginx and Apache are the two most popular reverse proxy options. Let's look at an example configuration for each.

Nginx Configuration

Here's a sample Nginx configuration file that proxies requests to a Kestrel server running on the same machine:

http {
    server {
        listen 80;

        location / {
            proxy_pass http://localhost:5000;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection keep-alive;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_cache_bypass $http_upgrade;
        }
    }
}

This configuration tells Nginx to listen on port 80 and forward all requests to http://localhost:5000, which is where the Kestrel server is listening.

The proxy_http_version directive and the Upgrade/Connection headers ensure that WebSocket upgrade requests are proxied correctly, while proxy_cache_bypass keeps upgraded connections out of the cache. Note that Nginx talks HTTP/1.1 to the upstream, so HTTP/2, if enabled, terminates at the proxy. The X-Forwarded-For and X-Forwarded-Proto headers pass the original client address and scheme through to the application.

Apache Configuration

Here's a similar configuration for Apache:

# Requires mod_proxy and mod_proxy_http to be enabled
<VirtualHost *:80>
    ProxyPreserveHost On
    ProxyPass / http://127.0.0.1:5000/
    ProxyPassReverse / http://127.0.0.1:5000/
</VirtualHost>

This configuration uses the ProxyPass and ProxyPassReverse directives to forward requests to the Kestrel server running on http://127.0.0.1:5000.

The ProxyPreserveHost directive is used to ensure that the original Host header is passed through to Kestrel, which is important for applications that rely on this header for routing or other purposes.

Monitoring and Logging

Like any critical component in your application stack, it's important to monitor Kestrel to ensure it's healthy and performing as expected. Kestrel exposes a number of metrics that can give you insight into its performance and resource utilization.

Some key metrics to watch include:

  • Requests Per Second: The number of requests Kestrel is processing per second. A sudden drop in RPS can indicate a problem.
  • CPU Usage: The amount of CPU Kestrel is consuming. Consistently high CPU usage could indicate a performance bottleneck.
  • Memory Usage: The amount of memory Kestrel is using. Steadily increasing memory usage could signal a memory leak.
  • Active Connections: The number of active TCP connections. A spike in active connections could indicate a sudden influx of traffic or a DoS attack.

These metrics can be collected via Kestrel's built-in EventCounters (for example with the dotnet-counters tool), via the System.Diagnostics.Metrics APIs in newer releases, or through external monitoring tools like Prometheus, Datadog, etc.
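As one concrete example, Kestrel's event counters can be sampled from the command line with the dotnet-counters global tool. The PID below is a placeholder; substitute the process ID of your running application:

```shell
# Install the tool once:
#   dotnet tool install --global dotnet-counters
# Then attach to a running ASP.NET Core process (1234 is a placeholder PID)
# and watch Kestrel's counters (current connections, TLS handshakes, etc.):
dotnet-counters monitor --process-id 1234 --counters Microsoft.AspNetCore.Server.Kestrel
```

This gives a live, low-overhead view of the connection-level metrics discussed above without any code changes to the application.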

In addition to metrics, it's also important to collect and centralize logs. ASP.NET Core logs each request at the Information level through its hosting layer, while Kestrel's own connection-level events are written under the Microsoft.AspNetCore.Server.Kestrel logger category and can be surfaced by raising that category's log level.

The request logs contain valuable information about each incoming request, including the HTTP method, URL, response status code, and the time taken to process the request. Having these logs available can help with troubleshooting issues and identifying performance bottlenecks.
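One way to surface Kestrel's own diagnostic output is to raise the verbosity of its logger category in appsettings.json. The levels shown here are illustrative; Debug is usually too chatty for production:

```json
{
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft.AspNetCore.Server.Kestrel": "Debug"
    }
  }
}
```

With this in place, connection-level events (accepts, resets, protocol errors) appear alongside the normal request logs, which is invaluable when debugging proxy or TLS issues.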

Securing Kestrel

As with any Internet-facing component, it's critical to properly secure Kestrel to protect against attacks. Some key considerations:

  • Run Kestrel Behind a Firewall: Kestrel should not be directly exposed to the Internet. Instead, run it behind a firewall and use a reverse proxy to handle external traffic.

  • Keep Kestrel Up to Date: Make sure you're running the latest version of Kestrel and the ASP.NET Core runtime. Each release includes important security fixes and performance improvements.

  • Use HTTPS: Configure Kestrel to use HTTPS for all traffic. This will encrypt data in transit and help protect against man-in-the-middle attacks.

  • Configure Request Limits: Use Kestrel's request limit options to protect against large or malicious requests that could consume excessive resources.

  • Use a Dedicated User Account: Run Kestrel under a dedicated user account with limited privileges. This can help contain the damage if Kestrel is ever compromised.

By following these best practices, you can help ensure that your Kestrel-based applications are secure and resilient against attack.
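Two of the bullets above, HTTPS and request limits, can be configured directly on Kestrel. A minimal sketch, where the certificate path, password, and limit values are placeholders to adapt to your environment:

```csharp
using System;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Hosting;

public class Program
{
    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .ConfigureWebHostDefaults(webBuilder =>
            {
                webBuilder.UseKestrel(options =>
                {
                    // Serve HTTPS with a certificate from disk
                    // (placeholder path and password).
                    options.ListenAnyIP(443, listenOptions =>
                    {
                        listenOptions.UseHttps("/etc/ssl/private/app.pfx", "certPassword");
                    });

                    // Guard against oversized or slow requests.
                    options.Limits.MaxRequestBodySize = 10 * 1024 * 1024; // 10 MB
                    options.Limits.RequestHeadersTimeout = TimeSpan.FromSeconds(30);
                });
                webBuilder.UseStartup<Startup>();
            });
}
```

In production behind a reverse proxy, TLS is often terminated at the proxy instead, but the request limits remain useful either way.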

Future of Kestrel

Looking forward, Kestrel will continue to evolve and improve as part of the larger ASP.NET Core ecosystem. Some areas of future development include:

  • Improved Performance: The Kestrel team is always looking for ways to squeeze out more performance and improve efficiency. Future releases will likely include further optimizations to the request processing pipeline and memory management.

  • Better Observability: As applications become more complex and distributed, it's important to have robust tools for monitoring and troubleshooting. Expect to see improved logging, tracing, and metrics capabilities in future versions of Kestrel.

  • Tighter Integration with Azure: As more applications move to the cloud, it's important that Kestrel integrates seamlessly with cloud platforms like Microsoft Azure. Expect to see closer ties and better tooling support for deploying and managing Kestrel applications in Azure.

  • Continued Cross-Platform Support: Kestrel's cross-platform support is one of its key strengths and this will continue to be a focus going forward. As new operating system versions and distributions emerge, the Kestrel team will work to ensure compatibility and optimal performance.

Overall, the future looks bright for Kestrel and the ASP.NET Core ecosystem as a whole. With its focus on performance, cross-platform support, and developer productivity, Kestrel is well-positioned to remain a top choice for hosting ASP.NET Core applications for years to come.

Conclusion

In this article, we've taken a deep dive into Kestrel, the high-performance web server at the heart of ASP.NET Core.

We've covered Kestrel's architecture and how its transport layer provides efficient, cross-platform asynchronous I/O. We've looked at Kestrel's performance characteristics and how it stacks up against other popular web servers in industry benchmarks.

We've also explored some of the key configuration options for tuning Kestrel's performance, including threading and connection limit settings. We've seen how Kestrel can be used in conjunction with a reverse proxy server like Nginx or Apache to provide additional capabilities and security.

Finally, we've touched on the importance of monitoring and securing Kestrel and looked ahead to some of the planned future developments.

Whether you're new to ASP.NET Core or a seasoned pro, I hope this article has given you a better understanding of what Kestrel is, how it works, and how you can leverage it to build high-performance, cross-platform web applications.

With its impressive performance, rich feature set, and active development community, Kestrel is a web server that's well worth considering for your next ASP.NET Core project.
