Securing Your APIs with Nginx auth_request: The Definitive Guide
APIs have become the backbone of modern application architectures, powering the communication between microservices, cloud services, and client apps. The explosive growth of APIs – Akamai reports a 30x increase in API traffic in 2021 – has made them a prime target for attackers. Gartner predicts that by 2022, API attacks will become the most frequent attack vector for web app breaches.
Protecting your API endpoints is crucial, but implementing security within each microservice is error-prone and inconsistent. Externalizing authentication and authorization with an API gateway or reverse proxy provides a central control point to secure your APIs with less code.
Nginx is among the most popular web servers and reverse proxies, powering over 30% of sites on the internet, including the majority of the 100,000 highest-traffic sites. One of nginx's most powerful features for API security is the auth_request module and its companion auth_request_set directive. With auth_request, you can protect any backend API or web service by delegating authentication and authorization to a separate auth service, without any changes to the backend code.
In this definitive guide, we'll dive deep into auth_request to understand:
- How the auth_request flow works under the nginx hood
- Securing multiple APIs with a common auth layer
- Passing user context and metadata to your backend
- Scaling auth_request with caching and other optimizations
- Troubleshooting and monitoring your auth
Whether you're already using nginx to proxy your APIs or considering it, this guide will give you the knowledge and practical guidance to leverage auth_request for stronger API security.
How auth_request works
At a high level, auth_request allows nginx to verify authentication and authorization by making a subrequest to a designated auth service before allowing the original request to proceed upstream.
Here's the step-by-step:
- Client sends a request to an nginx-protected resource
- Nginx makes a subrequest to the configured auth endpoint, passing request headers/metadata
- Auth endpoint processes the request and returns a response:
  - 2xx indicates the request is allowed
  - 401 or 403 indicates the request is denied, and nginx returns that status to the client (any other status is treated as an error)
- On success, nginx allows the original request to the upstream API
- Upstream API handles the request as normal
With this flow, the auth logic is completely decoupled from the upstream API, allowing you to secure any existing API without code changes. The auth endpoint can be an external auth service, a simple HTTP endpoint in your environment, or even a Lua script running inside nginx.
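For illustration, an auth endpoint can be as small as an HTTP handler that inspects the forwarded headers and answers 200 or 401. A minimal sketch using Python's standard library (the token store and port are assumptions, standing in for a real session or token check):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical token store; a real auth service would check a session DB,
# introspect an OAuth token, etc.
VALID_TOKENS = {"secret-token-1", "secret-token-2"}

def is_authorized(headers):
    """True if the Authorization header carries a known bearer token."""
    auth = headers.get("Authorization", "")
    return auth.startswith("Bearer ") and auth[len("Bearer "):] in VALID_TOKENS

class AuthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # nginx only looks at the status code: 2xx allows the original
        # request to proceed, 401/403 denies it.
        self.send_response(200 if is_authorized(self.headers) else 401)
        self.end_headers()

# To run standalone:
#   HTTPServer(("127.0.0.1", 9000), AuthHandler).serve_forever()
```

Because nginx only inspects the status code, the handler body can stay empty; any response headers it sets are still available to the nginx config, as we'll see later.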
Some key features of auth_request include:
- The auth subrequest is non-blocking, so nginx can handle other requests while waiting for the auth response. A slow auth service therefore cannot starve nginx's request processing.
- Headers from the client request are passed along to the auth service (the request body is not, by default). This allows the auth service to validate cookies, Authorization headers, etc.
- The auth response headers and status are available to the rest of the nginx config, allowing you to log auth failures, set custom error pages, and pass auth metadata upstream.
Securing multiple services
A common use case for auth_request is securing multiple backend services with a common auth layer. Consider a typical microservice architecture: an nginx reverse proxy routes requests to various backend services based on the URL path. Each service is focused on its specific functionality and shouldn't have to worry about authenticating users.
With auth_request, we can add a centralized auth service and secure all the backends:
```nginx
http {
    upstream auth_service {
        server auth.example.com;
    }

    upstream service_a {
        server service-a.example.com;
    }

    upstream service_b {
        server service-b.example.com;
    }

    upstream service_c {
        server service-c.example.com;
    }

    server {
        listen 80;
        server_name example.com;

        location /auth {
            internal;                     # not reachable by external clients
            proxy_pass http://auth_service/validate;
            proxy_pass_request_body off;  # headers are enough for auth
            proxy_set_header Content-Length "";
            proxy_set_header X-Original-URI $request_uri;
        }

        location /api/service-a {
            auth_request /auth;
            proxy_pass http://service_a;
        }

        location /api/service-b {
            auth_request /auth;
            proxy_pass http://service_b;
        }

        location /api/service-c {
            auth_request /auth;
            proxy_pass http://service_c;
        }
    }
}
```
This config routes any request starting with /api/service-X to the corresponding upstream service, but first makes an auth subrequest to the /auth location (which proxies to the auth_service upstream). The auth service can check for a valid session cookie, API key, OAuth token, etc., and return a 2xx status to allow the request. If auth fails, nginx returns the 401 or 403 to the client without ever touching the upstream service.
By using a common auth endpoint, we've drastically simplified management and ensured all services are consistently protected. And auth_request makes it totally transparent to the backend services, so no code changes are needed.
Passing user context upstream
The auth_request module includes a companion directive, auth_request_set, which allows you to capture data from the auth response and expose it as variables for use in the rest of the nginx config.
For instance, once a user is authenticated, you may want to pass their user ID to the upstream service so it knows who is making the request:
```nginx
location /api/service-a {
    auth_request /auth;
    auth_request_set $user_id $upstream_http_x_user_id;
    proxy_set_header X-User-ID $user_id;
    proxy_pass http://service_a;
}
```
Here auth_request_set captures the X-User-ID header from the auth response and assigns it to the $user_id variable. That variable is then added as a header on the request to the upstream service.
You can set multiple variables by repeating auth_request_set directives, such as:

```nginx
auth_request_set $user_scope $upstream_http_x_user_scope;
auth_request_set $user_email $upstream_http_x_user_email;
```
This allows the auth service to pass back rich user context to nginx, which can then be passed to upstream services or used to make dynamic routing/access control decisions.
Some examples of user metadata you may want to pass:
- User role or permissions to enable policy-based access control
- Token expiration or lifespan to prevent using expired credentials
- User group or organization to route requests to specific upstreams
- Original client IP for logging/auditing
By propagating auth data to upstreams, those services can stay lean and focused while nginx handles the common auth concerns.
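On the auth-service side, supplying this context just means setting response headers on the 2xx response; nginx's auth_request_set then picks each one up via $upstream_http_*. A sketch (the session store, header names, and field values are all hypothetical):

```python
# Hypothetical session store mapping tokens to user records.
SESSIONS = {
    "token-abc": {"user_id": "42", "scope": "read:orders", "email": "ada@example.com"},
}

def auth_response_headers(token):
    """Build the headers an auth service returns alongside its 2xx status.
    Each one becomes $upstream_http_x_* in the nginx config."""
    user = SESSIONS.get(token)
    if user is None:
        return None  # the auth service would answer 401 instead
    return {
        "X-User-ID": user["user_id"],
        "X-User-Scope": user["scope"],
        "X-User-Email": user["email"],
    }
```

Keeping this mapping in one place means a new piece of user context reaches every upstream by adding a single header here plus one auth_request_set line in nginx.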
Scaling auth_request
Since every request triggers an auth subrequest, the auth service can become a performance bottleneck. Some best practices to ensure your auth_request flow scales:
- Cache auth responses – For APIs using short-lived tokens like JWTs, you can cache successful auth responses in nginx's proxy cache to avoid repeated requests to the auth service:
```nginx
http {
    proxy_cache_path /path/to/cache keys_zone=my_auth_cache:10m;

    server {
        location /auth {
            proxy_pass http://auth_service;
            proxy_cache my_auth_cache;
            proxy_cache_key "$http_authorization";
            proxy_cache_valid 200 10m;
            proxy_cache_use_stale error timeout updating
                                  http_500 http_502 http_503 http_504;
        }

        location /api {
            auth_request /auth;
            proxy_pass http://my_api;
        }
    }
}
```
This configuration will cache auth responses for 10 minutes based on the Authorization header. Subsequent requests with the same header will hit the nginx cache instead of making a new auth subrequest.
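The cache behavior is easy to reason about with a small model: entries are keyed by the Authorization header, only 200 responses are stored (mirroring proxy_cache_valid 200 10m), and entries expire after the TTL. A toy Python model (the clock injection is just for testability; everything here is an illustrative assumption, not nginx internals):

```python
import time

class AuthCache:
    """Toy model of the proxy_cache setup for auth responses:
    key = Authorization header, value = auth status, TTL = 10 minutes."""

    def __init__(self, ttl_seconds=600, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock  # injectable for testing
        self._store = {}

    def get(self, auth_header):
        entry = self._store.get(auth_header)
        if entry is None:
            return None  # miss: nginx would make a real auth subrequest
        status, stored_at = entry
        if self.clock() - stored_at > self.ttl:
            del self._store[auth_header]
            return None  # expired: revalidate with the auth service
        return status

    def put(self, auth_header, status):
        # Mirrors proxy_cache_valid 200 10m: only successes are cached,
        # so a revoked/invalid token is re-checked on every request.
        if status == 200:
            self._store[auth_header] = (status, self.clock())
```

One consequence worth noting: a token revoked at the auth service can remain valid at the edge for up to the TTL, so pick a TTL that matches your revocation tolerance.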
- Optimize the auth service – Design your auth service to be lightweight and fast. Consider using a high-performance language, minimizing external dependencies, and optimizing any DB queries. The faster your auth responds, the less latency added to the overall request.
- Leverage the network – If you're using an external auth service (e.g. OAuth or a third-party access management system), choose one with a low-latency network path from your nginx instances. Colocating your auth service in the same datacenter or cloud region can make a big difference.
- Fail fast – Configure nginx to quickly reject invalid requests before they consume resources, for example with tight proxy_connect_timeout and proxy_read_timeout values on the auth location, and avoid retrying auth failures.
To see the impact of auth_request on your API performance, benchmark with a tool like wrk or ab. Compare the latency and throughput of requests with and without auth_request to quantify the overhead.
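When comparing the two runs, tail latency matters more than the mean, since a slow auth service hurts p99 first. A small helper for summarizing the overhead between runs (nearest-rank percentiles; any sample latencies are hypothetical):

```python
def percentile(samples, pct):
    """Nearest-rank percentile of a list of latency samples (e.g. in ms)."""
    ordered = sorted(samples)
    # nearest-rank index, clamped to the valid range
    k = max(0, min(len(ordered) - 1, round(pct / 100 * len(ordered)) - 1))
    return ordered[k]

def auth_overhead(baseline_ms, with_auth_ms, pct=99):
    """Latency added by auth_request at a given percentile,
    comparing a baseline benchmark run against a run with auth enabled."""
    return percentile(with_auth_ms, pct) - percentile(baseline_ms, pct)
```

Feeding both wrk runs' raw latencies through this gives a single "auth tax" number per percentile that you can track over time.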
Monitoring auth_request
auth_request adds a critical external dependency to your API request flow. It's important to monitor the health and performance of your auth layer to avoid outages. Some key metrics to watch (named here in Prometheus style):
- nginx_http_auth_request_total (counter) – number of auth subrequests made
- nginx_http_auth_request_time_seconds (histogram) – time spent waiting for auth responses
- nginx_http_auth_request_status_count (counter) – count of auth response status codes (401s, 504s, etc.)
Nginx doesn't expose these out of the box (the stub_status module only reports basic connection counts), but you can derive them from the auth access logs with a log-based exporter, or use the NGINX Plus API, and track them in Prometheus, Datadog, or your favorite monitoring tool.
In addition to metrics, make sure to configure proper logging for the auth subrequests by adding an access_log directive in the /auth location:
```nginx
location /auth {
    proxy_pass http://auth_service;
    access_log /var/log/nginx/auth.log;
}
```
This will give you granular logs for each auth request and response for auditing and debugging.
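Those logs are easy to summarize, for instance to spot a sudden spike in 401s. A sketch that tallies status codes from the log lines, assuming the default combined log format where the status is the first field after the quoted request:

```python
from collections import Counter

def status_counts(log_lines):
    """Count response status codes from combined-format access log lines.
    The status code is the first field after the quoted request line."""
    counts = Counter()
    for line in log_lines:
        parts = line.split('"')
        if len(parts) < 3:
            continue  # skip malformed or truncated lines
        status = parts[2].split()[0]
        counts[status] += 1
    return counts
```

Running this over /var/log/nginx/auth.log on a schedule (or wiring the same logic into a log exporter) gives you the 401/403/5xx counts discussed below.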
Some common failure modes to watch for:
- Auth service 5xx errors or timeouts – could indicate a bug or an overloaded auth service. Configure an auth_request_set variable to capture the auth status and add it to error logs.
- Excessive auth 401/403 responses – potential brute-force or credential-stuffing attack. Consider rate limiting auth failures by client IP.
- High latency for auth subrequests – monitor the distribution of nginx_http_auth_request_time_seconds and alert on outliers. Could indicate auth service performance issues or network saturation.
By monitoring the right metrics and logs, you can ensure your auth layer is performing properly and quickly respond to any issues.
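The rate-limiting idea above is normally handled natively (nginx's limit_req module can throttle by client IP), but the mechanism is simple to model: block an IP once it accumulates too many auth failures inside a time window. A toy sketch of that idea (the thresholds are arbitrary assumptions, and the injectable clock is just for testability):

```python
import time
from collections import defaultdict

class FailureRateLimiter:
    """Toy fixed-window limiter: block an IP after too many auth
    failures within the window."""

    def __init__(self, max_failures=10, window_seconds=60, clock=time.monotonic):
        self.max_failures = max_failures
        self.window = window_seconds
        self.clock = clock
        self._failures = defaultdict(list)  # ip -> failure timestamps

    def record_failure(self, ip):
        """Call on every 401/403 from the auth service for this IP."""
        self._failures[ip].append(self.clock())

    def is_blocked(self, ip):
        """True if the IP has exceeded the failure budget in the window."""
        cutoff = self.clock() - self.window
        recent = [t for t in self._failures[ip] if t > cutoff]
        self._failures[ip] = recent  # drop expired entries
        return len(recent) >= self.max_failures
```

In production you would enforce this at the edge (limit_req, fail2ban, or the auth service itself) rather than in application code, but the blocking logic is the same.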
Alternatives and complements
While auth_request is a powerful tool for externalizing API auth, it's not the only option. Depending on your use case and environment, you may want to explore:
- OAuth/OpenID Connect – If you're using an external identity provider like Auth0, Okta, or Google, many of them offer OAuth/OIDC integrations for nginx; NGINX's OpenID Connect reference implementation is one example. These handle validating tokens locally without a subrequest.
- JWT validation – For APIs using JSON Web Tokens for stateless auth, the nginx njs module allows validating JWTs directly in nginx config with JavaScript. lua-resty-jwt is another popular library for JWT handling in OpenResty/Lua deployments.
- Service mesh – If you're running in Kubernetes, a service mesh like Istio can handle end-to-end auth between your services without nginx in the picture. Istio's RequestAuthentication and AuthorizationPolicy resources allow similar request-time validation of JWTs and integrate with external auth providers.
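To make the JWT option concrete: HS256 validation needs only HMAC and base64url, which is why it ports readily to njs or Lua. A minimal Python sketch of the signature check (the shared secret is an assumption, and a real verifier must also check claims like exp and aud; use a vetted library in production):

```python
import base64
import hashlib
import hmac
import json

def b64url_decode(s):
    """Base64url-decode, restoring the padding JWTs strip off."""
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

def b64url_encode(b):
    return base64.urlsafe_b64encode(b).rstrip(b"=").decode()

def verify_hs256(token, secret):
    """Verify an HS256 JWT signature; return the payload dict or None."""
    try:
        header_b64, payload_b64, sig_b64 = token.split(".")
    except ValueError:
        return None  # not three dot-separated parts
    signing_input = f"{header_b64}.{payload_b64}".encode()
    expected = hmac.new(secret, signing_input, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, b64url_decode(sig_b64)):
        return None  # bad signature
    return json.loads(b64url_decode(payload_b64))

def make_hs256(payload, secret):
    """Mint a token (handy for exercising the verifier)."""
    header = b64url_encode(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url_encode(json.dumps(payload).encode())
    sig = hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest()
    return f"{header}.{body}.{b64url_encode(sig)}"
```

Note the constant-time hmac.compare_digest: a naive == comparison of signatures can leak timing information.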
In many cases, a combination of approaches works best – using auth_request for an initial layer of lightweight auth while using OAuth/OIDC and JWTs for more granular service-to-service auth. The beauty of externalizing auth is that you can evolve it independently of your services.
Looking ahead
We've covered a lot of ground in this guide, but the potential of auth_request is really only limited by your imagination. As APIs continue to power more critical aspects of our digital world, adaptive approaches to security and identity will become even more crucial.
The auth_request module is part of the larger movement toward externalized security controls, allowing easier central management and more flexible deployments. By pushing security to the edge (in this case, nginx) and keeping it separate from application logic, you gain simpler services and enable portability between environments.
While auth_request may seem like a small feature, it's a building block for powerful zero-trust API security. It allows decoupling authentication from individual services, standardizing auth across your APIs, and dynamically augmenting requests with verified user context. As you modernize your APIs, keep auth_request in your toolkit to ensure ironclad auth with minimal service changes.
To dive even deeper into auth_request and API security in general, check out:
- The official nginx auth_request docs
- Authenticating API Clients with Nginx Plus (NGINX blog)
- The OWASP API Security Top 10 (OWASP)
- Securing Microservices: Externalize Authorization (Oso blog)
As always, keep your services up to date and follow secure coding best practices. No amount of edge security can completely protect vulnerable endpoints. But by layering security and externalizing controls with tools like auth_request, you'll be well on your way to unbreakable APIs.
Stay safe out there!