NGINX rate-limiting in a nutshell

尉迟禄
2023-12-01

by Sébastien Portebois

NGINX rate-limiting in a nutshell

NGINX is awesome… but I found its documentation on rate limiting to be somewhat… limited. So I’ve written this guide to rate-limiting and traffic shaping with NGINX.

We’re going to:

  • describe the NGINX directives
  • explain NGINX’s accept/reject logic
  • help you visualize how a real burst of traffic is processed using various settings: rate-limiting, traffic policy, and allowing small bursts

As a bonus, I’ve included a GitHub repo and the resulting Docker image so you can experiment and reproduce the tests. It’s always easier to learn by doing!

NGINX rate-limit directives and their roles

This post focuses on the ngx_http_limit_req_module, which provides you with the limit_req_zone and limit_req directives. It also provides limit_req_status and limit_req_log_level. Together these allow you to control the HTTP response status code for rejected requests, and how these rejections are logged.

Most confusion stems from the rejection logic.

First, you need to understand the limit_req directive, which needs a zone parameter, and also provides optional burst and nodelay parameters.

There are multiple concepts at play here:

  • zone lets you define a bucket, a shared ‘space’ in which to count the incoming requests. All requests coming into the same bucket will be counted in the same rate limit. This is what allows you to limit per URL, per IP, or anything fancy.

  • burst is optional. If set, it defines how many exceeding requests you can accept over the base rate. One important thing to note here: burst is an absolute value, it is not a rate.

  • nodelay is also optional and is only useful when you also set a burst value, and we’ll see why below.

How does NGINX decide if a request is accepted or rejected?

When you set a zone, you define a rate, like 300r/m to allow 300 requests per minute, or 5r/s to allow 5 requests each second.

For instance:

  • limit_req_zone $request_uri zone=zone1:10m rate=300r/m;

  • limit_req_zone $request_uri zone=zone2:10m rate=5r/s;

It’s important to understand that these two zones have the same limit. The rate setting is used by NGINX to compute a frequency: how much time must elapse before a new request can be accepted? NGINX will apply the leaky bucket algorithm with this token refresh rate.

For NGINX, 300r/m and 5r/s are treated the same way: allow one request every 0.2 seconds for this zone. In this case, every 0.2 seconds, NGINX will set a flag to remember it can accept a request. When a request comes in that fits in this zone, NGINX sets the flag to false and processes it. If another request comes in before the timer ticks, it will be rejected immediately with a 503 status code. If the timer ticks and the flag was already set to accept a request, nothing changes.

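This flag-and-timer logic can be sketched in a few lines of Python. This is a deliberately simplified model, not NGINX’s actual implementation (it treats the refresh as a continuous 1/rate interval rather than a discrete timer, and `simulate_no_burst` is a hypothetical helper, not an NGINX API):

```python
def simulate_no_burst(arrival_times, rate):
    """Model limit_req with burst=0: one request slot refreshes
    every 1/rate seconds; anything arriving earlier gets a 503."""
    interval = 1.0 / rate
    next_allowed = 0.0  # earliest time the next request can be accepted
    results = []
    for t in sorted(arrival_times):
        if t >= next_allowed:
            results.append((200, t))     # accepted and processed immediately
            next_allowed = t + interval  # the slot is consumed for one interval
        else:
            results.append((503, t))     # rejected: the slot is still taken
    return results

# Four requests at t=0 and one at t=0.2, against rate=5r/s:
print(simulate_no_burst([0, 0, 0, 0, 0.2], rate=5))
```

Only the first request and the one arriving at t=0.2 (after the timer has ticked) get a 200; the three others arrive while the slot is taken and are rejected.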
Do you need rate-limiting or traffic-shaping?

Enter the burst parameter. In order to understand it, imagine the flag we explained above is no longer a boolean, but an integer: the max number of requests NGINX can allow in a burst.

This is no longer a leaky bucket algorithm, but a token bucket. The rate controls how fast the timer ticks, but the token is no longer true/false: it is a counter going from 0 to 1 + burst. Every time the timer ticks, the counter is incremented, unless it is already at its maximum value of 1 + burst. Now you should understand why the burst setting is a value, and not a rate.

When a new request comes in, NGINX checks whether a token is available (i.e. the counter is > 0); if not, the request is rejected. If there is a token, the request is accepted and will be treated, and that token is consumed (the counter is decremented).

Ok, so NGINX will accept the request if a burst token is available. But when will NGINX process this request?

Say you asked NGINX to apply a maximum rate of 5r/s. NGINX accepts the exceeding requests if burst tokens are available, but waits for some room to process them within that maximum rate limit. Hence these burst requests will be processed with some delay, or they will time out.

In other words, NGINX will not go over the rate limit set in the zone declaration, and will therefore queue the extra requests and process them with some delay, as the token-timer ticks and fewer requests are received.

To use a simple example, let’s say you have a rate of 1r/s, and a burst of 3. NGINX receives 5 requests at the same time:

  • The first one is accepted and processed
  • Because you allow 1+3, there’s 1 request which is immediately rejected, with a 503 status code
  • The 3 other ones will be treated, one by one, but not immediately. They will be treated at the rate of 1r/s to stay within the limit you set, assuming no other request comes in and consumes this quota in the meantime. Once the queue is empty, the burst counter will start to be incremented again (the token bucket starts to be filled again)
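
Under the same simplified model, this queue-and-delay behavior can be sketched as follows (`simulate_burst` is a hypothetical helper for illustration, not an NGINX API):

```python
def simulate_burst(n_simultaneous, rate, burst):
    """Model limit_req with a burst queue (no nodelay): requests
    arriving together at t=0 get one immediate slot plus `burst`
    queued slots, served one per 1/rate seconds; the rest get a 503."""
    interval = 1.0 / rate
    results = []
    queued = 0
    for _ in range(n_simultaneous):
        if queued <= burst:  # the immediate slot, or a free queue slot
            # finish time grows with the position in the queue
            results.append((200, queued * interval))
            queued += 1
        else:
            results.append((503, 0.0))  # slot and queue are both full
    return results

# rate=1r/s, burst=3, five simultaneous requests:
for status, finished_at in simulate_burst(5, rate=1, burst=3):
    print(status, finished_at)
```

Four requests get a 200 (finishing at 0, 1, 2 and 3 seconds, i.e. one per second) and the fifth is rejected, matching the walkthrough above.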

If you use NGINX as a proxy, the upstream will get the requests at a maximum rate of 1r/s, and it won’t be aware of any burst of incoming requests: everything will be capped at that rate.

You just did some traffic shaping, introducing some delay to regulate bursts and produce a more regular stream outside of NGINX.

Enter nodelay

nodelay tells NGINX that the requests it accepts in the burst window should be processed immediately, like regular requests.

As a consequence, the spikes will propagate to NGINX upstreams, but with some limit, defined by the burst value.

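In the same simplified model, nodelay only changes when accepted requests are processed, not how many are accepted (`simulate_burst_nodelay` is again a hypothetical helper, not an NGINX API):

```python
def simulate_burst_nodelay(n_simultaneous, burst):
    """Model limit_req with burst and nodelay: up to 1 + burst requests
    arriving together are accepted and processed at once; the rest get
    a 503. Tokens still refill at the zone rate afterwards."""
    tokens = 1 + burst  # the immediate slot plus the burst allowance
    statuses = []
    for _ in range(n_simultaneous):
        if tokens > 0:
            statuses.append(200)  # accepted, with no added delay
            tokens -= 1
        else:
            statuses.append(503)
    return statuses

print(simulate_burst_nodelay(5, burst=3))  # same 4-accepted / 1-rejected split
```

The accept/reject split is identical to the delayed-burst case; the difference is that all four accepted requests hit the upstream immediately instead of being spread out at the zone rate.
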
Visualizing rate limits

Because I believe the best way to remember this is to experience it in a hands-on fashion, I set up a small Docker image with an NGINX config exposing various rate-limit settings, so we can see the responses for a basic rate-limited location, for a burst-enabled rate-limited location, and for a burst with nodelay rate-limited location. Let’s play with that.

These samples use this simple NGINX config (which we’ll provide a Docker image for at the end of this post so you can more easily test this):

limit_req_zone $request_uri zone=by_uri:10m rate=30r/m;

server {
    listen 80;

    location /by-uri/burst0 {
        limit_req zone=by_uri;
        try_files $uri /index.html;
    }

    location /by-uri/burst5 {
        limit_req zone=by_uri burst=5;
        try_files $uri /index.html;
    }

    location /by-uri/burst5_nodelay {
        limit_req zone=by_uri burst=5 nodelay;
        try_files $uri /index.html;
    }
}

Starting with this config, all the samples below will send 10 concurrent requests at once. Let’s see:

  • how many get rejected by the rate-limit?
  • what’s the processing rate of the accepted ones?

Sending 10 parallel requests to a rate-limited endpoint

That config allows 30 requests per minute. But 9 out of 10 requests are rejected in that case. If you followed the previous steps, this should make sense: The 30r/m means that a new request is allowed every 2 seconds. Here 10 requests arrive at the same time, one is allowed, the 9 other ones are seen by NGINX before the token-timer ticks, and are therefore all rejected.

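That 1-in-10 outcome is just arithmetic under the simplified model used earlier (`no_burst_outcome` is a hypothetical helper, not an NGINX API):

```python
def no_burst_outcome(rate_per_minute, simultaneous):
    """With burst=0 there is a single request slot, refreshed every
    60/rate seconds, so simultaneous arrivals beyond the first are
    all rejected."""
    interval = 60.0 / rate_per_minute  # seconds between free slots
    accepted = 1                       # only one request finds the slot free
    return interval, accepted, simultaneous - accepted

print(no_burst_outcome(30, 10))  # → (2.0, 1, 9)
```
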
But I’m OK to tolerate some burst for some clients/endpoints

Ok, so let’s add the burst=5 argument to let NGINX handle small bursts for this endpoint of the rate-limited zone:

What’s going on here? As expected with the burst argument, 5 more requests are accepted, so we went from 1/10 to 6/10 success (and the rest is rejected). But the way NGINX refreshed its tokens and processed the accepted requests is quite visible here: the outgoing rate is capped at 30r/m, which is equivalent to 1 request every 2 seconds.

The first one is returned after 0.2 seconds. The timer ticks after 2 seconds, and one of the pending requests is processed and returned, with a total roundtrip time of 2.02 seconds. 2 seconds later, the timer ticks again, processing another pending request, which is returned with a total roundtrip time of 4.02 seconds. And so on and so forth…

The burst argument lets you turn the NGINX rate limit from a basic threshold filter into a traffic-shaping policy gateway.

My server has some extra capacity. I want to use a rate-limit to prevent it from going over this capacity.

In this case, the nodelay argument will be helpful. Let’s send the same 10 requests to a burst=5 nodelay endpoint:

As expected with burst=5, we still have the same number of 200 and 503 statuses. But now the outgoing rate is no longer strictly constrained to the rate of 1 request every 2 seconds. As long as some burst tokens are available, any incoming request is accepted and processed immediately. The timer tick rate is still as important as before to control the refresh/refill rate of these burst tokens, but accepted requests no longer suffer any additional delay.

Note: in this case, the zone uses $request_uri, but all the following tests work exactly the same way for a $binary_remote_addr config, which would rate-limit by client IP. You’ll be able to play with this in the Docker image.

Let’s recap

If we try to visualize how NGINX accepts the incoming requests, then processes them depending on the rate, burst, and nodelay parameter, here’s a synthetic view.

To keep things simple, we’ll show the number of incoming requests (then accepted or rejected, and processed) per time step, with the duration of each time step depending on the zone-defined rate limit. But the actual duration of that step doesn’t matter in the end. What is meaningful is the number of requests NGINX has to process within each of these steps.

So here is the traffic we’ll send through various rate limit settings:

Without using the burst (i.e. burst=0), we saw that NGINX acts as a pure rate-limit/traffic-policy actor. All requests are either immediately processed, if the rate timer has ticked, or immediately rejected otherwise.

Now, if we want to allow small bursts to use the unused capacity under the rate limit, we saw that adding a burst argument lets us do that, which implies some additional delay in processing the requests consuming the burst tokens:

We can see that the overall number of rejected requests is lower, and NGINX processes more requests. Only the extra requests arriving when no burst tokens are available are rejected. In this setup, NGINX performs some real traffic-shaping.

Finally, we saw that NGINX can be used either to enforce a traffic policy or to limit the size of bursts while still propagating some of those bursts to the processing workers (upstream or local). This produces a less stable outgoing rate, but with better latency, if you can process these extra requests:

Playing with the rate limit sandbox yourself

Now you can go explore the code, clone the repo, play with the Docker image, and get your hands on it real quick to better solidify your understanding of these concepts. https://github.com/sportebois/nginx-rate-limit-sandbox

Update (June 14th, 2017)

A few days ago, NGINX published their own detailed explanation of their rate-limiting mechanism. You can now learn more about it in their Rate Limiting with NGINX and NGINX Plus blog post.

Translated from: https://www.freecodecamp.org/news/nginx-rate-limiting-in-a-nutshell-128fe9e0126c/
