Outbound API rate limits: the nginx way

Bartosz Pietrzak


Implementing external API rate limits can be painful. There are some tools that address this problem - take Slow Web, for example - but things get complicated once we start using background jobs, or even an old-school cron-and-rake setup. Slow Web takes care of one environment at a time, not to mention that it relies on Net::HTTP, so you can forget about using more powerful clients like Typhoeus.

Recently we started working on a project that makes heavy use of an API limiting our development calls to one per 500 ms and our production calls to one per 200 ms (our first thought, based on the initial information, was that we could do five requests per second, but those are not the same thing: one request every 200 ms forbids bursts, while five per second allows them). Inspired by a great post, "nginx JSON hacks", an idea came to our minds: what if we could use nginx to enforce the API limit for us? Here comes the built-in nginx module, the HttpLimitReqModule. Let's take a quick look.

http {
  # One shared bucket per client address; 110r/m stays safely under the
  # allowed 120r/m (one call per 500 ms).
  limit_req_zone  $binary_remote_addr  zone=one:10m  rate=110r/m;

  server {
    listen       8080;
    server_name  localhost;

    location / {
      # Queue up to 100 excess requests and delay them to match the rate
      # instead of rejecting them outright.
      limit_req   zone=one  burst=100;

      proxy_pass  http://api.example.com/;
    }
  }
}

(Please bear in mind that this configuration is merely an example and you probably want a more comprehensive setup. You can talk to our RoR experts to do the job for you).

We set up a simple proxy on localhost:8080, configure the limit_req_zone, apply the zone, and set the burst to 100 (depending on how you want to handle heavy load, this could be smaller or bigger). We set the rate to 110 requests per minute to leave a little margin under the allowed 120. Our benchmarks went fine, as we expected - no API calls were dropped. Setting it to exactly 120r/m (or 2r/s) gave us some 403s, though - running at the exact limit leaves no room for timing jitter.
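
To sanity-check the margin, a script along these lines can fire a batch of requests through the proxy and confirm they come back delayed rather than rejected. This is a minimal sketch using Ruby's standard Net::HTTP; localhost:8080 is the proxy from the config above, and the exact timings will depend on your setup.

require "net/http"

# Fire 10 requests through the local nginx proxy and time them.
# With rate=110r/m and burst=100, excess requests should be delayed
# to match the rate, not rejected.
proxy = URI("http://localhost:8080/")

started = Time.now
10.times do |i|
  response = Net::HTTP.get_response(proxy)
  puts format("request %2d: HTTP %s after %.2fs", i + 1, response.code, Time.now - started)
end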

Okay, so what about those cool examples in the post mentioned above? Let's apply some caching!

http {
  limit_req_zone  $binary_remote_addr  zone=one:10m  rate=110r/m;
  proxy_cache_path  /tmp/nginx_cache levels=1:2 keys_zone=STATIC:64m inactive=60m max_size=128m;

  server {
    listen       8080;
    server_name  localhost;

    location /without_cache/ {
      limit_req   zone=one  burst=100;

      proxy_pass  http://api.example.com/;
    }

    location / {
      # Hop through our own rate-limited location above.
      proxy_pass  http://localhost:8080/without_cache/;

      proxy_cache STATIC;
      proxy_cache_methods GET HEAD POST; # also cache POST responses, which is off by default (only GET, HEAD and POST can be cached)
      proxy_cache_valid 10s;             # cache 200, 301 and 302 responses for 10 seconds
      proxy_cache_key "$host$request_uri$request_body";
    }
  }
}

Why two locations, you may ask? Because we don't want the rate limit to apply to cached responses. This way only the first request pays the full API round-trip; every identical request within the next 10 seconds gets an ultra-speed boost from being served out of the local cache.
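
The effect is easy to observe by timing two identical requests - a rough sketch, again with Net::HTTP (the endpoint path is just a placeholder):

require "net/http"
require "benchmark"

uri = URI("http://localhost:8080/some_endpoint") # placeholder path

# The first call travels through the rate-limited location to the API;
# the second one should be answered from the 10-second nginx cache.
first  = Benchmark.realtime { Net::HTTP.get_response(uri) }
second = Benchmark.realtime { Net::HTTP.get_response(uri) }

puts format("first: %.3fs, cached: %.3fs", first, second)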

We no longer have to implement API rate limits in every place in the app, and we don't have to worry about sharing those limits between different environments. All we need to do is change the API host and enjoy pure coding goodness.
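
In practice the change can be as small as swapping the base URL in one place. A sketch of what that might look like with Typhoeus (API_HOST and the endpoint are made-up names for illustration):

require "typhoeus"

# Point the client at the local nginx proxy instead of the API itself;
# the rest of the app stays oblivious to the rate limit.
API_HOST = ENV.fetch("API_HOST", "http://localhost:8080")

response = Typhoeus.get("#{API_HOST}/some_endpoint")
puts response.code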

Meet Ruby on Rails experts

We’ve been doing Ruby on Rails since 2010, and the quality of our services and our technical expertise are confirmed by actual clients. Work with experts recognized as the #1 Ruby on Rails company in the world in 2017 by Clutch.

Talk to our team and confidently build your next big thing.
