Configuration: URL signing

Origin shield configuration

URL signing: Varnish Cache

In the following example configuration, Varnish validates a token and a 'vbegin' query parameter on manifest requests. The shell commands below show how to generate the expected token and how to test the setup with curl; the VCL that performs the validation follows.

#!/bin/bash

# Generate the token: the MD5 hash of the shared private phrase
md5 -s "unifiedrocks"
# MD5 ("unifiedrocks") = 0bc40bb3b87483a25010f21609470bcc

# Request with 'vbegin=60' and a valid token: passes signature verification
curl -v \
  'http://localhost/vod/tears-of-steel/tears-of-steel-en.ism/.mpd?vbegin=60&token=0bc40bb3b87483a25010f21609470bcc' > /dev/null

# Request with 'vbegin=30': rejected with a 403, because the VCL only accepts 'vbegin=60'
curl -v \
  'http://localhost/vod/tears-of-steel/tears-of-steel-en.ism/.mpd?vbegin=30&token=0bc40bb3b87483a25010f21609470bcc' > /dev/null
vcl 4.1;

import std;
import digest;

sub verify_signature {
    # Block all requests that do not contain a token as a query parameter
    if (req.url !~ ".*token=[a-z0-9]+.*$")
    {
            std.log("No token provided");
            return(synth(403));
    }

    if (req.url ~ ".*(\.mpd\?|\.m3u8\?|Manifest\?|\.f4m\?).*" && req.url ~ ".*vbegin=[0-9]+.*$")
    {
        set req.http.vbegin = regsub(req.url, ".*vbegin=([0-9]+).*$", "\1");
        set req.http.token = regsub(req.url, ".*token=([a-z0-9]+).*$", "\1");

        # Generate the signature based on:
        # (1) private phrase, (2) request Host header, (3) vbegin value, and (4) token value
        set req.http.X-Signature = digest.hmac_sha256(
            "changeme", req.http.host + req.http.vbegin + req.http.token);

        # Create the correct token that the client is supposed to have generated
        set req.http.X-Token-md5 = digest.hash_md5("unifiedrocks");

        # Verify that the signature meets our requirements, i.e. that
        # 'vbegin' equals 60.
        if (req.http.X-Signature != digest.hmac_sha256(
            "changeme", req.http.host + "60" + req.http.X-Token-md5))
        {
            std.log("Signature not valid");
            return(synth(403));
        }
        unset req.http.X-Token-md5;
        unset req.http.vbegin;
        unset req.http.token;
    }
}

sub vcl_recv {
    # Call this early in vcl_recv.
    call verify_signature;
}

Configuration: Rate Limiting

Origin shield configuration

Nginx

Note

This section is an extract from an Nginx Blog

Rate limiting provides the capability to limit the number of requests per user that can be made within a given period of time. Rate limiting can help to prevent DDoS attacks, but most commonly it is used to protect the upstream server (origin) from being overloaded with too many simultaneous requests.

Nginx rate limiting uses the leaky bucket algorithm to process requests on a first-in-first-out (FIFO) basis. Once the number of requests exceeds the given threshold, the remaining requests are simply discarded and a failure response (5xx) is returned, leaving your upstream origin protected.

This can be configured using two main directives: limit_req_zone and limit_req.

limit_req_zone $binary_remote_addr zone=mylimit:10m rate=10r/s;

upstream origin {
    server origin.unified-streaming.com:82;
    keepalive 32;
  }

  server {
    listen 0.0.0.0:80;
    server_name edge.unified-streaming.com;

    location / {
      limit_req zone=mylimit;
      proxy_pass http://origin;
      proxy_cache edge-cache;

      proxy_http_version 1.1;
      proxy_set_header Connection "";

      add_header X-Cache-Status $upstream_cache_status;
      add_header X-Handled-By $proxy_host;
    }
  }

The limit_req_zone directive defines the parameters for rate limiting while limit_req enables rate limiting within the context where it appears.

The limit_req_zone directive is typically defined in the http block, allowing it to be used across multiple contexts/locations.
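As a minimal sketch of that placement (reusing the example's zone and hostname; server and proxy details omitted), the zone is declared once at http level and then referenced by limit_req wherever it is needed:

http {
  limit_req_zone $binary_remote_addr zone=mylimit:10m rate=10r/s;

  server {
    server_name edge.unified-streaming.com;

    location / {
      # Apply the 'mylimit' zone defined at http level
      limit_req zone=mylimit;
    }
  }
}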

The limit_req_zone directive takes the following three parameters:

  • Key – Defines the request characteristic against which the limit is applied. In the example it is the Nginx variable $binary_remote_addr, which holds a binary representation of a client’s IP address. This means we are limiting each unique IP address to the request rate defined by the third parameter.

  • Zone – Defines the shared memory zone used to store the state of each IP address and how often it has accessed a request‑limited URL. Keeping the information in shared memory means it can be shared among the Nginx worker processes. The definition has two parts: the zone name identified by the zone= keyword, and the size following the colon. State information for about 16,000 IP addresses takes 1 megabyte, so our zone can store about 160,000 addresses.

  • Rate – Sets the maximum request rate. In the example, the rate cannot exceed 10 requests per second. Nginx actually tracks requests at millisecond granularity, so this limit corresponds to 1 request every 100 milliseconds (ms). Because we are not allowing for bursts (see Nginx Rate Limiting and the burst sketch below), this means that a request is rejected if it arrives less than 100ms after the previous permitted one. The limit_req_zone directive sets the parameters for rate limiting and the shared memory zone, but it does not actually limit the request rate. For that you need to apply the limit to a specific location or server block by including a limit_req directive there. In the example, we are rate limiting requests to /.

So now each unique IP address is limited to 10 requests per second for /; or, more precisely, it cannot make a request for that URL within 100 ms of its previous one.
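If occasional bursts should be queued rather than rejected outright, limit_req also accepts a burst parameter, optionally combined with nodelay. A minimal sketch reusing the mylimit zone; the burst value of 20 is an illustrative assumption, not a recommendation:

    location / {
      # Queue up to 20 requests above the 10 r/s rate instead of rejecting them;
      # 'nodelay' forwards queued requests immediately rather than pacing them.
      limit_req zone=mylimit burst=20 nodelay;
      proxy_pass http://origin;
    }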

Note

For further details and use cases please see the Nginx Blog mentioned earlier and Nginx Rate Limiting documentation.

Varnish Cache

Varnish Cache and Varnish Enterprise offer the Vsthrottle and Vsthrottle Enterprise modules, respectively. These modules can be used to reduce the number of requests hitting Unified Origin.
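As an illustration only (not an official Unified Streaming configuration), a minimal VCL sketch using the open-source vsthrottle module could look like the following. The key, limit and durations are assumptions, and the exact function signature may differ between module versions, so consult the vsthrottle documentation:

vcl 4.1;

import vsthrottle;

sub vcl_recv {
    # Throttle per client IP (client.identity defaults to the client IP):
    # if this client has made more than 100 requests in the last 10 seconds,
    # deny further requests for the next 30 seconds.
    # The limit and durations are illustrative assumptions.
    if (vsthrottle.is_denied(client.identity, 100, 10s, 30s)) {
        return (synth(429, "Too Many Requests"));
    }
}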