Amazon S3 with authorization

When using Remote Storage, for instance with Amazon S3, you may need to secure access to the S3 buckets instead of leaving them 'public'.

This means that there will be no anonymous access, but also that requests for the content in the bucket need to be authenticated.

There are two ways to do this:

  • use the Authorization header
  • send a signature as a URL-encoded query-string parameter

The following example will use the second method.

More information can be found in Amazon's Signing and Authenticating REST Requests documentation.

Using query parameters

Amazon specifies the use of the following parameters:

Name            Description
AWSAccessKeyId  Your AWS access key ID.
Expires         The time when the signature expires (in seconds since 00:00:00 UTC, 1 January 1970).
Signature       The URL encoding of the Base64 encoding of the HMAC-SHA1 of StringToSign.
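As an aside, the Expires value (here, one hour in the future) can be computed directly in shell; this sketch assumes a date command that supports the %s format (both GNU and BSD date do):

```shell
# Expires: Unix timestamp one hour from now
EXPIRES=$(( $(date +%s) + 3600 ))
echo "$EXPIRES"
```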

Authentication is the process of proving your identity to Amazon Simple Storage Service (S3). Identity is an important factor in Amazon S3 access control decisions. Requests are allowed or denied in part based on the identity of the requester. As a developer, you'll be making requests that invoke these privileges, so you'll need to prove your identity to the system by authenticating your requests. This section shows you how.

The creation of the signature is specified as follows:

Signature = URL-Encode( Base64( HMAC-SHA1( YourSecretAccessKeyID, UTF-8-Encoding-Of( StringToSign ) ) ) );

StringToSign = HTTP-VERB + "\n" +
    Content-MD5 + "\n" +
    Content-Type + "\n" +
    Expires + "\n" +
    CanonicalizedAmzHeaders +
    CanonicalizedResource;

The Amazon S3 REST API uses a custom HTTP scheme based on a keyed-HMAC (Hash Message Authentication Code) for authentication. To authenticate a request, you first concatenate selected elements of the request to form a string. You then use your AWS secret access key to calculate the HMAC of that string. This process is called "signing the request," and the output of the HMAC algorithm is called the signature, because it simulates the security properties of a real signature. Finally, you add this signature as a parameter of the request by using the syntax described in this section.

When the system receives an authenticated request, it fetches the AWS secret access key and uses it in the same way to compute a signature for the message it received. It then compares the signature it calculated against the signature presented by the requester. If the two signatures match, the system concludes that the requester must have access to the AWS secret access key. If the two signatures do not match, the request is dropped and the system responds with an error message.
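The signing steps described above can be sketched in shell, assuming openssl and base64 are available; the bucket, file, and key values below are placeholders. Because Base64 output only contains the reserved characters '+', '/' and '=', those are the only characters that need percent-encoding for use in a query string:

```shell
#!/bin/sh
# Placeholder values (assumptions, not real credentials)
BUCKET=your-bucket
FILE=your-ismv-file
SECRET_KEY=YOUR_SECRET_KEY

# Expires: one hour from now, in seconds since the Unix epoch
EXPIRES=$(( $(date +%s) + 3600 ))

# StringToSign: the HTTP verb, empty Content-MD5 and Content-Type,
# the expiry time and the canonicalized resource
# (this request carries no x-amz-* headers)
STRING_TO_SIGN=$(printf 'GET\n\n\n%s\n/%s/%s' "$EXPIRES" "$BUCKET" "$FILE")

# HMAC-SHA1 with the secret key, Base64-encoded, then URL-encoded:
# only '+', '/' and '=' from the Base64 alphabet need escaping
SIGNATURE=$(printf '%s' "$STRING_TO_SIGN" \
  | openssl dgst -sha1 -hmac "$SECRET_KEY" -binary \
  | base64 \
  | sed 's/+/%2B/g; s|/|%2F|g; s/=/%3D/g')

echo "$SIGNATURE"
```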

Here is an example shell script that generates a signature and appends it as a URL-encoded query-string parameter:

#!/bin/sh

BUCKET=your-bucket
FILE=your-ismv-file

ACCESS_KEY=YOUR_ACCESS_KEY
SECRET_KEY=YOUR_SECRET_KEY

# Expires: one hour from now, in seconds since the Unix epoch
TIMESTAMP=$(php -r "print (time() + 3600);")

# StringToSign: the HTTP verb, empty Content-MD5 and Content-Type,
# the expiry time and the canonicalized resource
STRING="GET\n\n\n${TIMESTAMP}\n/${BUCKET}/${FILE}"

# URL-encoded Base64 encoding of the HMAC-SHA1 of StringToSign
# (the string is plain ASCII, so a separate UTF-8 encoding step is a no-op)
SIGNATURE=$(php -r "print urlencode(base64_encode(hash_hmac('sha1', \"${STRING}\", '${SECRET_KEY}', TRUE)));")

ISMV_URL="http://${BUCKET}.s3.amazonaws.com/${FILE}?AWSAccessKeyId=${ACCESS_KEY}&Expires=${TIMESTAMP}&Signature=${SIGNATURE}"
echo "${ISMV_URL}"

At this point we have defined a URL ($ISMV_URL in the example above) which we can use as input for mp4split to generate a server manifest.

We can extend the above script and create a server manifest based on the file we just authenticated from the S3 bucket.

As an example:

mp4split -o your-server-manifest.ism "$ISMV_URL"

Further details on how to use a URL as input for server manifest creation are outlined here: Remote Storage.

This can be downloaded as s3-authorization.sh.

Using webserver directives for S3 authentication

New in version 1.7.2.

Instead of signing the ismv URL to be used when creating the manifest, as shown in the section above, it is also possible to use two new directives and let USP sign the request.

This covers one specific use case: both content (mp4/ismv) and server manifest (ism) are placed in the S3 bucket and the manifest references the content locally (no paths or URLs, just the filename).

The directives for Apache are the following:

Option       Description
S3SecretKey  The AWS secret key.
S3AccessKey  The AWS access key.

The directives for Nginx are the following:

Option         Description
s3_secret_key  The AWS secret key.
s3_access_key  The AWS access key.

The keys can be created in the AWS IAM portal and managed there as well (active/inactive, delete).

There is one limitation: the directives need to be placed in a Location mapping to the directory that is set up to use IsmProxyPass.

The reason for this lies in Apache's handler chain, where Directory is processed differently from Location, so the S3 keywords are not available when placed in Directory.

Apache example

<Location "/s3_auth">
  S3SecretKey SECRET_KEY
  S3AccessKey ACCESS_KEY
</Location>

<Directory "/var/www/test/s3_auth/auth-remote-manifest/">
  IsmProxyPass http://unified-streaming-auth.s3-eu-west-1.amazonaws.com/
</Directory>

An explanation of the parts of the paths used in Location and Directory:

Path                   Description
/var/www/test          The document root.
/s3_auth               The location mapping applying the S3 directives.
/auth-remote-manifest  The directory to which IsmProxyPass applies.

The following curl call will return the client manifest created from the securely fetched server manifest.

curl -v http://test.unified-streaming.com/s3_auth/auth-remote-manifest/oceans.ism/manifest

Playout will follow the same pattern: Apache will translate the request into S3 and apply the signature.

Nginx

With Nginx, a Directory directive in the virtual host is not needed; a location will suffice.
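As a sketch, the two Nginx directives would be placed in the location that proxies to the S3 bucket; the credentials here are placeholders, and the remaining proxy configuration for the bucket follows the Remote Storage documentation:

```nginx
location /s3_auth/ {
    # Placeholder credentials, created and managed in the AWS IAM portal
    s3_access_key ACCESS_KEY;
    s3_secret_key SECRET_KEY;

    # ... remote storage proxy configuration for the S3 bucket goes here,
    # as outlined in the Remote Storage documentation ...
}
```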

The S3 authentication can also be done via the Authorization header, which looks like the following:

Authorization: AWS AWSAccessKeyId:Signature

This header can be generated in a location by, for instance, Lua [1], an Nginx module [2], or Nginx variables and functions [3].
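The header value itself can also be sketched in shell, assuming openssl; with header-based authentication the Date header is signed instead of an Expires time, and the signature is the plain Base64 value, not URL-encoded (placeholder keys and resource):

```shell
#!/bin/sh
# Placeholder values (assumptions, not real credentials)
ACCESS_KEY=YOUR_ACCESS_KEY
SECRET_KEY=YOUR_SECRET_KEY
BUCKET=your-bucket
FILE=your-ismv-file

# Header-based authentication signs the request's Date header
DATE=$(date -u '+%a, %d %b %Y %H:%M:%S +0000')

# StringToSign: verb, empty Content-MD5 and Content-Type, the Date,
# and the canonicalized resource
STRING_TO_SIGN=$(printf 'GET\n\n\n%s\n/%s/%s' "$DATE" "$BUCKET" "$FILE")

# Plain Base64 HMAC-SHA1; no URL-encoding inside a header
SIGNATURE=$(printf '%s' "$STRING_TO_SIGN" \
  | openssl dgst -sha1 -hmac "$SECRET_KEY" -binary | base64)

AUTH_HEADER="Authorization: AWS ${ACCESS_KEY}:${SIGNATURE}"
echo "$AUTH_HEADER"
```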

In general, an Authorization header can be generated this way and passed upstream; the Hitachi Content Platform uses a similar authorization header.

Footnotes

[1]https://github.com/jamesmarlowe/lua-resty-s3#generate_auth_headers
[2]https://github.com/anomalizer/ngx_aws_auth
[3]https://dodwell.us/using-nginx-to-proxy-private-amazon-s3-web-services/