Remote Storage

New in version 1.4.33.

It is possible to host the manifest in various ways:

  • local manifest and local content
  • local manifest and remote content
  • remote manifest and remote content

The following sections outline how to set up the second and third options.

You will also see that the 'data reference' (dref) mp4, used for instance for progressive download, can be hosted in a similar way.

The following examples use the EC2/S3 combination, but any storage that supports HTTP range requests can be used. The use of Amazon S3, Azure Storage, GCE, Scality and HCP is outlined below, as well as how to optimise performance.

Using local manifests

Store your video files on any storage server that supports HTTP range requests. The USP webserver module requests only the necessary data from the storage server to serve the request. This makes it possible to use for example an EC2/S3 combination or access your storage server over HTTP instead of using mount points.
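
Whether a storage server supports range requests can be verified with a plain curl call; this is only an illustrative check (using one of the S3 URLs from the example below), not part of the USP tooling:

#!/bin/bash

# Ask for the first kilobyte only; storage that supports range requests
# answers with "206 Partial Content" and a Content-Range header.
curl -s -o /dev/null -D - \
  -H "Range: bytes=0-1023" \
  http://usp-s3-storage.s3.amazonaws.com/tears-of-steel/tears-of-steel-1.ismv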

The request flow:

                [origin]  <-- client request
 [storage] <-- video.ism
video1.ismv
video2.ismv
audio1.isma

The information about where the audio and video files are stored is specified in the server manifest file. Using MP4Split you can generate the server manifest file with the files stored on, for example, an S3 storage account. Using our S3 bucket 'usp-s3-storage' as an example, we have the following four files:

http://usp-s3-storage.s3.amazonaws.com/tears-of-steel/tears-of-steel-1.ismv
http://usp-s3-storage.s3.amazonaws.com/tears-of-steel/tears-of-steel-2.ismv
http://usp-s3-storage.s3.amazonaws.com/tears-of-steel/tears-of-steel-3.ismv
http://usp-s3-storage.s3.amazonaws.com/tears-of-steel/tears-of-steel-64k.isma

The previous URLs are passed to MP4Split as input to generate the server manifest file.

#!/bin/bash

mp4split -o tears-of-steel.ism \
  http://usp-s3-storage.s3.amazonaws.com/tears-of-steel/tears-of-steel-1.ismv \
  http://usp-s3-storage.s3.amazonaws.com/tears-of-steel/tears-of-steel-2.ismv \
  http://usp-s3-storage.s3.amazonaws.com/tears-of-steel/tears-of-steel-3.ismv \
  http://usp-s3-storage.s3.amazonaws.com/tears-of-steel/tears-of-steel-64k.isma

If you open the server manifest file, you'll see that the audio and video sources now point to the files at S3:

<audio src="http://usp-s3-storage.s3.amazonaws.com/tears-of-steel/tears-of-steel-64k.isma" systemBitrate="64000" systemLanguage="eng">

Local mp4

Store the .ismv file on the HTTP storage (e.g. S3) and generate the dref mp4 locally:

#!/bin/bash

mp4split -o tears-of-steel.mp4 --use_dref \
  http://usp-s3-storage.s3.amazonaws.com/tears-of-steel/tears-of-steel-1.ismv

Request the MP4 video from the webserver with:

http://www.your-webserver.com/tears-of-steel.mp4

Note that in this case there is no need for a server manifest file, nor for setting up any additional proxying (IsmProxyPass) statements. The new functionality is that it is now possible to store full URLs as references in the .mp4 video, whereas before it could only reference files relative to the stored .mp4 video.
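
As a quick check (an illustrative curl call against the placeholder hostname used above), request part of the MP4 and verify the origin answers:

#!/bin/bash

# A 200 or 206 response indicates the dref mp4 and the remote .ismv it
# references are reachable by the origin.
curl -s -o /dev/null -w "%{http_code}\n" \
  -H "Range: bytes=0-1023" \
  http://www.your-webserver.com/tears-of-steel.mp4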

Using remote manifests

The webserver module must have access to the server manifest file. You can either store the manifest file on the webserver so that it has local file access to it, or you can store it in the S3 bucket.

The latter is only supported by the Apache and Nginx versions of the webserver module, as it relies on the IsmProxyPass configuration.

            GET
 [storage]  <-- [origin]  <-- client request
 video.ism    IsmProxyPass
video1.ismv
video2.ismv
audio1.isma

This configuration will tell the webserver that the content should be read from S3 instead of from local disk.

Both fragmented mp4 and progressive mp4 can be used as source; the example below uses fragmented mp4 (.ismv).

The first step then is to create a server manifest:

#!/bin/bash

mp4split -o tears-of-steel.ism \
  tears-of-steel-1.ismv \
  tears-of-steel-2.ismv \
  tears-of-steel-64k.isma

Then upload the files to your S3 bucket.
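
The upload could be done with the AWS CLI, for example (the 'usp-s3-storage' bucket and region match the IsmProxyPass below; see the Amazon S3 section for setting up the CLI):

#!/bin/bash

# Copy the server manifest and the .ismv/.isma files to the bucket; this
# assumes they are in a local 'tears-of-steel' directory.
aws s3 cp tears-of-steel s3://usp-s3-storage/tears-of-steel --recursive --region eu-west-1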

To stream this you need to set up IsmProxyPass in the virtual host:

<Directory "/var/www/test/usp-s3-storage" >
  IsmProxyPass http://usp-s3-storage.s3-eu-west-1.amazonaws.com/
</Directory>

With this setting you can stream the S3-based content. An example:

http://demo.unified-streaming.com/usp-s3-storage/tears-of-steel/tears-of-steel.ism/Manifest

The origin maps the request to S3 via the virtual path, 'usp-s3-storage' in the above example (but other names could equally be chosen).
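
The same mapping also works for the other output formats; for example, assuming the usual USP output endpoints:

#!/bin/bash

# Illustrative checks: request the MPEG-DASH and HLS manifests for the same
# server manifest through the proxied path.
curl -s -o /dev/null -w "%{http_code}\n" \
  http://demo.unified-streaming.com/usp-s3-storage/tears-of-steel/tears-of-steel.ism/.mpd
curl -s -o /dev/null -w "%{http_code}\n" \
  http://demo.unified-streaming.com/usp-s3-storage/tears-of-steel/tears-of-steel.ism/.m3u8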

Remote mp4

Using IsmProxyPass it is possible to store the dref mp4 in your HTTP storage (for instance S3).

Generate the dref mp4 locally and then store it in the 'usp-s3-storage' bucket in S3:

#!/bin/bash

mp4split -o tears-of-steel.mp4 --use_dref \
  tears-of-steel-1.ismv

# copy tears-of-steel.mp4 to http://your-bucket.s3-eu-west-1.amazonaws.com/
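
The copy step in the comment above could, for instance, be done with the AWS CLI (the bucket and region follow the surrounding examples; any tool that writes to your HTTP storage works just as well):

#!/bin/bash

# Upload the generated dref mp4 to the bucket root so it matches the
# IsmProxyPass mapping below.
aws s3 cp tears-of-steel.mp4 s3://usp-s3-storage/tears-of-steel.mp4 --region eu-west-1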

To stream the MP4 you need to set up IsmProxyPass in the virtual host:

<Directory "/var/www/test/usp-s3-storage" >
  IsmProxyPass http://usp-s3-storage.s3-eu-west-1.amazonaws.com/
</Directory>

Request the MP4 video from the webserver with:

http://www.your-webserver.com/usp-s3-storage/tears-of-steel.mp4

Amazon S3

First set up the AWS CLI.

The CLI allows you to list available buckets:

#!/bin/bash

aws s3api list-buckets --region eu-central-1

as well as create buckets:

#!/bin/bash

aws s3 mb s3://usp-s3-storage --region eu-central-1

copy content to the bucket:

#!/bin/bash

aws s3 cp tears-of-steel s3://usp-s3-storage/tears-of-steel --recursive --region eu-central-1

or delete content from the bucket:

#!/bin/bash

aws s3 rm s3://usp-s3-storage/tears-of-steel --recursive --region eu-central-1

The endpoint then is the following:

http://usp-s3-storage.s3.eu-central-1.amazonaws.com

Please note that permissions must be set on the content before it can be accessed.

One way is to attach a bucket policy; the following makes the objects in the bucket world readable:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "MakeItPublic",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::usp-s3-storage/*"
    }
  ]
}
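
A policy like the above can be applied with the AWS CLI, for example (assuming it is saved as policy.json):

#!/bin/bash

# Attach the policy to the bucket; afterwards the objects in the bucket are
# readable without authentication.
aws s3api put-bucket-policy --bucket usp-s3-storage --policy file://policy.json --region eu-central-1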

Alternatively, the Amazon S3 with authorization setup can be followed as well.

For further options and possibilities please refer to the AWS documentation.

Following the HTTP Proxy documentation, the newly created bucket and uploaded content can be streamed by adding the following to the virtual host config:

<Directory "/var/www/usp/usp-s3-storage/">
  IsmProxyPass https://s3.eu-central-1.amazonaws.com/usp-s3-storage/
</Directory>

Azure storage

First set up the Azure CLI.

The CLI allows you to list available storage accounts:

#!/bin/bash

azure storage account list

as well as create accounts and containers:

#!/bin/bash

azure storage account create uspazurestorage
azure storage container create -a uspazurestorage -k KEY tears-of-steel

To list your KEY the following can be used:

#!/bin/bash

azure storage account keys list uspazurestorage

Files have to be uploaded individually as blobs to the container:

#!/bin/bash

cd tears-of-steel
for f in *; do
  azure storage blob upload -a uspazurestorage -k KEY "${f##*/}" tears-of-steel "${f##*/}"
done

Containers can also be deleted:

#!/bin/bash

azure storage container delete -a uspazurestorage -k KEY tears-of-steel

The endpoint then is the following:

https://uspazurestorage.blob.core.windows.net

To list containers and blobs:

#!/bin/bash

azure storage container list -a uspazurestorage -k KEY
azure storage blob list -a uspazurestorage -k KEY tears-of-steel

Please note that permissions must be set on the content before it can be accessed:

#!/bin/bash

azure storage container set -a uspazurestorage -k KEY -p Container tears-of-steel

For further options and possibilities please refer to the Azure documentation.

Following the HTTP Proxy documentation, the newly created container and uploaded content can be streamed by adding the following to the virtual host config:

<Directory "/var/www/usp-azure-storage/">
  IsmProxyPass https://uspazurestorage.blob.core.windows.net/
</Directory>

Google storage

First set up the GCE CLI.

The CLI allows you to create and delete buckets, and to copy, move or delete content to a bucket or between buckets.

After uploading content you should set permissions so that the content can be viewed (the default is 'private').
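
As a sketch, with gsutil (part of the Google Cloud SDK) this could look as follows; the bucket name matches the vhost snippet below and the exact commands may differ for your setup:

#!/bin/bash

# Create a bucket, upload the content and make it world readable
# (bucket name and content directory are examples).
gsutil mb gs://unified-streaming
gsutil cp -r elephantsdream gs://unified-streaming
gsutil acl ch -r -u AllUsers:R gs://unified-streaming/elephantsdream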

Following the HTTP Proxy documentation, the newly created bucket and uploaded content can be streamed by adding the following to the virtual host config:

<Directory "/var/www/usp-gce-storage/">
  IsmProxyPass http://storage.googleapis.com/unified-streaming/
</Directory>

The URL to the content then becomes:

http://demo.unified-streaming.com/usp-gce-storage/elephantsdream/elephantsdream.ism/manifest

where demo.unified-streaming.com is the webserver running USP, configured with the previous vhost snippet.

Hitachi Content Platform

HCP supports the standard HTTP commands as well as range requests.

Available commands are:

Command   Description
PUT       Stores objects, versions, empty directories, annotations and ACLs, and copies objects.
POST      Changes metadata values.
HEAD      Checks the existence of objects, versions, directories, annotations and ACLs, or retrieves metadata for objects or versions.
GET       Retrieves the contents and metadata of an object.
DELETE    Deletes objects, versions, empty directories, annotations, ACLs and symbolic links.

An example with tears-of-steel from Verifying Your Setup looks like the following:

PUT the content:

#!/bin/bash

curl -v -k \
 -H "Authorization: HCP dXNlcg==:76a2173be6393254e72ffa4d6df1030a" \
 -H "Host: <namespace>.<tenant>.content.us-nj1.cloud.hds.com" \
 -T "$1" \
 https://<namespace>.<tenant>.content.us-nj1.cloud.hds.com/rest/test/$1

Similar to Azure Storage, files have to be copied individually to their location.
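
Assuming the PUT call above is saved as a small script, here called hcp-put.sh (a name chosen just for this example) and taking the file name as its single argument, a loop can upload a whole directory:

#!/bin/bash

# Upload every file in the content directory with one PUT request per file;
# hcp-put.sh is assumed to live in the parent directory.
cd tears-of-steel
for f in *; do
  bash ../hcp-put.sh "$f"
done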

GET the content:

#!/bin/bash

curl -v -k \
 -H "Authorization: HCP dXNlcg==:76a2173be6393254e72ffa4d6df1030a" \
 -H "Host: <namespace>.<tenant>.content.us-nj1.cloud.hds.com" \
 https://<namespace>.<tenant>.content.us-nj1.cloud.hds.com/rest/test/$1

Once the content is copied to HCP it can be accessed by the origin, just like with S3, Azure or GCE.

The configuration of the origin is similar:

<Directory "/var/www/hcp/">
  IsmProxyPass http://hcp.unified-streaming.com/
</Directory>

The content then can be accessed by players:

http://example.com/hcp/tears-of-steel/tears-of-steel.ism/manifest

Authorization

The Authorization header is mandatory on each call to HCP, as is the use of SSL. In order to apply the header, a proxy virtual host can be set up on the same server:

       origin  <-- client
          |
hcp <-- proxy

When the request comes in, the origin calls the proxy, which in turn adds the Authorization header and proxies the request over SSL to HCP.

The proxy virtual host config looks like the following:

<VirtualHost *:80>
    ServerAdmin dirk@unified-streaming.com
    ServerName hcp.unified-streaming.com

    # if not specified, the global error log is used
    ErrorLog /var/log/apache2/hcp.unified-streaming.com-error.log
    CustomLog /var/log/apache2/hcp.unified-streaming.com-access.log combined
    LogLevel info

    # don't lose time with IP address lookups
    HostnameLookups Off

    # needed for named virtual hosts
    UseCanonicalName On

    RequestHeader set Authorization "HCP dXNlcg==:76a2173be6393254e72ffa4d6df1030a"

    ProxyPass / https://<namespace>.<tenant>.content.us-nj1.cloud.hds.com/rest/test/
    ProxyPassReverse / https://<namespace>.<tenant>.content.us-nj1.cloud.hds.com/rest/test/

    SSLProxyEngine on
    SSLProxyCheckPeerCN off
    SSLProxyCheckPeerName off
</VirtualHost>
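
Note that this virtual host relies on a few standard Apache modules: mod_headers for RequestHeader, mod_proxy and mod_proxy_http for ProxyPass, and mod_ssl for the SSLProxy directives. On Debian-based systems they could be enabled as follows (an example, not part of the original setup steps):

#!/bin/bash

# Enable the required modules and restart Apache.
a2enmod headers proxy proxy_http ssl
service apache2 restart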

Scality

Scality offers a software-based storage solution that provides great performance across content distribution and other media workloads.

Standard storage interfaces for files and objects like NFS, HTTP, SCP, and OpenStack enable applications to run without modification.

The following section shows how to work with the HTTP interface, similar to S3/Azure/GCE as described above.

Scality RING supports the standard HTTP commands as well as range requests.

Available commands are:

Command   Description
PUT       Writes or updates an object, providing data and, optionally, metadata.
HEAD      Retrieves object metadata only.
GET       Retrieves the contents and metadata of an object.
DELETE    Deletes an object on the RING.

An example with tears-of-steel from Verifying Your Setup looks like the following:

PUT the content:

#!/bin/bash

curl -v -XPUT -H "Expect:" \
  --data-binary @tears-of-steel/tears-of-steel.ism \
  http://localhost:81/proxy/mychord/tears-of-steel/tears-of-steel.ism

Similar to Azure Storage, files have to be copied individually to their location.
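
For example, a loop like the following (an illustrative sketch that mirrors the single-file PUT above) uploads a whole directory:

#!/bin/bash

# PUT every file of the demo content to the RING, one request per file.
cd tears-of-steel
for f in *; do
  curl -v -XPUT -H "Expect:" \
    --data-binary @"$f" \
    "http://localhost:81/proxy/mychord/tears-of-steel/$f"
done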

GET the content:

#!/bin/bash

curl -v http://localhost:81/proxy/mychord/tears-of-steel/tears-of-steel.ism

Once the content is copied to the RING it can be accessed by the origin, just like with S3, Azure or GCE.

The configuration of the origin is similar:

<Directory "/var/www/scality-storage/">
  IsmProxyPass http://localhost:81/proxy/mychord/
</Directory>

The content then can be accessed by players:

http://example.com/scality-storage/tears-of-steel/tears-of-steel.ism/manifest