Microsoft Azure

We provide a fully installed USP VM image in the Azure Marketplace.

Choosing an Azure instance

There are many Azure instance types; which one to choose depends on the following:

  • is the content SD or HD?
  • what is the expected dominant output (HLS, Smooth, etc)?
  • is DRM needed?
  • is the output Live or VOD?
  • if Live, what are the bitrates ingested?
  • if Live, what is the DVR window size (and do you plan a RAM disk for it)?

In short, ingest is I/O bound, while egress is first I/O bound, then network bound and lastly CPU bound.
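If a RAM disk is planned for a Live DVR window, a tmpfs mount is a common approach. The following is a minimal sketch; the mount point /media/dvr, the 4 GB size and the example bitrate are all illustrative and should be matched to your actual DVR window:

```shell
# Size the RAM disk to the DVR window: total ingested bitrate times
# window length, plus headroom. E.g. 5000 kbit/s over a 3600 s window:
echo $(( 5000 * 3600 / 8 / 1024 ))   # window size in MB, before headroom

# Mount a 4 GB tmpfs on /media/dvr (path and size are examples)
sudo mkdir -p /media/dvr
sudo mount -t tmpfs -o size=4g tmpfs /media/dvr
```

A matching tmpfs entry can be added to /etc/fstab to survive reboots; note that a tmpfs loses its contents on restart, so the DVR window starts empty after a reboot.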

With Azure you have the option to use a few high-performance instances or many small ones. The benefit of the latter is that spreading the ingest over more nodes makes it easier to cope with errors, if they occur at all.


When you start the instance you need to use a network security group with ports 22 and 80 enabled. The image user name is 'ubuntu' and has sudo access.

To login you need to use ssh (or similar like putty on Windows):


ssh -i your-ssh-key ubuntu@your-azure-instance


Once the instance has launched and is marked online, you should be able to point your browser at the instance's public DNS name; such a name looks like this:

The instance can be tested as described in Verify Your Setup.
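As a quick first check before the full verification, you can probe the demo page over HTTP; the hostname below is a placeholder for your instance's public DNS name:

```shell
#!/bin/sh
# Probe the USP demo page (hostname is a placeholder)
HOST="your-azure-instance.cloudapp.azure.com"

# -f fails on HTTP errors, -s silences progress, -I requests headers only
if curl -fsI "http://${HOST}/" > /dev/null; then
  echo "Apache is reachable on ${HOST}"
else
  echo "No response from ${HOST}; check that port 80 is open"
fi
```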

On startup the instance tries to set its external hostname as the ServerName for Apache, and to use the same hostname in the 'index.html' file for all files and links (you can find the 'index.html' file in /var/www/usp-evaluation).

Using Azure Blob Storage

First, set up the Azure CLI.
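On Ubuntu, Microsoft's install script sets up the CLI, after which az login authenticates against your subscription:

```shell
# Install the Azure CLI (Debian/Ubuntu) and sign in
curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash
az login
```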

The CLI allows you to list the available storage accounts:


az storage account list

as well as create storage accounts and containers:


az storage account create -n your-bucket -g your-bucket -l westeurope --sku Standard_LRS
az storage container create --account-name your-bucket -n tears-of-steel

You can configure the default subscription using:


az account set -s NAME_OR_ID
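The KEY used by the storage commands below is an account access key. Assuming the account and resource group are both named your-bucket, as in the create example above, the keys can be listed with:

```shell
# Show the two access keys for the storage account; either works as KEY
az storage account keys list -g your-bucket -n your-bucket -o table
```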

Each file has to be uploaded individually as a blob to the container:


cd tears-of-steel
for f in *; do
  az storage blob upload --account-name your-bucket --account-key KEY \
    -c tears-of-steel -f "$f" -n "${f##*/}"
done
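Alternatively, a whole directory can be uploaded in one call with upload-batch, avoiding the per-file loop:

```shell
# Upload every file under ./tears-of-steel to the tears-of-steel container
az storage blob upload-batch \
  --account-name your-bucket --account-key KEY \
  -d tears-of-steel -s ./tears-of-steel
```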

Containers can also be deleted:


az storage container delete --name tears-of-steel --account-name your-bucket --account-key KEY

The endpoint then is the following:

To list containers and blobs:


az storage container list --account-name your-bucket --account-key KEY
az storage blob list --account-name your-bucket --account-key KEY -c tears-of-steel

Please note that public read permissions have to be set on the container before its content can be accessed:


az storage container set-permission --name tears-of-steel --public-access container --account-name your-bucket --account-key KEY

For further options and possibilities please refer to the Azure documentation.

Following the storage proxy Installation documentation, the newly created container and uploaded content can be streamed by adding the UspEnableSubreq directive and defining <Proxy> sections for each remote storage server used.

<Location "/">
  UspHandleIsm on
  UspEnableSubreq on
</Location>

SSLProxyEngine on

<Proxy "">
  ProxySet connectiontimeout=5 enablereuse=on keepalive=on retry=0 timeout=30 ttl=300
</Proxy>

The URL to the content then becomes the following, for instance for MPEG-DASH:

where the hostname is the webserver running USP with the previous vhost snippet (and the tears-of-steel content in 'your-bucket' is used with both the IsmProxyPass and Proxy directives).
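A quick way to verify end-to-end delivery is to fetch the client manifest through the webserver; the hostname and path below are illustrative placeholders and must match your vhost and content layout:

```shell
# Request the MPEG-DASH manifest (MPD) via the USP webserver
# (hostname and path are placeholders)
curl -v "http://your-usp-host/tears-of-steel/tears-of-steel.ism/.mpd"
```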

For guidelines on how to use Unified Packager with Azure Storage, see How to write directly to Azure Blob Storage.