Google Compute Engine (GCE)

Unlike Amazon or Azure, GCE offers no Marketplace for images. The following sections show how to create and run a USP-enabled GCE instance.

  1. Sign up for Google Cloud Platform

Go to and use your Google account to sign up for Google Cloud Platform and complete the guided instructions.

  2. Create a Project

Next, go to the console at and create a new Project. Make sure to select your new Project if you are not automatically directed to it.

Projects are a way of grouping related users, services, and billing. You may opt to create multiple Projects; in that case the remaining instructions need to be completed for each Project.

  3. Enable the Google Compute Engine service

In your Project, either click Compute Engine in the menu on the left, or go to the APIs & auth section, open the APIs link, and enable the Google Compute Engine service.

  4. Create a Service Account

To set up authorization, navigate to the APIs & auth section, open the Credentials link and click the CREATE NEW CLIENT ID button. Select Service Account and click the Create Client ID button. This automatically downloads a .json file containing the auth settings.

You will need the project ID and the JSON account file in the following section.
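Alternatively, if you have the gcloud command-line tool installed, a service account and key file can be created from the terminal as well. This is a sketch only: the account name "usp-packer" and the file name "account.json" are placeholders, and YOUR_PROJECT_ID should be your actual project ID.

```sh
# Example only: "usp-packer" and "account.json" are placeholder names.
gcloud iam service-accounts create usp-packer \
  --display-name "USP Packer builds"

# Download a JSON key file for the new account.
gcloud iam service-accounts keys create account.json \
  --iam-account usp-packer@YOUR_PROJECT_ID.iam.gserviceaccount.com
```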

Creating a GCE image

Packer is easy to use and automates the creation of many types of machine images.

Packer uses a JSON template file consisting of builders, provisioners, and other (optional) elements.
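At a high level, such a template can be outlined as follows. This is a minimal sketch of the overall shape, not the full usp-packer.json:

```json
{
  "variables": { "project_id": "YOUR_PROJECT_ID" },
  "builders": [
    { "type": "googlecompute" }
  ],
  "provisioners": [
    { "type": "shell" }
  ]
}
```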

usp-packer.json contains the template to build an image and install USP in GCE.

The following snippet lists the variables and the builder for an instance installed in a European zone.

  "variables": {
    "account_file": "YOUR_ACCOUNT_FILE",
    "project_id": "YOUR_PROJECT_ID"
  },
  "builders": [
    {
      "type": "googlecompute",
      "account_file": "{{user `account_file`}}",
      "project_id": "{{user `project_id`}}",
      "source_image": "YOUR_IMAGE",
      "zone": "europe-west1-c",
      "image_name": "YOUR_IMAGE_NAME",
      "image_description": "",
      "machine_type": "n1-standard-1",
      "ssh_username": "root"
    }
  ]
Make sure to replace YOUR_ACCOUNT_FILE and YOUR_PROJECT_ID with your actual account file and project ID, and likewise YOUR_IMAGE and YOUR_IMAGE_NAME.
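If you prefer not to edit the template, the user variables can also be supplied on the command line. The file name and project ID below are examples:

```sh
packer build \
  -var 'account_file=account.json' \
  -var 'project_id=my-gce-project' \
  usp-packer.json
```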

Adding the license key

Add the following line as the first line in the provisioner described below:


"sudo sh -c 'echo "YOUR_KEY" > /etc/usp-license.key'"

Make sure to replace YOUR_KEY with your actual License Key.
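For illustration, the effect of that provisioner line can be sketched locally. In this sketch a temporary directory stands in for /etc so it runs without root; YOUR_KEY is the usual placeholder:

```shell
# Sketch of what the license-key provisioner line does; a temp dir
# stands in for /etc so this can run without root privileges.
key="YOUR_KEY"                       # replace with your actual license key
tmpdir=$(mktemp -d)
sh -c "echo \"$key\" > \"$tmpdir/usp-license.key\""
cat "$tmpdir/usp-license.key"        # the file USP reads its license from
```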

Install USP and demo content

USP and demo content can be installed by using a provisioner as follows:

    {
      "type": "shell",
      "execute_command": "chmod +x {{ .Path }}; {{ .Vars }} sudo -E sh '{{ .Path }}'",
      "inline": [
        "sudo sh -c 'echo \"YOUR_KEY\" > /etc/usp-license.key'",
        "sudo wget",
        "sudo chmod +x",
        "sudo ./ -y",
        "sudo rm",
        "chmod +x",
        "sudo ./",
        "sudo rm"
      ],
      "inline_shebang": "/bin/sh -x"
    }


To use the hostname of the instance (so it is ready after booting) you need to replace the default hostname, which can be done as follows.


hn=$(curl -H "Metadata-Flavor: Google")
/bin/sed -i -e "s/$hn/g" /var/www/tears-of-steel/features.json

This then needs to run when the instance is starting up, for instance as a conf file in /etc/init (Upstart).
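As a sketch, an Upstart job along these lines could do it. The file name, the DEFAULT_HOSTNAME placeholder, and the exact metadata request are assumptions to adapt to your setup:

```sh
# /etc/init/usp-hostname.conf (hypothetical name) -- Upstart job.
# DEFAULT_HOSTNAME is a placeholder for whatever hostname ships in features.json.
description "Rewrite default hostname in USP demo content"
start on started networking
task
script
  hn=$(curl -s -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/hostname)
  /bin/sed -i -e "s/DEFAULT_HOSTNAME/$hn/g" /var/www/tears-of-steel/features.json
end script
```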

Create image

To start a build of the image, run the following:


packer build usp-packer.json

When the build is finished you can go to your Project in the console and start an instance from the image.
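Starting an instance from the new image can also be done from the command line; the instance name and values below are examples to adjust to your setup:

```sh
# Example only: instance name, image name and zone are placeholders.
gcloud compute instances create usp-demo \
  --image YOUR_IMAGE_NAME \
  --zone europe-west1-c \
  --machine-type n1-standard-1
```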

Using Google Storage

First, set up the GCE cli interface.

The cli allows you to create and delete buckets, and to copy, move, or delete content to a bucket or between buckets.

After uploading content you should set permissions so the content can be viewed (the default is 'private').
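With the gsutil tool installed, the bucket, upload, and permission steps above might look like this. The bucket name 'your-bucket' and the local path are placeholders:

```sh
# Placeholder bucket and paths; adjust to your own names.
gsutil mb gs://your-bucket                                     # create the bucket
gsutil cp -r tears-of-steel gs://your-bucket                   # upload demo content
gsutil acl set -r public-read gs://your-bucket/tears-of-steel  # make it viewable
```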

Following the storage proxy installation documentation, the newly created bucket and uploaded content can be streamed by adding the UspEnableSubreq directive and defining <Proxy> sections for each remote storage server used.

<Location "/">
  UspHandleIsm on
  UspEnableSubreq on
</Location>

<Proxy "">
  ProxySet connectiontimeout=5 enablereuse=on keepalive=on retry=0 timeout=30 ttl=300
</Proxy>

The URL to the content then becomes the following, for instance for MPEG-DASH:

where the hostname is the webserver running USP with the previous vhost snippet (and the tears-of-steel content in 'your-bucket' is used with both the IsmProxyPass and Proxy directives).


For guidelines on how to use Unified Packager with Google Storage see How to write directly to Google Cloud Storage.

If authentication is required, please use Google's Interoperability API, which allows storage access using S3-like methods. Then reconfigure your vhost with the required webserver directives as described in AWS S3 with Authentication.