Adaptive Bitrate (ABR) Streaming

The streaming module can also act as a Publishing Point.

A Publishing Point is simply a URL that accepts input streams from one or more software/hardware encoders.

The encoder uses the HTTP Smooth Streaming Protocol to send the audio/video fragments to the webserver. See the Factsheet for an overview of the supported encoders.

All versions of the webserver module support LIVE playout, but only Apache (for Windows and Linux), the IIS5/6 module and Nginx support LIVE ingest; Lighttpd does not.

Creation of a server manifest file

If you’re publishing LIVE streams to the webserver module and no server manifest file is available, the default settings are used.

If you want to change any settings (e.g., adding on-the-fly encryption), you have to generate the server manifest file before starting the encoder.

In its simplest form the command for creating a LIVE server manifest file is:

#!/bin/bash

mp4split -o /var/www/live/channel1/channel1.isml

Alternatively, the LIVE server manifest can also be created if the webserver module Live API is enabled as explained in Publishing Point API.

#!/bin/bash

mp4split -o http://api.example.com/live/channel1/channel1.isml

Note

The extension of the LIVE server manifest is .isml

Note the absence of any input files in the LIVE case. When the encoder pushes a live stream to the webserver module for ingest, it is the Unified Origin that updates the LIVE server manifest file to include the stream information announced by the encoder.

Options available for VOD also apply to LIVE and are described in Options for VOD and LIVE streaming.

The Publishing Point API documentation outlines the available commands with which you can control a publishing point.

Output fMP4 HLS

New in version 1.8.3.

You can change the HLS output format so that it uses fMP4 instead of Transport Streams by adding the --hls.fmp4 command-line parameter when creating the Live server manifest.

For example:

#!/bin/bash

mp4split -o /var/www/live/channel1/channel1.isml --hls.fmp4

Note

To stream HEVC and/or HDR content over HLS according to Apple's specification, using fMP4 is a requirement.
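
With a typical Unified Origin setup, clients would then request the HLS client manifest at a URL of the following form (using the hostname and channel name from the examples above; the exact playout URL depends on your setup):

http://live.example.com/channel1/channel1.isml/.m3u8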

File permissions

The webserver module needs permission to read and write the DocumentRoot. Depending on your setup you may have to add read/write permissions to the directory and files.

On Linux you can change the permissions of the directory to allow read and write for all by using chmod:

#!/bin/bash

chmod a+rw /var/www/live
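
Alternatively, if the webserver runs as a dedicated user, you may prefer to give that user ownership of the directory instead of opening it up for everyone. A minimal sketch, assuming the webserver user is www-data (the actual user differs per distribution and webserver):

#!/bin/bash

chown -R www-data:www-data /var/www/live
chmod -R u+rw /var/www/live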

On Windows:

IIS: Select wwwroot properties
=> select Security
=> select Internet Guest Account
=> Tick Allow 'Read' and 'Write'.

Encoder URL

The URL to be passed to the encoder has the following format:

http://<server>/<pubpoint>/Streams(<identifier>)

All professional encoders append the /Streams(<identifier>) section themselves as specified by the Live Ingest section in HTTP Smooth Streaming Protocol.

This means that you can simply use the URL ending in .isml with your encoder. For example:

http://live.example.com/channel1/channel1.isml

With FFmpeg, the /Streams(<identifier>) part should be added to the URL explicitly:

http://live.example.com/channel1/channel1.isml/Streams(ID)

Note the trailing /Streams(ID) where ID is a placeholder for your own identifier, which could be ‘channel1’ etc.
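
For example, a push from FFmpeg to this publishing point could look like the sketch below. The input file, codec settings and identifier are assumptions only and should be adapted to your source; note that the URL is quoted so the parentheses are not interpreted by the shell.

#!/bin/bash

ffmpeg -re -i input.mp4 \
  -c:v libx264 -c:a aac \
  -movflags isml+frag_keyframe -f ismv \
  "http://live.example.com/channel1/channel1.isml/Streams(channel1)"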

Using query parameters to control LIVE ingest

An alternative way to set up options for a publishing point is to pass them as query parameters. Options related to DRM cannot be passed this way.

Note that the encoder must allow specifying a URL with query parameters, which not all encoders support.

Taking the Pure LIVE setup as an example:

#!/bin/bash

mp4split -o http://live.unified-streaming.com/channel1/channel1.isml \
  --archive_segment_length=60 \
  --dvr_window_length=30 \
  --archiving=0

The publishing point URL then becomes:

http://localhost/live/channel1.isml?archive_segment_length=60&dvr_window_length=30&archiving=0
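
If you assemble this URL in a shell script, for example to pass it to an encoder on the command line, quote it so that the '&' characters are not interpreted by the shell. A minimal sketch (the variable name is just an example):

#!/bin/bash

PUBPOINT_URL="http://localhost/live/channel1.isml?archive_segment_length=60&dvr_window_length=30&archiving=0"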

Options for LIVE ingest

--dvr_window_length

Length of DVR moving window (default 30 seconds).

Attention

The dvr_window_length must be shorter than the archive_length.

--archive_length

The length of archive to be kept (in seconds).

Attention

The archive_length must be longer than the dvr_window_length.

--archive_segment_length

If specified, the live presentation is archived in segments of the specified length (defaults to 0 seconds, meaning no segmentation takes place).

--archiving

When archive_segment_length is set, setting this option to 1 keeps the archived segments stored on disk (defaults to 0, no archiving, in which case only the last two segments are kept on disk).

--database_path

Specifies the location of the .db3 file, so both ingest and playout share the same database. The path to the .db3 file must be absolute and is specified like this:

#!/bin/bash

mp4split --database_path=/var/www/live/channel00/channel00.db3 -o test.isml

--restart_on_encoder_reconnect

Used when creating the server manifest for the publishing point, so that when an encoder stops it can start again and publish to the same publishing point (provided the stream layout is the same and the next timestamps are higher).

The encoder needs to be configured to use Coordinated Universal Time (UTC). Please refer to the Encoder Settings section or the encoder manual on how to configure this.

--time_shift

The time shift offset (in seconds). Defaults to 0.

Schematically

The options are related as depicted in the following diagram:

                                        dvr_window_length
                                                |
                |-------------------------------*------|
         archive_length                                ^
                                                   live point
                            < time_shift |
                                         ^
                                 (new 'live' point)


1. each '-' is an archive segment, set by 'archive_segment_length'

2. 'archive_length' is used to set the total length of the archive

3. 'archiving' is used to turn the feature on or off (without archiving only two segments are kept on disk)

4. 'time_shift' offsets the live point (and DVR window) backwards but within the 'archive_length'
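
To illustrate how these options combine, the following is a sketch of a hypothetical delayed channel: a 12 hour archive written in 1 minute segments, a 10 minute DVR window, and a live point shifted back by 4 hours. The hostname, channel name and values are examples only; the archive must be long enough to cover the time shift plus the DVR window.

#!/bin/bash

mp4split -o http://live.example.com/delayed/delayed.isml \
  --archiving=1 \
  --archive_length=43200 \
  --archive_segment_length=60 \
  --dvr_window_length=600 \
  --time_shift=14400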

Pure LIVE

The following command creates a LIVE server manifest file for presentations where no full archive is kept (--archiving=0), only the last two segments of 60 seconds are stored on disk, and the available DVR window is 30 seconds:

#!/bin/bash

mp4split -o http://live.example.com/channel1/channel1.isml \
  --archive_segment_length=60 \
  --dvr_window_length=30 \
  --archiving=0

Pure LIVE with archiving

Another example is when you are publishing a stream 24/7 and would like to keep each day in a separate archived file so you can make this available as VOD afterwards:

#!/bin/bash

mp4split -o http://live.example.com/24-7/24-7.isml \
  --archive_segment_length=86400 \
  --dvr_window_length=30 \
  --archiving=1

DVR with archiving

Let’s create a server manifest that keeps a 1 hour archive (--archive_length), writes the content to disk in 1 minute chunks (--archive_segment_length) and allows the viewer to rewind 10 minutes back in time (--dvr_window_length).

#!/bin/bash

mp4split -o http://live.example.com/channel1/channel1.isml \
  --archiving=1 \
  --archive_length=3600 \
  --archive_segment_length=60 \
  --dvr_window_length=600 \
  --restart_on_encoder_reconnect

See the Starting with Live section for a full example.

Event ID

To make re-using an existing publishing point possible, a unique ID must be specified for each Live presentation. This ‘EventID’ allows for the restart of a publishing point that is in a stopped state, which is otherwise impossible.

To add an EventID to a Live presentation, an encoder should specify it in the URL of the publishing point to which it POSTs the livestream. This is done like so (where <EventID> should be replaced with the actual identifier for the event):

http(s)://<domain>/<path>/<ChannelName>/<ChannelName>.isml/Events(<EventID>)/Streams(<StreamID>)

Starting an encoding session with a specified EventID will add an extra line to a server manifest, referring to the EventID:

<meta name="event_id" content="2013-01-01-10_15_25">

When using an EventID, Unified Origin will archive the media files of the session associated with the ID in a subdirectory, of which the name is equal to the ID. Given the example above, the following subdirectory would be added to the directory of the publishing point after starting the new event:

2013-01-01-10_15_25/

As conflicting names of archive directories make restarting a publishing point impossible, a unique EventID must be used for each Live session. The best way to ensure this is to use a date and timestamp as the EventID.
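
A minimal sketch of generating such an EventID and composing the encoder URL from it (the hostname, channel name and stream identifier are placeholders):

#!/bin/bash

EVENT_ID=$(date +%Y-%m-%d-%H_%M_%S)

ENCODER_URL="http://live.example.com/channel1/channel1.isml/Events(${EVENT_ID})/Streams(channel1)"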

Because a new subdirectory is created for each EventID, a new encoding session with a unique EventID will not remove the files from a previous session. The files associated with older EventIDs can be kept for other purposes, or they can be removed. The latter can be done by setting up a simple script that removes the files after a restart or after a certain period.
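
As an example of such a cleanup, the sketch below removes event directories older than seven days from a publishing point directory. The path, the directory name pattern and the retention period are assumptions; adapt them to your own naming scheme before use.

#!/bin/bash

find /var/www/live/channel1/ -mindepth 1 -maxdepth 1 -type d \
  -name '20*' -mtime +7 -exec rm -r {} +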

Note

When a publishing point is re-used with a new EventID, the server manifest will be associated with the new instead of the old event. Thus, from then on, all requested client manifests will be associated with the new event. However, without any additional changes, playout of the old event will still be possible, as the media segments will remain available (e.g., through requests based on a client manifest that was cached before the new event was published).

Specifics for Expression Encoder 4

Expression Encoder 4 has a built-in option for using EventIDs. However, using this feature in combination with USP will prevent the encoder from reconnecting for a new session. Therefore, it is advised not to use Expression Encoder's built-in option for EventIDs.