Adaptive Bitrate (ABR) Streaming¶
The streaming module can also act as a Publishing Point.
A Publishing Point is simply a URL that accepts input streams from one or more software/hardware encoders.
All versions of the webserver module support LIVE playout, but only Apache (for Windows and Linux), the IIS5/6 module and Nginx support LIVE ingest; Lighttpd does not.
If you're publishing LIVE streams to the webserver module and no server manifest file is available, the default settings are used.
If you want to change any settings (e.g. adding on-the-fly encryption) you have to generate the server manifest file before starting the encoder.
In its simplest form the command for creating a LIVE server manifest file is:
#!/bin/bash
mp4split -o /var/www/live/channel1/channel1.isml
Alternatively, the LIVE server manifest can also be created if the webserver module's Live API is enabled, as explained in Publishing Point API.
#!/bin/bash mp4split -o http://api.example.com/live/channel1/channel1.isml
The extension of the LIVE server manifest is .isml.
Note the absence of any input files in the LIVE case. When the encoder pushes a live stream to the webserver module for ingest, Unified Origin updates the LIVE server manifest file to include the stream information announced by the encoder.
Options available for VOD also apply to LIVE and are described in Options for VOD and LIVE streaming.
The Publishing Point API documentation outlines the available commands with which you can control a publishing point.
The webserver module needs permission to read and write the DocumentRoot. Depending on your setup you may have to add read/write permissions to the directory and files.
On Linux you change the permissions of the directory to allow write access for all users by using chmod:

#!/bin/bash
chmod ua+w /var/www/live
IIS: Select wwwroot properties => select Security => select Internet Guest Account => Tick Allow 'Read' and 'Write'.
The URL to be passed to the encoder has the following format:
This means that you can simply use the URL ending in .isml with your encoder. For example:
With FFmpeg the /Streams(identifier) should be added to the URL:
Note: the trailing ID is a placeholder for your own identifier, which could be 'channel1' etc.
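As a sketch (the host and stream identifier here are hypothetical), the FFmpeg target URL can be derived from the publishing point URL by appending the /Streams(<identifier>) suffix:

```shell
# Hypothetical publishing point URL; most encoders post directly to it,
# but FFmpeg needs the /Streams(<identifier>) suffix appended.
PUBPOINT="http://live.example.com/channel1/channel1.isml"
FFMPEG_TARGET="${PUBPOINT}/Streams(video)"
echo "$FFMPEG_TARGET"
```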
Using query parameters to control LIVE ingest¶
An alternative way to set up options for a publishing point is to pass them as query parameters. Options related to DRM cannot be passed.
Note that the encoder must allow for specifying a URL with query parameters, which is not supported by all encoders.
Taking the Pure LIVE use case as an example:
#!/bin/bash
mp4split -o http://live.unified-streaming.com/channel1/channel1.isml \
  --archive_segment_length=60 \
  --dvr_window_length=30 \
  --archiving=0
The publishing point URL then becomes:
or for FFmpeg (adding 'Streams(channel1)' as FFmpeg does not add the channel identifier itself):
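As a hedged sketch (host and channel name are hypothetical, matching the example above), the same options passed as query parameters on the ingest URL could be built like this:

```shell
# Hypothetical publishing point; the mp4split options become
# query parameters appended to the ingest URL.
BASE="http://live.unified-streaming.com/channel1/channel1.isml"
PUBPOINT_URL="${BASE}?archive_segment_length=60&dvr_window_length=30&archiving=0"
echo "$PUBPOINT_URL"
```

Remember that the encoder must accept a URL containing query parameters for this to work.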
dvr_window_length
    Length of the DVR moving window (default 30 seconds). The dvr_window_length must be shorter than the archive_length.

archive_length
    The length of the archive to be kept (in seconds). The archive_length must be longer than the dvr_window_length.

archive_segment_length
    If specified, the live presentation is archived in segments of the specified length (defaults to 0 seconds, meaning no segmentation takes place).

archiving
    When archive_segment_length is set, setting this option to 1 keeps the archive segments stored on disk (defaults to 0, not archiving, so only the last two segments are kept on disk).

restart_on_encoder_reconnect
    Used when creating the server manifest for the publishing point, so that when an encoder stops it can start again and publish to the same publishing point (provided the stream layout is the same and the next timestamps are higher).

time_shift
    The time shift offset (in seconds). Defaults to 0.
The options are related as depicted by the following picture:

                        dvr_window_length
                       |                 |
 |-------------------------------*------|
             archive_length      ^       ^ live point
                      < time_shift
                                 ^ (new 'live' point)

1. each '-' is an archive segment, set by 'archive_segment_length'
2. 'archive_length' is used to set the total length of the archive
3. 'archiving' is used to turn the feature on or off (without archiving only two segments are kept on disk)
4. 'time_shift' offsets the live point (and DVR window) backwards, but within the 'archive_length'
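Since the DVR window must fit inside the archive, a small helper can validate the two values before you create the manifest. This is only a sketch for your own scripts, not part of mp4split:

```shell
# Sketch: verify that dvr_window_length is shorter than archive_length
# (both in seconds) before generating the server manifest.
check_window() {
  local archive_length="$1" dvr_window_length="$2"
  [ "$dvr_window_length" -lt "$archive_length" ]
}
```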
The following command creates a LIVE server manifest file for a presentation where no full archive is kept (--archiving=0), only the last two segments of 60 seconds are stored on disk, and the available DVR window is 30 seconds:
#!/bin/bash
mp4split -o http://live.example.com/channel1/channel1.isml \
  --archive_segment_length=60 \
  --dvr_window_length=30 \
  --archiving=0
Another example is when you are publishing a stream 24/7 and would like to keep each day in a separate archived file so you can make this available as VOD afterwards:
#!/bin/bash
mp4split -o http://live.example.com/24-7/24-7.isml \
  --archive_segment_length=86400 \
  --dvr_window_length=30 \
  --archiving=1
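The value 86400 is simply one day expressed in seconds, so each archive segment on disk covers a full broadcast day. The arithmetic can be sketched as:

```shell
# One archive segment per day: 24 hours x 60 minutes x 60 seconds
ARCHIVE_SEGMENT_LENGTH=$((24 * 60 * 60))
echo "$ARCHIVE_SEGMENT_LENGTH"
```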
Let's create a server manifest that:

- keeps a 1 hour archive (archive_length=3600),
- writes the content to disk in 1 minute chunks (archive_segment_length=60),
- allows the viewer to rewind 10 minutes back in time (dvr_window_length=600):
#!/bin/bash
mp4split -o http://live.example.com/channel1/channel1.isml \
  --archiving=1 \
  --archive_length=3600 \
  --archive_segment_length=60 \
  --dvr_window_length=600 \
  --restart_on_encoder_reconnect
See the Starting with Live section for a full example.
A solution to re-using an existing publishing point is to specify a unique ID for the live presentation (the EventID).
This allows you to restart a publishing point from a stopped state. When the encoder stops broadcasting or loses connectivity with your ingest for some other reason, the publishing point switches to a stopped state, and a publishing point in the stopped state does not allow the encoder to re-connect. The reason is that when your broadcast stops and you have a DVR window configured, the remaining DVR window should stay directly available to your end users. In both the started and the stopped state a server manifest is still associated with the publishing point, which blocks a restart.
An EventID must be unique for each live session; otherwise the publishing point may fail to start due to a name conflict between archive session directories. Using a different EventID for every session allows you to restart a publishing point. The best method is to use a date and timestamp as the EventID, which guarantees that it is unique and therefore that the publishing point can always be restarted.
To demonstrate how to implement a unique EventID, we extend the script used in the Using FFmpeg for LIVE encoding tutorial with one new option added and one adjusted.
# use this for adding time + date timestamp
EVENT_ID=$(date +%Y-%m-%d-%H_%M_%S)

# change this so the EventID timestamp will be used
PUBPOINT_OPTIONS="/Events($EVENT_ID)/Streams(video)"
Starting an encoding session with the above example adds an extra line to your server manifest, referring to your current EventID. If you look in the server manifest you will find the following has been added:
<meta name="event_id" content="2013-01-01-10_15_25">
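If you script around the publishing point, you may want to read the current EventID back from the server manifest. A minimal sketch, assuming a meta line like the one shown above (the sed pattern is illustrative, not part of USP):

```shell
# Hypothetical manifest line; extract the value of the content attribute.
LINE='<meta name="event_id" content="2013-01-01-10_15_25">'
EVENT_ID=$(printf '%s' "$LINE" | sed -n 's/.*content="\([^"]*\)".*/\1/p')
echo "$EVENT_ID"
```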
In your publishing point directory you will find a new mapping which is called:
In this directory you will find the archived video files specific to the event.
When the session is restarted with a new EventID, a new mapping is created and the files from the previous session are left untouched. Restarting your encoding session using an EventID does not remove files from a previous session; they can be used for other purposes, or they can be removed, for example by a simple script that removes them after a restart or after a certain period.
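Such a cleanup script could look like the following sketch; the directory layout and the retention period are assumptions, so adapt them to your setup:

```shell
# Sketch: prune per-event archive directories older than a given number
# of days. The publishing point directory and retention are assumptions.
prune_events() {
  local dir="$1" days="$2"
  # each EventID has its own subdirectory; remove those past retention
  find "$dir" -mindepth 1 -maxdepth 1 -type d -mtime +"$days" -exec rm -r {} +
}

# example (hypothetical path): keep the last 7 days of event archives
# prune_events /var/www/live/channel1 7
```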
Specifics for Expression Encoder 4¶
Expression Encoder 4 has a built-in option for using EventIDs. However, using this feature in combination with USP prevents the encoder from reconnecting for a new session. It is advised not to use Expression Encoder's built-in EventID option.