Adaptive Bitrate (ABR) Streaming¶
The streaming module can also act as a Publishing Point.
A Publishing Point is simply a URL that accepts input streams from one or more software/hardware encoders.
The encoder should follow Interface 1 (i.e., CMAF ingest) of the DASH Specification of Live Media Ingest to send the audio/video fragments to the webserver. See the Supported Encoders section in the factsheet and LIVE Ingest for an overview.
Apache should be used for Live streaming (ingest and egress).
The client manifest that Origin generates for a live stream will switch from being 'dynamic' (i.e., Live) to being 'static' in two circumstances:
- When a live stream has ended and the encoder has sent an End of Stream signal (more info: Overview of possible publishing point 'states')
- When the end time of a virtual subclip (more info: Virtual subclips) from a livestream goes from being in the future to being in the past
This behavior is as expected and according to spec (e.g., see section 4.6 'Provisioning of Live Content in On-Demand Mode' of the DASH-IF Interoperability Points).
If you're publishing LIVE streams to the webserver module and no server manifest file is available, then the default settings are used.
If you want to change any settings (e.g. adding on-the-fly encryption) you have to generate the server manifest file before starting the encoder.
In its simplest form the command for creating a LIVE server manifest file is:
#!/bin/bash
mp4split -o /var/www/live/channel1/channel1.isml
Alternatively, the LIVE server manifest can also be created if the webserver module Live API is enabled as explained in Publishing Point API.
#!/bin/bash
mp4split -o http://api.example.com/live/channel1/channel1.isml
The extension of the LIVE server manifest is .isml
Note the absence of any input files in the LIVE case. When the encoder pushes a live stream to the webserver module for ingest, it is the Unified Origin that updates the LIVE server manifest file to include the stream information announced by the encoder.
Options available for VOD also apply to LIVE and are described in Options for VOD and LIVE streaming.
The Publishing Point API documentation outlines the available commands with which you can control a publishing point.
Output fMP4 HLS¶
New in version 1.8.3.
You can change the HLS output format so that it uses fMP4 instead of Transport Streams by adding the --hls.fmp4 command-line parameter when creating the LIVE server manifest.
#!/bin/bash
mp4split -o /var/www/live/channel1/channel1.isml --hls.fmp4
To stream HEVC and/or HDR content over HLS according to Apple's specification, using fMP4 is a requirement.
The webserver module needs permission to read and write the DocumentRoot. Depending on your setup you may have to add read/write permissions to the directory and files.
On Linux you change the permissions of the file to allow read and write for all by using chmod:
#!/bin/bash
chmod a+rw /var/www/live
IIS: Select wwwroot properties => select Security => select Internet Guest Account => Tick Allow 'Read' and 'Write'.
The URL to be passed to the encoder has the following format:
Encoders append the /Streams(<identifier>) section themselves, as specified in profile one of the DASH Specification of Live Media Ingest.
This means that you can simply use the URL ending in .isml with your encoder. For example:
With FFmpeg the /Streams(identifier) should be added to the URL:
Note that the trailing ID is a placeholder for your own identifier, which could be 'channel1' etc.
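As a concrete sketch (the hostname and identifier below are placeholders, not part of the original example), the full ingest URL an encoder like FFmpeg would POST to can be assembled like this:

```shell
#!/bin/bash
# Base publishing point URL, ending in .isml (placeholder hostname).
base_url="http://live.example.com/channel1/channel1.isml"

# Identifier for the stream; many encoders append this themselves,
# but with FFmpeg it must be part of the URL you pass in.
identifier="channel1"

ingest_url="${base_url}/Streams(${identifier})"
echo "${ingest_url}"
```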
Using query parameters to control LIVE ingest¶
An alternative way to set up options for a publishing point is to pass the options as query parameters. Options that are related to DRM cannot be passed.
Note that the encoder must allow for specifying a URL with query parameters, which is not supported by all encoders.
Taking Pure LIVE as an example:
#!/bin/bash
mp4split -o http://live.unified-streaming.com/channel1/channel1.isml \
  --archive_segment_length=60 \
  --dvr_window_length=30 \
  --archiving=0
The publishing point URL then becomes:
dvr_window_length
Length of the DVR moving window (default 30 seconds).
The dvr_window_length must be shorter than the archive_length.
archive_length
The length of the archive to be kept (in seconds). Archive segments beyond this range (measured from the live edge) will be automatically purged to free up disk storage. Note that the Origin will always have one (partial) 'open' live archive segment that it is writing to, which will not be purged.
The archive_length must be longer than the dvr_window_length.
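This constraint can be sanity-checked before creating the manifest; a minimal sketch, with example values only:

```shell
#!/bin/bash
# Example option values (in seconds); adjust for your own setup.
dvr_window_length=30
archive_length=120

# The DVR window must fit inside the retained archive.
if [ "${dvr_window_length}" -ge "${archive_length}" ]; then
  echo "error: dvr_window_length must be shorter than archive_length" >&2
  exit 1
fi
echo "ok: DVR window (${dvr_window_length}s) fits within archive (${archive_length}s)"
```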
archive_segment_length
If specified, the live presentation is archived in segments of the specified length (defaults to 0 seconds, meaning no segmentation takes place).
Do not make changes to this option on a publishing point that is in use, as doing so will break it. If you want to change your archive's segment length, you should set up a whole new publishing point.
archiving
When archive_segment_length is set, setting this option to 1 keeps the archive segments stored on disk (defaults to 0, meaning no archiving, so only the last two archive segments are kept on disk).
It is highly recommended to always enable archiving (--archiving=1) and to set all archiving related options explicitly, instead of relying on their defaults.
database_path
Specifies the location of the .db3 file, so that both ingest and playout share the same database. The path to the .db3 file must be absolute and is specified like this:
#!/bin/bash
mp4split --database_path=/var/www/live/channel00/channel00.db3 -o test.isml
restart_on_encoder_reconnect
When this option is enabled, an encoder can reconnect and keep posting to a stream even after that stream was 'stopped' by an End of Stream (EOS) signal (provided the stream layout is the same and the next timestamps are higher).
This is crucial when the encoder falls over and 'accidentally' sends the EOS signal. If the --restart_on_encoder_reconnect option is not enabled in such circumstances, the encoder will not be able to continue posting the livestream without a reset of the publishing point. Therefore, enabling this option is recommended.
time_shift
The time shift offset (in seconds). Defaults to 0.
Because the use of time_shift only affects which segments are announced in the client manifest, and the correct behavior for DASH clients is to calculate which segments are available based on the 'current' time, simply using time_shift may not result in the expected behavior (i.e., a DASH client may request the segments closest to the live edge irrespective of the specified time_shift).
To ensure correct behavior by DASH clients, offset the MPD@availabilityStartTime by a value equal to the time_shift (e.g., 3600 seconds).
This not only results in the latest segment announced in the client manifest being 3600 seconds behind the actual live edge, but also shifts the entire MPD timeline 3600 seconds into the future without changing the addressing of the actual segments. So when a DASH client calculates the latest media segment that is available in this scenario, it will now request content from about an hour ago.
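To make the arithmetic concrete, here is a small sketch of the client's point of view when both time_shift and the MPD@availabilityStartTime offset are 3600 seconds (the values are illustrative, not Origin defaults):

```shell
#!/bin/bash
time_shift=3600            # configured time_shift, in seconds

now=$(date -u +%s)         # the client's 'current' wall-clock time
# Shifting MPD@availabilityStartTime by the same amount means the DASH
# client computes the latest available segment against a timeline that
# runs time_shift seconds behind the actual live edge.
client_live_edge=$((now - time_shift))

echo "client requests content from ${time_shift}s behind the live edge"
```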
The options are related as depicted by the following picture:
                  dvr_window_length
                        |       |
|-------------------------------*------|  archive_length
                                       ^ live point
                        < time_shift   |
                                ^ (new 'live' point)

1. Each '-' is an archive segment, set by 'archive_segment_length'.
2. 'archive_length' is used to set the total length of the archive.
3. 'archiving' is used to turn the feature on or off (without archiving only two segments are kept on disk).
4. 'time_shift' offsets the live point (and DVR window) backwards, but within the 'archive_length'.
The following commands create a LIVE server manifest file for presentations where only a very short archive of two segments of 60 seconds is stored on disk, and the DVR window available is 30 seconds:
#!/bin/bash
mp4split -o http://live.example.com/channel1/channel1.isml \
  --archive_segment_length=60 \
  --dvr_window_length=30 \
  --archive_length=120 \
  --archiving=1
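Under these settings the retained archive works out as follows (simple arithmetic using the values from the command above):

```shell
#!/bin/bash
archive_segment_length=60   # seconds per archive segment
archive_length=120          # total archive retained, in seconds

# Number of complete archive segments kept on disk before purging.
segments_kept=$((archive_length / archive_segment_length))
echo "segments kept on disk: ${segments_kept}"
```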
Another example is when you are publishing a stream 24/7 and would like to keep each day in a separate archived file so you can make this available as VOD afterwards:
#!/bin/bash
mp4split -o http://live.example.com/24-7/24-7.isml \
  --archive_segment_length=86400 \
  --dvr_window_length=30 \
  --archiving=1
Let's create a server manifest that keeps a 1 hour archive (--archive_length=3600), writes the content to disk in 1 minute chunks (--archive_segment_length=60), and allows the viewer to rewind 10 minutes back in time (--dvr_window_length=600):
#!/bin/bash
mp4split -o http://live.example.com/channel1/channel1.isml \
  --archiving=1 \
  --archive_length=3600 \
  --archive_segment_length=60 \
  --dvr_window_length=600 \
  --restart_on_encoder_reconnect
See the Getting Started with Origin - Live section for a full example.
A live stream is considered an event when all content is archived on disk and nothing is purged. That is, the archive will be as long as the duration of the entire event.
In general, events have an infinite DVR window as well, so that it's always possible to scrub back to the beginning of the event.
#!/bin/bash
mp4split -o channel1.isml \
  --archiving=1 \
  --dvr_window_length=0 \
  --archive_length=0
When you specify an infinite DVR window (--dvr_window_length=0), the HLS Media Playlist will contain specific signaling to indicate that the stream is an event: '#EXT-X-PLAYLIST-TYPE:EVENT'. See also Apple's HLS documentation on 'Event Playlist Construction'.
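For reference, here is a hand-written snippet of what the header of such an Event-style Media Playlist looks like (segment names and durations are illustrative, not actual Origin output):

```shell
#!/bin/bash
# Write an illustrative Event-style Media Playlist to a temp file.
playlist="$(mktemp)"
cat > "${playlist}" <<'EOF'
#EXTM3U
#EXT-X-VERSION:6
#EXT-X-TARGETDURATION:4
#EXT-X-PLAYLIST-TYPE:EVENT
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:4.0,
segment-0.m4s
EOF

# An EVENT playlist only ever appends segments, so players may scrub
# back to the start. Check for the signaling line:
grep '^#EXT-X-PLAYLIST-TYPE:EVENT$' "${playlist}"
```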
To make re-using an existing publishing point possible, an 'EventID' can be specified for a Live presentation. When using an EventID, Unified Origin will store the stream's Live archive and SQLite database in a subdirectory, of which the name is equal to the EventID. This allows you to stop a live stream with one EventID, and start a new live stream pointed at the same publishing point using a different EventID.
To add an EventID to a Live presentation, an encoder should specify the EventID in the URL of the publishing point to which it POSTs the live stream. This is done like so (where <EventID> should be replaced with the actual identifier for the event):
Starting an encoding session with a specified EventID will add an extra line to a server manifest, referring to the EventID:
<meta name="event_id" content="2013-01-01-10_15_25">
Given the example above, the stream's Live archive and SQLite database would be stored within the following (automatically created) subdirectory of the publishing point:
Do note that a unique EventID must be used for each Live presentation that makes use of the same publishing point. The best way to achieve this is to use a stream's start date and time as its EventID.
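Following that recommendation, an EventID in the same shape as the earlier example ('2013-01-01-10_15_25') can be derived from the stream's start time; a minimal sketch:

```shell
#!/bin/bash
# Derive a unique EventID from the stream's start date and time (UTC),
# matching the '2013-01-01-10_15_25' pattern used in the example above.
event_id="$(date -u '+%Y-%m-%d-%H_%M_%S')"
echo "EventID: ${event_id}"
```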
Playout of streams with different EventIDs¶
When a publishing point is re-used with a new EventID, the server manifest will be associated with the new instead of the old event. Thus, from then on, all requests for client manifests will be associated with the new event, if no specific EventID is specified in the request.
To specify an EventID in a request, use the following syntax (where <EventID> should be replaced with the actual EventID, and Manifest may be replaced to specify any other output format):
Specifics for Expression Encoder 4¶
Expression Encoder 4 has a built-in option for using EventIDs. However, using this feature in combination with USP will cause the encoder to not reconnect for a new session. Therefore, it is advised not to use the Expression Encoder's built-in option for EventIDs.