The IBC 2014 demo offers a good insight into how you can create a fully automated workflow for HD catch-up generation.
The HD Live to VOD and Playout Automated Workflow provides background and also outlines the technologies that are used together with Unified Capture.
When a Live presentation is streamed by Unified Origin and the Origin is set up to archive the presentation on disk, Unified Capture can create VOD clips from this Live archive. You simply specify a time range on the Live timeline, after which this time range is captured and stored as a VOD item.
By default, begin and end times are aligned to video key frames. To bypass this limitation, you can use Unified Capture's frame-accurate capture option.
Since the encoder is timestamping the audio/video fragments, it is the encoder that creates the timeline of the Live feed.
Often, the encoder's default is to start at a zero time point: whenever you start an encoding session, the timeline starts at zero instead of at a more meaningful time.
To have the encoder create a more meaningful timeline, you have to set the encoder to use Coordinated Universal Time (UTC). This gives the timeline a useful reference point so that you can match an existing EPG (Electronic Program Guide) to the timeline, for example.
The format used for specifying a time range is the ISO 8601 date-time format ("2013-03-31T12:34:56.000"). Capturing a half-hour show on the 31st of March from noon until 12:30 looks like this:
#!/bin/bash
unified_capture -o news-at-noon.ismv \
  "http://live.unified-streaming.com/channel01/channel01.isml/manifest?t=2013-03-31T12:00:00.000-2013-03-31T12:30:00.000"
Your shell may require quoting the input URL when it contains special characters.
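When automating catch-up generation, the input URL can be assembled from a program's begin and end times, for instance as taken from an EPG. A minimal Python sketch, using the server and channel from the example above (the `capture_url` helper is illustrative, not part of Unified Capture):

```python
from datetime import datetime

def capture_url(manifest_url, begin, end):
    """Append a 't=' time-range query to a Live manifest URL.

    The range uses the ISO 8601 date-time format with millisecond
    precision, as expected by Unified Capture.
    """
    fmt = "%Y-%m-%dT%H:%M:%S.%f"
    # strftime's %f yields microseconds; trim to milliseconds.
    t = f"{begin.strftime(fmt)[:-3]}-{end.strftime(fmt)[:-3]}"
    return f"{manifest_url}?t={t}"

url = capture_url(
    "http://live.unified-streaming.com/channel01/channel01.isml/manifest",
    datetime(2013, 3, 31, 12, 0, 0),
    datetime(2013, 3, 31, 12, 30, 0),
)
# url now matches the input URL in the example above.
```

The resulting URL can then be passed to unified_capture as in the previous example.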
The end time may be set in the future, that is, after the current live point.
Unified Capture captures up to the live point and will then continue capturing the fragments that become available from the publishing point in real-time.
When either the end time is reached or the presentation is closed, the capture will end as well.
This allows for close to real-time publishing of catch-up content: as soon as the show finishes, the catch-up version can be put live as well.
New in version 1.6.6.
Using the 'stitch' functionality, it is possible to create a new file that is a selection of clips from the original.
For instance, two clips of n seconds each can be concatenated into one by specifying the begin and end time of each clip.
The following table lists the begin and end points of the two clips used in the example below. The duration then is end - begin.

Clip  Begin                     End                       Duration
1     2014-01-30T15:02:45.960Z  2014-01-30T15:02:50.960Z  5s
2     2014-01-30T15:17:52.680Z  2014-01-30T15:17:57.680Z  5s
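The duration calculation is straightforward; for example, in Python, using the first clip's timestamps from the SMIL example below:

```python
from datetime import datetime

# Parse the wallclock timestamps of the first clip (UTC, 'Z' suffix).
fmt = "%Y-%m-%dT%H:%M:%S.%fZ"
begin = datetime.strptime("2014-01-30T15:02:45.960Z", fmt)
end = datetime.strptime("2014-01-30T15:02:50.960Z", fmt)

# Duration is simply end - begin.
duration = (end - begin).total_seconds()  # 5.0
```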
This information can be passed to Unified Capture using a SMIL file. This file should contain the begin and end times (that is, begin and begin + duration) of the clips that need to be stitched into a new clip.
Such a SMIL file looks like this:
<?xml version="1.0" encoding="utf-8"?>
<smil xmlns="http://www.w3.org/2001/SMIL20/Language">
  <head>
  </head>
  <body>
    <seq>
      <video src="http://usp-test/video.out/rtl8/rtl8.ism/Manifest"
             clipBegin="wallclock(2014-01-30T15:02:45.960Z)"
             clipEnd="wallclock(2014-01-30T15:02:50.960Z)" />
      <video src="http://usp-test/video.out/rtl8/rtl8.ism/Manifest"
             clipBegin="wallclock(2014-01-30T15:17:52.680Z)"
             clipEnd="wallclock(2014-01-30T15:17:57.680Z)" />
    </seq>
  </body>
</smil>
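A stitch list like this can also be generated programmatically, for example when the clip boundaries come from an EPG or editing system. A minimal Python sketch (the `build_smil` helper and its arguments are illustrative, not part of Unified Capture):

```python
import xml.etree.ElementTree as ET

SMIL_NS = "http://www.w3.org/2001/SMIL20/Language"

def build_smil(src, clips):
    """Build a SMIL stitch list: one <video> element per
    (begin, end) pair of wallclock timestamps."""
    ET.register_namespace("", SMIL_NS)
    smil = ET.Element(f"{{{SMIL_NS}}}smil")
    ET.SubElement(smil, f"{{{SMIL_NS}}}head")
    body = ET.SubElement(smil, f"{{{SMIL_NS}}}body")
    seq = ET.SubElement(body, f"{{{SMIL_NS}}}seq")
    for begin, end in clips:
        ET.SubElement(seq, f"{{{SMIL_NS}}}video", {
            "src": src,
            "clipBegin": f"wallclock({begin})",
            "clipEnd": f"wallclock({end})",
        })
    return ('<?xml version="1.0" encoding="utf-8"?>\n'
            + ET.tostring(smil, encoding="unicode"))

doc = build_smil(
    "http://usp-test/video.out/rtl8/rtl8.ism/Manifest",
    [("2014-01-30T15:02:45.960Z", "2014-01-30T15:02:50.960Z"),
     ("2014-01-30T15:17:52.680Z", "2014-01-30T15:17:57.680Z")],
)
```

Writing `doc` to a file such as clips.smil produces an input suitable for the command shown below.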
The SMIL file can be passed to Capture on the command line, much like in the previous examples:
#!/bin/bash
unified_capture -o video.ismv \
  clips.smil
All the bitrates in the stream are captured into a single file, which can then be moved to local storage for catch-up or other purposes.