The IBC 2014 demo gives good insight into the possibilities of creating a fully automated workflow for HD catch-up generation.
The HD Live to VOD and Playout Automated Workflow provides the background and outlines the technologies used together with Unified Capture.
When a Live presentation is ingested by the USP webserver module and the publishing point is set up to archive the presentation to disk, Unified Capture can create a VOD clip from the live archive. You specify a time range ([begin, end>) on the live timeline, and this range is captured and stored as a VOD item.
The begin and end times are aligned to fragment boundaries, because for proper playback the newly created video has to start on a key (IDR) frame. See also Capturing LIVE.
Since the encoder timestamps the audio/video fragments, it is the encoder that creates the timeline of the live feed.
Often the encoder's default is to start at a zero time point: whenever you start an encoding session, that moment becomes time zero. This is not a useful timeline; instead, configure the encoder to use Coordinated Universal Time (UTC). This gives the timeline a useful reference point, so you can, for example, match an existing EPG (Electronic Program Guide) to the timeline.
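As a sketch of how such UTC timestamps can be produced, GNU date can format the begin and end points of a program into the ISO 8601 form the capture URL expects (the channel URL below is a hypothetical placeholder, not an actual publishing point):

```shell
#!/bin/bash
# Sketch: build the begin/end points of a capture request as ISO 8601 UTC
# timestamps with GNU date. The channel URL is a hypothetical placeholder.
BEGIN=$(date -u -d "2013-03-31 12:00:00 UTC" +"%Y-%m-%dT%H:%M:%S.000")
END=$(date -u -d "2013-03-31 12:30:00 UTC" +"%Y-%m-%dT%H:%M:%S.000")
echo "http://example.com/channel01/channel01.isml/manifest?t=${BEGIN}-${END}"
```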
The format used for specifying a time range is the ISO 8601 date-time format ("2013-03-31T12:34:56.000"). Capturing a half-hour show on the 31st of March, from noon until 12:30, looks like this:
#!/bin/bash
unified_capture -o news-at-noon.ismv \
  "http://live.unified-streaming.com/channel01/channel01.isml/manifest?t=2013-03-31T12:00:00.000-2013-03-31T12:30:00.000"
Your shell may require quoting the input URL if it contains special characters.
The end time may be set in the future, that is, after the current live point.
Unified Capture captures up to the live point and then continues in real time, capturing the fragments as they become available from the publishing point.
When either the end time is reached or the presentation is closed, the capture ends as well.
This allows for near real-time publishing of catch-up content: as soon as the show finishes, the catch-up version can go live as well.
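As a sketch of such a scheduled capture (the channel URL is a hypothetical placeholder, and the script only prints the resulting invocation rather than running it), the end time can be set 30 minutes past the current live point:

```shell
#!/bin/bash
# Sketch: a capture whose end time lies 30 minutes past the current live
# point; unified_capture would follow the live edge in real time until END
# is reached or the publishing point is closed. The channel URL is a
# hypothetical placeholder; this script only prints the invocation.
NOW=$(date -u +%s)
BEGIN=$(date -u -d "@${NOW}" +"%Y-%m-%dT%H:%M:%S.000")
END=$(date -u -d "@$((NOW + 1800))" +"%Y-%m-%dT%H:%M:%S.000")
echo unified_capture -o show.ismv \
  "\"http://example.com/channel01/channel01.isml/manifest?t=${BEGIN}-${END}\""
```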
New in version 1.6.6.
Using the 'stitch' functionality, it is possible to create a new file that is a selection of clips from the original.
For instance, two clips of n seconds each can be concatenated into one by using the begin and end time of each clip.
The following table lists the begin and end points of two clips; the duration is end - begin.

Clip  Begin                     End                       Duration
1     2014-01-30T15:02:45.960Z  2014-01-30T15:02:50.960Z  5 seconds
2     2014-01-30T15:17:52.680Z  2014-01-30T15:17:57.680Z  5 seconds
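As a quick sanity check, GNU date can verify that duration = end - begin for the first clip by converting the wallclock times to epoch seconds:

```shell
#!/bin/bash
# Verify duration = end - begin for the first clip by converting the
# wallclock times to epoch seconds with GNU date.
BEGIN="2014-01-30T15:02:45.960Z"
END="2014-01-30T15:02:50.960Z"
DURATION=$(( $(date -u -d "$END" +%s) - $(date -u -d "$BEGIN" +%s) ))
echo "${DURATION} seconds"   # prints "5 seconds"
```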
To pass this information to Unified Capture, create a SMIL file containing the begin and end time (begin + duration) of each clip to be stitched into the new file.
Such a SMIL file looks like this:
<?xml version="1.0" encoding="utf-8"?>
<smil xmlns="http://www.w3.org/2001/SMIL20/Language">
  <head>
  </head>
  <body>
    <seq>
      <video src="http://usp-test/video.out/rtl8/rtl8.ism/Manifest"
        clipBegin="wallclock(2014-01-30T15:02:45.960Z)"
        clipEnd="wallclock(2014-01-30T15:02:50.960Z)" />
      <video src="http://usp-test/video.out/rtl8/rtl8.ism/Manifest"
        clipBegin="wallclock(2014-01-30T15:17:52.680Z)"
        clipEnd="wallclock(2014-01-30T15:17:57.680Z)" />
    </seq>
  </body>
</smil>
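Such a SMIL file can also be generated from a list of begin/end pairs. The sketch below, using the same manifest URL and clip times as the example above, writes clips.smil:

```shell
#!/bin/bash
# Sketch: generate a stitch SMIL file from a list of "begin end" pairs,
# one pair per line. URL and times are taken from the example above.
SRC="http://usp-test/video.out/rtl8/rtl8.ism/Manifest"
CLIPS="2014-01-30T15:02:45.960Z 2014-01-30T15:02:50.960Z
2014-01-30T15:17:52.680Z 2014-01-30T15:17:57.680Z"

{
  echo '<?xml version="1.0" encoding="utf-8"?>'
  echo '<smil xmlns="http://www.w3.org/2001/SMIL20/Language">'
  echo '  <head></head>'
  echo '  <body>'
  echo '    <seq>'
  while read -r BEGIN END; do
    echo "      <video src=\"${SRC}\""
    echo "        clipBegin=\"wallclock(${BEGIN})\""
    echo "        clipEnd=\"wallclock(${END})\" />"
  done <<< "$CLIPS"
  echo '    </seq>'
  echo '  </body>'
  echo '</smil>'
} > clips.smil
```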
The SMIL file is then passed to Unified Capture on the command line, much like in the previous examples:
#!/bin/bash
unified_capture -o video.ismv \
  clips.smil
All the bitrates in the stream are captured into one single file, which can then be moved into Local Storage to be used for catch-up or other purposes.