Using FFmpeg for LIVE encoding

The following example sets up FFmpeg (1.2) to encode to a publishing point:


FFMPEG_OPTIONS="-movflags isml+frag_keyframe -f ismv -threads 0"
AUDIO_OPTIONS="-c:a libfaac -ac 2 -b:a 64k"
VIDEO_OPTIONS="-c:v libx264 -preset fast -profile:v baseline -g 48 -keyint_min 48 -sc_threshold 0"
MAP="-map 0:v -b:v:0 477k -s:v:0 368x152
  -map 0:v -b:v:1 331k -s:v:1 288x120
  -map 0:v -b:v:2 230k -s:v:2 224x92
  -map 0:a:0"

CMD="-y -re
  -i $1
  $FFMPEG_OPTIONS $AUDIO_OPTIONS $VIDEO_OPTIONS $MAP"

# $2: the publishing point URI to POST to
ffmpeg $CMD $2

The input (the $1 in the above example) can be a VOD clip, for instance an MP4.

The -re switch (read input at native frame rate) is added to simulate a live event.

Please note that other mappings are possible as well, for instance with higher bitrates and scales. An example:


MAP="-map 0:v -b:v:0 2877k -s:v:0 1280x720
  -map 0:v -b:v:1 1872k -s:v:1 720x404
  -map 0:v -b:v:2 1231k -s:v:2 704x396
  -map 0:v -b:v:3 830k -s:v:3 640x360"

For FFmpeg, please consult the FFmpeg documentation and the FFmpeg wiki; for Libav, see the Libav documentation.

For other encoders, please consult their manuals to set this up.

Continuous timestamps

FFmpeg has no -timestamp now option. This means you cannot stop FFmpeg and restart it against the same publishing point without recreating the publishing point: FFmpeg will always start at t=0 instead of continuing at an offset based on the current Unix epoch time.

There is a patch for libav that adds the -ism_offset option.

With this patch installed you can then restart 'avconv' with -ism_offset set to now * 10000000, where 'now' is the current time in seconds since the epoch; in bash this amounts to the following:


-ism_offset $(($(date +%s)*10000000))

This way using --restart_on_encoder_reconnect will also work with FFmpeg.
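Putting this together, a restart wrapper can compute a fresh offset on every run. The following is a sketch only, assuming the ism_offset patch is applied; $CMD and the publishing point URI are placeholders, and the avconv invocation is echoed rather than executed:

```shell
#!/bin/bash
# Sketch: compute a fresh ISM offset (in 100-ns units since the Unix
# epoch) at restart time, so timestamps continue from wall-clock time
# instead of restarting at t=0.
NOW=$(date +%s)                  # current time, seconds since the epoch
ISM_OFFSET=$((NOW * 10000000))   # 1 second = 10,000,000 * 100 ns

# Pass the offset on each restart (invocation shown, not executed here):
echo avconv \$CMD -ism_offset "$ISM_OFFSET" '<publishing point URI>'
```

Because the offset is derived from the wall clock, each restart resumes roughly where real time is, which is what keeps the publishing point's timeline continuous.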

The patch (ism_movenc.patch) can be downloaded and applied as follows:


git clone --depth 1 <libav repository URL> libav && \
  cd libav && \
  git apply ism_movenc.patch


Another way of handling the restart of an encoder is to use Event ID.

Encoder URLs

FFmpeg does not add the '/Streams(<identifier>)' section to the URL as described in Encoder URL.

You therefore have to add it yourself:
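For example, the section can be appended by hand when constructing the ingest URL; the server, channel, and stream identifier below are hypothetical:

```shell
#!/bin/bash
# Hypothetical publishing point; FFmpeg will not add the
# '/Streams(<identifier>)' section itself, so append it manually.
PUBPOINT="http://localhost/channel1/channel1.isml"
INGEST_URL="${PUBPOINT}/Streams(video)"

echo "$INGEST_URL"   # pass this URL to ffmpeg as the output
```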

Ingest RTMP

RTMP is a TCP/IP based protocol which FFmpeg supports through librtmp. Using this, it is possible to use FFmpeg as an intermediary step to translate, or even transcode, RTMP streams to HTTP streams.

FFmpeg can ingest RTMP and HTTP POST fragmented MP4 to a webserver capable of ingesting fragmented MP4, like USP enabled Apache.

The following commandline works with FFmpeg 1.2:


FFMPEG_OPTIONS="-movflags isml+frag_keyframe -f ismv -threads 0"
AUDIO_OPTIONS="-acodec copy"
VIDEO_OPTIONS="-vcodec copy"

ffmpeg -y -re \
 -i $1 \
 $FFMPEG_OPTIONS $AUDIO_OPTIONS $VIDEO_OPTIONS \
 $2   # $2: the publishing point URI to POST to

The input (the -i $1 in the above example) can then be an RTMP stream, for instance (hypothetical address):


rtmp://example.com/live/mystream


In this case the RTMP stream uses H.264/AAC, so it is not necessary to transcode. In other cases it might be different.


Please note that FFmpeg is not distributed by Unified Streaming; it is an OSS tool. Support for FFmpeg may be found in the FFmpeg community: please consult the FFmpeg documentation and the FFmpeg wiki. If you are using Libav, please consult the Libav documentation.

Monitoring FFmpeg live encoding

It can be useful to monitor the FFmpeg encoding process, for example to automatically start a live channel or to restart a channel in case of an encoding failure.

Add the following lines to your Monit configuration:

check process usp_live_loop with pidfile /var/run/
  start program = "/etc/init.d/usp_live_loop start"
  stop program  = "/etc/init.d/usp_live_loop stop"

Create a bash script in /etc/init.d/usp_live_loop:


case $1 in
 start)
  echo $$ > /var/run/
  su typo -c "make -C /opt/usp -B loop" ;;
 stop)
  kill `cat /var/run/` ;;
 *)
  echo "usage: usp_live_loop {start|stop}" ;;
esac

Please note that to use the above you will need a makefile with the 'loop' target. This makefile is not presented here, but it creates the publishing point. Instead of make, you can use any general-purpose scripting language (Perl, Python, etc.).
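As an illustration, the restart logic itself can also live in a small bash loop instead of a makefile. This is a sketch under the assumption that the publishing point already exists; the input path and ingest URL are hypothetical placeholders:

```shell
#!/bin/bash
# Sketch of a restart loop: keep re-running the encoder so a transient
# encoding failure does not take the channel down permanently.
INPUT=/opt/usp/demo.mp4                                   # placeholder
INGEST_URL="http://localhost/channel1.isml/Streams(video)" # placeholder

run_encoder() {
  ffmpeg -y -re -i "$INPUT" \
    -movflags isml+frag_keyframe -f ismv -threads 0 \
    "$INGEST_URL"
}

# Uncomment to run the loop for real:
# while true; do
#   run_encoder
#   sleep 5   # back off briefly before restarting
# done
```

Monit would then supervise this script in the same way as the init.d script above.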