We (Vysakh Premkumar and I) wanted a livestream solution for FOSSMeet'24 that was lightweight and manageable. For FOSSMeet'23, Vysakh had managed to set up an instance of PeerTube and had a last-minute hacky setup to record speakers' sessions. I recall him sitting in front of Aryabhatta with his laptop connected to a DSLR (in loose contact, of course) in what seemed to be a tense jungle. The end result was that we had many video recordings which were broken apart; some didn't have audio, while others were not properly focused.

PeerTube seemed good, but it is a very heavy piece of software, and all we needed was one live stream up and running. Vysakh came to me after doing some research: he had found that RTMP (Real-Time Messaging Protocol) is the popular protocol in the streaming scene (used by giants like YouTube and Twitch). I also read up on RTSP (Real Time Streaming Protocol), which sounds more like it was designed for streaming, but apparently RTMP has been more popular historically.

In order for nginx to support RTMP, we need an RTMP module for nginx which also supports HLS. You have to rebuild nginx with this module added to its build configuration.
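For reference, a build might look roughly like this (the nginx version is illustrative, and the extra SSL flag is an assumption that comes in handy later for HTTPS; prebuilt container images package the same thing):

```shell
# Fetch the nginx-rtmp module and an nginx source tarball (version illustrative)
git clone https://github.com/arut/nginx-rtmp-module.git
wget https://nginx.org/download/nginx-1.24.0.tar.gz
tar xf nginx-1.24.0.tar.gz
cd nginx-1.24.0

# Compile the module into nginx at configure time
./configure --add-module=../nginx-rtmp-module --with-http_ssl_module
make
sudo make install
```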

HLS stands for HTTP Live Streaming; it is well supported and can be played on practically any device. The problem with RTMP is that RTMP streams are not natively supported by browsers. I am not quite sure how the giants use it, but I assume they accept RTMP for ingest and repackage the stream into a browser-friendly format for playback. HLS is relatively newer and supports adaptive bitrate, though its segment-based delivery typically means higher latency than RTMP.

The rtmp module linked above supports exposing the received data as an HLS stream. For this, notice the following in the nginx config which we used as a template:

application hls {
    live on;
    hls on;
    hls_fragment_naming system;
    hls_fragment 5;
    hls_playlist_length 10;
    hls_path /opt/data/hls;
    hls_nested on;

    hls_variant _720p2628kbs BANDWIDTH=2628000,RESOLUTION=1280x720;
    hls_variant _480p1128kbs BANDWIDTH=1128000,RESOLUTION=854x480;
    hls_variant _360p878kbs BANDWIDTH=878000,RESOLUTION=640x360;
    hls_variant _240p528kbs BANDWIDTH=528000,RESOLUTION=426x240;
    hls_variant _240p264kbs BANDWIDTH=264000,RESOLUTION=426x240;
}

This block takes the incoming RTMP traffic and generates HLS files from it (hls on). The HLS files are stored under hls_path.

We used a command like

ffmpeg -re -stream_loop -1 -i ~/vid.mp4 -c copy -f flv rtmp://live.fosscell.org/hls/streamkey

to initiate an RTMP connection to our server (-re reads the input at its native frame rate, -stream_loop -1 loops the file forever, -c copy sends it without re-encoding, and -f flv forces the FLV container that RTMP expects). The server then proceeds to create *.ts and index.m3u8 files inside the hls_path/streamkey directory (autogenerated within the web server host).

As long as the stream is live, these files will exist and hold a rolling window of the most recent stream data.
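For context, index.m3u8 is a rolling media playlist; with our settings (5-second fragments, 10-second playlist) it holds roughly two segments at a time. A sketch of its contents (with hls_fragment_naming system the segment names come from system timestamps, so the numbers here are illustrative):

```
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-MEDIA-SEQUENCE:42
#EXT-X-TARGETDURATION:5
#EXTINF:5.000,
1709112345678.ts
#EXTINF:5.000,
1709112350678.ts
```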

Now we need to enable access to these files over HTTP. In the same config file:

http {
    server {
        listen 80;

        location /hls {
            types {
                application/vnd.apple.mpegurl m3u8;
                video/mp2t ts;
            }
            root /opt/data;
        }
    }
}

This basic nginx server block sets the correct root path so that the files can be accessed. From here, in our example, live.fosscell.org/hls/streamkey/index.m3u8 gave us our stream.
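A quick way to sanity-check the endpoint (using our example URL) is to fetch the playlist directly, or hand it straight to a player:

```shell
# Dump the playlist to confirm the stream is up
curl -s http://live.fosscell.org/hls/streamkey/index.m3u8

# Inspect codecs/resolution, or just watch it
ffprobe -hide_banner http://live.fosscell.org/hls/streamkey/index.m3u8
ffplay http://live.fosscell.org/hls/streamkey/index.m3u8
```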

There were, however, issues regarding CORS (Cross-Origin Resource Sharing), which kicks in when you try to access resources from another origin (in our case, another domain). We intended the livestream to be embedded inside FOSSMeet's website, which had a different domain than our streaming server. The following headers help mitigate that.

add_header Cache-Control no-cache;
add_header Access-Control-Allow-Origin *;

(We allowed all origins with *; restricting it to the website's domain would be tighter.)

In addition to this, we didn't use HTTPS in our initial config, so Mixed Active Content errors also prevented the browser from loading HTTP resources alongside HTTPS resources. We used Certbot certonly for certificates and mounted them in our nginx-rtmp container. The HTTP site was then redirected to HTTPS, and this issue was also solved. The following image summarizes the dataflow.
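The HTTPS side of our setup looked roughly like this (a sketch: the certificate paths follow Certbot's standard layout, and the domain is our example one):

```nginx
server {
    listen 80;
    server_name live.fosscell.org;
    # Redirect plain HTTP to HTTPS to avoid mixed content errors
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name live.fosscell.org;
    ssl_certificate     /etc/letsencrypt/live/live.fosscell.org/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/live.fosscell.org/privkey.pem;

    location /hls {
        types {
            application/vnd.apple.mpegurl m3u8;
            video/mp2t ts;
        }
        root /opt/data;
        add_header Cache-Control no-cache;
        add_header Access-Control-Allow-Origin *;
    }
}
```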

[architecture diagram]

When a stream is being pushed, a directory named after the application (app_name) is created in the hls_path specified in the rtmp block of your config. In that directory, another directory named after the stream_key is generated. This directory is dynamic in nature and only exists while there is a live stream corresponding to that stream key.

The associated .ts files and index.m3u8 are what HLS players use for playback. We didn't use multiple bitrates for our stream, hence fewer .ts files are generated. If you choose to use multiple qualities (bitrates), like in the config snippet above with 720p etc., the rtmp module will generate multiple sets of files and the client should include logic to select the quality appropriately.
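For the multi-bitrate case, the module writes a master playlist that points at each variant's nested playlist; the publisher has to push each rendition under a name carrying the matching suffix (e.g. streamkey_720p2628kbs). A sketch of what that master playlist would contain, based on the variants in the config above ("streamkey" is a placeholder):

```
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=2628000,RESOLUTION=1280x720
streamkey_720p2628kbs/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=1128000,RESOLUTION=854x480
streamkey_480p1128kbs/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=878000,RESOLUTION=640x360
streamkey_360p878kbs/index.m3u8
```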

[directory listing]

One security issue I noticed was that in the default config, the HTTP server's root is the same as the directory holding the rtmp stream's files. This would mean viewers fetch the files over HTTP using the same (supposedly secret) stream key in the URL. Instead of this, I created another root directory for the HTTP server called /opt/hlsdata, containing a directory named live which symlinks to the rtmp stream's directory (the one named after the stream key). This symlink will be broken if the stream is not live, and will serve the .ts and .m3u8 files if the stream is live.
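The symlink arrangement is just a couple of commands (paths as in our setup; "streamkey" is a placeholder for the real key):

```shell
# Separate HTTP root so viewers never see the publish key in the URL
sudo mkdir -p /opt/hlsdata

# /opt/hlsdata/live -> the stream's directory under the rtmp hls_path
sudo ln -s /opt/data/hls/streamkey /opt/hlsdata/live
```

With the HTTP server's root changed to /opt/hlsdata, viewers fetch /live/index.m3u8; the link simply dangles (404s) whenever the stream is offline.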