How I Improved Video Streaming Efficiency with FFmpeg and Node.js

Generating a thumbnail, compressing a video, generating a preview clip, and generating HLS segments using FFmpeg

Introduction

Nowadays, videos are everywhere. If you want to entertain yourself, you will most likely watch a movie; if you want to learn something new, you will most likely reach for a video tutorial; and so on.

But is dealing with videos as easy as it sounds? Unfortunately, not. For developers, serving large video files can become a nightmare, leading to significant performance issues and user frustration.

Therefore, optimizing videos is crucial for providing a smooth user experience.

Disclaimer
Video processing is a big topic that could be considered a specialization in itself, so here I am just sharing my experience and the approaches that worked for me after much trial and error.

👨‍💻 You can find the used code and a complete example in this repo.

FFmpeg to the rescue

In this article, I’ll share with you how I optimized uploaded videos using Node.js. For this purpose, I relied on a powerful multimedia processing tool called FFmpeg which enabled me to handle some interesting use cases like:

  • Generating a thumbnail.
  • Video compression.
  • Generating a preview clip.
  • Generating the HLS segments.

And I will walk you through them later in this article, but first, let’s introduce our main dependencies.

  1. ffmpeg-static: This package provides static binaries of FFmpeg for various operating systems, including macOS, Linux, and Windows. It allows you to easily integrate FFmpeg into your applications without needing to install it separately on the system.
  2. fluent-ffmpeg: This is a wrapper around FFmpeg that simplifies its usage in Node.js applications. It provides a more user-friendly API for constructing and executing FFmpeg commands, making it easier to manipulate audio and video files.
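
Before jumping into the use cases, here is the small setup shared by all the snippets in this article. It wires the static FFmpeg binary into fluent-ffmpeg (a minimal sketch; all file paths used throughout are placeholders):

const ffmpegStatic = require('ffmpeg-static');
const ffmpeg = require('fluent-ffmpeg');

// Tell fluent-ffmpeg where to find the binary shipped by ffmpeg-static
ffmpeg.setFfmpegPath(ffmpegStatic);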

Well, after introducing our dependencies, let’s jump into our use cases.

Generating a thumbnail

In this case, I am trying to generate a single thumbnail for a video. Have a look at the following code.

ffmpeg('/path/to/video.mp4')
  .outputOptions([
    '-ss 00:00:01', // Seek to 1 second into the video for the thumbnail
    '-vframes 1', // Capture one frame
    '-q:v 5', // Set output quality (lower is better)
    '-vf scale=300:-1' // Scale while maintaining aspect ratio
  ])
  .save('/path/to/thumbnail.jpg')
  .on('end', () => {
    console.log('Thumbnail has been generated successfully!');
  })
  .on('error', (err) => {
    console.log(`Error: ${err.message}`);
  });

Let’s break down the passed options:

  • -ss 00:00:01: This option tells FFmpeg to seek to the 1-second mark of the video before capturing a frame. It’s useful for selecting a specific moment in the video for the thumbnail.
  • -vframes 1: This option specifies that only one frame should be captured from the video. Essentially, it tells FFmpeg to stop processing after it has extracted this single frame.
  • -q:v 5: This sets the output quality of the image. The value ranges from 1 (highest quality) to 31 (lowest quality). A value of 5 strikes a good balance between quality and file size, making it suitable for thumbnails.
  • -vf scale=300:-1: This applies a video filter (vf) to scale the output image. The width is set to 300 pixels, and -1 means that the height will be automatically calculated to maintain the original aspect ratio of the video.

There is another use case related to this one: you can generate multiple thumbnails at specified intervals by looping through them. For example, you can generate one every 10 seconds, as shown in the sketch below.
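
Here is a minimal sketch of that idea. The 10-second interval and the hard-coded duration are assumptions for illustration; in a real application you would read the duration first (for example with ffprobe):

// One thumbnail every 10 seconds.
const intervalInSeconds = 10;
const totalDurationInSeconds = 60; // Assumed known in this sketch

for (let second = 0; second < totalDurationInSeconds; second += intervalInSeconds) {
  ffmpeg('/path/to/video.mp4')
    .outputOptions([
      `-ss ${second}`, // Seek to the current interval
      '-vframes 1', // Capture one frame
      '-q:v 5', // Output quality
      '-vf scale=300:-1' // Scale while maintaining aspect ratio
    ])
    .save(`/path/to/thumbnail-${second}.jpg`)
    .on('error', (err) => {
      console.log(`Error: ${err.message}`);
    });
}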

Video compression

In this use case, I am trying to minimize the video size as much as possible by compressing it, while balancing factors like file size, encoding time, and quality. This is the code I used.

ffmpeg('/path/to/video.mp4')
  .outputOptions([
    '-c:v libx264', // Video codec
    '-preset veryfast', // Fast encoding with reasonable quality and file size
    '-movflags +faststart', // Optimize for web streaming
    '-crf 27', // Constant Rate Factor for quality
    '-tag:v avc1' // Tag for QuickTime compatibility
  ])
  .save('/path/to/compressed/video.mp4')
  .on('end', () => {
    console.log('Video has been compressed successfully!');
  })
  .on('error', (err) => {
    console.log(`Error: ${err.message}`);
  });

Let’s break down the passed options:

  • -c:v libx264: This sets the video codec to libx264, which is a widely used codec for encoding H.264 video. It provides a good balance of quality and compression efficiency.
  • -preset veryfast: The preset option controls the speed of encoding. The veryfast preset allows for quicker encoding times while still maintaining reasonable quality and file size. The presets range from ultrafast (least compression, fastest) to veryslow (most compression, slowest).
  • -crf 27: The Constant Rate Factor (CRF) controls the quality of the output video. Lower values result in better quality; typical values range from 18 (high quality) to 28 (lower quality). A CRF of 27 indicates a focus on smaller file sizes at the expense of some quality.
  • -movflags +faststart: This option is useful for optimizing videos for streaming over the web. It moves the metadata to the beginning of the file, allowing playback to start before the entire file is downloaded.
  • -tag:v avc1: This sets a specific tag for the video stream, which can improve compatibility with certain players, especially QuickTime. The avc1 tag indicates that the video stream uses H.264 encoding.

When talking about compression, you can consider other options, but you have to weigh the tradeoffs of each:

  • Two-pass compression: The encoding process is divided into two separate passes over the video file, which improves quality at a given target bitrate (see the sketch after this list). On the other hand, it increases the encoding time and needs more CPU and memory resources.
  • Using the H.265 codec instead of H.264: This can lead to better compression efficiency and support for higher resolutions (like 8K). On the other hand, you may run into compatibility issues, longer encoding times, and a higher demand for computational resources.
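
To illustrate the first option, here is a minimal sketch of two-pass encoding. Note that two-pass targets a specific bitrate rather than a CRF value; the 1000k bitrate is an assumption, and /dev/null is POSIX-specific (use NUL on Windows):

// Pass 1: analyze the video and write stats; the output itself is discarded.
ffmpeg('/path/to/video.mp4')
  .outputOptions([
    '-c:v libx264',
    '-b:v 1000k', // Target bitrate (assumed for this sketch)
    '-pass 1',
    '-an', // Audio is not needed during analysis
    '-f mp4'
  ])
  .save('/dev/null') // Use 'NUL' on Windows
  .on('end', () => {
    // Pass 2: encode for real, reusing the stats gathered in pass 1.
    ffmpeg('/path/to/video.mp4')
      .outputOptions(['-c:v libx264', '-b:v 1000k', '-pass 2'])
      .save('/path/to/compressed/video.mp4')
      .on('end', () => console.log('Two-pass encoding finished!'))
      .on('error', (err) => console.log(`Error: ${err.message}`));
  })
  .on('error', (err) => console.log(`Error: ${err.message}`));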

Generating a preview clip

You may want to play a short preview clip when the user hovers over the video thumbnail. To do so, I used the following implementation.

ffmpeg('/path/to/video.mp4')
  .setStartTime('00:00:00')
  .duration('00:00:03')
  .outputOptions([
    '-c:v libx264', // Video codec
    '-preset veryfast', // Fast encoding with reasonable quality and file size
    '-movflags +faststart', // Optimize for web streaming
    '-crf 27', // Constant Rate Factor for quality
    '-tag:v avc1' // Tag for QuickTime compatibility
  ])
  .save('/path/to/preview-clip.mp4')
  .on('end', () => {
    console.log('The preview clip has been generated successfully!');
  })
  .on('error', (err) => {
    console.log(`Error: ${err.message}`);
  });

But what about the methods used here?

  • .setStartTime('00:00:00'): This method specifies the starting point for the clip. In this case, it starts from the very beginning of the video (0 seconds). You can adjust this value to start from any point in the video.
  • .duration('00:00:03'): This method sets the duration of the output clip. Here, it specifies that the clip should last for 3 seconds. The resulting preview will include only this segment of the original video.

As for the passed options, they are the same as in the compression case, because it is much better to compress the generated preview clip as well.

Generating the HLS segments

HLS (HTTP Live Streaming) is a protocol developed by Apple for delivering video and audio content over the internet. But how does it work?

  1. Segmentation: The video file is divided into small chunks (usually around 10 seconds long).
  2. Playlist: An index file (called an M3U8 playlist) is created, which lists these chunks and their order.
  3. Delivery: When you play a video, your device downloads these segments one by one, allowing for smooth playback.
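
To make this concrete, a media playlist for a single rendition might look something like this (a hypothetical example; the segment names and durations depend on your encoding settings):

#EXTM3U
#EXT-X-VERSION:6
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:10.0,
720p0.ts
#EXTINF:10.0,
720p1.ts
#EXTINF:7.5,
720p2.ts
#EXT-X-ENDLIST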

So then, how can this be useful for our video optimization process? Actually, it has two big advantages:

  1. HLS allows users to watch videos online without needing to download them first. Instead of one large file, the video is split into smaller segments that can be downloaded one by one.
  2. It adjusts the quality of the video based on fluctuations in the viewer’s network. If your connection is slow, HLS can lower the video quality to prevent buffering; if your connection improves, it can switch back to a higher quality.

Interesting, right? So let’s try to generate the HLS segments for two resolutions (360p and 720p):

const fs = require('fs');

const createMasterPlaylist = () => {
  const masterPlaylistContent = `
#EXTM3U
#EXT-X-VERSION:6
#EXT-X-STREAM-INF:BANDWIDTH=1500000,RESOLUTION=1280x720
720p.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=300000,RESOLUTION=640x360
360p.m3u8
`.trim();

  fs.writeFileSync('/path/to/master.m3u8', masterPlaylistContent);
};

for (const width of [360, 720]) {
  ffmpeg('/path/to/video.mp4')
    .outputOptions([
      '-c:v libx264', // Video codec
      '-preset veryfast', // Fast encoding with reasonable quality and file size
      '-movflags +faststart', // Optimize for web streaming
      '-crf 27', // Constant Rate Factor for quality
      '-tag:v avc1', // Tag for QuickTime compatibility

      '-f hls', // Output format
      '-hls_time 10', // Segment duration
      '-hls_list_size 0', // Include all segments in playlist
      '-hls_flags independent_segments' // Each segment can be decoded independently
    ])
    .output(`/path/to/${width}p.m3u8`)
    .videoFilter(`scale=${width}:-2`) // Scale width and maintain aspect ratio
    .on('progress', () => {
      console.log(`Generating the HLS ${width}p segments...`);
    })
    .on('end', () => {
      if (width === 720) {
        // Create the master manifest that lists the variant playlists
        createMasterPlaylist();
      }

      console.log(`All HLS segments for ${width}p have been generated successfully!`);
    })
    .on('error', (err) => {
      console.log(`Error: ${err.message}`);
    })
    .run();
}

Let’s break down the passed options:

  • -f hls: Specifies that the output format should be HLS.
  • -hls_time 10: Sets the duration of each HLS segment to 10 seconds. Each segment will be approximately this length, which helps in smooth playback.
  • -hls_list_size 0: Indicates that there should be no limit on the number of entries in the playlist (M3U8 file). All segments will be included in the playlist.
  • -hls_flags independent_segments: When this flag is set, it ensures that each segment starts with a keyframe (I-frame) and can be played back without needing to reference previous segments. This is beneficial for seeking and improving playback performance, especially in adaptive streaming scenarios where users might jump around within the video.

And for the methods used:

  • .videoFilter(`scale=${width}:-2`): FFmpeg will resize the video to a width of ${width} pixels and calculate the corresponding height based on the original aspect ratio. Using the value -2 ensures that the height is an even number (often required by certain codecs). This way, your video does not get distorted or stretched, which can happen if you set both dimensions manually without considering their ratio.
  • The .run() method starts executing the FFmpeg command with all the specified options and settings.

Some considerations you have to keep in mind

  • Video processing is a time-consuming operation. That said, in my use case of handling posts in a timeline, I adopted an eventual consistency approach, which means the post is not available immediately, but only after its corresponding processing operation finishes (see the sketch after this list).
  • Video processing is a resource-intensive operation, so it is better to handle it on a separate, CPU-optimized server dedicated to this task.
  • In my opinion, using a multi-threaded programming language like C++ or Rust for video processing, instead of Node.js, would be way better.
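
To illustrate the first point, here is a minimal sketch of the eventual consistency flow with a simple in-memory queue. The processVideo and markPostAsReady helpers are hypothetical stand-ins, not code from my actual project:

const ffmpeg = require('fluent-ffmpeg');

// Hypothetical helper: wraps one of the FFmpeg steps shown above in a Promise.
const processVideo = (inputPath, outputPath) =>
  new Promise((resolve, reject) => {
    ffmpeg(inputPath)
      .outputOptions(['-c:v libx264', '-preset veryfast', '-crf 27'])
      .save(outputPath)
      .on('end', resolve)
      .on('error', reject);
  });

// Hypothetical stand-in: in a real app this would update the database.
const markPostAsReady = (postId) =>
  console.log(`Post ${postId} is now visible in the timeline.`);

// Process uploads one at a time; a post stays hidden until its job finishes.
const queue = [];
let busy = false;

const enqueue = (job) => {
  queue.push(job);
  drain();
};

function drain() {
  if (busy || queue.length === 0) return;
  busy = true;
  const { inputPath, outputPath, postId } = queue.shift();
  processVideo(inputPath, outputPath)
    .then(() => markPostAsReady(postId))
    .catch((err) => console.log(`Error: ${err.message}`))
    .finally(() => {
      busy = false;
      drain();
    });
}

enqueue({
  inputPath: '/path/to/video.mp4',
  outputPath: '/path/to/compressed/video.mp4',
  postId: 42
});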

Conclusion

After introducing the challenges you might face when streaming video files, I hope you now have a good idea of how important it is to optimize your video files for a better user experience.

And for that, I introduced FFmpeg as a powerful tool for video processing and showed how you can use it to generate thumbnails, compress videos, create preview clips, and produce HLS segments for adaptive streaming.

In the end, you have to keep in mind that there is no one-size-fits-all solution; everything in our software world has tradeoffs, and you have to weigh them and make your decision based on your circumstances.

Think about it

If you liked this article, please rate and share it to spread the word; that really encourages me to create more content like this.

If you found my content helpful, it is a good idea to subscribe to my newsletter. Rest assured that I respect your inbox: I'll never spam you, and you can unsubscribe at any time!


Thanks a lot for staying with me up to this point. I hope you enjoyed reading this article.
