====== Final Rendering in x264 with ffmpeg ======

Whether you are working with **FFmpeg**, **Avidemux** or **Olive Video Editor**, you may wonder which parameters to use for the final rendering of a video montage. Here are my choices for some specific purposes.

===== Encoding for High Definition Video =====

There is no standardized meaning for //high-definition//; generally the minimum standard is **720p**, a progressive HD signal where each frame is 1280x720 pixels. Many mid-sized TV sets produced in the early 2000s, known as //HD Ready//, were capable of **1280x720**, **1360x768** or **1366x768**. The next evolutionary step, called //Full HD// or **1080p**, has a frame size of **1920x1080** and is common in both interlaced and progressive mode.

At first we aimed to produce videos for entry-level High Definition, so we chose a target resolution of **1366x768** pixels. After all, we have a 32-inch television capable only of 1366x768 pixels. Our main source of videos is the Xiaomi Yi action camera, which records Full HD videos (**1920x1080 pixels**) at a variable bitrate of **12.0 Mb/s**. So we re-encode the final montage with the following settings:

^ Video codec | MPEG-4 AVC (x264) |
^ Video filter | swresize, 1366x768, Bilinear |
^ Basic x264 | Preset: **slow** (or less), Tuning: **film**, Profile: **High**, IDC Level: **Auto** |
^ Video encoding | Average Bitrate (Two Pass), 4096 kb/s (about 1.8 GB per hour; see the two-pass sketch below) |
^ Pixel format | Our source videos use the **yuvj420p pixel format**, so we use the same for the final rendering (see the considerations below). |
^ Bits per sample | Our source videos use **8 bits per raw sample**, so we stay with that. |
^ Color range | We use the **full range** of colors. The **j** letter in the //yuvj420p// pixel format means that YUV values are stored in the full 0-255 range (8 bits), as in JPEG. This is better than the so-called //studio swing//, with Y in the 16-235 range and UV in the 16-240 range. |
^ Audio codec | Lame MP3 or Vorbis |
^ Audio bitrate | CBR 192 kb/s (or higher) |

We can use **Avidemux** to perform the final rendering (re-encoding). For a **command line only** solution you can use **ffmpeg** to perform the re-encoding and to merge (mux) everything into a Matroska container:

<code bash>
#!/bin/sh
TITLE="Balcani, maggio 2022"

# Re-encode the video track, copy the two audio tracks as they are and
# mux everything into a Matroska container with per-stream titles.
ffmpeg \
    -i 'video-high-quality.mkv' \
    -i 'audio-music.ogg' -i 'audio-live.ogg' \
    -map '0:v:0' -map '1:a:0' -map '2:a:0' \
    -metadata title="$TITLE" -metadata:s:v:0 title="$TITLE" \
    -metadata:s:a:0 title="Accompagnamento musicale" \
    -metadata:s:a:1 title="Audio in presa diretta" \
    -filter:v 'scale=1366:768' -aspect '16:9' \
    -vcodec 'libx264' -pix_fmt 'yuvj420p' -preset 'veryslow' -tune 'film' -profile:v 'high' -level:v 5 \
    -acodec copy \
    '2022-05_balcani.mkv'
</code>
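The settings table asks for an **Average Bitrate (Two Pass)** encoding at 4096 kb/s, while the command above leaves the rate control to the x264 defaults. This is a minimal sketch of the two-pass variant of the same video encoding; the input file is the one used above, while the output name //video-4096k.mkv// is just a placeholder to adapt to your project:

<code bash>
#!/bin/sh
# First pass: analyse the video only, discard the output and write the
# statistics files (ffmpeg2pass-*.log) in the current directory.
ffmpeg -y -i 'video-high-quality.mkv' \
    -filter:v 'scale=1366:768' -aspect '16:9' \
    -vcodec 'libx264' -b:v 4096k -pass 1 \
    -preset 'veryslow' -tune 'film' -profile:v 'high' -level:v 5 \
    -an -f null /dev/null

# Second pass: the actual encoding, reusing the statistics collected above.
ffmpeg -i 'video-high-quality.mkv' \
    -filter:v 'scale=1366:768' -aspect '16:9' \
    -vcodec 'libx264' -pix_fmt 'yuvj420p' -b:v 4096k -pass 2 \
    -preset 'veryslow' -tune 'film' -profile:v 'high' -level:v 5 \
    -an 'video-4096k.mkv'
</code>

The ''-map'', ''-metadata'' and audio options of the script above can be added to the second pass in place of ''-an'', turning it directly into the final mux; only ''-b:v'' and ''-pass'' change the rate control with respect to the single-pass command.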
===== Pixel format considerations =====

We used the **yuvj420p pixel format**. What does it mean? First of all consider the **420** code: it means that for each 4x2 matrix of pixels the stream encodes all the luminance values, only 2 values for the chrominance along the X axis and no additional chrominance values along the Y axis. This figure explains clearly the 4:4:4, 4:2:2 and 4:2:0 subsampling methods:

{{subsampling.png?400|Chroma Subsampling}}

If file size is not a concern, we might ask ourselves whether a pixel format with less loss of chroma information would be preferable. Obviously it is useless to add more chroma information if the final rendering has the same resolution as the original video, but if we are **scaling down** the video resolution we may **retain chroma information** by using a different pixel format. The fact is that **in movies 4:2:0 is almost visually lossless**, which is why it is used on Blu-ray discs and by a lot of modern video cameras. There is virtually no advantage in using 4:4:4 for consuming video content. Furthermore, it may happen that some players (software or hardware) are not compatible with pixel formats other than 4:2:0.

===== How to probe a video =====

How to get the **pixel format**:

<code bash>
ffprobe -loglevel error \
    -show_entries stream=pix_fmt \
    -select_streams v YDXJ4050.mp4
</code>

How to get the **bits per raw sample**:

<code bash>
ffprobe -loglevel panic \
    -show_entries stream=bits_per_raw_sample \
    -select_streams v YDXJ4050.mp4
</code>

===== Web Resources =====

  * **[[https://stackoverflow.com/questions/32829514/which-pixel-format-for-web-mp4-video|Which pixel format for web mp4 video?]]**
  * **[[https://stackoverflow.com/questions/56829755/using-ffmpeg-or-ffprobe-to-get-the-pixel-bit-depth-of-a-video|Using ffmpeg or ffprobe to get the pixel bit depth of a video]]**
  * **[[https://www.rtings.com/tv/learn/chroma-subsampling|Chroma Subsampling]]**
  * **[[https://en.wikipedia.org/wiki/YCbCr|YCbCr]]**
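A probe in the same style can also be used to double-check the **color range** discussed in the settings table: ffprobe reports the declared range of the video stream as ''pc'' (full range), ''tv'' (limited //studio swing//) or ''unknown'' if the file does not declare it. This is a sketch using the same sample file as above:

<code bash>
ffprobe -loglevel error \
    -show_entries stream=color_range \
    -select_streams v YDXJ4050.mp4
</code>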