./kenburst-1-make-images "$INPUT_FILE" "$KBURST_GEO" "$DURATION_S" \
    | ./kenburst-2-make-video "$OUTPUT_MP4"
</code>

====== Final rendering (re-encoding) ======

The video stream recorded by the Xiaomi Yi camera is **1920x1080 pixels** at a variable bitrate of **12.0 Mb/s**. Because we watch it on a simple TV set capable of only 1366x768 pixels, we re-encode it with the following settings:

^ Video codec     | MPEG-4 AVC (x264)  |
^ Video filter    | swresize, 1366x768, Bilinear  |
^ Basic x264      | Preset: **slow** (or less), Tuning: **film**, Profile: **High**, IDC Level: **Auto** |
^ Video encoding  | Average Bitrate (Two Pass), Average Bitrate 4096 kb/s (about 1.8 GB per hour)  |
^ Audio codec     | <del>Lame MP3</del> Vorbis  |
^ Audio bitrate   | CBR 192 (or higher)  |

We can use **Avidemux** to do the final rendering (re-encoding). For a **command line only** solution you can use **ffmpeg** to perform the re-encoding and to mux everything into a Matroska container:

<code bash>
#!/bin/sh
TITLE="Balcani, maggio 2022"
# Mux the video stream with two audio tracks (background music and live audio),
# set the stream titles, downscale to 1366x768 and re-encode the video with x264;
# the Vorbis audio tracks are copied as they are.
ffmpeg \
    -i "video-high-quality.mkv" \
    -i 'audio-music.ogg' -i 'audio-live.ogg' \
    -map '0:v:0' -map '1:a:0' -map '2:a:0' \
    -metadata title="$TITLE" -metadata:s:v:0 title="$TITLE" \
    -metadata:s:a:0 title="Accompagnamento musicale" \
    -metadata:s:a:1 title="Audio in presa diretta" \
    -filter:v "scale=1366x768" -aspect "16:9" \
    -vcodec 'libx264' -pix_fmt 'yuvj420p' -preset 'veryslow' -tune 'film' -profile:v 'high' -level:v 5 \
    -acodec copy \
    "2022-05_balcani.mkv"
</code>
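
The table asks for two-pass average-bitrate encoding, while the script above lets x264 pick its default rate control. A minimal video-only sketch of a two-pass run at 4096 kb/s (the two-pass options and the output file name are assumptions, not from the original notes) could be:

<code bash>
#!/bin/sh
IN="video-high-quality.mkv"
# Pass 1: analysis only, output discarded, no audio needed.
ffmpeg -y -i "$IN" \
    -filter:v "scale=1366x768" -aspect "16:9" \
    -c:v libx264 -preset slow -tune film -profile:v high \
    -b:v 4096k -pass 1 -an -f null /dev/null
# Pass 2: actual encoding, reusing the statistics written by pass 1.
ffmpeg -i "$IN" \
    -filter:v "scale=1366x768" -aspect "16:9" \
    -c:v libx264 -preset slow -tune film -profile:v high \
    -b:v 4096k -pass 2 -an \
    "video-2pass-4096k.mkv"
</code>

The audio tracks can then be muxed in with ''-acodec copy'' exactly as in the script above.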
  
The gamma correction for the three RGB channels was determined with the GIMP, using the //Colors// => //Levels// => //Pick the gray point for all channels// tool. The use of MPEG-TS clips allowed the montage of the final video by just concatenating them.
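
A sketch of that kind of concatenation (the clip names are invented for the example): MPEG-TS can be joined at the byte level, or through the ffmpeg ''concat'' protocol without an intermediate file:

<code bash>
# Simple byte-level concatenation of MPEG-TS clips...
cat clip-01.ts clip-02.ts clip-03.ts > montage.ts
# ...or the same thing without an intermediate file, remuxing into Matroska.
ffmpeg -i "concat:clip-01.ts|clip-02.ts|clip-03.ts" -c copy montage.mkv
</code>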
  
===== AVC (x264) is better than ASP (xvid4) =====
  
See this page: **[[https://www.avidemux.org/admWiki/doku.php?id=general:common_myths|Common myths]]** to understand the differences between formats (standards) and codecs (pieces of software). Read also this simple page: **[[https://www.cyberlink.com/support/product-faq-content.do?id=1901|Difference between MPEG-4 AVC and MPEG-4 ASP]]**. See also the Wikipedia article about **[[wp>Advanced Video Coding]]**.
</code>
  
====== ffmpeg: reading the VOB sequence of a DVD ======

In the **VIDEO_TS** directory of a DVD the main track is normally split into sequentially numbered files, for example: ''VTS_01_0.VOB'', ''VTS_01_1.VOB'', ...

In theory it is enough to concatenate the files into a single destination file and then treat it as a normal audio/video file. However, the individual files can be given as input, without taking up additional disk space, using this syntax:

<code bash>
SOURCE="concat:VTS_01_1.VOB|VTS_01_2.VOB|VTS_01_3.VOB|VTS_01_4.VOB|VTS_01_5.VOB"
ffmpeg -i "$SOURCE" ...
</code>
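
For example (the output file name is only an illustration, not from the original notes), the whole track can be remuxed into a single Matroska file without re-encoding:

<code bash>
SOURCE="concat:VTS_01_1.VOB|VTS_01_2.VOB|VTS_01_3.VOB|VTS_01_4.VOB|VTS_01_5.VOB"
# Copy the video and audio streams as they are into a single container.
ffmpeg -i "$SOURCE" -map 0:v -map 0:a -c copy dvd-title-01.mkv
</code>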

====== ffmpeg: setting a subtitle delay while muxing ======

If a subtitle stream (for example in the picture-based DVD format) does not carry the correct initial playback offset, ffmpeg can be told to set it at muxing time. In this example the first subtitle appears at 44.5 seconds:

<code bash>
ffmpeg -i video-stream.mkv -i audio-stream.mkv -itsoffset 44.5 -i subtitles-stream.mkv ...
</code>
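
The original note leaves the rest of the command line open; a possible complete invocation (the stream mapping and the output file name are assumptions) that keeps all streams as they are:

<code bash>
ffmpeg -i video-stream.mkv -i audio-stream.mkv \
    -itsoffset 44.5 -i subtitles-stream.mkv \
    -map 0:v -map 1:a -map 2:s -c copy output.mkv
</code>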

In general it should be possible to discover the offset by letting ffmpeg read the whole stream: when it finds the first subtitle frame it prints something like this on the console:

<code>
[mpeg @ 0x55f98bb2c6c0] New subtitle stream 0:7 at pos:14755854 and DTS:44.5s
</code>
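
A sketch of how that message can be collected without producing an output file (the input file name is an assumption); the demuxer prints it on stderr while the stream is being read:

<code bash>
# Decode the whole stream, discard the output, and keep only the
# "New subtitle stream" lines that report position and DTS.
ffmpeg -i dvd-track.mpg -f null - 2>&1 | grep 'New subtitle stream'
</code>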
  
====== Audio dubbing with Ardour ======
  
See the dedicated page: **[[ardour_dubbing]]**.