After double-checking internally, we understand that you are looking to transcode a video using QuickSync, mainly the function "scale_qsv"; however, access to the hardware encoder is provided through the FFmpeg application. Since you have usability questions, they need to be addressed by the app developer or perhaps their own community. We can only confirm the hardware encoder is there, as you can check here under Processor Graphics, Intel® Quick Sync Video. You will need to check with the FFmpeg app support for more help.

Links to third-party sites and references to third-party trademarks are provided for convenience and illustrative purposes only. Unless explicitly stated, Intel® is not responsible for the contents of such links, and no third-party endorsement of Intel or any of its products is implied.

You'll need to initialize your hardware accelerator correctly, as shown in the documentation below (perhaps we should create a wiki entry for this in time?). Assume the following snippet, where VAAPI is available and we bind the DRM render node /dev/dri/renderD128 to the encode session:

ffmpeg -re -threads 4 -loglevel debug \
-init_hw_device vaapi=intel:/dev/dri/renderD128 -hwaccel vaapi -hwaccel_output_format vaapi -hwaccel_device intel -filter_hw_device intel \
-i 'udp://$ingest_ip:$port_ip?fifo_size=n' \
-vf 'format=nv12,hwupload' \
-c:v h264_vaapi -b:v $video_bitrate$unit -maxrate:v $video_bitrate$unit -qp:v 21 -sei +identifier+timing+recovery_point -profile:v main -level 4 \
-c:a aac -b:a $audio_bitrate$unit -ar 48000 -ac 2 \
-flags -global_header -fflags +genpts -f mpegts 'udp://$feed_ip:$feed_port'

Here:

1. We are taking a udp input, where $ingest_ip:$port_ip corresponds to a known UDP input stream, matching the IP and port pairing respectively, with a defined fifo size (as indicated by the '?fifo_size=n' parameter).
2. We are encoding to an output udp stream packaged as an MPEG transport stream (see the muxer in use, mpegts), with the necessary parameters matching the output IP and port pairing respectively.
3. We have defined video bitrates ($video_bitrate$unit, where $unit can be either K or M, as you see fit) and audio bitrates ($audio_bitrate$unit, where $unit should be in K for AAC LC-based encodings) as shown above, with appropriate encoder settings passed to the vaapi encoders.

For your reference, there are four video encoders of interest available in FFmpeg as at the time of writing, with the omission of the mjpeg encoder (as it's not of interest in this context), namely h264_vaapi, hevc_vaapi, vp8_vaapi and vp9_vaapi. Each of these encoders' documentation can be accessed via:

ffmpeg -hide_banner -h encoder=$encoder_name

Where $encoder_name matches an encoder on the list above.

Take note of the following:

1. VAAPI-based encoders can only take input as VAAPI surfaces, so the encoder will typically need to be preceded by a hwupload instance to convert a normal frame into a vaapi format frame. Note that the internal format of the surface will be derived from the format of the hwupload input, so additional format filters may be required to make everything work, as shown in the snippet above.
2. -init_hw_device vaapi=intel:/dev/dri/renderD128 initializes a VAAPI hardware device named intel (which can be called up later via -hwaccel_device and -filter_hw_device, as demonstrated above), bound to the DRM render node /dev/dri/renderD128. The intel: name can be dropped, but it's often useful to identify which render node was used by a vendor name in an environment where more than one VAAPI-capable device exists, such as a rig with an Intel IGP and an AMD GPU.
3. Take note of the format constraint defined by -hwaccel_output_format vaapi; this is needed to satisfy the condition in 1.
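Before consulting the per-encoder help described above, it can be useful to confirm which VAAPI encoders your particular FFmpeg build actually ships, since builds vary by distribution. A minimal sketch, assuming ffmpeg is on your PATH:

```shell
# List every VAAPI-capable encoder compiled into this FFmpeg build.
ffmpeg -hide_banner -encoders | grep vaapi

# Then inspect a specific encoder's private options, e.g. h264_vaapi.
ffmpeg -hide_banner -h encoder=h264_vaapi
```

If the first command prints nothing, your build was compiled without VAAPI support and the snippet above will fail regardless of the hardware present.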
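The $video_bitrate$unit and $audio_bitrate$unit placeholders above are plain shell variables. A minimal sketch of how they expand before ffmpeg sees them; the 2500K and 128K values are illustrative assumptions, not recommendations:

```shell
#!/bin/sh
# Illustrative values only; tune bitrates for your own content and network.
video_bitrate=2500
audio_bitrate=128
unit=K

# The expanded encoder options as they would appear on the ffmpeg command line:
echo "-b:v ${video_bitrate}${unit} -maxrate:v ${video_bitrate}${unit} -b:a ${audio_bitrate}${unit}"
# → -b:v 2500K -maxrate:v 2500K -b:a 128K
```

Note the variables must not be inside single quotes when the full command is assembled, or the shell will not substitute them.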
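For the multi-GPU rig mentioned above (an Intel IGP alongside an AMD GPU), naming each device makes it unambiguous which render node an encode session uses. A hedged sketch only: the renderD128/renderD129 node assignments and the "amd" label are assumptions that vary per system, so check your own /dev/dri layout first:

```shell
# Hypothetical two-GPU layout: renderD128 = Intel IGP, renderD129 = AMD GPU.
# "intel" and "amd" are arbitrary device names we choose at init time.
ffmpeg -init_hw_device vaapi=intel:/dev/dri/renderD128 \
       -init_hw_device vaapi=amd:/dev/dri/renderD129 \
       -hwaccel vaapi -hwaccel_output_format vaapi \
       -hwaccel_device intel -filter_hw_device intel \
       -i input.mp4 -vf 'format=nv12,hwupload' \
       -c:v h264_vaapi -b:v 2500K output.mp4
```

Switching the session to the AMD card is then just a matter of passing -hwaccel_device amd -filter_hw_device amd instead, with no change to the node paths elsewhere in the command.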