I need to create a video under the following conditions:
- It can only be made from a sequence of JPEG files, each carrying its timestamp (in milliseconds) in the file name. The frames' display durations differ, so I cannot simply concat them all and use one fixed fps.
- The image sequences come in several tar archives. The archives are fairly huge, so I read them from file storage as an async stream of data and cannot save them to disk as files. Frames are read and immediately written to the stdin of a running ffmpeg process.
- The images may have different aspect ratios, so I need to produce an NxN square: scale each image to fit and fill the remaining space with padding.
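To make the first condition concrete, here is a minimal sketch of recovering per-frame durations from the millisecond timestamps in the file names. The name pattern (`frame_<ms>.jpg`) and the choice to reuse the previous duration for the last frame are assumptions, not something from my actual pipeline:

```python
import re

def frame_durations_ms(names):
    """Return each frame's display duration in ms, derived from the
    millisecond timestamps embedded in the file names.
    Assumes names like 'frame_001500.jpg' (first digit run = timestamp)."""
    ts = sorted(int(re.search(r"(\d+)", n).group(1)) for n in names)
    durations = [b - a for a, b in zip(ts, ts[1:])]
    if durations:
        # The last frame has no successor; reuse the previous duration.
        durations.append(durations[-1])
    return durations
```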
My current solution:
ffmpeg -r $someFpsValue -i - -vf "scale=w=$w:h=$h:force_original_aspect_ratio=1,pad=$w:$h:(ow-iw)/2:(oh-ih)/2" result.mp4
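For context, this is roughly how I drive that command from code. A hedged sketch: `ffmpeg_args` just rebuilds the argv above (with `-f image2pipe` made explicit for piped JPEG input), and `read_frames_from_tar_stream` is a placeholder name for my tar reader, not a real function:

```python
def ffmpeg_args(w, h, fps, out="result.mp4"):
    """Argv for the piped ffmpeg invocation from the question.
    force_original_aspect_ratio=1 is the numeric form of 'decrease'."""
    vf = (f"scale=w={w}:h={h}:force_original_aspect_ratio=1,"
          f"pad={w}:{h}:(ow-iw)/2:(oh-ih)/2")
    return ["ffmpeg", "-f", "image2pipe", "-r", str(fps), "-i", "-",
            "-vf", vf, out]

# Usage sketch (read_frames_from_tar_stream is hypothetical):
# import subprocess
# proc = subprocess.Popen(ffmpeg_args(1024, 1024, 25), stdin=subprocess.PIPE)
# for jpeg_bytes in read_frames_from_tar_stream():
#     proc.stdin.write(jpeg_bytes)   # raw JPEG bytes, frame after frame
# proc.stdin.close()
# proc.wait()
```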
As you can see, this doesn't let me concat the images with their correct durations. I know the concat demuxer can solve the problem of merging images with different durations, but it seemingly doesn't work with the pipe protocol. One idea is to evaluate an average fps as videoFramesCount / videoDurationInSeconds for the -r argument, or even to count the fps for each second of the video and average those, but maybe there is a more reliable solution (some analogue of the concat demuxer)?
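To be explicit about the two workarounds I'm considering, here is a small sketch. Both functions are illustrations of the ideas above, not code I'm committed to:

```python
from collections import Counter

def average_fps(frame_count, duration_seconds):
    # First idea: total frames over total runtime.
    return frame_count / duration_seconds

def per_second_avg_fps(timestamps_ms):
    # Second idea: count the frames landing in each whole second
    # of the video, then average those per-second counts.
    buckets = Counter(t // 1000 for t in timestamps_ms)
    return sum(buckets.values()) / len(buckets)
```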
Thanks in advance 🙂