Archive

Posts Tagged ‘FFmpeg’

FFmpeg Watermark.c Alpha Channel Patch

September 10, 2009 Leave a comment

Finally, something actually useful on this blog…


If you need to apply a watermark, or Digital Onscreen Graphic (DOG), to a file during the transcode process, the only way to currently achieve this with FFmpeg is to use the vhook watermark.c filter. Unfortunately, vhooks no longer work with the latest SVN snapshots of FFmpeg, as everyone is supposed to be writing new filters for the AVFilter framework.

Unfortunately, again, there’s not always time to write brand new filters in C from scratch. Sometimes a quicker solution is required. At the major UK media content distribution company that I work for, we needed to transcode approximately 5,000 VC-1 5Mbps files to 2-pass H.264 at 1.5Mbps and 500kbps, within 6 weeks. We decided to find all the spare PCs we could get our hands on, install Debian Lenny and FFmpeg, then start transcoding. However, we also needed to apply a DOG to each and every file. How to do this with FFmpeg? Use the watermark.c vhook, which fortunately does still work with the 0.5 release of FFmpeg. Great news. Well, almost.

To achieve a really nice looking DOG, we wanted to use a PNG file with an alpha channel. The existing watermark.c code did not support this, so we’ve written a patch.

This patch means watermark.c now obeys the alpha channel in a PNG file. The -m option sets the mode; it must be 2 for alpha blending. The watermark image is applied to the input video and then scaled along with it to the output video’s dimensions, so it’s best to make the image the same dimensions as the input video, otherwise you’ll get horrible scaling artefacts.
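For the curious, mode 2 presumably applies the standard per-channel "over" blend. Here’s a quick sketch of the arithmetic; this is my assumption of what the patch does per pixel, not a quote from its code, and the blend helper is purely illustrative:

```shell
# Assumed per-pixel blend for mode 2: out = (a*wm + (255-a)*src) / 255
# where a is the watermark's alpha value (0-255)
blend() {
  # $1 = watermark value, $2 = video value, $3 = alpha
  echo $(( ($3 * $1 + (255 - $3) * $2) / 255 ))
}
blend 200 40 255   # fully opaque: watermark wins -> 200
blend 200 40 0     # fully transparent: video shows through -> 40
blend 200 40 128   # roughly half and half
```

At alpha 255 the DOG completely replaces the pixel; at 0 the underlying video passes through untouched.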


Usage:

ffmpeg … -vhook '/usr/local/lib/vhook/watermark.so -m 2 path/to/image.png' …


(replace /usr/local/lib/vhook with wherever your watermark.so is.)


The patch is available here: watermark.patch

(Maybe see links below for files on Github instead)


Example screen grab: View image


We’ve also posted back to the FFmpeg Devel mailing list.


Actual credit for this patch goes to my colleague Tim MacFarlane - http://refractalize.blogspot.com/


Tim has also now added the files to GitHub:


Just the patch here.

The whole of watermark.c with patch applied here.

Categories: FFmpeg, Video

BBC R&DTV – Creative Commons Tech TV

April 18, 2009 1 comment

In an interesting move from the BBC, and one to be applauded, they are now releasing a technology-based television programme under a Creative Commons non-commercial attribution licence. R&DTV’s first episode is now available for free download in a number of file formats. There is a full 30 minute version, a shorter 5 minute highlights version, as well as a complete Asset Bundle, which includes rushes that may not have made it into the final programme versions.

The BBC’s RAD blog has a launch announcement about this, followed up by another post 24 hours later outlining some small fixes.

The programme is PAL 720×576. The aspect appears to be 14:9 anamorphic. The little person inside me who always wants the greatest and the best wonders why the filming wasn’t done in HD; even HDV would do.

I thought the “formats” described on the R&DTV website were a bit vague. What do “QuickTime format” and “Matroska format” really mean? Sure, I know about QuickTime and Matroska containers, but this says nothing about the video and audio essence contained therein. The best way to find out is to download each video and let FFmpeg take a look.
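If you just want the codec names rather than the full dump, the Stream lines are easy enough to scrape. The codec_of helper below is just something I knocked up for illustration; the sample line is copied from one of the dumps that follow:

```shell
# Pull the video codec name out of an ffmpeg "Stream" line
codec_of() {
  sed -n 's/.*Video: \([^,]*\),.*/\1/p'
}
# Sample line from the QuickTime dump (ffmpeg prints these to stderr)
echo 'Stream #0.1(eng): Video: h264, yuv420p, 720x576, 25 tbr, 25 tbn, 50 tbc' | codec_of
```

On a real file you’d pipe `ffmpeg -i file 2>&1` through the same filter.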

QuickTime Format (461.3MB):

Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'RDTV_ep1_5mins.mov':
Duration: 00:05:59.08, start: 0.000000, bitrate: 10777 kb/s
Stream #0.0(eng): Audio: pcm_s16le, 48000 Hz, stereo, s16, 1536 kb/s
Stream #0.1(eng): Video: h264, yuv420p, 720x576, 25 tbr, 25 tbn, 50 tbc

That’s H.264 video with PCM audio. Strange they didn’t use AAC audio in a QuickTime file. Looking at that 10Mbps bitrate though, I’m guessing perhaps the BBC is expecting people to use this version for editing. But then why use H.264, rather than something that’s I-Frame only like IMX50? There’s also an Uncompressed version and another QuickTime version, which we’ll come to later.
 
Matroska Format (28.4MB):

Input #0, matroska, from 'RDTV_ep1_5mins.mkv':
Duration: 00:05:59.04, start: 0.000000, bitrate: N/A
Stream #0.0(eng): Video: mpeg4, yuv420p, 720x576 [PAR 1:1 DAR 5:4], 25 tbr, 1k tbn, 25 tbc
Stream #0.1(eng): Audio: aac, 48000 Hz, stereo, s16

Generic mpeg4 video this time (Xvid perhaps) and here’s our AAC audio!

MP4 Format (65.4MB):

Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'RDTV_ep1_5mins.mp4':
Duration: 00:05:59.10, start: 0.000000, bitrate: 1526 kb/s
Stream #0.0(eng): Video: h264, yuv420p, 720x576 [PAR 1:1 DAR 5:4], 25 tbr, 48k tbn, 50 tbc
Stream #0.1(eng): Audio: aac, 48000 Hz, stereo, s16

H.264 video again and AAC audio again. When opening this file with Totem, the Comments field says "HandBrake 0.9.3 2008121800". Nice to know the BBC is using open source software for at least some of its video transcoding.

AVI Format (63MB):

Input #0, avi, from 'RDTV_ep1_5mins.avi':
Duration: 00:05:59.04, start: 0.000000, bitrate: 1470 kb/s
Stream #0.0: Video: mpeg4, yuv420p, 720x576 [PAR 1:1 DAR 5:4], 25 tbr, 25 tbn, 25 tbc
Stream #0.1: Audio: mp3, 48000 Hz, stereo, s16, 160 kb/s

Generic mpeg4 video again, but this time with mp3 audio.

FLV Format (37.4MB):

Input #0, flv, from 'RDTV_ep1_5mins.flv':
Duration: 00:05:59.07, start: 0.000000, bitrate: 844 kb/s
Stream #0.0: Video: vp6f, yuv420p, 1024x576, 716 kb/s, 25 tbr, 1k tbn, 1k tbc
Stream #0.1: Audio: mp3, 44100 Hz, stereo, s16, 128 kb/s

VP6 for the video codec and mp3 for the audio. No surprises there then. The bitrate is quite low for VP6 content, though; quality will suffer.

Ogg Format:

Input #0, ogg, from 'RDTV_ep1_5mins.ogg':
Duration: 00:05:59.08, start: 0.000000, bitrate: 683 kb/s
Stream #0.0: Video: theora, yuv420p, 720x576, PAR 1:1 DAR 5:4, 25 tbr, 25 tbn, 25 tbc
Stream #0.1: Audio: vorbis, 48000 Hz, 5.1, s16, 516 kb/s

Theora for the video and vorbis for the audio, again no surprises there. 5.1 audio is a nice touch though. However, again, the bitrate is very low. Why would the BBC do this? The MP4 version, with H.264 video at a higher bitrate, is going to look far superior.

QuickTime 2 Format (155MB):

Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'RDTV_ep1_5mins_2.mov':
Duration: 00:05:59.08, start: 0.000000, bitrate: 3627 kb/s
Stream #0.0(eng): Audio: pcm_s16le, 48000 Hz, stereo, s16, 1536 kb/s
Stream #0.1(eng): Video: h264, yuv420p, 720x576, 25 tbr, 25 tbn, 50 tbc

H.264 video and PCM audio. This second QuickTime file is found only on the FTP site and not linked to directly from the main page. The bitrate is much lower than the previous QuickTime file.

QuickTime Uncompressed Format (7GB):

Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'RDTV_ep1_5mins_uncompressed.mov':
Duration: 00:05:59.08, start: 0.000000, bitrate: 167428 kb/s
Stream #0.0(eng): Audio: pcm_s16le, 48000 Hz, stereo, s16, 1536 kb/s
Stream #0.1(eng): Video: rawvideo, uyvy422, 720x576, 25 tbr, 25 tbn, 25 tbc

There we go: raw video with 4:2:2 chroma subsampling at around 165Mbps, with PCM audio again. I wonder whether the content was filmed at anywhere near this quality. Given that the programme is only SD, I’m guessing the highest quality recording would have been made direct to Digital Betacam, which is only the equivalent of 90Mbps, unless of course the whole thing was done tapeless, which I must admit to doubting.
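The numbers stack up, too. uyvy422 packs 16 bits per pixel, so a quick back-of-the-envelope check (shell arithmetic, nothing more) accounts for almost all of the reported bitrate once the PCM track is added:

```shell
# Raw uyvy422 video: 16 bits per pixel at 720x576, 25fps
w=720; h=576; fps=25; bpp=16
video_kbps=$(( w * h * fps * bpp / 1000 ))
echo "$video_kbps"              # 165888 kb/s of video essence
echo $(( video_kbps + 1536 ))   # plus the PCM audio: 167424 kb/s
```

That’s within a few kb/s of the 167428 kb/s FFmpeg reports; the remainder will be container overhead.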

One last puzzle is why a Dirac version wasn’t supplied, given that this is the BBC’s own R&D-developed codec.
 

Categories: FFmpeg, Video

Interview with FFmpeg Developers

March 11, 2009 Leave a comment

Two posts in two days after such a long silence, who’d have thought it… And again it’s about FFmpeg.

This time Phoronix has posted an interesting interview with Diego Biurrun, Baptiste Coudurier, and Robert Swain, three key developers among the many working on the FFmpeg project. The interview covers some interesting topics: the future of FFmpeg, the difficulties of maintaining such a large project, managing developer motivation for writing codecs, and the limited corporate sponsorship the project has so far received.

I’ve known Baptiste for a year or so, having met at the National Association of Broadcasters (NAB) convention in Las Vegas in April 2008. I’d like to personally thank him for the work he has done on implementing DNxHD in FFmpeg.

Anyway, read the interview and learn something about behind the scenes at FFmpeg.

Also worth a read, which I just found today, are the Phoronix tests of NVIDIA’s VDPAU drivers on a cheap chipset and graphics card.

FFmpeg Makes an Official Release!

March 10, 2009 Leave a comment

It’s been a long while since I’ve posted on this blog, but today something has finally spurred me into action.

The FFmpeg team have finally made a release, version 0.5, with a silly long name. Previously, users were always told to download and compile the latest SVN version of FFmpeg if they expected any support from the mailing lists.
Now there is a stable release, only a few years since the last one, that can be used by software developers and packagers everywhere. I still expect that many mailing list issues will be dealt with by the instruction to download from the SVN or Git repository and compile. I also expect that bug fixes and enhancements will make it into SVN quite quickly, but that the next release might still be some time away.
Release notes are available on the FFmpeg changelog (long!) and there’s a lively, as always, Slashdot discussion around this momentous event.

FFmpeg Changes Header Locations; Breaks Stuff

May 19, 2008 Leave a comment

Recently, and it’s hard to say exactly which SVN snapshot this occurred in, the FFmpeg project changed the location of a number of its header files. This has caused some havoc with other applications that use FFmpeg for video decoding or encoding.

Amongst other things, Open Movie Editor complained that certain libraries were not installed, which they plainly were. This could be seen by running a simple “ffmpeg -i” command to see which libraries FFmpeg had been configured with.

Trying to re-compile Open Movie Editor from source struck some problems, in that OME was looking for FFmpeg headers in the wrong place. To overcome this issue, so that OME would compile and then install correctly, I made the following changes.

The first compile error concerns avformat.h in the file nle_main.cxx. That file, and the other two needing small edits, can be found in the “src” directory created when OME is unpacked.

There are three files you’ll need to edit in the text editor of your choice:

nle_main.cxx
VideoFileFfmpeg.H
AudioFileFfmpeg.H

Open each of those files and near the beginning (around line 35) will be references that look something like this:

#include <ffmpeg/avformat.h>

You’ll need to find where avformat.h, avcodec.h and swscale.h are residing on your machine.

You can do this by using the following command:

sudo find / -name avformat.h

On my machine, a build of Debian Lenny, these files can all be found in /usr/local/include

I edited the files so the code looks like this (example from VideoFileFfmpeg.H):

#include </usr/local/include/libavcodec/avcodec.h>
#include </usr/local/include/libavformat/avformat.h>
#ifdef SWSCALE
    #include </usr/local/include/libswscale/swscale.h>
#endif

Once you’ve saved those files, OME should now be able to find the FFmpeg header files and build correctly.
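If you’d rather not hand-edit each file, the same substitutions can be scripted. This fix_includes helper is just a sketch I’d sanity-check before building (the paths assume /usr/local/include, as found above):

```shell
# Rewrite old-style FFmpeg includes to the new per-library locations
fix_includes() {
  sed -e 's|<ffmpeg/avcodec\.h>|</usr/local/include/libavcodec/avcodec.h>|' \
      -e 's|<ffmpeg/avformat\.h>|</usr/local/include/libavformat/avformat.h>|' \
      -e 's|<ffmpeg/swscale\.h>|</usr/local/include/libswscale/swscale.h>|'
}
# Preview the rewrite on a single line:
echo '#include <ffmpeg/avformat.h>' | fix_includes
```

Run it over each of the three files with `fix_includes < file > file.new` and check the result before swapping the files in.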

Hopefully a new version of Open Movie Editor will soon be available where these issues have been rectified in the source.

FFmpeg: Update Installing on Ubuntu Hardy

May 1, 2008 1 comment

Having recently installed Xubuntu Hardy Heron on a laptop, I also needed to install FFmpeg. This post is really just a couple of notes for myself, updating my earlier How-To post regarding installation of FFmpeg on Ubuntu Gutsy.

New apt-get install line:

sudo apt-get install liblame-dev libfaad2-dev libfaac-dev \
  libxvidcore4-dev liba52-0.7.4 liba52-0.7.4-dev libx264-dev libdts-dev \
  libswscale-dev checkinstall build-essential subversion

Here I’ve added the swscale development libraries. Swscale is used for scaling videos.

If you are ever stuck behind a firewall or proxy, especially one that you have no control over and which does not understand certain SVN commands, there is a nightly Subversion snapshot available for download from the FFmpeg website. This alleviates the need to check out the source with SVN.

New configure line:

./configure --enable-gpl --enable-libvorbis --enable-libtheora --enable-liba52 \
  --enable-libdc1394 --enable-libgsm --enable-libmp3lame --enable-libfaad \
  --enable-libfaac --enable-libxvid --enable-pthreads --enable-libx264 \
  --enable-shared --enable-swscale --enable-avfilter --enable-postproc \
  --enable-avfilter-lavf

Here I’ve removed --enable-pp as it is no longer recognised, and I’ve added --enable-swscale, --enable-avfilter, --enable-avfilter-lavf and --enable-postproc.

Avfilter is the new FFmpeg library that replaces the deprecated vhook functionality.

One last note to self is to investigate the possibilities of AviSynth scripting and FFmpeg.


Categories: Uncategorized

Kino 1.3.0 Released

February 25, 2008 Leave a comment

Yesterday a new version of popular Linux video editing tool, Kino, was released. The new version is 1.3.0 and contains the following changes:

  • Updated export scripts for FFmpeg changes (x264, mp3)
  • Improved speed on SMP systems by enabling FFmpeg multi-threaded codecs
  • Improved import (DV conversion) progress dialog
  • Added gstreamer-based Ogg Theora to the blip.tv publishing script
  • Added quality level option to the blip.tv publishing script
  • Updated Hungarian translation
  • Added Ukrainian translation by Yuri Chornoivan

Congratulations to Dan Kennedy and the team.

The new source files can be downloaded directly from here.

Categories: FFmpeg, Kino, Video

Real World Open Source Video Editing

February 7, 2008 1 comment

A short while ago I wrote a review about Open Movie Editor. Essentially this review was written after a couple of hours testing various video clips and assessing the functionality within OME. Now, I can write about what OME is like on a real editing assignment.

Recently I was given a DVD full of PAL DV material and asked to create a compilation from the individual clips. A fun little project that should only take a day or two. Open Movie Editor was the obvious tool for the job.

The good news I can report is that even after 10 to 12 hours of constant video editing, OME is still a very stable piece of software. I only managed to induce two crashes – once when trying to undo multiple edits in a row and once when vigorously moving clips around on the timeline. Other than that, Open Movie Editor was easily up to the task.

I’m not an advanced video editor; I’m happy within my comfort zone using something like Adobe Premiere, but without using all the intricate features. However, Open Movie Editor does still lack a few basic features that would have greatly increased my productivity. Changing the playback speed of a clip is not possible within OME; I needed to change the framerate of target clips using FFmpeg and mjpeg tools to achieve this effect. While fade transitions are easy enough, I’m sure they could have been even quicker if such a function were built into OME. Precise frame editing, for splitting clips for example, would also make life easier.

There are some really nice features in Open Movie Editor though. Audio automations are a breeze, the media browser window provides easy access to your video library and the list of render options is quite vast – dependent on FFMpeg, Libquicktime and other shared video libraries.

So what did I produce in my 12 hours of work? A fun 4 minute clip, which is still a little rough around the edges, but generally a good laugh. Here’s a link for your viewing pleasure:

http://kapitalmototv.co.uk/play-183-0.html

Edited in Open Movie Editor, with some clip transformations using FFmpeg and mjpeg tools. Follow this with final transcoding to x264, again with FFmpeg for more fine-grained control, and you have an open source editing project.

The Kapital Moto TV site uses open source products where possible. The server runs on Debian Etch, the site is served with Apache, built largely with PHP, and data is stored in a MySQL database. Content is a mix of QuickTime-generated H.264 and FFmpeg-generated x264 video files. The Flash player is not open source, but is free as in beer.

How-To: Alter Video Speed with FFmpeg and mjpegtools

February 6, 2008 2 comments

Unfortunately my Linux-based non-linear editing tool of choice, Open Movie Editor, doesn’t currently support directly altering video playback speed. For example, if you wanted a portion of your new compilation to run at 200% of the original recorded speed, this can’t be done within OME. This exact functionality was something I needed for an existing editing project.

After some thought and investigation, I found such changes can be achieved using a combination of FFmpeg and yuvfps, which is part of mjpegtools, to alter the framerate of the desired footage. If your original file is PAL-based, with a framerate of 25fps, changing the framerate to 50fps will result in the video running twice as fast, for half as long.
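The arithmetic is worth spelling out, because it’s the whole trick: the frames themselves are untouched, only the rate at which they’re replayed changes.

```shell
# Retime 25fps footage as 50fps: twice the speed, half the duration
orig_fps=25; new_fps=50
orig_secs=120                       # say, a 2 minute source clip
new_secs=$(( orig_secs * orig_fps / new_fps ))
echo "$new_secs"   # 60 seconds after retiming
```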

I didn’t initially have mjpegtools installed, but on my Debian based system this was easy enough with

sudo apt-get install mjpegtools

Next, the input video needs to be converted to yuv4mpegpipe format, passed through yuvfps and output to a new avi file. Here’s the command line I used to create a clip at 50fps:

ffmpeg -i input.dv -f yuv4mpegpipe - | yuvfps -s 50:1 -r 50:1 | \
  ffmpeg -f yuv4mpegpipe -i - -b 28800k -y output.avi

Change the 50:1 ratios to whatever framerate you require. e.g. 100:1 for 100fps. Be sure to set the output file bitrate to a relevant quality level. Omitting this flag will result in a poor quality AVI output file by default.

The resulting AVI file was easily played back with Totem, and handled on the timeline admirably by OME.

Thanks to Victor Paesa on the FFmpeg mailing list for pointing me in the right direction.

Some other options to investigate include the new libavfilter for FFmpeg, and converting the original footage to a raw data file, which will lose the audio.

How-To: Extract images from a video file using FFmpeg

February 6, 2008 5 comments

Extracting all frames from a video file is easily achieved with FFmpeg.

Here’s a simple command line that will create 25 PNG images from every second of footage in the input DV file. The images will be saved in the current directory.

ffmpeg -i input.dv -r 25 -f image2 images%05d.png

The newly created files will all start with the word “images” and be numbered consecutively, zero-padded to five digits, e.g. images00001.png.
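The %05d part of the pattern is ordinary printf-style zero-padding, so you can preview the filenames FFmpeg will generate straight from the shell:

```shell
# Preview the image2 output names for frames 1, 2 and 100
for i in 1 2 100; do
  printf 'images%05d.png\n' "$i"
done
# images00001.png
# images00002.png
# images00100.png
```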

From a video that was 104 seconds long, for a random example, this command would create 2600 PNG files! That’s quite messy in the current directory, so instead use this command to save the files in a sub-directory called extracted_images (create the directory first; FFmpeg won’t make it for you):

ffmpeg -i input.dv -r 25 -f image2 extracted_images/images%05d.png

Moving on, let’s say you just wanted 25 frames from the first 1 second, then this line will work:

ffmpeg -i input.dv -r 25 -t 00:00:01 -f image2 images%05d.png

The -t flag in FFmpeg specifies the length of time to transcode. This can either be in whole seconds or hh:mm:ss format.

Making things a little more complex we can create images from all frames, beginning at the tenth second, and continuing for 5 seconds, with this line:

ffmpeg -i input.dv -r 25 -ss 00:00:10 -t 00:00:05 -f image2 images%05d.png

The -ss flag is used to denote start position, again in whole seconds or hh:mm:ss format.
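A quick sanity check on the output count: the number of images written is simply the output rate multiplied by the duration.

```shell
# -r 25 -t 00:00:05 should produce 25 * 5 images
rate=25; seconds=5
frames=$(( rate * seconds ))
echo "$frames"   # 125 PNG files
```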

Maybe extracting an image from every single frame in a video, resulting in a large number of output files, is not what you need. Here’s how to create a single indicative poster frame, of the video clip, from the first second of footage:

ffmpeg -i input.dv -r 1 -t 00:00:01 -f image2 images%05d.png

Notice that the -r flag is now set to 1.

If you want the poster frame from a different part of the clip, then specify which second to take it from using the -ss tag, in conjunction with the line above.

Lastly, if you wanted to create a thumbnail story board, showing action throughout the entire length of the video clip, you’ll need to specify the output image dimensions. Use the following line:

ffmpeg -i input.dv -r 1 -f image2 -s 120x96 images%05d.png

My original file was 720×576, so the image dimensions are a whole division of this.
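To pick matching storyboard dimensions for some other source, just divide both axes by the same whole number, which preserves the frame shape:

```shell
# 720x576 divided by 6 on each axis gives the 120x96 used above
w=720; h=576; div=6
thumb="$(( w / div ))x$(( h / div ))"
echo "$thumb"   # 120x96
```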
