Archive for February, 2008

Linux Powers The Spiderwick Chronicles

February 28, 2008

An interesting article surfaced today about Linux-based systems used in Hollywood, specifically on a new Paramount feature production, The Spiderwick Chronicles, directed by Mark Waters.

The article has lots of good screenshots showing the software in use, though somewhat unfortunately many of the applications are proprietary and Blender doesn’t get a look-in. Still, it’s encouraging to see how far the Linux OS has penetrated this sphere.

Here’s the article.


Kino 1.3.0 Released

February 25, 2008

Yesterday a new version of the popular Linux video editing tool Kino was released. The new version is 1.3.0 and contains the following changes:

  • Updated export scripts for FFmpeg changes (x264, mp3)
  • Improved speed on SMP systems by enabling FFmpeg multi-threaded codecs
  • Improved import (DV conversion) progress dialog
  • Added gstreamer-based Ogg Theora to the blip.tv publishing script
  • Added quality level option to the blip.tv publishing script
  • Updated Hungarian translation
  • Added Ukrainian translation by Yuri Chornoivan

Congratulations to Dan Kennedy and the team.

The new source files can be downloaded directly from here.


Open Movie Editor – New Release Feb 9th

February 10, 2008

The Open Movie Editor project has just released a new version of this Linux-based non-linear video editing tool.

Amongst the highlights of this new version are the following items:

  • Inclusion of a new colour scheme called Shark
  • Colour scheme preferences are now restored at restart

Full release notes are available on Sourceforge.

While this is only a minor update to Open Movie Editor, coming just one week after the previous release on February 3rd, it does include my first code contribution to an Open Source project: the Shark colour scheme.

Download the new version of Open Movie Editor.

The Future of Cinelerra

February 8, 2008

It would be remiss of me not to at least mention the current buzz around the Linux video editing tool Cinelerra. An article appeared on Linux.com a couple of days ago, outlining a new direction for the software that began life as Broadcast2000.

I’m not going to give a Cinelerra history lesson here; go and read the Linux.com article. What is more interesting is the desire to build a new Cinelerra (Cin3), completely divorced from the original Heroine Warrior sponsor.

I’ve actually been following the Cin3 discussion on the Cinelerra mailing list for some time now. At this stage discussion seems to be centered on what the new name for the software will be and what GUI toolkit to use. There’s a long way to go before Cin3 – or Verite as it may now be known – becomes a stable, usable product.

And that’s where the problem lies. Cin3 may be another 2 or 3 years away from being production-ready. What happens in the meantime? How much effort will be expended on developing and maintaining the existing Cinelerra 2? While such a long lead time may be needed for a community-driven application of this complexity, it does open the opportunity for other projects, both commercial and Open Source, to carve out a large share of the video editing market on the Linux platform.

Already Blender incorporates a reasonably full-featured video sequence editor. I wonder about the viability of spinning that off as a standalone piece of software. What if MainConcept did indeed decide to open source their now-defunct MainActor editing tool? Perhaps Adobe, or Sony, or Pinnacle will take the plunge and release a Linux version of their video editors. If the Linux desktop continues to rise in popularity, these scenarios are distinct possibilities.

Cinelerra already suffers from an image problem, with a reputation for being too complex to learn and generally unstable. Let’s hope the Cinelerra community team can forge ahead quickly to create an easy-to-use but powerful open source non-linear video editor.

Real World Open Source Video Editing

February 7, 2008

A short while ago I wrote a review about Open Movie Editor. Essentially this review was written after a couple of hours testing various video clips and assessing the functionality within OME. Now, I can write about what OME is like on a real editing assignment.

Recently I was given a DVD full of PAL DV material and asked to create a compilation from the individual clips. A fun little project that should only take a day or two. Open Movie Editor was the obvious tool for the job.

The good news I can report is that even after 10 to 12 hours of constant video editing, OME is still a very stable piece of software. I only managed to induce two crashes – once when trying to undo multiple edits in a row and once when vigorously moving clips around on the timeline. Other than that, Open Movie Editor was easily up to the task.

I’m not an advanced video editor; I’m happy within my comfort zone using something like Adobe Premiere, but I don’t use all the intricate features either. However, Open Movie Editor does still lack a few basic features that would have greatly increased my productivity. Changing the playback speed of a clip is not possible within OME; I needed to change the framerate of the target clips using FFmpeg and mjpegtools to achieve this effect. While fade transitions are easy enough, I’m sure they would have been even quicker if such a function were built into OME. Frame-accurate editing, for splitting clips for example, would also make life easier.

There are some really nice features in Open Movie Editor though. Audio automations are a breeze, the media browser window provides easy access to your video library, and the list of render options is quite vast, depending on FFmpeg, Libquicktime and the other shared video libraries available.

So what did I produce in my 12 hours of work? A fun 4 minute clip, which is still a little rough around the edges, but generally a good laugh. Here’s a link for your viewing pleasure:

http://kapitalmototv.co.uk/play-183-0.html

Edited in Open Movie Editor, with some clip transformations using FFmpeg and mjpegtools. Follow this with final transcoding to x264, again with FFmpeg for finer control, and you have an Open Source editing project.
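
For anyone curious, that final transcode can be done with a single FFmpeg command along the lines of the one below. This is just a sketch, assuming an FFmpeg build compiled with libx264 and libfaac support; the filenames and bitrates are placeholders rather than the exact settings used for the clip above.

# placeholder filenames and bitrates; requires libx264 and libfaac compiled in
ffmpeg -i edited_master.avi -vcodec libx264 -b 1200k -acodec libfaac -ab 128k -y final.mp4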

The Kapital Moto TV site uses open source products where possible. The server runs on Debian Etch, the site is served with Apache, built largely with PHP, and data is stored in a MySQL database. Content is a mix of QuickTime-generated H.264 and FFmpeg-generated x264 video files. The Flash player is not open source, but it is free as in beer.

How-To: Alter Video Speed with FFmpeg and mjpegtools

February 6, 2008

Unfortunately my Linux-based non-linear editing tool of choice, Open Movie Editor, doesn’t currently support directly altering video playback speed. For example, if you wanted a portion of your new compilation to run at 200% of the original recorded speed, it can’t be done within OME. This exact functionality was something I needed for an existing editing project.

After some thought and investigation, it turns out such a change can be achieved using a combination of FFmpeg and yuvfps, part of mjpegtools, to alter the framerate of the desired footage. If your original file is PAL-based, with a framerate of 25fps, changing the framerate to 50fps will result in the video running twice as fast, for half as long.

I didn’t initially have mjpegtools installed, but on my Debian-based system this was easy enough with

sudo apt-get install mjpegtools

Next, the input video needs to be converted to yuv4mpegpipe format, passed through yuvfps, and output to a new AVI file. Here’s the command line I used to create a clip at 50fps:

ffmpeg -i input.dv -f yuv4mpegpipe - | yuvfps -s 50:1 -r 50:1 | ffmpeg -f yuv4mpegpipe -i - -b 28800k -y output.avi

Change the 50:1 ratios to whatever framerate you require, e.g. 100:1 for 100fps. Be sure to set the output file bitrate (the -b flag) to a relevant quality level; omitting it will result in a poor quality AVI output file by default.
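
For example, the full pipeline for a clip running at four times the original speed (100fps) would look something like this:

ffmpeg -i input.dv -f yuv4mpegpipe - | yuvfps -s 100:1 -r 100:1 | ffmpeg -f yuv4mpegpipe -i - -b 28800k -y output.avi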

The resulting AVI file was easily played back with Totem, and handled on the timeline admirably by OME.

Thanks to Victor Paesa on the FFmpeg mailing list for pointing me in the right direction.

Some other options to investigate include the new Libavfilter for FFmpeg, and converting the original footage to a raw data file, which will lose the audio.

How-To: Extract images from a video file using FFmpeg

February 6, 2008

Extracting all frames from a video file is easily achieved with FFmpeg.

Here’s a simple command line that will create 25 PNG images from every second of footage in the input DV file. The images will be saved in the current directory.

ffmpeg -i input.dv -r 25 -f image2 images%05d.png

The newly created files will all start with the word “images” and be numbered consecutively, zero-padded to five digits, e.g. images00001.png.

From a video that was 104 seconds long, for a random example, this command would create 2600 PNG files! Quite messy in the current directory, so instead use this command to save the files in a sub-directory called extracted_images:

ffmpeg -i input.dv -r 25 -f image2 extracted_images/images%05d.png
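
Note that FFmpeg won’t create the sub-directory for you, so it needs to exist before you run the command:

mkdir extracted_images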

Moving on, let’s say you just wanted the 25 frames from the first second; this line will work:

ffmpeg -i input.dv -r 25 -t 00:00:01 -f image2 images%05d.png

The -t flag in FFmpeg specifies the length of time to transcode. This can either be in whole seconds or hh:mm:ss format.

Making things a little more complex, we can create images from all frames, beginning at the tenth second and continuing for 5 seconds, with this line:

ffmpeg -i input.dv -r 25 -ss 00:00:10 -t 00:00:05 -f image2 images%05d.png

The -ss flag is used to denote start position, again in whole seconds or hh:mm:ss format.

Maybe extracting an image from every single frame in a video, resulting in a large number of output files, is not what you need. Here’s how to create a single indicative poster frame of the video clip from the first second of footage:

ffmpeg -i input.dv -r 1 -t 00:00:01 -f image2 images%05d.png

Notice that the -r flag is now set to 1.

If you want the poster frame from a different part of the clip, then specify which second to take it from using the -ss flag, in conjunction with the line above.
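
For example, this would grab the poster frame from 30 seconds into the clip (substitute whatever timestamp suits your footage):

ffmpeg -i input.dv -r 1 -ss 00:00:30 -t 00:00:01 -f image2 images%05d.png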

Lastly, if you wanted to create a thumbnail storyboard, showing action throughout the entire length of the video clip, you’ll need to specify the output image dimensions. Use the following line:

ffmpeg -i input.dv -r 1 -f image2 -s 120x96 images%05d.png

My original file was 720×576, so the image dimensions are an exact division of this (one sixth), which preserves the aspect ratio.
