
Dirac Schrödinger 1.0.9 Released

March 9, 2010

As we were on holiday last week, in the chilly snows of Austria, we almost missed an important announcement regarding the Schrödinger implementation of the Dirac codec.


It has been roughly eleven months since the last Schrödinger release, so this is indeed welcome news.

Don’t know what either Schrödinger or Dirac are? Dirac is an advanced royalty-free video compression format, initially developed by the UK’s BBC Research and Development team. To quote from the recent release announcement:

“Schrödinger is a cross-platform implementation of the Dirac video compression specification as a C library. The Dirac project maintains two encoder implementations: dirac-research, a research encoder, and Schrödinger, which is meant for user applications. As of this release, Schrödinger outperforms dirac-research in most encoding situations, both in terms of encoding speed and visual quality.”

That last sentence is really important. Previous testing by Stream0 showed that while Schrödinger was a much faster implementation than Dirac Research, the quality suffered enormously. If indeed Schrödinger has now surpassed Dirac Research in quality terms, this is exciting news.

Further information regarding enhancements in this release, and plans for a more regular release cycle, is available on the Dirac Video website.

With the accelerating acceptance of HTML5, it’d be fantastic to see more browser support for Dirac, alongside Ogg Theora, as an alternative to the currently almost ubiquitous Flash/H.264 combination.
Categories: Codecs

Kdenlive 0.7.7 Released

February 17, 2010

From the Kdenlive Release Notes page:

Kdenlive 0.7.7 was released on the 17th of February 2010.

This release fixes a lot of bugs reported against Kdenlive 0.7.6, including timeline corruption and various crashes. We also fixed a compatibility issue with Qt 4.6. We hope that this new release will make the video editing experience easier and more comfortable for everyone!

Kdenlive requires the latest release of the MLT video framework (0.5.0).

Some of the new features:

  • User selectable color schemes
  • Improved keyboard navigation
  • Timeline editing mode (normal, overwrite)
  • Fix compatibility issue with Qt 4.6
  • Allow shutdown after render when using Gnome Session manager
  • Improved titler (now supports font outline)
  • Editing properties for several clips at once (for example aspect ratio)

A complete list of the fixed issues can be found on our bugtracker.

Categories: Kdenlive, Video

New FFmbc Release 0.3

November 19, 2009

Just days after I first wrote about FFmbc (FFMedia Broadcast), the team have released a new version, marked as 0.3.

Enhancements in this version include:
  • Synced with FFmpeg SVN r20539.
  • Write QuickTime timecode track.
  • Set closed GOP flag for the first GOP when encoding with B-frames.
  • Search relative paths when opening QuickTime reference files.
Download the latest source, or a Windows binary, from the project homepage.
Also now included on the FFmbc wiki is a list of requested enhancements. These include support for additional codecs, bitstream validation for MPEG2 files and support for 10-bit DNxHD files. Go to the requested enhancements page to review the list, and add your own requests via the FFmbc-user discussion group.
Categories: FFmbc, Video

FFmbc – A Broadcast Media Alternative to FFmpeg

November 12, 2009

FFmbc (FFMedia Broadcast) is an off-shoot of the FFmpeg project that is targeted squarely at the broadcast media world. While still in its infancy, the project has been available for around six months and is currently at version 0.2. Launched and managed by Baptiste Coudurier, well known for his work on the FFmpeg project, FFmbc rolls out the following enhancements:

  • Import your files into Final Cut Pro or Avid Media Composer by:
      • creating XDCAM HD422 files in .mov or .mxf
      • creating XDCAM IMX/D-10 files in .mov or .mxf
      • creating Avid DNxHD files in .mov
  • Transcode your MPEG-2 4:2:2 Transport Stream files containing S302M audio.
  • Transcode your AVCHD camera files correctly.
  • Merge and split your audio tracks.
  • Create QuickTime files containing timecode tracks.
  • Advanced metadata support:
      • complete ID3v2 support
      • complete iTunes support

We’ve been meaning to test some of FFmbc’s functionality for a while now and, after a couple of false starts, we’ve successfully converted a generic MPEG2 50i (50Mbps all intra-frame) 4:2:2 Transport Stream to IMX D-10 in a .mov container. This file contained PCM audio, which version 0.1 of FFmbc baulked at, but the latest version handled perfectly. The output IMX D-10 file was imported without error directly into Final Cut Pro for editing. FFmbc has not yet renamed any FFmpeg libraries, so the same conversion syntax and commands can be used across both. Be careful, though, as this may create library conflicts if you try to have both FFmbc and FFmpeg installed at the same time.
Why would we want to use an Open Source transcoding tool in a predominantly proprietary video production environment? The answer is simple. Every commercial product we’ve investigated (Telestream’s Episode Engine and Flip Factory, Rhozet’s Carbon Coder, Digital Rapid’s Streamz) wanted to transcode our MPEG2 source file to IMX, rather than simply re-wrap the essence into IMX. Transcoding takes a considerable amount of time and will always lower the quality of the final output, no matter how minutely. FFmbc instead took our video and audio essence, extracted it from the MPEG2 Transport Stream and re-wrapped it all to IMX D10. 
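To illustrate the difference, a re-wrap amounts to copying the essence rather than re-encoding it. A minimal sketch (the file names are hypothetical, and a real IMX D-10 re-wrap may need additional container-specific options; since FFmbc keeps FFmpeg’s command syntax, the stream copy flags are the same):

>ffmpeg -i source_mpeg2_422.ts -vcodec copy -acodec copy output_imx_d10.mov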
Our 30 minute test file was around 16GB in size. Our test machine was a puny eeePC with an Intel Atom N280 1.66GHz processor, running Ubuntu Karmic Koala Netbook Remix (hardly ideal for transcoding video). The entire conversion process took a little over 7 minutes, at a rate of approximately 110fps (frames per second). Pretty impressive!
There are a couple of caveats to mention with regards to FFmbc. The software is very new and Baptiste is very busy; I’m sure more developers would be a welcome addition to the project. We used the earlier Stream#0 tutorial for installing FFmpeg to achieve the same for FFmbc. However, FFmbc v0.2 didn’t like the latest SVN of x264, a bug that won’t be fixed until the next FFmbc release. Instead, we used the packaged libx264 from the Ubuntu repository, after which FFmbc compiled and installed without error. Checking out the latest FFmbc from Git also caused some issues: the source compilation complained about the absence of swscale and failed. However, working around these small issues, we’ve achieved our goal – a quick conversion of a generic MPEG2 file to something that can be edited using Final Cut Pro.
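For reference, installing the packaged library on Ubuntu is a one-liner (we’re assuming the libx264-dev package name here for the development headers; it may differ between releases):

>aptitude install libx264-dev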
FFmbc is an exciting prospect, targeted directly at the broadcast media world. If you’re looking for an open source file transcoding solution, to integrate with your Avid or Final Cut Pro editing environment, give FFmbc a chance to prove itself.

Possible Formation of FFmpeg Foundation NGO

November 6, 2009

Recently posted on the FFmpeg developers mailing list was a request for comments from Ronald Bultje regarding the intention to form an FFmpeg Foundation (although not under that name).

Full text of Ronald’s post is as follows:
Hi,

some developers have stated the intent to set up a NGO (non-governmental organization) to help us bring some structure, support and funding into the project. We haven’t decided on a name yet, although people agree that we dislike “FFmpeg Foundation” – if you can suggest better names, please do so in this thread. Right now, we’re at a point where I think most of the stuff is decided and there is just some paperwork to be filled out. All this with many thanks to Karen at the SFLC who helped to get all this going. The NGO will be registered in Delaware, USA (for administrative reasons). Its goal will be to “advance the state of free and open multimedia” or something like that.

For the time being, this organization will be run by a board consisting of 7 members. For the starting board, the following developers have volunteered:

- Baptiste Coudurier
- Benjamin Larsson
- Carl Eugen Hoyos
- Diego Biurrun
- Mans Rullgard
- Michael Niedermayer
- Ronald S. Bultje

If you feel that these members will not be able to appropriately represent this project, now would be a good time to speak up and suggest something better. For the near future, the directors will elect a new board at the end of their yearly term (and that might end up in the same 7 members). If there’s enough interest, we might set up a member-structure so the board can be properly elected by its members (=developers, contributors, etc.), but ATM we lack the structure for even that ample task, so not for now.

The NGO will set up a bank account to accept tax-deductible donations (bank, paypal, etc.) from organizations, users and companies to benefit the further advance of “open / free multimedia”. These might be spent on FFmpeg development, FFmpeg development hackparties, FFmpeg developers attending conferences or anything else that would serve the greater good of “open / free multimedia” (i.e. it doesn’t have to be FFmpeg, specifically). We will also accept donations for a specific purpose (e.g. Snow, etc.). Lastly, we will host a bank account for MPlayer for donations intended specifically for MPlayer, rather than FFmpeg.

Comments?

Ronald
This sounds like really positive news, and various comments and questions followed the announcement.
Let’s hope this organisation becomes a reality and truly benefits open source multimedia. We await a formal announcement soon.

Categories: FFmpeg, Video

Ripping CDs with FLAC – Best Compression Settings

November 6, 2009

As storage space becomes cheaper, there’s a growing trend to save digital music files in a lossless format. Such lossless formats provide an exact replication of the audio quality found in the original content, usually on CD. The resulting files are also much larger when compared to MP3 or AAC at 128kbps or 256kbps. A favourite open source lossless audio codec is FLAC, which stands for Free Lossless Audio Codec. Within the possible FLAC settings there are nine levels of compression (0-8) to choose from when creating new files.

Lossless codec? Compression? Doesn’t compression result in loss of detail? Not always. There are many lossy codecs, both audio and video, that apply compression techniques which actually discard some of the original material to obtain smaller file sizes. The better a codec is at discarding detail that doesn’t impair the listening or viewing experience, the more impressive the end result will be.

FLAC, being lossless, doesn’t discard any of the original content, but still applies compression techniques. Think of it like compressing a file with Gzip or Bzip2: smaller files are achieved, but when decompressed nothing has been lost in the process. Or picture folding a piece of paper: fold it in half once and the result is smaller; keep folding to produce smaller and smaller (more highly compressed) paper packages. Unfold the paper and you still have the same original sheet. (Ignore fold lines and degradation over time – that doesn’t happen in the world of bits and bytes.)
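You can demonstrate this losslessness yourself with any file, a general purpose compressor and a checksum tool. A quick sketch (the file name is arbitrary):

>md5sum original.wav
>gzip original.wav
>gunzip original.wav.gz
>md5sum original.wav

The two checksums match: the compress/decompress round trip returns the file bit-for-bit.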

We decided to test which of the FLAC compression settings provided the best trade-off between final file size and encoding time. Higher compression will require more time, but should produce smaller file sizes.
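For reference, when invoking the flac encoder directly, the compression level is chosen with a numeric flag from -0 to -8 (--best is an alias for -8). For example:

>flac -8 -o track01.flac track01.wav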

Trying to mimic how we would actually go about ripping a whole CD, we decided to use the Ripit utility, and follow instructions posted on the Debian forum. Ripit is a great example of a truly useful utility where a fancy GUI is just not needed. Edit one simple configuration file, then type “ripit” at the command prompt and that’s almost all there is to it. There would be some overhead in using Ripit, as it checks the freedb.org database for each album’s details, but this should be minimal.
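For each test we varied the FLAC compression level in the Ripit configuration file. As a rough sketch (the key names below are an assumption from our setup and may differ between Ripit versions):

coder=2
quality=8

Here coder=2 selects FLAC as the encoder and quality sets its compression level.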

Grabbing the nearest un-ripped CD from the shelf, our test file will be U2’s Pride (In the Name of Love) from their Best of 1980-1990 album. This song is 3 minutes and 50 seconds long.

Our exact Ripit command was:

>time ripit 01

“Time” provides feedback on the elapsed time of the process. “01” tells Ripit to just rip the first track on the CD.

The test machine is a reasonably old Dell Inspiron 6400, which contains an Intel Core 2 T5500 CPU @ 1.66GHz and 1GB of RAM.

Here are our results for the nine levels of compression available in FLAC. If no compression level is specified, 5 is the default.

Compression Quality: 0
Time: 0m59.309s
Size: 30261367 bytes (28.86MB)

Compression Quality: 1
Time: 1m1.518s
Size: 29643288 bytes (28.27MB)

Compression Quality: 2
Time: 1m0.324s
Size: 29631732 bytes (28.26MB)

Compression Quality: 3
Time: 0m57.156s
Size: 28596473 bytes (27.27MB)

Compression Quality: 4
Time: 1m0.707s
Size: 27717767 bytes (26.43MB)

Compression Quality: 5
Time: 1m1.406s
Size: 27710285 bytes (26.43MB)

Compression Quality: 6
Time: 1m1.899s
Size: 27710119 bytes (26.43MB)

Compression Quality: 7
Time: 1m8.692s
Size: 27696835 bytes (26.41MB)

Compression Quality: 8
Time: 1m13.376s
Size: 27664197 bytes (26.38MB)

Between Compression Quality 0 and Compression Quality 8, there’s a difference of approximately 13.5 seconds and 2.5MB. This might not seem like very much, but let’s expand these figures to account for an entire CD.

Assuming all tracks are approximately the same length (3:50) and that there are 12 tracks on the average CD, we have the following figures:

13.5 seconds x 12 = 162 seconds (2 minutes 42 seconds)
2.5MB x 12 = 30MB.

Realistically though, you can see there’s a big jump in time between Compression Quality 6 and Compression Quality 7, while there’s not a lot of time difference between Compression Quality 0 and Compression Quality 5 (ignoring Compression Quality 3’s time anomaly, which we can’t account for). There’s also not a lot of file size difference between Compression Quality 5 and Compression Quality 8.

Therefore, unless storage space is a really big issue, the average user is probably better off leaving the Compression Quality setting at the default (5) and saving almost 3 minutes per CD rip. Then again, on a newer machine this time difference is likely to be much smaller, so you may as well use Compression Quality 8 and save that little bit of space.

In the end, the Compression Quality settings in FLAC don’t make that much difference. Leaving the setting at the default is a pretty good choice, but setting it to the maximum of 8 will save some space without a major time impact.

Categories: Audio, Codecs

Interview with Magic Lantern Creator

November 6, 2009

Several months ago we posted an article about the Magic Lantern firmware for the Canon 5D Mark II video DSLR. This open source software adds functionality to the 5D that Canon didn’t provide out of the box. There has been quite a lot of progress on Magic Lantern over the last few months. The latest release is version 0.1.6, but even since then further enhancements have been made, including Autoboot.

The original creator of Magic Lantern, Trammell Hudson, recently participated in an interview available on the Cinema5d website. Here are some short excerpts from Trammell’s responses:

4. What plans do you have for the new 5d firmware update? Can we expect anything beyond 24p/25p?

You would have to ask Canon about their plans…  I’ll update my code to work with their new firmware once it is available.  It would really please me if Canon incorporated all of the features from Magic Lantern into their firmware.

On my roadmap for upcoming Magic Lantern releases:

* 1080i HDMI output (still having technical problems)

* SMPTE timecode jamming

* Scripting

* USB control from the Impero remote follow-focus

* Waveforms and vector scope

* Autoboot (now available)

5. On your Wikia page you describe Magic Lantern as “an enhancement atop of Canon’s firmware that makes your 5D Mark II into the 5D Mark Free”. What exactly do you mean?

Most equipment is “closed” in that what you buy is what you get. Sure, you can put it on rails, add a follow focus and mattebox, but you can’t really change what is going on inside the box.  With Magic Lantern, however, the internals of the camera have been opened up so that it is possible to add new features that the manufacturer might not have ever imagined.

Read the full text of the interview over at Cinema5d.

A potentially useful enhancement to the Magic Lantern firmware would be the ability to change the codec used in the 5D Mark II. Currently, content is stored as H.264 at around 40Mbps. While this provides for some very nice high quality footage, it’d be nice if additional open source options were included, like Lagarith and Dirac Research. The Magic Lantern Wikia Discussion page has a few comments around this idea already.

H264 Video Encoding on Amazon’s EC2

October 28, 2009
Stream #0 recently started looking at Amazon’s EC2 computing offering. We created our first public AMI, based on Debian Squeeze, with FFmpeg and x264 pre-installed. Now that we can easily start instances with the necessary basics in place, it is time to compare the relative merits of the different instance sizes that Amazon offers.
EC2 Instances come in a variety of sizes, with different CPU and RAM capacities. We tested the 64-bit offerings, including the recently announced High-Memory Quadruple Extra Large instance.
These 64-bit instances are listed on the EC2 website in the following way:
Large Instance 7.5 GB of memory, 4 EC2 Compute Units (2 virtual cores with 2 EC2 Compute Units each), 850 GB of instance storage, 64-bit platform
Extra Large Instance 15 GB of memory, 8 EC2 Compute Units (4 virtual cores with 2 EC2 Compute Units each), 1690 GB of instance storage, 64-bit platform
High-CPU Extra Large Instance 7 GB of memory, 20 EC2 Compute Units (8 virtual cores with 2.5 EC2 Compute Units each), 1690 GB of instance storage, 64-bit platform
High-Memory Quadruple Extra Large Instance 68.4 GB of memory, 26 EC2 Compute Units (8 virtual cores with 3.25 EC2 Compute Units each), 1690 GB of instance storage, 64-bit platform
We’ll take a closer look at the in-depth specifications of each below.
Our test file was 5810 frames (a little over 4 minutes, and 285MB) of the HD 1920×1080 MPEG-4 AVI version of Big Buck Bunny. The FFmpeg transcode would convert this to H264 using the following 2-pass command:
>ffmpeg -y -i big_buck_bunny_1080p_surround.avi -pass 1 -vcodec libx264 -vpre fastfirstpass -s 1920x1080 -b 2000k -bt 2000k -threads 0 -f mov -an /dev/null && ffmpeg -deinterlace -y -i big_buck_bunny_1080p_surround.avi -pass 2 -acodec libfaac -ab 128k -ac 2 -vcodec libx264 -vpre hq -s 1920x1080 -b 2000k -bt 2000k -threads 0 -f mov big_buck_bunny_1080p_stereo_x264.mov
Setting -threads to 0 should mean that FFmpeg automatically takes advantage of all the CPU cores available on each EC2 instance.
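A quick way to confirm how many cores an instance actually presents (and hence how many threads FFmpeg will spawn) is to count the processor entries in /proc/cpuinfo:

>grep -c ^processor /proc/cpuinfo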
FFmpeg revealed the following information about the transcode:
Input #0, avi, from 'big_buck_bunny_1080p_surround.avi':
Duration: 00:09:56.48, start: 0.000000, bitrate: 3968 kb/s
Stream #0.0: Video: mpeg4, yuv420p, 1920x1080 [PAR 1:1 DAR 16:9], 24 tbr, 24 tbn, 24 tbc
Stream #0.1: Audio: ac3, 48000 Hz, 5.1, s16, 448 kb/s
[libx264 @ 0x6620f0]using SAR=1/1
[libx264 @ 0x6620f0]using cpu capabilities: MMX2 SSE2Fast SSSE3 FastShuffle SSE4.1 Cache64
[libx264 @ 0x6620f0]profile High, level 4.0
Output #0, mov, to 'big_buck_bunny_1080p_stereo_x264.mov':
Stream #0.0: Video: libx264, yuv420p, 1920x1080 [PAR 1:1 DAR 16:9], q=10-51, pass 2, 2000 kb/s, 24 tbn, 24 tbc
Stream #0.1: Audio: aac, 48000 Hz, 2 channels, s16, 128 kb/s
Stream mapping:
Stream #0.0 -> #0.0
Stream #0.1 -> #0.1
Ignore the duration, as that’s read from the file header, and we only uploaded part of the overall file.
Now to look at how each EC2 instance performed.
m1.large
(Large Instance 7.5 GB of memory, 4 EC2 Compute Units)
Firstly, querying the machine capacity (cat /proc/cpuinfo) returns the following information:
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 23
model name : Intel(R) Xeon(R) CPU E5430 @ 2.66GHz
stepping : 6
cpu MHz : 2659.994
cache size : 6144 KB
physical id : 0
siblings : 1
core id : 0
cpu cores : 1
fpu : yes
fpu_exception : yes
cpuid level : 10
wp : yes
flags : fpu tsc msr pae mce cx8 apic mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall nx lm constant_tsc pni monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr dca lahf_lm
bogomips : 5322.41
clflush size : 64
cache_alignment : 64
address sizes : 38 bits physical, 48 bits virtual
power management:
There are two of these cores available. RAM is confirmed as 7.5GB (free -g).
The FFmpeg transcode showed the following:
H264 1st Pass = 11fps – 18 fps, 5 minutes 30 seconds
H264 2nd Pass = 4-5fps, 18 minutes 38 seconds
Total Time: 24 minutes, 8 seconds
m1.xlarge
Extra Large Instance 15 GB of memory, 8 EC2 Compute Units
CPU Info:
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 23
model name : Intel(R) Xeon(R) CPU E5430 @ 2.66GHz
stepping : 10
cpu MHz : 2666.760
cache size : 6144 KB
physical id : 0
siblings : 1
core id : 0
cpu cores : 1
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu tsc msr pae mce cx8 apic mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall nx lm constant_tsc pni monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr dca lahf_lm
bogomips : 5336.15
clflush size : 64
cache_alignment : 64
address sizes : 38 bits physical, 48 bits virtual
power management:
There are four of these cores available. RAM is confirmed at 15GB.
The FFmpeg transcode showed the following:
H264 1st Pass = 11fps – 14 fps, 5 minutes 30 seconds
H264 2nd Pass = 6-7fps, 14 minutes 19 seconds
Total Time: 19 minutes, 49 seconds
c1.xlarge
High-CPU Extra Large Instance 7 GB of memory, 20 EC2 Compute Units
CPU Info:
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 23
model name : Intel(R) Xeon(R) CPU E5410 @ 2.33GHz
stepping : 10
cpu MHz : 2333.414
cache size : 6144 KB
physical id : 0
siblings : 1
core id : 0
cpu cores : 1
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu tsc msr pae mce cx8 apic mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall nx lm constant_tsc pni monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr dca lahf_lm
bogomips : 4669.21
clflush size : 64
cache_alignment : 64
address sizes : 38 bits physical, 48 bits virtual
power management:
There are eight of these cores available. RAM is confirmed at 7GB.
The FFmpeg transcode showed the following:
H264 1st Pass = 24-29fps, 3 minutes 24 seconds
H264 2nd Pass = 11-13fps, 7 minutes 8 seconds
Total Time: 10 minutes, 32 seconds
m2.4xlarge
High-Memory Quadruple Extra Large Instance 68.4 GB of memory, 26 EC2 Compute Units
CPU Info:
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 26
model name : Intel(R) Xeon(R) CPU X5550 @ 2.67GHz
stepping : 5
cpu MHz : 2666.760
cache size : 8192 KB
physical id : 0
siblings : 1
core id : 0
cpu cores : 1
fpu : yes
fpu_exception : yes
cpuid level : 11
wp : yes
flags : fpu tsc msr pae mce cx8 apic mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall nx lm constant_tsc pni monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr dca popcnt lahf_lm
bogomips : 5338.09
clflush size : 64
cache_alignment : 64
address sizes : 40 bits physical, 48 bits virtual
power management:
There are eight of these cores available. RAM is confirmed at 68GB.
The FFmpeg transcode showed the following:
H264 1st Pass = 35-38fps, 2 minutes 47 seconds
H264 2nd Pass = 12-15fps, 6 minutes 30 seconds
Total Time: 9 minutes, 17 seconds
What can be revealed from these figures? As expected, the High-Memory Quadruple Extra Large Instance performed best, but not by much. Certainly all the additional RAM didn’t make much of an impact, and the time saving is probably down to the slightly increased CPU specification. Obviously, over a larger file set this time saving would be more evident.
Let’s look at which EC2 instance gives the best value for money for this test. Amazon charges per instance hour, as shown below:
m1.large: $0.40/hour
m1.xlarge: $0.80/hour
c1.xlarge: $0.80/hour
m2.4xlarge: $2.40/hour
These prices are in US Dollars for a US based instance (European instances are slightly more expensive). Amazon has also revealed that there will be a price reduction in effect from November 1st 2009.
Looking at the time taken to transcode our test file, on each instance, reveals the following:
m1.large
Total Time: 24 minutes, 8 seconds
Total Cost: $0.16 ((($0.40/60)/60) x 1448 seconds)
Cost per GB: $0.57 ((1024MB/285MB) x $0.16)
m1.xlarge
Total Time: 19 minutes, 49 seconds
Total Cost: $0.26 ((($0.80/60)/60) x 1189 seconds)
Cost per GB: $0.93 ((1024MB/285MB) x $0.26)
c1.xlarge
Total Time: 10 minutes, 32 seconds
Total Cost: $0.14 ((($0.80/60)/60) x 632 seconds)
Cost per GB: $0.50 ((1024MB/285MB) x $0.14)
m2.4xlarge
Total Time: 9 minutes, 17 seconds
Total Cost: $0.37 ((($2.40/60)/60) x 557 seconds)
Cost per GB: $1.33 ((1024MB/285MB) x $0.37)
Clearly the c1.xlarge instance represents the best value for money, although I was surprised how close behind the m1.large costs were. The additional RAM and slightly better CPU specifications of the m2.4xlarge instance do not outweigh its much more expensive hourly cost, at least when it comes to video transcoding.
A typical HD file used for broadcast or high-end post production purposes is around 85GB for 60 minutes (DNxHD at 185Mbps). Obviously the time taken to transcode such a file to H264 at 2Mbps could vary from our test content, but from the figures above we can estimate that on the c1.xlarge it would cost $42.50 and take approximately 53.62 hours!
Taking into account that these figures may vary for different input and output files, the above should represent a worst case scenario. For example, I would expect an SD MPEG2 50Mbps file to take proportionally much less effort to transcode than a DNxHD 185Mbps HD file. Only further testing will tell.
Is Amazon’s EC2 offering worth considering for high end video file transcoding? Compared to the prices charged by post-production facilities it is certainly a lot cheaper, as long as you have time to wait for the end result. However, that’s the beauty of cloud based computing power – if you’re in a hurry, just scale up! Keep in mind, though, that content still needs to be uploaded to EC2 before transcoding can begin, which will take additional time and add further cost.
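Getting source material onto a running instance is a standard scp job. As a sketch (the key file and hostname below are placeholders for your own, and /mnt is where the instance storage is typically mounted):

>scp -i mykeypair.pem big_buck_bunny_1080p_surround.avi root@your-instance-hostname:/mnt/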

AWS Debian Squeeze AMI with FFmpeg and X264

October 22, 2009

Stream #0 has now made available our first Amazon Web Services (AWS) AMI. This is based on Eric Hammond’s 64-bit Squeeze AMI: ami-fcf61595.

The first Stream #0 AMI can be found by looking for the following AMI ID in the AWS Management Console: ami-b535d6dc
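Launching an instance from this AMI with the EC2 API tools looks something like the following (a sketch; the keypair name is a placeholder, and the instance type is whichever size you want to test):

>ec2-run-instances ami-b535d6dc -t c1.xlarge -k mykeypair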

The following additions have been made over the base Squeeze build:

  • Added the Debian Multimedia repository
  • Updated and upgraded to the latest packages as of October 22nd 2009
  • Built x264 from source (r1301)
  • Built FFmpeg from source (r20350)

FFmpeg has been configured as per the options noted in the How-To below.

Ultimately we’re planning to build a few different AMI variations – e.g. Lenny with an FFmpeg 0.5 build and x264 from the Debian Multimedia repository, as a slightly more stable alternative to this “build everything from source on Squeeze” AMI.

The AMI has been made public, and Stream0 would really appreciate feedback on this, our first AMI build.

Everything you need to know about Amazon’s Web Services:

Amazon Web Services
AWS Elastic Compute Cloud (EC2)
AWS Developer Guide
Alestic – listing Debian and Ubuntu AMIs
ec2debian Google Group

How-To Build FFmpeg on Debian Squeeze

October 22, 2009

It’s been a long time now since I wrote my original How-To for building FFmpeg on Debian. A lot has changed since then, in both the Debian and FFmpeg world, so it’s definitely time for an update.

This tutorial describes how to build x264 and FFmpeg from scratch on a base Debian Squeeze system. Throughout, I will assume that you are operating as root (or via su), or that you know how to use sudo (make sure you’ve added yourself to /etc/sudoers).
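If you go the sudo route, the usual approach is to run visudo and add a line like the following (replace yourname with your own user; this grants full sudo rights, which is all this tutorial needs):

yourname ALL=(ALL) ALL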

First, we need to update the sources list. I use pico as my text editor, as I was a long-time Pine mail user way back when. Feel free to use vi or emacs if you prefer.

Go to the Debian Multimedia repository site and download the keyring package. Follow the instructions for unpackaging it about half-way down the front page. Now update your sources list:

>pico /etc/apt/sources.list

Add deb http://www.debian-multimedia.org squeeze main on a new line and save the file.

>aptitude update
>aptitude full-upgrade

Now you’re using the latest sources and packages.

Next, install all the additional libraries we’ll need:

>aptitude install build-essential subversion git-core yasm libgpac-dev libdirac-dev libgsm1-dev libschroedinger-dev libspeex-dev libvorbis-dev libopenjpeg-dev libdc1394-dev libsdl1.2-dev zlib1g-dev texi2html libfaac-dev libfaad-dev libmp3lame-dev libtheora-dev libxvidcore4-dev libopencore-amrnb-dev libopencore-amrwb-dev

Once that has successfully completed, it’s time to grab the latest x264 code:

>git clone git://git.videolan.org/x264.git
>cd x264
>./configure --enable-shared
>make
>make install

Hopefully all is still going well and you encountered no errors so far. Great, let’s grab FFmpeg from Subversion:

>svn checkout svn://svn.ffmpeg.org/ffmpeg/trunk ffmpeg
>cd ffmpeg

Now to configure FFmpeg. There are so many options that it’s sometimes hard to know which ones to choose. The list below is my personal preference, but do try ./configure --help to assist in choosing your own.

>./configure --enable-gpl --enable-postproc --enable-pthreads --enable-libfaac --enable-libfaad --enable-libmp3lame --enable-libtheora --enable-libx264 --enable-shared --enable-nonfree --enable-libvorbis --enable-libgsm --enable-libspeex --enable-libschroedinger --enable-libdirac --enable-avfilter --enable-avfilter-lavf --enable-libdc1394 --enable-libopenjpeg --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-version3

After a successful configuration, all the enabled decoders, encoders and muxers will be displayed. There are some configuration dependencies here: if you don’t --enable-gpl, things like postproc will fail at build time. Next…

>make
>make install

“Make” will probably take quite a long time.

Optionally, you may like to build qt-faststart as well. If you don’t know what this does, use Google, but in short it rearranges the atoms in a QuickTime file so the header comes first, allowing progressive download delivery.

>make tools/qt-faststart

If you try to use FFmpeg now, by simply typing “ffmpeg”, you are likely to encounter an error regarding shared libraries (we did build FFmpeg with --enable-shared). To fix this we do the following:

>pico /etc/ld.so.conf

Add the line “/usr/local/lib” (without quotes) to this file and then save it. This tells the dynamic linker where to find the shared libraries we have just installed.
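Equivalently, you can append the line from the shell rather than editing the file by hand:

>echo "/usr/local/lib" >> /etc/ld.so.conf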

>ldconfig

That’s it! Finished. Pretty easy, right? Now you just need to learn how to use FFmpeg, but that’s a topic for another day. Very briefly though, here’s a command line for creating a 2-pass H264 file, at 750kbps and 480×360 resolution, in a mov container, with progressive download enabled.

>ffmpeg -y -i inputfile.mpg -pass 1 -vcodec libx264 -vpre fastfirstpass -s 480x360 -b 750k -bt 750k -threads 0 -f mov -an /dev/null && ffmpeg -deinterlace -y -i inputfile.mpg -pass 2 -acodec libfaac -ab 128k -ac 2 -vcodec libx264 -vpre hq -s 480x360 -b 750k -bt 750k -threads 0 -f mov outputfile.mov

>./tools/qt-faststart outputfile.mov outputfilefast.mov

Categories: FFmpeg, Video