
Archive for the ‘Codecs’ Category

Blackmagic Decklink SDI and Linux

June 24, 2011

Almost a year ago, I invested in a post-production company based in central London. At the time, I had dreams of pushing open source software solutions into the professional post-production arena. Things haven’t quite worked out as planned, and I’ve made very limited headway on this project: business imperatives took over, and changing a whole ecosystem is a big job. I’ve continued to use Linux on my laptop and happily connect to printers and network drives, but that’s about all.

Recently I had an opportunity to change all that. We needed a tape digitisation solution, separate from our Avid editing suites, for a new project. I’ve known for a while that Blackmagic’s Decklink range of cards works with Linux and is pretty good for capturing from SDI. We purchased the basic Decklink SDI card, recycled an old machine onto which I installed Linux Mint Debian, and away we went.

Things weren’t entirely smooth from the start. I upgraded the Mint Debian installation, including the kernel, to 2.6.39. This was my first mistake: the card was not recognised with this kernel. Booting into the original 2.6.32 kernel overcame the problem and the card was recognised. I had to download the relevant Linux software from the Blackmagic website, as the accompanying DVD only included Windows software. The Linux download from Blackmagic included the relevant drivers, firmware and Media Express software. Unfortunately, other advertised items, such as the drive speed test and alpha keying utilities, are not available for Linux.

Once everything was up and running, it was time to capture. The Media Express 2.3.1 software was pretty straightforward to use. Setting in and out points allowed the software to control the J30 Digibeta deck, and content was captured in Uncompressed 10-bit YUV format. The other, limited, codec options were Uncompressed 8-bit YUV, RGB and MotionJPEG. The uncompressed file was then transcoded to IMX50 using FFmbc. The whole process seemed to work reasonably well, and I’m now just waiting to send the IMX50 sample off for technical inspection.
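For anyone wanting to try a similar workflow, the transcode step looked something like the sketch below. Treat it as indicative only: it assumes an FFmbc build that includes the imx50 target preset, the filenames are placeholders, and the exact options (audio codec, field order and so on) will depend on your source material and delivery requirements.

>ffmbc -i capture_uncompressed_10bit.mov -target imx50 -acodec pcm_s16le capture_imx50.mov

FFmbc’s broadcast target presets are the attraction here: they are designed to bundle up the MPEG-2 4:2:2 50Mbps settings that an IMX50 deliverable expects, rather than leaving you to assemble them by hand.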

Unfortunately, the Media Express software as supplied did not provide any option to change the SD capture frame size from 720×576 to anything else. Ideally I was looking for 720×608 so that VITC was also captured. A phone call to Blackmagic revealed that this was possible with the Windows and Mac versions of their software, but not on Linux. Their Linux SDK did expose the necessary calls, but the software had not been written to use them. Essentially, if I wanted VITC, I’d need to write a capture utility myself. Somewhat disappointing.

Overall, I’m pleased that we now have an SDI capture solution running on Linux. However, the Blackmagic Decklink offering still feels a little half-baked, and Linux was perhaps only an afterthought for them.

Updates on WebM Support – All Aboard!

June 2, 2010

As could probably be predicted, there’s been a lot more press around WebM over the last ten days or so. A few articles are worth noting.

CNET posted a fairly unremarkable piece on the quality of WebM compared against H.264. However, there were two interesting links in this piece.
The first pointed to a WebM project page outlining the in-depth encoding parameters for WebM content. If you’re planning to create WebM files, reading this page is essential.
The second link, to the quAVlive website, provides various examples of H.264 (using x264) encodes compared against WebM. I can’t really see a lot of visual difference in the “Sunflower” example. However, even without enlarging the screenshots, it is clear to my eyes that in “Park Joy” and “Riverbed” H.264 is certainly superior. I would like to have seen more information on the time taken to transcode these examples with each codec, and on the resulting file sizes. Picture quality isn’t everything; transcode time and storage requirements should also be taken into consideration.
Everyone’s jumping on the WebM bandwagon with software and hardware support. GStreamer claims full plugin support, which in turn means there is Moovida support, and the Transmageddon transcoder can also output VP8 files, although not yet in the Matroska/WebM container. Not to be outdone, Flumotion will also stream live VP8/WebM content. The Miro Video Converter will also output valid VP8/WebM files, claiming to be the first to do so. The list could go on, but the easiest thing is probably just to keep tabs on the WebM project page listing all the supported devices and software tools, both commercial and open source.
Also worth a shout is the fact that both Mozilla and Opera are pushing for VP8/WebM to be specifically included in the HTML5 specification. Previously, major browser makers couldn’t agree on one specific video file format – Mozilla and Opera backing Ogg Theora and Apple sticking with H.264. I can’t see that particular situation changing now. 

WebM – The New Open Source Codec on the Block

May 27, 2010

In August 2009, Google acquired codec developer On2 Technologies for a rumoured $106 million. The flagship On2 codec was VP8, and it was also rumoured at the time that Google might open source this technology in the future, although a number of challenges lay ahead.

Late last week this rumour became reality and WebM was born. Alongside Theora and Dirac, WebM now enters the open source, HTML 5-ready codec battle. Almost immediately, all the major web browsers except one, including Internet Explorer, announced support for the codec. With the might and muscle of Google behind it, WebM must have a solid chance of taking on the dominance of H.264 in the web video delivery battle. This really will be a solid kick in the pants for Theora, which now seems destined to remain a fairly niche product, even with direct HTML 5 support from Firefox.
In short order, some early comparisons between H.264 and WebM appeared online, some with more technical detail than others. The debate also began as to whether Google was being benevolent or evil: did WebM contain submarine patents that not even Google was aware of?
Producing WebM video for the masses was the next step. Easy-to-follow FFmpeg tutorials are available, and just a few days ago a major commercial transcoding software vendor announced WebM/VP8 support.
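As a rough illustration of the sort of command those tutorials walk through (this is a sketch rather than a recipe from any particular tutorial: it assumes an FFmpeg build compiled with libvpx and libvorbis support, and the filenames and bitrates are placeholders):

>ffmpeg -i input.avi -vcodec libvpx -b 1000k -acodec libvorbis -ab 128k output.webm

A simple single-pass, target-bitrate encode like this is the easiest starting point; the WebM project’s encoder parameter documentation goes into far more detail for production use.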
WebM video is already available on YouTube, in experimental form. How long before at least all new YouTube video is transcoded to this format? If WebM quality is on a par with H.264, and the jury is still out on that, what is the unique selling point of H.264? Why would anyone continue to use it?
There will be a substantial legacy component to overcome. Many people and organisations have invested heavily in H.264 technology, and a move to WebM may represent an operational, although not licensing, cost. However, with Google behind it, many of Big Business’ concerns around open source projects may be alleviated.
Adding to this, H.264 video within a Flash player still has significant advantages over HTML 5 delivered video content, in terms of presentation flexibility and perceived security.
H.264 video is of course still dominant for web delivery, just as VP6 and VP7 were in the past. However, WebM is an exciting development with a bright future. Using the collective power of open source development, and no small amount of corporate backing from Google, watch out for WebM to challenge MPEG-LA’s codec in the future.

Dirac Schrödinger 1.0.9 Released

March 9, 2010

As we were on holiday last week in the chilly snows of Austria, we almost missed an important announcement regarding the Schrödinger implementation of the Dirac codec.


It has been roughly eleven months since the last Schrödinger release, so this is indeed welcome news.

Don’t know what either Schrödinger or Dirac are? Dirac is an advanced royalty-free video compression format, initially developed by the UK’s BBC Research and Development team. To quote from the recent release announcement:

“Schrödinger is a cross-platform implementation of the Dirac video compression specification as a C library. The Dirac project maintains two encoder implementations: dirac-research, a research encoder, and Schrödinger, which is meant for user applications. As of this release, Schrödinger outperforms dirac-research in most encoding situations, both in terms of encoding speed and visual quality.”

That last sentence is really important. Previous testing by Stream0 showed that while Schrödinger was a much faster implementation than Dirac Research, the quality suffered enormously. If indeed Schrödinger has now surpassed Dirac Research in quality terms, this is exciting news.
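For anyone who wants to put that claim to the test, FFmpeg can drive the Schrödinger encoder when built with libschroedinger support. The line below is only a sketch: it assumes such a build, the filenames and bitrate are placeholders, and container support for Dirac varies between builds, so Matroska is used here as a reasonably safe choice.

>ffmpeg -i input.avi -vcodec libschroedinger -b 4000k -an output.mkv

Comparing output produced this way against the same material encoded with the dirac-research tools would be an interesting way to verify the release notes’ claims.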

Further information regarding enhancements in this release, and plans for a more regular release cycle, are available on the Dirac Video website.

With HTML 5 acceptance accelerating, it’d be fantastic to see more browser support for Dirac, alongside Ogg Theora, as an alternative to the currently almost ubiquitous Flash/H.264 combination.

Ripping CDs with FLAC – Best Compression Settings

November 6, 2009

As storage space becomes cheaper, there’s a growing trend towards saving digital music files in a lossless format. Such lossless formats provide an exact replication of the audio quality found in the original content, usually on CD. The resulting files are also much larger when compared with MP3 or AAC at 128kbps or 256kbps. A favourite open source lossless audio codec is FLAC, which stands for Free Lossless Audio Codec. Within the possible FLAC settings there are nine compression levels (0–8) to choose from when creating new files.

Lossless codec? Compression? Doesn’t compression result in loss of detail? Not always. There are many lossy codecs, both audio and video, that apply compression techniques which actually discard some of the original material to obtain smaller file sizes. The better a codec is at discarding detail that doesn’t impair the listening or viewing experience, the more impressive the end result will be.

FLAC, being lossless, doesn’t discard any of the original content, but still applies compression techniques. View this like compressing a file with gzip or bzip2: smaller files are achieved, but when decompressed nothing has been lost in the process. Or think of it like folding a piece of paper. Fold it in half once, and the end result is smaller. Keep folding to produce smaller and smaller (or more highly compressed) paper packages. Unfold the paper, and you still have the same original piece of paper. Ignore fold lines and degradation over time; that doesn’t happen in the world of bits and bytes!

We decided to test which of the FLAC compression settings provided the best trade-off between final file size and encoding time. Higher compression will require more time, but should produce smaller file sizes.

Trying to mimic how we would actually go about ripping a whole CD, we decided to use the Ripit utility, and follow instructions posted on the Debian forum. Ripit is a great example of a truly useful utility where a fancy GUI is just not needed. Edit one simple configuration file, then type “ripit” at the command prompt and that’s almost all there is to it. There would be some overhead in using Ripit, as it checks the freedb.org database for each album’s details, but this should be minimal.

Grabbing the nearest un-ripped CD from the shelf, our test file will be U2’s Pride (In the Name of Love) from their Best of 1980-1990 album. This song is 3 minutes and 50 seconds long.

Our exact Ripit command was:

>time ripit 01

“time” reports the elapsed time of the whole process, while “01” tells Ripit to rip just the first track on the CD.

The test machine is a reasonably old Dell Inspiron 6400, which contains an Intel Core 2 T5500 CPU @ 1.66GHz and 1GB of RAM.

Here are our results for the nine compression levels (0–8) available in FLAC. If no compression level is specified, 5 is the default.
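For reference, Ripit is just driving the flac encoder underneath, and the compression level maps directly onto a flac command-line switch, so the same comparison could be run against an existing WAV file without Ripit at all (the track filename below is just an example):

>time flac -8 --verify track01.wav

Levels 0 (fastest, least compression) through 8 (slowest, most compression) are selected with -0 to -8, and --verify decodes the result on the fly to confirm it matches the input.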

Compression Quality: 0
Time: 0m59.309s
Size: 30261367 bytes (28.86MB)

Compression Quality: 1
Time: 1m1.518s
Size: 29643288 bytes (28.27MB)

Compression Quality: 2
Time: 1m0.324s
Size: 29631732 bytes (28.26MB)

Compression Quality: 3
Time: 0m57.156s
Size: 28596473 bytes (27.27MB)

Compression Quality: 4
Time: 1m0.707s
Size: 27717767 bytes (26.43MB)

Compression Quality: 5
Time: 1m1.406s
Size: 27710285 bytes (26.43MB)

Compression Quality: 6
Time: 1m1.899s
Size: 27710119 bytes (26.43MB)

Compression Quality: 7
Time: 1m8.692s
Size: 27696835 bytes (26.41MB)

Compression Quality: 8
Time: 1m13.376s
Size: 27664197 bytes (26.38MB)

Between Compression Quality 0 and Compression Quality 8 there’s a difference of approximately 13.5 seconds and 2.5MB. This might not seem like very much, but let’s expand these figures to cover an entire CD.

Assuming all tracks are approximately the same length (3:50) and that there are 12 tracks on the average CD, we have the following figures:

13.5 seconds x 12 = 162 seconds (2 minutes 42 seconds)
2.5MB x 12 = 30MB.

Realistically though, you can see there’s a big jump in time between Compression Quality 6 and Compression Quality 7, while there’s not a lot of difference in time between Compression Quality 0 and Compression Quality 5 (ignoring Compression Quality 3’s time anomaly, which we can’t account for). There’s also not a lot of file size difference between Compression Quality 5 and Compression Quality 8.

Therefore, unless storage space is a really big issue, the average user is probably better off leaving the compression setting at the default (5) and saving almost 3 minutes per CD rip. Then again, on a newer machine this time difference is likely to be much smaller, so you may as well use Compression Quality 8 and save that little bit of space.

In the end, the compression settings in FLAC don’t make that much difference. Leaving them at the default is a pretty good choice, but setting them to the maximum of 8 will save some space without a major time impact.


Interview with Magic Lantern Creator

November 6, 2009

Several months ago we posted an article about the Magic Lantern firmware for the Canon 5D Mark II video DSLR. This open source software adds functionality to the 5D that Canon didn’t provide out of the box. There has been quite a lot of progress on Magic Lantern over the last few months. The latest release is version 0.1.6, but even since then further enhancements have been made, including Autoboot.

The original creator of Magic Lantern, Trammell Hudson, recently gave an interview, available on the Cinema5d website. Here are some short excerpts from Trammell’s responses:

4. What plans do you have for the new 5d firmware update? Can we expect anything beyond 24p/25p?

You would have to ask Canon about their plans…  I’ll update my code to work with their new firmware once it is available.  It would really please me if Canon incorporated all of the features from Magic Lantern into their firmware.

On my roadmap for upcoming Magic Lantern releases:

* 1080i HDMI output (still having technical problems)

* SMPTE timecode jamming

* Scripting

* USB control from the Impero remote follow-focus

* Waveforms and vector scope

* Autoboot (now available)

5. On your Wikia Page you describe the Magic Lantern as ” an enhancement atop of Canon’s firmware that makes your 5D Mark II into the 5D Mark Free” What exactly do you mean?

Most equipment is “closed” in that what you buy is what you get. Sure, you can put it on rails, add a follow focus and mattebox, but you can’t really change what is going on inside the box.  With Magic Lantern, however, the internals of the camera have been opened up so that it is possible to add new features that the manufacturer might not have ever imagined.

Read the full text of the interview over at Cinema5d.

A potentially useful enhancement to the Magic Lantern firmware would be the ability to change the codec used in the 5D Mark II. Currently, content is stored as H.264 at around 40Mbps. While this provides for some very nice high quality footage, it’d be nice if additional open source options were included, like Lagarith and Dirac Research. The Magic Lantern Wikia Discussion page has a few comments around this idea already.

H264 Video Encoding on Amazon’s EC2

October 28, 2009
Stream #0 recently started looking at Amazon’s EC2 computing offering. We created our first public AMI, based on Debian Squeeze, with FFmpeg and x264 pre-installed. Now that we can easily start instances with the necessary basics in place, it is time to compare the relative merits of the different instance sizes that Amazon offers.
EC2 Instances come in a variety of sizes, with different CPU and RAM capacities. We tested the 64-bit offerings, including the recently announced High-Memory Quadruple Extra Large instance.
These 64-bit instances are listed on the EC2 website in the following way:
Large Instance 7.5 GB of memory, 4 EC2 Compute Units (2 virtual cores with 2 EC2 Compute Units each), 850 GB of instance storage, 64-bit platform
Extra Large Instance 15 GB of memory, 8 EC2 Compute Units (4 virtual cores with 2 EC2 Compute Units each), 1690 GB of instance storage, 64-bit platform
High-CPU Extra Large Instance 7 GB of memory, 20 EC2 Compute Units (8 virtual cores with 2.5 EC2 Compute Units each), 1690 GB of instance storage, 64-bit platform
High-Memory Quadruple Extra Large Instance 68.4 GB of memory, 26 EC2 Compute Units (8 virtual cores with 3.25 EC2 Compute Units each), 1690 GB of instance storage, 64-bit platform
We’ll take a closer look at the in-depth specifications of each below.
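For each test we simply launched our public AMI at the relevant instance type. As a sketch of the sort of command involved, using Amazon’s EC2 API command-line tools (this assumes the tools are installed and configured with your credentials; the AMI ID and keypair name below are placeholders, not our actual AMI):

>ec2-run-instances ami-xxxxxxxx -t c1.xlarge -k my-keypair

Swapping the -t value between m1.large, m1.xlarge, c1.xlarge and m2.4xlarge is all that changes between test runs.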
Our test file was 5810 frames (a little over 4 minutes, 285MB) of the HD 1920x1080 MPEG-4 AVI version of Big Buck Bunny. The FFmpeg transcode converts this to H.264 using the following 2-pass command:
>ffmpeg -y -i big_buck_bunny_1080p_surround.avi -pass 1 -vcodec libx264 -vpre fastfirstpass -s 1920x1080 -b 2000k -bt 2000k -threads 0 -f mov -an /dev/null && ffmpeg -deinterlace -y -i big_buck_bunny_1080p_surround.avi -pass 2 -acodec libfaac -ab 128k -ac 2 -vcodec libx264 -vpre hq -s 1920x1080 -b 2000k -bt 2000k -threads 0 -f mov big_buck_bunny_1080p_stereo_x264.mov
Setting threads to 0 should mean that FFmpeg automatically takes advantage of all the CPU cores available on each EC2 instance.
FFmpeg revealed the following information about the transcode:
Input #0, avi, from 'big_buck_bunny_1080p_surround.avi':
Duration: 00:09:56.48, start: 0.000000, bitrate: 3968 kb/s
Stream #0.0: Video: mpeg4, yuv420p, 1920x1080 [PAR 1:1 DAR 16:9], 24 tbr, 24 tbn, 24 tbc
Stream #0.1: Audio: ac3, 48000 Hz, 5.1, s16, 448 kb/s
[libx264 @ 0x6620f0]using SAR=1/1
[libx264 @ 0x6620f0]using cpu capabilities: MMX2 SSE2Fast SSSE3 FastShuffle SSE4.1 Cache64
[libx264 @ 0x6620f0]profile High, level 4.0
Output #0, mov, to 'big_buck_bunny_1080p_stereo_x264.mov':
Stream #0.0: Video: libx264, yuv420p, 1920x1080 [PAR 1:1 DAR 16:9], q=10-51, pass 2, 2000 kb/s, 24 tbn, 24 tbc
Stream #0.1: Audio: aac, 48000 Hz, 2 channels, s16, 128 kb/s
Stream mapping:
Stream #0.0 -> #0.0
Stream #0.1 -> #0.1
Ignore the duration, as that’s read from the file header, and we only uploaded part of the overall file.
Now to look at how each EC2 instance performed.
m1.large
(Large Instance 7.5 GB of memory, 4 EC2 Compute Units)
Firstly, querying the machine capacity (cat /proc/cpuinfo) returns the following information:
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 23
model name : Intel(R) Xeon(R) CPU E5430 @ 2.66GHz
stepping : 6
cpu MHz : 2659.994
cache size : 6144 KB
physical id : 0
siblings : 1
core id : 0
cpu cores : 1
fpu : yes
fpu_exception : yes
cpuid level : 10
wp : yes
flags : fpu tsc msr pae mce cx8 apic mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall nx lm constant_tsc pni monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr dca lahf_lm
bogomips : 5322.41
clflush size : 64
cache_alignment : 64
address sizes : 38 bits physical, 48 bits virtual
power management:
There are 2 of these cores available, and RAM is confirmed as 7.5GB (free -g).
The FFmpeg transcode showed the following:
H264 1st Pass = 11fps – 18 fps, 5 minutes 30 seconds
H264 2nd Pass = 4-5fps, 18 minutes 38 seconds
Total Time: 24 minutes, 8 seconds
m1.xlarge
Extra Large Instance 15 GB of memory, 8 EC2 Compute Units
CPU Info:
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 23
model name : Intel(R) Xeon(R) CPU E5430 @ 2.66GHz
stepping : 10
cpu MHz : 2666.760
cache size : 6144 KB
physical id : 0
siblings : 1
core id : 0
cpu cores : 1
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu tsc msr pae mce cx8 apic mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall nx lm constant_tsc pni monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr dca lahf_lm
bogomips : 5336.15
clflush size : 64
cache_alignment : 64
address sizes : 38 bits physical, 48 bits virtual
power management:
There are 4 of these cores available, and RAM is confirmed at 15GB.
The FFmpeg transcode showed the following:
H264 1st Pass = 11fps – 14 fps, 5 minutes 30 seconds
H264 2nd Pass = 6-7fps, 14 minutes 19 seconds
Total Time: 19 minutes, 49 seconds
c1.xlarge
High-CPU Extra Large Instance 7 GB of memory, 20 EC2 Compute Units
CPU Info:
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 23
model name : Intel(R) Xeon(R) CPU E5410 @ 2.33GHz
stepping : 10
cpu MHz : 2333.414
cache size : 6144 KB
physical id : 0
siblings : 1
core id : 0
cpu cores : 1
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu tsc msr pae mce cx8 apic mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall nx lm constant_tsc pni monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr dca lahf_lm
bogomips : 4669.21
clflush size : 64
cache_alignment : 64
address sizes : 38 bits physical, 48 bits virtual
power management:
There are 8 of these cores available, and RAM is confirmed at 7GB.
The FFmpeg transcode showed the following:
H264 1st Pass = 24-29fps, 3 minutes 24 seconds
H264 2nd Pass = 11-13fps, 7 minutes 8 seconds
Total Time: 10 minutes, 32 seconds
m2.4xlarge
High-Memory Quadruple Extra Large Instance 68.4 GB of memory, 26 EC2 Compute Units
CPU Info:
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 26
model name : Intel(R) Xeon(R) CPU X5550 @ 2.67GHz
stepping : 5
cpu MHz : 2666.760
cache size : 8192 KB
physical id : 0
siblings : 1
core id : 0
cpu cores : 1
fpu : yes
fpu_exception : yes
cpuid level : 11
wp : yes
flags : fpu tsc msr pae mce cx8 apic mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall nx lm constant_tsc pni monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr dca popcnt lahf_lm
bogomips : 5338.09
clflush size : 64
cache_alignment : 64
address sizes : 40 bits physical, 48 bits virtual
power management:
There are 8 of these cores available, and RAM is confirmed at 68GB.
The FFmpeg transcode showed the following:
H264 1st Pass = 35-38fps, 2 minutes 47 seconds
H264 2nd Pass = 12-15fps, 6 minutes 30 seconds
Total Time: 9 minutes, 17 seconds
What can be revealed from these figures? As expected, the High-Memory Quadruple Extra Large Instance performed best, but not by much. Certainly all the additional RAM didn’t make much of an impact, and the time saving is probably really down to the slightly increased CPU specifications. Obviously, over a larger file set this time saving would be more evident.
Let’s look at which EC2 instance gives the best value for money for this test. Amazon charges per instance hour, as shown below:
m1.large: $0.40/hour
m1.xlarge: $0.80/hour
c1.xlarge: $0.80/hour
m2.4xlarge: $2.40/hour
These prices are in US dollars for a US-based instance (European instances are slightly more expensive). Amazon has also revealed that a price reduction will come into effect from November 1st 2009.
Looking at the time taken to transcode our test file, on each instance, reveals the following:
m1.large
Total Time: 24 minutes, 8 seconds
Total Cost: $0.16 ((($0.40/60)/60) x 1448 seconds)
Cost per GB: $0.57 ((1024MB/285MB) x $0.16)
m1.xlarge
Total Time: 19 minutes, 49 seconds
Total Cost: $0.26 ((($0.80/60)/60) x 1189 seconds)
Cost per GB: $0.93 ((1024MB/285MB) x $0.26)
c1.xlarge
Total Time: 10 minutes, 32 seconds
Total Cost: $0.14 ((($0.80/60)/60) x 632 seconds)
Cost per GB: $0.50 ((1024MB/285MB) x $0.14)
m2.4xlarge
Total Time: 9 minutes, 17 seconds
Total Cost: $0.37 ((($2.40/60)/60) x 557 seconds)
Cost per GB: $1.33 ((1024MB/285MB) x $0.37)
Clearly the c1.xlarge instance represents the best value for money, although I was surprised how close behind the m1.large costs were. The additional RAM and slightly better CPU specifications of the m2.4xlarge instance do not outweigh its much more expensive hourly cost, at least when it comes to video transcoding.
A typical HD file used for broadcast or high-end post-production purposes is around 85GB for 60 minutes (DNxHD at 185Mbps). Obviously the time taken to transcode such a file to H.264 at 2Mbps could vary from the actual source content we used, but scaling the c1.xlarge figures above (roughly $0.50 and 38 minutes per GB) suggests it would cost $42.50 and take approximately 53.62 hours!
Taking into account that these figures may vary for different input and output files, the above should represent a worst-case scenario. For example, I would expect an SD MPEG-2 50Mbps file to take proportionally much less effort to transcode than a DNxHD 185Mbps HD file. Only further testing will tell.
Is Amazon’s EC2 offering worth considering for high-end video file transcoding? Compared with the prices charged by post-production facilities it is certainly a lot cheaper, as long as you have time to wait for the end result. That, however, is the beauty of cloud-based computing power: if you’re in a hurry, just scale up. Keep in mind, though, that content still needs to be uploaded to EC2 before transcoding can begin, which takes additional time and adds further cost.

How Firefox Is Pushing Open Video Onto the Web

June 20, 2009

There’s a great article called How Firefox Is Pushing Open Video Onto the Web by Michael Calore over at Webmonkey, dealing with the HTML 5 <video> tag and Firefox’s native Ogg Theora support. The piece outlines the technical details of the <video> tag and includes an interview with Mozilla’s director of Firefox, Mike Beltzner, and director of platform engineering, Damon Sicore.

An excerpt from the interview:

Webmonkey: How do you see these factors – the HTML 5 video tag, putting the Ogg codecs right into the browser, presentation techniques that mimic the plug-in player experience – affecting video on the web? What’s it going to change in six months? Or six years?

Beltzner: In six months, you’re going to see more sites like DailyMotion doing things where they detect that the browser supports Ogg and the video tag, and in that case, they’re going to give those users an Ogg-and-video-tag experience.

I think you’ll see content sites doing this because they’ll have the ability to re-encode their entire video libraries without having to pay any licensing fees. The Ogg Theora encoders are completely license-free and patent-proof. They don’t need to worry about which player you’ve got. They also don’t need to worry about which hardware you’ve got. Ogg Theora will run on Windows, Mac and Linux, or any embedded device or mobile device built on the Linux platform.

Here’s a beta example page from DailyMotion demonstrating use of the HTML 5 <video> tag. If you have Firefox 3.5 installed, or a reasonably new version of WebKit/Safari with the XiphQT component installed, you should have in-browser video playback – Ogg Theora and no Flash player needed.

YouTube’s demonstration page here.


HTML 5, Codecs and the Video Tag

June 20, 2009

Spending the last two days at the Open Video Conference has been a great experience: lots of interesting speakers, and I’ve learned a few things. Perhaps I’ll write more in general later; however, while still fresh in my mind, it’s worth mentioning today’s sessions around royalty-free codecs and the HTML 5 <video> tag.

The main focus of the Royalty Free Codecs session seemed to be Ogg Theora. Also present, though, were Sun, speaking about their new Open Media Stack, and David Schleef, representing his work on the Schroedinger Dirac library. I would have loved to hear more about what was happening with Dirac, but the crowd wanted Theora news.

A short demonstration on the projector screen showed H.263/H.264 content versus the same content in Ogg Theora at various bit rates, the highest less than 500Kbps. The results, from Theora’s perspective, were very good: visually I couldn’t pick out any differences on the large screen. I would have liked to see the demonstration done at higher bitrates (greater than 1Mbps), though. Not the one used today, but a similar demonstration is available here.

Sun did not do themselves any favours at this conference. A session yesterday gave them time to discuss the process they undertook to ensure there was no IP encumbrance in their new codec and Open Media Stack, but right at the end the key revelation was that they’re unable to open source their work.

David did not have much of a chance to talk in depth about Dirac, and I was disappointed not to have gained a better understanding of its current development status and the level of community input around it. He did make the point that the BBC is using Dirac internally, which is true, but only to a very small extent. In non-linear editing environments, DVCProHD, AVC-I 100 and ProRes are still the codecs of choice; in my opinion this is due to the lack of tools available for working with Dirac. Dirac tool development needs a great leap forward if the codec is to gain any significant traction.

The next session had representatives from all major browsers (Firefox, Webkit and Opera), except IE, present to talk about HTML 5 and the new <video> tag.

Firstly, I was particularly interested in the W3C draft Media Fragments specification. Amongst other things, this will allow playback of just a segment of a video, based on a time specification in seconds (for example, appending #t=10,20 to a video URL to play only seconds 10 through 20). While not currently possible, if this could be extended to read an embedded timecode track and seek in a frame-accurate manner, that would be truly powerful in an open standard.

With Safari on Mac, the <video> tag can be used to play back any video format for which the user has the relevant codec and QuickTime component installed; thus we have Theora support through the XiphQT component. In the latest versions of iMovie, QuickTime Pro and Final Cut Pro, users can now also choose to export or render in Ogg Theora. If only the Dirac QT component were ready.

Metavid developers also demonstrated a neat JavaScript library workaround that covers IE’s lack of support for the <video> tag. Full details are available on the Metavid website, as well as a demonstration of the code in action. Even if your browser doesn’t currently support the HTML 5 <video> element, this script will take care of it.

The cross fade is particularly interesting. Do we no longer need to finish clips in a non-linear editor? Can we now perform hard cuts based on an edit decision list and let the browser deal with the fading or finishing element of the job?

Hopefully there’s some exciting times ahead for open source, royalty free video codecs and ubiquity of embedded video on the Web. 

BBC R&DTV Episode 2 Released

June 20, 2009

We’re only about two weeks late in noticing that the BBC has released the second episode in their R&DTV series. Again they’re providing a whole bunch of different video codecs, including Ogg Theora, but they’re still not offering files encoded in their own Dirac codec. More information is available on the main page or the BBC Backstage blog, but a wider selection of files can also be found directly on the FTP site, where both 30-minute and 5-minute versions are available, as well as an entire asset bundle with rushes.

This episode features interviews with David Kirby on the BBC’s Ingex project, Matt Biddulph, CTO of Dopplr, and Jason Calacanis, CEO of Mahalo.com.

The BBC has released this content under a Creative Commons attribution licence, allowing everyone to remix as they see fit, providing an original BBC credit is maintained.

Our post regarding Episode 1 of R&DTV goes into some more details regarding the technical details of the available files.
