OpenEXR Codecs explained

Making sense of the codecs in OpenEXR. What do they do and what are the best choices?

In this week's post we are going to look at the internals of the OpenEXR format from the perspective of an artist. The format itself is very impressive at a technical level, but we'll focus on the features that matter to you as an artist and make some recommendations for how to get the most out of the format in Nuke (and other programs that support OpenEXR!).

First, a little background. OpenEXR is a multi-channel, multi-layer image file format capable of storing floating point data. It was developed at Industrial Light & Magic in 1999 and subsequently released as Free Open Source Software under a BSD-like license in 2003. It is notable for its support of “HALF” precision (16-bit) floating point data, which ILM pioneered. HALF is a tremendously clever solution for storing floating point data with sufficient precision for modern CG, film and video images while keeping file sizes under control. If you want to learn more about the math behind it and why it was so clever to match up a micro-float format to images, you can read more here. OpenEXR supports several other features that are useful for computer graphics, like tiles, mip-maps, deep pixels, multi-view (mostly used for stereo) and separate “data window” and “display window”, which makes it possible to store a subset of the raster or a raster that is larger than the display window.
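If you want to poke at the micro-float format yourself, Python's standard `struct` module can pack IEEE half-precision values directly (format code `'e'`), which makes it easy to see the 1-bit sign / 5-bit exponent / 10-bit mantissa layout. The values here are just illustrative examples:

```python
import struct

def half_bits(value: float) -> str:
    """Round-trip a float through 16-bit HALF and show its bit pattern."""
    raw = struct.pack('<e', value)       # pack as IEEE half precision
    (bits,) = struct.unpack('<H', raw)   # reinterpret the two bytes as an int
    return f'{bits:016b}'                # 1 sign, 5 exponent, 10 mantissa bits

print(half_bits(1.0))    # 0011110000000000 -> sign 0, exponent at bias 15, zero mantissa
print(half_bits(-2.0))   # 1100000000000000 -> sign bit set, exponent one step higher
```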

OpenEXR supports 32-bit float and 32-bit unsigned integer bit depths in addition to 16-bit HALF.

As you can see, there is a lot to know about OpenEXR. Rather than get too far into the weeds, we’ll focus on the features you are most likely to use day to day in Nuke and explain the why behind the choices for each.

As mentioned earlier, OpenEXR supports three bit depths. (Nuke labels this “datatype” and only exposes two of them in the Write node.)

  • 16-bit floating-point (half)

  • 32-bit floating-point

  • 32-bit unsigned integer (not exposed directly in the Nuke UI)

You should always use HALF for image data unless you have a very good reason not to. You will always get smaller file sizes with 16-bit HALF, as the data is half the size to begin with, and HALF is more than sufficient for image data. You will gain nothing but extra file size by using 32-bit float for the visual portion of images. However, 32-bit float will often be necessary for non-viewable utility data channels like Z buffers, STMaps, Smart Vectors, etc., which require the extra precision.
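You can see why in a couple of lines of Python: around 1.0, where typical image values live, HALF resolves steps of about 0.001, which is ample for display, but at depth-buffer magnitudes the spacing between representable values becomes coarse. A quick sketch using `struct`'s half-precision format code:

```python
import struct

def to_half(value: float) -> float:
    """Round a Python float through 16-bit HALF precision and back."""
    return struct.unpack('<e', struct.pack('<e', value))[0]

# Image-range values survive nearly untouched...
print(to_half(0.1234))   # ~0.1234, error far below anything visible

# ...but large Z-depth values lose precision badly.
print(to_half(1000.1))   # 1000.0 -- the 0.1 is gone
print(to_half(5001.0))   # 5000.0 -- spacing near 5000 is 4.0
```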

Here are the codecs (compression types) in OpenEXR and their relative strengths and weaknesses.

None: disables all compression. (Not recommended.)

Run Length Encoding (RLE) is a very basic form of compression, comparable to that used by the ancient Targa format. It is fast but not very efficient for anything but large areas of perfectly flat color. It might be suitable for, say, alpha channels, but any of the other lossless options will still provide better compression.
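The idea behind run-length encoding is simple enough to sketch in a few lines of Python. This is a toy illustration of the principle, not OpenEXR's actual encoder, but it shows why RLE loves flat alpha channels and gives up on grain:

```python
def rle_encode(data: bytes) -> list[tuple[int, int]]:
    """Collapse runs of identical bytes into (count, value) pairs."""
    runs = []
    i = 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i]:
            j += 1
        runs.append((j - i, data[i]))
        i = j
    return runs

flat  = bytes(1000)              # like an empty alpha channel: all zeros
noisy = bytes(range(256)) * 4    # grain-like: no neighbor repeats

print(len(rle_encode(flat)))     # 1 run  -> compresses brilliantly
print(len(rle_encode(noisy)))    # 1024 runs -> no savings at all
```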

Zip (1 scanline): LOSSLESS deflate compression (the same algorithm used by zip files) with a zlib wrapper, applied per scanline. It has a little more overhead than the following variant but is still good for CG imagery. Nuke used to prefer this variant, but that is no longer a significant factor.

Zip (16 scanlines): deflate compression applied to groups of 16 scanlines. This is a good option for rendered images that do not have film grain applied.
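Both ZIP variants use the same deflate algorithm as everyday zip files, which Python exposes through the standard `zlib` module. A small sketch with synthetic data shows why it does so well on clean, grain-free renders:

```python
import zlib

# A smooth horizontal gradient, like a clean CG render with no grain.
scanline = bytes(x * 256 // 1920 for x in range(1920))
block = scanline * 16                        # a group of 16 scanlines, as in Zip (16)

compressed = zlib.compress(block, level=6)
print(len(block), '->', len(compressed))     # large reduction on smooth, repetitive data
```

Add per-pixel grain to that gradient and the reduction shrinks dramatically, which is exactly the behavior described above.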

PIZ (LOSSLESS) uses a combined wavelet / Huffman coding scheme. PIZ handles grainy images better than ZIP and will often surpass any of the other lossless options under grainy conditions. Depending on how powerful your CPU is, it can be a bit slower to encode/decode, but it's often worth the price.
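PIZ's actual transform is more sophisticated than this, but the wavelet idea can be sketched as a toy: replace pairs of neighboring pixels with their average and their difference. Smooth regions turn into long runs of near-zero differences, which the entropy-coding stage (Huffman coding in PIZ) then squeezes down:

```python
def haar_step(vals: list[int]) -> tuple[list[int], list[int]]:
    """One level of a Haar-like transform: pairs -> (averages, differences)."""
    avgs  = [(a + b) // 2 for a, b in zip(vals[::2], vals[1::2])]
    diffs = [a - b        for a, b in zip(vals[::2], vals[1::2])]
    return avgs, diffs

ramp = [x // 3 for x in range(2048)]   # a smooth gradient, like a clean render
avgs, diffs = haar_step(ramp)
print(set(diffs))                      # only 0 and -1: trivially compressible
```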

PXR24 (24-bit data conversion, then deflate compression): this form of compression was contributed by Pixar Animation Studios. It converts 32-bit floats to 24 bits, then uses deflate compression. It is lossless for HALF and 32-bit integer data and slightly lossy for 32-bit float data. According to the original OpenEXR documentation, “This compression method works well for depth buffers and similar images, where the possible range of values is very large, but where full 32-bit floating-point accuracy is not necessary. Rounding improves compression significantly by eliminating the pixels' 8 least significant bits, which tend to be very noisy, and difficult to compress.” (Not exposed as an option in the Nuke Write node, but Nuke will read it just fine.)
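The 32-to-24-bit step is easy to sketch in Python. Note this toy truncates the mantissa's low byte for simplicity, whereas the real compressor rounds, and the Z-like values here are made up purely for illustration:

```python
import struct
import zlib

def pxr24_truncate(value: float) -> float:
    """Drop the 8 least-significant mantissa bits of a 32-bit float."""
    (bits,) = struct.unpack('<I', struct.pack('<f', value))
    (out,)  = struct.unpack('<f', struct.pack('<I', bits & 0xFFFFFF00))
    return out

depth   = [1.0 + i * 0.0001 for i in range(4096)]    # stand-in for depth samples
raw     = b''.join(struct.pack('<f', v) for v in depth)
reduced = b''.join(struct.pack('<f', pxr24_truncate(v)) for v in depth)

# Zeroing the noisy low byte makes deflate's job much easier.
print(len(zlib.compress(raw)), '>', len(zlib.compress(reduced)))
```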

B44: this form of compression is lossy for HALF data and stores 32-bit data uncompressed. It maintains a fixed compression ratio of either 2.28:1 or 4.57:1 and compresses uniformly regardless of image content. It was designed for real-time playback systems, and I would not recommend it for general use.

B44A: an extension to B44 in which areas of flat color, such as alpha channels, are further compressed. Also not recommended for general use.

DWAA is a JPEG-like lossy compression format contributed by DreamWorks Animation. It compresses 32 scanlines together. While lossy, it is capable of quite high quality. The compression setting can be a bit confusing if you are used to JPEG: with DWA compression, higher settings mean higher compression (and therefore lower quality).
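Real DWA works on DCT blocks much like JPEG, but the level-versus-quality relationship can be illustrated with a simple quantize-then-deflate toy. The mapping of level to quantization step here is hypothetical, invented for this sketch, and is not the actual DWA math:

```python
import random
import struct
import zlib

def toy_lossy_size(samples: list[float], level: float) -> int:
    """Coarser quantization at higher 'level', then deflate: a DWA-style tradeoff."""
    step = level / 10000.0                        # hypothetical level-to-step mapping
    quantized = [round(s / step) for s in samples]
    packed = b''.join(struct.pack('<h', q) for q in quantized)
    return len(zlib.compress(packed))

rng = random.Random(0)                            # deterministic pseudo-grain
grain = [rng.random() for _ in range(10000)]      # stand-in for a grainy channel

print(toy_lossy_size(grain, 20))    # lower level: bigger result, less loss
print(toy_lossy_size(grain, 100))   # higher level: smaller result, more loss
```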

DWAB: the same as DWAA, but compresses 256 scanlines per block.

As a test, I compressed a production image with each codec to see what the file sizes would be. The plate was a grainy full HD image in 16-bit HALF. Here are the results.

As you can see, every codec offers more compression than “None”. As mentioned earlier, I recommend using some kind of compression at all times. You can see from the test that RLE is not a good choice for a grainy plate like the one I used, but it could be an option for mattes or CG images with very smooth gradients and flat areas of color. I personally would not bother with it, however, as you will always get better compression from ZIP in those same situations. My personal go-to is PIZ, as it is lossless and compact. I use DWAB for tests and in some cases even for intermediate pre-comps when I know I will be able to get back to the original. For example, I frequently use DWAB for pre-comps of degrained plates, as I will be restoring the grain to its original state at the end of my comp anyway (which also has the benefit of restoring any minor loss incurred from being compressed to DWAB).

The DWA codec has the highest compression, but keep in mind it is a lossy format. In my test I saved the image with three different compression levels. In the case of my test image, I found the default level of 45 imposed only a tiny bit of visual loss. The setting of 20 looked better: effectively visually lossless. At the higher setting of 100 I could see a softening of the grain in the image, though to see the loss at all I had to flip between the original and the compressed version. Even at 100 the amount of visual loss was still pretty low, and on a less grainy image it would be tough to spot.

In the case of my test image, the default setting of 45 seems like a good choice, but you can experiment to see what works for you.

One more tip: if your renderer supports writing out a tight bounding box, you should enable it. Nuke reads images with efficient bounding boxes very quickly. You should also consider writing efficient bounding boxes from Nuke. This is particularly useful for things like pre-comps of mattes, which have a lot of empty space around them. While it will slow down saving images a little, the “autocrop” feature in the Write node, in conjunction with judicious use of a manual crop, can produce very efficient pre-comps of mattes that load back into Nuke very quickly. As comps get large and complicated, every little bit of speed is welcome, so it's worth thinking about details like this as you build up your comp.

This last note is more specific to pipelines whose renderers don't support creating proper bounding boxes or can only produce tiled images. In some cases it can make sense to re-compress those images as a batch process so they load more quickly in Nuke once you have to work with them. I've discussed OpenImageIO and the command-line tools that ship with it in the past. The excellent oiiotool that ships with OpenImageIO can perform this “autocrop” from the command line while it recompresses the images, so there is no need to tie up a Nuke license for the processing. The relevant argument is “--autotrim”. oiiotool can be run as a post process after a render completes, so by the time a compositor needs the images they have already been optimized for use in Nuke. You could of course use a Nuke render license for the same process, but that seems like a bit of a waste to me when a piece of Free Open Source Software can handle such a simple task.

Happy Compositing!