Description
Is this really 12-bit video?
How can you tell if you’re really producing 12-bit video? Aside from a handful of DCI projectors, few displays can actually render 12-bit color. And even if you found a proper 12-bit display, it would be challenging to visually tell the difference between 10-bit and 12-bit video.
With the introduction of DNxHR 444 and the development of new mastering workflows, I wanted to analyse the video content of my files to ensure I was not losing video information in my pipeline. Also, since DNxHR is still a new codec, not all software has been updated to read its bit-depth metadata correctly, which can leave you unsure of what you’re actually working with.
With this 16-bit TIFF Test Pattern, you can test how different applications affect bit depth and which transformations are being applied. Here’s how to use the chart:
Import the TIFF into your video software, export with your codec of choice, and then read the numerical RGB values for all 4 sections of the chart (or sample them programmatically, as sketched after the list below).
– In Resolve, you can use “Show RGB Picker Values”, but the values are limited to 8-bit or 10-bit.
– In After Effects, open the “Info” window, where you can set the cursor to display RGB values at 8, 10, or 16 bits per channel.
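If your application’s picker is limited, you can also decode one frame of your export back to a 16-bit image and sample the columns directly. Here’s a minimal Python sketch; the file name and the assumption that the 4 columns are equal-width and fill the frame are mine, so adjust them to your setup.

```python
# Minimal sketch: sample one pixel from the centre of each chart column.
# Assumes a 16-bit RGB TIFF decoded from your export ("frame.tif") and
# 4 equal-width columns filling the frame; adjust both to your setup.
import tifffile  # pip install tifffile

frame = tifffile.imread("frame.tif")   # numpy array, shape (height, width, channels)
height, width = frame.shape[:2]

for i in range(4):
    x = width * (2 * i + 1) // 8       # horizontal centre of column i
    y = height // 2                    # vertical centre of the frame
    print(f"column {i + 1}: RGB = {frame[y, x][:3].tolist()}")
```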
The chart consists of 4 very similar grey values that remain numerically distinct when processed at 12-bit or 16-bit, but collapse into each other when quantized to 10-bit or 8-bit.
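To see why this works, here’s the underlying quantization arithmetic as a short Python sketch; the four 16-bit grey levels are hypothetical stand-ins spaced 32 code values apart, not the chart’s actual values.

```python
# Four near-identical 16-bit grey levels (hypothetical stand-ins for the
# chart's actual values), spaced 32 code values apart.
levels = [32768, 32800, 32832, 32864]

def requantize(value, bits):
    """Scale a 16-bit code value to the given bit depth, full range."""
    return round(value * ((1 << bits) - 1) / 65535)

for bits in (12, 10, 8):
    codes = [requantize(v, bits) for v in levels]
    print(f"{bits:2d}-bit: {codes} -> {len(set(codes))} distinct")
# 12-bit keeps all 4 values distinct, at 10-bit two collapse into one,
# and at 8-bit all four land on the same code value.
```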
In a 12-bit, 16-bit, or 32-bit float working space, here’s what to look for (a small helper sketch follows this list):
– If your video was processed at 16-bit or 12-bit, every column will have a different RGB value.
– If your video was processed at 10-bit, 2 columns will have the same RGB value.
– If your video was processed at 8-bit, all columns will have the same RGB value.
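The same check can be automated. The helper below is a sketch that takes the four sampled values of one channel (e.g. the R value of each column, from the earlier snippet) and maps the count of distinct values onto the rules above.

```python
def diagnose(codes):
    """Infer the effective processing depth from the 4 column values."""
    distinct = len(set(codes))
    if distinct == 4:
        return "processed at 12-bit or deeper"
    if distinct == 3:
        return "processed at 10-bit (two columns collapsed)"
    if distinct == 1:
        return "processed at 8-bit (all columns identical)"
    return "ambiguous result; re-sample and check the other channels"

print(diagnose([2048, 2050, 2052, 2054]))  # -> processed at 12-bit or deeper
```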
Additionally, you can check for gamma shifts by comparing the processed numerical RGB values to those of the original TIFF file.
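One quick way to do that comparison, again a sketch with placeholder file names and assuming both files decode to RGB images of the same size, is to subtract the source pattern from the processed frame: a uniform offset suggests a level shift, while a level-dependent offset (largest in the midtones) suggests a gamma shift.

```python
import tifffile

source = tifffile.imread("test_pattern.tif").astype(int)  # original 16-bit TIFF
processed = tifffile.imread("frame.tif").astype(int)      # decoded export, same size

# Signed per-pixel, per-channel offsets (int cast avoids unsigned wraparound).
diff = processed[..., :3] - source[..., :3]
print("mean offset per channel:", diff.mean(axis=(0, 1)))
```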