So with all the talk about Log vs Lin and what can and can't be done digitally, let's get back to the basics. Let's talk about bit depth in color channels.
That's right, bit depth. I am constantly astounded by how many people here or on the Cow fail to grasp the need for more bits - and by the misunderstanding about where and when it matters.
Let me start with a couple of basic ideas.
Color spaces are identified in the video world as either RGB or as Y’CbCr.
Bit depth is determined by how many "bits" - ones and zeros - are needed to identify a specific color.
So the per-channel information looks like this:
1 bit = 2 colors (black or white)
2 bits = 4 colors (first grayscale)
3 bits = 8 colors
4 bits = 16 colors
8 bits = 256 colors
10 bits = 1,024 colors
12 bits = 4,096 colors
16 bits = 65,536 colors
Do not forget that these are the numbers of colors (or levels of grey) that can be reproduced in EACH of the 3 color channels of an image, and they also refer to the uniquely identifiable luminance levels that can be reproduced.
So that defines the channels, but what about the overall number of colors available?
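To make the math behind that list explicit, here is a minimal Python sketch (the function name is mine, not from any video API): levels per channel are simply 2 raised to the bit depth.

```python
def levels_per_channel(bits):
    """Distinct values one color channel can hold at a given bit depth."""
    return 2 ** bits

# Reproduces the per-channel list above.
for bits in (1, 2, 3, 4, 8, 10, 12, 16):
    print(f"{bits:2d} bits = {levels_per_channel(bits):,} levels")
```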
Bit depth   Levels per channel   Total colors resolved
8b          256                  16,777,216
10b         1,024                1,073,741,824
12b         4,096                68,719,476,736
16b         65,536               281,474,976,710,656
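The totals in that table come from cubing the per-channel count, since each of the 3 color channels contributes its full range independently. A quick sketch (function name is mine):

```python
def total_colors(bits_per_channel, channels=3):
    # Each channel holds 2**bits levels, so the full palette
    # is (2**bits) raised to the number of channels.
    return (2 ** bits_per_channel) ** channels

for b in (8, 10, 12, 16):
    print(f"{b:2d}b per channel -> {total_colors(b):,} total colors")
```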
The bigger issue in my mind is that not everybody uses the same terminology when talking about bit depth. Video uses one methodology and print uses another, so this can end up being very confusing if one does not use the proper terminology for each type of workflow.
Print uses terms like "Millions of colors" to describe its bit depth, and it often counts the alpha channel as part of the total (a "32-bit" file is 8+8+8+8 bits across the four channels), whereas video traditionally uses the bit level of one single channel to define its meaning.
So that very same file in the video world is considered 8bit.
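To illustrate the two naming conventions, here is a tiny sketch (the function name is mine, purely for illustration): print quotes the bits across all channels combined, video quotes one channel's depth.

```python
def video_bit_depth(print_total_bits, channels=4):
    # Print counts all channels together (e.g. "32-bit" = 8+8+8+8 RGBA);
    # video quotes the depth of a single channel.
    return print_total_bits // channels

# The same file: "32-bit" in print terminology, 8-bit in video terminology.
print(video_bit_depth(32))  # -> 8
```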
While higher bit depth is preferred any time there is image processing going on, say when shooting for VFX or when using greenscreen, the reality is that there is very little video content that is ever captured at higher than 10-bit.
The vast majority of cameras on the market cannot capture more than 8 bits of info when recording internally. That includes virtually all tape-based cameras (excluding the SRW 9000, which shoots HDCAM SR tape), all DV- and HDV-style cameras, and P2 (while some P2 cameras can record 10-bit as AVC-I, most do not). Even Digital Betacam, considered the first 10-bit system, did not record more than 8 bits in-camera; the 10-bit path was only available in post.
The digital confusion really sets in in the DSLR world, where all of the new people shooting video cannot seem to comprehend that the video recorded by these cameras is NOT at anywhere near the level of color fidelity or resolution that the Camera Raw format delivers when capturing still images in a DSLR.
So I offer some guidelines that I use when speaking about bit depth, and while not all of them are completely and fully accurate, they are for the most part guides to keep people on the same page when discussing bit depth.
1) Video delivery is either 8-bit (256 levels) or 10-bit (1024 levels), without alpha channels or masking info. Only a very few cameras can actually record 16 bits of data, and there are even fewer ways to record it.
- You can create and handle 16-bit material via raw / native codec compression (as recorded in the Phantom camera), or as DPX or TIFF still frames captured at the camera using high-end 3rd-party recorders. Working at this level in FCP will force color depth conformity by turning off the RT Extreme engine.
2) There is NO color subsampling in RGB - it's kinda hard to use less color when all 3 channels are needed to record full color data. Color subsampling is only done when working in the Y'CbCr video color space, which allows the color (chroma) data to be kept separate from luminance.
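To show why subsampling only makes sense once chroma is separated from luma, here is a minimal sketch (function name and sample values are mine): 4:2:2 keeps every other chroma sample horizontally while the Y' plane is left untouched.

```python
def subsample_422(chroma_row):
    # 4:2:2 keeps every other chroma (Cb or Cr) sample horizontally;
    # the luma (Y') row is not touched at all. In RGB there is no
    # separate chroma plane to thin out, so this trick is impossible.
    return chroma_row[::2]

cb = [100, 102, 104, 106, 108, 110, 112, 114]  # hypothetical Cb samples
print(subsample_422(cb))  # half the chroma samples survive
```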
3) Alpha channels are NOT included as part of the bit depth calculations - mainly because in the video space the alpha channel can be a different bit depth than the video file, as in Apple's ProRes 4444 codec, where video processing can support up to 12 bits per channel for video but up to 16 bits for the alpha channel.
4) Realtime compression schemes for playback are 8-bit - speed is what RT is all about, so bit depth is a secondary concern. Apple, Avid and Adobe all force 8-bit to achieve more responsive playback in the timeline; using those RT files will limit depth to 8-bit.
5) Remember that 90% of the world's viewing is done at 8-bit.
That would be almost anything that uses compression for delivery - Blu-ray discs, OTA transmission, Flash, even most H.264 encoding (especially those iPod, iPhone and Apple TV settings in Compressor) is limited to 8-bit, as are the vast majority of displays. Televisions will stay that way for a while longer, while computer displays are just starting to top out at 10-bit.
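What that 8-bit delivery ceiling costs you can be shown with a minimal sketch (function name is mine): dropping 10-bit code values to 8-bit collapses four adjacent levels into one, which is the root of visible banding in smooth gradients.

```python
def to_8bit(value_10bit):
    # Going from 10-bit (0-1023) to 8-bit (0-255) discards the two
    # low-order bits, so four adjacent 10-bit levels map onto a
    # single 8-bit level.
    return value_10bit >> 2

# Four distinct 10-bit code values collapse into one 8-bit value:
print({to_8bit(v) for v in (512, 513, 514, 515)})  # -> {128}
```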
To get a better handle on Bit Depth and how it pertains to video check out these sources of information:
On Bit Depth and Color Spaces:
On the Specs: