Chances are you have already heard the buzz surrounding High Dynamic Range (HDR) and how it’s going to take your viewing experience to the next level. However, since HDR is still a fairly new technology, some people are still not quite clear about how it actually works. So what exactly is HDR?
HDR is a way of capturing, processing, and reproducing video content or an image so that detail is increased in both the shadows and highlights of a scene; for the purposes of this article, we will focus on HDR video content. To understand high dynamic range, we must first understand how dynamic range works. Dynamic range is the range of information between the lightest and darkest parts of an image, in other words, its range of luminance.
Currently, SDR (Standard Dynamic Range) is the standard for video and cinema, and it can represent only a fraction of the dynamic range that HDR can display. To put this in simpler terms, HDR is a technology that preserves detail in scenes where the contrast ratio of the monitor would otherwise be limiting. Dynamic range is also measured in stops, much like the aperture of a camera.
On a typical SDR display, images will have a dynamic range of about 6 stops. HDR content, on the other hand, can almost triple that dynamic range to approximately 17.6 stops. This is achieved by adjusting the gamma and bit depth used.
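To put those stop figures in perspective, each additional stop doubles the ratio between the brightest and darkest levels a display can represent. The short sketch below (plain Python, using only the 6-stop and 17.6-stop figures quoted above) converts stops into an approximate contrast ratio.

```python
def stops_to_contrast_ratio(stops: float) -> float:
    """Each stop doubles the light level, so the contrast ratio is 2 ** stops."""
    return 2 ** stops

sdr_stops = 6.0    # typical SDR display, per the figure above
hdr_stops = 17.6   # approximate HDR dynamic range, per the figure above

print(f"SDR: roughly {stops_to_contrast_ratio(sdr_stops):,.0f}:1")  # roughly 64:1
print(f"HDR: roughly {stops_to_contrast_ratio(hdr_stops):,.0f}:1")  # roughly 200,000:1
```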
When a monitor has a low contrast ratio or doesn’t operate with HDR, it is common to see color tones being “clipped” at both the light and dark ends of the spectrum. When detail in an image is clipped, it means that detail in that part of the image was not recorded and therefore cannot be seen. With standard dynamic range, dark grey tones in a dark scene may be clipped to black, while in bright scenes some colors and details may be clipped to white, rendering all information in those clipped parts of the scene useless.
When a monitor is trying to reproduce a scene with a wide range of luminance, this problem becomes even more pronounced. HDR calculates the amount of light in a given scene and uses that information to preserve details within the image, even in scenes with large variations in brightness, for more realistic-looking images. In other words, bright parts of a scene can be very bright, dark parts of a scene can be very dark, and details will still be visible in both.
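To illustrate what clipping looks like in practice, here is a minimal sketch in Python with NumPy; the luminance values and the display range are made up for illustration and are not figures from this article. Everything outside the display’s range collapses to the same black or white level, which is exactly the loss of shadow and highlight detail described above.

```python
import numpy as np

# Hypothetical scene luminances in nits, from deep shadow to a bright highlight
scene = np.array([0.01, 0.05, 0.5, 5.0, 50.0, 500.0, 2000.0])

# A narrow, SDR-like display range (illustrative values only)
display_black, display_white = 0.1, 100.0

# Anything darker than display_black or brighter than display_white is clipped
clipped = np.clip(scene, display_black, display_white)

print(clipped)  # values: 0.1, 0.1, 0.5, 5.0, 50.0, 100.0, 100.0
# The two darkest values collapse to the same black and the two brightest to the
# same white, so the detail that distinguished them can no longer be seen.
```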
When it comes to HDR, there are two prominent standards in use today: HDR10 and Dolby Vision.
HDR10
HDR10 is the more easily adopted standard, used by manufacturers as a way to avoid submitting to Dolby’s standards and fees. HDR10 uses 10-bit color and supports content mastered at up to 1,000 nits of brightness. It has established itself as the default standard for 4K UHD Blu-ray discs and has also been used by Sony and Microsoft in the PlayStation 4 and the Xbox One S.
Dolby Vision
The other major HDR format, Dolby Vision, requires monitors to be specifically designed with a Dolby Vision hardware chip, for which Dolby receives licensing fees. Dolby Vision uses 12-bit color and a 10,000-nit brightness limit; its color gamut and brightness level exceed the limits of what displays being made today can achieve. The barrier to entry for display manufacturers to incorporate Dolby Vision remains high due to its specific hardware requirements.
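For a rough sense of what the bit-depth difference means, the sketch below (plain Python) counts the distinct code values each bit depth can encode per color channel. The 10-bit and 12-bit figures come from the HDR10 and Dolby Vision descriptions above; the 8-bit figure for typical SDR content is a commonly cited number and is an assumption not stated in this article.

```python
# Distinct code values per color channel at each bit depth.
# 8-bit for SDR is a common assumption; 10-bit and 12-bit are the
# HDR10 and Dolby Vision figures mentioned above.
formats = [
    ("SDR (8-bit)", 8),
    ("HDR10 (10-bit)", 10),
    ("Dolby Vision (12-bit)", 12),
]

for name, bits in formats:
    print(f"{name}: {2 ** bits} levels per channel")

# SDR (8-bit): 256 levels per channel
# HDR10 (10-bit): 1024 levels per channel
# Dolby Vision (12-bit): 4096 levels per channel
```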
ViewSonic VP Series Monitors with HDR10
ViewSonic has incorporated HDR10 into its VP series of color-accurate professional monitors with the VP3268 and VP2785-4K models. Both the VP3268 and VP2785-4K combine 4K sharpness and clarity with the high dynamic range of luminosity attainable with HDR10. HDR technology is a major leap forward for the general viewing experience. Users looking to maximize detail and enhance their viewing experience can count on the VP3268 and VP2785-4K to deliver more true-to-life HDR images.