Perfect photo. What is HDR+ and how to activate it on your smartphone


What is HDR

To fully understand how HDR+ works, you will first have to understand regular HDR.

The main problem with all smartphone cameras is the small size of the sensor (or rather, of its photosites) and, as a result, insufficient dynamic range coverage. To compensate for this shortcoming, the HDR (High Dynamic Range) algorithm was developed. It works as follows: the camera takes a frame at the standard exposure level for the scene, then an underexposed frame in which only the overexposed areas of the original image remain clearly visible, and then an overexposed frame in which only the darkened parts of the original are visible while everything else is blown out. The images are then aligned and combined using special algorithms whose quality depends on the camera software vendor. The result is a photo with good detail in both the shadows and the brighter areas.
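To make the idea concrete, here is a minimal sketch of naive exposure fusion, assuming three already-aligned exposures loaded as floating-point arrays in [0, 1]; the weighting scheme is illustrative and not taken from any particular camera's firmware:

```python
import numpy as np

def merge_hdr(under, normal, over):
    """Naive exposure fusion: weight each frame by how well-exposed
    each pixel is, then blend. All inputs are float arrays in [0, 1]."""
    frames = [under, normal, over]
    merged = np.zeros_like(normal)
    total_weight = np.zeros_like(normal)
    for frame in frames:
        # Pixels near mid-grey are trusted most; clipped highlights
        # and crushed shadows get almost no weight.
        weight = np.exp(-((frame - 0.5) ** 2) / (2 * 0.2 ** 2))
        merged += weight * frame
        total_weight += weight
    return merged / np.maximum(total_weight, 1e-6)
```

Real pipelines additionally align the frames and blend them at multiple scales to hide seams, but the principle is the same.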

The disadvantages of HDR are obvious: the long capture time means that moving objects caught in the frame will appear doubled, and even slight camera shake will blur the picture.

Standards in detail

Modern HDR video support revolves around the three most popular standards: HLG, Dolby Vision and HDR10.

HLG

A standard that combines the HLG gamma curve (we won't puff out our cheeks here and write clever words about transfer functions; "gamma curve" is the everyday term) with the Rec.2020 color space (container). Maximum brightness is assumed to be 1000 nits, although it is used for more. HLG was conceived as a standard for delivering live broadcasts in HDR: the point is that the curve adapts very easily to any display brightness and works just as well at 300 nits as at 1000 nits. That flexibility is its advantage, and for the same reason the standard was warmly received by videographers: it lets you worry less about balancing lights and shadows. It is enough to understand roughly that the first half of the histogram holds the main part of the scene and the second half holds bright areas and highlights. The standard is easy to shoot with and easy to adapt to screens of any brightness. That said, at the very beginning of its life HLG was not intended for recording raw material for later post-processing: it is a delivery standard, not a recording standard. Yet Sony somehow managed to present it as a logarithmic profile, and at the start the color depth was 8 bits; Sony devices, including smartphones, still accept 8-bit HLG without complaint. True colorists do not consider HLG to be an HDR standard at all.

BBC White Paper WHP 309.
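For reference, the HLG transfer curve itself is compact enough to write down. Below is a sketch of the BT.2100 HLG OETF, mapping normalized scene-linear light to a signal value; the constants come from the standard, but treat the code as illustrative rather than a production implementation:

```python
import math

# Constants from ITU-R BT.2100 for the HLG OETF.
A = 0.17883277
B = 1 - 4 * A                   # 0.28466892
C = 0.5 - A * math.log(4 * A)   # 0.55991073

def hlg_oetf(e):
    """Map normalized scene-linear light e in [0, 1] to an HLG signal value."""
    if e <= 1 / 12:
        return math.sqrt(3 * e)
    return A * math.log(12 * e - B) + C
```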

Dolby Vision

The standard for high-quality content. It includes the PQ gamma curve and the P3 color space (in files packaged in a Rec.2020 container), typically at 12 bits. Maximum brightness is nominally 10,000 nits, with two reference grades at 1000 and 4000 nits. It was created so that content is shown exactly as its creator intended, without distorting brightness or color.

What does "without distortion" mean? The PQ gamma curve was designed to tie real-world brightness to the brightness of the video content and to the display's capabilities. Imagine two displays, 600 nits and 1000 nits, and content mastered for a 1000-nit display. The content should not be squeezed to fit the 600-nit display: that display should show everything up to its limit and clip the rest (in reality things are not quite so ideal). Up to 600 nits both displays therefore show exactly the same picture, while the 1000-nit monitor can additionally show detail above 600 nits. That is the main advantage of the standard. Beyond that, Dolby Vision has enormous capabilities for dynamic metadata: information that accompanies each individual frame and tells the player how to react to it. This tool lets the artist give a frame emotion without changing its content; within the metadata, both color and brightness can be adjusted quite subtly. Ten bits may not be enough for that kind of manipulation, which is why the standard calls for 12 bits. We most often encounter this standard in cinemas; getting it to work properly at home takes real effort. True colorists consider true Dolby Vision the only true HDR and do not recognize any other standard.
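The PQ curve (SMPTE ST 2084) that underlies this behavior is an absolute mapping: a given code value always decodes to the same number of nits regardless of the display. A sketch of its EOTF with the standard constants (illustrative, not production code):

```python
# Constants from SMPTE ST 2084 (PQ).
M1 = 2610 / 16384          # 0.1593017578125
M2 = 2523 / 4096 * 128     # 78.84375
C1 = 3424 / 4096           # 0.8359375
C2 = 2413 / 4096 * 32      # 18.8515625
C3 = 2392 / 4096 * 32      # 18.6875

def pq_eotf(signal):
    """Map a non-linear PQ signal value in [0, 1] to absolute luminance in nits."""
    p = signal ** (1 / M2)
    return 10000 * (max(p - C1, 0) / (C2 - C3 * p)) ** (1 / M1)
```

A 600-nit display simply cannot reproduce code values that decode to more than 600 nits; it either clips or tone-maps them, which is exactly the trade-off described above.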

HDR10

A royalty-free alternative to Dolby Vision, with the HDR10+ extension championed by Samsung. The gamma curve and color space are the same, and a color depth of 10 bits is assumed. Initially the target was a maximum brightness of 1000 nits, but given the rapid development of HDR video the threshold was raised to 4000 nits. Yes, the gamma curve itself, as in Dolby Vision, allows up to 10,000 nits, but for some reason that brightness is not mentioned in the context of HDR10. The other difference from Dolby Vision is the lack of dynamic metadata. That was addressed in the HDR10+ standard: the plus means support for dynamic metadata, here in a "budget" version of its own. With HDR10+ metadata you can shape the light-and-shadow pattern of the frame, and that is all; there is none of the flexibility of Dolby Vision. The second drawback of HDR10+ metadata is that in all the time I have worked in video editing I have never come across a single application that can actually work with it. It seems to exist and not exist at the same time; for whom it exists as a standard, beyond the advertising, remains a mystery.
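What HDR10 does carry is static metadata computed once for the whole programme, most notably MaxCLL and MaxFALL. A rough sketch of how they could be derived from linear-light frames measured in nits, just to show what "static metadata" means here (the helper below is hypothetical and not part of any HDR10 tool):

```python
import numpy as np

def hdr10_static_metadata(frames):
    """Compute MaxCLL (brightest pixel anywhere) and MaxFALL
    (brightest frame-average) from a sequence of HxWx3 arrays in nits."""
    max_cll = 0.0
    max_fall = 0.0
    for frame in frames:
        per_pixel_max = frame.max(axis=2)      # maxRGB per pixel
        max_cll = max(max_cll, float(per_pixel_max.max()))
        max_fall = max(max_fall, float(per_pixel_max.mean()))
    return max_cll, max_fall
```

Because these two numbers describe the entire programme, the player cannot adapt scene by scene, which is precisely the gap dynamic metadata was invented to fill.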

This standard, in both variations (with and without the plus), was warmly received by smartphone manufacturers who wanted their devices to support HDR video. Once HDR displays appeared in smartphones, manufacturers started thinking about how HDR10+ content could be recorded on them (the plus, admittedly, turned out to be superfluous, but everyone kept quiet about that). The drawback of HDR10 as a recording standard is that it was never intended for recording. With HDR10 it is very hard to control exposure in the highlights, which leads to ugly clipping where a saturated sky collapses into a sharply outlined disc of sun. You also have to treat color in the bright areas of the frame very carefully, because on playback HDR10 additionally saturates the highlights. That extra saturation of highlights is an advantage of HDR10 over HLG, where the brighter the pixel, the more saturation it loses; but it is an advantage of HDR10 as a delivery standard, not a recording one, because during recording all of this is simply impossible to control. And yet, whether because the market dictates the rules or because manufacturers want to impress, today almost all modern smartphones shoot HDR10, with Apple and Sony as the only exceptions. Today HDR10 is the most popular standard: it is supported by most manufacturers of smartphones, monitors, TVs and even laser projectors.

HDR10+ WhitePaper.

An example of an HDR video shot on a phone using the mcpro24fps application.

What is HDR+

Clever engineers came up with an algorithm that is free of HDR's drawbacks. In fact, it has only one thing in common with HDR: its name.

HDR+ stands for High Dynamic Range + Low Noise. It earned its reputation for a number of outstanding capabilities: the algorithm removes noise with virtually no loss of detail, improves color rendition (which is extremely important in poor lighting and at the edges of the frame), and at the same time greatly expands the dynamic range of the photograph. Unlike standard HDR, HDR+ is almost immune to smartphone shake and to movement in the frame.

The first smartphone to support HDR+ was the Nexus 5. Because of its imperfect white balance and small aperture (f/2.4), its camera was considered no better than average. Everything changed with the Android 4.4.2 update, which brought HDR+ mode and astonishing quality in night shots. Although the pictures were not especially bright across the whole frame, thanks to HDR+ they contained virtually no noise, retained fine detail and had excellent (for a 2013 smartphone) color reproduction.


Nexus 5+HDR+


If you take a closer look

Still, we need some terminology: enter the concept of "dynamic range". A person with healthy eyesight can easily distinguish both very bright and very dark objects simply by shifting their gaze from one to the other: say, from the colorful facade of the Pena Palace in Sintra to the rich blue mosaic in its archway.

If you look at any picture and take its darkest point as 0, and its brightest point as 100, it becomes obvious that our eyes capture any shades at any point on this conventional scale. This is our dynamic range.

But cameras are built differently: even the most sophisticated optics cannot cover the full range and accurately convey detail at both brightness 0 and brightness 100. That is why a camera's dynamic range is described as "narrow" or "wide", "good" or "bad". The camera has access to only a limited portion of the scale, on average from 30% to 70%. The wider a device's dynamic range, the better it balances brightness.


History of HDR+

How did a company that had never made cameras create an algorithm that works wonders using ordinary, by flagship standards, Nexus and Pixel cameras?

It all started in 2011, when Sebastian Thrun, head of Google X (now just X), was looking for a camera for the Google Glass augmented reality glasses. The weight and size requirements were very strict: the camera sensor had to be even smaller than in smartphones, which would have an extremely bad effect on dynamic range and produce a lot of noise in the photo.

There was only one way out: try to improve the photo in software, with algorithms. The problem fell to Marc Levoy, a lecturer in the computer science department at Stanford University and an expert in computational photography, who had worked on software-based image capture and processing technology.

Marc formed a team known as Gcam, which began studying image fusion, a method based on combining a series of images into a single frame. Photos processed with this method were brighter, sharper and less noisy. The technology debuted in Google Glass in 2013 and, renamed HDR+, arrived in the Nexus 5 later that year.


Another night shot from the Nexus 5


Bit color depth

In the context of HDR video, bit depth matters a great deal. Since the developers of the standards built in requirements for the future, it was clear from the start that such a wide color gamut and such high contrast would not get along with a low color bit depth: the higher the contrast and the wider the color space, the more noticeable posterization becomes at low bit depths. It was therefore decided to make 10 bits the minimum in the standards (with the caveat of the HLG standard, which Sony TVs support at 8-bit color depth). As for displays, everything is rather murky. Some panels have honest 10 bits; others have 8 bits + FRC (frame rate control), where the extra 2 bits are simulated by playing with flicker (dishonest 10 bits). Personally, I don't see any difference, but I am sure there are people who can tell honest 10 bits from dishonest ones, or perhaps some other problem arises with dishonest 10 bits. Below I have tried to show, in a deliberately exaggerated way, why 10 bits matter. To head off the especially sharp-eyed, I'll say right away that I did not try to account for contrast or to show everything exactly as it is; I focused on a single parameter, the maximum/peak brightness of the fictitious displays in nits.

Suppose we have 10-bit content in the form of a 64-block gradient. Our first monitor is a low-contrast, 300-nit display with 8-bit color depth: note that we can see the individual blocks, but they are fairly uniform in brightness. The second monitor is a high-contrast display with a peak brightness of 1200 nits, i.e. four times brighter, and a bit depth of 8 bits: the same content no longer looks smooth, and the boundaries imposed by the bit depth become quite visible. The third monitor is a high-contrast 1200-nit display with a bit depth of 10 bits, i.e. four times as many shades of gray: here there are no complaints, all 64 blocks are displayed at the intended brightness and the gradient looks smooth.
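A quick way to see the posterization argument in numbers is to quantize the same ramp to 8 and 10 bits and compare the brightness jump per code value; the sketch below assumes a linear ramp and made-up nit values purely for illustration:

```python
import numpy as np

peak_nits = 1200.0
gradient = np.linspace(0.0, 1.0, 64)           # the 64-block test gradient

for bits in (8, 10):
    levels = 2 ** bits
    quantized = np.round(gradient * (levels - 1)) / (levels - 1)
    step_nits = peak_nits / (levels - 1)        # brightness jump per code value
    print(f"{bits}-bit: {levels} levels, ~{step_nits:.2f} nits per step")
```

At 1200 nits, an 8-bit step is roughly 4.7 nits while a 10-bit step is about 1.2 nits, which is why the brighter the display, the more visible the 8-bit banding becomes.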

How HDR+ works

HDR+ is an extremely complex technology, which cannot be discussed in detail within the scope of this article. Therefore, we will consider the general principle of operation without dwelling on the details.

Fundamental Principle

After you press the shutter button, the camera captures a series of underexposed frames (at a short shutter speed; this is needed to preserve the maximum amount of detail in the photo). The number of frames depends on how difficult the lighting conditions are: the darker the scene, or the more shadow detail that needs to be recovered, the more frames the smartphone takes.

Once the series has been captured, it is merged into a single picture. The short shutter speed helps here: thanks to it, every photo in the series is relatively sharp. The most suitable of the first three frames, in terms of both sharpness and detail, is chosen as the base. The images are then divided into tiles, and the system checks whether adjacent tiles can be merged and how. If the algorithm detects unwanted objects in a tile, it discards that tile and picks a similar one from another frame. The resulting images are then processed with an algorithm based on lucky imaging, a technique used mainly in astrophotography to reduce the blurring caused by the shimmering of Earth's atmosphere.
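In very reduced form, that logic looks something like the sketch below: pick the sharpest of the first frames as the base, then, tile by tile, average in only those tiles from other frames that closely match the base. The real HDR+ pipeline does this on RAW data with sub-pixel alignment; everything here is a simplification with made-up thresholds and 8-bit grayscale input:

```python
import numpy as np

def sharpness(frame):
    """Crude sharpness score: mean magnitude of image gradients."""
    gy, gx = np.gradient(frame.astype(float))
    return float(np.mean(np.abs(gx) + np.abs(gy)))

def merge_burst(frames, tile=16, threshold=10.0):
    """Merge a burst of grayscale frames (0..255) into one, tile by tile."""
    base = max(frames[:3], key=sharpness)           # best of the first frames
    merged = base.astype(float).copy()
    h, w = base.shape
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            ref = base[y:y+tile, x:x+tile].astype(float)
            stack, count = ref.copy(), 1
            for frame in frames:
                if frame is base:
                    continue
                cand = frame[y:y+tile, x:x+tile].astype(float)
                # Only average in tiles that look like the reference,
                # i.e. reject tiles where something moved.
                if np.mean(np.abs(cand - ref)) < threshold:
                    stack += cand
                    count += 1
            merged[y:y+tile, x:x+tile] = stack / count
    return merged
```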

Next comes a sophisticated noise-reduction system that combines simple averaging of pixel values across multiple images with a model that predicts where noise will appear. The algorithm works very gently at the edges of tonal transitions to minimize the loss of detail, even at the cost of leaving a little noise there, while in areas of uniform texture the noise reduction flattens the picture to an almost perfectly even tone while preserving the gradation of shades.
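The edge-aware behavior can be approximated by letting the amount of smoothing depend on local contrast, as in the toy filter below; it is a stand-in for the real algorithm, assuming SciPy is available and the parameters are purely illustrative:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_denoise(image, window=5, edge_strength=50.0):
    """Blend each pixel with its local mean, but less so near edges."""
    image = image.astype(float)
    local_mean = uniform_filter(image, window)
    local_var = uniform_filter(image ** 2, window) - local_mean ** 2
    # Flat regions (low variance) get pulled strongly toward the mean;
    # high-contrast regions keep most of their original detail (and noise).
    blend = np.clip(local_var / (local_var + edge_strength ** 2), 0.0, 1.0)
    return blend * image + (1.0 - blend) * local_mean
```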


Operation of noise reduction in difficult conditions. On the left before processing, and on the right after


What about expanding the dynamic range? As we already know, using a short shutter speed avoids blown-out highlights; all that remains is to clean up the noise in the dark areas using the algorithm described above.
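In other words: expose for the highlights, clean the shadows by averaging, then lift them with a tone curve. A toy version of that last step is shown below; the real pipeline uses far more sophisticated local tone mapping, so treat this as a sketch only:

```python
import numpy as np

def lift_shadows(image, amount=2.2):
    """Apply a simple global tone curve that brightens shadows much more
    than highlights. `image` is linear-light data normalized to [0, 1]."""
    return np.clip(image, 0.0, 1.0) ** (1.0 / amount)
```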

At the final stage the resulting image is post-processed: the algorithm minimizes vignetting caused by light hitting the sensor at an oblique angle, corrects chromatic aberration by replacing pixels along high-contrast edges with their neighbors, boosts the saturation of greens, shifts blues and purples toward blue, applies sharpening, and performs a number of other steps that improve the quality of the photo.
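Two of those finishing steps, vignetting correction and sharpening, are simple enough to sketch in isolation (grayscale input, made-up coefficients, purely illustrative):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def correct_vignetting(image, strength=0.3):
    """Brighten a grayscale frame progressively toward its corners."""
    h, w = image.shape
    y, x = np.mgrid[0:h, 0:w]
    # Normalized distance from the frame center (0 at center, 1 at corners).
    r = np.sqrt((x / w - 0.5) ** 2 + (y / h - 0.5) ** 2) / np.sqrt(0.5)
    return image * (1.0 + strength * r ** 2)

def unsharp_mask(image, radius=2.0, amount=0.5):
    """Classic sharpening: add back the detail lost by a Gaussian blur."""
    blurred = gaussian_filter(image.astype(float), radius)
    return image + amount * (image - blurred)
```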


Illustration of the HDR+ pipeline algorithm from the developers' report

On the left is a photo from a Samsung stock camera in HDR, and on the right is a photo created in Gcam in HDR+. It can be seen that the algorithm sacrificed the detail of the sky to draw objects on the ground.


Google Pixel HDR+ update

The algorithm in the Google Pixel has undergone significant changes. The smartphone now starts shooting as soon as the camera is launched, capturing 15 to 30 frames per second depending on the light. This technique is called ZSL (Zero Shutter Lag) and was originally invented for instant snapshots, but the Pixel uses it for HDR+: when you press the shutter button, the phone selects 2 to 10 frames from the ZSL buffer (depending on lighting conditions and the presence of moving objects). The best of the first two or three frames is then chosen as the base, and the rest, as in the previous version of the algorithm, are layered on top of it.
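Conceptually the ZSL buffer is just a ring buffer of recent frames from which the capture request picks a handful of the best ones. The schematic sketch below uses a placeholder sharpness score rather than Google's actual selection criteria:

```python
from collections import deque

class ZslBuffer:
    """Keep the most recent frames; on capture, hand back the best few."""
    def __init__(self, capacity=30):
        self.frames = deque(maxlen=capacity)

    def push(self, frame, sharpness_score):
        self.frames.append((sharpness_score, frame))

    def capture(self, count=6):
        # Pick the sharpest `count` frames already sitting in the buffer,
        # so the shot is effectively taken before the button is pressed.
        best = sorted(self.frames, key=lambda item: item[0], reverse=True)
        return [frame for _, frame in best[:count]]
```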

Alongside this, a split into two modes appeared: HDR+ Auto and HDR+. The latter takes as many photographs as possible to build the final shot, which comes out richer and brighter.

HDR+ Auto takes fewer photos, meaning moving objects are less blurred, hand shake is less affected, and your photo is ready almost instantly when you press the capture button.

In the Pixel 2/2XL version of Google Camera, HDR+ Auto mode has been renamed HDR+ On, and HDR+ has been renamed HDR+ Enhanced.

The second generation of Google Pixel introduced a dedicated coprocessor called Pixel Visual Core. Currently the chip is used only to accelerate photo processing in HDR+ mode and to give third-party applications the ability to take HDR+ photos. Its presence or absence does not affect the quality of photos taken with Google Camera.

Principle of operation


The principle of operation of the mode.

HDR on the phone can be enabled either manually or automatically. When the mode is active, pressing the shutter button makes the camera take several photos at once, each with a different sensitivity and exposure. The resulting source frames are processed by software algorithms that pick out the successful frames and parts of frames, after which the selections are merged into a single image.

When using the extended range, the user gets photos taken at a short shutter speed. If the autofocus system works quickly, the photo comes out sharper and of better quality, in most cases far better than in the standard mode.

INFO

Google even uses HDR+ to work around hardware problems. Due to a design flaw, the Google Pixel/Pixel XL could take photos with noticeable glare. Google released an update that uses HDR+ to remove this glare by combining shots.

Advantages and disadvantages

Let's highlight the main advantages of HDR+:

  • The algorithm perfectly removes noise from photographs, practically without distorting details.
  • Colors in dark scenes are much richer than in single-frame shooting.
  • Moving objects in photos are less likely to double up than when shooting in HDR mode.
  • Even when taking a photo in low light conditions, the likelihood of blurring the image due to camera shake is minimized.
  • The dynamic range is wider than without using HDR+.
  • Color rendition is mostly more natural than with single-frame shooting (not for all smartphones), especially in the corners of the image.

In the illustrations below, on the left is a photo from the stock camera of the Galaxy S7, and on the right is a photo in HDR+ via Google Camera on the same device.

Night photographs of the city. You can clearly see that HDR+ gives a sharp image of the group of people standing under the Beeline sign: the sky looks clean, the road sign is legible, the grass is green as it should be, the Beeline logo has correct colors, and the balconies, wires and tree crowns are clearly drawn. It is telling, though, that the detail in the trees on the right (in the shadow) is slightly worse with HDR+ than with the stock camera.


Pay attention to the rendering of the sculptures' faces, the richness of the colors of the clothing, and the absence of serious noise. However, the rendering of objects in the shadows again leaves much to be desired.


City outskirts. The dim light of the street lamps is enough for HDR+ to render the surface of a building wall.


Morning photos. In the difficult conditions of morning shooting with strong backlight, the colors look natural, the texture of the tree trunks is clear, and the bush and grass in the shade of the tree retain visible depth.


HDR+ has few disadvantages, and for most scenes they are insignificant. First, creating an HDR+ photograph takes a lot of CPU and RAM, which has several negative consequences:

  • battery consumption increases and the device heats up while the series of images is combined;
  • you can't quickly take several pictures in a row;
  • instant preview is not available; the photo appears in the gallery only after processing is complete, which on the Snapdragon 810 takes up to four seconds.

These problems have already been partially solved using Pixel Visual Core. But this coprocessor will most likely remain Google Pixel's trump card.

Second, the algorithm needs at least two photographs to work, and on average it captures four or five frames. Hence:

  • there will inevitably be situations in which the algorithms fail;
  • HDR+ is slightly inferior to classic HDR in dynamic range coverage;
  • creating a single photo and processing it with a fast ISP coprocessor is preferable for action scenes, because it avoids the ghosting and blurring of objects that come with slow shutter speeds.


Disadvantages of HDR+ from the report of its developers


Night photography with many moving objects

Color gamut

Before we start looking at the standards, I want to look at the Rec.2020 color gamut and at bit depth. Color gamut is more about displays than about camera sensors: in the consumer segment it is displays that are limited in their capabilities. Sensors have come a long way; they have no problem capturing the entire spectrum of light visible to the human eye. The eye sees 380-700 nm, a sensor sees 400-1100 nm, so gamut is not an issue. With that I want to close the recurring question of whether a sensor can really have such wide coverage: it can, and then some.

Let's look at the diagram. Rec.709 (essentially sRGB) is what monitors have long been able to display, and not always at 100%. Rec.2020 is what the developers of HDR standards are planning for the future; today no monitor can display all of Rec.2020. The horseshoe-shaped figure painted in different colors is what the human eye sees. Mentally extend it from 700 to 1100 nm and trim it a little from 380 to 400 nm, and you get what the sensor of a modern camera can capture, which is much more than Rec.2020. With dynamic range things are a little more modest. "Adult" cameras manage 14 stops and more (if needed, a RED camera recording two different exposures in parallel into separate files gives an ultra-wide dynamic range). Put simply, a stop is a doubling of brightness: if x is the original brightness, then +1 stop = 2x (600 nits relative to 300 nits is +1 stop). Smartphone camera sensors manage 10-11 stops. Displays are a little different: ordinary HDR displays show 10-11 stops, premium ones 13-14, and professional ones up to 24 stops (for example, the Eizo ColorEdge Prominence CG3146). Here I want to stress once again that when we talk about a camera sensor we mean its entire area, whereas in the consumer display segment the peak brightness may be reachable on only about 10% of the display area at a time. In that case an 11-stop sensor may well capture more dynamic range than the monitor can display.
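The arithmetic behind "stops" is simply a base-2 logarithm of the contrast ratio, as the toy calculation below shows:

```python
import math

def stops(bright_nits, dark_nits):
    """Dynamic range in stops between the brightest and darkest usable level."""
    return math.log2(bright_nits / dark_nits)

print(stops(600, 300))   # 1.0 stop, as in the example above
print(stops(1000, 1))    # ~10 stops, roughly a mid-range HDR display
```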

It is generally accepted that HDR video content uses Rec.2020. In practice, to display all of Rec.2020 a monitor would literally have to shine lasers straight into your eyes. As I wrote above, today there is not a single monitor capable of displaying the entire Rec.2020 gamut. Rec.2020 remains a container, and a hope for future development, while in reality content is delivered in the P3 color space: colorists who grade HDR content grade it within P3 but place it in containers with Rec.2020 metadata.

What devices does HDR+ work on?

Purely theoretically, HDR+ can work on any smartphone running Android 5.0 or later (the Camera2 API is required). But for marketing reasons, and also because some optimizations require special hardware (the Hexagon coprocessor in Snapdragon chips), Google deliberately blocked HDR+ on every device other than the Pixel. However, Android would not be Android if enthusiasts had not found a way around this limitation.

In August 2017, one of the 4PDA users managed to modify the Google Camera application so that HDR+ mode could be used on any smartphone with a Hexagon 680 or newer signal processor (Snapdragon 820 and up) and the Camera2 API enabled. At first the mod did not support ZSL and overall felt rough, but even that was enough to lift the photo quality of the Xiaomi Mi 5s, OnePlus 3 and others to a previously unattainable level, while the HTC U11 became able to compete on equal terms with the Google Pixel.

Later, other developers joined in adapting Google Camera to phones from third-party vendors. After a while, HDR+ even ran on devices with the Snapdragon 808 and 810. Today there is a ported version of Google Camera for almost every smartphone with an ARMv8 Snapdragon that runs Android 7+ (in some cases Android 6) and supports the Camera2 API. Often it is maintained by a single enthusiast, but usually there are several such developers at once.

In early January 2018, XDA user miniuser123 managed to get Google Camera with HDR+ running on his Exynos-powered Galaxy S7. A little later it turned out that Google Camera also worked on the Galaxy S8 and Note 8. The first Exynos builds were unstable, crashed and froze frequently, and lacked optical image stabilization and ZSL. Version 3.3 is already fairly stable, supports optical stabilization, ZSL and all Google Camera features except portrait mode, and the list of supported devices now includes several Samsung A-series smartphones.

But my iPhone shoots Dolby...

Now, where have we heard about the Dolby Vision standard in connection with smartphones? Right: the iPhone. It would seem wonderful to have Dolby Vision in a smartphone, but no such luck; marketing has once again seduced us with pretty words. How it happened that Dolby suddenly created a new flavor of the standard using the HLG gamma curve and 10 bits remains a mystery to me, but the fact remains: Dolby made big concessions specifically for Apple. What do we get as a result? The HLG standard with the ability to attach Dolby's dynamic metadata. And it only gets worse from there: to work with Dolby's dynamic metadata, a colorist has to buy a far-from-cheap license from Dolby. Apple has such a license, and its native application can work with this metadata, and that is where the advantages end, which essentially forces you to treat Dolby Vision on Apple smartphones as ordinary 10-bit HLG of the same kind Sony offers. Maybe Apple has some far-reaching plans here, but so far it all looks like pretty marketing: not only does almost nobody have a license to edit the metadata, you also have to find users who can do it correctly, and there are critically few applications that can work with Dolby Vision metadata at all. Dolby Vision from Dolby is a very powerful video standard, the best available today; Dolby Vision as presented by Apple is a pale imitation intended to popularize the standard (all the TV manufacturers rushed to support it), which is not a bad thing in itself. And since true colorists do not consider HLG an HDR standard, the Dolby Vision presented by Apple did not fill them with delight either.

Using Dolby Vision content captured on an Apple iPhone 12 as a source in a Dolby Vision production.

How to get HDR+ on your device

If you have an Exynos smartphone, the choice is small. Go to the discussion thread on XDA, open the V8.3b Base spoiler (if you have Android 8) or the Pixel2Mod Base spoiler (for Android 7) and download the latest version. You can also visit the Telegram group where all Google Camera updates are posted promptly.

Owners of smartphones with Qualcomm processors will have to do some searching. Enthusiasts actively maintain HDR+-enabled versions of Google Camera for many smartphones. Here are the best-known models:

  • OnePlus 3/3t;
  • OnePlus 5/5t;
  • OnePlus 6;
  • Xiaomi Mi Max2;
  • Xiaomi Redmi 4x;
  • Xiaomi Redmi 5 Plus.

If your model is not on the list, I recommend looking through the threads discussing the camera and the device itself on w3bsit3-dns.com and XDA. At the very least, there will be users who have tried to get HDR+ running.

In addition to all of the above, I will mention that there is a page on the Internet that collects almost all versions of Google Camera, which makes it convenient to test various GCam builds on little-known devices.

What is AI in a smartphone camera?

What is an AI camera and what is it for? AI stands for artificial intelligence. Inside a smartphone camera, it is a specially trained algorithm that can recognize the scene and the subject being shot.
