
Instructions for using the online HDR image creation page

A problem one often encounters while shooting is the insufficient dynamic range of the sensor. The dynamic range can be described as the range of brightness values which can be reproduced correctly. Toward higher brightness values it is limited rather sharply by the saturation of the photodiodes, in other words by the moment at which the photodiodes are "full" and can no longer collect further photons. In the picture this corresponds to the burnt-out areas, which are completely white and without any detail.
Toward lower brightness values the limit is less sharp: it corresponds to the brightness value at which the signal becomes indistinguishable from noise.
This means that it is not possible to improve the apparent dynamic range just by working on the exposure curve, since the darker parts would show unacceptable amounts of noise when lightened.
An alternative solution is the fusion of differently exposed pictures. Real HDR photography (High Dynamic Range photography) is based on modelling the physical properties of the sensor through the analysis of several pictures (at least three), in order to obtain the values of the parameters which characterize its response to incoming light. Although valuable, this approach is very complicated. I believe that, at least in some situations, a much simpler approach to the problem is possible, one which completely avoids modelling the properties of the sensor and is based only on the fusion of two or three JPEG pictures, according to a certain rule, in order to obtain a final image which reproduces the real observed scene better than any single photo could. This solution is offered on the HDR on line page of this web site. The method is based on shooting two or three pictures with the same framing (although, in the case of two pictures, it is possible to correct small shifts): with two pictures, one overexposed and one underexposed; with three pictures, one overexposed, one correctly exposed and one underexposed. The two, or three, pictures are then fused so that in the darker parts the information from the overexposed picture prevails, and in the brighter parts the information from the underexposed picture.
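As a rough illustration of the per-pixel fusion idea (the function name and the sample weight below are mine, not the page's; the actual rule used by the page is described further on):

```python
def fuse_pixel(high, low, weight):
    """Blend two RGB pixels; weight = fraction taken from the
    overexposed (Low Lights) shot, 1 - weight from the underexposed one."""
    return tuple(round((1 - weight) * h + weight * l)
                 for h, l in zip(high, low))

# A dark scene point: the underexposed shot is almost black, the
# overexposed one has visible detail, so most weight goes to the latter.
dark_high = (10, 12, 8)
dark_low = (60, 80, 55)
print(fuse_pixel(dark_high, dark_low, 0.8))
```

In the brighter parts of the scene the weight would instead be close to 0, so the underexposed shot dominates there.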

Compared to real HDR this approach is simplistic; nevertheless it can deliver good results in most cases in which every part of the scene is correctly exposed in at least one of the two starting pictures. For strongly contrasted scenes, like sunsets or windows in a room, or scenes with particular light conditions, like reflections, this may not be the case, and real HDR would be necessary.
Here is an example. The picture shows a scene in which the trees on the sides are more or less against the light. Exposing correctly for the central part of the picture, the branches of the trees appear almost black, although in reality they looked green, with many recognizable details.





By modifying the exposure curve it is possible to lighten the branches.



If you look in detail, however, you see that the lightened parts now show much more noise than the rest of the picture, and that the details there are not very well recognizable anyway.



Using instead a second picture, with a longer exposure time, so that the trees are correctly exposed,



and fusing it with the first one, according to a particular algorithm described below, it is possible to get a better result



in which the trees are correctly exposed, don't show more noise than the rest of the picture, and possess a good amount of detail.



If you want to use the page with two pictures, these are the steps:
  • Shoot a picture correctly exposed for the brighter parts of the scene (High Lights picture)
  • Shoot a picture correctly exposed for the parts of the scene which appear too dark in the first picture (Low Lights picture)
  • Upload both pictures. Obviously they must be of the same size and shot with identical framing.
  • Choose the best gradient shape, position and width. You can get a preview of the result by clicking on "Preview!"
  • Once you have found a suitable gradient, you can get the mixed photo by clicking on "Execute Mixing!" After a while a large preview will appear in the lower part of the page. By clicking on it you can download the mixed picture at full resolution.
The parameters you can vary are the gradient shape, its width and the minimum brightness, that is, the minimum brightness at which all the information, at the level of the single pixel, comes from the High Lights picture. The brightness of a pixel is defined as the normalized square root of the sum of the squares of the three RGB values. If, for example, a pixel has RGB values 100, 150 and 200, its brightness will be 0.61. The brightness of black is 0, that of white is 1.
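The brightness definition above can be written out directly (the function name is mine):

```python
import math

def brightness(r, g, b):
    """Normalized root-sum-of-squares brightness, as defined in the text:
    0 for black, 1 for white, with 8-bit RGB channels."""
    return math.sqrt(r * r + g * g + b * b) / math.sqrt(3 * 255 * 255)

print(round(brightness(100, 150, 200), 2))  # 0.61, the example from the text
```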

The gradient can have two different shapes:
  • Linear gradient
  • Spherical gradient (ok, in reality it just has the shape of a quarter of a circle, but spherical sounds very good!)
In the first case the percentage of information coming from the High Lights picture increases linearly as the pixel brightness decreases. In the second case, instead, the information from the Low Lights picture is used almost only in the darker parts of the image.
The following picture illustrates the two gradient types and the meaning of the parameters you can work on:



So far I can say that the gradient shape which most often gives the best results is the linear gradient with L1 = 0 and L2 = 1. Sometimes it is useful to set the gradient width larger than the minimum brightness. This means that in the final image no pixel will contain only information coming from the Low Lights picture.
For images with bright low-contrast areas the spherical gradient can be more suitable, because it preserves the details in those areas better than the linear one.
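One plausible way to write the two gradient shapes as weight functions, interpreting L1 and L2 as the bounds of the transition (the exact curve used by the page may differ; the quarter-circle formula in particular is an assumption matching the description above):

```python
import math

def weight_high(b, L1=0.0, L2=1.0, shape="linear"):
    """Fraction of the High Lights picture used at pixel brightness b.
    Below L1 everything comes from the Low Lights picture, above L2
    everything from the High Lights picture."""
    if b <= L1:
        return 0.0
    if b >= L2:
        return 1.0
    t = (b - L1) / (L2 - L1)
    if shape == "linear":
        return t
    # "Spherical": quarter-circle shape, rising steeply from the dark end,
    # so the Low Lights picture contributes almost only in the darkest parts.
    return math.sqrt(1.0 - (1.0 - t) ** 2)
```

At mid brightness the spherical weight is already well above the linear one, which is why it keeps more of the High Lights picture in bright low-contrast areas.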

If it is possible to put the camera on a tripod or some other kind of support, the three-picture approach can be used. Also in this case it is possible to choose between two gradient shapes, one linear and one spherical:
It is worth underlining that the best choice varies from case to case, and so does how different the two final images will be. In some cases they will be almost identical, in others clearly different.


On the final picture it is often useful to use a simple image editor (FastStone Image Viewer is perfect for this purpose) to increase the local contrast and sometimes the gamma value. Otherwise the picture tends to look a little dull and darker than the original.

Here are some examples, for now only for the case of two starting pictures, but with three the idea does not change much. By clicking on the previews you can download half-resolution pictures:

High Lights picture Low Lights picture Final picture


Shift Correction
The page now makes it possible to correct small vertical and horizontal shifts between the two pictures. Rotations still have to be corrected in advance with an image editing tool. For this specific purpose the best solution is IrfanView, which permits rotations with a precision of up to a tenth of a degree.
Important Note: The image which has to be rotated is always the one for the high lights (the darker one). When you rotate it with the external tool, make sure that the autocrop option is not active. Use the larger created image without cutting away the lateral triangles it creates. The image is cropped automatically to the dimensions of the Low Lights picture while uploading.
The correction system includes three options:
  • Grid
In this case the necessary correction is evaluated on a grid of nine rectangles uniformly distributed over the image. It was the first method implemented, but it is the least reliable.

  • Rectangle
In this case it is possible to choose the position and the dimensions of a rectangle on which to estimate the best shift. To do it, click once on the High Lights image to open it and double-click to close it. I have come to the conclusion that it is best to create a very large rectangle, which includes the whole image, because this reduces the uncertainties of the system.
    Two cautions: on one hand, it is important not to include in the rectangle subjects which have moved from one picture to the other. On the other hand, it is also necessary to leave enough margin on all sides so that the rectangle does not go outside the shifted image. If this happens, the system will encounter an error; in that case just go back to the previous page with the browser and choose a smaller rectangle.

  • Square to optimize manually the shift
In this case the square parts of the two images are shown superimposed at original size and in transparency in a dedicated table. By clicking on the arrows it is possible to shift one square with respect to the other until the subjects are exactly superimposed. The values for the vertical and horizontal shift can then be used to create the preview, or directly the full-size image (through the button Apply!).
    Manual optimization can be necessary for images with no sharp edges, or in which part of the subject becomes invisible in the longer-exposed picture.
    Important Note: If the shown square doesn't look as it should, in particular with Internet Explorer, delete the cache of the browser!
The option "Use inserted values as starting point!" means that the optimization process (in the case of the first two options) starts from the inserted values. This is useful to help the algorithm move in the correct direction when the shift is very large (for example, for pictures taken at a long focal length).
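The page's actual optimizer is not documented here, but the simplest way to picture the rectangle-based shift estimation is an exhaustive search over small offsets, keeping the one that minimizes the pixel difference inside the chosen rectangle (the function name and the brute-force approach are mine, just a sketch):

```python
def best_shift(low, high, top, left, h, w, max_shift=2):
    """Return the (dy, dx) that best aligns `high` onto `low` inside the
    rectangle (top, left, h, w), by minimizing the sum of squared
    differences over all offsets up to max_shift. Images are given as
    2-D lists of grayscale values; margins must be large enough that
    the shifted rectangle stays inside `high`."""
    best = None
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            err = sum((low[y][x] - high[y + dy][x + dx]) ** 2
                      for y in range(top, top + h)
                      for x in range(left, left + w))
            if best is None or err < best[0]:
                best = (err, dy, dx)
    return best[1], best[2]

# Demo: a pattern shifted down-right by one pixel is recovered as (1, 1).
base = [[(7 * y + 3 * x) % 50 for x in range(10)] for y in range(10)]
shifted = [[base[y - 1][x - 1] if y and x else 0 for x in range(10)]
           for y in range(10)]
print(best_shift(base, shifted, 3, 3, 4, 4))  # (1, 1)
```

This also makes the two cautions above concrete: a subject that moved between the shots adds error at every offset and can pull the minimum away from the true shift, and a rectangle too close to the border makes `high[y + dy][x + dx]` fall outside the image.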