# HDR may have ruined me!



## D-50 (Aug 21, 2007)

So I figured out how to use Photomatix and Photoshop to create nice HDR images, although I think I may have passed the point of no return. Now when I look back at my regular photos, they seem lifeless and flat. Anyone else experience this?


----------



## gizmo2071 (Aug 21, 2007)

Lots of people seem to use HDRs for everything.
Like "Oh I can't be bothered to expose this correctly or use a polarizer... I know I'll just make it a HDR"

Most of the time I like to keep both my shadow areas and my highlight areas, looking at things via the Zone Metering school of thought.
HDR kicks zone metering in the teeth, and thus... I'm not much of a fan.

I like to see HDRs every now and again, and I sometimes love the surrealness of them, but most pictures don't need HDR; they just need proper exposure and a filter of some form.


----------



## abraxas (Aug 21, 2007)

D-50 said:


> So I figured out .... the point of no return.  Now ... my regular photos they seem lifeless and flat. Anyone else experience this?



Yes.  However, even my HDRs are lifeless and flat.  I'm thinking it's my style.

I use HDR in my work, but it's because it's quicker than figuring out exposures.  I try not to think at work.

I'm about 50/50 when it comes to exposing to the light and using RAW or HDR.  I think in the long run I'll end up shooting RAW.  I'm hoping to maintain the flat and lifeless thing I have going.


----------



## glaston (Aug 22, 2007)

> Like "Oh I can't be bothered to expose this correctly or use a polarizer... I know I'll just make it a HDR"


That's kind of off base, IMO. I understand your point, and I'm not saying that HDR is never used as a crutch.
However, by its nature, if you fail to expose the image properly and don't have accurate stops of exposure across the right bracketed range, the HDR image produced from those frames won't be an accurate HDR image at all.
And accuracy is the entire point of using HDRI. So if it's not accurate, there's no point in using HDRI.
It would result in a 'non-linear' dynamic range that would throw off all the other luminance values.
It's all about the technical aspect of measured light samples.
The premise of HDRI is to force the image into 32-bit float mode so that it can represent a much higher range of luminance/radiance values than 8- or 16-bit images.

It's the same concept as using 32- or 64-bit memory addressing in your computer.

For example, the x64 architecture of newer CPUs: using 64-bit addressing allows the computer to utilize much more memory than the 32-bit x86 architecture, which makes it suitable for working with large datasets and CPU- and RAM-intensive applications like those needed for 3D rendering and animation.

The dynamic range of a digital image works in much the same way.

This assumes that once you create the HDR image, you save it in a file format that supports 32-bit float mode, such as HDR, OpenEXR, Radiance, etc...

This allows the luminance values to be represented in a much wider range than traditional 8-16 bit images, which would automatically clip them.
So regardless of how well exposed a 16-bit image is, it will never represent luminance values higher than is mathematically possible with a 16-bit range.

Simply put, a 16-bit image will suffer from blown highlights and loss of detail in the shadow points LONG before the same image in 32-bit float mode.
A 16-bit image just doesn't have enough data to retain detail beyond a certain point, because it doesn't recognize mathematical values beyond that point.
The computer basically rounds off the values used to represent those higher luminance ranges, which visually translates to clipping at the lower/higher ends of the exposure range.
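
To put rough numbers on this, here's a minimal Python/NumPy sketch (the radiance values are invented for the example): an 8-bit, display-referred encoding clips every value above display white, while a 32-bit float, scene-referred encoding keeps the highlight detail intact.

```python
import numpy as np

# Hypothetical scene radiances in relative linear units, from deep shadow
# up to a highlight 50x brighter than display "white" (1.0).
radiance = np.array([0.001, 0.25, 1.0, 10.0, 50.0])

# 8-bit, display-referred: scale to 0..255 after clipping to the 0..1
# display range -- everything brighter than 1.0 collapses to pure white.
img_8bit = (np.clip(radiance, 0.0, 1.0) * 255).astype(np.uint8)

# 32-bit float, scene-referred: the full range survives untouched.
img_float = radiance.astype(np.float32)

print(img_8bit)   # the 10.0 and 50.0 highlights both collapse to 255
print(img_float)  # 10.0 and 50.0 stay distinct -- detail retained
```

The two brightest samples become indistinguishable in the 8-bit version, which is exactly the clipping being described above.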

I hope I've explained myself well enough for people to understand.
There's plenty of information on the web which could explain it better than I can.


----------



## Sw1tchFX (Aug 22, 2007)

D-50 said:


> Now when I look back at my regular photos they seem lifeless and flat. Anyone else experience this?



Nope. There are times when HDR is useful, there are other times when it's not. It's up to the photographer to decide when it would be a wise course of action. 

About HDRs replacing correct exposure or polarizers: HDR does not reduce glare; if anything, it would promote it. If the images you use for your HDR are not carefully created to work together, then you'll fail.


----------



## glaston (Aug 22, 2007)

> Nope. There are times when HDR is useful, there are other times when it's not. It's up to the photographer to decide when it would be a wise course of action.


This sounds sketchy to me.
Anyone who works with REAL full-res 32-bit float HDRIs in a true HDR format like Radiance can't help but notice that, compared to them, other lower-range images pale in comparison.
There's just no getting around it.
You must be working with 16-bit (half precision) HDR images that are tone mapped for display on low dynamic range output devices.
When you properly create a light probe using the chrome ball method and have all your settings dialed in properly, the range and contrast of the image is astounding. There's no way that you're getting results anywhere near it with traditional "device-referred" digital images.
I don't care what you do to it, it just isn't possible.
I'll post an example of a 3D generated chrome sphere, lit with a radiance map for true specular reflections.
Because I don't think we're talking about the same thing here.


----------



## Mohain (Aug 22, 2007)

glaston said:


> Anyone who works with REAL full rez 32bit float HDRI's in a true hdr format like radiance, can't help but notice that compared to that other lower rez images pale in comparison.
> ...
> 
> I'll post an example of a 3D generated chrome sphere, lit with a radiance map for true specular reflections.
> Because I don't think we're talking about the same thing here.


 
I think you must be right. I'm pretty sure that the OP, and subsequent posters, were referring to HDR (and tone mapping) use in photography, what with it being a photography forum and all that. Not many are going to have $50k monitors to view 32bit HDR images.

Back on topic ... I was getting to the stage where I wanted to HDR everything but I think I've got over it now. I try to go for subtle uses sometimes now to bring out some shadow detail or if there is a bright sky that needs detail retaining. I'm determined to use filters and get it right in camera as much as possible now.


----------



## D-50 (Aug 22, 2007)

I am in the beginning stages of HDR use, so I guess I will have to get it out of my system so I can get back to real photography. Not to say I take less time when actually taking photographs; I always try to get it right in the camera. I have found, though, that I enjoy HDR images of people much more than landscapes.


----------



## Sw1tchFX (Aug 22, 2007)

glaston said:


> This sounds sketchy to me.
> Anyone who works with REAL full rez 32bit float HDRI's in a true hdr format like radiance, can't help but notice that compared to that other lower rez images pale in comparison.
> There's just no getting around it.
> You must be working with 16bit (half precision) hdr images that are tone mapped for display on low dynamic range output device/s.
> ...


There are times when HDRs are impossible to create (any sort of action), and HDRs add time to post production. I'd just as soon use a single RAW file and do my PP on that rather than have to make an HDR and fuss with it, unless HDR is the only way to get the look I'm aiming for.


----------



## glaston (Aug 23, 2007)

> I think you must be right. I'm pretty sure that the OP, and subsequent posters, were referring to HDR (and tone mapping) use in photography, what with it being a photography forum and all that. Not many are going to have $50k monitors to view 32bit HDR images.


Of course, neither do I have a $50k monitor.
But I do work with HDRI images and normal images, and I know that an image generated properly using HDR is above and beyond what a normal image could ever be, in terms of dynamic range and contrast.
My comments were directed at what I thought was Sw1tchFX saying that his lower dynamic range images were achieving the same quality as HDR images.
That could only be true if the HDR image wasn't properly generated using all the correct stops of exposure to cover the dynamic range, even when it was tone mapped for display on conventional monitors.

As to this being a photography forum: again, of course. But I can't help but notice that some forms of hybrid photography aren't ever used here, and wouldn't be respected the same as work created using more traditional methods, like doing everything "in-camera".
Which, I'm sorry to say, is BORING. And I don't see why it is so revered.
It only shows proficiency with the camera, not post-processing skills, which are as much a part of photography in this day and age as skill with the camera. The proof of this is that camera makers are adding post-processing functions into the camera.
All things being equal, work that draws on skill with in-camera functionality and also extensive post-processing skill shows talent in more than one area of photography.
Which basically means MORE SKILL in general.
The only proof needed to validate this is how old-timers resist digital imaging, and how many people try to act like doing everything "in-camera" is somehow more valid than an image that makes use of extensive post processing.
What it REALLY means is that your skills are lacking, and you need to update not only your skill level, but your out-of-date ideas about photography in general...


----------



## glaston (Aug 23, 2007)

> There are times when HDRs are impossible to create (any sort of action), and HDRs add time to post production. I'd just as soon use a single RAW file and do my PP on that rather than have to make an HDR and fuss with it, unless HDR is the only way to get the look I'm aiming for.


Can't argue with that logic. Not all shots require HDR, that's just a fact. 
I thought, based on a previous post, that you were saying that your normal photos look as good as your HDR images.
My fault.
I do a lot of work with HDR, and know that HDR images are in a separate class of photograph all their own.
On occasion I do product concept photography, which is basically a product shot when there's no physical product to photograph.
My method requires HDRI. I shoot a bracketed sequence to create an HDR from the environment that the product would normally be photographed in.
Then I create a 3D rendering of the product and light it with the HDRI.
Rendered with a separate alpha, I then composite the 3D rendering into the tone-mapped version of the HDRI.
The result is a totally natural-looking product shot which uses a hybrid form of photography to create an ad-like shot of a totally non-existent product.

It's the work I like most, but do least.
3D rendering is very closely related to photography, just more involved. You have to have an understanding of surface properties and how light interacts with a surface, because you're creating a photograph from the ground up.


----------



## JBLoudG20 (Aug 23, 2007)

I am confused by one thing that is mentioned. There seems to be a distinction: use HDR or photograph in RAW. Why is this? Am I reading incorrectly? Can you not make an HDR out of RAW images?


----------



## glaston (Aug 23, 2007)

Yeah, you can use RAW for HDR images.
People are just saying that with RAW you can get a good enough image that compiling an HDR isn't necessary in many cases.
But RAW is still a traditional low dynamic range format, so if you're creating an HDR image, RAW isn't a replacement.


----------



## ashleysmithd (Aug 23, 2007)

I've recently done a HDR shot, and already it's the best I've got. It beats all my other pictures.

Perhaps not for creativity and technical brilliance, but as eye candy it comes first.


----------



## fido dog (Aug 24, 2007)

Ummmmm...................What's HDR?

Seriously, I have no clue.:blushing:


----------



## LaFoto (Aug 25, 2007)

High Dynamic Range.

In the best of cases, you take at least three photos of the same thing, with the camera never moving, in three different exposure steps. One photo exposes for the brightest part (which will throw all dark parts into underexposure), one exposes for the midtones ("normal" exposure), and one exposes for the shadow parts (throwing all that is bright into glaring overexposure). You can achieve even better results if you up the odd number of photos you take.

Later, in post-processing, you layer all your exposures one above the other, and then comes this (to me still unexplored and therefore kind of "magic") thing called "tone mapping", where you adjust all the newly visible highlight and shadow areas so that you end up with a very dynamic (with regard to the distribution of the light) picture. "Dynamic" meaning: the range of areas that are correctly exposed is much wider than what the camera could ever manage in one photo alone.
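
The merge step described above can be sketched in a few lines. This is a minimal Python/NumPy illustration of a Debevec-style weighted merge (one common approach, not any particular program's algorithm), assuming a perfectly linear sensor clipped to [0, 1] and three simulated brackets at -2/0/+2 EV; the pixel values are invented for the example.

```python
import numpy as np

# Three simulated brackets of the same three pixels, shot at -2 / 0 / +2 EV
# (relative exposure times 0.25x, 1x, 4x), with a linear sensor clipped to [0, 1].
true_radiance = np.array([0.05, 0.4, 2.0])          # scene values (linear)
exposures = [0.25, 1.0, 4.0]
brackets = [np.clip(true_radiance * t, 0.0, 1.0) for t in exposures]

def weight(p):
    # "Hat" weighting: trust mid-tones, distrust clipped highlights and
    # noisy shadows (peaks at 0.5, falls to 0 at 0 and 1).
    return 1.0 - np.abs(2.0 * p - 1.0)

# Average the exposure-normalized values, weighted per sample.
num = np.zeros_like(true_radiance)
den = np.zeros_like(true_radiance)
for img, t in zip(brackets, exposures):
    w = weight(img)
    num += w * (img / t)
    den += w

hdr = num / den   # recovered radiance map, back in scene-referred units
print(hdr)        # recovers the 2.0 highlight that no single frame held unclipped
```

The darkest bracket is the only one that records the bright pixel without clipping, and the weighting makes sure it dominates there; that is the whole trick behind the "layering" described above.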

That is the idea as I understand it.
I don't understand the MAKING-OF HDRs too well so far, though.
I have created a few, and was pleased with ONE, but I often lack both the time and patience for them. My tripod is too sketchy for the "real" HDRs (with three, five, seven or more photos), so I get "where-are-my-glasses" pics in the end. And as I understand it, changing the exposure values of a RAW file later in the RAW programme to create three or five different exposures is not considered the "true" HDR technique...


----------



## fido dog (Aug 25, 2007)

This sounds like something a Photoshop instructor was telling me about. Is there another name for it in Photoshop?


----------



## LaFoto (Aug 25, 2007)

Not that I would know of. No. It is even the same in my language. Seems an international term, kind of.


----------



## glaston (Aug 25, 2007)

Only terms I know of are HDR-HDRI-tone mapping-radiance map-cross probe-light probe.
In photography it's mainly called HDR because your goal is a High Dynamic Range photo.

In 3D design it's usually called HDRI to describe the process. But the individual shots are known as cross probes or light probes, because what you're doing is essentially probing a scene for its real-world light model to apply to a computer-generated scene. 
You can see this lighting method in CG movies like Shrek, Cars, and hybrids like Star Wars, Lord of the Rings, etc...
You do that by taking a series of photos with varied exposures, using a mirrored ball suspended in front of your camera the same way a golf ball sits on a tee. You take the photos from all sides of the mirrored ball to basically get the light readings from the entire scene, not just from the perspective of a camera pointed in one direction the way a normal image is taken.
Seems that in both photography and 3D rendering it is considered a very advanced technique and really does require some technical knowledge and experience to accomplish.
With photography you don't need to take as many photos to cover the dynamic range. In 3D design you have to be strict about getting ALL the shots right to cover the dynamic range in a technically correct manner. Because that extra dynamic range is used to apply the light to the rendering.
And without having the entire range correct, your rendering will look synthetic and incorrect in its illumination.
Once you have your HDRI for your 3D rendering, you map the image to the luminance channel of your material on a sphere created in your modeling app and put your objects inside the sphere.
The computer calculates light rays inside the sphere according to the luminance values of the HDRI and gives the synthetic scene the exact same light readings that you recorded with your camera.
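
The "tone mapping" in that list of terms can also be sketched. Here's a minimal Python/NumPy illustration using the global Reinhard operator (one common choice among many), with made-up scene-referred values:

```python
import numpy as np

def reinhard_tonemap(radiance, gamma=2.2):
    # Global Reinhard operator: L/(1+L) squeezes any non-negative radiance
    # into [0, 1), then gamma encoding readies it for an 8-bit display.
    ld = radiance / (1.0 + radiance)
    return ld ** (1.0 / gamma)

hdr = np.array([0.01, 0.5, 1.0, 20.0, 500.0])   # scene-referred radiances
ldr = reinhard_tonemap(hdr)
print(ldr)  # every value lands below 1.0; highlights are compressed, not clipped
```

Unlike a straight clip, this keeps the ordering of all the luminance values while pulling the whole range onto a conventional monitor.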

The terminology used to describe the actual physics of what the light is doing is, in CG terms, 'global illumination', which in physical terms is known as 'diffuse inter-reflection'; there's also a cheat method called 'ambient occlusion'. 
In physical reality:
Diffuse interreflection is a process where light reflected from an object strikes other objects in the surrounding area, illuminating them. It specifically describes light reflected from objects which are not shiny or specular. In real-life terms, what this means is that light reflected off non-shiny surfaces such as the ground, walls, or fabric can reach areas not directly in view of a light source. If the diffuse surface is colored, the reflected light is also colored, resulting in similar coloration of surrounding objects. 

Radiance mapping is somewhat like using a light box; of course, a very advanced and technically difficult-to-set-up light box that exists only within a coordinate grid inside the computer.
It took me about 3 months to be able to accurately use the HDRI method in my renderings.
And the application of it is constantly evolving so it's important to keep up with it.

Sound complicated? It is! But it's SO interesting. IMO, anyway.
Understanding these things gives you a broader understanding of how light interacts with matter.
I believe that this allows me to understand the physics of light much better, which in many cases allows me to push my standard images above and beyond, because I know more about what the light is doing, and needs to be doing, to get the result I'm after.


----------



## Q-Ball (Sep 11, 2007)

I'm in the same boat as the OP. I am still experimenting with HDR freeware, and some of the results are astounding. No doubt the purists would be up in arms, as a lot of the images are bordering on fantasy; however, I find myself admiring the strong contrasts and deep saturated colors. Hopefully this newfound fad will eventually wear off and I can get back to regular two-step post processing.


----------



## elsaspet (Sep 11, 2007)

gizmo2071 said:


> Like "Oh I can't be bothered to expose this correctly or use a polarizer... I know I'll just make it a HDR"


That's pretty narrow-minded, IMO.


----------



## ga_shooter (Sep 12, 2007)

What in the world is HDR? I searched the site, but all I can find are people extolling the virtues or bashing the process; none explaining what it is?


----------



## Vinnay (Sep 14, 2007)

LaFoto said:


> High Dynamic Range.
> 
> In the best of cases, you take at least three photos of the same thing, with the camera never moving, in three different exposure steps. One photo exposes for the brightest part (which will throw all dark parts into underexposure), one exposes for the midtones ("normal" exposure), and one exposes for the shadow parts (throwing all that is bright into glaring overexposure). You can achieve even better results if you up the odd number of photos you take.
> 
> ...




Couple of posts above you.


----------



## MarkCSmith (Sep 16, 2007)

fido dog said:


> This sounds like something a Photoshop instructor was telling me about. Is there another name for it in Photoshop?


 
There's a process known as the Dragan Technique; it produces a very similar hyper-realistic effect.


----------

