Hi...
I have a basic question regarding preprocessing techniques (in particular, normalization) in computer vision/image processing.
This is what I read about normalization in my computer vision course.
"Objects in images usually have parameters that vary within certain intervals (e.g. size, position, intensity...). However, the results of image analysis should be independent of this variation.
So the goal is to transform the image such that these parameters are mapped onto normalized values (or some appropriate approximation).
1) We do normalization to a standard interval [0, a], e.g. [0, 255].
2) We normalize to zero mean and unit variance, i.e. normalized intensities have mean = 0 and variance = 1.
And the most important normalization method is histogram equalization."
I get the first point: contrast stretching is needed so that the complete dynamic range of intensity is used, which is why we do the first step. But I don't understand the second step. Why do we normalize to zero mean and unit variance?
Can someone help me out, please?
Normalization in Image processing

 Posts: 13034
 Joined: 2010-01-23T23:01:33-07:00
 Authentication code: 1151
 Location: England, UK
Re: Normalization in Image processing
> 1) We do normalization to a standard interval [0, a], e.g. [0, 255].

Fair enough.
> 2) We normalize to zero mean and unit variance, i.e. normalized intensities have mean = 0 and variance = 1.

If the image has values from 0 to 255 in an unsigned format, then forcing the mean to 0 (zero) would clip everything below the mean to zero. That would destroy much of the information in the image.
More sensibly, the mean would be set to the midpoint of the range. If the range is 0 to 255, the midpoint is 127.5.
Setting variance (or standard deviation) to any particular value may be useful for comparing images.
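What snibgo describes can be sketched in a few lines (the function name and the target values are illustrative, not from any particular library): shift the image's mean to the midpoint of the range and scale its standard deviation to a chosen fraction of the range, then clip back into 0..255.

```python
import numpy as np

def normalize_mean_sd(img, target_mean=127.5, target_sd=0.288 * 255):
    """Shift/scale pixel values so the result has the requested
    mean and standard deviation, then clip to the 0..255 range."""
    img = img.astype(np.float64)
    sd = img.std()
    if sd == 0:                      # flat image: just set the mean
        out = np.full_like(img, target_mean)
    else:
        out = (img - img.mean()) / sd * target_sd + target_mean
    return np.clip(out, 0, 255)
```

Two photos of the same scene taken under different lighting end up with the same mean and spread, which makes them easier to compare.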
> And the most important normalization method is histogram equalization.

Histogram equalization is an important method. It will spread the values to the full range, e.g. 0 to 255, set the mean to the midpoint, and the SD to about 0.288 of the range.
> Why do we normalize to zero mean and unit variance?

Machine vision is commonly used to identify objects in a scene, or to find their orientation, etc. This involves image comparison. I might photograph the same object twice under different lighting. Normalization increases the chance that the two photos will look the same and can be compared with each other.
Does that help?
snibgo's IM pages: im.snibgo.com
 fmw42
 Posts: 26383
 Joined: 2007-07-02T17:14:51-07:00
 Authentication code: 1152
 Location: Sunnyvale, California, USA
Re: Normalization in Image processing
In normalized cross correlation, one subtracts the mean and divides by the standard deviation to achieve what you have in 1) and 2). The mean subtraction mitigates brightness variations and the division by the standard deviation mitigates variations in the spread of the data about the mean so that the two images have similar means and standard deviations.
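fmw42's point about mean subtraction and division by the standard deviation can be seen in a toy normalized cross-correlation (a sketch, not ImageMagick's implementation): an affine brightness/contrast change of one patch leaves the score unchanged.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equal-size patches:
    subtract each patch's mean, divide by its standard deviation,
    then average the elementwise product. Result lies in [-1, 1]."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return (a * b).mean()
```

Because each patch is standardized first, `ncc(a, 2*a + 5)` is still 1: scaling the contrast and shifting the brightness of one image does not change the match score.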
If one used histogram equalization, that tries to maximize the spread of the data so that all graylevels have the same counts.
These may be two competing effects that do not always work well together.
Re: Normalization in Image processing
Thanks... Yes, it helped me a lot.

Re: Normalization in Image processing
I've just figured out that mean = 0 and variance = 1 matches the "standard normal distribution", aka Gaussian, or bell-shaped curve. This is different from "equalization", whose histogram is a flat straight line. See http://en.wikipedia.org/wiki/Normal_distribution
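The 0.288 figure quoted earlier is just the standard deviation of a flat (uniform) distribution expressed as a fraction of its range, 1/sqrt(12) ≈ 0.2887. A quick check:

```python
import math
import numpy as np

# SD of a uniform distribution on [0, r] is r / sqrt(12)
print(1 / math.sqrt(12))            # ≈ 0.2887

# Empirical check: a million uniform samples on [0, 255]
rng = np.random.default_rng(0)
samples = rng.uniform(0, 255, 1_000_000)
print(samples.std() / 255)          # ≈ 0.2887
```

So after equalization (a flat histogram over 0..255) the SD lands near 0.288 of the range, while a standard normal is a different, bell-shaped distribution.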