## using a sobel operator - edge detection

- fmw42
**Posts:** 25757 | **Joined:** 2007-07-02T17:14:51-07:00 | **Location:** Sunnyvale, California, USA

### Re: using a sobel operator - edge detection

Normalization for kernels that sum to zero, as in this case, should be half the sum of the absolute values of the kernel elements (at least when no bias is used).

Take this image, simply a step from white to black (in normalized image values 1 to 0):

```
convert -size 100x100 gradient: -threshold 50% grad_thresh50.png
```

Now consider the filter:

```
 1  2  1
 0  0  0
-1 -2 -1
```

As the filter passes down the image, at first it has:

- 1 2 1 over white (4×1)
- 0 0 0 over white (0×1)
- -1 -2 -1 over white (-4×1)

Thus while it is over white it gets a sum of zero, so the result is black.

Then as the filter reaches the transition from white to black, it has:

- 1 2 1 over white (4×1)
- 0 0 0 over white (0×1)
- -1 -2 -1 over black (-4×0)

This sums to 4, which gets clipped at 1 for non-HDRI.

Then as the filter moves one row down, it has:

- 1 2 1 over white (4×1)
- 0 0 0 over black (0×0)
- -1 -2 -1 over black (-4×0)

This again sums to 4, which gets clipped at 1.

Thus we get a black image with a double line of white.
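This walk-through can be checked in a few lines of pure Python (a sketch, not ImageMagick code; the tiny step image and helper function are made up for illustration):

```python
# 3x3 vertical Sobel kernel from the discussion
kernel = [[ 1,  2,  1],
          [ 0,  0,  0],
          [-1, -2, -1]]

# A 6x3 step image: three white rows (1) over three black rows (0)
image = [[1, 1, 1]] * 3 + [[0, 0, 0]] * 3

def response(img, ker, row, col):
    """Raw (unnormalized) convolution sum centered at (row, col)."""
    return sum(ker[i][j] * img[row - 1 + i][col - 1 + j]
               for i in range(3) for j in range(3))

# Slide down the middle column: flat areas give 0, and the two rows
# straddling the white-to-black transition each give the full sum of 4.
responses = [response(image, kernel, r, 1) for r in range(1, 5)]
print(responses)  # [0, 4, 4, 0]
```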

Set the image normalization divisor to 1, equivalent to what IM produces as the nominal normalization per Anthony's comment about the normalization code:

```
divisor=1
convert grad_thresh50.png -evaluate divide $divisor \
  -convolve "1,2,1,0,0,0,-1,-2,-1" grad_thresh50_$divisor.png
convert grad_thresh50_$divisor.png -format "min=%[fx:minima]; max=%[fx:maxima]" info:
```

min=0; max=1

Now change the divisor to the perfect normalization (dividing the image by 4, which is equivalent to dividing the kernel elements by 4):

```
divisor=4
convert grad_thresh50.png -evaluate divide $divisor \
  -convolve "1,2,1,0,0,0,-1,-2,-1" grad_thresh50_$divisor.png
convert grad_thresh50_$divisor.png -format "min=%[fx:minima]; max=%[fx:maxima]" info:
```

min=0; max=1

But once the divisor is larger than the optimal normalization value of 4, the max value starts to come down from 1:

```
divisor=4.1
convert grad_thresh50.png -evaluate divide $divisor \
  -convolve "1,2,1,0,0,0,-1,-2,-1" grad_thresh50_$divisor.png
convert grad_thresh50_$divisor.png -format "min=%[fx:minima]; max=%[fx:maxima]" info:
```

min=0; max=0.975616

```
divisor=8
convert grad_thresh50.png -evaluate divide $divisor \
  -convolve "1,2,1,0,0,0,-1,-2,-1" grad_thresh50_$divisor.png
convert grad_thresh50_$divisor.png -format "min=%[fx:minima]; max=%[fx:maxima]" info:
```

min=0; max=0.500008

So nominally IM will make too large a result from any edge filter: the result will be amplified by half the sum of the absolute values of the kernel elements. Small edge values will be brighter by this factor, and large edge values will be clipped at 1.

So to avoid clipping at 1 in non-HDRI, you need to normalize by (1/2) × sum(|kernel elements|) = (1+2+1+0+0+0+1+2+1)/2 = 8/2 = 4.

That would be the optimum automatic normalization for edge filters whose kernel elements sum to 0.
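That rule is easy to express in code; a minimal sketch (the function name is made up for illustration):

```python
def edge_normalizer(kernel):
    """Proposed divisor for zero-sum edge kernels: half the sum of
    the absolute values of the elements."""
    assert abs(sum(kernel)) < 1e-9, "rule is for kernels that sum to 0"
    return sum(abs(e) for e in kernel) / 2.0

sobel_y = [1, 2, 1, 0, 0, 0, -1, -2, -1]
print(edge_normalizer(sobel_y))  # 4.0
```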

Otherwise, one has to provide kernel elements that are properly normalized by dividing the integers by 4, i.e.

```
 0.25  0.5  0.25
 0     0    0
-0.25 -0.5 -0.25
```

or divide the input image by 4. This gives a result that is not clipped at the white end for non-HDRI.


- anthony
**Posts:** 8883 | **Joined:** 2004-05-31T19:27:03-07:00 | **Location:** Brisbane, Australia

### Re: using a sobel operator - edge detection

Okay, would you like me to adjust the convolution code NOW?

NOTE: the current autoscaling works fine when no negatives are involved; it just fails when negatives are present.

Basically I think you have to auto-scale and auto-bias, or do neither!

Hmm, looking to see what uses the -bias setting...

- -convolve sets 'bias' to the current bias setting (but we knew that)
- -adaptive-blur (why, I have no idea! It uses a Gaussian!)
- -adaptive-sharpen
- -blur
- -selective-blur

I really don't know why the other four use bias, as they only use positive 1-D Gaussian kernels.

I also looked at expanding -bias to allow users to control not only bias, but also scaling and automatic scale/bias handling. However, only a single floating-point value is stored from that setting, nothing else.


Anthony Thyssen -- Webmaster for ImageMagick Example Pages

https://imagemagick.org/Usage/


### Re: using a sobel operator - edge detection

fmw42 wrote: So nominally IM will make too large a result from any edge filter: the result will be amplified by half the sum of the absolute values of the kernel elements. So to avoid clipping at 1 in non-HDRI, you need to normalize by (1+2+1+0+0+0+1+2+1)/2 = 8/2 = 4. That would be the optimum automatic normalization for edge filters whose kernel elements sum to 0.

However, this is only true for kernels that sum to 0.

If we consider a kernel like


```
-1 -2 -3
 0  0  0
 1  1  1
```

This filter produces values between -6 and +3, so the proper normalisation with bias would be:


```
pos = 0; neg = 0
for element in kernel:
    if element > 0:
        pos = pos + element
    else:
        neg = neg - element
norm_kernel = kernel / (pos + neg)
auto_bias = neg
```
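A runnable reading of that pseudocode (with the bias expressed here as a fraction of the scaled output range, which is one interpretation):

```python
def scale_and_bias(kernel):
    """One-sided element sums give the kernel's raw output range
    [-neg, +pos] for input values in [0, 1]; dividing by (pos + neg)
    and biasing by neg / (pos + neg) maps that range onto [0, 1]."""
    pos = sum(e for e in kernel if e > 0)
    neg = -sum(e for e in kernel if e < 0)
    norm_kernel = [e / (pos + neg) for e in kernel]
    auto_bias = neg / (pos + neg)
    return norm_kernel, auto_bias

kernel = [-1, -2, -3, 0, 0, 0, 1, 1, 1]
norm_kernel, auto_bias = scale_and_bias(kernel)
# Raw range is [-6, +3]; after the 1/9 scale and a 6/9 bias the
# output spans exactly [0, 1].
print(auto_bias)
```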

ImageMagick should probably print a warning when it autoscales and autobiases, with the exact bias and scaling values used.

Or some other way to get these values.

I am not sure whether anyone uses kernels with negative elements that do not sum up to zero though.

anthony wrote: I also looked at expanding -bias to allow users to control not only bias, but also scaling and automatic scale/bias handling. However, only a single floating-point value is stored from that setting, nothing else.

That sounds like a good idea. How about a new option -convolve-settings, and deprecate -bias?

- fmw42
**Posts:** 25757 | **Joined:** 2007-07-02T17:14:51-07:00 | **Location:** Sunnyvale, California, USA

### Re: using a sobel operator - edge detection

HugoRune wrote: I am not sure whether anyone uses kernels with negative elements that do not sum up to zero though.

Yes, one does: when one wants to sharpen an image rather than edge extract. Then one adds the edge image back to the image in some percent. But this can be done by combining kernels, as the identity kernel itself is just

0,0,0,0,1,0,0,0,0

or

```
0 0 0
0 1 0
0 0 0
```

Then you mix that with the result of a Laplacian edge filter:

```
-1 -1 -1
-1  8 -1
-1 -1 -1
```

See my script http://www.fmwconcepts.com/imagemagick/ ... /index.php which computes the combined kernel and then does one convolution, rather than doing the edge convolution and mixing with the image.

- anthony
**Posts:** 8883 | **Joined:** 2004-05-31T19:27:03-07:00 | **Location:** Brisbane, Australia

### Re: using a sobel operator - edge detection

HugoRune wrote: It becomes a little complicated, especially if one wants to remove the bias afterwards. So an option to disable autoscaling would be definitely useful.

I was thinking that if we do not have auto-bias, then the kernel should scale so that it fits the range the bias generates. That is, if bias is set to 50% and the kernel range is from -4 to +6, then it scales so the +6 fits into the 50% to 100% range.

HugoRune wrote: That sounds like a good idea. How about a new option -convolve-settings, and deprecate -bias?

That may solve the problem, though Cristy would probably want to make this

-set option:convolve:settings bias[,scale]

Unlike the existing 'bias' setting, this can have an 'undefined' state, which would trigger the normal auto-bias/scaling.

QUESTION: what about the four other blur operators which I identified as using 'bias'? All of them use only Gaussian kernels, and as such probably have no need for bias. Having bias set for convolve and then accidentally using -blur could cause a lot of confusion. My preference would be to remove 'bias' from the normal blur operations.

Anthony Thyssen -- Webmaster for ImageMagick Example Pages

https://imagemagick.org/Usage/


### Re: using a sobel operator - edge detection

anthony wrote: Having bias set for convolve and then accidentally using -blur could cause a lot of confusion. My preference would be to remove 'bias' from the normal blur operations.

Seems logical; I certainly would be confused by that.

But -bias is a new concept to me.

- fmw42
**Posts:** 25757 | **Joined:** 2007-07-02T17:14:51-07:00 | **Location:** Sunnyvale, California, USA

### Re: using a sobel operator - edge detection

anthony wrote: That is, if bias is set to 50% and the kernel range is from -4 to +6, then it scales so the +6 fits into the 50% to 100% range.

You need to be careful: it is NOT as simple as looking at the largest value. Look carefully at my example above for the edge filter. The sum of the values on one side is +4 and on the other side is -4. So when you have a full dynamic range step from white to black, you get a value of 4×1 (where 1 = white), whereas the largest single element is 2. So if you used 2, you would be underestimating the result.

anthony wrote: QUESTION: what about the four other blur operators which I identified as using 'bias'?

As the Gaussian is always positive, I am not sure of the use of bias for them. What are the other two besides -blur and -gaussian-blur?

- anthony
**Posts:** 8883 | **Joined:** 2004-05-31T19:27:03-07:00 | **Location:** Brisbane, Australia

### Re: using a sobel operator - edge detection

-selective-blur, -adaptive-blur, and -adaptive-sharpen.

Gaussian-blur is NOT on that list as it works by generating a 2D kernel and then calling the ConvolveImage sub-routine.

These are fairly new operators, and appear to be copies of the 2-pass 1D blur sub-routine, but with changes about when a specific blur should be applied or not.

NOTE: the new -composite blur (a mapped variable blur) also does not use bias, as it works by calling the EWA resampling function with scaling vectors to generate a blur from elliptical areas.

That reminds me to add to my ToDo list, to allow a switch to enable the 'blue channel' to 'map' the 'angle' of the blur ellipse.

I have too many ToDo's and no time to do them!!!!! Arrggg...


Anthony Thyssen -- Webmaster for ImageMagick Example Pages

https://imagemagick.org/Usage/


- fmw42
**Posts:** 25757 | **Joined:** 2007-07-02T17:14:51-07:00 | **Location:** Sunnyvale, California, USA

### Re: using a sobel operator - edge detection

Here is a case to show that your scheme for normalizing may be wrong. Consider a white square on a black background, and do a simple Laplacian:

```
convert square15.jpg -convolve "-1,-1,-1,-1,8,-1,-1,-1,-1" square15_lap3.jpg
```

This is overscaled, and by your scheme you would divide by 16 = sum(|kernel elements|). But even dividing by 8 is wrong for this picture. Dividing by 8 is equivalent to using -0.125 all around with a 1 in the middle, and that is too dark; the divisor of 8 is too big:

```
v1=-0.125
v2=1
scale=1
convert square15.jpg -convolve "$v1,$v1,$v1,$v1,$v2,$v1,$v1,$v1,$v1" \
  -evaluate multiply $scale square15_lap3_s$scale.jpg
```

Increasing to scale=4 is about right for this image (which amounts to normalizing with a divisor of 2 for the original kernel coefficients of "-1,-1,-1,-1,8,-1,-1,-1,-1"):

```
v1=-0.125
v2=1
scale=4
convert square15.jpg -convolve "$v1,$v1,$v1,$v1,$v2,$v1,$v1,$v1,$v1" \
  -evaluate multiply $scale square15_lap3_s$scale.jpg
```

Doubling again to scale=8 gets us back to the original convolve equivalent, but it is overscaled by a factor of 2 and thus has clipping:

```
v1=-0.125
v2=1
scale=8
convert square15.jpg -convolve "$v1,$v1,$v1,$v1,$v2,$v1,$v1,$v1,$v1" \
  -evaluate multiply $scale square15_lap3_s$scale.jpg
```

If we do the same with the cameraman image:

```
convert cameraman2.png -convolve "-1,-1,-1,-1,8,-1,-1,-1,-1" cameraman2_lap3.png
```

which looks pretty good.

But if we normalize the weights by dividing by 8, we get:

```
v1=-0.125
v2=1
scale=1
convert cameraman2.png -convolve "$v1,$v1,$v1,$v1,$v2,$v1,$v1,$v1,$v1" \
  -evaluate multiply $scale cameraman2_lap3_s$scale.png
```

which is too dark.

```
v1=-0.125
v2=1
scale=2
convert cameraman2.png -convolve "$v1,$v1,$v1,$v1,$v2,$v1,$v1,$v1,$v1" \
  -evaluate multiply $scale cameraman2_lap3_s$scale.png
```

```
v1=-0.125
v2=1
scale=4
convert cameraman2.png -convolve "$v1,$v1,$v1,$v1,$v2,$v1,$v1,$v1,$v1" \
  -evaluate multiply $scale cameraman2_lap3_s$scale.png
```

```
v1=-0.125
v2=1
scale=8
convert cameraman2.png -convolve "$v1,$v1,$v1,$v1,$v2,$v1,$v1,$v1,$v1" \
  -evaluate multiply $scale cameraman2_lap3_s$scale.png
```

So for this image you don't want to do any normalizing; you want the original coefficients of "-1,-1,-1,-1,8,-1,-1,-1,-1". Whereas for the square image you might want to normalize by dividing by 2 (not 8 or 16).


### Re: using a sobel operator - edge detection

It is certainly true that, depending on the situation, one may want to multiply the kernel by different values.

But looking at your examples, it seems like this can easily be accomplished by using -evaluate multiply afterwards. If the kernel is properly normalized to prevent clipping, you can always clip later, HDRI or not.

If the kernel is not properly normalized, however, then on non-HDRI some information is irrevocably lost; nothing can be done to restore it later. And since there is no indication that clipping happened, this can be very confusing. (That's what happened to me and caused me to start this thread.)
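The irrevocable-loss point can be seen with a toy example (plain Python; the values stand in for normalized pixel intensities):

```python
def clip01(v):
    """Non-HDRI storage: values outside [0, 1] are clipped."""
    return max(0.0, min(1.0, v))

# Hypothetical raw convolution responses; the strongest edge is 4,
# as in the step-edge example earlier in the thread.
raw = [0.5, 2.0, 4.0]

# Unnormalized kernel on non-HDRI: clip immediately.
clipped = [clip01(v) for v in raw]           # [0.5, 1.0, 1.0]
# The 2.0 and 4.0 edges are now identical; no later multiply can
# tell them apart again.

# Properly normalized (divide by 4) before storage:
normalized = [clip01(v / 4.0) for v in raw]  # [0.125, 0.5, 1.0]
# All three edge strengths survive; one can still multiply and clip
# afterwards if a brighter result is wanted.
print(clipped, normalized)
```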

By the way, your examples of Laplace kernels got me wondering: has someone used ImageMagick successfully for a Laplacian, or Laplacian-of-Gaussian edge detection with zero-crossing? Or, asked differently, is there a good way to detect zero crossings in ImageMagick?

(References: laplacian/LoG zero crossing)


- anthony
**Posts:** 8883 | **Joined:** 2004-05-31T19:27:03-07:00 | **Location:** Brisbane, Australia

### Re: using a sobel operator - edge detection

Given "-1,-1,-1,-1,8,-1,-1,-1,-1" and a bias of 0, then 1/8 would be the correct scaling. But that will clip negative values!!!

To get ALL the information the filter generates in a non-HDRI version of IM, you will need to auto-bias to 50% and auto-scale by 1/16 (1/abs_sum).

Of course the resulting image will be mostly gray, but none of the information will be lost.
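For this Laplacian the numbers work out as follows (a sketch of the arithmetic, not the actual IM code):

```python
kernel = [-1, -1, -1, -1, 8, -1, -1, -1, -1]

abs_sum = sum(abs(e) for e in kernel)   # 16
pos = sum(e for e in kernel if e > 0)   # 8
neg = -sum(e for e in kernel if e < 0)  # 8 (magnitude of negative side)

# Raw output range for inputs in [0, 1] is [-neg, +pos] = [-8, +8].
# Auto-scaling by 1/abs_sum and auto-biasing by 50% maps it to [0, 1].
scale = 1.0 / abs_sum
lo = -neg * scale + 0.5
hi = pos * scale + 0.5
print(lo, hi)  # 0.0 1.0
```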

Now the user could have set bias specifically to 0 (no way at the moment for IM to determine this unless we change things), in which case IM should be able to realise that he wants to clip negatives, so auto-scale should then be 1/8 (positive values only).

But what if the user set bias to, say, 25%? What should auto-scale generate? It isn't obvious that he wants negatives clipped!

Of course any change should allow the user to turn off both auto-bias and auto-scale, (for use in HDRI), or better still allow the user to specify both a user defined bias and user-defined scaling factor for a kernel.


Anthony Thyssen -- Webmaster for ImageMagick Example Pages

https://imagemagick.org/Usage/


- fmw42
**Posts:** 25757 | **Joined:** 2007-07-02T17:14:51-07:00 | **Location:** Sunnyvale, California, USA

### Re: using a sobel operator - edge detection

HugoRune wrote: By the way, your examples of Laplace kernels got me wondering. Has someone used ImageMagick successfully for a Laplacian, or Laplacian-of-Gaussian edge detection with zero-crossing? Or, asked differently, is there a good way to detect zero crossings in ImageMagick? (References: laplacian/LoG zero crossing)

I have done a script for a simple Laplacian, but not the LoG, although I had used the LoG earlier in my career, just not with IM. You can also use the similar difference of Gaussians (DoG).

Should be scriptable, but then you either need to include bias or use HDRI.

See

http://en.wikipedia.org/wiki/Laplacian_ ... f_Gaussian

IM has the Gaussian, but the Laplacian is just a small-kernel convolve in IM. If you need a larger Laplacian kernel, the way to go would be an FFT to the frequency domain, multiplying by a small circle there, to get a large Laplacian effect in the spatial domain after transforming back. The Gaussian can also be applied in the frequency domain. But then you probably need HDRI to work with the FFT (possibly not if using -fft for magnitude/phase). I have not gotten around to trying this with my FFT processing yet, but it has crossed my mind.

Also no one that I know of has implemented the Canny edge detector in IM either.

- fmw42
**Posts:** 25757 | **Joined:** 2007-07-02T17:14:51-07:00 | **Location:** Sunnyvale, California, USA

### Re: using a sobel operator - edge detection

According to http://en.wikipedia.org/wiki/Difference_of_Gaussians

This is the recommended 1:4 ratio of Gaussians for DoG, with grayscale processing of the input:

```
g1=1
g2=4
convert flowers2.jpg -colorspace gray \
  \( -clone 0 -blur 0x$g1 \) \
  \( -clone 0 -blur 0x$g2 \) \
  -delete 0 +swap -compose minus -composite -contrast-stretch 0 -negate \
  flowers2g_dog_${g1}_${g2}_b.png
```

And for simulating the LoG the ratio should be 1:1.6, so:

```
g1=1
g2=1.6
convert flowers2.jpg -colorspace gray \
  \( -clone 0 -blur 0x$g1 \) \
  \( -clone 0 -blur 0x$g2 \) \
  -delete 0 +swap -compose minus -composite -contrast-stretch 0 -negate \
  flowers2g_dog_${g1}_${g2}_b.png
```
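The DoG construction can be sketched numerically in one dimension: subtracting a wider normalized Gaussian from a narrower one yields a zero-sum, center-surround kernel of the same shape as the LoG (pure Python; the sigmas follow the 1:1.6 example above):

```python
import math

def gaussian_kernel(sigma, radius):
    """1-D Gaussian, normalized so the blur preserves brightness."""
    k = [math.exp(-(x * x) / (2.0 * sigma * sigma))
         for x in range(-radius, radius + 1)]
    total = sum(k)
    return [v / total for v in k]

g1 = gaussian_kernel(1.0, 5)
g2 = gaussian_kernel(1.6, 5)
dog = [a - b for a, b in zip(g1, g2)]  # narrow minus wide

# Zero-sum (so it is an edge/bandpass kernel), positive center,
# negative surround -- the same profile as a Laplacian of Gaussian.
print(abs(sum(dog)) < 1e-9, dog[5] > 0, dog[0] < 0)  # True True True
```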


### Re: using a sobel operator - edge detection

Interesting.


I do not fully understand how a DOG approximates a LOG, but I still have to read the references cited in the wikipedia article.

My own attempts at building a LOG edge detector with zero crossing:

first, laplacian of gaussian:

```
convert Flowers_before_difference_of_gaussians.jpg -blur 0x5 \
  -evaluate multiply 0.125 -bias 50% -convolve "-1,-1,-1,-1,8,-1,-1,-1,-1" \
  -contrast-stretch 0% test.png
```

now, the zero crossings in this picture are all border areas between dark and bright.

So I solarize the picture:

```
convert Flowers_before_difference_of_gaussians.jpg -blur 0x5 \
  -evaluate multiply 0.125 -bias 50% -convolve "-1,-1,-1,-1,8,-1,-1,-1,-1" \
  -solarize 50% -level 0%,50% -contrast-stretch 5%x0% test.jpg
```

(interesting effect btw)

Now the zero crossings should be all the white maxima, and I want to keep zero crossings where the slope is high, i.e. all thin white lines surrounded by dark areas. This is by chance just what the -edge operator does:

```
convert Flowers_before_difference_of_gaussians.jpg -blur 0x5 \
  -evaluate multiply 0.125 -bias 50% -convolve "-1,-1,-1,-1,8,-1,-1,-1,-1" \
  -solarize 50% -level 0%,50% -contrast-stretch 5%x0% \
  -edge 1 test.jpg
```

The advantage I was aiming for was that the edges stay nice and thin, even with bigger Gaussians, and they are "on edge", not shifted into the white or the black areas.

But it just does not work well as an edge detector. I think it is probably because of my method of detecting zero crossings. It only works moderately well for certain Gaussian blur sizes and input images; for most it produces only garbage instead of clean edges.

To detect zero crossings, I really should select all pixel pairs with one dark and one bright pixel, and I am thinking that ImageMagick is not the best tool for this.
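For what it's worth, that pixel-pair test is simple to state outside IM; a sketch in plain Python on a signed (un-biased) Laplacian response (a made-up helper, not an IM feature):

```python
def zero_crossings(img, threshold=0.0):
    """Mark pixels whose signed response changes sign against the
    right or lower neighbor, optionally requiring the jump across
    the crossing (a stand-in for slope) to exceed a threshold."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            for dy, dx in ((0, 1), (1, 0)):
                ny, nx = y + dy, x + dx
                if ny < h and nx < w:
                    a, b = img[y][x], img[ny][nx]
                    if a * b < 0 and abs(a - b) > threshold:
                        out[y][x] = 1
    return out

# One sign change per row, between columns 1 and 2:
lap = [[-2.0, -1.0, 1.0, 2.0],
       [-2.0, -1.0, 1.0, 2.0]]
print(zero_crossings(lap))  # [[0, 1, 0, 0], [0, 1, 0, 0]]
```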

- fmw42
**Posts:** 25757 | **Joined:** 2007-07-02T17:14:51-07:00 | **Location:** Sunnyvale, California, USA

### Re: using a sobel operator - edge detection

I understand your wanting the zero crossings; I have not had time to do that with the DoG. But the way to go is with a gradient or, as you have done, just with -edge. They do similar things: the gradient uses a square kernel area whereas -edge, I believe, uses a circular kernel with a Gaussian roll-off. But the idea is the same: get edges of various strengths and then find the ones with the highest edge strength.

A couple of things. You are stuck with a small Laplacian (3x3). I was starting to do this yesterday using the FFT to generate the Laplacian, where it is easier to control the Laplacian size. But I do have some old work on a more generalized spatial-domain Laplacian for larger sizes. Also, perhaps you want to work with grayscale first. I will get back with my attempts at the LoG using the FFT, if successful. Not sure how well it will work.

As reference, the DoG says to use a second Gaussian that is larger than the first (subtract the larger from the smaller). So I had assumed, but not yet investigated, the need for the Laplacian to be bigger than the Gaussian.
