Why use the magnitude method to get the processed image?

























Hi guys, I’ve been thinking about this question:

I know that we use the Fourier transform to get into the frequency domain to process the image.

My textbook says that when we are done processing the image in the Fourier domain, we have to invert it back to get the processed image.

It also taught me to take the real part of the inverse.

However, when I go through the OpenCV tutorial, whether I use the OpenCV or the NumPy version, eventually it uses magnitude (for OpenCV) or np.abs (for NumPy).

For OpenCV, the inverse returns two channels, which contain the real and imaginary components. When I took the real part of the inverse, I got a totally weird image.

Could somebody explain the meaning behind all of this:

  1. Why use magnitude or abs to get the processed image?

  2. What’s wrong with the textbook instruction (taking the real part of the inverse)?


































  • You can only use real numbers in an image. When you take the inverse Fourier transform, it might have complex numbers as output, so you need to take the real part of the inverse FT and display it as an image.

    – user8190410
    Nov 12 '18 at 17:19











  • docs.opencv.org/3.0-beta/doc/py_tutorials/py_imgproc/….

    – kris
    Nov 12 '18 at 17:27












  • The link calculates the magnitude and the result looks great.

    – kris
    Nov 12 '18 at 17:27















Tags: opencv image-processing fft magnitude






edited Nov 12 '18 at 17:43









Cris Luengo

asked Nov 12 '18 at 17:08









kris






























1 Answer






































The textbook is right, the tutorial is wrong.



A real-valued image has a complex conjugate symmetry in the Fourier domain. This means that the FFT of the image will have a specific symmetry. Any processing that you do must preserve this symmetry if you want the inverse transform to remain real-valued. If you do this processing wrong, the inverse transform will be complex-valued, and probably nonsensical.



If you preserve the symmetry in the Fourier domain properly, then the imaginary component of the inverse transform will be nearly zero (likely different from zero because of numerical imprecision). Discarding this imaginary component is the correct thing to do. Computing the magnitude will yield the same result, except all negative values will become positive (note some filters are meant to produce negative values, such as derivative filters), and at an increased computational cost.
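The near-zero imaginary component is easy to verify with NumPy. This is a minimal sketch, using a made-up random image and an illustrative Gaussian low-pass filter (neither is from the question or the tutorial):

```python
import numpy as np

# A made-up real-valued test image.
rng = np.random.default_rng(0)
img = rng.random((64, 64))

F = np.fft.fft2(img)

# A real filter that is even (symmetric) in the DFT sense: a Gaussian
# built on np.fft.fftfreq coordinates, so H at frequency f equals H at -f.
fy = np.fft.fftfreq(64)[:, None]
fx = np.fft.fftfreq(64)[None, :]
H = np.exp(-(fx**2 + fy**2) / (2 * 0.1**2))

out = np.fft.ifft2(F * H)

# The symmetry was preserved, so the imaginary part is only
# floating-point noise; the real part is the filtered image.
imag_residue = np.abs(out.imag).max()
result = out.real
```

Here `np.abs(out)` would give the same picture only because this low-pass result happens to be non-negative; `out.real` is the principled choice.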



For example, a convolution is a multiplication in the Fourier domain. The filter in the Fourier domain must be real-valued and symmetric around the origin. Often people will confuse where the origin is in the Fourier domain, and multiply by a filter that seems symmetric but is actually shifted with respect to the origin, making it not symmetric. This shift introduces a phase change of the inverse transform (see the shift property of the Fourier transform). The magnitude of the inverse transform is not affected by the phase change, so taking the magnitude of this inverse transform yields an output that sort of looks OK, except if one expects to see negative values in the filter result. It would have been better to correctly understand the FFT algorithm, create a properly symmetric filter in the Fourier domain, and simply keep the real part of the inverse transform.



Nonetheless, some filters are specifically designed to break the symmetry and yield a complex-valued filter output. For example the Gabor filter has an even (symmetric) component and an odd (anti-symmetric) component. The even component yields a real-valued output, the odd component yields an imaginary-valued output. In this case, it is the magnitude of the complex value that is of interest. Likewise, a quadrature filter is specifically meant to produce a complex-valued output. From this output, the analytic signal (or its multi-dimensional extension, the monogenic signal), both the magnitude and the phase are of interest, for example as used in the phase congruency method of edge detection.
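The Gabor case can be illustrated in 1-D. The parameters below (signal frequency, Gaussian width) are purely illustrative; the point is that the complex filter output's magnitude gives a phase-invariant envelope:

```python
import numpy as np

# A pure oscillation as the input signal.
x = np.arange(256)
signal = np.cos(2 * np.pi * 0.05 * x)

# A complex Gabor filter: even (cosine) real part, odd (sine)
# imaginary part, tuned to the signal's frequency.
t = np.arange(-32, 33)
gabor = np.exp(-t**2 / (2 * 8.0**2)) * np.exp(1j * 2 * np.pi * 0.05 * t)

# The filter output is complex-valued by design.
out = np.convolve(signal, gabor, mode='same')

# The magnitude is a smooth envelope, nearly constant away from the
# boundaries, even though the input oscillates between -1 and 1.
envelope = np.abs(out)
```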




Looking at the linked tutorial, it is the line



fshift[crow-30:crow+30, ccol-30:ccol+30] = 0


which generates the Fourier-domain filter and applies it to the image (it is equivalent to multiplying by a filter with 1s and 0s). This tutorial correctly computes the origin of the Fourier domain (though for Python 3 you would use crow,ccol = rows//2 , cols//2 to get the integer division). But the filter above is not symmetric around that origin. In Python, crow-30:crow+30 indicates 30 pixels to the left of the origin, and only 29 pixels to the right (the right bound is not included!). The correct filter would be:



fshift[crow-30:crow+30+1, ccol-30:ccol+30+1] = 0


With this filter, the inverse transform is purely real (the imaginary component has values on the order of 1e-13, which is numerical error). Thus, it is now possible (and correct) to replace img_back = np.abs(img_back) with img_back = np.real(img_back).
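Putting the corrected slice into the tutorial's NumPy pipeline, sketched here with a random stand-in for the tutorial's image:

```python
import numpy as np

rng = np.random.default_rng(1)
img = rng.random((64, 64))          # stand-in for the tutorial image

f = np.fft.fft2(img)
fshift = np.fft.fftshift(f)
rows, cols = img.shape
crow, ccol = rows // 2, cols // 2   # origin of the shifted spectrum

# Corrected high-pass mask: the extra +1 makes the zeroed box
# symmetric around the origin at (crow, ccol).
fshift[crow-30:crow+30+1, ccol-30:ccol+30+1] = 0

img_back = np.fft.ifft2(np.fft.ifftshift(fshift))

imag_residue = np.abs(img_back.imag).max()   # floating-point noise only
img_back = np.real(img_back)                 # correct; no np.abs needed
```

With the symmetric mask, the real part keeps the negative values of the high-pass result, which np.abs would have silently folded to positive.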


























  • @kris: it depends on how you created the kernel. In the tutorial, they used fftshift on the transform, which means that the origin is in the middle of the image. Your kernel must be designed to have the origin in the same location. If your kernel is the result of an FFT of a kernel made in the spatial domain, you should apply ifftshift to the spatial-domain kernel, then apply fft to the image and the shifted kernel, then multiply, then do the inverse transform (no fftshift in the Fourier domain).

    – Cris Luengo
    Nov 12 '18 at 17:59
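The recipe in this comment can be sketched as follows, with a hypothetical image and a normalized Gaussian as a stand-in for the kernel:

```python
import numpy as np

rng = np.random.default_rng(2)
img = rng.random((32, 32))

# A kernel defined in the spatial domain with its origin in the
# centre of the array (an illustrative normalized Gaussian).
y, x = np.mgrid[-16:16, -16:16]
kernel = np.exp(-(x**2 + y**2) / (2 * 2.0**2))
kernel /= kernel.sum()

# ifftshift moves the kernel's origin from the centre to index (0, 0);
# then transform both, multiply, and invert. No fftshift is applied
# in the Fourier domain.
K = np.fft.fft2(np.fft.ifftshift(kernel))
F = np.fft.fft2(img)

out_c = np.fft.ifft2(F * K)   # circular convolution of img with kernel
out = np.real(out_c)          # imaginary part is floating-point noise
```

Because the kernel sums to 1, the circular convolution preserves the image's total intensity, which is a quick sanity check on the origin handling.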






  • @kris: If the filter is not symmetric, then the inverse transform will be a complex-valued function. Some filters are designed this way (see the paragraph I added to the answer). It depends on the filter and how it is designed, what one does with the complex output. The discussion about symmetric filters relates to filters that are meant to keep the real-valued image being real-valued, such as low-pass, high-pass or band-pass filters.

    – Cris Luengo
    Nov 13 '18 at 5:19






  • The error in the OpenCV documentation has been corrected: github.com/opencv/opencv/issues/13152

    – Cris Luengo
    Nov 16 '18 at 2:52






  • @kris: the FT is a continuous-domain construct that requires an infinitely long signal. Sampling this signal leads to a periodic frequency domain. Now you have the DTFT (discrete time FT). Windowing the signal (when we measure a signal we don’t have infinitely long data) means we cannot apply the FT any more. So instead we assume periodicity of the signal, repeating it infinitely. Now we do have something we can compute the FT of. The periodicity leads to a discrete frequency domain. Now you have a DFT (discrete FT). The FFT computes the DFT.

    – Cris Luengo
    Nov 29 '18 at 14:17
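The implicit periodicity described in this comment can be checked directly: circularly shifting a signal (wrapping samples around the end) only multiplies its DFT by a phase ramp, leaving the magnitudes untouched. A hypothetical 16-sample signal:

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.random(16)

X = np.fft.fft(x)
X_rolled = np.fft.fft(np.roll(x, 3))   # circular shift by 3 samples

# Shift property of the DFT: a circular shift by m multiplies the
# spectrum by exp(-2j*pi*m*k/N); so X_rolled == X * phase, and
# abs(X_rolled) == abs(X).
k = np.arange(16)
phase = np.exp(-2j * np.pi * 3 * k / 16)
```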






  • 1





    @kris: The DSP Stack Exchange is a good place to ask questions about Fourier analysis and many other things.

    – Cris Luengo
    Nov 30 '18 at 6:26










1 Answer
1






active

oldest

votes








1 Answer
1






active

oldest

votes









active

oldest

votes






active

oldest

votes









4














The textbook is right, the tutorial is wrong.



A real-valued image has a complex conjugate symmetry in the Fourier domain. This means that the FFT of the image will have a specific symmetry. Any processing that you do must preserve this symmetry if you want the inverse transform to remain real-valued. If you do this processing wrong, then the inverse transform will be complex-valued, and probably non-sensical.



If you preserve the symmetry in the Fourier domain properly, then the imaginary component of the inverse transform will be nearly zero (likely different from zero because of numerical imprecision). Discarding this imaginary component is the correct thing to do. Computing the magnitude will yield the same result, except all negative values will become positive (note some filters are meant to produce negative values, such as derivative filters), and at an increased computational cost.



For example, a convolution is a multiplication in the Fourier domain. The filter in the Fourier domain must be real-valued and symmetric around the origin. Often people will confuse where the origin is in the Fourier domain, and multiply by a filter that is seems symmetric, but actually is shifted with respect to the origin making it not symmetric. This shift introduces a phase change of the inverse transform (see the shift property of the Fourier transform). The magnitude of the inverse transform is not affected by the phase change, so taking the magnitude of this inverse transform yields an output that sort of looks OK, except if one expects to see negative values in the filter result. It would have been better to correctly understand the FFT algorithm, create a properly symmetric filter in the Fourier domain, and simply keep the real part of the inverse transform.



Nonetheless, some filters are specifically designed to break the symmetry and yield a complex-valued filter output. For example the Gabor filter has an even (symmetric) component and an odd (anti-symmetric) component. The even component yields a real-valued output, the odd component yields an imaginary-valued output. In this case, it is the magnitude of the complex value that is of interest. Likewise, a quadrature filter is specifically meant to produce a complex-valued output. From this output, the analytic signal (or its multi-dimensional extension, the monogenic signal), both the magnitude and the phase are of interest, for example as used in the phase congruency method of edge detection.




Looking at the linked tutorial, it is the line



fshift[crow-30:crow+30, ccol-30:ccol+30] = 0


which generates the Fourier-domain filter and applies it to the image (it is equivalent to multiplying by a filter with 1s and 0s). This tutorial correctly computes the origin of the Fourier domain (though for Python 3 you would use crow,ccol = rows//2 , cols//2 to get the integer division). But the filter above is not symmetric around that origin. In Python, crow-30:crow+30 indicates 30 pixels to the left of the origin, and only 29 pixels to the right (the right bound is not included!). The correct filter would be:



fshift[crow-30:crow+30+1, ccol-30:ccol+30+1] = 0


With this filter, the inverse transform is purely real (imaginary component has values in the order of 1e-13, which is numerical errors). Thus, it is now possible (and correct) to replace img_back = np.abs(img_back) with img_back = np.real(img_back).






share|improve this answer




















  • 1





    @kris: it depends on how you created the kernel. In the tutorial, they used fftshift on the transform, which means that the origin is in the middle of the image. Your kernel must be designed to have the origin in the same location. If your kernel is the result of an FFT of a kernel made in the spatial domain, you should apply ifftshift to the spatial-domain kernel, then apply fft to the image and the shifted kernel, then multiply, then do the inverse transform (no fftshift in the Fourier domain).

    – Cris Luengo
    Nov 12 '18 at 17:59






  • 1





    @kris: If the filter is not symmetric, then the inverse transform will be a complex-valued function. Some filters are designed this way (see the paragraph I added to the answer). It depends on the filter and how it is designed, what one does with the complex output. The discussion about symmetric filters relates to filters that are meant to keep the real-valued image being real-valued, such as low-pass, high-pass or band-pass filters.

    – Cris Luengo
    Nov 13 '18 at 5:19






  • 1





    The error in the OpenCV documentation has been corrected: github.com/opencv/opencv/issues/13152

    – Cris Luengo
    Nov 16 '18 at 2:52






  • 1





    @kris: the FT is a continuous-domain construct that requires an infinitely long signal. Sampling this signal leads to a periodic frequency domain. Now you have the DTFT (discrete time FT). Windowing the signal (when we measure a signal we don’t have infinitely long data) means we cannot apply the FT any more. So instead we assume periodicity of the signal, repeating it infinitely. Now we do have something we can compute the FT of. The periodicity leads to a discrete frequency domain. Now you have a DFT (discrete FT). The FFT computes the DFT.

    – Cris Luengo
    Nov 29 '18 at 14:17






  • 1





    @kris: The DSP Stack Exchange is a good place to ask questions about Fourier analysis and many other things.

    – Cris Luengo
    Nov 30 '18 at 6:26















4














The textbook is right, the tutorial is wrong.



A real-valued image has a complex conjugate symmetry in the Fourier domain. This means that the FFT of the image will have a specific symmetry. Any processing that you do must preserve this symmetry if you want the inverse transform to remain real-valued. If you do this processing wrong, then the inverse transform will be complex-valued, and probably non-sensical.



If you preserve the symmetry in the Fourier domain properly, then the imaginary component of the inverse transform will be nearly zero (likely different from zero because of numerical imprecision). Discarding this imaginary component is the correct thing to do. Computing the magnitude will yield the same result, except all negative values will become positive (note some filters are meant to produce negative values, such as derivative filters), and at an increased computational cost.



For example, a convolution is a multiplication in the Fourier domain. The filter in the Fourier domain must be real-valued and symmetric around the origin. Often people will confuse where the origin is in the Fourier domain, and multiply by a filter that is seems symmetric, but actually is shifted with respect to the origin making it not symmetric. This shift introduces a phase change of the inverse transform (see the shift property of the Fourier transform). The magnitude of the inverse transform is not affected by the phase change, so taking the magnitude of this inverse transform yields an output that sort of looks OK, except if one expects to see negative values in the filter result. It would have been better to correctly understand the FFT algorithm, create a properly symmetric filter in the Fourier domain, and simply keep the real part of the inverse transform.



Nonetheless, some filters are specifically designed to break the symmetry and yield a complex-valued filter output. For example the Gabor filter has an even (symmetric) component and an odd (anti-symmetric) component. The even component yields a real-valued output, the odd component yields an imaginary-valued output. In this case, it is the magnitude of the complex value that is of interest. Likewise, a quadrature filter is specifically meant to produce a complex-valued output. From this output, the analytic signal (or its multi-dimensional extension, the monogenic signal), both the magnitude and the phase are of interest, for example as used in the phase congruency method of edge detection.




Looking at the linked tutorial, it is the line



fshift[crow-30:crow+30, ccol-30:ccol+30] = 0


which generates the Fourier-domain filter and applies it to the image (it is equivalent to multiplying by a filter with 1s and 0s). This tutorial correctly computes the origin of the Fourier domain (though for Python 3 you would use crow,ccol = rows//2 , cols//2 to get the integer division). But the filter above is not symmetric around that origin. In Python, crow-30:crow+30 indicates 30 pixels to the left of the origin, and only 29 pixels to the right (the right bound is not included!). The correct filter would be:



fshift[crow-30:crow+30+1, ccol-30:ccol+30+1] = 0


With this filter, the inverse transform is purely real (imaginary component has values in the order of 1e-13, which is numerical errors). Thus, it is now possible (and correct) to replace img_back = np.abs(img_back) with img_back = np.real(img_back).






share|improve this answer




















  • 1





    @kris: it depends on how you created the kernel. In the tutorial, they used fftshift on the transform, which means that the origin is in the middle of the image. Your kernel must be designed to have the origin in the same location. If your kernel is the result of an FFT of a kernel made in the spatial domain, you should apply ifftshift to the spatial-domain kernel, then apply fft to the image and the shifted kernel, then multiply, then do the inverse transform (no fftshift in the Fourier domain).

    – Cris Luengo
    Nov 12 '18 at 17:59






  • 1





    @kris: If the filter is not symmetric, then the inverse transform will be a complex-valued function. Some filters are designed this way (see the paragraph I added to the answer). It depends on the filter and how it is designed, what one does with the complex output. The discussion about symmetric filters relates to filters that are meant to keep the real-valued image being real-valued, such as low-pass, high-pass or band-pass filters.

    – Cris Luengo
    Nov 13 '18 at 5:19






  • 1





    The error in the OpenCV documentation has been corrected: github.com/opencv/opencv/issues/13152

    – Cris Luengo
    Nov 16 '18 at 2:52






  • 1





    @kris: the FT is a continuous-domain construct that requires an infinitely long signal. Sampling this signal leads to a periodic frequency domain. Now you have the DTFT (discrete time FT). Windowing the signal (when we measure a signal we don’t have infinitely long data) means we cannot apply the FT any more. So instead we assume periodicity of the signal, repeating it infinitely. Now we do have something we can compute the FT of. The periodicity leads to a discrete frequency domain. Now you have a DFT (discrete FT). The FFT computes the DFT.

    – Cris Luengo
    Nov 29 '18 at 14:17






  • 1





    @kris: The DSP Stack Exchange is a good place to ask questions about Fourier analysis and many other things.

    – Cris Luengo
    Nov 30 '18 at 6:26













4












4








4







The textbook is right, the tutorial is wrong.



A real-valued image has a complex conjugate symmetry in the Fourier domain. This means that the FFT of the image will have a specific symmetry. Any processing that you do must preserve this symmetry if you want the inverse transform to remain real-valued. If you do this processing wrong, then the inverse transform will be complex-valued, and probably non-sensical.



If you preserve the symmetry in the Fourier domain properly, then the imaginary component of the inverse transform will be nearly zero (likely different from zero because of numerical imprecision). Discarding this imaginary component is the correct thing to do. Computing the magnitude will yield the same result, except all negative values will become positive (note some filters are meant to produce negative values, such as derivative filters), and at an increased computational cost.



For example, a convolution is a multiplication in the Fourier domain. The filter in the Fourier domain must be real-valued and symmetric around the origin. Often people will confuse where the origin is in the Fourier domain, and multiply by a filter that is seems symmetric, but actually is shifted with respect to the origin making it not symmetric. This shift introduces a phase change of the inverse transform (see the shift property of the Fourier transform). The magnitude of the inverse transform is not affected by the phase change, so taking the magnitude of this inverse transform yields an output that sort of looks OK, except if one expects to see negative values in the filter result. It would have been better to correctly understand the FFT algorithm, create a properly symmetric filter in the Fourier domain, and simply keep the real part of the inverse transform.



Nonetheless, some filters are specifically designed to break the symmetry and yield a complex-valued filter output. For example the Gabor filter has an even (symmetric) component and an odd (anti-symmetric) component. The even component yields a real-valued output, the odd component yields an imaginary-valued output. In this case, it is the magnitude of the complex value that is of interest. Likewise, a quadrature filter is specifically meant to produce a complex-valued output. From this output, the analytic signal (or its multi-dimensional extension, the monogenic signal), both the magnitude and the phase are of interest, for example as used in the phase congruency method of edge detection.




Looking at the linked tutorial, it is the line



fshift[crow-30:crow+30, ccol-30:ccol+30] = 0


which generates the Fourier-domain filter and applies it to the image (it is equivalent to multiplying by a filter with 1s and 0s). This tutorial correctly computes the origin of the Fourier domain (though for Python 3 you would use crow,ccol = rows//2 , cols//2 to get the integer division). But the filter above is not symmetric around that origin. In Python, crow-30:crow+30 indicates 30 pixels to the left of the origin, and only 29 pixels to the right (the right bound is not included!). The correct filter would be:



fshift[crow-30:crow+30+1, ccol-30:ccol+30+1] = 0


With this filter, the inverse transform is purely real (imaginary component has values in the order of 1e-13, which is numerical errors). Thus, it is now possible (and correct) to replace img_back = np.abs(img_back) with img_back = np.real(img_back).






share|improve this answer















The textbook is right, the tutorial is wrong.



A real-valued image has a complex conjugate symmetry in the Fourier domain. This means that the FFT of the image will have a specific symmetry. Any processing that you do must preserve this symmetry if you want the inverse transform to remain real-valued. If you do this processing wrong, then the inverse transform will be complex-valued, and probably non-sensical.



If you preserve the symmetry in the Fourier domain properly, then the imaginary component of the inverse transform will be nearly zero (likely different from zero because of numerical imprecision). Discarding this imaginary component is the correct thing to do. Computing the magnitude will yield the same result, except all negative values will become positive (note some filters are meant to produce negative values, such as derivative filters), and at an increased computational cost.



For example, a convolution is a multiplication in the Fourier domain. The filter in the Fourier domain must be real-valued and symmetric around the origin. Often people will confuse where the origin is in the Fourier domain, and multiply by a filter that is seems symmetric, but actually is shifted with respect to the origin making it not symmetric. This shift introduces a phase change of the inverse transform (see the shift property of the Fourier transform). The magnitude of the inverse transform is not affected by the phase change, so taking the magnitude of this inverse transform yields an output that sort of looks OK, except if one expects to see negative values in the filter result. It would have been better to correctly understand the FFT algorithm, create a properly symmetric filter in the Fourier domain, and simply keep the real part of the inverse transform.



Nonetheless, some filters are specifically designed to break the symmetry and yield a complex-valued filter output. For example the Gabor filter has an even (symmetric) component and an odd (anti-symmetric) component. The even component yields a real-valued output, the odd component yields an imaginary-valued output. In this case, it is the magnitude of the complex value that is of interest. Likewise, a quadrature filter is specifically meant to produce a complex-valued output. From this output, the analytic signal (or its multi-dimensional extension, the monogenic signal), both the magnitude and the phase are of interest, for example as used in the phase congruency method of edge detection.




Looking at the linked tutorial, it is the line



fshift[crow-30:crow+30, ccol-30:ccol+30] = 0


which generates the Fourier-domain filter and applies it to the image (it is equivalent to multiplying by a filter with 1s and 0s). This tutorial correctly computes the origin of the Fourier domain (though for Python 3 you would use crow,ccol = rows//2 , cols//2 to get the integer division). But the filter above is not symmetric around that origin. In Python, crow-30:crow+30 indicates 30 pixels to the left of the origin, and only 29 pixels to the right (the right bound is not included!). The correct filter would be:



fshift[crow-30:crow+30+1, ccol-30:ccol+30+1] = 0


With this filter, the inverse transform is purely real (the imaginary component has values on the order of 1e-13, which is numerical error). Thus, it is now possible (and correct) to replace img_back = np.abs(img_back) with img_back = np.real(img_back).
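Putting it together, a sketch of the corrected pipeline (a random array stands in for the tutorial's cv2.imread result):

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((64, 64))       # stand-in for the tutorial's input image

f = np.fft.fft2(img)
fshift = np.fft.fftshift(f)
rows, cols = img.shape
crow, ccol = rows // 2, cols // 2

# Symmetric high-pass filter: 30 pixels on *both* sides of the origin.
fshift[crow-30:crow+30+1, ccol-30:ccol+30+1] = 0

img_back_c = np.fft.ifft2(np.fft.ifftshift(fshift))
# The imaginary part is numerical noise, so keeping the real part is safe.
img_back = np.real(img_back_c)
```

Unlike np.abs, np.real preserves the negative values that a high-pass filter legitimately produces.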















edited Nov 13 '18 at 5:16

























answered Nov 12 '18 at 17:23









Cris Luengo








  • 1





    @kris: it depends on how you created the kernel. In the tutorial, they used fftshift on the transform, which means that the origin is in the middle of the image. Your kernel must be designed to have the origin in the same location. If your kernel is the result of an FFT of a kernel made in the spatial domain, you should apply ifftshift to the spatial-domain kernel, then apply fft to the image and the shifted kernel, then multiply, then do the inverse transform (no fftshift in the Fourier domain).

    – Cris Luengo
    Nov 12 '18 at 17:59
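The workflow described in this comment might look as follows (a small normalized Gaussian serves as an arbitrary illustrative spatial-domain kernel):

```python
import numpy as np

rng = np.random.default_rng(1)
img = rng.random((64, 64))

# Spatial-domain kernel with its origin at the array centre
# (a normalized Gaussian, as an illustrative choice).
y, x = np.mgrid[-32:32, -32:32]
kernel = np.exp(-(x**2 + y**2) / (2 * 3.0**2))
kernel /= kernel.sum()

# ifftshift moves the kernel origin from the centre to index (0, 0),
# where the FFT expects it; the transfer function is then purely real.
K = np.fft.fft2(np.fft.ifftshift(kernel))
F = np.fft.fft2(img)

result_c = np.fft.ifft2(F * K)   # no fftshift in the Fourier domain
result = np.real(result_c)       # imaginary part is numerical noise
```

Forgetting the ifftshift here is the 1-pixel-shift mistake the answer describes, just made in the spatial domain instead.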






  • 1





    @kris: If the filter is not symmetric, then the inverse transform will be a complex-valued function. Some filters are designed this way (see the paragraph I added to the answer). It depends on the filter and how it is designed, what one does with the complex output. The discussion about symmetric filters relates to filters that are meant to keep the real-valued image being real-valued, such as low-pass, high-pass or band-pass filters.

    – Cris Luengo
    Nov 13 '18 at 5:19






  • 1





    The error in the OpenCV documentation has been corrected: github.com/opencv/opencv/issues/13152

    – Cris Luengo
    Nov 16 '18 at 2:52






  • 1





    @kris: the FT is a continuous-domain construct that requires an infinitely long signal. Sampling this signal leads to a periodic frequency domain. Now you have the DTFT (discrete time FT). Windowing the signal (when we measure a signal we don’t have infinitely long data) means we cannot apply the FT any more. So instead we assume periodicity of the signal, repeating it infinitely. Now we do have something we can compute the FT of. The periodicity leads to a discrete frequency domain. Now you have a DFT (discrete FT). The FFT computes the DFT.

    – Cris Luengo
    Nov 29 '18 at 14:17
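As a quick check of the last statement, the FFT can be compared against the DFT sum written out directly:

```python
import numpy as np

# The FFT is a fast algorithm for computing the DFT sum
#   X[k] = sum_n x[n] * exp(-2j*pi*k*n/N)
rng = np.random.default_rng(0)
x = rng.standard_normal(16)
N = len(x)

n = np.arange(N)
W = np.exp(-2j * np.pi * np.outer(n, n) / N)   # the DFT matrix
X_direct = W @ x

assert np.allclose(X_direct, np.fft.fft(x))
```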






  • 1





    @kris: The DSP Stack Exchange is a good place to ask questions about Fourier analysis and many other things.

    – Cris Luengo
    Nov 30 '18 at 6:26



























