As in image enhancement, the goal of restoration is to improve an image for further processing. In contrast to image enhancement, which is subjective and largely based on heuristics, restoration attempts to reconstruct or recover an image that has been distorted by a known degradation phenomenon. Restoration techniques thus try to model the degradation process and apply the inverse process in order to reconstruct the original image.
The degradation process is generally modelled as a degradation function h(x, y) together with an additive noise term η(x, y); applied to an input image f(x, y), they yield the degraded image g(x, y).
Given g(x, y), some knowledge about the degradation function h, and the noise term η(x, y), it is the objective of restoration to compute an estimate f̂(x, y). Of course this estimate should be as close as possible to the original image f(x, y).
Fig 7.1 Image degradation model
If the degradation function is a linear, shift-invariant process, then the degraded image is given in the spatial domain by

g(x, y) = h(x, y) ∗ f(x, y) + η(x, y)     (Eq 7.1)

where h(x, y) is the spatial representation of the degradation function and ∗ indicates convolution.
As we know from the Convolution Theorem, this can be rewritten in the frequency domain as

G(u, v) = H(u, v) F(u, v) + N(u, v)

where the capital letters are the Fourier transforms of the respective terms in Eq 7.1.
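The frequency-domain form of the degradation model can be sketched in code. This is a minimal simulation assuming numpy's FFT routines and a hypothetical 3×3 averaging kernel as the degradation function; the product H·F implements circular convolution:

```python
import numpy as np

def degrade(f, h, noise_sigma=0.0, seed=0):
    """Simulate g = h * f + eta via the frequency domain: G = H F + N."""
    rng = np.random.default_rng(seed)
    F = np.fft.fft2(f)
    # Zero-pad h to the image size so the pointwise product H F
    # corresponds to a circular convolution in the spatial domain.
    H = np.fft.fft2(h, s=f.shape)
    g = np.real(np.fft.ifft2(H * F))
    g += rng.normal(0.0, noise_sigma, size=f.shape)  # additive noise term eta
    return g

f = np.zeros((8, 8)); f[4, 4] = 1.0   # impulse image (illustrative input)
h = np.full((3, 3), 1.0 / 9.0)        # assumed 3x3 averaging (blur) kernel
g = degrade(f, h)                     # blurred copy of the impulse
```

Degrading an impulse this way simply reproduces the kernel, which is a handy sanity check for the model.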
The principal sources of noise in digital images arise during image acquisition and/or transmission.
The performance of imaging sensors is affected by a variety of factors during acquisition, such as
Images can also be corrupted during transmission due to interference in the channel, for example
Depending on the specific noise source, a different model must be selected that accurately reproduces the spatial characteristics of the noise.
Because of its mathematical tractability in both the spatial and frequency domains, the Gaussian noise model (aka normal distribution) is used frequently in practice.
The PDF (probability density function) of a Gaussian random variable z is given by

p(z) = 1/(√(2π) σ) e^(−(z − μ)²/(2σ²))

where z represents the grey level, μ the mean value, and σ the standard deviation.
Fig 7.2: Gaussian probability density function
The application of the Gaussian model is so convenient that it is often used in situations in which it is marginally applicable at best.
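As a small sketch of the Gaussian model (parameter values are illustrative only), one can sample grey-level noise and check that the empirical mean and standard deviation match μ and σ:

```python
import numpy as np

def gaussian_pdf(z, mu, sigma):
    """p(z) = 1/(sqrt(2*pi)*sigma) * exp(-(z - mu)^2 / (2*sigma^2))"""
    return np.exp(-(z - mu) ** 2 / (2 * sigma ** 2)) / (np.sqrt(2 * np.pi) * sigma)

rng = np.random.default_rng(1)
mu, sigma = 128.0, 20.0                          # assumed example parameters
samples = rng.normal(mu, sigma, size=100_000)    # simulated grey-level noise

# The sample mean and standard deviation approximate mu and sigma,
# and the PDF peaks at z = mu with value 1/(sqrt(2*pi)*sigma).
peak = gaussian_pdf(mu, mu, sigma)
```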
The PDF of Rayleigh noise is defined by

p(z) = (2/b)(z − a) e^(−(z − a)²/b)   for z ≥ a,   p(z) = 0 for z < a

The mean and variance are given by

μ = a + √(πb/4),   σ² = b(4 − π)/4
Fig 7.3: Rayleigh probability density function
As the shape of the Rayleigh density function is skewed, it is useful for approximating skewed histograms.
The PDF of the Erlang noise is given by

p(z) = a^b z^(b−1) e^(−az) / (b − 1)!   for z ≥ 0,   p(z) = 0 for z < 0

where a > 0 and b is a positive integer. The mean and variance are then given by

μ = b/a,   σ² = b/a²
Fig 7.4: Erlang probability density function
The PDF of the exponential noise is given by

p(z) = a e^(−az)   for z ≥ 0,   p(z) = 0 for z < 0

where a > 0. The mean and variance of the density function are

μ = 1/a,   σ² = 1/a²
Fig 7.5: Probability density function of exponential noise
The exponential noise model is a special case of the Erlang noise model with b = 1.
The PDF of the uniform noise is given by

p(z) = 1/(b − a)   if a ≤ z ≤ b,   p(z) = 0 otherwise

The mean and variance of the density function are given by

μ = (a + b)/2,   σ² = (b − a)²/12
Fig 7.6: Probability density function of uniform noise
The PDF of the bipolar impulse noise model is given by

p(z) = P_a for z = a,   p(z) = P_b for z = b,   p(z) = 0 otherwise
Fig 7.7: Probability density function of the bipolar impulse noise model
If b > a, grey level b appears as a light dot (salt) in the image. Conversely, level a will appear as a dark dot (pepper). If either P_a or P_b is zero, the PDF is called unipolar.
Because impulse corruption is generally large compared to the signal strength, the assumption is usually that a and b are digitised as saturated values, thus black (pepper) and white (salt).
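The bipolar impulse model can be sketched as follows, assuming 8-bit saturated values (0 for pepper, 255 for salt) and illustrative probabilities P_a = P_b = 0.05:

```python
import numpy as np

def add_impulse_noise(img, pa=0.05, pb=0.05, seed=0):
    """Bipolar impulse (salt-and-pepper) noise: with probability pa a pixel
    becomes pepper (0), with probability pb it becomes salt (255)."""
    rng = np.random.default_rng(seed)
    out = img.copy()
    r = rng.random(img.shape)
    out[r < pa] = 0                         # pepper: saturated black
    out[(r >= pa) & (r < pa + pb)] = 255    # salt: saturated white
    return out

img = np.full((64, 64), 128, dtype=np.uint8)   # flat mid-grey test image
noisy = add_impulse_noise(img)
```

After corruption the image contains only the original grey level plus the two saturated impulse values, and roughly pa and pb fractions of the pixels are pepper and salt respectively.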
The presented noise models provide a useful tool for approximating a broad range of noise corruption situations found in practice. For example:
Periodic noise typically arises from electrical or electromechanical interference during image acquisition and is spatially dependent.
Fig 7.8: Periodic noise on a satellite image of Pompeii
Fig 7.9: Spectrum of the Pompeii image with periodic noise (peaks of the periodic noise visually enhanced)
In the case of periodic noise, it is usually possible to estimate N(u, v) from the spectrum of G(u, v) and subtract it to obtain an estimate of the original image F̂(u, v).
For other noise types, however, estimating the noise term is unreasonable, so subtracting it from g(x, y) is impossible. Spatial filtering is the method of choice in situations where only additive noise is present.
The bandpass filter performs the opposite of a bandreject filter and thus passes the frequencies in a narrow band of width W around a centre radius D₀. Its transfer function can be obtained from a corresponding bandreject filter by using the equation

H_bp(u, v) = 1 − H_br(u, v)
The perspective plots of the corresponding filters are depicted in Fig 7.20.
|Fig 7.20 (a) Ideal bandpass filter, (b) Butterworth bandpass filter, and (c) Gaussian bandpass filter|
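The bandreject-to-bandpass relation H_bp = 1 − H_br can be sketched in code. This is a minimal example assuming the standard Butterworth bandreject form H_br = 1 / (1 + (D·W / (D² − D₀²))^(2n)); the grid size and parameter values are illustrative:

```python
import numpy as np

def butterworth_bandreject(shape, d0, w, n=2):
    """Butterworth bandreject transfer function of order n for a band
    of width w centred on radius d0 (spectrum assumed centred)."""
    M, N = shape
    u = np.arange(M) - M // 2
    v = np.arange(N) - N // 2
    D = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)   # distance from centre
    with np.errstate(divide='ignore'):
        # On the ring D == d0 the ratio diverges, giving H_br = 0 there.
        ratio = np.where(D == d0, np.inf, D * w / (D ** 2 - d0 ** 2))
    return 1.0 / (1.0 + ratio ** (2 * n))

Hbr = butterworth_bandreject((128, 128), d0=30, w=8)
Hbp = 1.0 - Hbr    # bandpass filter via H_bp = 1 - H_br
```

The bandreject filter is 1 at the centre of the spectrum and 0 on the ring D = D₀; the bandpass filter is exactly the complement.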
Similar to the bandpass/bandreject filters, notch reject filters can be turned into notch pass filters with the relation

H_np(u, v) = 1 − H_nr(u, v)

where H_np(u, v) is the transfer function of the notch pass filter and H_nr(u, v) that of the corresponding notch reject filter.
If the notch is placed at the origin (u₀ = v₀ = 0),
- the notch reject filter becomes a highpass filter and
- the notch pass filter becomes a lowpass filter.
So far we have concentrated mainly on the additive noise term η(x, y). In the remainder of this presentation we concentrate on ways to remove the degradation function H(u, v).
Fig 7.48 Image degradation model
There are three principal methods to estimate the degradation function: observation, experimentation, and mathematical modelling.
The process of restoring a corrupted image using the estimated degradation function is sometimes called blind deconvolution, as the true degradation is seldom known completely.
In recent years a fourth method based on maximum-likelihood estimation (MLE), often called blind deconvolution, has been proposed.
Beware: All multiplications/divisions in the equations in this part of the lecture are componentwise.
A degraded image without any information about the degradation function h(x, y).
Equipment similar to the equipment used to acquire the degraded image
H(u, v) = G(u, v) / A

where G(u, v) is the Fourier transform of the observed image and A is a constant describing the strength of the impulse.
Degradation modelling has been used for many years. It, however, requires an in-depth understanding of the physical phenomena involved. In some cases it can even incorporate environmental conditions that cause degradations. Hufnagel and Stanley proposed in 1964 a model to characterise atmospheric turbulence.
H(u, v) = e^(−k(u² + v²)^(5/6))

where k is a constant that depends on the nature of the turbulence.
Except for the 5/6 power, the above equation has the same form as a Gaussian lowpass filter.
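The turbulence model can be sketched directly; the grid size and the value of k below are illustrative assumptions, not values from the text:

```python
import numpy as np

def turbulence_otf(shape, k=0.0025):
    """Atmospheric turbulence model H(u, v) = exp(-k * (u^2 + v^2)^(5/6)),
    evaluated on a centred frequency grid."""
    M, N = shape
    u = np.arange(M) - M // 2
    v = np.arange(N) - N // 2
    D2 = u[:, None] ** 2 + v[None, :] ** 2   # squared distance from centre
    return np.exp(-k * D2 ** (5.0 / 6.0))

H = turbulence_otf((256, 256))   # H = 1 at the centre, decaying outwards
```

Like a Gaussian lowpass filter, the transfer function is 1 at the origin of the (centred) spectrum and falls off monotonically with frequency; only the 5/6 exponent differs.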
F̂(u, v) = L(u, v) G(u, v) / H(u, v)

where L(u, v) is a lowpass filter that should eliminate the problems caused by the very low (or even zero) values of H(u, v) often experienced in the high frequencies.
An alternative approach gives values where |H(u, v)| is lower than a threshold a special treatment, e.g. setting the quotient to zero there.
These approaches are also known as Pseudo-Inverse Filtering.
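The thresholded variant of pseudo-inverse filtering can be sketched as follows; the threshold value and the tiny test spectra are illustrative assumptions:

```python
import numpy as np

def pseudo_inverse_filter(G, H, eps=1e-3):
    """Pseudo-inverse filtering: divide by H only where |H| exceeds a
    threshold, and set the estimate to zero elsewhere (one common
    special treatment for near-zero values of H)."""
    F_hat = np.zeros_like(G, dtype=complex)
    mask = np.abs(H) > eps
    F_hat[mask] = G[mask] / H[mask]
    return F_hat

# Tiny sanity check: F is recovered where H is well conditioned,
# while the near-zero entry of H is skipped instead of amplified.
F = np.ones((4, 4), dtype=complex)
H = np.full((4, 4), 0.5, dtype=complex)
H[0, 0] = 1e-9               # near-zero entry a direct inverse would blow up on
G = H * F
F_hat = pseudo_inverse_filter(G, H)
```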
Although very tempting because of its simplicity, the direct inverse filtering approach does not work for practical applications, even if the degradation is known exactly.
Fig 7.57: Direct inverse filtering result
If the noise spectrum is zero, the noise-to-signal power ratio vanishes and the Wiener Filter reduces to the inverse filter.
This is no problem, as the inverse filter works fine if no noise is present.
The main problem with the Wiener filter is that the power spectrum of the undegraded image is seldom known.
A frequently used approach when these quantities cannot be estimated is to approximate Eq 7.43 by the expression

F̂(u, v) = [ (1/H(u, v)) · |H(u, v)|² / (|H(u, v)|² + K) ] G(u, v)

where K is a user-selected constant.
This simplification can be partly justified when dealing with spectrally white noise, where the noise spectrum is constant. However, the problem still remains that the power spectrum of the undegraded image is unknown and must be estimated.
Even if the actual ratio is not known, it becomes a simple matter to experiment interactively, varying the constant K and viewing the results.
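The parametric approximation above can be sketched in a few lines; note that (1/H)·|H|²/(|H|² + K) simplifies algebraically to conj(H)/(|H|² + K), which avoids dividing by H directly. The test spectra are illustrative:

```python
import numpy as np

def parametric_wiener(G, H, K=0.01):
    """Parametric Wiener filter with user-selected constant K:
    F_hat = (1/H) * |H|^2 / (|H|^2 + K) * G  =  conj(H) / (|H|^2 + K) * G."""
    H = H.astype(complex)
    H2 = np.abs(H) ** 2
    return (np.conj(H) / (H2 + K)) * G

# With K = 0 and H nonzero everywhere, the filter reduces to the
# inverse filter and recovers F exactly.
F = np.ones((4, 4), dtype=complex)
H = np.full((4, 4), 0.5 + 0j)
G = H * F
F_hat = parametric_wiener(G, H, K=0.0)
```

In practice one would start with a small K and increase it interactively until the noise amplification is acceptably suppressed.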
This example shows the performance of the Wiener filter on the example of Fig 7.56, using (1) the approximation of Eq 7.45 with a constant K in Fig 7.62 and (2) the Wiener filter using full knowledge of the noise and undegraded image's power spectra in Fig 7.63.
Fig 7.62: Parametric Wiener filter using a constant ratio
Fig 7.63: Wiener filter knowing the noise and undegraded signal power spectra
The problem of having to know something about the degradation function is common to all methods discussed in this part.
The Wiener filter has the additional difficulty that the power-spectrum ratio S_η(u, v)/S_f(u, v) must be known.
Although an approximation by a constant is possible, as in the parametric Wiener filter (see Fig 7.62), this approach is not always suitable.
The Geometric Mean Filter is a generalisation of inverse filtering and of Wiener filtering. Through its parameters it allows access to an entire family of filters.
Depending on the parameters, the filter characteristics can be adjusted
The Constrained Least Squares Filtering approach only requires knowledge of the mean and variance of the noise. As shown in previous lectures, these parameters can usually be estimated from the degraded image.
The definition of the 2D discrete convolution is

g(x, y) = Σ_{m=0}^{M−1} Σ_{n=0}^{N−1} f(m, n) h(x − m, y − n)

Using this equation we can express the linear degradation model in vector notation as

g = H f + η

where g, f, and η are column vectors of length MN and H is an MN × MN matrix.
It seems obvious that the restoration problem is now reduced to simple matrix manipulations. Unfortunately this is not the case. The problem with the matrix formulation is that the matrix H is very large, and its inverse is firstly very sensitive to noise and secondly does not necessarily exist.
One way to deal with these issues is to base optimality of restoration on a measure of smoothness, such as the second derivative of the image, e.g. the Laplacian. We seek the estimate that minimises

C = Σ_x Σ_y [∇² f(x, y)]²

subject to the constraint

‖g − H f̂‖² = ‖η‖²     (Eq 7.50)

where ‖·‖ is the Euclidean norm and f̂ an estimate of the undegraded image.
The frequency domain solution to this optimisation problem is given by

F̂(u, v) = [ H*(u, v) / (|H(u, v)|² + γ |P(u, v)|²) ] G(u, v)

where γ is a parameter that must be adjusted so that the constraint in Eq 7.50 is satisfied, and P(u, v) is the Fourier transform of the Laplace operator

p(x, y) = [ 0 −1 0; −1 4 −1; 0 −1 0 ]
Note: p(x, y) must be properly padded with zeros prior to computing its Fourier transform.
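The frequency-domain constrained least squares solution, including the zero-padding of the Laplacian, can be sketched as follows (the test spectra are illustrative):

```python
import numpy as np

def cls_filter(G, H, gamma=0.001):
    """Constrained least squares filter:
    F_hat = conj(H) / (|H|^2 + gamma * |P|^2) * G,
    with P the Fourier transform of the zero-padded Laplacian."""
    p = np.array([[ 0, -1,  0],
                  [-1,  4, -1],
                  [ 0, -1,  0]], dtype=float)
    P = np.fft.fft2(p, s=G.shape)        # zero-padding to the image size
    num = np.conj(H)
    den = np.abs(H) ** 2 + gamma * np.abs(P) ** 2
    return num * G / den

# Sanity check: with gamma = 0 and H = 1 the filter is the identity.
G = np.ones((8, 8), dtype=complex)
H = np.ones((8, 8), dtype=complex)
F_hat = cls_filter(G, H, gamma=0.0)
```

For γ = 0 the expression reduces to the inverse filter; increasing γ trades fidelity to the data against smoothness of the estimate.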
It is possible to adjust the parameter γ interactively until acceptable results are obtained.
If we are interested in optimality, the parameter γ must be adjusted so that the constraint in Eq 7.50 is satisfied.
It turns out that choosing γ such that

‖r‖² = ‖η‖²,   with residual r = g − H f̂,

is the optimal selection given the above optimisation equation.
It is important to understand that optimum restoration in the sense of constrained least squares does not necessarily imply best in the visual sense. In general, automatically determined restoration filters yield inferior results to manual adjustment of the filter parameters.
This example shows the performance of the constrained least squares filter on the example of Fig 7.56, using (1) the theoretically optimal value for γ in Fig 7.77 and (2) the constrained least squares filter with a manually selected γ in Fig 7.78.
Fig 7.77: Result with optimal γ
Fig 7.78: Result with manually selected γ