Quick introduction:
The first step towards designing an image analysis system is digital image
acquisition using sensors in optical or thermal wavelengths.
The two-dimensional image recorded by these sensors is a mapping of the
three-dimensional visual world. The captured two-dimensional signals are
sampled and quantized to yield digital images.
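The sampling-and-quantization step described above can be sketched in a few lines; this is a minimal illustration, assuming an 8-bit target and samples normalized to [0, 1] (values are hypothetical):

```python
def quantize(samples, levels=256, vmax=1.0):
    """Map continuous-valued samples in [0, vmax] to integer gray levels."""
    step = vmax / levels
    # Clamp to the top level, then map each sample to its level index.
    return [min(levels - 1, int(s / step)) for s in samples]

# One row of continuous intensity samples from a hypothetical sensor:
row = [0.0, 0.25, 0.5, 0.999]
print(quantize(row))  # -> [0, 64, 128, 255]
```

Each continuous sample becomes one of 256 discrete gray levels; finer quantization (more levels) trades storage for fidelity.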
Sometimes the images we receive are noisy, degraded by some
mechanism. One common source of image degradation is the optical lens
system in a digital camera that acquires the visual information. If the camera
is not appropriately focused then we get blurred images. Here the blurring
mechanism is the defocused camera. Very often one comes across images
of outdoor scenes that were captured in a foggy environment; any
outdoor scene captured on a foggy winter morning will invariably result
in a blurred image. In this case the degradation is due to the fog and mist
in the atmosphere, and this type of degradation is known as atmospheric
degradation. In some other cases there may be a relative motion between the
object and the camera. Thus if the camera is given an impulsive displacement
during the image capturing interval while the object is static, the resulting
image will invariably be blurred and noisy. In such cases, we need
appropriate techniques for refining the images so that the results are
of better visual quality, free from aberrations and noise. Image enhancement,
filtering, and restoration have been some of the important applications of
image processing since the early days of the field.
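The blurring mechanisms above are commonly modeled as convolution of the ideal image with a point-spread function. As a minimal sketch (not any specific camera model), here is a uniform box-filter blur in pure Python, with toy pixel values:

```python
def box_blur(img, k=3):
    """Blur a 2-D grayscale image (list of lists) with a k x k mean filter."""
    h, w, r = len(img), len(img[0]), k // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Average the pixel's neighborhood, clipped at the image borders.
            vals = [img[yy][xx]
                    for yy in range(max(0, y - r), min(h, y + r + 1))
                    for xx in range(max(0, x - r), min(w, x + r + 1))]
            out[y][x] = sum(vals) / len(vals)
    return out

sharp = [[0, 0, 0],
         [0, 9, 0],
         [0, 0, 0]]
print(box_blur(sharp))  # the single bright pixel spreads over its neighbors
```

Restoration methods attempt to invert such a degradation model; when the point-spread function is unknown, it must first be estimated.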
Segmentation is the process that subdivides an image into a number of
uniformly homogeneous regions. Each homogeneous region is a constituent
part or object in the entire scene. In other words, segmentation of an image is
defined by a set of connected, nonoverlapping regions, so that each
pixel in the image acquires a unique region label that indicates
the region it belongs to. Segmentation is one of the most important elements
in automated image analysis, mainly because at this step the objects or other
entities of interest are extracted from an image for subsequent processing,
such as description and recognition. For example, in the case of an aerial
image containing the ocean and land, the problem is to segment the image
initially into two parts: the land segment and the water-body (ocean)
segment. Thereafter the objects on the land part of the scene need to be
appropriately segmented and subsequently classified.
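For the land/water example, segmentation can be sketched as thresholding followed by connected-region labeling; this is a toy illustration with invented pixel values, not a production segmenter:

```python
def segment(img, thresh):
    """Threshold, then give each 4-connected region a unique label."""
    h, w = len(img), len(img[0])
    labels = [[0] * w for _ in range(h)]
    next_label = 0
    for y in range(h):
        for x in range(w):
            if labels[y][x]:
                continue
            next_label += 1
            cls = img[y][x] >= thresh        # region class: land vs. water
            stack = [(y, x)]
            while stack:                     # flood-fill this region
                cy, cx = stack.pop()
                if not (0 <= cy < h and 0 <= cx < w):
                    continue
                if labels[cy][cx] or (img[cy][cx] >= thresh) != cls:
                    continue
                labels[cy][cx] = next_label
                stack += [(cy+1, cx), (cy-1, cx), (cy, cx+1), (cy, cx-1)]
    return labels

aerial = [[10, 10, 90],
          [10, 90, 90],
          [90, 90, 90]]   # hypothetical values: low = water, high = land
print(segment(aerial, 50))  # -> [[1, 1, 2], [1, 2, 2], [2, 2, 2]]
```

Each pixel ends up with a unique region label, matching the definition of segmentation given above.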
After extracting each segment, the next task is to extract a set of meaningful
features such as texture, color, and shape. These are important measurable
entities which give measures of various properties of image segments. Some
of the texture properties are coarseness, smoothness, regularity, etc., while
the common shape descriptors are length, breadth, aspect ratio, area, location,
perimeter, compactness, etc. Each segmented region in a scene may be
characterized by a set of such features.
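Some of the shape descriptors listed above can be computed directly from a binary region mask. A sketch follows, using one common convention for compactness, perimeter² / (4π·area), which is 1 for a disc; the mask values are illustrative:

```python
import math

def shape_features(mask):
    """Area, perimeter, and compactness of a binary region (0/1 grid)."""
    h, w = len(mask), len(mask[0])
    area = sum(sum(row) for row in mask)
    perimeter = 0
    for y in range(h):
        for x in range(w):
            if not mask[y][x]:
                continue
            # Count each pixel side that borders background or the image edge.
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if not (0 <= ny < h and 0 <= nx < w) or not mask[ny][nx]:
                    perimeter += 1
    compactness = perimeter ** 2 / (4 * math.pi * area)
    return area, perimeter, compactness

square = [[1, 1],
          [1, 1]]
print(shape_features(square))  # area 4, perimeter 8
```

Texture measures such as coarseness and regularity are typically derived from local intensity statistics rather than from the region outline.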
Finally, based on the set of extracted features, each segmented object
is classified into one of a set of meaningful classes. In a digital image of
the ocean, these classes may be ships, small boats, or even naval vessels,
along with a large water-body class. The problems of scene segmentation
and object classification are two integrated areas of study in machine
vision. Expert systems, semantic
networks, and neural network-based systems have been found to perform
such higher-level vision tasks quite efficiently.
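One of the simplest ways to classify a segment from its feature vector is a nearest-neighbor rule against labeled prototypes; the feature values below are purely illustrative, not measurements from any real image:

```python
def nearest_class(feature, examples):
    """Assign the class of the closest labeled prototype (squared Euclidean)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(examples, key=lambda e: dist2(feature, e[0]))[1]

# (area, aspect_ratio) prototypes -- hypothetical numbers for the ocean scene:
examples = [((400.0, 6.0), "ship"),
            ((30.0, 2.0), "small boat"),
            ((1e6, 1.0), "water body")]
print(nearest_class((350.0, 5.0), examples))  # -> "ship"
```

Neural-network and rule-based classifiers mentioned in the text replace this distance rule with learned or hand-crafted decision boundaries.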
Another aspect of image processing involves compression and coding of
the visual information. With the growing demand for imaging applications,
the storage requirements of digital imagery are growing explosively. Compact
representation of image data, and their storage and transmission over
limited communication bandwidth, is a crucial and active area of development today.
Interestingly enough, image data generally contain a significant amount of superfluous
and redundant information in their canonical representation. Image
compression techniques help to reduce the redundancies in raw image data,
reducing both storage requirements and communication bandwidth.
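One elementary way to exploit such redundancy is run-length encoding, which collapses runs of identical pixel values; a minimal sketch with a toy scan line:

```python
def rle_encode(pixels):
    """Collapse runs of identical values into (value, count) pairs."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([p, 1])       # start a new run
    return [(v, c) for v, c in runs]

row = [255, 255, 255, 0, 0, 255]
print(rle_encode(row))  # -> [(255, 3), (0, 2), (255, 1)]
```

Practical standards combine such entropy-oriented steps with transforms that remove perceptual redundancy as well.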