Dr. Alexandre L. M. Levada
Digital Image Processing is an area of science that aims to develop and apply mathematical methods
in order to perform one of two tasks: improve the quality of images or videos for later visualization
by humans or extract relevant information from the scene being imaged for later computational processing.
With the rapid advancement of technology for digital image acquisition and storage, applications are
increasingly prevalent in areas such as medicine, remote sensing, and robotics.
Briefly speaking, image processing systems are divided into three major steps: acquisition, processing, and visualization. In the image acquisition stage, the data is captured by some device such as a digital camera, an MRI scanner, or another imaging sensor. In digitization, two processes are fundamental: sampling, which consists of the discretization of the spatial coordinates (x, y) and has a direct relation with the spatial resolution of the acquired image, and quantization, which is the discretization of the intensity values of the image. Usually, monochrome images are 8-bit (256 grayscale levels) and color images are 24-bit because they have 3 color channels: R, G, and B. Image compression methods are also very important in this step to ensure that storage is done in a computationally efficient way.
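The effects of sampling and quantization can be illustrated with a minimal sketch in NumPy; the synthetic gradient image, the subsampling factor, and the `quantize` helper below are illustrative choices, not part of the original text.

```python
import numpy as np

# Synthetic 8-bit grayscale image: a horizontal intensity gradient,
# 64 rows by 256 columns, with values 0..255.
img = np.tile(np.arange(256, dtype=np.uint8), (64, 1))

def quantize(image, bits):
    """Requantize an 8-bit image to 2**bits intensity levels."""
    levels = 2 ** bits
    step = 256 // levels
    # Integer division collapses each group of `step` gray values
    # onto a single representative level.
    return (image // step) * step

# Sampling: keeping every 4th pixel in each direction lowers the
# spatial resolution by a factor of 4.
subsampled = img[::4, ::4]

# Quantization: 4 bits per pixel gives 16 gray levels instead of 256.
quantized = quantize(img, 4)
```

Lowering either parameter degrades the image in a different way: coarse sampling produces blocky, pixelated results, while coarse quantization produces visible false contours in smooth regions.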
In the processing step, several operations can be applied through computational algorithms. Among the most common techniques, we can highlight filtering, enhancement, and restoration of images, as well as feature extraction. In filtering, the main purpose is to remove the various types of noise that may be present in the image or to detect edges, while enhancement seeks to improve image quality, for example by increasing contrast or sharpness. Methods for restoring images focus on eliminating degradations such as blurring, which can be caused by factors ranging from camera movement and loss of focus to atmospheric turbulence in the case of aerospace sensors. In feature extraction, the objective is to detect and quantify relevant information about the objects present in the scene through descriptors. With these descriptors, machine learning methods are able to compare and recognize different types of objects present in images.
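As one concrete instance of filtering, the sketch below implements a simple mean (box) filter with NumPy, which attenuates additive noise by replacing each pixel with the average of its neighborhood. The window size, the edge-padding strategy, and the synthetic noisy image are assumptions made for illustration.

```python
import numpy as np

def mean_filter(image, size=3):
    """Mean (box) filter: each output pixel is the average of a
    size x size neighborhood; borders are handled by edge padding."""
    pad = size // 2
    padded = np.pad(image.astype(float), pad, mode="edge")
    out = np.zeros(image.shape, dtype=float)
    # Sum the shifted copies of the padded image, one per window offset.
    for dy in range(size):
        for dx in range(size):
            out += padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
    return out / (size * size)

# A constant image corrupted by Gaussian noise: averaging pulls each
# pixel back toward the true local value, reducing the noise variance.
rng = np.random.default_rng(0)
noisy = 128 + rng.normal(0, 20, size=(32, 32))
smoothed = mean_filter(noisy)
```

The trade-off is that averaging also blurs edges; this is why edge-preserving filters (such as the median filter) are often preferred when the image contains sharp boundaries.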
Finally, the visualization step consists of the conversion of pixels/voxels into graphic representations. Mono- and multi-spectral images are displayed in grayscale and colored representations, respectively. The imaging device industry has advanced at a tremendous rate with the manufacture of ultra-high-resolution monitors and displays, such as today's 4K TVs.