IJIGSP Vol. 15, No. 2, Apr. 2023
Accessing semantically relevant data from a database is essential not only in commercial applications but also in medical imaging diagnosis. Representation of the query image and of the dataset by their features is the key factor in Content-Based Image Retrieval (CBIR). Texture, shape, and color are commonly used features for this purpose. Features extracted from pre-trained Convolutional Neural Networks (CNNs) are used to improve the performance of CBIR methods. In this work, we explore recent state-of-the-art CNNs pre-trained on very large datasets, known as Big Transfer networks. Features extracted from a Big Transfer network have higher discriminative power than features from many other pre-trained CNNs. The idea behind the proposed work is to demonstrate the effectiveness of Big Transfer features for image retrieval. Further, features extracted from different Big Transfer networks are concatenated to improve the performance of the proposed method: feature diversity supplemented with network diversity should ensure good discriminative power for image retrieval. This idea is supported by simulations on four datasets of varying size in terms of number of images and classes. As the feature size increases with concatenation, a dimensionality reduction algorithm, Principal Component Analysis (PCA), is applied. Several distance metrics are explored in this work. By properly choosing the pre-trained CNNs and the distance metric, higher mean Average Precision can be achieved. An ImageNet-21K pre-trained CNN and an Instagram pre-trained CNN are chosen in this work. A network pre-trained on the ImageNet-21K dataset is superior to networks trained on ImageNet-1K, as it has seen more classes and a wider variety of images. This is demonstrated by applying our algorithm on four datasets: COREL-100, CALTECH-101, FLOWER-17, and COIL-100.
Simulations are presented for various precisions (scopes) and distance metrics. Results are compared with existing algorithms, and the superiority of the proposed method in terms of mean Average Precision is shown.
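The retrieval pipeline described above, concatenating features from multiple pre-trained networks, reducing them with PCA, and ranking by a distance metric, can be sketched minimally as follows. Random vectors stand in for the Big Transfer embeddings, and the feature sizes, reduced dimensionality, and cosine distance are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project feature vectors onto the top principal components via SVD."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return (X - mean) @ Vt[:n_components].T

def cosine_rank(query, gallery):
    """Return gallery indices sorted by increasing cosine distance."""
    q = query / np.linalg.norm(query)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    return np.argsort(1.0 - g @ q)

rng = np.random.default_rng(0)
# Random stand-ins for features from two different pre-trained networks.
feats_a = rng.normal(size=(200, 512))
feats_b = rng.normal(size=(200, 256))
concat = np.hstack([feats_a, feats_b])       # feature concatenation
reduced = pca_reduce(concat, 64)             # PCA down to 64 dimensions
ranking = cosine_rank(reduced[0], reduced)   # retrieve for image 0
```

Any other distance metric (Euclidean, Manhattan, etc.) can be swapped into the ranking step without changing the rest of the pipeline.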
Breast cancer is one of the main causes of mortality for women around the world. This mortality rate could be reduced if breast cancer were diagnosed at an early stage. It is hard to determine the causes that may lead to the development of breast cancer, but predicting the probability of cancer is still important. The likelihood of breast cancer can be assessed using machine learning algorithms and routine diagnosis data. Although a variety of patient attributes are stored in cancer datasets, not all of them are important for predicting cancer. In such situations, feature selection approaches can be applied to keep only the pertinent feature set. In this research, a comprehensive analysis of Machine Learning (ML) classification algorithms with and without feature selection is performed for breast cancer prediction on the Wisconsin Breast Cancer Original (WBCO), Wisconsin Diagnosis Breast Cancer (WDBC), and Wisconsin Prognosis Breast Cancer (WPBC) datasets. We employed wrapper-based feature selection and three different classifiers: Logistic Regression (LR), Linear Support Vector Machine (LSVM), and Quadratic Support Vector Machine (QSVM). Experimental results show that the LR classifier with feature selection performs significantly better, with accuracies of 97.1% and 83.5% on the WBCO and WPBC datasets, respectively. On the WDBC dataset, the QSVM classifier without feature selection achieved an accuracy of 97.9%; these results outperform existing methods.
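As a rough illustration of wrapper-based feature selection with an LR classifier, the sketch below uses scikit-learn's bundled breast cancer data (which is the WDBC dataset) and its SequentialFeatureSelector. The number of selected features, the train/test split, and the cross-validation settings are assumptions, not the authors' protocol, and scaling is fit on the full data only for brevity.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)   # WDBC data
X = StandardScaler().fit_transform(X)        # sketch: scaler fit on all data
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

clf = LogisticRegression(max_iter=1000)
# Wrapper selection: greedily add the feature that most improves CV accuracy.
sel = SequentialFeatureSelector(clf, n_features_to_select=10,
                                direction="forward", cv=5)
sel.fit(X_tr, y_tr)
acc = clf.fit(sel.transform(X_tr), y_tr).score(sel.transform(X_te), y_te)
print(f"accuracy with 10 selected features: {acc:.3f}")
```

The same selector can wrap an SVM estimator to reproduce the LSVM/QSVM comparisons.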
Wavelets play a key role in many applications such as image representation and compression, the process of reducing the size in bytes of a digital image file for storage and transmission. This paper presents image representation using various wavelet transforms. In the proposed method, the wavelets applied to an image are compared by counting the number of approximation coefficients retained for representing the image, and a comparative analysis of the standard wavelets is presented. The paper mainly aims to identify the type of wavelet that retains the fewest approximation coefficients for representing a skin cancer image and gives the smallest compressed file size, evaluated using parameters such as Peak Signal to Noise Ratio (PSNR), Mean Square Error (MSE), Structural Similarity Index Measure (SSIM), and compression efficiency.
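The kind of comparison described, transforming an image, keeping only the approximation coefficients, reconstructing, and measuring PSNR/MSE, can be sketched with a hand-rolled one-level Haar transform, the simplest standard wavelet, rather than the full family of wavelets evaluated in the paper; the random image is a stand-in for a skin cancer image.

```python
import numpy as np

def haar2d(img):
    """One-level 2D Haar transform: approximation LL plus detail subbands."""
    a = (img[0::2] + img[1::2]) / 2.0    # row averages
    d = (img[0::2] - img[1::2]) / 2.0    # row differences
    return ((a[:, 0::2] + a[:, 1::2]) / 2.0, (a[:, 0::2] - a[:, 1::2]) / 2.0,
            (d[:, 0::2] + d[:, 1::2]) / 2.0, (d[:, 0::2] - d[:, 1::2]) / 2.0)

def ihaar2d(LL, LH, HL, HH):
    """Exact inverse of haar2d."""
    a = np.empty((LL.shape[0], LL.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = LL + LH, LL - LH
    d[:, 0::2], d[:, 1::2] = HL + HH, HL - HH
    out = np.empty((a.shape[0] * 2, a.shape[1]))
    out[0::2], out[1::2] = a + d, a - d
    return out

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((ref - test) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
img = rng.uniform(0, 255, size=(64, 64))     # stand-in for a real image
LL, LH, HL, HH = haar2d(img)
# "Compression": keep only the approximation coefficients.
rec = ihaar2d(LL, np.zeros_like(LH), np.zeros_like(HL), np.zeros_like(HH))
print(f"MSE {np.mean((img - rec) ** 2):.1f}, PSNR {psnr(img, rec):.2f} dB")
```

Repeating this with different wavelet families (e.g., via PyWavelets) and counting the retained approximation coefficients gives the comparison the paper performs.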
This paper uses a two-step method for edge detection based on a polynomial differentiation threshold applied to contrast-enhanced images. In the first step, to enhance image contrast, the mean absolute deviation and harmonic mean brightness values of the images are calculated. The mean absolute deviation is used to clip the histogram and restrict over-enhancement. The clipped histogram is divided in half, two sub-images are created and equalized, and they are combined into a final image that preserves image quality. The second step performs edge detection using a polynomial differentiation-based threshold on the contrast-improved images. Polynomial curve fitting is used to smooth the histogram data, and the index whose derivative value is nearest to zero is used to calculate the threshold for detecting edges. The significance of the proposed work lies in enhancing the contrast of low-light images so that edge lines can be extracted, achieving improved edge results in terms of various image quality metrics. Computed image quality metrics show that the suggested algorithm surpasses former methods in terms of the Edge-Based Contrast Measure (EBCM), Performance Ratio, F-Measure, and Edge-Strength Similarity-based image quality metric (ESSIM).
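The polynomial differentiation threshold step might be sketched as follows: fit a polynomial to the histogram data, differentiate it, and take the bin whose derivative value is nearest to zero as the threshold. The polynomial degree and the synthetic bimodal histogram are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def poly_threshold(hist, degree=8):
    """Fit a polynomial to histogram data, differentiate it, and return the
    bin index whose derivative value is nearest to zero."""
    xs = np.linspace(0.0, 1.0, len(hist))     # normalized bin positions
    coeffs = np.polyfit(xs, hist, degree)     # polynomial curve fitting
    deriv = np.polyval(np.polyder(coeffs), xs)
    return int(np.argmin(np.abs(deriv)))      # index nearest to zero slope

# Hypothetical bimodal gray-level histogram (two Gaussian-like modes).
x = np.arange(256, dtype=float)
hist = np.exp(-((x - 60) ** 2) / 400) + 0.8 * np.exp(-((x - 180) ** 2) / 600)
t = poly_threshold(hist)
print(f"threshold gray level: {t}")
```

Normalizing the bin positions before fitting keeps the Vandermonde system well conditioned for higher-degree polynomials.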
Image segmentation is one of the most important steps in computer vision and image processing: dividing the image into meaningful regions based on pixel similarity. We propose a new segmentation algorithm based on de-noising of images, since good segmentation results depend on noise-free images; in the presence of noise we may not get proper segmentation results, so an image pre-processing stage is necessary to de-noise the image, and the segmentation result depends on the pre-processing result. In this paper, we propose a new integrated de-noising and segmentation approach called Level Set Segmentation of Images using Block Matching Local SVD Operator Based Sparsity and TV Regularization (BMLSVD-TV). The proposed method is divided into two stages. In the first stage, images are de-noised by the BMLSVD-TV algorithm. De-noising is a crucial aspect of image processing, and a few factors must be kept in mind: smoothing the flat areas, safeguarding the edges without blurring, keeping the textures, and not creating new artifacts. The algorithm comprises block matching, updating of the basis vector, sparsity regularization, and TV regularization. Block matching searches for blocks that are similar to each other; after the matching blocks are grouped together, the data in the array exhibit a high level of correlation, and the sparse coefficients are gathered after adequate transformation. Most of the noise in the image is minimized in the sparsity regularization step by employing de-noising algorithms such as Block Matching 3D with fixed basis vectors. The TV regularization step retains the edge information and produces a piecewise-smooth image. In the second stage, a contour is created on the de-noised image and evolved according to the defined Level Set Function (LSF).
This combined approach segments image regions better than existing level set methods: compared with state-of-the-art level set methods, the proposed segmentation method is superior in terms of number of iterations, CPU time, and area covered. With this model, we obtained a good-quality restored image from the noisy image, with quality assessed by two important parameters, PSNR and Mean Square Error (MSE); a higher PSNR and lower MSE indicate better image quality, and the proposed de-noising method attains higher PSNR values than existing methods. Image de-noising is a key component wherever recovering the original image content is essential for effective performance, and it is used in a variety of applications, including image restoration, visual tracking, image registration, image segmentation, and image classification. Compared with the other models in the field, this model gives the most accurate segmentation of objects based on de-noised images.
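As an illustration of the TV regularization step alone (not the full BMLSVD-TV pipeline), a smoothed-TV gradient-descent denoiser can be sketched as below; the regularization weight, step size, iteration count, and smoothing constant are assumed values chosen for stability, not the paper's parameters.

```python
import numpy as np

def tv_denoise(noisy, lam=0.2, dt=0.05, n_iter=100, eps=1e-2):
    """Gradient descent on lam * TV_eps(u) + 0.5 * ||u - f||^2, where
    TV_eps is a smoothed total variation (a sketch of the TV step only)."""
    u = noisy.copy()
    for _ in range(n_iter):
        ux, uy = np.gradient(u)
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps)
        # Divergence of the normalized gradient field (curvature term):
        # this smooths flat areas while penalizing edges only weakly.
        div = np.gradient(ux / mag, axis=0) + np.gradient(uy / mag, axis=1)
        u += dt * (lam * div - (u - noisy))   # smoothing + data fidelity
    return u

rng = np.random.default_rng(0)
clean = np.zeros((64, 64)); clean[16:48, 16:48] = 1.0   # a simple object
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
denoised = tv_denoise(noisy)
```

The de-noised output would then be handed to the level set stage for contour evolution.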
Realistic knowledge of rainfall characteristics and modeling parameters such as size, shape, and drop size distribution is essential in numerous areas of scientific, engineering, industrial, and technological applications. Key application areas include, but are not limited to, microphysics analysis of precipitation composition, weather prediction, signal attenuation forecasting, signal processing, remote sensing, radar meteorology, stormwater management, and cloud photo detection. In this contribution, the influence of rain intensity on raindrop diameter and specific attenuation in Lokoja, a typical climate region of Nigeria, is investigated and reported. Three different rain rate classes derived from the observed rainfall depths have been explored for the raindrop size distribution analysis. The three-parameter lognormal and Weibull models were utilised to estimate the influence of rain rates on the drop sizes and specific rainfall attenuation in the study location. For the lognormal model, the maximum raindrop concentration occurred at a diameter of approximately 1 mm before trending downward as the drop diameter increases. For the Weibull model, the maximum raindrop concentration occurred at a different drop diameter for each of the three rain rate classes, before the concentration declined with increasing raindrop diameter. For both models, the highest raindrop concentration values, attained in correspondence with the specific rain attenuation, were produced by drop diameters not exceeding 2.5 mm. In terms of the connection between rain rate, specific attenuation, and frequency, the results disclose that attenuation of propagated electromagnetic waves increases with increasing rainfall depth and increasing operating frequency bands.
The results also disclose that the specific attenuation is directly proportional to the rain intensity level in correspondence with the operating frequency. As a case in point, at a frequency of 4 GHz, an attenuation level of about 20 dB/km is attained for mean, minimum, and maximum rain rates of 29.12, 12.23, and 50.22 mm/hr, respectively. As the frequency increases from 4 GHz to 20 GHz, the attenuation level almost doubles, from 20 to 45 dB/km, at the same rain rates. This is because at higher radio/microwave frequencies the wavelength of the propagated electromagnetic waves approaches the mean diameter of the raindrops. The results display a gradual increase in attenuation level as the raindrop diameter and rain intensity increase. The attenuation grows because the raindrops interfere with, distort, absorb, and scatter a major portion of the microwave energy. However, the increase in attenuation becomes slower at larger raindrop sizes, tending toward logarithmic stability; this may suggest that the attenuation level reaches an equilibrium state at higher raindrop diameters. The outcome of this work can assist microwave communication engineers and relevant stakeholders in the telecommunication sector with the information needed to manage specific attenuation problems over Earth-space communication links, particularly during rainy seasons.
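The reported dependence of specific attenuation on rain rate and frequency follows the familiar power-law form γ = k·R^α (dB/km). The sketch below illustrates that relation using the study's quoted rain rates but with placeholder k and α coefficients, not the values fitted for Lokoja.

```python
import numpy as np

def specific_attenuation(rain_rate, k, alpha):
    """Power-law specific attenuation gamma = k * R**alpha in dB/km."""
    return k * rain_rate ** alpha

# Minimum, mean, and maximum rain rates quoted in the study (mm/hr).
rates = np.array([12.23, 29.12, 50.22])
# k and alpha below are hypothetical placeholders, not fitted coefficients.
for freq_ghz, (k, alpha) in {4: (0.05, 1.1), 20: (0.12, 1.1)}.items():
    gamma = specific_attenuation(rates, k, alpha)
    print(f"{freq_ghz} GHz: {np.round(gamma, 2)} dB/km")
```

With k increasing with frequency, the sketch reproduces the qualitative trend reported above: attenuation grows with both rain rate and operating frequency.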
Aiming to design a novel image watermarking technique, this paper presents a method of image watermarking using the lifting wavelet transform (LWT), the discrete wavelet transform (DWT), and one-dimensional linear discriminant analysis. In this blind watermarking technique, statistical features of the watermarked image are incorporated to prepare the training and testing sets. Principal component analysis is then applied to reduce the obtained feature set, so the training time is reduced to the desired level and accuracy is enhanced. One-dimensional linear discriminant analysis is used for binary classification, as it classifies with good accuracy. The technique applies the DWT and LWT in two different watermarking schemes for the image transformation; both transforms give higher tolerance against image distortion than other conventional transforms. One of the significant challenges of a watermarking technique is maintaining a proper balance between robustness and imperceptibility. The proposed blind watermarking technique exhibits an imperceptibility of 43.70 dB for the Lena image with no attack for the first scheme (using LWT) and 44.71 dB for the second scheme (using DWT+LWT). The first watermarking scheme is tested for robustness and is found to perform well against most image attacks. The technique is compared with some existing similar watermarking methods, is found to be robust against most image attacks, and maintains excellent watermarked image quality.
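A wavelet-domain blind embedding step of the kind used by such schemes can be illustrated with a one-level Haar transform and parity quantization of the approximation coefficients. This is a generic sketch, not the paper's LWT/DWT + LDA scheme; the quantization step q and the use of Haar are assumptions.

```python
import numpy as np

def haar2d(img):
    """One-level 2D Haar transform (LL, LH, HL, HH subbands)."""
    a = (img[0::2] + img[1::2]) / 2.0
    d = (img[0::2] - img[1::2]) / 2.0
    return ((a[:, 0::2] + a[:, 1::2]) / 2.0, (a[:, 0::2] - a[:, 1::2]) / 2.0,
            (d[:, 0::2] + d[:, 1::2]) / 2.0, (d[:, 0::2] - d[:, 1::2]) / 2.0)

def ihaar2d(LL, LH, HL, HH):
    """Exact inverse of haar2d."""
    a = np.empty((LL.shape[0], LL.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = LL + LH, LL - LH
    d[:, 0::2], d[:, 1::2] = HL + HH, HL - HH
    out = np.empty((a.shape[0] * 2, a.shape[1]))
    out[0::2], out[1::2] = a + d, a - d
    return out

def embed(img, bits, q=8.0):
    """Blind embedding: quantize LL coefficients to even (bit 0) or odd
    (bit 1) multiples of the step q."""
    LL, LH, HL, HH = haar2d(img)
    flat = LL.ravel().copy()
    for i, b in enumerate(bits):
        flat[i] = (2.0 * np.round(flat[i] / (2.0 * q)) + b) * q
    return ihaar2d(flat.reshape(LL.shape), LH, HL, HH)

def extract(img, n_bits, q=8.0):
    """Blind extraction: read the bit back as coefficient parity."""
    LL = haar2d(img)[0]
    return [int(np.round(LL.ravel()[i] / q)) % 2 for i in range(n_bits)]

rng = np.random.default_rng(0)
img = rng.uniform(0, 255, size=(64, 64))
bits = rng.integers(0, 2, size=64)
marked = embed(img, bits)
recovered = extract(marked, len(bits))
```

A larger q makes the watermark more robust to distortion at the cost of imperceptibility, which is the robustness/imperceptibility trade-off discussed above.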