IJIGSP Vol. 10, No. 11, Nov. 2018
Cover page and Table of Contents: PDF (size: 244KB)
This paper presents a forensic approach to recognizing weapons by processing wound-pattern images with ensemble learning, providing an effective computational method for identifying the weapons used in most crime cases. It offers a computational and effective alternative for investigating the weapons used in a crime; the methodology uses collections of wound-pattern images from the human body for recognition. The ensemble learning used in the proposed methodology improves the accuracy of machine learning by combining several base methods and producing the final prediction with a meta-classifier. It yields a better recognition process than any single individual model or the traditional method. Ensemble learning is more flexible and performs better at recognizing wound patterns and their respective weapons, as it mitigates overfitting to the training data. Weapon recognition based on wound patterns achieves 98.34% accuracy on an existing database of 800 pattern images consisting of stab and gunshot wounds. Validation experiments show the superiority of the proposed method over the widespread feature-extraction approaches considered in the work, and false-positive versus false-negative recognition rates are also compared. The proposed methodology gives better results than the traditional method and will be helpful in forensic and crime investigation.[...]
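The meta-classifier ensemble described in this abstract can be sketched as a stacking ensemble. A minimal sketch, assuming unspecified details: the abstract does not name the base learners or the meta-classifier, so the decision tree, k-NN, SVM, and logistic-regression choices below are assumptions, and synthetic two-class data stands in for stab-wound vs. gunshot-wound features.

```python
# Minimal stacking-ensemble sketch. Base learners and meta-classifier are
# assumptions (the abstract does not name the models used); synthetic
# features stand in for wound-pattern descriptors.
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Two classes standing in for "stab wound" vs. "gunshot wound";
# 800 samples mirrors the database size mentioned in the abstract.
X, y = make_classification(n_samples=800, n_features=32, n_informative=12,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0)

ensemble = StackingClassifier(
    estimators=[("tree", DecisionTreeClassifier(random_state=0)),
                ("knn", KNeighborsClassifier()),
                ("svm", SVC(probability=True, random_state=0))],
    final_estimator=LogisticRegression(),  # the meta-classifier
)
ensemble.fit(X_tr, y_tr)
accuracy = ensemble.score(X_te, y_te)
```

The meta-classifier is trained on the base models' out-of-fold predictions, which is what lets stacking outperform each individual model while limiting overfitting.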
By adding depth data to color data, it is possible to increase recognition accuracy significantly. Depth images are mostly used to calculate the range, or distance, between the object and the sensor; they are also used to build 3-D models of objects and to increase accuracy. The recognition accuracy changes depending on the sensor's depth quality. Age estimation is useful for modeling aging effects from prior patterns recorded from subjects over the years. In this paper, age is estimated using the summation of the gray values of RGB image edges and the summation of the entropy of the depth image's edges. Furthermore, a new face detection and extraction method for depth images is presented, based on a standard-deviation filter, ellipse fitting, and some pre- and post-processing techniques. The advantages of this method are its speed and its ability to work from a single image: no learning or classification stage is needed. The proposed method is 10 to 20 times faster, though less accurate. The system is validated on several benchmark color and color-depth (RGB-D) face databases and, in comparison with other age estimation methods, returns satisfactory and promising results. Because of its high speed, the method can be used in real-time applications. Notably, this paper is the first age estimation study on RGB-D images.[...]
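The two scalar features this abstract describes can be sketched with NumPy. This is a sketch under stated assumptions: the abstract does not say which edge detector or entropy definition is used, so a finite-difference gradient-magnitude edge map and the Shannon entropy of the edge-magnitude histogram are assumed here, and random arrays stand in for real RGB-D face crops.

```python
import numpy as np

def edge_map(img):
    """Gradient-magnitude edge map (the edge detector is an assumption;
    the abstract does not name one)."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def rgb_edge_sum(rgb):
    """Summation of gray values over the edge pixels of an RGB image."""
    gray = rgb.astype(float).mean(axis=2)
    edges = edge_map(gray)
    return float(gray[edges > edges.mean()].sum())

def depth_edge_entropy_sum(depth, bins=32):
    """Shannon entropy of the depth image's edge-magnitude histogram
    (one possible reading of the abstract's "entropy edges")."""
    edges = edge_map(depth)
    hist, _ = np.histogram(edges, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Synthetic stand-ins for an aligned RGB-D face crop.
rng = np.random.default_rng(0)
rgb = rng.integers(0, 256, (64, 64, 3))
depth = rng.integers(0, 1024, (64, 64))
feature = (rgb_edge_sum(rgb), depth_edge_entropy_sum(depth))
```

Both quantities are single numbers per image, which is consistent with the abstract's claim that the method needs no learning or classification stage.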
The most significant parameters of image processing are image resolution and processing speed. Compressing multimedia datasets that are rich in quality and volume is challenging. Wavelet-based image compression techniques are among the best tools for lossless image compression; however, they suffer from a low compression ratio. Conversely, compression based on the fractional cosine transform is a lossy technique with lower image quality. In this paper, an improved compression technique using the wavelet transform and the discrete fractional cosine transform is proposed to achieve high-quality image reconstruction at a high compression rate. The algorithm uses the wavelet transform to decompose the image into a frequency spectrum with low- and high-frequency sub-bands. Applying two-level quantization to both sub-bands increases the number of zeros; the abundant zeros in the high-frequency sub-bands are then eliminated by forming blocks, storing only non-zero values, and discarding all-zero blocks to produce a reduced array. Arithmetic coding is used to encode the sub-bands. The experimental results of the proposed method are compared with its primitive two-dimensional fractional cosine and fractional Fourier compression algorithms, and significant improvements can be observed in peak signal-to-noise ratio and the structural similarity index at high compression ratios.[...]
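The decompose–quantize–prune pipeline in this abstract can be illustrated with a one-level 2-D Haar decomposition in plain NumPy. A minimal sketch, with assumptions: the quantization step and block size below are hypothetical parameters, the Haar wavelet stands in for whichever wavelet the paper uses, and the discrete fractional cosine transform and arithmetic coder are omitted.

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar decomposition into LL, LH, HL, HH sub-bands."""
    a = img.astype(float)
    lo = (a[:, 0::2] + a[:, 1::2]) / 2        # row-wise average
    hi = (a[:, 0::2] - a[:, 1::2]) / 2        # row-wise detail
    ll = (lo[0::2] + lo[1::2]) / 2
    lh = (lo[0::2] - lo[1::2]) / 2
    hl = (hi[0::2] + hi[1::2]) / 2
    hh = (hi[0::2] - hi[1::2]) / 2
    return ll, lh, hl, hh

def prune_zero_blocks(band, q=8, block=4):
    """Quantize a high-frequency sub-band, split it into blocks, and keep
    only the non-zero blocks (position, values), discarding all-zero
    blocks to form the reduced array described in the abstract."""
    qb = np.round(band / q).astype(int)
    kept = []
    h, w = qb.shape
    for i in range(0, h, block):
        for j in range(0, w, block):
            blk = qb[i:i + block, j:j + block]
            if blk.any():
                kept.append(((i, j), blk))
    total = (h // block) * (w // block)
    return kept, total

rng = np.random.default_rng(0)
img = np.cumsum(rng.normal(size=(64, 64)), axis=1)  # smooth-ish test image
ll, lh, hl, hh = haar2d(img)
kept, total = prune_zero_blocks(hh)
```

On smooth content the HH sub-band quantizes to almost all zeros, so very few blocks survive; the surviving (position, values) pairs would then be fed to the entropy coder.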
This research presents a framework to detect a questionable observer based on a specific activity named "frequent iris movement". We focus on activities and behaviors by which a person can be classified as questionable, so this research area is not only an important part of computer vision and artificial intelligence but also of human activity recognition (HAR). We use a Haar cascade classifier to detect the left and right irises. After some morphological operations, we locate the midpoint between the two irises; based on the characteristics of the midpoint's movement, we detect the specific activity, frequent iris movement. A person exhibiting this activity is declared a questionable observer. To validate this research we created our own dataset of 86 videos, with 15 volunteers. We achieved an accuracy of 90% on the first 100 frames (3.33 seconds) of each video and 93% on the first 150 frames (5.00 seconds). No prior work detects a questionable person on the basis of this specific activity, and our work outperforms most of the existing work on questionable observer detection and suspicious activity recognition.[...]
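The downstream decision rule, after the Haar cascade has located the irises, can be sketched as a simple rate test on the midpoint track. This is a hypothetical reconstruction: the abstract does not give the paper's thresholds, so `min_shift` and `max_changes_per_sec` below are made-up parameters, and the synthetic tracks stand in for midpoints extracted from real video frames.

```python
import numpy as np

def frequent_iris_movement(midpoints, fps=30.0, min_shift=5.0,
                           max_changes_per_sec=1.0):
    """Flag a track of iris-midpoint coordinates as "frequent iris
    movement". A movement event is a frame-to-frame midpoint shift larger
    than `min_shift` pixels; the thresholds are hypothetical, not the
    paper's. Returns True when events occur more often than
    `max_changes_per_sec`."""
    pts = np.asarray(midpoints, dtype=float)
    if len(pts) < 2:
        return False
    shifts = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    events = int((shifts > min_shift).sum())
    duration = len(pts) / fps
    return events / duration > max_changes_per_sec

# Synthetic tracks: a steady gaze vs. a jittery one (100 frames ~ 3.33 s
# at 30 fps, matching the evaluation window in the abstract).
steady = [(100, 80)] * 100
jittery = [(100 + (20 if i % 2 else 0), 80) for i in range(100)]
```

The 100-frame window at 30 fps matches the 3.33-second evaluation window reported in the abstract.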
Compression methods are increasingly used on medical images for efficient transmission and reduction of storage space. In this work, we propose a compression scheme for colored biomedical images based on vector quantization and orthogonal transforms. The vector quantization relies on machine learning algorithms (K-means and the splitting method). The discrete Walsh transform (DWaT) and the discrete Chebyshev transform (DChT) are the two orthogonal transforms considered. In a first step, the image is decomposed into sub-blocks, and the orthogonal transform is applied to each sub-block. A machine learning algorithm is used to calculate the cluster centers and generate the codebook used for vector quantization of the transformed image. Huffman encoding is applied to the indices resulting from the vector quantization. Parameters such as mean square error (MSE), mean absolute error (MAE), peak signal-to-noise ratio (PSNR), compression ratio, and compression and decompression time are analyzed. We observed that the proposed method achieves excellent image quality with a reduction in storage space. Using the proposed method, we obtained a compression ratio greater than 99.50 percent, and for some codebook sizes we obtained an MSE and MAE equal to zero. A comparison between the DWaT and DChT methods and methods from the existing literature is performed. The proposed method is well suited to biomedical images, which cannot tolerate distortion of the reconstructed image, because the slightest detail in the image may be important for diagnosis.[...]
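The codebook-based vector quantization step of this pipeline can be sketched in NumPy. A minimal sketch, with assumptions: a plain k-means (not the paper's splitting method), random vectors standing in for transformed 4x4 sub-blocks, and an assumed codebook size of 8; the Walsh/Chebyshev transforms and the Huffman coder are omitted.

```python
import numpy as np

def kmeans_codebook(vectors, k=8, iters=20, seed=0):
    """Plain k-means: the final cluster centers form the VQ codebook."""
    rng = np.random.default_rng(seed)
    centers = vectors[rng.choice(len(vectors), size=k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(vectors[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for c in range(k):
            if (labels == c).any():
                centers[c] = vectors[labels == c].mean(axis=0)
    return centers

def vq_encode(vectors, codebook):
    """Replace each vector by the index of its nearest codeword;
    these indices are what the Huffman coder would then compress."""
    d = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
    return d.argmin(axis=1)

rng = np.random.default_rng(0)
blocks = rng.normal(size=(256, 16))        # stand-in for transformed 4x4 sub-blocks
codebook = kmeans_codebook(blocks, k=8)
indices = vq_encode(blocks, codebook)
reconstructed = codebook[indices]           # decoder side: a table lookup
```

The decoder only needs the codebook and the index stream, which is where the very high compression ratios quoted in the abstract come from; MSE of zero is possible when every sub-block coincides with a codeword.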
This paper focuses mainly on feature extraction by analytic and holistic methods and proposes a novel feature-level fusion approach for efficient expression recognition. The Gabor filter magnitude feature vector is fused with geometrical features of the upper face, and the phase feature vector is fused with geometrical features of the lower face. Both of these high-dimensional feature datasets are projected into a low-dimensional subspace to de-correlate redundant feature data while preserving the local and global discriminative features of the expression classes of the JAFFE, YALE and FD databases. The effectiveness of the subspace of the fused dataset is measured for different dimensional parameters of the Gabor filter. The experimental results reveal that the subspace approaches applied to the proposed high-dimensional feature-level-fused dataset perform favorably compared with state-of-the-art approaches.[...]
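The extract–fuse–project pipeline can be illustrated in NumPy. A sketch under stated assumptions: the kernel parameters, the 32x32 image size, the 8-dimensional geometrical vector, and the use of plain PCA for the subspace projection are all assumptions standing in for the paper's unspecified settings, and random arrays replace real face images.

```python
import numpy as np
from numpy.fft import fft2, ifft2

def gabor_kernel(size=15, wavelength=6.0, theta=0.0, sigma=4.0):
    """Complex Gabor kernel: a Gaussian envelope times a complex carrier.
    Magnitude and phase features come from the complex filter response."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.exp(1j * 2 * np.pi * xr / wavelength)

def gabor_features(img, kernel):
    """Filter via FFT; return magnitude and phase feature vectors."""
    k = np.zeros_like(img, dtype=complex)
    kh, kw = kernel.shape
    k[:kh, :kw] = kernel
    resp = ifft2(fft2(img) * fft2(k))
    return np.abs(resp).ravel(), np.angle(resp).ravel()

def pca_project(X, dim=10):
    """Project the fused high-dimensional features into a low-dimensional
    subspace (plain PCA via SVD, standing in for the subspace methods)."""
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:dim].T

rng = np.random.default_rng(0)
faces = rng.random((20, 32, 32))    # synthetic stand-in face images
geom = rng.random((20, 8))          # stand-in geometrical features
kern = gabor_kernel()
# Fuse magnitude features with the geometrical vector, then project.
fused = np.array([np.concatenate([gabor_features(f, kern)[0], g])
                  for f, g in zip(faces, geom)])
low = pca_project(fused, dim=10)
```

Only the magnitude/geometry fusion is shown; the paper's phase/lower-face fusion follows the same concatenate-then-project pattern.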
The subject of this scientific paper is website navigation, with special emphasis on the analysis of different types of navigation systems. An important part of my paper analyzes the characteristics of various navigation structures on web pages, as well as the technical methods of displaying navigation on different users' computers. Users may have different browsers, different operating systems, and different preferences in their computers' settings, and all of these technical issues affect how web pages look on the user's computer. I also describe how navigation structures are rendered on the user's monitor. A special overview covers the correlation between web navigation and all the other graphic elements of a web page from the point of view of the visual harmony of the website. Additionally, I give overall directions for choosing a navigation type and its characteristics when designing websites, together with opinions and advice on the same topic. Finally, I analyze twelve problems that arise from displaying website navigation on the user's computer, offer solutions for each of them, and give recommendations for when to choose which solution.[...]