Workplace: Department of Computer Science, Kwame Nkrumah University of Science and Technology
Research Interests: Image Processing, Image Manipulation, Image Compression, Computer Vision
Dr James Ben Hayfron-Acquah is a Senior Lecturer. He received the BSc degree in Computer Science from the Kwame Nkrumah University of Science and Technology (KNUST), Kumasi, Ghana in 1991, his MSc Computer Science and Applications degree from Shanghai University of Science and Technology, Shanghai, China in 1996 and his PhD from the University of Southampton, Southampton, England in 2003. He is currently a Senior Lecturer and Head of the Department of Computer Science, KNUST. He has over 50 publications to his credit. His main research areas include Biometrics, Cloud Computing, Image Processing and Computer Vision.
DOI: https://doi.org/10.5815/ijigsp.2018.03.04, Pub. Date: 8 Mar. 2018
Generating a histogram from a given image is common practice in the image processing domain. The statistical information a histogram provides enables many pre-processing tasks in image processing and computer vision, and the statistical subtasks of most algorithms can be computed efficiently once the histogram of the image is known. Quantities such as the mean, median, mode, variance and standard deviation are easily derived when the histogram of a dataset is available, and image brightness estimation, entropy calculation, contrast enhancement, threshold value estimation and image compression models all rely on histograms. The challenge is that as the size of the image increases, so does the time needed to traverse all of its elements, which results in high computational time complexity for algorithms that employ histogram generation as a subtask. In general, the time complexity of histogram generation can be estimated as O(N²) when the height and width of the image are roughly equal. This paper proposes an approximation method for histogram generation that significantly reduces the time needed to complete the task while producing histograms acceptable for further processing. The method can theoretically reduce the computation time to a fraction of that of the exact method and still generate outputs of an acceptable level for algorithms such as Histogram Equalization (HE) for contrast enhancement and Otsu automatic threshold estimation.
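The abstract does not spell out the approximation itself, but one standard way to trade accuracy for speed is to build the histogram from a subsampled pixel grid and rescale the counts. The sketch below illustrates that idea only; the function name, the `step` parameter and the uniform-sampling scheme are assumptions for illustration, not the paper's published method.

```python
import numpy as np

def approximate_histogram(image, step=4, bins=256):
    """Estimate a grayscale histogram by visiting every `step`-th pixel
    in each dimension instead of traversing all N*N pixels.

    This cuts the work from O(N^2) to roughly O(N^2 / step^2); the
    counts are scaled back up so totals stay comparable to the exact
    histogram, which keeps the result usable for tasks like HE or
    Otsu thresholding that depend on the histogram's shape.
    """
    sample = image[::step, ::step]                     # subsampled grid
    hist, _ = np.histogram(sample, bins=bins, range=(0, bins))
    return hist * (step * step)                        # rescale counts

# Example on a synthetic 512x512 grayscale image
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(512, 512), dtype=np.uint8)
approx = approximate_histogram(img, step=4)
# approx.sum() equals 512*512: the rescaled counts cover the full image
```

With `step=4` only one pixel in sixteen is read, so the traversal cost drops by the same factor while the histogram's overall shape is preserved for sufficiently large images.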
DOI: https://doi.org/10.5815/ijigsp.2017.02.01, Pub. Date: 8 Feb. 2017
Image processing techniques for object tracking, identification and classification have become common today as a result of improved camera quality and steadily falling camera prices. Cameras also make human analysis of video streams or images possible where it is difficult for robots, algorithms or machines to deal with the images effectively. However, using cameras for basic tracking and analysis brings its own challenges, such as sudden changes in illumination, shadows, occlusion, noise, and the high computational time and space complexities of the algorithms involved. A typical image processing task may involve several subtasks, such as capturing and pre-processing, which demand substantial computational resources to complete. One of the main pre-processing tasks in image processing is image segmentation, which divides an image into regions of interest so that analysis can be performed on them. Background Subtraction is commonly used to segment images into background and foreground for further processing. Algorithms that produce highly accurate results during this segmentation normally demand long computation times or large memory space, while algorithms that use less memory and complete the task faster may suffer from limitations that lead to undesired results at some point in time. Poor algorithm outputs will eventually lead to system failure, which must be avoided as much as possible. This paper proposes a median-based background updating algorithm which determines the median of a buffer containing values that are highly correlated. The algorithm achieves this by deleting an extreme value from the buffer whenever data is to be added to it. Experiments show that the method produces good results with less computational time, which makes it possible to implement on devices that do not have many computational resources.
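The core idea stated in the abstract — keep a per-pixel buffer whose median is the background estimate, and evict an extreme value each time a new observation arrives — can be sketched as below. The eviction rule shown (drop whichever extreme lies farther from the current median) and the function names are assumptions chosen to keep the buffer tightly correlated; the published algorithm's exact rule may differ.

```python
def update_buffer(buffer, new_value):
    """Add `new_value` to a per-pixel buffer after evicting one extreme.

    Evicting the extreme farthest from the current median discards
    outliers (e.g. transient foreground or noise), so the buffer stays
    highly correlated and its median tracks the true background.
    NOTE: this eviction rule is an illustrative assumption, not
    necessarily the paper's exact method.
    """
    buffer = sorted(buffer)
    median = buffer[len(buffer) // 2]
    # Drop whichever extreme (min or max) lies farther from the median.
    if median - buffer[0] >= buffer[-1] - median:
        buffer.pop(0)      # low extreme is farther: evict it
    else:
        buffer.pop()       # high extreme is farther: evict it
    buffer.append(new_value)
    return buffer

def background_value(buffer):
    """The current background estimate is the buffer's median."""
    return sorted(buffer)[len(buffer) // 2]

# Example: a buffer polluted by one bright foreground sample (50)
buf = [10, 12, 11, 13, 50]
buf = update_buffer(buf, 12)   # the outlier 50 is evicted first
# background_value(buf) -> 12
```

Because each update touches only the buffer's two extremes and its median, the per-pixel cost stays small, which matches the abstract's goal of running on devices with limited computational resources.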