Vo Hoai Viet

Work place: University of Science, Ho Chi Minh City, 700000, Viet Nam

E-mail: vhviet@fit.hcmus.edu.vn


Research Interests: Programming Language Theory, Image Processing, Image Manipulation, 2D Computer Graphics, Computer Graphics and Visualization, Computer Vision, Computer systems and computational processes


Vo Hoai Viet has been a Lecturer and Senior Researcher at the University of Science, VNU-HCMC, Vietnam since 2012, where he currently works on Computer Vision. His research interests include Digital Image Processing, Programming Languages, Computer Graphics, Computer Vision, and Machine Learning.

Author Articles
Object Tracking: An Experimental and Comprehensive Study on Vehicle Object in Video

By Vo Hoai Viet, Huynh Nhat Duy

DOI: https://doi.org/10.5815/ijigsp.2022.01.06, Pub. Date: 8 Feb. 2022

Tracking objects in camera footage or video is essential for automated surveillance systems, and advances in object tracking research have steadily improved such systems. Given an input frame containing the object to be tracked, together with that object's location in the frame, the output is a prediction of the object's position in the next frame. This paper presents a comparison of, and experiments with, several traditional object tracking methods, along with suggested improvements combining them. First, we surveyed related studies and traditional object tracking models. Second, we examined image and video datasets for verification purposes. Third, we experimented with related research works on traditional object tracking, evaluated the existing models, identified what the current models do and do not achieve, and proposed improvements based on combinations of the traditional methods. Finally, we aggregated these results to evaluate each type of object tracking model. The results show that the Particle Filter method has the highest CDT, with a TO score of 0.907971 on the VOT dataset and 0.866259 on the UAV123 dataset. However, the most stable are the two hybrid methods: the Particle Filter based on Mean Shift has a TF score of 31.1 on the VOT dataset, and the Kalman Filter based on Mean Shift has a TME score of 28.8233 on the UAV dataset. Because low-level features cannot represent all the information about an object being tracked, we conclude that combining a deep learning network and high-level features in the tracking model can bring better performance in the future.

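The predict / weight / resample cycle at the core of the Particle Filter tracker compared above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the random-walk motion model, the Gaussian distance-based likelihood (standing in for a real appearance model such as a colour histogram), and all parameter values are illustrative assumptions.

```python
import numpy as np

np.random.seed(0)

def particle_filter_step(particles, weights, observe, motion_std=5.0):
    """One predict / weight / resample cycle of a basic particle filter.

    particles : (N, 2) candidate (x, y) object positions
    observe   : maps a position to an observation likelihood
    """
    n = len(particles)
    # Predict: random-walk motion model (an illustrative assumption).
    particles = particles + np.random.normal(0.0, motion_std, particles.shape)
    # Update: weight each particle by how well it explains the observation.
    weights = np.array([observe(p) for p in particles])
    weights /= weights.sum()
    # Resample: systematic resampling concentrates particles on likely states.
    positions = (np.arange(n) + np.random.uniform()) / n
    idx = np.minimum(np.searchsorted(np.cumsum(weights), positions), n - 1)
    return particles[idx], np.full(n, 1.0 / n)

# Toy run: the "object" sits at (50, 60); likelihood decays with distance.
target = np.array([50.0, 60.0])
observe = lambda p: np.exp(-np.sum((p - target) ** 2) / (2 * 10.0 ** 2))

particles = np.random.uniform(0.0, 100.0, size=(200, 2))
weights = np.full(200, 1.0 / 200)
for _ in range(20):
    particles, weights = particle_filter_step(particles, weights, observe)

print(np.round(particles.mean(axis=0)))  # estimate settles near the target
```

A real tracker would replace the toy likelihood with a patch-similarity score against the template from the first frame; the hybrid methods in the paper additionally use Mean Shift to refine each prediction.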
Spatial-Temporal Shape and Motion Features for Dynamic Hand Gesture Recognition in Depth Video

By Vo Hoai Viet, Nguyen Thanh Thien Phuc, Pham Minh Hoang, Liu Kim Nghia

DOI: https://doi.org/10.5815/ijigsp.2018.09.03, Pub. Date: 8 Sep. 2018

Human-Computer Interaction (HCI) is one of the most interesting and challenging research topics in the computer vision community. Among HCI methods, hand gestures are a natural way of interacting with computers and have attracted many researchers, as they allow humans to interact with machines easily and conveniently through hand movements. With the advent of depth sensors, many new techniques have been developed and have achieved notable results. In this work, we propose a set of features extracted from depth maps for dynamic hand gesture recognition. We extract HOG2 to represent the shape and appearance of the hand in a gesture. Moreover, to capture the movement of the hands, we propose a new feature named HOF2, which is extracted from an optical flow algorithm. These spatial-temporal descriptors are easy to understand and implement yet perform very well in multi-class classification. They also have a low computational cost, making them suitable for real-time recognition systems. Furthermore, we apply Robust PCA to reduce the features' dimensionality and build robust, compact gesture descriptors. The results are evaluated with a cross-validation scheme using an SVM classifier, which achieves good outcomes on the challenging MSR Hand Gestures Dataset and the VIVA Challenge Dataset, with accuracies of 95.51% and 55.95%, respectively.

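The HOG-family descriptors named above are built from histograms of gradient orientations. A minimal sketch of that building block is shown below; it omits the cell grid, block normalisation, and the second-level histogram that distinguishes HOG2, and the bin count and patch handling are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

def orientation_histogram(patch, bins=9):
    """L2-normalised histogram of gradient orientations over one patch --
    the basic building block behind HOG-style descriptors like HOG2."""
    gy, gx = np.gradient(patch.astype(float))     # vertical, horizontal gradients
    mag = np.hypot(gx, gy)                        # gradient magnitude
    ang = np.mod(np.arctan2(gy, gx), np.pi)       # unsigned orientation in [0, pi)
    edges = np.linspace(0.0, np.pi, bins + 1)
    idx = np.clip(np.digitize(ang, edges) - 1, 0, bins - 1)
    # Accumulate magnitude-weighted votes into orientation bins.
    hist = np.bincount(idx.ravel(), weights=mag.ravel(), minlength=bins)
    return hist / (np.linalg.norm(hist) + 1e-9)

# A patch with a purely horizontal intensity ramp: every gradient points
# along x, so all the energy falls into the first orientation bin.
patch = np.tile(np.arange(16.0), (16, 1))
h = orientation_histogram(patch)
print(h.argmax())  # dominant orientation is bin 0 (angle ~ 0)
```

The paper's HOF2 applies the same histogramming idea to optical-flow vectors instead of image gradients, capturing motion rather than shape; concatenated descriptors are then compressed with Robust PCA before SVM classification.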
Other Articles