Shanmukhappa Angadi

Work place: Department of Computer Science and Engineering, Centre for Post Graduate Studies, VTU, Belagavi, India



Research Interests: Information-Theoretic Security, Data Structures and Algorithms, Image Processing, Pattern Recognition, Computational Science and Engineering


Shanmukhappa A. Angadi is a Professor in the Department of Computer Science and Engineering, Centre for PG Studies, Visvesvaraya Technological University, Belagavi, Karnataka, India. He earned a Bachelor's degree in Electronics and Communication Engineering from Karnataka University, Dharwad, a Master's degree in Computer Engineering from the University of Mysore, Mysore, and a PhD in Computer Science from the Department of Studies in Computer Science, University of Mysore, Karnataka, India. He also completed a Post Graduate Diploma in Operations Management (PGDOM) from IGNOU, New Delhi. He has completed many research and consulting projects of AICTE India under the RPS, MODROBS, and TAPTEC schemes, as well as under the research grant scheme of VTU, Belagavi. His research areas are image processing and pattern recognition, character recognition, soft computing, the Internet of Things, and graph-theoretic techniques. He is a co-author of a book on the C programming language. He is a life member of professional bodies such as ISTE and IETE.

Author Articles
Dynamic Summarization of Video Using Minimum Edge Weight Matching in Bipartite Graphs

By Shanmukhappa Angadi, Vilas Naik

DOI:, Pub. Date: 8 Mar. 2016

To decide among long-running videos from online archives and other collections, users would like to browse or skim quickly to get a hint of a video's semantic content. Video summarization addresses this problem by providing a short summary of a full-length video. An ideal video summary includes all the important segments of the video while remaining short. The summarization problem is extremely challenging and has been a widely pursued subject of recent research. Many algorithms in the literature construct summaries that represent the visual information of a video in concise form. Dynamic summaries are built from a collection of key frames or small segments extracted from the video and are presented as a short video clip. This paper describes an algorithm for constructing the dynamic summary of a video by modeling every 40 consecutive frames as a bipartite graph: the first 20 consecutive frames form one node set and the next 20 frames the second, with each frame a node, edges connecting nodes across the two sets, and edge weights given by the mutual information between frames. The minimum edge weight maximal matching (a set of pairwise non-adjacent edges) in every bipartite graph is then found using the Hungarian method. Frames incident to matched edges whose weight falls below an empirically defined threshold, together with their two neighboring frames, are taken as representative frames for the summary. Experiments on a dataset of sports videos taken from YouTube and on videos from the TRECVID MED 2011 dataset yield satisfactory average performance values: an Informativeness of 94% and a Satisfaction of 92%.
These values, together with the mean summary duration (MSD), show that the constructed summaries are significantly concise and highly informative, and provide a highly acceptable dynamic summary of the videos.
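The matching step at the heart of the abstract can be sketched as follows. This is not the paper's implementation: the edge-weight matrix below is a made-up placeholder (in the paper, weights come from the mutual information between frames of two 20-frame sets), and a brute-force search over permutations stands in for the Hungarian method, which solves the same minimum-weight assignment problem in O(n^3).

```python
from itertools import permutations

def min_weight_matching(cost):
    """Minimum-weight perfect matching on an n x n bipartite edge-weight
    matrix (rows = frames of the first set, columns = frames of the
    second set), found by exhaustive search for illustration only."""
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        total = sum(cost[i][perm[i]] for i in range(n))
        if total < best_cost:
            best_perm, best_cost = perm, total
    # Pairs (i, j): frame i of the first set matched to frame j of the second.
    return list(enumerate(best_perm)), best_cost

# Hypothetical 3x3 edge-weight matrix; entry [i][j] would be derived
# from the mutual information between frame i and frame j.
cost = [
    [4, 1, 3],
    [2, 0, 5],
    [3, 2, 2],
]
matching, total = min_weight_matching(cost)
print(matching, total)  # → [(0, 1), (1, 0), (2, 2)] 5
```

In the paper's pipeline, the matched pairs whose edge weight falls below the empirical threshold would then contribute their frames (plus two neighbors) to the summary clip.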
