Sagnik Sarkar

Workplace: Computer Science and Engineering, VIT University, Vellore 632014, India



Research Interests: Image Processing, Image Manipulation, Image Compression, Computer Graphics and Visualization, Computer Vision, Computational Learning Theory, Artificial Intelligence


Sagnik Sarkar is pursuing his Bachelor’s Degree in Computer Science and Engineering at Vellore Institute of Technology, India. His major fields of interest include Artificial Intelligence, Machine Learning, Computer Vision, Federated Learning, Image Processing, and Data Visualization, and he has worked on several research projects and papers in these areas. His recent work includes the application of metaheuristic algorithms to the training of Neural Networks and Federated Learning systems, as well as novel activation functions for Neural Networks.

Author Articles
Convolutional Neural Network (CNN-SA) based Selective Amplification Model to Enhance Image Quality for Efficient Fire Detection

By Sagnik Sarkar, Aditya Sunil Menon, Gopalakrishnan T, Anil Kumar Kakelli

DOI:, Pub. Date: 8 Oct. 2021

Fires spread quickly, are extremely difficult to contain, and cause a great deal of damage to people and property. Current domestic fire-detection systems, such as smoke detectors, are prone to reliability issues and would benefit greatly from a secondary system that confirms the presence of a fire on the premises. In this paper, we propose a novel image pre-processing algorithm known as Selective Amplification. The technique enhances images before they are used in Convolutional Neural Networks, which are then trained on the pre-processed images to detect fires with high accuracy. The efficacy of the proposed technique is verified by training two identical Convolutional Neural Network models on the same dataset of images: the proposed model is trained on a version of the dataset pre-processed with Selective Amplification, while the baseline model is trained on the dataset without any pre-processing. The proposed model improves the accuracy of real-time fire detection by 12.85% compared to the identical baseline model.
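The abstract does not describe the Selective Amplification algorithm itself, only its role as a pre-processing step before CNN training. As a rough illustration only, the sketch below uses a simple intensity-threshold boost as a stand-in for the actual technique; the `selective_amplify` function, its `threshold` and `gain` parameters, and the thresholding logic are all assumptions for demonstration, not the authors' method.

```python
import numpy as np

def selective_amplify(img, threshold=0.6, gain=1.5):
    """Placeholder for the paper's Selective Amplification step.

    This sketch simply boosts pixels above an intensity threshold
    (e.g. bright, fire-like regions) and leaves the rest untouched.
    `threshold` and `gain` are illustrative, not from the paper.
    Expects pixel values in [0, 1].
    """
    img = img.astype(np.float32)
    mask = img > threshold
    # Amplify only the selected (bright) pixels, clipping to the valid range.
    return np.where(mask, np.clip(img * gain, 0.0, 1.0), img)

# Mirroring the paper's controlled comparison: the same image batch,
# once raw and once pre-processed, would feed two identical CNNs.
rng = np.random.default_rng(0)
raw_batch = rng.random((4, 64, 64, 3)).astype(np.float32)
amplified_batch = selective_amplify(raw_batch)
```

In the paper's setup, one CNN would then be trained on `amplified_batch`-style data and an identical CNN on `raw_batch`-style data, with detection accuracy compared between the two.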
