Workplace: Department of Electronics & Communication Engineering, Teerthanker Mahaveer University, Moradabad, Uttar Pradesh, India
E-mail: vibhorkrbhardwaj@gmail.com
Website:
Research Interests:
Biography
Dr. Vibhor Kumar Bhardwaj received his Ph.D. in Optical Feedback Interferometry and is an academic researcher in Electronics and Communication Engineering with over ten years of research and academic experience. His research focuses on self-mixing interferometry, adaptive and nonlinear signal demodulation, optical biosensing, and statistically robust sensor modeling. In recent years, he has emphasized the integration of artificial intelligence and machine learning with optical and electronic sensing systems, exploring physics-informed learning, data-driven interferometric feature extraction, and intelligent fault diagnosis. He has authored multiple SCI-indexed journal articles and international conference publications, with current work targeting explainable and computationally efficient AI models for high-precision industrial sensing.
By Rahul Vishnoi, Alka Verma, Vibhor Kumar Bhardwaj
DOI: https://doi.org/10.5815/ijisa.2026.01.06, Pub. Date: 8 Feb. 2026
Image dehazing is a critical preprocessing step in computer vision, enhancing visibility in degraded conditions. Conventional supervised methods often struggle with generalization and computational efficiency. This paper introduces a self-supervised image dehazing framework leveraging a depth-guided Swin Transformer with hybrid attention. The proposed hybrid attention explicitly integrates CNN-style channel and spatial attention with Swin Transformer window-based self-attention, enabling simultaneous local feature recalibration and global context aggregation. By combining a pre-trained monocular depth estimation model with a Swin Transformer architecture using shifted window attention, our method efficiently models global context and preserves fine details. Depth is used as a relative structural prior rather than a metric quantity, enabling robust guidance without requiring haze-invariant depth estimation. Experimental results on synthetic and real-world benchmarks demonstrate superior performance, with a PSNR of 23.01 dB and an SSIM of 0.879 on the RESIDE SOTS-Indoor dataset, outperforming classical dark-channel-prior (DCP) dehazing and recent self-supervised approaches such as SLAD, with PSNR gains of 2.52 dB over SLAD and 6.39 dB over DCP. Our approach also significantly improves object detection accuracy by 0.15 mAP@0.5 (+32.6%) under hazy conditions and achieves near real-time inference (≈35 FPS at 256×256 resolution on a single GPU), confirming the practical utility of depth-guided features. The SSIM of 0.879 on SOTS-Indoor further indicates strong structural and color fidelity for a self-supervised dehazing framework.
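The abstract describes the hybrid attention design only at a high level. The following is a minimal PyTorch sketch of how such a block could be composed, assuming a CBAM-style channel/spatial stage followed by Swin-style window self-attention; all module names, dimensions, and hyperparameters (e.g., HybridAttentionBlock, window_size=8, num_heads=4) are illustrative assumptions and are not taken from the paper's actual implementation.

```python
# Hedged sketch of a hybrid attention block: local channel/spatial
# recalibration followed by window-based self-attention for global context.
# This is an assumption-based illustration, not the authors' code.
import torch
import torch.nn as nn


class ChannelSpatialAttention(nn.Module):
    """CBAM-style channel + spatial attention (local recalibration)."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.channel_mlp = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        self.spatial_conv = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x):                      # x: (B, C, H, W)
        x = x * self.channel_mlp(x)            # channel recalibration
        avg_map = x.mean(dim=1, keepdim=True)  # (B, 1, H, W)
        max_map = x.amax(dim=1, keepdim=True)  # (B, 1, H, W)
        return x * self.spatial_conv(torch.cat([avg_map, max_map], dim=1))


class WindowSelfAttention(nn.Module):
    """Swin-style non-overlapping window multi-head self-attention."""
    def __init__(self, channels, window_size=8, num_heads=4):
        super().__init__()
        self.ws = window_size
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, x):                      # H and W divisible by window size
        B, C, H, W = x.shape
        ws = self.ws
        # Partition feature map into (B * num_windows, ws*ws, C) token sequences.
        windows = (
            x.view(B, C, H // ws, ws, W // ws, ws)
             .permute(0, 2, 4, 3, 5, 1)
             .reshape(-1, ws * ws, C)
        )
        out, _ = self.attn(windows, windows, windows)
        # Merge windows back to a (B, C, H, W) feature map.
        return (
            out.view(B, H // ws, W // ws, ws, ws, C)
               .permute(0, 5, 1, 3, 2, 4)
               .reshape(B, C, H, W)
        )


class HybridAttentionBlock(nn.Module):
    """Local (channel/spatial) attention followed by global window attention."""
    def __init__(self, channels, window_size=8, num_heads=4):
        super().__init__()
        self.local = ChannelSpatialAttention(channels)
        self.global_attn = WindowSelfAttention(channels, window_size, num_heads)

    def forward(self, x):
        x = x + self.local(x)        # residual local feature recalibration
        x = x + self.global_attn(x)  # residual global context aggregation
        return x


if __name__ == "__main__":
    feats = torch.randn(1, 64, 256, 256)          # e.g., hazy-image feature map
    print(HybridAttentionBlock(64)(feats).shape)  # torch.Size([1, 64, 256, 256])
```

In this sketch, the residual composition lets the convolutional stage recalibrate features locally before the window attention aggregates context, mirroring the "simultaneous local feature recalibration and global context aggregation" described in the abstract; a full Swin block would additionally alternate shifted windows, which is omitted here for brevity.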