SANTOSH KUMAR VIPPARTHI

@iitrpr.ac.in

Associate Professor, School of Artificial Intelligence and Data Engineering
Indian Institute of Technology Ropar (IIT Ropar)




https://researchid.co/skvipparthi

Dr. Santosh Kumar Vipparthi, with over 12 years of experience, is Head and Associate Professor at the School of Artificial Intelligence and Data Engineering (sAIDE), IIT Ropar. Previously, he served at IIT Guwahati’s School of Data Science and Artificial Intelligence and in MNIT’s Computer Science and Engineering Department. His research spans visual perception tasks, including object detection and underwater exploration, with publications in top journals such as IEEE-TIP. Dr. Vipparthi also serves on technical committees for major conferences. He is seeking motivated interns and PhD scholars; more details are available on his website.

RESEARCH INTERESTS

Computer Vision, Deep Learning, Facial Expression Recognition, Change Detection

80 Scopus Publications

1755 Scholar Citations

25 Scholar h-index

46 Scholar i10-index

Scopus Publications

  • A novel approach for image retrieval in remote sensing using vision-language-based image caption generation
    Prem Shanker Yadav, Dinesh Kumar Tyagi, and Santosh Kumar Vipparthi

    Springer Science and Business Media LLC

  • Probing Attention-Driven Normalizing Flow Network for Low-Light Image Enhancement
    Siddharth Singh, Nancy Mehta, K. N. Prakash, Santosh Kumar Vipparthi, and Subrahmanyam Murala

    Springer Nature Switzerland

  • Frequency Modulated Deformable Transformer for Underwater Image Enhancement
    Adinath Dukre, Vivek Deshmukh, Ashutosh Kulkarni, Shruti Phutke, Santosh Kumar Vipparthi, Anil B. Gonde, and Subrahmanyam Murala

    Springer Nature Switzerland

  • Attentive Color Fusion Transformer Network (ACFTNet) for Underwater Image Enhancement
    Mohd Ubaid Wani, Md Raqib Khan, Ashutosh Kulkarni, Shruti S. Phutke, Santosh Kumar Vipparthi, and Subrahmanyam Murala

    Springer Nature Switzerland

  • Fusing Image and Text Features for Scene Sentiment Analysis Using Whale-Honey Badger Optimization Algorithm (WHBOA)
    Prem Shanker Yadav, Dinesh Kumar Tyagi, and Santosh Kumar Vipparthi

    Springer Nature Switzerland

  • Triplet-set feature proximity learning for video anomaly detection
    Kuldeep Marotirao Biradar, Murari Mandal, Sachin Dube, Santosh Kumar Vipparthi, and Dinesh Kumar Tyagi

    Elsevier BV

  • Spectroformer: Multi-Domain Query Cascaded Transformer Network for Underwater Image Enhancement
    Md Raqib Khan, Priyanka Mishra, Nancy Mehta, Shruti S. Phutke, Santosh Kumar Vipparthi, Sukumar Nandi, and Subrahmanyam Murala

    IEEE
    Underwater images often suffer from color distortion, haze, and limited visibility due to light refraction and absorption in water. These challenges significantly impact autonomous underwater vehicle applications, necessitating efficient image enhancement techniques. To address these challenges, we propose a Multi-Domain Query Cascaded Transformer Network for underwater image enhancement. Our approach includes a novel Multi-Domain Query Cascaded Attention mechanism that integrates localized transmission features and global illumination features. To improve feature propagation from the encoder to the decoder, we propose a Spatio-Spectro Fusion-Based Attention Block. Additionally, we introduce a Hybrid Fourier-Spatial Up-sampling Block, which uniquely combines Fourier and spatial upsampling techniques to enhance feature resolution effectively. We evaluate our method on benchmark synthetic and real-world underwater image datasets, demonstrating its superiority through extensive ablation studies and comparative analysis. The testing code is available at: https://github.com/Mdraqibkhan/Spectroformer.

  • C²AIR: Consolidated Compact Aerial Image Haze Removal
    Ashutosh Kulkarni, Shruti S. Phutke, Santosh Kumar Vipparthi, and Subrahmanyam Murala

    IEEE
    Aerial image haze removal deals with improving the visibility and quality of images captured from aerial platforms such as drones and satellites. Aerial images are commonly used in applications such as environmental monitoring and disaster response, which usually require clean data for accurate functioning. However, atmospheric conditions such as haze or fog can significantly degrade the quality of these images, reducing their contrast, color saturation, and sharpness and making it difficult to extract meaningful information from them. Existing methods rely on computationally heavy, haze-density-specific (light, moderate, dense) architectures for aerial image dehazing. In light of these limitations, we propose a novel lightweight and consolidated approach for aerial image dehazing. In this approach, we propose a Density Aware Query Modulated Block for learning weather degradations in input features and guiding the restoration process. Further, we propose a Cross Collaborative Feed-Forward Block for learning to restore structures of varying sizes in the input images. Finally, we propose a Gated Adaptive Feature Fusion block to achieve inter-scale and intra-feature attentive fusion, effective for aerial image restoration. Extensive analysis on benchmark aerial image dehazing datasets and real-world images, along with detailed ablation studies, validates the effectiveness of the proposed approach. Further, we have analysed our method on another restoration task, underwater image enhancement, to demonstrate its wide applicability. The code is available at https://github.com/AshutoshKulkarni4998/C2AIR.

  • Multi-Medium Image Enhancement With Attentive Deformable Transformers
    Ashutosh Kulkarni, Shruti S. Phutke, Santosh Kumar Vipparthi, and Subrahmanyam Murala

    Institute of Electrical and Electronics Engineers (IEEE)

  • LUMINATE: Linguistic Understanding and Multi-Granularity Interaction for Video Object Segmentation
    Rahul Tekchandani, Ritik Maheshwari, Praful Hambarde, Satya Narayan Tazi, Santosh Kumar Vipparthi, and Subrahmanyam Murala

    IEEE

  • RefMOS: A Robust Referred Moving Object Segmentation framework based on text query
    Prafulla Saxena, Susim Mukul Roy, Dinesh Kumar Tyagi, Santosh Kumar Vipparthi, Subrahmanyam Murala, and R. Balasubramanian

    IEEE
    Referred moving object segmentation is a challenging task in automated video surveillance applications, as it requires additional information to learn representations of the object referred to by a natural language expression. When segmenting specific moving objects targeted by a text, suppressing other moving as well as stationary objects is crucial: a richer context must be learned that accounts for linguistic, spatial, and temporal features. In this work, we propose a robust referred moving object segmentation (RefMOS) framework to capture moving objects referred to by a text query. Most earlier state-of-the-art methods exploit a different type of supervision by treating video frames as images, but lack temporal information during processing. We propose an inter-frame movement detector (IFCD) module, which extracts the movement information between consecutive frames and helps integrate temporal information with spatial visual features. Language embedding is utilized to capture the information of referred moving objects in the text by extracting linguistic features from a pre-trained language model, i.e., BERT. Furthermore, cross-entropy loss and the SGD optimizer are used to train the network. Our RefMOS framework competes with state-of-the-art approaches and achieves 48.6 mean IoU on the Ref-DAVIS17 dataset.

  • Zero Reference based Low-light Enhancement with Wavelet Optimization
    Vivek Deshmukh, Adinath Dukre, Ashutosh Kulkarni, Prashant W. Patil, Santosh Kumar Vipparthi, Subrahmanyam Murala, and Anil Balaji Gonde

    IEEE
    Images captured in low-light conditions usually suffer from poor visibility, a high amount of noise, and little information stored in the dark image, which negatively impacts subsequent processing for outdoor computer vision applications. Presently, numerous deep-learning-based methods achieve superior performance with multi-exposure paired training data or additional information. However, obtaining multi-exposure data samples is a tedious task in real-world scenarios. To mitigate this challenge, we propose a zero-reference-based learnable wavelet approach for low-light image enhancement that requires no multi-exposure paired training data. Our approach learns to project the low-light image into a noise-free, similar-looking image, which is then enhanced using Retinex theory. Further, we propose a learnable wavelet block to remove the hidden noise amplified during enhancement, and introduce Gaussian-based supervision to improve image smoothness. Extensive experimental analysis on synthetic as well as real-world images, along with a thorough ablation study, demonstrates the effectiveness of our proposed method over the existing state-of-the-art methods for low-light image enhancement. The code is provided at https://github.com/vision-lab-sggsiet/Zero-Reference-based-Low-light-Enhancement-with-Wavelet-Optimization.

  • AeroDehazeNet: Exploiting Selective Multi-Scale Transformers for Aerial Image Dehazing
    Kartik Gonde, Prashant W. Patil, Santosh Kumar Vipparthi, Subrahmanyam Murala, Pramod Patil, and Vinod Kimbahune

    IEEE
    Remote sensing is the task of acquiring and analyzing useful information from satellite images captured at a great distance from the earth’s surface. These images are vulnerable to degradation due to the presence of mist or haze. Existing methods either make use of prior information to estimate haze-free images, or use CNN architectures based on generative adversarial networks (GANs) or Transformers. Though state-of-the-art transformer-based architectures have helped dehaze aerial images, they lack the ability to capture multi-scale dependencies of the image. Identifying this shortcoming, we propose AeroDehazeNet, a transformer-based network that captures multi-scale dependencies along with global dependencies of the image. Our network comprises three key components: (1) a multi-scale selective attention (MScA) network to attentively process the multi-scale information in an image, (2) a residual attention network (RAN) in the feed-forward network, responsible for distilling non-degraded features passed from MScA, and (3) a high frequency dominant skip connection (HFDS) block for passing diverse features (low frequency and high frequency), prominent with multi-scale edge features, from encoder levels to adjacent decoder levels. Extensive quantitative and qualitative comparisons with existing methods on synthetic and real-world data, plus an exhaustive ablation study, demonstrate the efficacy of our proposed network over transformer-based state-of-the-art architectures with comparatively fewer parameters and FLOPs. Testing code is available at https://github.com/KartikGonde/AeroDehazeNet.

  • Cross-centroid ripple pattern for facial expression recognition
    Monu Verma and Santosh Kumar Vipparthi

    Springer Science and Business Media LLC
    In this paper, we propose a new feature descriptor, Cross-Centroid Ripple Pattern (CRIP), for facial expression recognition. CRIP encodes the transitional pattern of a facial expression by incorporating a cross-centroid relationship between two ripples located at radii r1 and r2, respectively. These ripples are generated by dividing the local neighborhood region into subregions. Thus, CRIP has the ability to preserve macro- and micro-structural variations over an extensive region, which enables it to deal with side views and spontaneous expressions. Furthermore, gradient information between cross-centroid ripples provides the strength to capture prominent edge features in active patches: eyes, nose, and mouth, which define the disparities between different facial expressions. Cross-centroid information also provides robustness to irregular illumination. Moreover, CRIP utilizes the averaging behavior of pixels in subregions, which yields robustness in noisy conditions. The performance of the proposed descriptor is evaluated on seven comprehensive expression datasets covering challenging conditions such as age, pose, ethnicity, and illumination variations. The experimental results show that our descriptor consistently achieves better accuracy than existing state-of-the-art approaches.

  • NTIRE 2024 Image Shadow Removal Challenge Report
    Florin-Alexandru Vasluianu, Tim Seizinger, Zhuyun Zhou, Zongwei Wu, Cailian Chen, Radu Timofte, Wei Dong, Han Zhou, Yuqiong Tian, Jun Chen, et al.

    IEEE
    This work reviews the results of the NTIRE 2024 Challenge on Shadow Removal. Building on last year’s edition, the current challenge was organized in two tracks: one focused on increased-fidelity reconstruction, and a separate ranking for high-performing perceptual-quality solutions. Track 1 (fidelity) had 214 registered participants, with 17 teams submitting in the final phase, while Track 2 (perceptual) registered 185 participants, resulting in 18 final-phase submissions. Both tracks were based on data from the WSRD dataset, simulating interactions between self-shadows and cast shadows, with a large variety of represented objects, textures, and materials. Improved image alignment enabled increased-fidelity reconstruction, with restored frames mostly indistinguishable from the reference images for top-performing solutions.

  • SBI-DHGR: Skeleton-based intelligent dynamic hand gestures recognition
    Satya Narayan, Arka Prokash Mazumdar, and Santosh Kumar Vipparthi

    Elsevier BV

  • Neural Architecture Search for Image Dehazing
    Murari Mandal, Yashwanth Reddy Meedimale, M. Satish Kumar Reddy, and Santosh Kumar Vipparthi

    Institute of Electrical and Electronics Engineers (IEEE)

  • Deep Insights of Learning based Micro Expression Recognition: A Perspective on Promises, Challenges and Research Needs
    Monu Verma, Santosh Kumar Vipparthi, and Girdhari Singh

    Institute of Electrical and Electronics Engineers (IEEE)

  • Efficient neural architecture search for emotion recognition
    Monu Verma, Murari Mandal, Satish Kumar Reddy, Yashwanth Reddy Meedimale, and Santosh Kumar Vipparthi

    Elsevier BV

  • HyFiNet: Hybrid feature attention network for hand gesture recognition
    Gopa Bhaumik, Monu Verma, Mahesh Chandra Govil, and Santosh Kumar Vipparthi

    Springer Science and Business Media LLC

  • Preface



  • Blind Image Inpainting via Omni-dimensional Gated Attention and Wavelet Queries
    Shruti S. Phutke, Ashutosh Kulkarni, Santosh Kumar Vipparthi, and Subrahmanyam Murala

    IEEE
    Blind image inpainting is a crucial restoration task that does not demand additional mask information to restore corrupted regions. Yet it is a relatively unexplored research area, owing to the difficulty of discriminating between corrupted and valid regions. The few existing approaches for blind image inpainting sometimes fail to produce plausible inpainted images, since they follow the common practice of first predicting the corrupted regions and then inpainting them. To skip the corrupted-region prediction step and obtain better results, in this work we propose a novel end-to-end architecture for blind image inpainting consisting of a wavelet query multi-head attention transformer block and omni-dimensional gated attention. The proposed wavelet query multi-head attention in the transformer block provides encoder features, via processed wavelet coefficients, as the query to the multi-head attention. Further, the proposed omni-dimensional gated attention effectively provides all-dimensional attentive features from the encoder to the respective decoder. Our proposed approach is compared numerically and visually with existing state-of-the-art methods for blind image inpainting on different standard datasets. The comparative and ablation studies prove the effectiveness of the proposed approach for blind image inpainting. The testing code is available at: https://github.com/shrutiphutke/Blind_Omni_Wav_Net

  • NTIRE 2023 Image Shadow Removal Challenge Report
    Florin-Alexandru Vasluianu, Tim Seizinger, Radu Timofte, Shuhao Cui, Junshi Huang, Shuman Tian, Mingyuan Fan, Jiaqi Zhang, Li Zhu, Xiaoming Wei, et al.

    IEEE
    This work reviews the results of the NTIRE 2023 Challenge on Image Shadow Removal. The described solutions were proposed for a novel dataset capturing a wide range of object-light interactions. It consists of 1200 roughly pixel-aligned pairs of real shadow-free and shadow-affected images, captured in a controlled environment. The data was captured in a white-box setup, using professional equipment for lights and data acquisition sensors. The challenge had 144 registered participants, out of which 19 teams were compared in the final ranking. The proposed solutions extend the work on shadow removal, improving over the performance level of state-of-the-art methods.

  • RNAS-MER: A Refined Neural Architecture Search with Hybrid Spatiotemporal Operations for Micro-Expression Recognition
    Monu Verma, Priyanka Lubal, Santosh Kumar Vipparthi, and Mohamed Abdel-Mottaleb

    IEEE
    Existing neural architecture search (NAS) methods comprise linearly connected convolution operations and use an ample search space to search for task-driven convolutional neural networks (CNNs). These CNN models are computationally expensive and diminish the quality of receptive fields for tasks like micro-expression recognition (MER) with limited training samples. Therefore, we propose a refined neural architecture search strategy to search for a tiny CNN architecture for MER. In addition, we introduce a refined hybrid module (RHM) for the inner-level search space and an optimal path explore network (OPEN) for the outer-level search space. The RHM focuses on discovering optimal cell structures by incorporating a multilateral hybrid spatiotemporal operation space. Also, spatiotemporal attention blocks are embedded to refine the aggregated cell features. The OPEN search space aims to trace an optimal path between the cells to generate a tiny spatiotemporal CNN architecture instead of covering all possible tracks. The combination of the RHM and OPEN search spaces enables the NAS method to robustly search for and design an effective and efficient framework for MER. Compared with contemporary works, experiments reveal that RNAS-MER is capable of bridging the gap between NAS algorithms and MER tasks. Furthermore, RNAS-MER achieves new state-of-the-art performance on challenging MER benchmarks, including UAR of 0.8511, 0.7620, 0.9078 and 0.8235 on the COMPOSITE, SMIC, CASME-II and SAMM datasets respectively.

RECENT SCHOLAR PUBLICATIONS

  • A novel approach for image retrieval in remote sensing using vision-language-based image caption generation
    PS Yadav, DK Tyagi, SK Vipparthi
    Multimedia Tools and Applications 84 (6), 2985-3014 2025

  • Former-HGR: Hand Gesture Recognition with Hybrid Feature-Aware Transformer
    M Verma, G Gopalani, S Bharara, SK Vipparthi, S Murala, ...
    Authorea Preprints 2025

  • Attentive Color Fusion Transformer Network (ACFTNet) for Underwater Image Enhancement
    MU Wani, MR Khan, A Kulkarni, SS Phutke, SK Vipparthi, S Murala
    International Conference on Pattern Recognition, 308-324 2025

  • Probing Attention-Driven Normalizing Flow Network for Low-Light Image Enhancement
    S Singh, N Mehta, KN Prakash, SK Vipparthi, S Murala
    International Conference on Pattern Recognition, 137-151 2025

  • Frequency Modulated Deformable Transformer for Underwater Image Enhancement
    A Dukre, V Deshmukh, A Kulkarni, S Phutke, SK Vipparthi, AB Gonde, ...
    International Conference on Pattern Recognition, 121-136 2025

  • Phaseformer: Phase-based Attention Mechanism for Underwater Image Restoration and Beyond
    MD Khan, A Negi, A Kulkarni, SS Phutke, SK Vipparthi, S Murala
    arXiv preprint arXiv:2412.01456 2024

  • Fusing Image and Text Features for Scene Sentiment Analysis Using Whale-Honey Badger Optimization Algorithm (WHBOA)
    PS Yadav, DK Tyagi, SK Vipparthi
    International Conference on Pattern Recognition, 446-462 2024

  • Luminate: Linguistic Understanding and Multi-Granularity Interaction for Video Object Segmentation
    R Tekchandani, R Maheshwari, P Hambarde, SN Tazi, SK Vipparthi, ...
    2024 IEEE International Conference on Image Processing (ICIP), 4028-4034 2024

  • Triplet-set feature proximity learning for video anomaly detection
    KM Biradar, M Mandal, S Dube, SK Vipparthi, DK Tyagi
    Image and Vision Computing 150, 105205 2024

  • Multi-Medium Image Enhancement With Attentive Deformable Transformers
    A Kulkarni, SS Phutke, SK Vipparthi, S Murala
    IEEE Transactions on Emerging Topics in Computational Intelligence 2024

  • RefMOS: A Robust Referred Moving Object Segmentation framework based on text query
    P Saxena, SM Roy, DK Tyagi, SK Vipparthi, S Murala, ...
    2024 IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS) 2024

  • Deep Insights of Learning-Based Micro Expression Recognition: A Perspective on Promises, Challenges, and Research Needs
    SK Vipparthi, M Verma, G Singh
    2024

  • Neural Architecture Search for Image Dehazing
    M Reddy, SK Vipparthi, M Mandal, YR Meedimale
    2024

  • Cross-centroid ripple pattern for facial expression recognition
    M Verma, SK Vipparthi
    Multimedia Tools and Applications, 1-21 2024

  • SBI-DHGR: Skeleton-based intelligent dynamic hand gestures recognition
    AP Mazumdar, S Narayan, SK Vipparthi
    2024

  • WARMOS: Enhancing Weather-Affected Referred Moving Object Segmentation
    DK Tyagi, SK Vipparthi, S Murala
    Proceedings of the Asian Conference on Computer Vision, 102-114 2024

  • U-ENHANCE: Underwater Image Enhancement Using Wavelet Triple Self-Attention
    P Mishra, SK Vipparthi, S Murala
    Proceedings of the Asian Conference on Computer Vision, 84-101 2024

  • Occlusion Boundary Prediction and Transformer Based Depth-Map Refinement From Single Image
    P Hambarde, G Wadhwa, SK Vipparthi, S Murala, A Dhall
    ACM Transactions on Multimedia Computing, Communications and Applications 2024

  • C2AIR: consolidated compact aerial image haze removal
    A Kulkarni, SS Phutke, SK Vipparthi, S Murala
    Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision 2024

  • Spectroformer: Multi-domain query cascaded transformer network for underwater image enhancement
    R Khan, P Mishra, N Mehta, SS Phutke, SK Vipparthi, S Nandi, S Murala
    Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision 2024

MOST CITED SCHOLAR PUBLICATIONS

  • LEARNet: Dynamic imaging network for micro expression recognition
    M Verma, SK Vipparthi, G Singh, S Murala
    IEEE Transactions on Image Processing 29, 1618-1627 2019
    Citations: 148

  • An empirical review of deep learning frameworks for change detection: Model design, experimental frameworks, challenges and research needs
    M Mandal, SK Vipparthi
    IEEE Transactions on Intelligent Transportation Systems 23 (7), 6101-6122 2021
    Citations: 111

  • BerConvoNet: A deep learning framework for fake news classification
    M Choudhary, SS Chouhan, ES Pilli, SK Vipparthi
    Applied Soft Computing 110, 107614 2021
    Citations: 79

  • AVDNet: A small-sized vehicle detection network for aerial visual data
    M Mandal, M Shah, P Meena, S Devi, SK Vipparthi
    IEEE Geoscience and Remote Sensing Letters 17 (3), 494-498 2019
    Citations: 76

  • Color directional local quinary patterns for content based indexing and retrieval
    SK Vipparthi, SK Nagar
    Human-centric Computing and Information Sciences 4, 1-13 2014
    Citations: 70

  • MOR-UAV: A benchmark dataset and baselines for moving object recognition in UAV videos
    M Mandal, LK Kumar, SK Vipparthi
    Proceedings of the 28th ACM international conference on multimedia, 2626-2635 2020
    Citations: 65

  • 3DCD: Scene independent end-to-end spatiotemporal feature learning framework for change detection in unseen videos
    M Mandal, V Dhar, A Mishra, SK Vipparthi, M Abdel-Mottaleb
    IEEE transactions on image processing 30, 546-558 2020
    Citations: 64

  • NTIRE 2024 image shadow removal challenge report
    FA Vasluianu, T Seizinger, Z Zhou, Z Wu, C Chen, R Timofte, W Dong, ...
    Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition 2024
    Citations: 59

  • Local Gabor maximum edge position octal patterns for image retrieval
    SK Vipparthi, S Murala, SK Nagar, AB Gonde
    Neurocomputing 167, 336-345 2015
    Citations: 58

  • Local directional mask maximum edge patterns for image retrieval and face recognition
    SK Vipparthi, S Murala, AB Gonde, QMJ Wu
    IET Computer Vision 10 (3), 182-192 2016
    Citations: 52

  • Expert image retrieval system using directional local motif XoR patterns
    SK Vipparthi, SK Nagar
    Expert Systems with Applications 41 (17), 8016-8026 2014
    Citations: 51

  • Regional adaptive affinitive patterns (RADAP) with logical operators for facial expression recognition
    M Mandal, M Verma, S Mathur, SK Vipparthi, S Murala, D Kranthi Kumar
    IET Image Processing 13 (5), 850-861 2019
    Citations: 50

  • Scene independency matters: An empirical study of scene dependent and scene independent evaluation for CNN-based change detection
    M Mandal, SK Vipparthi
    IEEE Transactions on Intelligent Transportation Systems 23 (3), 2031-2044 2020
    Citations: 48

  • Hinet: Hybrid inherited feature learning network for facial expression recognition
    M Verma, SK Vipparthi, G Singh
    IEEE Letters of the Computer Society 2 (4), 36-39 2019
    Citations: 47

  • SSSDET: Simple short and shallow network for resource efficient vehicle detection in aerial scenes
    M Mandal, M Shah, P Meena, SK Vipparthi
    2019 IEEE international conference on image processing (ICIP), 3098-3102 2019
    Citations: 43

  • Challenges in time-stamp aware anomaly detection in traffic videos
    KM Biradar, A Gupta, M Mandal, SK Vipparthi
    arXiv preprint arXiv:1906.04574 2019
    Citations: 43

  • 3DFR: A swift 3D feature reductionist framework for scene independent change detection
    M Mandal, V Dhar, A Mishra, SK Vipparthi
    IEEE Signal Processing Letters 26 (12), 1882-1886 2019
    Citations: 37

  • HyFiNet: Hybrid feature attention network for hand gesture recognition
    G Bhaumik, M Verma, MC Govil, SK Vipparthi
    Multimedia Tools and Applications 82 (4), 4863-4882 2023
    Citations: 36

  • ExtriDeNet: an intensive feature extrication deep network for hand gesture recognition
    G Bhaumik, M Verma, MC Govil, SK Vipparthi
    The Visual Computer 38 (11), 3853-3866 2022
    Citations: 36

  • Multi-joint histogram based modelling for image indexing and retrieval
    SK Vipparthi, SK Nagar
    Computers & Electrical Engineering 40 (8), 163-173 2014
    Citations: 36