SANTOSH KUMAR VIPPARTHI

@iitrpr.ac.in

Assistant Professor, Department of Electrical Engineering
Indian Institute of Technology Ropar (IIT Ropar)

https://researchid.co/skvipparthi

Dr. Santosh Kumar Vipparthi, a senior member of IEEE, has 11+ years of experience in teaching and industry. Currently, he is an Assistant Professor in the Department of Electrical Engineering, Indian Institute of Technology Ropar (IIT Ropar). Before this, he served as an Assistant Professor in the Mehta Family School of Data Science and Artificial Intelligence at the Indian Institute of Technology Guwahati (IIT Guwahati) and in the Department of Computer Science and Engineering at Malaviya National Institute of Technology (MNIT), Jaipur (an Institute of National Importance and one of the top NITs, fully funded by the Ministry of Education, Government of India) (2013-2022). Dr. Vipparthi's research interests include computer vision and deep learning. He has successfully supervised 3 research scholars (PhD) and 11 PG students (M.Tech).

RESEARCH INTERESTS

Computer Vision, Deep Learning, Facial Expression Recognition, Change Detection

Scopus Publications: 63
Scholar Citations: 1410
Scholar h-index: 23
Scholar i10-index: 36

Scopus Publications

  • SBI-DHGR: Skeleton-based intelligent dynamic hand gestures recognition
    Satya Narayan, Arka Prokash Mazumdar, and Santosh Kumar Vipparthi

    Elsevier BV

  • Efficient neural architecture search for emotion recognition
    Monu Verma, Murari Mandal, Satish Kumar Reddy, Yashwanth Reddy Meedimale, and Santosh Kumar Vipparthi

    Elsevier BV

  • HyFiNet: Hybrid feature attention network for hand gesture recognition
    Gopa Bhaumik, Monu Verma, Mahesh Chandra Govil, and Santosh Kumar Vipparthi

    Springer Science and Business Media LLC

  • Blind Image Inpainting via Omni-dimensional Gated Attention and Wavelet Queries
    Shruti S. Phutke, Ashutosh Kulkarni, Santosh Kumar Vipparthi, and Subrahmanyam Murala

    IEEE
    Blind image inpainting is a crucial restoration task that does not require additional mask information to restore corrupted regions. Yet it remains a little-explored research area owing to the difficulty of discriminating between corrupted and valid regions. The few existing approaches to blind image inpainting sometimes fail to produce plausible inpainted images, since they follow the common practice of first predicting the corrupted regions and then inpainting them. To skip the corrupted-region prediction step and obtain better results, in this work we propose a novel end-to-end architecture for blind image inpainting consisting of a wavelet query multi-head attention transformer block and omni-dimensional gated attention. The proposed wavelet query multi-head attention in the transformer block provides encoder features, via processed wavelet coefficients, as the query to the multi-head attention. Further, the proposed omni-dimensional gated attention effectively provides all-dimensional attentive features from the encoder to the respective decoder. The proposed approach is compared numerically and visually with existing state-of-the-art methods for blind image inpainting on different standard datasets. The comparative and ablation studies prove the effectiveness of the proposed approach for blind image inpainting. The testing code is available at: https://github.com/shrutiphutke/Blind_Omni_Wav_Net

  • NTIRE 2023 Image Shadow Removal Challenge Report
    Florin-Alexandru Vasluianu, Tim Seizinger, Radu Timofte, Shuhao Cui, Junshi Huang, Shuman Tian, Mingyuan Fan, Jiaqi Zhang, Li Zhu, Xiaoming Wei, et al.

    IEEE
    This work reviews the results of the NTIRE 2023 Challenge on Image Shadow Removal. The described solutions were proposed for a novel dataset that captures a wide range of object-light interactions. It consists of 1200 roughly pixel-aligned pairs of real shadow-free and shadow-affected images, captured in a controlled environment. The data was captured in a white-box setup, using professional equipment for lighting and data acquisition. The challenge had 144 registered participants, of whom 19 teams were compared in the final ranking. The proposed solutions extend the work on shadow removal, improving over the performance of state-of-the-art methods.

  • RNAS-MER: A Refined Neural Architecture Search with Hybrid Spatiotemporal Operations for Micro-Expression Recognition
    Monu Verma, Priyanka Lubal, Santosh Kumar Vipparthi, and Mohamed Abdel-Mottaleb

    IEEE
    Existing neural architecture search (NAS) methods comprise linearly connected convolution operations and use an ample search space to search for task-driven convolutional neural networks (CNNs). These CNN models are computationally expensive and diminish the quality of receptive fields for tasks like micro-expression recognition (MER) with limited training samples. Therefore, we propose a refined neural architecture search strategy to search for a tiny CNN architecture for MER. In addition, we introduce a refined hybrid module (RHM) for the inner-level search space and an optimal path explore network (OPEN) for the outer-level search space. The RHM focuses on discovering optimal cell structures by incorporating a multilateral hybrid spatiotemporal operation space. Also, spatiotemporal attention blocks are embedded to refine the aggregated cell features. The OPEN search space aims to trace an optimal path between the cells to generate a tiny spatiotemporal CNN architecture instead of covering all possible tracks. The aggregate mix of the RHM and OPEN search spaces enables the NAS method to robustly search for and design an effective and efficient framework for MER. Compared with contemporary works, experiments reveal that RNAS-MER is capable of bridging the gap between NAS algorithms and MER tasks. Furthermore, RNAS-MER achieves new state-of-the-art performance on challenging MER benchmarks, including 0.8511, 0.7620, 0.9078 and 0.8235 UAR on the COMPOSITE, SMIC, CASME-II and SAMM datasets, respectively.

  • A multilane traffic and collision generator for IoV
    Anamika Satrawala, Arka Prokash Mazumdar, and Santosh Kumar Vipparthi

    Elsevier BV


  • AutoMER: Spatiotemporal Neural Architecture Search for Microexpression Recognition
    Monu Verma, M. Satish Kumar Reddy, Yashwanth Reddy Meedimale, Murari Mandal, and Santosh Kumar Vipparthi

    Institute of Electrical and Electronics Engineers (IEEE)
    Facial microexpressions offer useful insights into subtle human emotions. This unpremeditated emotional leakage exhibits the true emotions of a person. However, the minute temporal changes in video sequences are very difficult to model for accurate classification. In this article, we propose a novel spatiotemporal architecture search algorithm, AutoMER, for microexpression recognition (MER). Our main contribution is a new parallelogram design-based search space for efficient architecture search. We introduce a spatiotemporal feature module named 3-D singleton convolution for cell-level analysis. Furthermore, we present four such candidate operators and two 3-D dilated convolution operators to encode the raw video sequences in an end-to-end manner. To the best of our knowledge, this is the first attempt to discover 3-D convolutional neural network (CNN) architectures with a network-level search for MER. The models searched using the proposed AutoMER algorithm are evaluated over five microexpression datasets: CASME-I, SMIC, CASME-II, CAS(ME)2, and SAMM. The generated models quantitatively outperform the existing state-of-the-art approaches. AutoMER is further validated with different configurations, such as the downsampling rate factor, multiscale singleton 3-D convolution, parallelogram, and multiscale kernels. Overall, five ablation experiments were conducted to analyze the operational insights of the proposed AutoMER.

  • An Empirical Review of Deep Learning Frameworks for Change Detection: Model Design, Experimental Frameworks, Challenges and Research Needs
    Murari Mandal and Santosh Kumar Vipparthi

    Institute of Electrical and Electronics Engineers (IEEE)

  • Scene Independency Matters: An Empirical Study of Scene Dependent and Scene Independent Evaluation for CNN-Based Change Detection
    Murari Mandal and Santosh Kumar Vipparthi

    Institute of Electrical and Electronics Engineers (IEEE)
    Visual change detection in video is one of the essential tasks in computer vision applications. Recently, a number of supervised deep learning methods have achieved top performance over the benchmark datasets for change detection. However, inconsistent training-testing data division schemes adopted by these methods have led to documentation of incomparable results. We address this crucial issue through our propositions for benchmark comparative analysis. Existing works have evaluated models in a scene-dependent evaluation setup, which makes it difficult to assess a model's generalization capability on completely unseen videos and also leads to inflated results. Therefore, in this paper, we present a completely scene-independent evaluation strategy for a comprehensive analysis of model design for change detection. We propose well-defined scene-independent and scene-dependent experimental frameworks for training and evaluation over the benchmark CDnet 2014, LASIESTA and SBMI2015 datasets. A cross-data evaluation is performed with the PTIS dataset to further measure the robustness of the models. We designed a fast and lightweight online end-to-end convolutional network called ChangeDet (speed: 58.8 fps; model size: 1.59 MB) to achieve robust performance on completely unseen videos. ChangeDet estimates the background through a sequence of maximum multi-spatial receptive feature (MMSR) blocks using past temporal history. The contrasting features are produced through the assimilation of temporal median and contemporary features from the current frame. Further, these features are processed through an encoder-decoder to detect pixel-wise changes. The proposed ChangeDet outperforms the existing state-of-the-art methods on all four benchmark datasets.

  • RIChEx: A Robust Inter-Frame Change Exposure for Segmenting Moving Objects
    Prafulla Saxena, Kuldeep Biradar, Dinesh Kumar Tyagi, and Santosh Kumar Vipparthi

    IEEE

  • Deep Insights of Learning based Micro Expression Recognition: A Perspective on Promises, Challenges and Research Needs
    Monu Verma, Santosh Kumar Vipparthi, and Girdhari Singh

    Institute of Electrical and Electronics Engineers (IEEE)

  • Distributed Adaptive Recommendation & Time-stamp based Estimation of Driver-Behaviour
    Anamika Satrawala, Arka Prokash Mazumdar, and Santosh Kumar Vipparthi

    IEEE
    Driver behavior analysis is one of the critical issues that must be addressed to prevent traffic accidents. It contributes to many real-time applications, such as usage-based insurance (UBI), pay-as-you-drive (PAY-D), and insurance premium calculations. Driver Behavior Profiling-Prognosis (DBP-P) is considered a quantitative risk-assessment parameter for road accidents and is a fusion of two sub-processes: behavior scoring and classification of driving patterns. The selection of features like speed or acceleration is the essential and decisive factor in characterizing automobile driving behaviour. Though a number of such schemes exist in the literature, most of them focus on each vehicle independently and score it in isolation. Such a score, however, does not clearly indicate a driver's driving quality or the risk of collision with other vehicles. Therefore, to overcome these limitations of the literature, this paper proposes a relative, adaptive, and distributed driver behaviour profiling technique, named Distributed Adaptive Recommendation & Time-stamp based Estimation of Driver-Behaviour (DARTED), to generate driving scores that quantify and classify driver behavior as good or bad. Moreover, the driver scores can be computed at each timestamp with a classified label that can be used in various applications aimed at collision analysis. The experimental results indicate that the proposed method achieves significant accuracy in different traffic scenarios. The model may help researchers enhance their understanding and may support many real-time industrial applications.

  • Neural Architecture Search for Image Dehazing
    Murari Mandal, Yashwanth Reddy Meedimale, M. Satish Kumar Reddy, and Santosh Kumar Vipparthi

    Institute of Electrical and Electronics Engineers (IEEE)

  • MAG-Net: A Memory Augmented Generative Framework for Video Anomaly Detection Using Extrapolation
    Sachin Dube, Kuldeep Biradar, Santosh Kumar Vipparthi, and Dinesh Kumar Tyagi

    Springer International Publishing

  • HYPE: CNN Based HYbrid PrEcoding Framework for 5G and Beyond
    Deepti Sharma, Kuldeep M. Biradar, Santosh K. Vipparthi, and Ramesh B. Battula

    Springer International Publishing

  • Att-PyNet: An Attention Pyramidal Feature Network for Hand Gesture Recognition
    Gopa Bhaumik, Monu Verma, Mahesh Chandra Govil, and Santosh Kumar Vipparthi

    Springer Singapore

  • BerConvoNet: A deep learning framework for fake news classification
    Monika Choudhary, Satyendra Singh Chouhan, Emmanuel S. Pilli, and Santosh Kumar Vipparthi

    Elsevier BV

  • One for All: An End-to-End Compact Solution for Hand Gesture Recognition
    Monu Verma, Ayushi Gupta, and Santosh K. Vipparthi

    IEEE
    Hand gesture recognition (HGR) is a challenging task, as its performance is influenced by various aspects such as illumination variations, cluttered backgrounds, and spontaneous capture. Conventional CNN networks for HGR follow a two-stage pipeline to deal with the various challenges: complex signs, illumination variations, and complex, cluttered backgrounds. Existing approaches need expert knowledge as well as auxiliary computation in stage 1 to remove these complexities from the input images. Therefore, in this paper, we propose a novel end-to-end compact CNN framework, a fine-grained feature attentive network for hand gesture recognition (Fit-Hand), to solve the challenges discussed above. The pipeline of the proposed architecture consists of two main units: a FineFeat module and a dilated convolutional (Conv) layer. The FineFeat module extracts fine-grained feature maps by employing an attention mechanism over multiscale receptive fields. The attention mechanism is introduced to capture effective features by enlarging the average behaviour of multi-scale responses. Moreover, the dilated convolution provides global features of hand gestures through a larger receptive field. In addition, an integration layer is utilized to combine the features of the FineFeat module and the dilated layer, which enhances the discriminability of the network by capturing complementary contextual information of hand postures. The effectiveness of Fit-Hand is evaluated using subject-dependent (SD) and subject-independent (SI) validation setups over seven benchmark datasets: MUGD-I, MUGD-II, MUGD-III, MUGD-IV, MUGD-V, Finger Spelling, and OUHANDS. Furthermore, to investigate the deep insights of the proposed Fit-Hand framework, we performed ten ablation studies.

  • A Review of Non-Invasive HbA1c and Blood Glucose Measurement Methods
    Gaurav Jain, Amit M. Joshi, Ravi Kumar Maddila, and Santosh Kumar Vipparthi

    IEEE
    Hemoglobin is a protein in red blood cells (RBCs) that supplies oxygen to the human body. A person's hemoglobin becomes glycosylated as blood sugar levels increase. Glycated hemoglobin (HbA1c) is a widely used measure of glycemic control that quantifies the glucose attached to hemoglobin. Different methods are adopted and utilized for the measurement of HbA1c, and several invasive methods are widely used in pathology laboratories across the globe. This paper summarizes the current status of non-invasive HbA1c and blood glucose measurement techniques.

  • 3DCD: Scene Independent End-to-End Spatiotemporal Feature Learning Framework for Change Detection in Unseen Videos
    Murari Mandal, Vansh Dhar, Abhishek Mishra, Santosh Kumar Vipparthi, and Mohamed Abdel-Mottaleb

    Institute of Electrical and Electronics Engineers (IEEE)
    Change detection is an elementary task in computer vision and video processing applications. Recently, a number of supervised methods based on convolutional neural networks have reported high performance over the benchmark datasets. However, their success depends upon the availability of certain proportions of annotated frames from the test video during training. Thus, their performance on completely unseen videos or in a scene-independent setup is undocumented in the literature. In this work, we present a scene independent evaluation (SIE) framework to test the supervised methods on completely unseen videos to obtain generalized models for change detection. In addition, a scene dependent evaluation (SDE) is also performed to document the comparative analysis with the existing approaches. We propose a fast (speed: 25 fps) and lightweight (0.13 million parameters, model size: 1.16 MB) end-to-end 3D-CNN based change detection network (3DCD) with multiple spatiotemporal learning blocks. The proposed 3DCD consists of a gradual reductionist block for background estimation from past temporal history. It also enables motion saliency estimation, multi-schematic feature encoding-decoding, and finally foreground segmentation through several modular blocks. The proposed 3DCD outperforms the existing state-of-the-art approaches evaluated in both the SIE and SDE setups over the benchmark CDnet 2014, LASIESTA and SBMI2015 datasets. To the best of our knowledge, this is the first attempt to present results in clearly defined SDE and SIE setups on three change detection datasets.

  • AffectiveNet: Affective-Motion Feature Learning for Microexpression Recognition
    Monu Verma, Santosh Kumar Vipparthi, and Girdhari Singh

    Institute of Electrical and Electronics Engineers (IEEE)
    Microexpressions are hard to spot due to the fleeting and involuntary movements of facial muscles. Interpreting microemotions from video clips is a challenging task. In this article, we propose affective-motion imaging, which cumulates the rapid and short-lived variational information of microexpressions into a single response. Moreover, we propose AffectiveNet, an affective-motion feature learning network that can perceive subtle changes and learns the most discriminative dynamic features to describe emotion classes. AffectiveNet holds two blocks: the MICRoFeat block and the MFL block. The MICRoFeat block conserves scale-invariant features, which allows the network to capture both coarse and tiny edge variations, whereas the MFL block learns microlevel dynamic variations from two different intermediate convolutional layers. The effectiveness of the proposed network is tested over four datasets using two experimental setups: person-independent and cross-dataset validation. The experimental results show that the proposed network outperforms state-of-the-art MER approaches by a significant margin.

  • Parity Check Based Descriptor for Hand Gesture Detection and Recognition
    Satya Narayan, S. K. Vipparthi, and A.P. Mazumdar

    IEEE
    Feature extraction is one of the most important techniques in many pattern recognition applications. More specifically, the performance of a hand gesture detection and recognition system depends on the robustness of the designed feature descriptor. In this paper, we propose a parity check based descriptor (PCBD) for hand gesture recognition. The descriptor extracts intensity variations by establishing the bit-plane relationship between neighboring pixels. Bit-level thresholding is used to encode the patterns, and the extracted features are trained with an SVM classifier on the HGRI database, improving the efficiency of hand gesture recognition with greater discriminability and low memory storage. The experimental results show better performance of the proposed method compared to existing state-of-the-art approaches.

  • Hand Gesture Recognition with Gaussian Scaling and Kirsch Edge Rotation
    Satya Narayan, S. K. Vipparthi, and A.P. Mazumdar

    IEEE
    Hand gesture recognition is a vital aspect of robotic vision models. This paper presents a fusion-based approach for hand gesture recognition. In this approach, we first extract the Gaussian scale space of an image and compute features at different scales. Kirsch's convolution mask is then applied to the feature map. The aim of the proposed approach is to remove unwanted information and extract scale-, rotation-, and illumination-invariant patterns from hand gestures. The final feature vector is aggregated through the concatenation of multiscale histograms. A Support Vector Machine classifier is trained on the extracted features. Moreover, we evaluate the efficiency of the proposed method by conducting experiments on three distinct databases, viz. Thomson, Bochum, and HGRI. The proposed method achieves classification accuracies of 94.25%, 92.77%, and 95.78%, respectively, on the investigated databases, outperforming the existing approaches for hand gesture recognition.

RECENT SCHOLAR PUBLICATIONS

  • Occlusion Boundary Prediction and Transformer Based Depth-Map Refinement From Single Image
    P Hambarde, G Wadhwa, SK Vipparthi, S Murala, A Dhall
    ACM Transactions on Multimedia Computing, Communications and Applications 2024

  • C2AIR: Consolidated Compact Aerial Image Haze Removal
    A Kulkarni, SS Phutke, SK Vipparthi, S Murala
    Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) 2024

  • Spectroformer: Multi-Domain Query Cascaded Transformer Network for Underwater Image Enhancement
    R Khan, P Mishra, N Mehta, SS Phutke, SK Vipparthi, S Nandi, S Murala
    Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) 2024

  • SBI-DHGR: Skeleton-based intelligent dynamic hand gestures recognition
    S Narayan, AP Mazumdar, SK Vipparthi
    Expert Systems with Applications 232, 120735 2023

  • Efficient neural architecture search for emotion recognition
    M Verma, M Mandal, SK Reddy, YR Meedimale, SK Vipparthi
    Expert Systems with Applications 224, 119957 2023

  • HyFiNet: hybrid feature attention network for hand gesture recognition
    G Bhaumik, M Verma, MC Govil, SK Vipparthi
    Multimedia Tools and Applications 82 (4), 4863-4882 2023

  • Blind image inpainting via omni-dimensional gated attention and wavelet queries
    SS Phutke, A Kulkarni, SK Vipparthi, S Murala
    Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2023

  • NTIRE 2023 image shadow removal challenge report
    FA Vasluianu, T Seizinger, R Timofte, S Cui, J Huang, S Tian, M Fan, ...
    Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2023

  • RNAS-MER: A Refined Neural Architecture Search with Hybrid Spatiotemporal Operations for Micro-Expression Recognition
    M Verma, P Lubal, SK Vipparthi, M Abdel-Mottaleb
    Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) 2023

  • Deep insights of learning based micro expression recognition: A perspective on promises, challenges and research needs
    M Verma, SK Vipparthi, G Singh
    IEEE Transactions on Cognitive and Developmental Systems 2022

  • A multilane traffic and collision generator for IoV
    A Satrawala, AP Mazumdar, SK Vipparthi
    Simulation Modelling Practice and Theory 120, 102588 2022

  • ExtriDeNet: an intensive feature extrication deep network for hand gesture recognition
    G Bhaumik, M Verma, MC Govil, SK Vipparthi
    The Visual Computer 38 (11), 3853-3866 2022

  • RIChEx: A Robust Inter-Frame Change Exposure for Segmenting Moving Objects
    P Saxena, K Biradar, DK Tyagi, SK Vipparthi
    2022 IEEE International Conference on Image Processing (ICIP), 2172-2176 2022

  • Neural architecture search for image dehazing
    M Mandal, YR Meedimale, MSK Reddy, SK Vipparthi
    IEEE Transactions on Artificial Intelligence 2022

  • Distributed adaptive recommendation & time-stamp based estimation of driver-behaviour
    A Satrawala, AP Mazumdar, SK Vipparthi
    2022 IEEE Region 10 Symposium (TENSYMP), 1-6 2022

  • RARITYNet: Rarity Guided Affective Emotion Learning Framework
    M Verma, SK Vipparthi
    arXiv preprint arXiv:2205.08595 2022

  • Att-PyNet: An Attention Pyramidal Feature Network for Hand Gesture Recognition
    G Bhaumik, M Verma, MC Govil, SK Vipparthi
    Edge Analytics: Select Proceedings of 26th International Conference—ADCOM 2022

  • HYPE: CNN Based HYbrid PrEcoding Framework for 5G and Beyond
    D Sharma, KM Biradar, SK Vipparthi, RB Battula
    International Conference on Advanced Information Networking and Applications 2022

  • Cross-centroid ripple pattern for facial expression recognition
    M Verma, P Saxena, SK Vipparthi, G Singh
    arXiv preprint arXiv:2201.05958 2022

  • A Review of Non-Invasive HbA1c and Blood Glucose Measurement Methods
    G Jain, AM Joshi, RK Maddila, SK Vipparthi
    2021 IEEE International Symposium on Smart Electronic Systems (iSES), 339-342 2021

MOST CITED SCHOLAR PUBLICATIONS

  • LEARNet: Dynamic imaging network for micro expression recognition
    M Verma, SK Vipparthi, G Singh, S Murala
    IEEE Transactions on Image Processing 29, 1618-1627 2019
    Citations: 122

  • An empirical review of deep learning frameworks for change detection: Model design, experimental frameworks, challenges and research needs
    M Mandal, SK Vipparthi
    IEEE Transactions on Intelligent Transportation Systems 23 (7), 6101-6122 2021
    Citations: 81

  • Color directional local quinary patterns for content based indexing and retrieval
    SK Vipparthi, SK Nagar
    Human-centric Computing and Information Sciences 4, 1-13 2014
    Citations: 69

  • AVDNet: A small-sized vehicle detection network for aerial visual data
    M Mandal, M Shah, P Meena, S Devi, SK Vipparthi
    IEEE Geoscience and Remote Sensing Letters 17 (3), 494-498 2019
    Citations: 60

  • Local Gabor maximum edge position octal patterns for image retrieval
    SK Vipparthi, S Murala, SK Nagar, AB Gonde
    Neurocomputing 167, 336-345 2015
    Citations: 58

  • BerConvoNet: A deep learning framework for fake news classification
    M Choudhary, SS Chouhan, ES Pilli, SK Vipparthi
    Applied Soft Computing 110, 107614 2021
    Citations: 57

  • 3DCD: Scene independent end-to-end spatiotemporal feature learning framework for change detection in unseen videos
    M Mandal, V Dhar, A Mishra, SK Vipparthi, M Abdel-Mottaleb
    IEEE Transactions on Image Processing 30, 546-558 2020
    Citations: 54

  • Local directional mask maximum edge patterns for image retrieval and face recognition
    SK Vipparthi, S Murala, AB Gonde, QMJ Wu
    IET Computer Vision 10 (3), 182-192 2016
    Citations: 52

  • MOR-UAV: A benchmark dataset and baselines for moving object recognition in UAV videos
    M Mandal, LK Kumar, SK Vipparthi
    Proceedings of the 28th ACM international conference on multimedia, 2626-2635 2020
    Citations: 50

  • Expert image retrieval system using directional local motif XoR patterns
    SK Vipparthi, SK Nagar
    Expert Systems with Applications 41 (17), 8016-8026 2014
    Citations: 50

  • Regional adaptive affinitive patterns (RADAP) with logical operators for facial expression recognition
    M Mandal, M Verma, S Mathur, SK Vipparthi, S Murala, D Kranthi Kumar
    IET Image Processing 13 (5), 850-861 2019
    Citations: 46

  • Scene independency matters: An empirical study of scene dependent and scene independent evaluation for CNN-based change detection
    M Mandal, SK Vipparthi
    IEEE Transactions on Intelligent Transportation Systems 23 (3), 2031-2044 2020
    Citations: 42

  • HiNet: Hybrid inherited feature learning network for facial expression recognition
    M Verma, SK Vipparthi, G Singh
    IEEE Letters of the Computer Society 2 (4), 36-39 2019
    Citations: 41

  • SSSDET: Simple short and shallow network for resource efficient vehicle detection in aerial scenes
    M Mandal, M Shah, P Meena, SK Vipparthi
    2019 IEEE international conference on image processing (ICIP), 3098-3102 2019
    Citations: 37

  • Challenges in time-stamp aware anomaly detection in traffic videos
    KM Biradar, A Gupta, M Mandal, SK Vipparthi
    arXiv preprint arXiv:1906.04574 2019
    Citations: 35

  • Multi-joint histogram based modelling for image indexing and retrieval
    SK Vipparthi, SK Nagar
    Computers & Electrical Engineering 40 (8), 163-173 2014
    Citations: 35

  • 3DFR: A swift 3D feature reductionist framework for scene independent change detection
    M Mandal, V Dhar, A Mishra, SK Vipparthi
    IEEE Signal Processing Letters 26 (12), 1882-1886 2019
    Citations: 34

  • Directional local ternary patterns for multimedia image indexing and retrieval
    SK Vipparthi, SK Nagar
    International Journal of Signal and Imaging Systems Engineering 8 (3), 137-145 2015
    Citations: 33

  • MotionRec: A unified deep framework for moving object recognition
    M Mandal, LK Kumar, MS Saran
    Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) 2020
    Citations: 31

  • Local extreme complete trio pattern for multimedia image retrieval system
    SK Vipparthi, SK Nagar
    International Journal of Automation and Computing 13, 457-467 2016
    Citations: 30