Jitesh Pradhan

@ism.ac.in

Senior Research Fellow, Department of Computer Science and Engineering
IIT (ISM) Dhanbad

EDUCATION

2021 Ph.D., Department of Computer Science and Engineering, Indian Institute of
Technology (Indian School of Mines), Dhanbad, Jharkhand-826 004, India.
2015 M.Tech., First Division (with Distinction), 9.28 CGPA, Department of Computer
Science and Engineering, Indian Institute of Technology (Indian School of Mines),
Dhanbad, Jharkhand-826 004, India.
2012 B.E., First Division (with Distinction), 83.18% with 9.18 CPI, Department of
Computer Science and Engineering, CSVTU, Bhilai, Chhattisgarh-491107, India.
2007 Intermediate (10+2), First Division in Science (Physics, Chemistry, Mathematics),
71%, Jawahar Navodaya Vidyalaya Bhupdevpur, Raigarh, Chhattisgarh-496661,
India.
2005 High School (10th), First Division, 74.20%, Jawahar Navodaya Vidyalaya Bhupdevpur,
Raigarh, Chhattisgarh-496661, India.

RESEARCH INTERESTS

(1) Content-based Image Retrieval (CBIR).
(2) Content-based Medical Image Retrieval.
(3) Feature Extraction.
(4) Image Classification.
(5) Image Fusion.
(6) DNA Coding based CBIR.
(7) Object Detection.
(8) Machine Learning.
(9) Deep Learning.
(10) DNA Feature Extraction.
(11) Deep Feature Fusion.

SCOPUS PUBLICATIONS (29)

  • An evolutionary supply chain management service model based on deep learning features for automated glaucoma detection using fundus images
    Santosh Kumar Sharma, Debendra Muduli, Rojalina Priyadarshini, Rakesh Ranjan Kumar, Abhinav Kumar, and Jitesh Pradhan

    Elsevier BV

  • Diagnostic Accuracy of Artificial Intelligence-Based Algorithms in Automated Detection of Neck of Femur Fracture on a Plain Radiograph: A Systematic Review and Meta-analysis
    Manish Raj, Arshad Ayub, Arup Kumar Pal, Jitesh Pradhan, Naushad Varish, Sumit Kumar, and Seshadri Reddy Varikasuvu

    Springer Science and Business Media LLC

  • DNA Encoding-based Nucleotide Pattern and Deep Features for Instance and Class-based Image Retrieval
    Jitesh Pradhan, Arup Kumar Pal, SK Hafizul Islam, and Chiranjeev Bhaya

    Institute of Electrical and Electronics Engineers (IEEE)
    Recently, DNA encoding has shown its potential to store the vital information of the image in the form of nucleotides, namely A, C, T, and G, with the entire sequence following run-length and GC-constraint. As a result, the encoded DNA planes contain unique nucleotide strings, giving more salient image information using less storage. In this paper, the advantages of DNA encoding have been inherited to uplift the retrieval accuracy of the content-based image retrieval (CBIR) system. Initially, the most significant bit-plane-based DNA encoding scheme has been suggested to generate DNA planes from a given image. The generated DNA planes of the image efficiently capture the salient visual information in a compact form. Subsequently, the encoded DNA planes have been utilized for nucleotide pattern-based feature extraction and image retrieval. Simultaneously, the translated and amplified encoded DNA planes have also been deployed on different deep learning architectures like ResNet-50, VGG-16, VGG-19, and Inception V3 to perform classification-based image retrieval. The performance of the proposed system has been evaluated using two Corel datasets, an object dataset, and a medical image dataset. All these datasets contain 28,200 images belonging to 134 different classes. The experimental results confirm that the proposed scheme achieves perceptible improvements compared with other state-of-the-art methods.
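
    A minimal illustrative sketch (not the paper's exact scheme) of the core idea: mapping the two most significant bit-planes of a grayscale image onto the nucleotide alphabet {A, C, G, T} and deriving a simple nucleotide-pattern feature. The bit-pair-to-nucleotide mapping and the histogram feature below are assumptions for illustration; the actual encoding additionally enforces run-length and GC constraints.

```python
import numpy as np

# Hypothetical 2-bit -> nucleotide mapping; the paper's constrained encoding may differ.
BIT_PAIR_TO_NUCLEOTIDE = {(0, 0): "A", (0, 1): "C", (1, 0): "G", (1, 1): "T"}

def dna_encode_msb_planes(gray_image: np.ndarray) -> np.ndarray:
    """Encode the two most significant bit-planes of an 8-bit image as nucleotides."""
    msb1 = (gray_image >> 7) & 1              # most significant bit-plane
    msb2 = (gray_image >> 6) & 1              # second most significant bit-plane
    encode = np.vectorize(lambda b1, b2: BIT_PAIR_TO_NUCLEOTIDE[(int(b1), int(b2))])
    return encode(msb1, msb2)                 # 2-D array of nucleotide characters

def nucleotide_histogram(dna_plane: np.ndarray) -> np.ndarray:
    """A simple pattern feature: normalized counts of A, C, G, and T."""
    flat = dna_plane.ravel()
    counts = np.array([(flat == n).sum() for n in "ACGT"], dtype=float)
    return counts / counts.sum()

if __name__ == "__main__":
    img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)   # stand-in image
    print(nucleotide_histogram(dna_encode_msb_planes(img)))
```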

  • A vision transformer-based automated human identification using ear biometrics
    Ravishankar Mehta, Sindhuja Shukla, Jitesh Pradhan, Koushlendra Kumar Singh, and Abhinav Kumar

    Elsevier BV


  • An empirical evaluation of extreme learning machine uncertainty quantification for automated breast cancer detection
    Debendra Muduli, Rakesh Ranjan Kumar, Jitesh Pradhan, and Abhinav Kumar

    Springer Science and Business Media LLC

  • Cybersecurity Attack-resilience Authentication Mechanism for Intelligent Healthcare System
    Preeti Soni, Jitesh Pradhan, Arup Kumar Pal, and Sk Hafizul Islam

    Institute of Electrical and Electronics Engineers (IEEE)

  • Content-Based Image Retrieval using DNA Transcription and Translation
    Jitesh Pradhan, Chiranjeev Bhaya, Arup Kumar Pal, and Arpit Dhuriya

    Institute of Electrical and Electronics Engineers (IEEE)
    DNA carries the genetic information of almost all the living beings on the earth. The flow of genetic information takes place by a series of transcription and translation reactions in which the DNA gets converted into amino-acid sequences which determine the phenotype of an organism. This property of DNA has been used in the proposed CBIR technique, in which the images are first stored as DNA sequences and then their corresponding amino-acid sequences are extracted, which are used to form the feature-vectors. This not only ensures the reduction of the dimension of the feature-vectors but also the preservation of the necessary information. These feature-vectors are then given as input to various classifiers for training and testing purposes. Ensemble learning is then applied to enhance the retrieval efficiency of the algorithm. The proposed algorithm is a novel approach that uses the efficiency of DNA-based computing to increase the efficiency of classifiers for image retrieval. Experimental results show that the proposed method is more efficient than the existing state-of-the-art algorithms.
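
    A toy illustration of the transcription/translation step described above, using Biopython; the nucleotide string and the amino-acid frequency feature below are hypothetical stand-ins, since in the paper the DNA sequence comes from the encoded image.

```python
from collections import Counter
from Bio.Seq import Seq

# Hypothetical DNA string; in the paper it would be produced by encoding an image.
dna = Seq("ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGA")

mrna = dna.transcribe()        # DNA -> mRNA (T replaced by U)
protein = mrna.translate()     # mRNA -> amino-acid sequence, e.g. MAIVMGR*KGAR

# Toy feature vector: relative frequency of each amino acid in the translated sequence.
counts = Counter(str(protein))
feature_vector = {aa: n / len(protein) for aa, n in sorted(counts.items())}
print(protein, feature_vector)
```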

  • Computational intelligence based secure three-party CBIR scheme for medical data for cloud-assisted healthcare applications
    Mukul Majhi, Arup Kumar Pal, Jitesh Pradhan, SK Hafizul Islam, and Muhammad Khurram Khan

    Springer Science and Business Media LLC

  • Radiological image retrieval technique using multi-resolution texture and shape features
    Sumit Kumar, Jitesh Pradhan, Arup Kumar Pal, SK Hafizul Islam, and Muhammad Khurram Khan

    Springer Science and Business Media LLC
    Medical image analysis plays an indispensable role in providing the best possible medical support to a patient. With the rapid advancements in modern medical systems, these digital images are growing exponentially and reside in discrete places. These images help a medical practitioner in understanding the problem and then selecting the most suitable treatment. Radiological images are very often found to be the critical constituent of medical images. So, in health care, manual retrieval of visually similar images becomes a very tedious task. To address this issue, we have suggested a content-based medical image retrieval (CBMIR) system that effectively analyzes a radiological image’s primitive visual features. Since radiological images are in gray-scale form, these images contain rich texture and shape features only. So, we have suggested a novel multi-resolution radiological image retrieval system that uses texture and shape features for content analysis. Here, we have employed a multi-resolution modified block difference of inverse probability (BDIP) and block-level variance of local variance (BVLC) for shape and texture features, respectively. Our proposed scheme uses a multi-resolution and variable window size feature extraction strategy to maintain the block-level correlation and extract more salient visual features. Further, we have used the MURA x-ray image dataset, which has 40,561 images captured from 12,173 different patients, to demonstrate the proposed scheme’s retrieval performance. We have also performed and compared image retrieval experiments on Brodatz and STex texture, Corel-1K, and GHIM-10K natural image datasets to demonstrate the robustness and improvement over other contemporary methods.
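
    As a point of reference for the BDIP feature mentioned above, a minimal sketch of the standard block difference of inverse probabilities is given below; the paper uses a modified multi-resolution variant with variable window sizes (plus BVLC for texture), so this is only the textbook form of the shape descriptor.

```python
import numpy as np

def bdip_feature_map(gray: np.ndarray, block: int = 4) -> np.ndarray:
    """Standard BDIP per non-overlapping block: block**2 - sum(I) / max(I)."""
    h, w = gray.shape
    h, w = h - h % block, w - w % block            # crop to a multiple of the block size
    g = gray[:h, :w].astype(float) + 1e-6          # avoid division by zero in dark blocks
    out = np.empty((h // block, w // block))
    for i in range(0, h, block):
        for j in range(0, w, block):
            b = g[i:i + block, j:j + block]
            out[i // block, j // block] = block * block - b.sum() / b.max()
    return out

if __name__ == "__main__":
    img = np.random.randint(0, 256, (128, 128)).astype(np.uint8)
    print(bdip_feature_map(img).shape)             # (32, 32) block-level feature map
```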


  • A Systematic Literature Review on Latest Keystroke Dynamics Based Models
    Soumen Roy, Jitesh Pradhan, Abhinav Kumar, Dibya Ranjan Das Adhikary, Utpal Roy, Devadatta Sinha, and Rajat Kumar Pal

    Institute of Electrical and Electronics Engineers (IEEE)

  • Adaptive tetrolet based color, texture and shape feature extraction for content based image retrieval application
    Sumit Kumar, Jitesh Pradhan, and Arup Kumar Pal

    Springer Science and Business Media LLC

  • Medical image fusion using deep learning
    Ashif Sheikh, Jitesh Pradhan, Arpit Dhuriya, and Arup Kumar Pal

    Springer International Publishing

  • Medical image retrieval system using deep learning techniques
    Jitesh Pradhan, Arup Kumar Pal, and Haider Banka

    Springer International Publishing

  • Fusion of region based extracted features for instance- and class-based CBIR applications
    Jitesh Pradhan, Arup Kumar Pal, Haider Banka, and Prabhat Dansena

    Elsevier BV

  • A Post Dynamic Clustering Approach for Classification-based Image Retrieval
    Jitesh Pradhan, Arup Kumar Pal, Mohammad S. Obaidat, and SK Hafizul Islam

    IEEE
    Content-based Image Retrieval (CBIR) is the process of retrieving images similar to an input query image from a large image dataset. One of the currently trending techniques in this field is classification-based CBIR, which aims to reduce the search space and speed up the final image retrieval. However, owing to the thousands of images in the reduced search space, it takes considerable time to retrieve relevant images. This paper proposes a novel post dynamic clustering-based approach for classification-based CBIR to enhance retrieval accuracy and speed. Initially, a pre-trained CNN architecture is used to predict the class of the input query image and reduce the image search space. Here, a dynamic clustering scheme is then applied to form clusters of the produced feature space. Next, a semantic cluster sorting technique is suggested to sort all these clusters based on their semantic order. Finally, an optimal subset of these sorted clusters is selected for final image retrieval, which comprises more semantically similar images. The performance of the proposed approach has been tested on five different image datasets. The experimental outcomes confirm that the proposed method is more efficient and faster than competing state-of-the-art schemes.
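
    A minimal sketch of the retrieval pipeline the abstract outlines, under simplifying assumptions: the CNN classification step is omitted, k-means stands in for the dynamic clustering, and the semantic cluster sorting is approximated by centroid-to-query distance.

```python
import numpy as np
from sklearn.cluster import KMeans

def retrieve(query_feat, class_feats, n_clusters=8, top_clusters=3, k=20):
    """Cluster the predicted class's features, keep the clusters closest to the
    query, and rank only the images inside that reduced subset."""
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(class_feats)
    order = np.argsort(np.linalg.norm(km.cluster_centers_ - query_feat, axis=1))
    keep = np.isin(km.labels_, order[:top_clusters])      # "optimal" cluster subset
    idx = np.where(keep)[0]
    dists = np.linalg.norm(class_feats[idx] - query_feat, axis=1)
    return idx[np.argsort(dists)[:k]]                     # indices of retrieved images

if __name__ == "__main__":
    feats = np.random.rand(500, 128)    # stand-in deep features of the predicted class
    print(retrieve(feats[0], feats)[:5])
```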

  • Multi-level colored directional motif histograms for content-based image retrieval
    Jitesh Pradhan, Ashok Ajad, Arup Kumar Pal, and Haider Banka

    Springer Science and Business Media LLC
    Color features and local geometrical structures are the two basic image features which are sufficient to convey the image semantics. Both of these features show diverse nature on the different regions of a natural image. Traditional local motif patterns are standard tools to emphasize these local visual image features. These motif-based schemes consider either structural orientations or limited directional patterns which are not sufficient to realize the detailed local geometrical properties of an image. To address these issues, we have proposed a new multi-level colored directional motif histogram (MLCDMH) for devising a content-based image retrieval scheme. The proposed scheme extracts local structural features at three different levels. Initially, the MLCDMH scheme extracts directional structural patterns from 3×3 pixel grids of an image. This reflects the 9^9 different structural arrangements using 28 directional patterns. Next, we have used a weighted neighboring similarity (WNS) scheme to exploit the uniqueness of each motif pixel in its local surrounding. The WNS scheme computes the importance of each directional motif pattern in its 3×3 local neighborhood. In the last level, we have fused all directional motif images into a single directional difference matrix which reflects the local structural and directional motif features in detail and also reduces the computation overhead. The MLCDMH considers all possible permutations and rotations of the motif patterns to generate rotational invariant structural features. The image retrieval performance of this proposed scheme has been evaluated using different Corel/natural, object, texture and heterogeneous image datasets. The results of the retrieval experiments have shown satisfactory improvement over other motif- and non-motif-based CBIR approaches.

  • Texture and colour region separation based image retrieval using probability annular histogram and weighted similarity matching scheme
    Jitesh Pradhan, Sumit Kumar, Arup Kumar Pal, and Haider Banka

    Institution of Engineering and Technology (IET)
    Content-based image retrieval (CBIR) uses primitive image features for retrieval of similar images from a dataset. Generally, researchers extract these visual features from the whole image. Therefore, the extracted features contain overlapping texture, colour, and shape information, which is a critical challenge in the field of CBIR. This problem can be overcome by extracting the colour features from the colour-dominant part and the shape and texture features from the intensity-dominant part only. In this study, the authors have proposed an iterative algorithm to separate the colour- and texture-dominant parts of the image into two different images. Here, a combination of edge maps and gradients has been used to achieve separate colour and texture images. Further, the scale-invariant feature transform and the 2D dual-tree complex wavelet transform have been applied to extract unique shape and texture features from the texture image. Simultaneously, a probability-based semantic centred annular histogram has been suggested to extract unique colour features from the colour image. Finally, a novel weighted distance-based feature comparison scheme has been proposed for similarity matching and retrieval. All the image retrieval experiments have been carried out on seven standard datasets and demonstrated significant improvements over other state-of-the-art CBIR systems.
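
    A minimal sketch of the weighted distance-based comparison idea; the fixed weights and the normalized Euclidean per-feature distances below are illustrative assumptions rather than the paper's exact similarity measure.

```python
import numpy as np

def weighted_distance(query, candidate, weights=(0.4, 0.35, 0.25)):
    """Combine per-feature distances (colour, texture, shape) with fixed weights."""
    d = []
    for key in ("color", "texture", "shape"):
        q, c = np.asarray(query[key]), np.asarray(candidate[key])
        d.append(np.linalg.norm(q - c) / (np.linalg.norm(q) + np.linalg.norm(c) + 1e-12))
    return float(np.dot(weights, d))

if __name__ == "__main__":
    q = {"color": np.random.rand(64), "texture": np.random.rand(32), "shape": np.random.rand(16)}
    c = {"color": np.random.rand(64), "texture": np.random.rand(32), "shape": np.random.rand(16)}
    print(weighted_distance(q, c))
```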

  • Multi-scale Image Fusion Scheme Based on Gray-Level Edge Maps and Adaptive Weight Maps
    Jitesh Pradhan, Ankesh Raj, Arup Kumar Pal, and Haider Banka

    Springer Singapore
    Digital cameras and other digital devices cannot focus on all significant objects within a single frame due to their limited depth of focus. Consequently, only a few objects remain in focus in the captured image while the rest become background information. This problem can be overcome using different multi-focus image fusion techniques because they combine all the partially focused objects of different parent images into a single fully focused fused image. Hence, the final fused image focuses on each and every object of the parent images. In this paper, a novel multi-focus image fusion technique has been proposed which uses different edge-finding operators (like Sobel, Prewitt, Roberts, and Scharr) on preprocessed images. These different edge-finding operators have a strong pixel discrimination property which helps us to locate all the vital textural information of different partially focused images. Subsequently, an adaptive weight calculation approach has been introduced to generate weight maps of different parent images. Finally, all these parent image weight maps have been deployed into the winner-take-all scheme to integrate all parent images into a single fused image. Further, we have also considered different sets of partially focused images for experimental analysis. The experimental outcomes reveal that the proposed scheme outperforms recent state-of-the-art methods.
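
    A minimal sketch of the edge-map-guided, winner-take-all fusion idea, under simplifying assumptions: only the Sobel operator is used, and the adaptive weight map is approximated by locally smoothed gradient magnitude (file names in the usage comment are hypothetical).

```python
import cv2
import numpy as np

def fuse_multifocus(images):
    """Per-pixel winner-take-all fusion guided by gradient-magnitude weight maps."""
    weights = []
    for img in images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY).astype(np.float32)
        gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
        gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
        mag = cv2.GaussianBlur(cv2.magnitude(gx, gy), (7, 7), 0)   # smoothed edge energy
        weights.append(mag)
    winner = np.argmax(np.stack(weights), axis=0)   # index of the sharpest parent per pixel
    fused = np.zeros_like(images[0])
    for k, img in enumerate(images):
        fused[winner == k] = img[winner == k]
    return fused

# Usage (hypothetical file names):
# fused = fuse_multifocus([cv2.imread("focus_near.jpg"), cv2.imread("focus_far.jpg")])
```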

  • An Efficient Content Based Image Retrieval Scheme with Preserving the Security of Images
    Mukul Majhi, Jitesh Pradhan, and Arup Kumar Pal

    IEEE
    In this paper, an efficient content based image retrieval scheme is proposed which incorporates the security of images in a retrieval system. The main contribution of the proposed scheme is two-fold: to develop an efficient image retrieval process, and to provide security to the retrieved images. To achieve this goal, features from the foreground region as well as background features are extracted, where more importance is given to the foreground region, which is identified based on the Itti-Koch saliency map. From the obtained foreground region, texture features along with color features are extracted. Simultaneously, the background is divided into four regions and color features are extracted. Finally, the foreground and background features are combined to perform the image retrieval process. To protect the feature vector from illegitimate users, suitable cryptographic approaches are used to provide security in terms of confidentiality and integrity. To protect the integrity, the hash value of the feature vector is computed, and the obtained hash value along with the feature vector are enciphered together. This enciphered value is then communicated to the user. The content owner verifies the integrity of the feature and performs the retrieval process. The retrieved results are then watermarked to protect copyright and are further enciphered so that unauthorized users cannot access the images. This security mechanism ensures that the retrieved images are shared only with authorized users. The retrieval process is performed on the Corel dataset and the results are comparable to the existing state-of-the-art methods. Finally, this process is suitable to protect the security of images in a retrieval system.
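
    A minimal sketch of the integrity-plus-confidentiality step described above; SHA-256 and Fernet (authenticated symmetric encryption from the cryptography package) are stand-ins for the unspecified "suitable cryptographic approaches", and the function names are illustrative.

```python
import hashlib
import json

from cryptography.fernet import Fernet

def protect_feature_vector(features, key: bytes) -> bytes:
    """Append a SHA-256 digest to the feature vector and encipher both together."""
    payload = json.dumps(features).encode()
    digest = hashlib.sha256(payload).hexdigest().encode()
    return Fernet(key).encrypt(payload + b"||" + digest)

def verify_and_recover(token: bytes, key: bytes):
    """Decipher and check integrity before the retrieval process is run."""
    payload, digest = Fernet(key).decrypt(token).rsplit(b"||", 1)
    if hashlib.sha256(payload).hexdigest().encode() != digest:
        raise ValueError("feature vector integrity check failed")
    return json.loads(payload)

if __name__ == "__main__":
    key = Fernet.generate_key()
    token = protect_feature_vector([0.12, 0.87, 0.05], key)
    print(verify_and_recover(token, key))
```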

  • Principal texture direction based block level image reordering and use of color edge features for application of object based image retrieval
    Jitesh Pradhan, Arup Kumar Pal, and Haider Banka

    Springer Science and Business Media LLC
    In this paper, the authors have presented a novel content-based image retrieval (CBIR) scheme based on the combination of color, shape, and texture visual image features. Initially, the combined features of color and shape are derived from the object region of an image using the proposed color edge map approach. This approach is suitable to extract both the color and shape based features simultaneously from the image object region. We have preserved more information associated with the object region and some significant information from the background region for enabling better retrieval efficiency. In the subsequent stage, we have extracted texture features from the preprocessed image. This preprocessed image is obtained after decomposition of an image into non-overlapping blocks followed by reordering all blocks based on their principal texture direction. The notion is that the variation present in the image data can be controlled by rearranging each block as per its principal direction, with some texture-based parameters derived from the preprocessed image. The final feature vector consists of color, shape, and texture-related features in their correct proportions. The proposed CBIR scheme is extensively tested using four Corel image databases (i.e. 1,000 color images from 10 different classes, 10,000 color images from 20 different classes, 7,200 images from 100 different classes and 17,125 images from 20 different classes). Experimental results show that the proposed CBIR scheme has better retrieval efficiency in terms of precision and recall than other related schemes.

  • A CBIR Scheme Using GLCM Features in DCT Domain
    Sumit Kumar, Jitesh Pradhan, and Arup Kumar Pal

    IEEE
    The performance and effectiveness of a CBIR scheme are directly associated with the construction of low-dimensional yet salient image features. So, in this paper, we have carried out the image retrieval process with small-dimensional salient image components or features, as compared to the original image size, and the image retrieval accuracy has been improved due to the consideration of local information rather than global information of the image data. Initially, the image data is exploited at block level by the discrete cosine transformation (DCT), and subsequently, some significant DCT coefficients are selected from each block as salient image components. However, all the DCT coefficients are not equally important in terms of visual perception, and their dimension is also not negligible in the image retrieval process. Later, the selected AC coefficients are divided into four different groups, and some statistical parameters are computed from each group. Each statistical value is placed in a particular matrix. So, from each group, the number of matrices constructed is equal to the number of statistical parameters evaluated, and one more matrix for the DC coefficients is considered. Further, for construction of small-dimensional feature vectors, the gray level co-occurrence matrix (GLCM) is employed on all constructed matrices to derive the feature vector for a color component. The same procedure is employed on all three components, and all feature vectors are combined together to form the final feature vector. The proposed CBIR structure is tested on three standard image databases, i.e. Corel-1K, GHIM-10K, and the Olivia database, and the experimental results demonstrate satisfactory image retrieval performance, outperforming other state-of-the-art schemes in many instances with respect to their precision values.
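
    A minimal sketch of extracting GLCM statistics in the DCT domain; the paper additionally groups selected AC coefficients into several statistic matrices per colour component, whereas this illustration keeps only the per-block DC coefficients (graycomatrix is named greycomatrix in older scikit-image releases).

```python
import numpy as np
from scipy.fft import dctn
from skimage.feature import graycomatrix, graycoprops

def dct_glcm_features(gray: np.ndarray, block: int = 8, levels: int = 16) -> np.ndarray:
    """Block-wise DCT, keep each block's DC coefficient, then GLCM statistics."""
    h, w = gray.shape
    h, w = h - h % block, w - w % block
    g = gray[:h, :w].astype(float)
    dc = np.empty((h // block, w // block))
    for i in range(0, h, block):
        for j in range(0, w, block):
            dc[i // block, j // block] = dctn(g[i:i + block, j:j + block], norm="ortho")[0, 0]
    # Quantize the DC matrix so it can be fed to the co-occurrence computation.
    q = np.digitize(dc, np.linspace(dc.min(), dc.max() + 1e-9, levels + 1)) - 1
    glcm = graycomatrix(q.astype(np.uint8), distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    return np.hstack([graycoprops(glcm, p).ravel()
                      for p in ("contrast", "homogeneity", "energy", "correlation")])

if __name__ == "__main__":
    img = np.random.randint(0, 256, (256, 256)).astype(np.uint8)
    print(dct_glcm_features(img).shape)    # 8 values: 4 properties x 2 angles
```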


  • Multi-scale image fusion scheme based on non-sub sampled contourlet transform and four neighborhood Shannon entropy scheme
    Ankesh Raj, Jitesh Pradhan, Arup Kumar Pal, and Haider Banka

    IEEE
    Optical lenses have limited depth-of-focus, which makes it impossible to capture all significant objects in focus within a single picture. Multi-focus image fusion techniques can be adopted to solve the above issue because they precisely select every focused point from all parent images to create the final fused image. So, the final fused image contains significantly more information regarding every salient object. In this paper, we have proposed a novel image fusion technique for fusion of multi-focused images using the non-subsampled contourlet transform. Here, we have adopted the contourlet transform due to its high edge discrimination property, which enables us to capture all salient object edges from different parent images. In this approach, we have first generated noise-free gray-scale parent images using a non-linear anisotropic diffusion technique. Further, we have employed the contourlet transform on all noise-free images to discover all salient object edges. Later, we have used a 4-neighborhood entropy calculation based winner-take-all approach to generate the final fused image. We have also used different multi-focused image sets for experimental analysis. The outcomes of all the image fusion experiments show better performance as compared to the current state of the art.