@reva.edu.in
Research Assistant
REVA University
Computer Vision, Deep Learning, Image Processing, Robotics Perception, Embedded Systems, Object Detection, Image Classification and Semantic Segmentation
Siddhanta Mandal, Kartik E. Cholachgudda, Lohith V. Chamakura, Rajashekhar C. Biradar, and Geetha D. Devanagavi
IEEE
Stereo vision is an important technique in computer vision that extracts depth information from a pair of cameras. Depth estimation has numerous applications across various fields, including autonomous vehicles, augmented reality (AR) and virtual reality (VR), robotics, 3D reconstruction, surveillance and security, and medical imaging. In this research, the authors develop and test a low-cost real-time stereo vision system, JetVision, that uses two Raspberry Pi cameras, a Jetson computing platform and CUDA programming. The algorithm developed for JetVision efficiently analyzes the differences between the images captured by each camera to estimate the depth of objects in the scene using the Semi-Global Matching (SGM) method. To achieve real-time performance, the algorithm is implemented using CUDA programming, which enables efficient parallel processing on the Jetson's GPU. The algorithm is tested on a dataset of real-world images captured by JetVision, and its results are compared against a high-grade Intel® RealSense™ depth camera and ground-truth depth information. The results demonstrate that JetVision accurately estimates depth information in real time at frame rates ranging from 15 to 28 FPS. This research contributes to the field of computer vision by demonstrating the potential of stereo vision algorithms in real-world applications using cost-effective hardware and efficient parallel processing.
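As a rough illustration of the disparity-to-depth pipeline described above, the sketch below uses OpenCV's StereoSGBM, a CPU variant of semi-global matching, rather than the authors' CUDA implementation; the file names, focal length and baseline are placeholder assumptions, not JetVision's calibration values.

```python
import cv2
import numpy as np

# Load a rectified stereo pair (placeholder file names).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Semi-Global Block Matching: OpenCV's CPU take on the SGM method.
sgm = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=64,      # search range; must be divisible by 16
    blockSize=5,
    P1=8 * 5 * 5,           # smoothness penalty for small disparity changes
    P2=32 * 5 * 5,          # smoothness penalty for large disparity changes
    uniquenessRatio=10,
)

# compute() returns fixed-point disparities scaled by 16.
disparity = sgm.compute(left, right).astype(np.float32) / 16.0

# Triangulate depth: Z = f * B / d (focal length in pixels, baseline in metres).
FOCAL_PX, BASELINE_M = 700.0, 0.06   # placeholder calibration values
valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = FOCAL_PX * BASELINE_M / disparity[valid]
```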
Kartik E. Cholachgudda, Rajashekhar C. Biradar, Kouame Yann Olivier Akansie, Aditya A. Sannabhadti, and Geetha D. Devanagavi
IEEE
High-throughput phenotyping using imaging techniques allows for non-destructive, automated assessments of plant morphological and physiological traits. Currently, these techniques require intricate hardware and software to generate factual data. This paper presents a unique data acquisition system, AgRECA (Agricultural REsearch CAmera), with hardware and software designed for high-throughput phenotyping using RGB, multispectral and thermal imaging together with environmental sensors. Owing to its compact size, AgRECA can be integrated into any ground-based platform or used as a standalone handheld device. The authors also propose a new data format, called NTME, that embeds all the information captured by AgRECA into a single file.
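The abstract does not describe NTME's actual layout; the hypothetical sketch below only illustrates the single-file idea of bundling multimodal frames and sensor readings into one record. Every field name and shape here is an assumption.

```python
import json
import numpy as np

def save_capture(path, rgb, multispectral, thermal, env_readings):
    """Bundle one multimodal capture into a single compressed file.

    Hypothetical illustration only; the actual NTME layout is not
    described in the abstract.
    """
    np.savez_compressed(
        path,
        rgb=rgb,                      # HxWx3 uint8 colour frame
        multispectral=multispectral,  # HxWxC per-band reflectance
        thermal=thermal,              # HxW radiometric temperatures
        env=json.dumps(env_readings), # environmental sensor readings
    )

save_capture(
    "capture_0001.npz",
    rgb=np.zeros((480, 640, 3), np.uint8),
    multispectral=np.zeros((480, 640, 5), np.float32),
    thermal=np.zeros((120, 160), np.float32),
    env_readings={"temp_c": 29.4, "rh_pct": 61.0, "lux": 5400},
)
```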
Shaikh F. Shahnoor, Kishanlal Suthar, Ravi Kumar, Manish Rathore, Rajashekhar C. Biradar, and Kartik Cholachgudda
ACM
Detection of threat elements during dangerous land missions such as rescue operations, bomb disposal, surveillance and reconnaissance using an unmanned ground vehicle (UGV) plays a significant role in technological warfare. Accurate and rapid detection of enemy arsenal and threats can help better plan and deploy military armaments while greatly reducing human casualties and economic losses. Unmanned ground systems with good maneuverability, superior wireless communication, and powerful multi-sensor and data processing capabilities can provide a significant advantage over the enemy. One of the key data processing tasks, land-warfare threat detection, is developed and tested in this paper. The goal is to identify and differentiate between threats in a warfare scenario. Four such threats have been defined and used to develop the detection algorithm for the study: soldiers, tanks, tents, and helicopters. Different object detection algorithms (ODAs) such as Single Shot Detector (SSD), CenterNet and Faster R-CNN based on pre-trained Convolutional Neural Networks were tested and compared. COCO evaluation metrics were used as performance parameters to evaluate each detection algorithm on the selected threats. The results show that the CenterNet ODA performs better in evaluation and inference than the other ODAs, obtaining the highest mean Average Precision of 85.89% and 93.75%, respectively, at 0.5 IoU with the ResNet101 V1 CNN architecture. The best trade-off between all performance parameters was again obtained using CenterNet ResNet101 V1 FPN. Inference was tested on data streamed from a Raspberry Pi-based UGV. It was concluded that such systems, given a strong communication system and a lighter version of the ODA, have a key role in future warfare. Further, researchers and engineers can use the work presented in this paper to develop robust detection and data processing models and incorporate them into various applications and domains.
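A minimal sketch of the inference step this workflow relies on is shown below, using a COCO-pretrained Faster R-CNN from torchvision as a stand-in; the paper's four threat classes (soldier, tank, tent, helicopter) would require custom training data, and the image path and score threshold are assumptions.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import to_tensor
from PIL import Image

# COCO-pretrained Faster R-CNN as a stand-in detector; not the paper's
# trained threat-detection model.
model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

# A single frame, e.g. one image streamed from the UGV (placeholder path).
image = to_tensor(Image.open("frame.jpg").convert("RGB"))
with torch.no_grad():
    detections = model([image])[0]

# Keep detections above a confidence threshold, as a deployment filter might.
keep = detections["scores"] > 0.5
boxes = detections["boxes"][keep]
labels = detections["labels"][keep]
```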
R. Lohith, Kartik E. Cholachgudda, and Rajashekhar C. Biradar
IEEE
Plant disease detection and management practices are a major concern in the agriculture sector. Automating the process of plant disease detection with acceptable accuracy and speed using computer-aided systems could enable early diagnosis while substantially reducing economic losses. Recent advancements in deep neural networks have allowed researchers to drastically improve the accuracy of image classification and recognition systems. This paper presents a comprehensive assessment of deep-learning approaches based on pre-trained convolutional neural network models and the PyTorch framework to classify disease infections in tomato plants. Models such as EfficientNet-B0, ResNext-50_32x4d and MobileNet-V2, which exhibit relatively improved performance under different trade-offs, were tested using images captured against natural backgrounds. Key performance indices were evaluated with varying hyperparameters during training and validation on a GPU-based system. The results show that ResNext-50_32x4d delivers the best accuracy of 90.14% (0.001 LR; batch size 8) with the right hyperparameter optimization, and MobileNet-V2 delivers the lowest loss of 0.356 (0.001 LR; batch size 16) during model validation for the given dataset and system constraints. ResNext-50_32x4d also performs better in inference than the other two models tested. The assessment performed in this paper will help researchers and developers select an appropriate model for precision agriculture and smart farming deployments.
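As a sketch of the transfer-learning setup this kind of assessment rests on, the snippet below adapts a pretrained ResNeXt-50 (32x4d) from torchvision to a disease classification task; the learning rate and batch size match the best configuration reported above, while the class count and optimizer choice are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 10  # assumption; depends on the tomato disease dataset used

# ImageNet-pretrained ResNeXt-50 (32x4d), the best-accuracy model reported.
model = models.resnext50_32x4d(weights="DEFAULT")
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # new classifier head

# Matches the best reported configuration (0.001 LR, batch size 8);
# the optimizer itself is an assumption.
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One gradient step over a batch of (images, labels)."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```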
Kartik E. Cholachgudda, Rajashekhar C. Biradar, Kouame Yann Olivier Akansie, Geetha D. Devanagavi, and Aditya A. Sannabhadti
IEEE
Early detection of plant disease is a challenging research topic in crop protection and precision agriculture. Progress in remote sensing, embedded sensors and AI has enabled researchers to develop innovative data processing algorithms and models to predict plant disease symptoms. However, continuous research is required to update existing solutions and develop new ones for detecting plant disease early in the infection. This requires extensive experimentation and time-lapse collection of data with different imaging sensors, platforms, plant samples, and environmental conditions before a robust model can be developed. This paper proposes a conceptual design of an apparatus that can be used for data collection from different imaging sensors to perform high-throughput phenotyping and analysis for early plant disease diagnosis. The paper includes two design concepts for two distinct data collection scenarios: (1) an on-field standalone device and (2) a laboratory desktop apparatus. Each design is explained with various operational and technical considerations, an electronic block diagram and a mechanical 3D sketch. The paper aims to provide a generic design of an apparatus that enables researchers worldwide to carry out phenotypic studies and develop innovative solutions that benefit the end user, i.e., the farmer, and reduce global economic and environmental losses.
Kartik E. Cholachgudda, Rajashekhar C. Biradar, Kouame Yann Olivier Akansie, R. Lohith, and Aras Amruth Raj Purushotham
IEEE
In recent years, automatic plant disease recognition has gained huge interest in academia and industry. It is considered one of the promising technologies in precision agriculture. With the advancement of deep neural networks (DNNs), it is possible to develop various solutions for plant disease recognition. This paper analyzes the feasibility of using CPU-based desktop computers and GPU-based cloud-hosted services as back-end systems to develop tomato leaf disease classification models. The paper conducts a comprehensive analysis of state-of-the-art DNN architectures proposed for image classification. For each DNN, various performance indices are measured, and the attributes of these indices and their combinations are analyzed and discussed. The results show that EfficientNet-B0 and MobileNet-V2 provide the best results under most circumstances compared to the other DNNs considered. Compared with CPU-based systems, GPU-based systems perform better in almost every analysis performed in this study. The experiments conducted in this paper will help researchers and practitioners select DNN architectures that better fit their resource constraints for practical deployment and applications.
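One way to reproduce a CPU-versus-GPU comparison of this kind is to time single-image inference on each device, as in the sketch below; the input size, warm-up count and run count are assumptions, not the paper's measurement protocol.

```python
import time
import torch
from torchvision import models

def mean_latency_ms(model, device, runs=50):
    """Average single-image inference latency (ms) on the given device."""
    model = model.to(device).eval()
    x = torch.randn(1, 3, 224, 224, device=device)
    with torch.no_grad():
        for _ in range(5):  # warm-up iterations before timing
            model(x)
        if device.type == "cuda":
            torch.cuda.synchronize()  # GPU kernels are asynchronous
        start = time.perf_counter()
        for _ in range(runs):
            model(x)
        if device.type == "cuda":
            torch.cuda.synchronize()
    return (time.perf_counter() - start) / runs * 1000

net = models.mobilenet_v2(weights="DEFAULT")
print("CPU:", mean_latency_ms(net, torch.device("cpu")), "ms")
if torch.cuda.is_available():
    print("GPU:", mean_latency_ms(net, torch.device("cuda")), "ms")
```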
Kartik E. Cholachgudda, Rajashekhar C. Biradar, and M. Lokanath
IEEE
The joint information provided by a depth sensor and a colour camera has many applications in the field of computer vision. These two imaging devices require preliminary calibration (both intrinsic and extrinsic) to be used for accurate measurements. In this paper, we propose a novel approach to calibrate the Intel Creative Senz3D depth sensor, which is based on Time-of-Flight (ToF) technology. Since identifying feature points in depth maps is difficult, we have designed a 2.5D checkerboard pattern with systematically placed holes that can be accurately detected both in the low-resolution depth maps of a ToF camera and in the high-resolution images of a colour camera. Based on the identified feature points, we establish an accurate correspondence between the world coordinate system and the camera coordinate system using the obtained intrinsic and extrinsic parameters of the respective cameras. Evaluation of the proposed method indicates an improvement of 27.13% in average pixel re-projection error with respect to the manufacturer's calibration.
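Detection of the custom 2.5D hole pattern is specific to the paper, but once corresponding board and image points exist, the standard OpenCV calibration step below recovers intrinsics and extrinsics and computes the per-point pixel re-projection error, the metric the improvement above is reported on. The function and argument names here are an illustrative sketch, not the authors' code.

```python
import cv2
import numpy as np

def calibrate(obj_points, img_points, image_size):
    """Intrinsic calibration plus mean pixel re-projection error.

    obj_points: list of (N, 3) float32 board coordinates per view;
    img_points: list of (N, 2) float32 detected feature locations per view.
    With the paper's 2.5D pattern these would be hole centres found in both
    depth maps and colour images (that detection is not shown here).
    """
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, image_size, None, None
    )
    # Average per-point re-projection error in pixels.
    total_err, total_pts = 0.0, 0
    for obj, img, rvec, tvec in zip(obj_points, img_points, rvecs, tvecs):
        proj, _ = cv2.projectPoints(obj, rvec, tvec, K, dist)
        diffs = img.reshape(-1, 2) - proj.reshape(-1, 2)
        total_err += np.sum(np.linalg.norm(diffs, axis=1))
        total_pts += len(obj)
    return K, dist, total_err / total_pts
```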