@sru.edu.in
SR University
B. Tech, M. Tech, Ph.D.
R. Suganya, L.M.I. Leo Joseph, and Sreedhar Kollem
Elsevier BV
Dontabhaktuni Jayakumar, Modugu Krishnaiah, Sreedhar Kollem, Samineni Peddakrishna, Nadikatla Chandrasekhar, and Maturi Thirupathi
MDPI AG
This study presents a novel approach to emergency vehicle classification that leverages a comprehensive set of informative audio features to distinguish between ambulance sirens, fire truck sirens, and traffic noise. A unique contribution lies in combining time domain features, including root mean square (RMS) energy and zero-crossing rate, which capture temporal characteristics such as signal energy changes, with frequency domain features derived from the short-time Fourier transform (STFT). These include spectral centroid, spectral bandwidth, and spectral roll-off, providing insight into the sound’s frequency content for differentiating siren patterns from traffic noise. Additionally, Mel-frequency cepstral coefficients (MFCCs) are incorporated to capture spectral information in a way that approximates human auditory perception. This combination captures both the temporal and spectral characteristics of the audio signals, enhancing the model’s ability to discriminate between emergency vehicles and traffic noise compared to using features from a single domain. A significant contribution of this study is the integration of data augmentation techniques that replicate real-world conditions, including the Doppler effect and noisy environments. This study further investigates the effectiveness of different machine learning algorithms applied to the extracted features, performing a comparative analysis to determine the most effective classifier for this task. This analysis reveals that the support vector machine (SVM) achieves the highest accuracy of 99.5%, followed by random forest (RF) and k-nearest neighbors (KNN) at 98.5%, while AdaBoost lags at 96.0% and long short-term memory (LSTM) reaches 93%. We also demonstrate the effectiveness of a stacked ensemble classifier built from these base learners, which achieves an accuracy of 99.5%.
Furthermore, this study conducted leave-one-out cross-validation (LOOCV) to validate the results, with SVM and RF achieving accuracies of 98.5%, followed by KNN at 97.0% and AdaBoost at 90.5%. These findings indicate the superior performance of advanced ML techniques in emergency vehicle classification.
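The time- and frequency-domain features described above can be sketched in a few lines of NumPy (an illustrative sketch, not the paper's implementation; the frame size and the 85% roll-off threshold are assumed defaults, and production pipelines typically use a library such as librosa):

```python
import numpy as np

def extract_features(signal, sr=22050, frame_len=2048):
    """Compute RMS, zero-crossing rate, spectral centroid, and
    spectral roll-off for a mono audio signal (float samples)."""
    # Time domain: signal energy and how often the waveform crosses zero.
    rms = np.sqrt(np.mean(signal ** 2))
    zcr = np.mean(np.abs(np.diff(np.sign(signal))) > 0)

    # Frequency domain: magnitude spectrum of one Hann-windowed STFT frame.
    frame = signal[:frame_len] * np.hanning(frame_len)
    mag = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(frame_len, d=1.0 / sr)

    # Spectral centroid: magnitude-weighted mean frequency.
    centroid = np.sum(freqs * mag) / np.sum(mag)
    # Spectral roll-off: frequency below which 85% of the magnitude lies.
    cumulative = np.cumsum(mag)
    rolloff = freqs[np.searchsorted(cumulative, 0.85 * cumulative[-1])]
    return {"rms": rms, "zcr": zcr, "centroid": centroid, "rolloff": rolloff}

# A pure 440 Hz tone: the spectral centroid should sit near 440 Hz.
t = np.linspace(0, 1, 22050, endpoint=False)
feats = extract_features(np.sin(2 * np.pi * 440 * t))
```

A classifier such as the SVM reported above would then be trained on vectors of these features (together with MFCCs and spectral bandwidth) extracted per audio clip.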
Sreedhar Kollem, Chandrasekhar Sirigiri, and Samineni Peddakrishna
Elsevier BV
Sreedhar Kollem
Springer Science and Business Media LLC
Y. Srikanth, Ch. Rajendra Prasad, S. Srinivas, Sreedhar Kollem, and Rajesh Thota
AIP Publishing
Ch. Rajendra Prasad, Srikanth Yalabaka, Sreedhar Kollem, Srinivas Samala, and P. Ramchandar Rao
AIP Publishing
Srinivas Samala, Ch. Rajendra Prasad, Sreedhar Kollem, Srikanth Yalabaka, and P. Ramchandar Rao
AIP Publishing
Sreedhar Kollem
Springer Science and Business Media LLC
M. Naresh, V. Siva Nagaraju, Sreedhar Kollem, Jayendra Kumar, and Samineni Peddakrishna
Elsevier BV
Ch. Rajendra Prasad, Pillalamarri Shivapriya, Naragani Bhargavi, Nagaraj Ravula, Supraja Lakshmi Devi Sripathi, and Sreedhar Kollem
AIP Publishing
Ch. Rajendra Prasad, J. Saikrishna, K. Akash, K. Sai Ganesh, Sreedhar Kollem, and Chakradhar Adupa
IEEE
Diabetes can affect the eyes and may cause diabetic retinopathy (DR). It occurs when the blood capillaries in the retina, the light-sensitive tissue at the back of the eye, are damaged, which can result in blindness. Because the disease progresses slowly and shows few signs in the early stages, early detection is difficult. Therefore, a fully automated system is required to support detection and screening at early stages. In this paper, an automated DR classification method using a modified VGG16 model is proposed. The proposed system employs 224 × 224 Gaussian-filtered DR dataset images. These images are pre-processed before being fed to the pre-trained VGG16 model, which performs the classification of the DR images. The proposed model achieves an accuracy of 91.11% and a loss of 20.17%. The proposed model helps physicians identify the specific DR class.
Dontabhaktuni Jayakumar, Simran Saikia, Shaik Yasmin Roshni, Samineni Peddakrishna, Sreedhar Kollem, M Naresh, and Modugu Krishnaiah
IEEE
The implementation of secure technology in intelligent vehicles is essential, particularly with lane detection playing a crucial role in autonomous driving systems for enhanced navigation and traffic management efficiency. This paper presents a methodology for optimizing a UNet-based model for lane detection. The UNet architecture is utilized for image segmentation tasks, with an extensive evaluation of model performance conducted using various metrics. The training of the model involves fine-tuning hyperparameters within a specified search space that includes learning rates, batch sizes, and dropout rates. By iteratively adjusting these parameters based on validation loss, we aim to identify the most effective hyperparameters that will enhance the model’s performance. Additionally, a five-fold cross-validation is conducted to ensure the model’s robustness and ability to generalize to different datasets. The final model is constructed based on the best hyperparameters obtained from the tuning process. Throughout the training process, the model demonstrates a high level of accuracy, achieving a training accuracy of 99.93% and a corresponding loss of 0.0031. Furthermore, the validation accuracy stands at an impressive 99.81% with a validation loss of 0.01. The model achieves a high accuracy of 99.78% on the test set, with an IoU score of 0.8861 and an F1 score of 0.9396. These metrics suggest that the model effectively learns and generalizes proficiently to new, unseen data.
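The hyperparameter search described above can be illustrated with a small grid search that keeps the configuration with the lowest validation loss (a minimal sketch: the search-space values are assumptions, and the toy `validation_loss` stands in for actually training and validating the UNet):

```python
import itertools

# Hypothetical search space mirroring the one described in the text.
search_space = {
    "learning_rate": [1e-2, 1e-3, 1e-4],
    "batch_size": [8, 16, 32],
    "dropout": [0.1, 0.3, 0.5],
}

def validation_loss(params):
    """Stand-in for training the model and measuring validation loss.
    A smooth toy objective with a known minimum, so the selection
    logic can be demonstrated without any actual training."""
    return (
        ((params["learning_rate"] - 1e-3) * 100) ** 2
        + (params["batch_size"] - 16) ** 2 / 256
        + (params["dropout"] - 0.3) ** 2
    )

def grid_search(space):
    """Evaluate every combination and keep the lowest-loss one."""
    keys = list(space)
    best_params, best_loss = None, float("inf")
    for values in itertools.product(*space.values()):
        params = dict(zip(keys, values))
        loss = validation_loss(params)
        if loss < best_loss:
            best_params, best_loss = params, loss
    return best_params, best_loss

best, loss = grid_search(search_space)
```

In practice each candidate would be scored by the mean validation loss across the five cross-validation folds rather than a single run.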
Sreedhar Kollem, Samineni Peddakrishna, P Joel Josephson, Sridevi Cheguri, Garaga Srilakshmi, and Y Rama Lakshmanna
International Journal of Experimental Research and Review
Image denoising and segmentation play a crucial role in computer graphics and computer vision. A good image-denoising method must effectively remove noise while preserving important boundaries. Various image-denoising techniques have been employed to remove noise, but complete elimination is often impossible. In this paper, we utilize Partial Differential Equation (PDE) and generalised cross-validation (GCV) within Adaptive Haar Wavelet Transform algorithms to effectively denoise an image, with the digital image serving as the input. After denoising, the image is segmented using the Histon-related fuzzy c-means algorithm (H-FCM), with the processed image serving as the output. The proposed method is tested on images exposed to varying levels of noise. The performance of image denoising and segmentation techniques is evaluated using metrics such as Peak Signal-to-Noise Ratio (PSNR) of 77.42, Mean Squared Error (MSE) of 0.0011, and Structural Similarity Index (SSIM) of 0.7848. Additionally, segmentation performance is measured with a sensitivity of 99%, specificity of 98%, and an accuracy of 98%. The results demonstrate that the proposed methods outperform conventional approaches in these metrics. The implementation of the proposed methods is carried out on the MATLAB platform.
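The denoising metrics reported above (PSNR and MSE) are straightforward to compute; a minimal NumPy sketch, independent of the paper's MATLAB implementation:

```python
import numpy as np

def mse(ref, test):
    """Mean squared error between two images (float arrays in [0, 1])."""
    return np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)

def psnr(ref, test, peak=1.0):
    """Peak signal-to-noise ratio in decibels."""
    err = mse(ref, test)
    return float("inf") if err == 0 else 10 * np.log10(peak ** 2 / err)

# Synthetic example: a clean image plus Gaussian noise.
rng = np.random.default_rng(0)
clean = rng.random((64, 64))
noisy = np.clip(clean + rng.normal(0.0, 0.05, clean.shape), 0.0, 1.0)
```

SSIM additionally compares local luminance, contrast, and structure between the two images and is commonly computed with scikit-image's `structural_similarity`.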
R. Suganya, Leo Joseph, and Sreedhar Kollem
IEEE
Energy storage systems in electric vehicles face constraints tied to critical operating parameters, alongside challenges such as limited charging infrastructure, constrained energy density (which restricts driving range), and battery degradation. The proposed system studies the energy storage capability of lithium-ion batteries by considering three parameters: current, voltage, and temperature. The model is simulated in MATLAB/Simulink, and the interplay of the three parameters is examined through graphical analysis. Keeping these parameters within their ideal limits, without exceeding them, improves energy storage capacity and yields superior results, which is also essential for sustainable renewable energy sources and grid applications.
Ch. Rajendra Prasad, Kodakandla Srividya, Kaparthi Jahnavi, Teppa Srivarsha, Sreedhar Kollem, and Srikanth Yelabaka
IEEE
Brain tumors are critical malignancies that develop as a result of aberrant cell division. Typically, tumor classification involves a biopsy, which is conducted only after brain surgery. Technological advances have enabled physicians to use medical imaging to diagnose a wide range of conditions. In this project, we propose a comprehensive CNN method for the detection and classification of brain tumors. For experimentation, we used the SARTAJ, Br35H, and Figshare datasets. The proposed model outperforms other traditional methods in terms of accuracy, recall, F1 score, and precision. This research contributes to the ongoing efforts to enhance the capabilities of medical imaging and paves the way for more accurate and efficient brain tumor analysis.
Ch. Rajendra Prasad, Gaddam Bilveni, Bhukya Priyanka, Chinthapally Susmitha, Dubasi Abhinay, and Sreedhar Kollem
IEEE
Skin cancer is considered a very perilous kind of cancer and is recognized as a significant contributor to global mortality rates. Timely identification of skin cancer offers a chance to reduce the cumulative death rate. The primary method of diagnosing skin cancer predominantly relies on visual inspection, a technique known to have limited accuracy. Deep-learning algorithms have been proposed to assist physicians in promptly and accurately detecting skin cancers. This paper presents skin cancer prediction with Deep Transfer Learning (DTL), using EfficientNet-B3 as the DTL model. The dataset utilized in this study was obtained from the Kaggle skin cancer dataset. Before being fed to the modified EfficientNet-B3, the data undergo preprocessing and augmentation, including rescaling and random adjustments to brightness and/or contrast within a range of ±20%. Of the data, 80% is used for training and 20% for testing. The proposed model's training accuracy is around 98.64%, and its validation accuracy is approximately 90.6%.
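The ±20% brightness/contrast augmentation mentioned above can be sketched as follows (an illustrative NumPy version; the paper's exact augmentation pipeline is not specified, so the contrast-about-the-mean formulation is an assumption):

```python
import numpy as np

rng = np.random.default_rng(42)

def augment(image, limit=0.2):
    """Randomly scale brightness and contrast by up to ±20%,
    matching the pre-processing range described above.
    `image` is a float array scaled to [0, 1]."""
    brightness = 1.0 + rng.uniform(-limit, limit)
    contrast = 1.0 + rng.uniform(-limit, limit)
    mean = image.mean()
    out = (image - mean) * contrast + mean   # contrast scaling about the mean
    out = out * brightness                   # brightness scaling
    return np.clip(out, 0.0, 1.0)            # keep values in valid range

sample = rng.random((224, 224, 3))  # stand-in for an RGB input image
augmented = augment(sample)
```

In a real training pipeline this transform would be applied on the fly to each training batch, so the model sees a slightly different version of every image on every epoch.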
Srinivas Samala, Udutha Sahithi, Avunoori Bharath Kumar, Odela Sravan Kumar, Veladandi Ramya Sri, Ch. Rajendra Prasad, and Sreedhar Kollem
IEEE
The agricultural industry is increasingly adopting Deep Learning methodologies to tackle obstacles related to weed identification and categorization, with the ultimate goal of enhancing crop productivity. However, the complexity stems from the striking similarity in colours, forms, and textures between weeds and crops, specifically when they are in the process of growing. Automated and precise weed identification is of the utmost importance to minimize agricultural losses and maximize the use of resources. The analysis of the literature under review enhances comprehension of the obstacles, remedies, and prospects associated with weed identification and categorization via CNN models. To address these obstacles, we have devised a solution that entails the construction and refinement of a customized Convolutional Neural Network model. The experiment employs the Four-class weed dataset obtained from Kaggle and utilizes the Adaptive Moment Estimation optimizer during the training process. The accuracy of 96.58% is demonstrated by the proposed model in accurately identifying and categorizing weeds in the fields.
Sreedhar Kollem, Pati Harika, Janagam Vignesh, Peddoju Sairam, Adunuthula Ramakanth, Samineni Peddakrishna, Srinivas Samala, and Ch. Rajendra Prasad
IEEE
The multimodal MRI scans described in this article are used to categorize brain tumors based on their location and size. Brain tumors need to be categorized in order to assess them and choose the appropriate course of treatment for each class. Many different imaging methods are used to detect brain tumors; however, MRI is commonly used because it does not use ionizing radiation and generates better images. Deep learning (DL), a branch of machine learning, has recently demonstrated impressive results, particularly in segmentation and classification tasks. This paper proposes a convolutional neural network-based DL model that uses transfer learning and EfficientNet to classify various kinds of brain tumors using publicly accessible datasets. The model divides tumors into three categories: glioma, meningioma, and pituitary tumor. Compared to conventional deep learning techniques, the suggested approach produces superior results. The implementation is carried out on the Python platform.
Srinivas Samala, Aakash Sreeram, Lakshmi Sree Vindhya Sarva, Sreedhar Kollem, Kedhareshwar Rao Vanamala, and Chandrashekar Valishetti
IEEE
Due to its aggressiveness and the difficulty of detecting it in time, lung cancer is a leading cause of cancer-related deaths and is often detected at an advanced stage. Although early detection is a significant challenge, it is essential for patient survival. Chest radiographs and computed tomography scans are the first-line diagnostics; however, benign nodules can lead to incorrect diagnoses. Early on, it is especially difficult to differentiate benign nodules from malignant ones because their characteristics are extremely similar. To address this problem, a novel AdaBoost-SVM model is proposed to improve the accuracy of malignant nodule diagnosis. The model is trained on a dataset sourced from Kaggle. The proposed model exhibits a remarkable accuracy rate of 97.96%, surpassing the performance of conventional SVM methods. This development demonstrates the potential for enhanced precision and dependability in the crucial initial phases of lung cancer diagnosis.
Sreedhar Kollem, Kodari Poojitha, Naroju Brahma Chary, Pulluri Saicharan, Kampelly Anvesh, Samineni Peddakrishna, and Ch. Rajendra Prasad
IEEE
The viability of agriculture and the security of the world's food supply are seriously threatened by plant diseases. Detecting these diseases promptly and accurately is crucial for effective disease control and minimizing crop output losses. Deep learning algorithms have recently shown promise as a method for accurately and automatically classifying plant diseases. This research presents an innovative deep-learning framework for plant disease classification that combines transfer learning with customized convolutional neural networks (CNNs). The proposed framework comprises three main phases: data pre-processing or transfer learning, feature extraction, and disease classification. Through this method, plant diseases can be identified with precision and automation across diverse plant species and disease types, facilitating more effective disease management and safeguarding the security of the global food supply. Comparative analysis indicates that the proposed method outperforms traditional approaches, yielding superior results.
Sreedhar Kollem, Ch. Rajendra Prasad, J. Ajayan, Sreejith S., LMI Leo Joseph, and Patteti Krishna
Bentham Science Publishers Ltd.
Background: In image processing, image segmentation is a more challenging task due to different shapes, locations, image intensities, etc. Brain tumors are one of the most common diseases in the world. So, the detection and segmentation of brain tumors are important in the medical field. Objective: The primary goal of this work is to use the proposed methodology to segment brain MRI images into tumor and non-tumor segments or pixels. Methods: In this work, we first selected the MRI medical images from the BraTS2020 database and transferred them to the contrast enhancement phase. Then, we applied thresholding for contrast enhancement to enhance the visibility of structures like blood vessels, tumors, or abnormalities. After the contrast enhancement process, the images were passed to the image denoising phase, in which a fourth-order partial differential equation was used for image denoising. After the image denoising process, these images were passed on to the segmentation phase. In this phase, we used an elephant herding algorithm for centroid optimization and then applied multi-kernel fuzzy c-means clustering for image segmentation. Results: Peak signal-to-noise ratio, mean square error, sensitivity, specificity, and accuracy were used to assess the performance of the proposed methods. According to the findings, the proposed strategy produced better outcomes than the conventional methods. Conclusion: Our proposed methodology was reported to be a more effective technique than existing techniques.
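The clustering step can be illustrated with plain fuzzy c-means on 1-D intensities (a simplified sketch: the paper uses a multi-kernel variant with elephant-herding centroid optimization, which is not reproduced here):

```python
import numpy as np

def fuzzy_c_means(data, c=2, m=2.0, iters=50):
    """Standard FCM on a 1-D intensity array: alternate membership
    and centroid updates for a fixed number of iterations."""
    # Initialize centroids at evenly spaced quantiles of the data.
    centroids = np.quantile(data, np.linspace(0.25, 0.75, c))
    for _ in range(iters):
        # Membership: inversely proportional to distance^(2/(m-1)).
        dist = np.abs(data[None, :] - centroids[:, None]) + 1e-12
        u = 1.0 / dist ** (2.0 / (m - 1.0))
        u /= u.sum(axis=0)                  # memberships sum to 1 per pixel
        # Centroids: membership-weighted means of the intensities.
        um = u ** m
        centroids = (um @ data) / um.sum(axis=1)
    return centroids, u

# Two intensity groups standing in for tumor vs. non-tumor pixels.
pixels = np.concatenate([np.full(100, 0.2), np.full(100, 0.8)])
centroids, u = fuzzy_c_means(pixels)
```

Each pixel is then assigned to the cluster with the highest membership, producing the tumor/non-tumor segmentation.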
Sandip Bhattacharya, Mohammed Imran Hussain, John Ajayan, Shubham Tayal, Louis Maria Irudaya Leo Joseph, Sreedhar Kollem, Usha Desai, Syed Musthak Ahmed, and Ravichander Janapati
Wiley
S. Sreejith, L.M.I. Leo Joseph, Sreedhar Kollem, V.T. Vijumon, and J. Ajayan
Elsevier BV
Sreedhar Kollem, Katta Ramalinga Reddy, and Duggirala Srinivasa Rao
Springer Science and Business Media LLC
Sreedhar Kollem, Katta Ramalinga Reddy, Ch. Rajendra Prasad, Avishek Chakraborty, J. Ajayan, S. Sreejith, Sandip Bhattacharya, L. M. I. Leo Joseph, and Ravichander Janapati
Wiley