@vnrvjiet.ac.in
Assistant Professor, CSE
VNR VJIET
Image processing, Remote sensing, Machine Learning, Deep Learning.
P Deepanramkumar, A Helen Sharmila, Niranchana Radhakrishnan, Devulapalli Sudheer, Jeethu V. Devasia, Ch. Pradeep Reddy, Gokul Yenduri, and N. Jaisankar
Institute of Electrical and Electronics Engineers (IEEE)
The advancement of 6G cognitive radio networks aims to reduce latency in rural and remote areas, yet very few studies have been conducted on this technology. This study therefore utilizes massive multiple-input, multiple-output (MIMO) technology for secure data transmission at 6G base stations. Blockchain technology authenticates IDs and maintains secure records for network users, with decentralization achieved through the chimp optimization algorithm. Spectrum availability is monitored using the Q-learning hidden sparse variate logistic regression model, and the channel-state information is predicted using the quasi-Newton iterative unscented Kalman filter algorithm. Additionally, beamforming is enhanced through cooperative strategies. Secure routing is facilitated by the golden eagle optimization-hyperelliptic curve cryptography algorithm, with data routed along paths determined by the Dijkstra algorithm. The MIMO-6G cognitive radio-based Internet-of-Things framework performs better than existing methods.
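As an illustration of the path-selection step mentioned in the abstract, the following minimal Python sketch runs Dijkstra's algorithm over a hypothetical base-station topology; the node names and link latencies are assumptions, not values from the paper.

import heapq

def dijkstra(graph, source, target):
    """Return (cost, path) for the cheapest route from source to target.

    graph: dict mapping node -> list of (neighbor, link_cost) pairs.
    """
    heap = [(0.0, source, [source])]
    visited = set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == target:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, link_cost in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(heap, (cost + link_cost, neighbor, path + [neighbor]))
    return float("inf"), []

# Hypothetical base-station topology with link latencies (ms).
topology = {
    "BS1": [("BS2", 4.0), ("BS3", 9.0)],
    "BS2": [("BS3", 2.0), ("BS4", 7.0)],
    "BS3": [("BS4", 3.0)],
    "BS4": [],
}
print(dijkstra(topology, "BS1", "BS4"))  # -> (9.0, ['BS1', 'BS2', 'BS3', 'BS4'])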
Devulapalli Sudheer, S. Nagini, Naga Sreenija Meka, Yasaswini Kolli, Anudeep Eloori, Nithish Kumar Chowdam, and Rushikesh Reddy Dorolla
CRC Press
Nilankar Bhanja, Akila A, Devulapalli Sudheer, Ashok Kumar, Pramit Brata Chanda, and Rakesh Dani
IEEE
Globally, pancreatic cancer is one of the principal causes of cancer death, largely because of the lack of reliable tools for its early detection. Automatic detection of pancreatic tumors from computed tomography is now widely used for their analysis and presentation, but conventional approaches can extract only low-level features. Malignant pancreatic tumors severely threaten the life span of affected patients, and categorizing tumors without human intervention is a genuinely challenging task. Image segmentation and classification also face practical complications, such as unbalanced classification accuracy, a heavy workload, and final outcomes that depend on the subjective judgment of the medical expert. In addition, precise prediction of pancreatic cancer can help clinical experts provide the best therapeutic schedule for patients at various stages. In this work, Region-Based Segmentation (RBS) is used to segment the input images of pancreatic cancer. For feature extraction, Particle Swarm Optimization (PSO) with a Convolutional Neural Network (CNN), the Cuckoo algorithm with a CNN, and a modified Cuckoo algorithm with a CNN are adopted. Results are evaluated in terms of accuracy, precision, recall, and execution time, and show that the proposed modified Cuckoo algorithm with CNN performs better in all aspects.
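To illustrate the kind of swarm search the abstract pairs with a CNN, here is a minimal particle swarm optimization sketch in NumPy; the toy quadratic objective is a stand-in, and plugging in a CNN validation loss is assumed rather than shown.

import numpy as np

def pso(objective, dim, n_particles=20, iters=50, w=0.7, c1=1.5, c2=1.5, bounds=(-5, 5)):
    rng = np.random.default_rng(0)
    lo, hi = bounds
    pos = rng.uniform(lo, hi, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()
    pbest_val = np.array([objective(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Standard velocity update: inertia + cognitive + social terms.
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, float(pbest_val.min())

# Example: minimize a toy quadratic standing in for a CNN validation loss.
best, best_val = pso(lambda x: float(np.sum(x ** 2)), dim=3)
print(best, best_val)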
Nilankar Bhanja, Akila A, Devulapalli Sudheer, Ashok Kumar, Pramit Brata Chanda, and Rakesh Dani
IEEE
Atmospheric air pollution is one of the key environmental problems. To determine the factors that contribute most to air pollution and to counter them in a timely manner, the air environment must be monitored continuously. Monitoring is currently carried out at stationary sources of pollutants; however, the share of pollution from motor-vehicle exhaust gases has increased. To obtain an objective picture, vehicle-related pollution must therefore also be monitored, which is extremely costly with the classical approach of deploying many gas analyzers. It is proposed to assess the state of the atmosphere indirectly, through calculations based on weather conditions, terrain, traffic intensity, and car models, from which the type and amount of emitted pollutants can be estimated. The article discusses the applicability of machine learning algorithms to the problem of predicting the state of air pollution. The main prediction models are reviewed, together with the effectiveness of their application, and model prediction time estimates are obtained for a fixed error value.
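A small sketch of the indirect-estimation idea: predicting a pollutant concentration from weather and traffic descriptors with a regression model. The feature names, the random-forest choice, and the synthetic data are assumptions for illustration, not details from the article.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.uniform(-10, 35, n),    # air temperature, C
    rng.uniform(0, 15, n),      # wind speed, m/s
    rng.uniform(0, 5000, n),    # traffic intensity, vehicles/hour
])
# Synthetic target: emissions rise with traffic and disperse with wind.
y = 0.02 * X[:, 2] / (1.0 + X[:, 1]) + rng.normal(0, 2, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
print("MAE:", mean_absolute_error(y_test, model.predict(X_test)))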
Rajakumar Krishnan, Arunkumar Thangavelu, Prabhavathy Panneer, Sudheer Devulapalli, Arundhati Misra, and Deepak Putrevu
Springer Science and Business Media LLC
Sudheer Devulapalli, Venkatesh B., and Ramasubbareddy Somula
IGI Global
This chapter investigates the pandemic crisis in various business fields such as real estate, restaurants, gold, and the stock market, and the importance of deep learning models for analysing business data to make future predictions that help overcome the crisis. Most recent research articles address intelligent business models for sustainable development and predict the growth rate after the pandemic crisis. The study is based on reputed journal articles and information from business magazines across various business domains. The best intelligent models for business data analysis are compared with a view to transforming business operations and the global economy, different deep learning applications in business data analysis are addressed, and deep learning models applied to descriptive, predictive, and prescriptive business analytics are investigated.
Devulapalli Sudheer, Jothiaruna N, Anupama Potti, Gangappa M, and Somula RamaSubbareddy
IEEE
A brain-computer interface (BCI) identifies electrical activity in the human brain using electroencephalography (EEG). EEG records this activity through electrodes placed on the scalp, and the recorded information can be used to classify different types of brain abnormalities. To extract information from the signal without losing any of it, several feature extraction methods based on deep learning concepts have been used: time-frequency distributions, the Fast Fourier Transform, eigenvector methods, the Wavelet Transform, autoregressive modelling, Independent Component Analysis, Principal Component Analysis, Empirical Mode Decomposition, the Hilbert-Huang Transform, and Local Discriminant Bases. The methods are compared to show how effectively each extracts features without losing information, and their classification accuracies are compared with one another. The study concludes that hybrid methods combining domain-specific and automated features show better performance.
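As a concrete instance of one of the listed extractors, the sketch below computes FFT-based band-power features from a single EEG channel in NumPy; the band edges and the synthetic test signal are illustrative assumptions, not values from the paper.

import numpy as np

def band_powers(signal, fs, bands={"delta": (0.5, 4), "theta": (4, 8),
                                   "alpha": (8, 13), "beta": (13, 30)}):
    """Sum spectral power inside each frequency band of one channel."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    return {name: float(power[(freqs >= lo) & (freqs < hi)].sum())
            for name, (lo, hi) in bands.items()}

# Synthetic 2-second "EEG" trace: a 10 Hz (alpha) tone plus noise.
fs = 256
t = np.arange(0, 2, 1.0 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.randn(t.size)
print(band_powers(eeg, fs))  # the alpha band should dominate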
Rajakumar Krishnan, Arunkumar Thangavelu, P. Prabhavathy, Devulapalli Sudheer, Deepak Putrevu, and Arundhati Misra
Emerald
Purpose: Extracting suitable features to represent an image based on its content is a very tedious task, especially in remote sensing, where high-resolution images contain a variety of objects on the Earth's surface. The Mahalanobis distance metric is used to measure the similarity between query and database images; the image with the lowest distance is indexed at the top as the most relevant to the query.
Design/methodology/approach: This paper develops an automatic feature extraction system for remote sensing image data. Haralick texture features based on the Contourlet transform are fused with statistical features extracted from QuadTree (QT) decomposition to form the feature set representing the input data. The extracted features retrieve similar images from large image datasets using an image-based query through a web-based user interface.
Findings: The retrieval system's performance has been analyzed using precision, recall, and F1 score. The proposed feature vector gives better performance, with 0.69 precision for the top 50 relevant retrieved results, than other existing multiscale feature extraction methods.
Originality/value: The main contribution of this paper is a texture feature vector in a multiscale domain that combines Haralick texture properties in the Contourlet domain with statistical features from QT decomposition. Only 207 features are required to represent the image, a much lower dimensionality than other texture methods, and the performance is superior to other state-of-the-art methods.
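A hedged sketch of the retrieval step only: GLCM (Haralick-style) texture features per image with scikit-image, then Mahalanobis-distance ranking of database images against a query. The Contourlet/QuadTree fusion from the paper is not reproduced, and the random "images" stand in for remote sensing tiles.

import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(image_u8):
    """image_u8: 2-D uint8 array. Returns a small texture feature vector."""
    glcm = graycomatrix(image_u8, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.concatenate([graycoprops(glcm, p).ravel() for p in props])

def mahalanobis_rank(query_vec, database_vecs):
    """Rank database feature vectors by Mahalanobis distance to the query."""
    cov_inv = np.linalg.pinv(np.cov(database_vecs, rowvar=False))
    diffs = database_vecs - query_vec
    dists = np.sqrt(np.einsum("ij,jk,ik->i", diffs, cov_inv, diffs))
    return np.argsort(dists), dists

# Hypothetical usage on random tiles; the first tile acts as the query.
rng = np.random.default_rng(1)
images = rng.integers(0, 256, size=(10, 64, 64), dtype=np.uint8)
feats = np.stack([glcm_features(im) for im in images])
order, dists = mahalanobis_rank(feats[0], feats)
print(order[:5])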
Sudheer Devulapalli, Anupama Potti, Rajakumar Krishnan, and Md. Sameeruddin Khan
Elsevier BV
P Srilatha, Somula Ramasubbareddy, and Devulapalli Sudheer
Springer Science and Business Media LLC
Sudheer Devulapalli and Rajakumar Krishnan
SPIE-Intl Soc Optical Eng
Abstract. Deep learning techniques have become increasingly popular for classifying large-scale image and video data. Remote sensing applications require robust search engines that retrieve similar information based on an example query rather than a tag-based query. Deep features can be extracted automatically by training on raw data without any domain-specific knowledge; however, the training time for massive multimedia datasets is high. Training complexity is reduced by using pre-trained GoogleNet weights for initial feature extraction, and a one-dimensional convolutional neural network (1D-CNN) is applied to fine-tune the feature vector and reduce its dimensionality. Information is lost when the input image is resized to the input size accepted by the pre-trained network. We propose a new feature set that integrates handcrafted features at detailed scales with deep features to improve the system's efficiency. The curvelet transform decomposes the image into coarse and detailed scales; Haralick texture features are extracted from the detail coefficients in four directions and fused with the fine-tuned deep features. The proposed feature set was assessed using standard performance metrics from the literature and achieved improved performance, with 89% accuracy for retrieval of the top 50 relevant results.
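A rough sketch of the pre-trained-feature step using torchvision's GoogLeNet (torchvision >= 0.13 assumed) with its classifier head removed, yielding a 1024-dimensional descriptor per image. The curvelet/Haralick fusion and the 1D-CNN fine-tuning from the paper are not shown, and the ImageNet preprocessing values are standard assumptions rather than settings from the paper.

import torch
import torch.nn as nn
from torchvision import models, transforms

model = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1)
model.fc = nn.Identity()          # drop the 1000-way classifier, keep 1024-d features
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def deep_features(pil_image):
    """Return a 1024-d GoogLeNet feature vector for one PIL image."""
    with torch.no_grad():
        batch = preprocess(pil_image).unsqueeze(0)   # (1, 3, 224, 224)
        return model(batch).squeeze(0)               # (1024,)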
Sudheer Devulapalli and Rajakumar Krishnan
SPIE-Intl Soc Optical Eng
Abstract. Image fusion is an important technique in remote sensing for improving visual interpretation and classification. Pansharpening is the procedure of fusing panchromatic (PAN) and multispectral images to produce images with high spatial and spectral resolution. Synthesized pansharpening is performed on Linear Imaging Self-Scanning Sensor III and Advanced Wide Field Sensor data, which are freely available from the National Remote Sensing Center. The Adaptive Neuro-Fuzzy Inference System (ANFIS) in the multiscale transform domain is evaluated for multisensor image fusion, and the methods are compared using various quality metrics. The computational cost of ANFIS with the wavelet, contourlet, shearlet, and curvelet transforms is investigated. The study shows that the curvelet-based ANFIS fusion technique outperforms state-of-the-art techniques. The application incorporates the missing spectral information into the high-spatial-resolution PAN image so that objects can be identified and regions highlighted clearly.
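This is not the ANFIS/curvelet method from the paper: it is a minimal Brovey-transform pansharpening sketch in NumPy, included only to illustrate the PAN + multispectral fusion setting. Inputs are assumed co-registered, equally sized, and scaled to [0, 1].

import numpy as np

def brovey_pansharpen(ms, pan, eps=1e-6):
    """ms: (H, W, bands) multispectral image; pan: (H, W) panchromatic image."""
    intensity = ms.mean(axis=2, keepdims=True)            # crude intensity proxy
    return np.clip(ms * (pan[..., None] / (intensity + eps)), 0.0, 1.0)

# Hypothetical 4-band scene (e.g. an upsampled multispectral tile) and a PAN image.
rng = np.random.default_rng(0)
ms = rng.random((128, 128, 4))
pan = rng.random((128, 128))
sharpened = brovey_pansharpen(ms, pan)
print(sharpened.shape)  # (128, 128, 4)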
Devulapalli Sudheer and Rajakumar Krishnan
ASTES Journal
D. Sudheer, R. SethuMadhavi, and P. Balakrishnan
Springer Singapore