@nagarjunauniversity.ac.in
ASSISTANT PROFESSOR
ACHARYA NAGARJUNA UNIVERSITY
Computer Science Applications, Computer Science, Artificial Intelligence
Scopus Publications
Kampa Lavanya, Y Sowmya Reddy, Donthireddy Chetana Varsha, Nerella Vishnu Sai, and Kukkadapu Lakshmi Meghana
Springer Nature Singapore
Kampa Lavanya, Juluru Jahnavi Sai Aasritha, Mohan Krishna Garnepudi, and Vamsi Krishna Chellu
Springer Nature Singapore
A. Sarvani, Yalla Sowmya Reddy, Y. Madhavi Reddy, R. Vijaya, and Kampa Lavanya
Springer Nature Singapore
N. Parashuram, K. Bhanu Nikitha, U. Jaya Sree, S. Lakshmi Prasanna, and K. Lavanya
Springer Nature Switzerland
Kampa Lavanya, Pemula Rambabu, G. Vijay Suresh, and Rahul Bhandari
Inderscience Publishers
K. Lavanya, G. V. Vijay Suresh, and Anu Priya Koneru
Springer Nature Singapore
G.N. Basavaraj, K. Lavanya, Y Sowmya Reddy, and B. Srinivasa Rao
Elsevier BV
Saddi Jyothi, Y. Sowmya Reddy, and K. Lavanya
Springer Nature Singapore
K. Lavanya, G. Raja Gopal, M. Bhargavi, and V. Akhil
Springer Nature Singapore
Lavanya K., L.S.S. Reddy, and B. Eswara Reddy
IGI Global
Multiple imputation (MI) is widely applied in processes that must handle large volumes of missing data. Multivariate data modelled with traditional statistical methods suffer greatly when pertinent data are inadequate, and the scarcity of high-dimensional multivariate data is a major hurdle for distributed-computing research, which concerns the analysis of parallel-input problems on cloud computing networks in general and the evaluation of high-performance computing in particular. Applying parallel multiple-imputation methods so that they both achieve high performance and scale to huge datasets is a difficult task. It requires a credible data layout and a decomposition strategy that partitions the workload with minimal data dependence, followed by moderate synchronization and/or low communication overhead so that parallel imputation methods scale to more processes. The present article proposes several novel approaches for better efficiency. First, it suggests a distributed serial-regression multiple imputation to improve the efficiency of the imputation task on high-dimensional multivariate normal data. Next, the process is run on three different parallel back ends: multiple imputation using the socket method to serve serial regression, the fork method to distribute work over workers, and the same experiments in a dynamic structure with a load-balancing mechanism. Finally, the set of distributed MI methods is used to experimentally analyse the amplitude of imputation scores across three probable scenarios in the range 1:500.
Further, the study observes that, owing to the efficiency of the imputation methods, data with missing rates from 10% (low) to 50% (high) are handled proportionately for sample sizes between 1,000 and 100,000. The experiments, conducted in a cloud environment, demonstrate that decent speedup can be achieved by reducing repetitive communication between processors.
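The regression-based imputation core described in the abstract above can be sketched as follows. This is a minimal illustration under simplifying assumptions (one incomplete variable, a single simple linear regression, Gaussian noise draws, and mean-only pooling), not the distributed socket/fork implementation the article evaluates; all names and data here are illustrative.

```python
import random
import statistics

def fit_simple_regression(xs, ys):
    """Ordinary least squares for y = a + b*x on the complete cases."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    a = my - b * mx
    resid_sd = statistics.pstdev([y - (a + b * x) for x, y in zip(xs, ys)])
    return a, b, resid_sd

def impute_once(x_obs, y_obs, x_missing, rng):
    """One stochastic regression imputation: fitted value plus residual noise."""
    a, b, sd = fit_simple_regression(x_obs, y_obs)
    return [a + b * x + rng.gauss(0.0, sd) for x in x_missing]

def multiple_impute(x_obs, y_obs, x_missing, m=5, seed=0):
    """Draw m completed versions of the missing y-values and pool them."""
    rng = random.Random(seed)
    draws = [impute_once(x_obs, y_obs, x_missing, rng) for _ in range(m)]
    # Rubin-style pooling of the point estimate: average over the m draws.
    pooled = [sum(col) / m for col in zip(*draws)]
    return draws, pooled

# Toy data: y is roughly 2 + 3x with noise; two y-values are missing.
x_obs = [0.0, 1.0, 2.0, 3.0, 4.0]
y_obs = [2.1, 5.2, 7.9, 11.2, 13.8]
x_missing = [1.5, 3.5]
draws, pooled = multiple_impute(x_obs, y_obs, x_missing, m=20)
print(pooled)  # each pooled value lies near 2 + 3*x
```

In a distributed setting like the one the abstract describes, the m independent `impute_once` calls are the natural unit of work to partition across workers, since each draw depends only on the observed data and a private random stream.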
Nowadays cancer, caused by abnormal cell growth, has become a deadly disease, and many researchers are working on its early prediction. Proper classification of cancer data demands the identification of a proper set of genes through analysis of genomic data. Most researchers use microarrays to identify cancerous genomes; however, such data are high-dimensional, with far more genes than samples, and contain many irrelevant features and noisy values. The classification technique used on such data strongly influences algorithm performance. This work considers a popular classification algorithm, logistic regression, for gene classification. Regularization techniques such as Lasso with the L1 penalty, Ridge with the L2 penalty, and a hybrid Lasso with the L1/2+2 penalty are used to minimize irrelevant features and avoid overfitting. However, these methods are sparse and parametric, are limited to linear data, and have not produced promising performance on high-dimensional genome data. To solve these problems, this paper presents an Additive Sparse Logistic Regression with Additive Regularization (ASLR) method to discriminate linear and non-linear variables in gene classification. The results depict that the proposed method is the best regularized method for classifying microarray data compared with the standard methods.
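The baseline techniques this abstract names, L1 (Lasso) and L2 (Ridge) regularized logistic regression, can be sketched in a self-contained way as below. This illustrates only the standard penalties, not the proposed ASLR method; the toy data, learning rate, and penalty strengths are all illustrative assumptions.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def soft_threshold(v, t):
    """Proximal operator of the L1 penalty: shrinks small weights to exactly 0."""
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

def fit_logistic(X, y, l1=0.0, l2=0.0, lr=0.1, epochs=2000):
    """Logistic regression via (proximal) gradient descent.

    The L2 term enters the gradient directly (Ridge); the L1 term is
    applied as a soft-threshold step after each update (Lasso), so
    weights on irrelevant features are driven to zero.
    """
    n, d = len(X), len(X[0])
    w = [0.0] * d
    b = 0.0
    for _ in range(epochs):
        gw = [0.0] * d
        gb = 0.0
        for xi, yi in zip(X, y):
            err = sigmoid(b + sum(wj * xj for wj, xj in zip(w, xi))) - yi
            for j in range(d):
                gw[j] += err * xi[j]
            gb += err
        for j in range(d):
            w[j] -= lr * (gw[j] / n + l2 * w[j])
            w[j] = soft_threshold(w[j], lr * l1)
        b -= lr * gb / n
    return w, b

# Toy data: only feature 0 is informative; feature 1 is pure noise.
rng = random.Random(1)
X = [[rng.gauss(0, 1), rng.gauss(0, 1)] for _ in range(200)]
y = [1 if xi[0] > 0 else 0 for xi in X]
w, b = fit_logistic(X, y, l1=0.05, l2=0.01)
print(w)  # the weight on the noise feature should be at or near zero
```

The genes-versus-samples imbalance the abstract describes is exactly where such penalties matter: with far more features than samples, the unpenalized likelihood can be driven arbitrarily high, while the L1 term selects a small feature subset.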
Lavanya K., L.S.S. Reddy, and B. Eswara Reddy
IGI Global
K. Lavanya, L. S. S. Reddy, and B. Eswara Reddy
Springer Singapore
K. Balaji and K. Lavanya
Springer International Publishing