KAMPA LAVANYA

@nagarjunauniversity.ac.in

ASSISTANT PROFESSOR
ACHARYA NAGARJUNA UNIVERSITY

RESEARCH, TEACHING, or OTHER INTERESTS

Computer Science Applications, Computer Science, Artificial Intelligence

17 Scopus Publications

Scopus Publications

  • IDS-PSO-BAE: The Ensemble Method for Intrusion Detection System Using Bagging–Autoencoder and PSO
    Kampa Lavanya, Y Sowmya Reddy, Donthireddy Chetana Varsha, Nerella Vishnu Sai, and Kukkadapu Lakshmi Meghana

    Springer Nature Singapore

  • A Customer Churn Prediction Using CSL-Based Analysis for ML Algorithms: The Case of Telecom Sector
    Kampa Lavanya, Juluru Jahnavi Sai Aasritha, Mohan Krishna Garnepudi, and Vamsi Krishna Chellu

    Springer Nature Singapore

  • A Multi-level Optimized Strategy for Imbalanced Data Classification Based on SMOTE and AdaBoost
    A. Sarvani, Yalla Sowmya Reddy, Y. Madhavi Reddy, R. Vijaya, and Kampa Lavanya

    Springer Nature Singapore

  • Developing a System Based on Block Chain Technology for e-Voting Mechanism
    N. Parashuram, K. Bhanu Nikitha, U. Jaya Sree, S. Lakshmi Prasanna, and K. Lavanya

    Springer Nature Switzerland

  • Gene expression data classification with robust sparse logistic regression using fused regularisation
    Kampa Lavanya, Pemula Rambabu, G. Vijay Suresh, and Rahul Bhandari

    Inderscience Publishers

  • Predictive Analysis of COVID-19 Data Using Two-Step Quantile Regression Method
    K. Lavanya, G. V. Vijay Suresh, and Anu Priya Koneru

    Springer Nature Singapore


  • Microarray Data Classification Using Feature Selection and Regularized Methods with Sampling Methods
    Saddi Jyothi, Y. Sowmya Reddy, and K. Lavanya

    Springer Nature Singapore

  • Visualizing Missing Data: COVID-2019
    K. Lavanya, G. Raja Gopal, M. Bhargavi, and V. Akhil

    Springer Nature Singapore

  • Distributed Based Serial Regression Multiple Imputation for High Dimensional Multivariate Data in Multicore Environment of Cloud
    Lavanya K, L.S.S. Reddy, and B. Eswara Reddy

    IGI Global
    Multiple imputation (MI) is predominantly applied in processes that involve large volumes of missing data. Multivariate data that follow traditional statistical models suffer greatly when pertinent data are inadequate, and insufficient high-dimensional multivariate data is the biggest hurdle for distributed computing research, which concerns the analysis of parallel imputation problems in cloud computing networks in general and the evaluation of high-performance computing in particular. In fact, it is a tough task to make parallel multiple imputation methods both perform well and scale to huge datasets. It is therefore essential to develop a credible data system and to use a decomposition strategy that partitions the workload with minimal data dependence, followed by moderate synchronization and/or low communication overhead, so that parallel imputation methods scale to more processes. The present article proposes several novel techniques for better efficiency. First, it suggests a distributed serial-regression multiple imputation to improve the efficiency of the imputation task on high-dimensional multivariate normal data. Next, the process is run with three different parallel back ends: multiple imputation using the socket method to serve serial regression, the fork method to distribute work over workers, and the same experiments in a dynamic structure with a load-balancing mechanism. Finally, the set of distributed MI methods is used to experimentally analyze the magnitude of imputation scores across three probable scenarios in the range 1:500.
Further, the study makes the important observation that, owing to the efficiency of the imputation methods, data with missing rates from 10% to 50%, low to high, are handled proportionately for datasets of 1,000 to 100,000 samples. The experiments, conducted in a cloud environment, demonstrate that a decent speedup can be achieved by reducing repetitive communication between processors.
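    The per-column regression-imputation idea in the abstract can be sketched roughly as follows. This is not the article's implementation: the helper names are hypothetical, a thread pool stands in for the socket/fork back ends the article describes, and the sketch assumes the predictor columns used for each regression are complete.

    ```python
    # Illustrative sketch: impute each incomplete column by regressing it on
    # the other columns over the fully observed rows, distributing the
    # per-column fits over a worker pool.
    import numpy as np
    from multiprocessing.pool import ThreadPool

    def impute_column(args):
        X, j = args                                # data matrix, target column
        miss = np.isnan(X[:, j])
        obs = ~miss
        others = np.delete(np.arange(X.shape[1]), j)
        # Least-squares fit with intercept on the observed rows.
        A = np.c_[np.ones(obs.sum()), X[np.ix_(obs, others)]]
        coef, *_ = np.linalg.lstsq(A, X[obs, j], rcond=None)
        # Predict the missing entries from the fitted regression.
        B = np.c_[np.ones(miss.sum()), X[np.ix_(miss, others)]]
        filled = X[:, j].copy()
        filled[miss] = B @ coef
        return j, filled

    def parallel_impute(X, n_workers=2):
        incomplete = [j for j in range(X.shape[1]) if np.isnan(X[:, j]).any()]
        X_out = X.copy()
        with ThreadPool(n_workers) as pool:        # stand-in parallel back end
            for j, col in pool.map(impute_column, [(X, j) for j in incomplete]):
                X_out[:, j] = col
        return X_out
    ```

    A real serial-regression MI scheme would iterate this column by column, draw multiple imputed datasets rather than one, and use process-based (socket or fork) workers to scale beyond a single machine.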

  • An additive sparse logistic regularization method for cancer classification in microarray data
    Zarqa University
    Nowadays cancer has become a deadly disease due to abnormal cell growth, and many researchers are working on its early prediction. Proper classification of cancer data demands the identification of a proper set of genes by analyzing the genomic data. Most researchers use microarrays to identify cancerous genomes; however, such data are high dimensional, with many more genes than samples, and contain many irrelevant features and noisy values. The classification technique applied to such data strongly influences the performance of the algorithm. A popular classification algorithm, logistic regression, is considered in this work for gene classification. Regularization techniques such as Lasso with the L1 penalty, Ridge with the L2 penalty, and hybrid Lasso with the L1/2+2 penalty are used to minimize irrelevant features and avoid overfitting. However, these methods are sparse parametric methods limited to linear data, and they have not produced promising performance when applied to high-dimensional genomic data. To solve these problems, this paper presents an Additive Sparse Logistic Regression with Additive Regularization (ASLR) method to discriminate linear and non-linear variables in gene classification. The results show that the proposed method is the best-regularized method for classifying microarray data compared to standard methods.
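    The L1-versus-L2 penalty contrast that the abstract builds on can be illustrated with a minimal gradient-descent sketch. This is not the ASLR method itself; `fit_logistic` and its parameters are illustrative names, and the L1 case uses a standard proximal (soft-thresholding) step, which is what drives irrelevant weights to exactly zero.

    ```python
    # Illustrative sketch: logistic regression with an L2 penalty (plain
    # gradient step) or an L1 penalty (proximal/ISTA soft-threshold step).
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def fit_logistic(X, y, penalty="l2", lam=0.1, lr=0.1, n_iter=500):
        n, d = X.shape
        w = np.zeros(d)
        for _ in range(n_iter):
            p = sigmoid(X @ w)
            grad = X.T @ (p - y) / n              # logistic-loss gradient
            if penalty == "l2":
                w -= lr * (grad + 2 * lam * w)    # ridge-style shrinkage
            else:
                w = w - lr * grad                 # gradient step, then
                w = np.sign(w) * np.maximum(      # soft-threshold: exact zeros
                    np.abs(w) - lr * lam, 0.0)
        return w
    ```

    On data where one feature is informative and another is pure noise, the L1 fit typically zeroes out the noise weight entirely, while the L2 fit only shrinks it, which is the feature-selection behavior the abstract attributes to Lasso-type penalties.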

  • Additive tuning lasso (At-lasso): A proposed smoothing regularization technique for shopping sale price prediction



  • A Study of High-Dimensional Data Imputation Using Additive LASSO Regression Model
    K. Lavanya, L. S. S. Reddy, and B. Eswara Reddy

    Springer Singapore

  • Recent trends in deep learning with applications
    K. Balaji and K. Lavanya

    Springer International Publishing


  • Framework for enhancing level of security to the ATM customers with DCT based palm print recognition