Research Scholar, School of Information Technology and Engineering,
Vellore Institute of Technology
Machine Learning, Computer vision, Image processing, Thermal imaging
Mohamad Mulham Belal and Divya Meena Sundaram
IOS Press
Security defenses that cannot match sophisticated adversary tools leave the cloud an open environment for attacks and intrusions. In this paper, an intelligent protection framework for intrusion detection in a cloud computing environment, based on a covariance matrix self-adaptation evolution strategy (CMSA-ES) and multi-criteria decision-making (MCDM), is proposed. The framework constructs an optimal intrusion detector using the CMSA-ES algorithm, which tunes the best parameter set for the attack detector. Moreover, the framework uses MEREC-VIKOR, a hybrid standardized evaluation technique that generates the framework's performance metrics (S, R, and Q) as a combination of multiple conflicting criteria. The framework is evaluated for attack detection on the CICIDS 2017 dataset. The experiments show that it detects cloud attacks accurately with low S (utility), R (regret), and Q (integration of S and R). The framework is also analyzed against several evolutionary algorithms, namely GA, IGASAA, and CMA-ES, and the performance analysis demonstrates that the CMSA-ES-based framework converges faster than all of them. The outcomes further demonstrate that the proposed model is comparable to state-of-the-art techniques.
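The S, R, and Q metrics follow the standard VIKOR aggregation; a minimal Python sketch, assuming benefit-type criteria and externally supplied weights (the MEREC weighting step that produces those weights is omitted):

```python
import numpy as np

def vikor(matrix, weights, v=0.5):
    """Compute VIKOR utility (S), regret (R), and compromise (Q) scores.

    matrix : (alternatives x criteria) array; all criteria are
             benefit-type here for simplicity.
    weights: criterion weights summing to 1 (MEREC would supply these).
    v      : trade-off between group utility and individual regret.
    """
    f_best = matrix.max(axis=0)    # ideal value per criterion
    f_worst = matrix.min(axis=0)   # anti-ideal value per criterion
    norm = (f_best - matrix) / (f_best - f_worst)  # distance to ideal
    S = (weights * norm).sum(axis=1)  # group utility (lower is better)
    R = (weights * norm).max(axis=1)  # individual regret
    Q = v * (S - S.min()) / (S.max() - S.min()) \
        + (1 - v) * (R - R.min()) / (R.max() - R.min())
    return S, R, Q
```

The best alternative is the one with the lowest Q; the sketch assumes no criterion is constant across alternatives (which would zero a denominator).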
Mohammed Abdulmajeed Moharram and Divya Meena Sundaram
Elsevier BV
S Divya Meena, G Sai Shankar Mithesh, Ruchitha Panyam, Mandhadapu Samsritha Chowdary, Vamsi Suhas Sadhu, and J Sheela
IEEE
The metaverse is a single, shared, immersive 3D virtual space where people can interact with one another and experience life in ways that are not possible in the real world. The metaverse is not just an emerging technology currently in the hype cycle: it builds on years of research in immersive interactivity and artificial intelligence and will significantly alter education and other sectors. Recent techniques in metaverse-based education, such as virtual reality (VR), augmented reality (AR), and 3D simulations, face challenges related to accessibility, privacy, and existing technological limitations. The objective of this work is to leverage the metaverse in education, enhancing interactive learning experiences and addressing these challenges to promote inclusive educational practices. The work discusses how the metaverse relates to education and learning, including possible applications and prospects illustrated by a few case studies.
Mohamad Mulham Belal and Divya Meena Sundaram
Institute of Electrical and Electronics Engineers (IEEE)
In recent studies, convolutional neural networks (CNNs) are mostly used as dynamic techniques for visualization-based malware classification and detection. Though the vision transformer (ViT) has proved its efficiency in image classification, only a few earlier studies have developed a ViT-based malware classifier. This paper proposes a butterfly construction-based vision transformer (B_ViT) model for visualization-based malware classification and detection. B_ViT has four phases: (1) image partitioning and patch embedding; (2) local attention; (3) global attention; and (4) training and malware classification. B_ViT is an enhanced ViT architecture that supports the parallel processing of image patches and captures local and global spatial representations of malware images. B_ViT is a transfer-learning-based model that uses a ViT model pre-trained on the ImageNet dataset to initialize the training parameters of the transformers. Four B_ViT variants are evaluated on grayscale malware images collected from the MalImg and Microsoft BIG datasets or converted from portable executable imports. The experiments show that B_ViT variants outperform the Input Enhanced vision transformer (IEViT) and ViT variants, achieving accuracies of 99.49% and 99.99% for malware classification and detection respectively. The experiments also show that B_ViT is time-effective for malware classification and detection, with average speed-ups of B_ViT variants over IEViT and ViT variants of 2.42 and 1.81 respectively. The analysis proves the efficiency of texture-based malware detection as well as the resilience of B_ViT to polymorphic obfuscation. Finally, the proposed B_ViT-based malware classifier outperforms CNN-based malware classification methods as well.
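Phase (1), image partitioning and patch embedding, follows the standard ViT recipe; a minimal NumPy sketch, with a hypothetical random projection matrix standing in for the learned embedding:

```python
import numpy as np

def embed_patches(image, patch, W_proj):
    """Split a square grayscale malware image into non-overlapping patches
    and linearly project each flattened patch to an embedding vector."""
    H, Wd = image.shape
    assert H % patch == 0 and Wd % patch == 0
    n = H // patch
    # (n, patch, n, patch) -> (n, n, patch, patch) -> (n*n, patch*patch)
    patches = (image.reshape(n, patch, n, patch)
                    .transpose(0, 2, 1, 3)
                    .reshape(n * n, patch * patch))
    return patches @ W_proj  # (num_patches, d_model)

rng = np.random.default_rng(0)
img = np.arange(64.0).reshape(8, 8)        # toy 8x8 "malware image"
W = rng.normal(size=(16, 32)) * 0.02       # 4x4 patches -> 32-dim tokens
tokens = embed_patches(img, 4, W)
print(tokens.shape)                        # (4, 32)
```

In the real model the projection is learned and the token sequence then flows into the local- and global-attention phases; position embeddings are also omitted here.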
S Divya Meena, Chinta Sai Siri, Paruchuri Sindhura Lakshmi, Nalukurthi Sheena Doondi, and J Sheela
IEEE
The coronavirus has changed the entire world. Studies indicate that, to minimize the spread of the virus, it is advisable to wear masks, maximizing safety and keeping the community safe by slowing down transmission. However, it is tedious and infeasible to manually check whether each and every person wears a mask. In that regard, technology offers digital and innovative solutions to this complex problem. This research study proposes a novel face mask detection algorithm. The Real Face Mask Detection dataset, consisting of 4,095 images (2,165 with masks and 1,930 without), is used to train and test the model with pre-trained architectures such as VGG19, besides proposing two different algorithms. The other models compared and tested are ResNet50 and MobileNetV2. These models are trained and tested in a Google Colaboratory environment with TensorFlow and Keras. A comparative study among these algorithms determines which is most suitable for the environment based on different parameters. The final model is applied to random images to check its accuracy.
S. Divya Meena, Katakam Ananth Yasodharan Kumar, Devendra Mandava, Kanneganti Bhavya Sri, Lopamudra Panda, and J. Sheela
Springer Nature Singapore
S. Divya Meena, Jahnavi Chakka, Srujan Cheemakurthi, and J. Sheela
Springer Nature Singapore
Mohammed Abdulmajeed Moharram and Divya Meena Sundaram
Springer Science and Business Media LLC
Mohamad Mulham Belal and Divya Meena Sundaram
Elsevier BV
Mohammed Abdulmajeed Moharram and Divya Meena Sundaram
SPIE-Intl Soc Optical Eng
Abstract. Hyperspectral images (HSIs) have recently been exploited in several domains, as they contain many contiguous, narrow, discriminative spectral bands. Dimensionality is a significant dilemma for HSIs: the many irrelevant, redundant, and highly correlated spectral bands lead to the Hughes phenomenon. To this end, we present an approach for selecting the most informative and relevant spectral bands for HSI dimensionality reduction using the Krill Herd (KH) algorithm. KH is a heuristic search method that seeks the global optimum within the search space and effectively avoids falling into local optima. An edge-preserving filter is then employed to extract spatial features while reducing noise and providing suitable smoothing, which improves classification performance. Finally, a support vector machine classifier is applied at the pixel level for HSI classification. The proposed work is compared with harmony search, the genetic algorithm, the bat algorithm, particle swarm optimization, and the firefly algorithm. The experimental results demonstrate outstanding performance, with overall accuracies of 96.54%, 98.93%, 99.78%, and 98.66% on four hyperspectral datasets: the Indian Pines, Pavia University, Salinas, and Botswana scenes, respectively.
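The band-selection idea can be sketched as a wrapper search over band subsets. The snippet below uses a simple Fisher-ratio separability fitness and an exhaustive loop that a metaheuristic such as KH would replace when the band count makes exhaustive search infeasible; both choices are illustrative assumptions, not the paper's exact formulation:

```python
import itertools
import numpy as np

def fisher_fitness(X, y, bands):
    """Separability of a candidate band subset: ratio of between-class to
    within-class variance, summed over the selected bands (higher is better)."""
    Xs = X[:, list(bands)]
    mu = Xs.mean(axis=0)
    classes = np.unique(y)
    between = sum(len(Xs[y == c]) * (Xs[y == c].mean(axis=0) - mu) ** 2
                  for c in classes)
    within = sum(((Xs[y == c] - Xs[y == c].mean(axis=0)) ** 2).sum(axis=0)
                 for c in classes)
    return float((between / (within + 1e-12)).sum())

def best_band_subset(X, y, k):
    """Exhaustive wrapper search for the best k-band subset. A metaheuristic
    like KH replaces this loop for hundreds of bands."""
    return max(itertools.combinations(range(X.shape[1]), k),
               key=lambda b: fisher_fitness(X, y, b))

# toy data: 6 bands, only bands 0 and 3 separate the two classes
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))
y = np.repeat([0, 1], 50)
X[y == 1, 0] += 4.0
X[y == 1, 3] += 4.0
print(best_band_subset(X, y, 2))  # the two discriminative bands
```

The selected bands would then feed the edge-preserving filtering and SVM stages described above.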
S. Divya Meena and L. Agilandeeswari
Springer Science and Business Media LLC
S Divya Meena and Agilandeeswari Loganathan
Springer Science and Business Media LLC
Animal-Vehicle Collision (AVC) is a predominant problem on both urban and rural roads and highways. Detecting animals on the road is challenging due to factors such as the fast movement of both animals and vehicles, highly cluttered environmental settings, noisy images, and occluded animals. Deep learning has been widely used for animal applications; however, it requires large training data, hence the dimensionality increases, leading to a complex model. In this paper, we present an animal detection system for mitigating AVC. The proposed system integrates sparse representation and deep features optimized with FixResNeXt. The deep features extracted from candidate parts of the animals are represented in a sparse form using a feature-efficient learning algorithm called the Sparse Network of Winnows (SNoW). The experimental results prove that the proposed system is invariant to viewpoint, partial occlusion, and illumination. On the benchmark datasets, the proposed system achieves an average accuracy of 98.5%.
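SNoW builds on the classic Winnow rule of multiplicative weight updates. A minimal sketch of plain Winnow on Boolean features (the network-of-classifiers structure and the deep-feature inputs used by SNoW are omitted):

```python
def winnow_fit(examples, n, alpha=2.0, epochs=10):
    """Classic Winnow: multiplicative weight updates over n Boolean features.
    Predict positive when the weighted sum reaches the threshold n."""
    w = [1.0] * n
    for _ in range(epochs):
        for x, label in examples:  # x: list of n {0,1} features
            score = sum(wi * xi for wi, xi in zip(w, x))
            pred = 1 if score >= n else 0
            if pred == 0 and label == 1:    # missed positive: promote
                w = [wi * alpha if xi else wi for wi, xi in zip(w, x)]
            elif pred == 1 and label == 0:  # false alarm: demote
                w = [wi / alpha if xi else wi for wi, xi in zip(w, x)]
    return w

def winnow_predict(w, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= len(w) else 0

# toy task: learn the disjunction x0 OR x2 over 4 features
data = [([1, 0, 0, 0], 1), ([0, 0, 1, 0], 1),
        ([0, 1, 0, 1], 0), ([0, 0, 0, 0], 0)]
w = winnow_fit(data, 4)
print([winnow_predict(w, x) for x, _ in data])  # [1, 1, 0, 0]
```

Winnow's mistake bound scales with the number of *relevant* features, which is why it suits the sparse feature representations described above.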
Divya Meena and L. Agilandeeswari
Springer Science and Business Media LLC
Divya Meena Sundaram and Agilandeeswari Loganathan
Springer Science and Business Media LLC
Divya Meena Sundaram and Agilandeeswari Loganathan
SPIE-Intl Soc Optical Eng
Abstract. With the advances in remote sensing, wild animals sprawling over a vast area can be easily and quickly captured using low-cost unmanned aerial vehicle imagery. We propose an aerial animal detection and counting network (DetCountNet) framework called FSSCaps-DetCountNet, using fuzzy soft sets (FSS) and a capsule network (CapsNet). Similarity measures based on FSS are used to discriminate the target animals from both nontargets and the background. Of particular interest for aerial images, CapsNet requires very little training data and is robust to rotation and affine transformations. With superpixel segmentation and attention maps, FSSCaps-DetCountNet works well under challenging image conditions, such as dense backgrounds with sparse animals and overlapping/cluttered animals. The model is trained and tested on benchmark aerial animal datasets, namely the aerial elephants and livestock datasets, with accuracy indices of 99.84% and 99.86%, respectively. The overall omission and commission errors are 0.02% and 0.03%, respectively. The experimental results and a comparative study with other state-of-the-art conventional models demonstrate the effectiveness and robustness of FSSCaps-DetCountNet for real-time animal detection and counting from aerial images.
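An FSS similarity measure of the kind used to separate targets from nontargets can be sketched as follows; the matching-function form below is one common choice, assumed for illustration rather than taken from the paper:

```python
def fss_similarity(F, G):
    """Similarity between two fuzzy soft sets given as membership tables:
    dicts mapping parameter -> list of membership grades in [0, 1].
    Uses the matching-function form 1 - sum|F-G| / sum(F+G)."""
    num = den = 0.0
    for e in F:
        for a, b in zip(F[e], G[e]):
            num += abs(a - b)
            den += a + b
    return 1.0 - num / den if den else 1.0

# hypothetical membership grades of two image regions for two parameters
target    = {"shape": [0.9, 0.2], "texture": [0.8, 0.1]}
candidate = {"shape": [0.8, 0.3], "texture": [0.7, 0.2]}
print(round(fss_similarity(target, candidate), 3))
```

A candidate region whose similarity to the target class exceeds a threshold would be kept as an animal detection; identical membership tables score 1.0.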
S. Divya Meena and L. Agilandeeswari
Springer Singapore
S. Divya Meena and L. Agilandeeswari
Institute of Electrical and Electronics Engineers (IEEE)
The automatic classification of animal images is an onerous task due to challenging image conditions, especially when it comes to animal breeds. In this paper, we build a semi-supervised-learning-based Multi-part Convolutional Neural Network (MP-CNN) that classifies 35,992 animal images from ImageNet into 27 classes of animals. The proposed model classifies animals at both the generic and fine-grained levels. Animal breeds are accurately classified using the MP-CNN with a hybrid feature extraction framework of a Fisher Vector-based stacked autoencoder. Furthermore, with semi-supervised pseudo-labels, the model also classifies new classes of unlabeled images. A modified Hellinger kernel classifier is used to re-train the misclassified classes of animals, thereby improving the performance obtained from the MP-CNN. The model is evaluated on varied tasks to analyze its performance in each case. The experimental results prove that the coalesced approach of MP-CNN with pseudo-labels can accurately classify animal breeds, achieving an accuracy of 99.95% with the proposed model.
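The pseudo-label step can be sketched as one round of self-training. The nearest-centroid classifier and margin-based confidence below are illustrative stand-ins for the MP-CNN and its prediction confidence:

```python
import numpy as np

def centroids(X, y):
    """One centroid per class in feature space."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict_with_margin(cents, X):
    """Return labels plus a confidence margin (distance gap between the
    two nearest centroids) for each sample."""
    labels = np.array(sorted(cents))
    d = np.stack([np.linalg.norm(X - cents[c], axis=1) for c in labels])
    order = np.argsort(d, axis=0)
    idx = np.arange(X.shape[0])
    return labels[order[0]], d[order[1], idx] - d[order[0], idx]

def pseudo_label_round(X_lab, y_lab, X_unlab, tau=1.0):
    """One round of self-training: adopt confident predictions on the
    unlabeled pool as pseudo-labels and refit the class centroids."""
    cents = centroids(X_lab, y_lab)
    pred, margin = predict_with_margin(cents, X_unlab)
    keep = margin > tau
    X_new = np.vstack([X_lab, X_unlab[keep]])
    y_new = np.concatenate([y_lab, pred[keep]])
    return centroids(X_new, y_new), int(keep.sum())
```

Repeating such rounds lets confidently classified unlabeled images enlarge the training set, which is how pseudo-labels extend the model to new classes of unlabeled data.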