@sru.edu.in
Assistant Professor, Department of Computer Science and Artificial Intelligence
SR University
M.Tech (CSE)
Computer Networks and Communications, Computer Engineering, Artificial Intelligence, Computer Science Applications
Veluru Chinnaiah, Vadlamani Veerabhadram, Ravi Aavula, and Srinivas Aluvala
Faculty of Electrical Engineering, Computer Science and Information Technology Osijek
We propose a deep learning-based process mining framework, PMiner, for the automatic detection of anomalies in business processes. Real-time applications such as e-commerce run thousands of business processes, and in the presence of concurrency these processes are prone to anomalies. If such anomalies are not detected and rectified, they cause severe damage to businesses in the long run. Our Artificial Intelligence (AI) enabled framework PMiner takes business process event logs as input and detects anomalies using a deep autoencoder, a technique well known for its ability to discriminate anomalies. We propose an algorithm, the Intelligent Business Process Anomaly Detector (IBPAD), to realize the framework. The algorithm learns from historical data and performs encoding and decoding procedures to detect business process anomalies automatically. Our empirical results on the BPI Challenge dataset, released by the IEEE Task Force on Process Mining, reveal that PMiner outperforms state-of-the-art methods in detecting business process anomalies. The framework helps businesses identify process anomalies and rectify them in time, supporting business continuity.
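The abstract does not spell out the autoencoder's internals. As a hedged illustration of the core idea of reconstruction-error anomaly scoring (using a linear autoencoder computed via SVD in place of the paper's deep autoencoder, and synthetic vectors in place of encoded event logs), the approach can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-in for encoded business-process traces: normal
# traces cluster in a low-dimensional subspace, anomalous ones do not.
normal = rng.normal(0.0, 1.0, size=(200, 2)) @ rng.normal(size=(2, 10))
anomalies = rng.normal(0.0, 3.0, size=(5, 10))
X = np.vstack([normal, anomalies])

# "Train" a linear autoencoder on the normal data: the optimal linear
# encoder/decoder pair is spanned by the top-k right singular vectors.
mu = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mu, full_matrices=False)
components = vt[:2]                    # encoder: project to 2-D latent space

def reconstruction_error(x):
    z = (x - mu) @ components.T        # encode
    x_hat = z @ components + mu        # decode
    return np.linalg.norm(x - x_hat, axis=1)

# Flag traces whose reconstruction error exceeds a high percentile of
# the errors observed on normal data.
threshold = np.percentile(reconstruction_error(normal), 99)
flags = reconstruction_error(X) > threshold
```

A deep autoencoder replaces the linear projection with learned nonlinear encode/decode networks, but the anomaly decision rule (thresholding reconstruction error) is the same.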
Goli Sunil, Srinivas Aluvala, Chinthala Sujitha, Akarapu Mahesh, Areefa, Kanegonda Ravi Chythanya, and Gadde Aruna
AIP Publishing
Vishali Sivalenka, Srinivas Aluvala, Khaja Mannanuddin, G. Sunil, J. Vedika, and V. Pranathi
AIP Publishing
Sarthak Malik, Shikha Agarwal, Srinivas Aluvala, Jyoti S. Bali, Rachit Garg, and Mohemmed Hussien
IEEE
The world is currently going through a huge transition towards digitalization and sustainability. Sustainability focuses on responsible resource management and environmental preservation, whereas digitalization entails the conversion of many processes and systems into connected, digital forms. Energy management, which aims to offer clean and affordable energy solutions while minimising waste and environmental damage, is one of the important sectors garnering significant attention in this context. Energy management involves the methodical optimisation of energy use in a variety of contexts, including commercial and industrial facilities and residential settings. The energy meter, a tool used to measure electricity, gas, or other forms of energy, is essential to this effort. Traditional energy meters are evolving into "smart" energy meters, which can transmit consumption data periodically or in real time in addition to measuring energy usage. This capability considerably increases the usefulness of energy meters for both consumers and energy suppliers. Industry 4.0 refers to the fourth industrial revolution, characterised by the incorporation of digital technologies into manufacturing and industrial processes. The Internet of Things (IoT), artificial intelligence (AI), and automation are key technologies advancing this transformation, and they are essential for improving productivity, sustainability, and efficiency across sectors. The IoT, a network of connected devices and sensors that can gather and transmit data, is a key component of improving energy management practices. IoT-enabled devices are crucial to the real-time monitoring and management of energy use: they can collect information from many sources, such as energy meters, and transmit it for analysis and well-informed decision-making.
The study presents an IoT-enhanced Automatic Meter Reading (AMR) system. AMR technology allows energy meters to be read remotely, eliminating the requirement for manual intervention. Here, IoT technology is incorporated into the AMR system to enhance its capabilities: wireless sensors and specialised communication protocols are integrated into the energy meters to enable real-time data monitoring and transfer.
Pallavi Raj, Poonam Rawat, Jitendra Singh, Shweta Pandey, Srinivas Aluvala, and Vikrant Pachouri
IEEE
Atreyi Pramanik, Ajay Singh, Gouri Rani, Shival Dubey, and Srinivas Aluvala
IEEE
Tuberculosis (TB) remains a significant global problem. Advances in smart technology (ST) create novel opportunities to reform TB control. This work explores the potential of ST for refining TB diagnostics and therapy. ST includes telemedicine, mobile health (mHealth) applications, wearable devices (WD), and data analytics, which together offer many options for improving TB control. Telemedicine permits remote consulting, empowering clinicians to reach patients in remote areas, make quick diagnoses, and recommend treatment. mHealth apps give individuals access to educational materials, remind them when it is time for medication, and offer self-monitoring tools that enhance adherence to therapy by keeping the patient involved throughout the course of treatment. WD support real-time (RT) monitoring of heart rate, blood pressure, physical activity, and other vital signs that are important for devising personalised treatment plans and for detecting the response to treatment early. Data analytics combined with AI can examine large amounts of patient data to identify patterns, forecast drug effects, and optimize treatment strategies, resulting in more efficient individualized care. These innovations in smart technology may serve as solutions for the challenges experienced in TB management, leading to improved patient outcomes and a reduction in the TB burden.
Priyam Agarwal, Siddharth Swami, Mohammed Ismail Iqbal, Divya Rawat, Lalit Mohan Joshi, and Srinivas Aluvala
IEEE
Culture plays a significant role in shaping audience behavior, especially in this digitalized era, where a large portion of any individual's day is spent online, building global integration among people belonging to different cultures. Cultural products and heritage sites are therefore equally important, as they hold a major part of any country's glorious and rich history and can earn it recognition at a global level. The main aim of this paper is to identify the latest advancements in Industry 4.0 enabling technologies, such as Artificial Intelligence (AI), Virtual Reality (VR), and big data, for innovatively designing digital intangible cultural products, along with measures for ensuring the long-term sustainability of Cultural Heritage Sites (CHS). The paper also gives future recommendations for the smooth management of CHS and for establishing a creative digital database design for them.
Chiranjit Dutta, Suveg Moudgil, Srinivas Aluvala, Virat Raj Saxena, K. Devi, and Jagendra Singh
IEEE
This study applies machine learning to predict rice crop production using sensor data on temperature, humidity, and water levels. The aim is to provide insights that optimize agricultural practices and enable data-driven decision-making in rice cultivation. Several machine learning models are compared on how well they predict rice crop production, measured by precision, accuracy, recall, and F1 score. The study found that the models predict yield quite satisfactorily, with Random Forest performing particularly well in terms of precision and accuracy. The exactness of the sensor data and the algorithm selection, along with subsequent model assessment, refinement, and the integration of larger datasets covering a broader set of environmental characteristics and crop yields, were found to be crucial in improving the precision of yield prediction. Additionally, the use of machine learning in farming has a number of benefits, such as enabling farmers to make informed decisions about crop management, irrigation, and yield optimization. Combining machine learning methodologies with sensor technology can optimize resource allocation, save water, and improve overall agricultural productivity. Further, this study facilitates the implementation of smart irrigation systems and sets the stage for data-driven decision-making in agriculture for better farming practices and food production worldwide.
Keerthi Kumar M, Parameshachari B D, Kay Hooi Keoy, Srinivas Aluvala, and Ammar Hameed Shnain
IEEE
An efficient Indian Sign Language (ISL) recognition system identifies sign language gestures to ease communication between the non-signer and signer communities. Sign language is a visual form of communication that exists wherever there is a hearing-impaired community, and it arises independently of the local spoken language. However, sign language recognition and translation face constraints due to variability in sign production between individuals and contextual ambiguity, which make it difficult for models to generalize and accurately interpret signs in various contexts. This research proposes Self-Attention Long Short-Term Memory with a Shape Autotuning Activation Function (SALSTM-SAAF) to recognize and translate sign language. Self-attention allows the model to focus on essential parts of the input, and the LSTM captures temporal dependencies, ensuring effective sequence learning. SAAF adapts to different input shapes, optimizing the learning process and enhancing the model's performance. Data augmentation and min-max normalization are applied to increase the dataset size and normalize the data. A ResNet is then used to extract features from the pre-processed data effectively. SALSTM-SAAF achieves a better accuracy of 99.87% compared to existing methods such as VGG19-Bidirectional LSTM (VGG19-BiLSTM).
E G Satish, Srinivas Aluvala, Hassan M. Al-Jawahry, G Vasukidevi, and M. Surya Bhupal Rao
IEEE
Accurate classification of wildlife habitats is necessary because habitats provide essential resources and conditions for wildlife populations. To improve this prediction task, this research develops a Machine Learning (ML) framework using remote sensing data. The data is first pre-processed using Adaptive Histogram Equalization (AHE), and spectral, spatial, and texture features are extracted. Features are then selected using Principal Component Analysis (PCA) and Recursive Feature Elimination (RFE), optimised using Particle Swarm Optimization (PSO), and classified using an Ensemble Machine Learning (EML) scheme built from Support Vector Machines (SVM), Random Forests (RF), and Artificial Neural Networks (ANN). The method shows strong capability to classify different kinds of wildlife habitats and demonstrates the value of integrating optimization methods with EML algorithms for solving ecological problems. The experimental results show that the proposed PSO-EML algorithm achieves better classification performance, with an accuracy of 99.61% and a precision of 97.39%, compared to existing methods such as Boosted Regression Trees (BRT) and hierarchical clustering in handling the high dimensionality and noisy features of ecological data.
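The abstract names PSO as the optimization step but gives no algorithmic detail. As a hedged illustration of the standard PSO update rule (minimizing a toy sphere function rather than the paper's feature-subset objective; all parameter values are illustrative defaults, not taken from the paper):

```python
import numpy as np

def pso(objective, dim, n_particles=30, iters=200, seed=0):
    """Minimal particle swarm optimization: each particle remembers its own
    best position, and the swarm shares a global best position."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-5, 5, size=(n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.apply_along_axis(objective, 1, pos)
    gbest = pbest[pbest_val.argmin()].copy()

    w, c1, c2 = 0.7, 1.5, 1.5   # inertia, cognitive, social weights
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.apply_along_axis(objective, 1, pos)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest

# Toy objective: the sphere function, minimized at the origin.
best = pso(lambda x: float(np.sum(x ** 2)), dim=3)
```

For feature optimization, the objective would instead score a candidate feature weighting by the ensemble's validation accuracy, with the same update loop.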
Srinivas Aluvala, Keshoju Bhargavi, Jula Deekshitha, Banoth Suresh, Gujja Nitesh Rao, and Athirajula Sravani
IEEE
Malaria significantly affects public health and is one of the most severe infectious diseases in the world. It is spread by Anopheles mosquitoes, which transmit the parasite to humans through their bites. To manage the disease and achieve the best possible treatment outcomes, accurate parasite identification is essential. A critical first step in the diagnosis and treatment of malaria is the traditional method of using a microscope to search blood samples for malarial parasites. Diagnosis by this approach is time-consuming and relies heavily on the examiner's expertise and experience. To improve the speed and accuracy of diagnosis, this study proposes a deep learning model for malarial parasite prediction. We report on a Convolutional Neural Network (CNN) model, the VGG-19 model, which detects malaria parasites with 97% accuracy from microscopic images of blood samples. The aim of this method is to enhance the efficacy and precision of diagnosis. The model has been trained on a set of blood smear images and is capable of accurately distinguishing between infected and uninfected samples. If this automated diagnostic method is successfully implemented and leads to early diagnosis and treatment, malaria may become less common in areas where it is endemic.
Hima Bindu Valiveti, Muntather Almusawi, E G Satish, Srinivas Aluvala, and E S Challaraj Emmanuel
IEEE
Weeds are one of the main problems in agriculture that affect crop yield. Accurate classification and recognition are major challenges in weeding because of the similar visual characteristics of plants and weeds. To solve this problem, this research integrates patch-based image recognition with a Deep Learning (DL) approach to improve the classification and recognition of weeds. The weed images are taken from the DeepWeeds dataset. For pre-processing, an Enhanced Super-Resolution Generative Adversarial Network (ESRGAN) is used to resize the images, which are then divided into small patches. The most relevant patches, selected using the Fast Fourier Transform (FFT) and a Laplacian Filter (LF), are used to train the DL model. The proposed Global Hybrid Attention mechanism with a DenseNet-169 model (GHA-DenseNet-169) is employed to classify weeds into their respective classes. The proposed method is evaluated using accuracy, recall, precision, and F1-score. It attains a high accuracy of 99.28%, recall of 99.26%, precision of 99.27%, and F1-score of 99.25%, which is greater than existing deep learning models such as DenseNet-201, VGG-SVM, adapted MobileNetV2, ResNet-SVM, Swin Transformer, and Two-stage Transfer Learning (ST-TSTL).
Mohanarangan Veerappermal Devarajan, Srinivas Aluvala, Vinaye Armoogum, S. Sureshkumar, and H T Manohara
IEEE
The Industrial Internet of Things (IIoT) is experiencing rapid growth and requires robust cyber-security measures, such as Anomaly Detection Systems (ADS) and signature-based detection systems, to protect against cyber-attacks. Sensors collect vast amounts of data from the environment, presenting functionality challenges for devices. To address this problem, many Network Intrusion Detection Systems (NIDS) have been developed to secure IIoT systems, but NIDS face challenges due to the complexity of the information collection required for threat detection. The proposed work therefore introduces a Recurrent rule-based Feature Selection (RFS) model for IIoT systems. The NSL-KDD and UNSW-NB15 datasets are used for the experiments, and relevant features are selected using a hybrid rule-based algorithm. The RFS model then classifies the data and predicts attacks. The proposed method outperforms existing methods, with accuracy rates of 99.0% and 98.9%, detection rates of 99.0% and 99.9%, and low false positive rates of 1.0% and 1.1% on the two datasets respectively.
Nishant Pritam, Sonal Malhotra, Srinivas Aluvala, Kanwarpartap Singh Gill, and Swati Devliyal
IEEE
This research assesses the efficacy of the Random Forest classifier in precisely detecting the likelihood of bankruptcy in financial datasets. Additionally, it investigates the use of the Synthetic Minority Over-sampling Technique (SMOTE) to address the widespread issue of data imbalance in bankruptcy prediction. The study examines the effectiveness of the Random Forest model in detecting bankruptcy and evaluates the impact of SMOTE on classification accuracy. The findings suggest that Random Forest holds significant promise for forecasting bankruptcy, while the use of SMOTE to address imbalanced data has a favourable effect, bolstering the dependability of financial risk assessment models. An accuracy rate of 97% is reached by taking into account a diverse variety of optimisation parameters. Future research on bankruptcy detection may have significant ramifications and applications across several industries; subsequent investigations may prioritise enhancing the precision and promptness of bankruptcy prediction models. Financial institutions, investors, and enterprises would find this approach advantageous for effectively handling and reducing the financial risks linked to prospective bankruptcies. In summary, future research on bankruptcy identification has the potential to contribute to breakthroughs in financial risk management, economic analysis, regulatory compliance, corporate governance, and the ethical use of artificial intelligence, providing significant knowledge and resources for individuals and organisations across many industries and sectors.
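SMOTE's core mechanism is interpolation between a minority sample and one of its nearest minority-class neighbours. A hedged, minimal sketch of that mechanism (not the imbalanced-learn implementation used in practice, and with a toy two-dimensional "bankrupt" class in place of real financial features):

```python
import numpy as np

def smote(minority, n_new, k=5, seed=0):
    """Generate synthetic minority samples: pick a random minority point,
    pick one of its k nearest minority neighbours, and create a new point
    at a random position on the segment between them."""
    rng = np.random.default_rng(seed)
    # pairwise distances within the minority class (self excluded)
    d = np.linalg.norm(minority[:, None] - minority[None, :], axis=2)
    np.fill_diagonal(d, np.inf)
    neighbours = np.argsort(d, axis=1)[:, :k]

    synthetic = np.empty((n_new, minority.shape[1]))
    for i in range(n_new):
        a = rng.integers(len(minority))
        b = neighbours[a, rng.integers(k)]
        gap = rng.random()                      # interpolation factor in [0, 1)
        synthetic[i] = minority[a] + gap * (minority[b] - minority[a])
    return synthetic

# Toy minority class: 5 "bankrupt" points in 2-D feature space
minority = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
new_samples = smote(minority, n_new=20, k=2)
```

The oversampled class (original plus synthetic points) is then passed to the classifier, so the model sees a balanced training set without duplicating any real sample verbatim.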
Muntather Almusawi, Srinivas Aluvala, S Trisheela, Mukesh Soni, and Revathi. R
IEEE
Fingerprint liveness detection is threatened by spoofing, which is the major threat to fingerprint-based biometric systems. The problem of forgery detection is well studied, and forged fingerprints have a huge impact on the outcomes of biometric-based security systems. This research proposes a Convolutional Neural Network (CNN) with a Convolutional Block Attention Module (CBAM) for detecting forgery in fingerprint images. The datasets used in this research are LivDet-2013 and LivDet-2015, which are pre-processed using the Circular Hough Transform (CHT). Features are then extracted using the Local Binary Pattern (LBP) method, which captures meaningful features. Detection and classification are performed by the CNN with CBAM, which focuses on the detected patterns and identifies forgeries with high accuracy. The proposed method attains 98.12% accuracy on LivDet-2013 and 97.05% accuracy on LivDet-2015, compared to existing methods such as Hybrid Fingerprint Presentation Attack Detection (HyFiPAD).
Revatthy Krishnamurthy, Myasar Mundher Adnan, Sunil Kumar V, T. Aditya Sai Srinivas, and Srinivas Aluvala
IEEE
In recent years, with the globalization of the world economy, the number of ships engaged in marine transportation has been increasing, and waterways are becoming increasingly overcrowded. This situation can lead to ship collisions, which may cause loss of life and damage to property and nature. Many automatic ship collision avoidance models have been implemented, but most focus only on the ship-to-ship encounter situation. In this framework, Deep Reinforcement Learning (DRL) agents classify the approach of multiple ships using a virtual grid sensor. The framework introduces an automatic collision avoidance algorithm for ships using DRL in continuous action spaces, where DRL is used to avoid collisions while maximizing the safe passing distance between ships. A unique method named inside Obstacle Zone by Target (OZT) is developed to change the learning capability by expanding the OZT. The network is redesigned using Bidirectional Long Short-Term Memory (BI-LSTM) cells, and training in continuous action spaces is carried out to learn a model with a longer safe distance between ships. The proposed collision avoidance model shows that the bow cross range is effective for COLREGs-compliant collision avoidance. The model is also validated on scenarios that include more ships and passes the Imazu problem. The proposed BI-LSTM model achieves 80.43% accuracy, 95.67% precision, 85.89% recall, and an F1 score of 93.54%.
Prantik Kumar Mahata, Mukul Jain Saklecha, Sushruta Mishra, Hrudaya Kumar Tripathy, Biswajit Brahma, Rajeev Sobti, and Srinivas Aluvala
Springer Nature Singapore
Vasim Ahmad, Madhu Arora, Rakesh Kumar, Srinivas Aluvala, Ashish Vishnoi, and Lalit Goyal
IEEE
In recent years, blockchain technology has become more popular in many fields because it can make systems safer and more efficient. In the future, blockchain technology could be very helpful for the financial field. Accounting is an important part of any business, and the success of the business depends on how accurate and reliable it is. However, traditional accounting methods are slow and prone to mistakes that can cost a lot of money. Blockchain technology has the potential to change the accounting field by making it safer, more open, and more efficient. Blockchain has been described as a revolutionary technology that can change many fields, including accounting. Cryptocurrencies like Bitcoin use a decentralized and distributed ledger, a technology that could be used to make financial systems safer and more efficient. This study discusses how blockchain technology could change accounting and what benefits it might bring.
Siddhi Nath Rajan, Preeti Sharma, Deepti Srivastava, Kanchan Koul, Srinivas Aluvala, and Shashikant
IEEE
The effectiveness and security of logistics operations are crucial to the management of supply chains and the success of organizations in today's linked, globalized environment. Transporting goods, especially inside heavy-duty containers, calls for accuracy as well as increased security measures to protect valuable assets while in transit. The Advanced Container Tracking System is a novel response to these urgent needs. This technology provides real-time monitoring and automatic notifications in order to strengthen container security and enhance logistics. Supply chain management has always relied heavily on container monitoring to guarantee the timely and secure delivery of commodities. Traditional approaches frequently relied on manual tracking or periodic updates, which could result in errors and security flaws. In response, the proposed Advanced Container Tracking System makes use of cutting-edge technologies to deliver a reliable, comprehensive tracking solution. The combination of these technologies makes continuous real-time position tracking and data storage possible, guaranteeing that container movements are continually documented and easily accessible for analysis and security needs. Additionally, the system has cloud connectivity, allowing authorized stakeholders to access real-time container data through a specially developed mobile application, whenever and wherever they choose. System performance is enhanced through algorithm optimization, the application of data compression techniques, and consideration of hardware upgrades. The system's capacity to spot and address irregularities in container movement is one of its standout features: when a container stays still for longer than expected and crosses certain thresholds, the system immediately sends alerts to the owner or authorized administrators.
This proactive strategy can reduce the risks associated with unauthorized stops or unexpected delays while adding an additional degree of security. The proposed system acquires real-time latitude and longitude coordinates through continuous GPS updates, temporarily storing this data in a buffer and periodically saving it to the SD card for local storage. This document gives the technical details of the Advanced Container Tracking System, with a thorough explanation of its architecture, components, and operating principles. The design also integrates cloud technology and a user-friendly mobile app for real-time monitoring. In addition, we investigate the advantages and possible uses of the Advanced Container Tracking System for boosting container security, raising logistics efficiency, and lowering operating hazards. Establishing secure cloud connectivity with HTTPS, implementing robust authentication for the mobile application, and applying data encryption both for stored data on the SD card and during transmission contribute to the system's security. The major objective of introducing this container tracking system is to improve supply chain and logistics management: the goal is to provide a complete solution that tackles not only operational efficiency issues but also security issues. The sections that follow analyze this thoroughly, covering installation information, test results, and the potential for use in various logistics scenarios.
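The idle-container alert described above (raise an alert when a container has not moved for longer than a threshold) can be sketched in a few lines. This is a hedged illustration only: the `Fix`/`idle_alert` names, the jitter tolerance, and the sample coordinates are all hypothetical, not taken from the paper's firmware.

```python
from dataclasses import dataclass
from math import hypot

@dataclass
class Fix:
    t: float      # timestamp in seconds
    lat: float    # latitude in degrees
    lon: float    # longitude in degrees

def idle_alert(fixes, max_idle_s=600.0, move_eps=1e-4):
    """Return True if the container has not moved (beyond GPS jitter
    `move_eps`, in degrees) for longer than `max_idle_s` seconds."""
    if not fixes:
        return False
    last_moved = fixes[0]
    for fix in fixes[1:]:
        # Any displacement above the jitter tolerance counts as movement.
        if hypot(fix.lat - last_moved.lat, fix.lon - last_moved.lon) > move_eps:
            last_moved = fix
    return fixes[-1].t - last_moved.t > max_idle_s

# A container that moves for the first minute, then sits still for 15 minutes:
track = [Fix(0, 17.40, 78.50), Fix(60, 17.41, 78.51),
         Fix(500, 17.41, 78.51), Fix(960, 17.41, 78.51)]
```

In a deployed system this check would run against the GPS buffer each time it is flushed to the SD card, pushing a cloud notification whenever it returns true.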
Sagar Saxena, Anil Kumar Dixit, Shweta Pandey, Vikrant Pachouri, Srinivas Aluvala, and Ashima Juyal
IEEE
The Internet has emerged as a platform for propagating hatred. This study examines the relationship between the usage of social networking sites and hate crimes. The findings imply that social media may serve as a channel for the spread of hateful digital content and violent behavior. The study also considers the use of machine learning, a subfield of AI, for studying hate crimes and the elements that lead to them on the Internet. The authors further investigate the causes of hate crimes and the elements that contribute to their development. Hate crimes require special consideration because of their unique significance: crimes motivated by prejudice intimidate both the survivor and the victim's community. It is therefore necessary to investigate the protection and prevention policies of international law that different international bodies are expected to implement. The article's methodology is based on an extensive review of the literature and a study of relevant websites. Importantly, fighting hate crimes requires first monitoring and analyzing their dynamics to properly comprehend their nature. Since the propagation of hateful communication can be an early indicator of violence, especially heinous crimes, curbing hate speech could help minimize its effects. Thus, this paper addresses the targets of hatred on the Internet, presents a framework through which issues may be detected and remedied by emphasizing moral and social responsibility, and outlines potential legislative measures to counteract this growing scourge on the Internet.
R. Dineshkumar, Srinivas Aluvala, Suma Srinath, Zaid Alsalami, and V T Krishnaprasath
IEEE
Semiconductor manufacturing companies are growing rapidly by forming collaborations in design and wafer fabrication. Yield prediction techniques use production data to improve operational efficiency and decrease production costs. A Random Forest classification method is proposed for semiconductor manufacturing Final Test (FT) yield prediction. Data pre-processing methods include a Gaussian Mixture Model, one-hot encoding, and label encoding, while an improved grid search algorithm is used for feature selection. The selected features are then given to the Random Forest (RF) classification model. The RF method was evaluated in experiments on three different products provided by Silicon Laboratories. It handles both numerical and categorical production data well and is evaluated using metrics such as accuracy, precision, recall, and F-measure. The developed method achieved an accuracy of 0.985 when compared with existing methods such as LSTM-AM and SH-DNN.
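The two categorical encodings named in the abstract differ in what they hand to the model: label encoding maps each category to an integer, while one-hot encoding expands it into a binary indicator vector. A hedged stdlib sketch of both (the lot names are illustrative, not from the paper's production data):

```python
def label_encode(values):
    """Map each distinct category to a small integer (label encoding)."""
    mapping = {v: i for i, v in enumerate(sorted(set(values)))}
    return [mapping[v] for v in values], mapping

def one_hot_encode(values):
    """Expand each category into a binary indicator vector (one-hot encoding)."""
    categories = sorted(set(values))
    return [[1 if v == c else 0 for c in categories] for v in values], categories

# Toy categorical production feature, e.g. a fabrication-lot identifier
lots = ["A", "B", "A", "C"]
labels, label_map = label_encode(lots)
onehot, cats = one_hot_encode(lots)
```

Tree ensembles such as Random Forest tolerate label encoding because splits are threshold-based, whereas one-hot encoding avoids imposing a spurious ordering on categories; using both, as the abstract describes, lets each feature take whichever representation suits it.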
Thirusubramanian Ganesan, Ramy Riad Al-Fatlawy, Suma Srinath, Srinivas Aluvala, and R. Lakshmana Kumar
IEEE
The growth and development of 6G networks bring a crucial task to the table: introducing AI to manage network functions and cope with more users in the future. Distributed learning (DL), as a service-oriented technology, is one of the overriding forces motivating the development of 6G communication systems. 6G will support a large set of services and applications, and the network will need to be managed in a smart and dynamic way. While many DL methods are deployed within the resource-constrained vehicular domain, this setting nonetheless poses significant challenges. The joint development of distributed computing and communication resources, such as the edge-cloud continuum and integrated terrestrial/non-terrestrial networks (T/NTN), offers a solution. To use these resources and multiple DL methods in an integrated way, NS becomes the best option. This paper covers the DL methods that apply best in vehicular environments and examines the role of NS alongside these methods, especially in the context of dynamic resource management. The research designs an architecture for DL-as-a-Service (DLaaS) that allows it to be hosted on a distributed network platform and enables the implementation of DL algorithms on a proactive basis. This method, together with dynamic resource allocation strategies, permits efficient control of different services with different requirements. The effectiveness of the model is exhibited step by step in a full vehicular T/NTN case study. The strengths of the DLaaS approach are its flexibility, performance upgrades, and improved network intelligence, which eventually lead to heightened customer satisfaction in the given highway traffic scenarios.
Srinivas Aluvala, Jagadevi N Kalshetty, Saif O. Husain, Jagroop Kaur, and Harpreet Kaur Thind
IEEE
Drugs affect every living organism on earth, and the classification of drugs based on their action is an important element of drug development. This paper presents a machine learning model that predicts the action of a drug using a large drug network. The model integrates the J48 algorithm and the Random Forest algorithm into a hybrid bagging technique combined with a neural network model. The model uses struc2vec features derived from the large network. These features and the dataset are used to train the hybrid bagging technique for drug classification according to drug properties. The study's findings highlight the importance of machine learning models in drug classification, offering a valuable tool for researchers and pharmaceutical companies in the drug development process. The output includes the top classes of drugs and the predicted action, aiding the discovery of new drugs with a high accuracy of 95.83%, which is superior to other learning models.
Muskan Singla, Kanwarpartap Singh Gill, Mukesh Kumar, Ruchira Rawat, and Srinivas Aluvala
IEEE
In the context of financial risk assessment, the ability to predict bankruptcy has considerable significance for ensuring the stability of economic systems. One of the enduring challenges in this domain is imbalanced datasets, where the frequency of cases reflecting bankruptcy is much lower than that of non-bankrupt instances. The objective of this research is to investigate the use of the Synthetic Minority Over-sampling Technique (SMOTE) in combination with the CatBoost classification algorithm, with a focus on balancing the data and enhancing bankruptcy prediction. Combining SMOTE with CatBoost efficiently leverages the distinct qualities and benefits of each methodology. SMOTE addresses the problem of class imbalance by creating synthetic samples for the minority class; this sampling strategy improves the model's capacity to acquire patterns from the under-represented class. The CatBoost algorithm, which pairs categorical feature handling with an efficient boosting methodology, is used to analyse the enlarged dataset and develop a robust prediction model for bankruptcy detection. The main aim of this study is to employ the CatBoost classifier to classify bankruptcy, with precision achieved through SMOTE analysis, a technique specifically designed to address unbalanced data. The study uses the classification report and the confusion matrix as evaluation metrics to assess the anticipated accuracy level of 97 percent, and presents the results with visual tools.
Khushi Mittal, Kanwarpartap Singh Gill, Rahul Chauhan, Hemant Singh Pokhariya, and Srinivas Aluvala
IEEE
Fault identification in traditional approaches often depends on intricate algorithms and considerable data preparation. In contrast, decision tree classifiers provide a simpler but effective method for automated fault identification and classification. The aim of this study is to evaluate how well a Decision Tree classifier performs in detecting and categorizing electrical faults. Electrical systems are vulnerable to a multitude of faults that can compromise the dependability and security of the whole infrastructure. The study utilises a dataset of electrical signals obtained from various failure situations, such as short circuits, overloads, and ground faults. This data is used to train the Decision Tree classifier, which constructs a predictive model for recognising and categorising various forms of electrical failure. The research assesses the performance of the model by analysing important metrics such as accuracy, precision, recall, and F1 score. The results indicate that the Decision Tree classifier is capable of efficiently recognising and classifying electrical faults, showcasing its adaptability across different fault scenarios. These findings contribute to the understanding of how decision tree classifiers may be used for fault detection in electrical systems, and emphasise their efficacy as a means of improving the dependability and robustness of power distribution networks. The findings have significant implications for enhancing maintenance techniques and advancing the development of intelligent systems that provide real-time fault monitoring in electrical infrastructure.
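The four metrics named in the abstract all derive from the binary confusion matrix. As a hedged, self-contained illustration (the toy label lists are invented for the example, not drawn from the paper's dataset):

```python
def binary_metrics(y_true, y_pred):
    """Compute accuracy, precision, recall, and F1 for a binary
    fault (1) / no-fault (0) classification from raw label lists."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return accuracy, precision, recall, f1

# Toy ground truth and predictions: 1 = fault, 0 = normal operation
y_true = [1, 1, 1, 0, 0, 0, 0, 1]
y_pred = [1, 1, 0, 0, 0, 1, 0, 1]
acc, prec, rec, f1 = binary_metrics(y_true, y_pred)
```

For fault classification, precision penalises false alarms while recall penalises missed faults; reporting both, as the study does, shows whether a classifier trades one failure mode for the other.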