@srmuniv.ac.in
SRM Institute of Science and Technology
Computer Science, Software
Javvaji Venkatarao, V. Deeban Chakravarthy, Singaraju Ramya, V. Siva Prasad, Srikanth Cherukuvada, and Annaram Soujanya
IEEE
While malware has posed a danger to businesses for years, advances in malware detection have lagged. Malware can damage a system by starting unneeded services, increasing the system's workload and preventing it from operating smoothly. Malware detection follows one of two approaches: the traditional signature-based approach or the more modern behavior-based approach. When malware is activated on a system, it performs specific actions, such as launching malicious OS services or downloading malicious files from the web, that characterize its behavior. The described technique detects malicious software based on these actions. The model proposed in this paper combines a Support Vector Machine with Principal Component Analysis.
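The combination described in the abstract can be sketched as a standard pipeline: behaviour features are compressed with PCA and classified with an SVM. This is a minimal illustration with synthetic data; the feature set, dimensionality, and kernel are assumptions, not the paper's actual configuration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Toy behaviour vectors: e.g. counts of API calls, services started, downloads.
X_benign = rng.normal(0.0, 1.0, size=(100, 20))
X_malware = rng.normal(2.0, 1.0, size=(100, 20))
X = np.vstack([X_benign, X_malware])
y = np.array([0] * 100 + [1] * 100)

# PCA compresses the behaviour features before the SVM draws its boundary.
model = make_pipeline(StandardScaler(), PCA(n_components=5), SVC(kernel="rbf"))
model.fit(X, y)
print(model.score(X, y))
```

The pipeline form means the same PCA projection fitted on training data is reused at prediction time, which is the point of pairing the two methods.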
V. Deeban Chakravarthy, K L. N. C. Prakash, Kadiyala Ramana, and Thippa Reddy Gadekallu
Springer Nature Singapore
C. Jothi Kumar, V. Deeban Chakravarthy, Kadiyala Ramana, Praveen Kumar Reddy Maddikunta, Qin Xin, and G. Surya Narayana
Springer Science and Business Media LLC
V. Deeban Chakravarthy and Balakrishnan Amutha
Wiley
In today's modern era, internet usage has grown tremendously. In data center networks (DCNs), traffic has risen steadily over the past few years, so greater network traffic requires services such as the Domain Name Service (DNS) to manage it. A single server cannot handle all incoming client requests because of the huge volume of traffic; load balancing resolves this issue. The main purpose of load balancing is to forward incoming client requests and distribute traffic across multiple servers using a customized algorithm deployed in the load balancer. Traditional load balancers are costly and inflexible hardware appliances. A substitute for this hardware is a software-defined network load balancer. Software-defined networking (SDN) load balancers let programmers design and build their own load-balancing strategies, which makes them flexible and programmable. This research proposes a novel algorithm that performs load balancing by calculating in advance the capacity of each switch along the path to which packets are routed.
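The idea of pre-computing switch capacity along candidate paths can be sketched as follows. This is a simplified model under assumed details (the paper's exact metric is not given here): route a new flow over the path whose bottleneck switch has the most spare capacity.

```python
def bottleneck_capacity(path, capacity, load):
    """Spare capacity of the most loaded switch on the path."""
    return min(capacity[s] - load[s] for s in path)

def pick_path(paths, capacity, load):
    """Choose the path whose bottleneck has the most spare capacity."""
    return max(paths, key=lambda p: bottleneck_capacity(p, capacity, load))

# Illustrative switch capacities and current loads (hypothetical values).
capacity = {"s1": 100, "s2": 100, "s3": 100, "s4": 100}
load = {"s1": 80, "s2": 10, "s3": 20, "s4": 30}
paths = [["s1", "s2"], ["s3", "s4"]]
print(pick_path(paths, capacity, load))  # → ['s3', 's4'], avoiding busy s1
```

Because the capacities are computed before routing, the controller can install the flow rule once instead of reacting to congestion after it occurs.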
Tanmay Agrawal and V. Deeban Chakravarthy
IEEE
Bullying has been prevalent since the beginning of time; only the ways of bullying have changed over the years, from physical bullying to cyberbullying. According to Willard (2004), there are eight types of cyberbullying, such as harassment, denigration, and impersonation. It has been around two decades since social media sites came into the picture, but there have not been many effective measures to curb social bullying, and it has become one of the alarming issues of recent times. Our paper presents an analytical review of cyberbullying detection approaches and assesses methods to recognize hate speech on social media. We apply traditional supervised classification methods as well as some novel ensemble machine learning techniques using a manually annotated open-source dataset. This paper provides a comparative study of various supervised algorithms, both standard and ensemble. The evaluation, based on accuracy scores, shows that ensemble supervised methods have the potential to perform better than traditional supervised methods.
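The standard-versus-ensemble comparison described above can be sketched with scikit-learn. The tiny labelled corpus below is purely illustrative (the paper uses a manually annotated open-source dataset, not shown here), and the two models stand in for the broader families compared.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "you are worthless and everyone hates you",
    "nobody likes you, just disappear",
    "you are so dumb it hurts",
    "great game last night, well played",
    "happy birthday, have a wonderful day",
    "thanks for the help with my homework",
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = bullying, 0 = neutral

# One standard supervised model and one ensemble model, same features.
single = make_pipeline(TfidfVectorizer(), LogisticRegression())
ensemble = make_pipeline(TfidfVectorizer(), RandomForestClassifier(random_state=0))
for model in (single, ensemble):
    model.fit(texts, labels)
    print(model.predict(["you are dumb and worthless"]))
```

In practice both families would be scored on a held-out split, and the accuracy comparison drives the paper's conclusion.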
L. N. C. Prakash K, M. Vimaladevi, V. Deeban Chakravarthy, G. Surya Narayana, and Asadi Srinivasulu
Hindawi Limited
In data analysis, objects are mostly characterized by a set of characteristics known as attributes, each holding a single value per object. In reality, however, some attributes can hold more than one value: a person may have multiple professions, hobbies, communication methods, and skills, in addition to several shipping addresses. Attributes of this kind are referred to as multivalued attributes and are typically treated as null attributes when data is processed with machine learning procedures. In this article, a new similarity mechanism defined over multivalued characteristics is introduced, which can be used for clustering. We propose a model to analyse each factor's relative prominence for different data-collection challenges, enabling the selection of the most suitable multivalued elements. The suggested methodology is a clustering technique that employs fuzzy c-means and maintains a new, more effective membership component by implementing the proposed similarity metric. Fuzzy c-means clustering of multivalued variables is an efficient grouping criterion, and any methodology for grouping related data appears viable with it. The results show that our approach not only improves on previous segmentation methods for the multivalued cluster-based architecture but also helps improve on the standard similarity metrics.
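One way to make the idea concrete is below: a set-based similarity over multivalued attributes feeding fuzzy-c-means-style memberships. The Jaccard overlap and the weights here are assumptions for illustration; the paper defines its own similarity metric.

```python
def jaccard(a, b):
    """Set overlap for one multivalued attribute."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

def similarity(obj, proto):
    """Mean Jaccard similarity across all multivalued attributes."""
    return sum(jaccard(obj[k], proto[k]) for k in obj) / len(obj)

def memberships(obj, prototypes, m=2.0):
    """Fuzzy-c-means-style memberships from distances d = 1 - similarity."""
    d = [max(1.0 - similarity(obj, p), 1e-9) for p in prototypes]
    return [1.0 / sum((d[i] / d[j]) ** (2 / (m - 1)) for j in range(len(d)))
            for i in range(len(d))]

person = {"professions": {"teacher", "writer"}, "skills": {"python"}}
protos = [
    {"professions": {"teacher"}, "skills": {"python", "sql"}},
    {"professions": {"driver"}, "skills": {"welding"}},
]
u = memberships(person, protos)
print(u)  # higher membership in the first (more similar) prototype
```

The key point is that the multivalued attributes contribute to the distance directly instead of being nulled out, so the soft memberships reflect partial overlaps.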
V. Deeban Chakravarthy, Dara Sujith Chandra, and Seemakurthi Sri Sathya Pavan
Springer Nature Singapore
K. Prasanna, Kadiyala Ramana, Gaurav Dhiman, Sandeep Kautish, and V. Deeban Chakravarthy
Hindawi Limited
Internet of Things (IoT) is a phenomenon involving connecting things or objects with sensors. The IoT market is growing rapidly, and there are strong incentives for companies to follow the trend of IoT growth and development. However, the percentage of IoT initiatives considered successful seems low. The complexity of carrying out an IoT project lies in the need to fit all the pieces of the puzzle together: assets, sensors, communications, technology, coverage, and geographical locations, with precision in the measures and regulations. All these requirements determine the economic viability of the business and its benefit. This study therefore examines how project methodology can support the development of the concept and ensure the business value of IoT initiatives. The project methodology developed in this study is called PoC Design. A case study investigating defects in street lighting was carried out and evaluated. The evaluation of the methodology highlighted the importance of defining problems and solutions based on business value, calculating the potential of an IoT initiative, determining the continuation of the project, involving stakeholders at an early stage, and creating a PoC to validate the concept with stakeholders.
C. Jothikumar, Kadiyala Ramana, V. Deeban Chakravarthy, Saurabh Singh, and In-Ho Ra
Hindawi Limited
The Internet of Things grew rapidly, and many services, applications, sensor-embedded electronic devices, and related protocols were created and are still being developed. The Internet of Things (IoT) allows physically existing things to see, hear, think, and perform significant tasks by letting them interact with one another and exchange valuable knowledge while making decisions and carrying out their vital tasks. In fifth-generation (5G) communications, the IoT is aided greatly by wireless sensor networks (WSNs), which serve as a permanent underlying layer for it. A wireless sensor network comprises a collection of sensor nodes that monitor and transmit data to a destination known as the sink. The sink (or base station) is the endpoint of data transmission in every round. The major concerns of IoT-based WSNs are improving network lifetime and energy efficiency. In the proposed system, Optimal Cluster-Based Routing (Optimal-CBR), energy efficiency and network lifetime are improved using a hierarchical routing approach for IoT applications in the 5G environment and beyond. The Optimal-CBR protocol uses the k-means algorithm for clustering the nodes and a multihop approach for chain routing. The clustering phase is invoked until two-thirds of the nodes are dead, after which the chaining phase is invoked for the rest of the data transmission. During the cluster phase, the nodes are clustered using the basic k-means algorithm and the highest-energy node nearest the centroid is selected as the cluster head (CH). The CH collects packets from its members and forwards them to the base station (BS). During the chaining phase, since two-thirds of the nodes are dead and the residual energy is insufficient for clustering, the remaining nodes perform multihop routing, forming a chain until the data are transmitted to the BS. This improves the energy efficiency and the network lifespan, as shown in both the theoretical and simulation analyses.
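The cluster-head rule from the clustering phase can be sketched as follows: nodes are grouped with k-means, and within each cluster the highest-energy node among those closest to the centroid becomes the head. Node positions, energies, and the "closest third" cutoff are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
positions = rng.uniform(0, 100, size=(30, 2))  # sensor node coordinates
energy = rng.uniform(0.2, 1.0, size=30)        # residual energy per node

km = KMeans(n_clusters=3, n_init=10, random_state=1).fit(positions)

heads = {}
for c in range(3):
    members = np.where(km.labels_ == c)[0]
    dist = np.linalg.norm(positions[members] - km.cluster_centers_[c], axis=1)
    # Among the nodes nearest the centroid, pick the one with the
    # highest residual energy as the cluster head.
    nearest = members[np.argsort(dist)[: max(1, len(members) // 3)]]
    heads[c] = int(nearest[np.argmax(energy[nearest])])
print(heads)
```

Re-running this selection each round lets heads rotate as energies drain, which is what sustains the network lifetime until the chaining phase takes over.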
V. Deeban Chakravarthy and B. Amutha
Elsevier BV
V. Deeban Chakravarthy and B. Amutha
Institute of Advanced Engineering and Science
The increase in the number of internet users and the number of applications available in the cloud has made Data Center Networking (DCN) the backbone of computing. These data centers require high operational costs and often experience link failures and congestion. Hence, the solution is to use a Software Defined Networking (SDN) based load balancer, which improves network efficiency by distributing traffic across multiple paths. Traditional load balancers are very expensive and inflexible. SDN load balancers do not require costly hardware and can be programmed, which makes it easier to implement user-defined algorithms and load-balancing strategies. In this paper, we propose an efficient load-balancing technique that considers different parameters to maintain the load efficiently, using OpenFlow switches connected to an ONOS controller.
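A minimal sketch of the multipath distribution idea (flow demands and path names are hypothetical, and this is not the paper's exact parameter set): each new flow is assigned to the currently least-loaded path, mimicking the flow rules a controller such as ONOS could install on OpenFlow switches.

```python
def assign_flows(flows, paths):
    """Greedily place each flow on the currently least-loaded path."""
    load = {p: 0 for p in paths}
    placement = {}
    for flow, demand in flows:
        best = min(load, key=load.get)  # least-loaded path right now
        placement[flow] = best
        load[best] += demand
    return placement, load

flows = [("f1", 30), ("f2", 10), ("f3", 20), ("f4", 10)]
placement, load = assign_flows(flows, ("pathA", "pathB"))
print(placement, load)
```

Because the decision uses current load rather than a fixed round-robin order, paths with heavy flows naturally receive fewer subsequent assignments.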
V Deeban Chakravarthy and V Nagarajan
Indian Society for Education and Environment
Objectives: The Data Center Network (DCN) is a collection of diverse classes of resources providing storage, processing, and network functionalities. The technology has evolved to such an extent that the DCN is capable of handling the huge quantum of data used by people worldwide throughout the day. The DCN also produces enormous heat, which requires additional cooling equipment to lessen the radiation. The power consumed by DCNs is more than 1% of the total power consumption worldwide. This survey covers the objectives and advantages of various methods proposed to optimize energy utilization in the DCN. Methods/Statistical Analysis: Several techniques focus on two main factors: 1. The topology of the DCN, built using a smaller number of high-capacity routers and servers. 2. Optimized selection of the routers available in the topology to handle the traffic. There are technologies that use resources based on the service and traffic load; resources that are unemployed are put into sleep mode. Findings: In this study, we present a survey of various techniques and methodologies used to reduce the amount of power consumed in data centers. Application/Improvement: This survey provides broad knowledge of various methods to optimize power consumption in the DCN. It can be referred to by those who wish to explore and experiment with DCN power optimization.
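The sleep-mode idea surveyed above can be sketched as a simple consolidation policy: keep only as many servers awake as the current traffic load requires and put the rest to sleep. The capacity, target utilization, and server names are illustrative assumptions, not figures from any surveyed method.

```python
import math

def servers_needed(total_load, capacity_per_server, target_util=0.8):
    """Smallest server count whose combined headroom covers the load."""
    return math.ceil(total_load / (capacity_per_server * target_util))

def plan(servers, total_load, capacity_per_server=100):
    k = servers_needed(total_load, capacity_per_server)
    return {"awake": servers[:k], "asleep": servers[k:]}

result = plan(["s1", "s2", "s3", "s4", "s5"], total_load=190)
print(result)  # → 3 servers stay awake, 2 sleep
```

The target utilization below 100% leaves headroom so that a traffic spike does not force an immediate, slow wake-up of a sleeping server.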
Archana G. K. and V. Deeban Chakravarthy
IEEE
Big data is the technology designed to handle both structured and unstructured data arriving at high intensity. Hadoop and MapReduce are two important aspects of big data. Task assignment in MapReduce is done through scheduling algorithms, which assign tasks to a selected data node. A healthy and available data node to perform the Map and Reduce tasks is selected based on its availability and the location of the data to be processed. Creating an algorithm for node selection is essential to optimize and improve the performance of MapReduce. The proposed Health, Priority, Capacity and Availability based Node Selection algorithm (HPCA-based Node Selection Algorithm) creates a queue of the nodes available for accepting new tasks through scheduling algorithms. This algorithm optimizes the node selection task and provides better performance. It also introduces a failover mechanism to handle tasks that fail during execution.
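An HPCA-style ranking can be sketched as below. The weights, thresholds, and node attributes are assumptions for illustration; the paper's actual scoring is not reproduced here. Each node gets a score from health, priority, capacity, and availability, and healthy available nodes are queued best-first for new MapReduce tasks.

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    health: float     # 0..1, e.g. from heartbeat checks
    priority: float   # 0..1, e.g. data-locality preference
    capacity: float   # 0..1, free task slots
    available: bool

def hpca_queue(nodes, w=(0.3, 0.2, 0.3, 0.2)):
    """Queue of eligible nodes, best HPCA score first."""
    def score(n):
        return (w[0] * n.health + w[1] * n.priority
                + w[2] * n.capacity + w[3] * float(n.available))
    eligible = [n for n in nodes if n.available and n.health > 0.5]
    return sorted(eligible, key=score, reverse=True)

nodes = [
    Node("dn1", 0.9, 0.8, 0.7, True),
    Node("dn2", 0.4, 0.9, 0.9, True),   # unhealthy: filtered out
    Node("dn3", 0.8, 0.5, 0.9, True),
    Node("dn4", 0.9, 0.9, 0.9, False),  # unavailable: filtered out
]
queue = hpca_queue(nodes)
print([n.name for n in queue])  # → ['dn1', 'dn3']
```

A failover mechanism, as the abstract mentions, would re-enqueue a failed task and hand it to the next node in this queue.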