Collegiate Associate Professor, Bradley Department of Electrical and Computer Engineering
Virginia Tech
Dr. Nagender Aneja is a Collegiate Associate Professor at the Bradley Department of Electrical and Computer Engineering, Virginia Tech, Blacksburg, Virginia, USA. He previously worked as a research scholar in the Department of Computer Science at Purdue University in West Lafayette, Indiana. He also has five years of industry experience as an Associate IP Lead on the Microsoft Patent Research Services Team at CPA Global India, where he drafted responses to office actions for US patent applications and developed innovative NLP tools for patent analysis.
Ph.D. Computer Engineering
M.E. Computer Technology and Applications
Computer Science, Artificial Intelligence, Computer Vision and Pattern Recognition, Computer Science Applications
Zizheng Liu, Bharat K. Bhargava, and Nagender Aneja
Institute of Electrical and Electronics Engineers (IEEE)
Manas Goel, V. S. Bakkialakshmi, and Nagender Aneja
Springer Nature Singapore
Azryl Elmy Sarih, Nagender Aneja, and Ong Wee Hong
Elsevier BV
Nishtha Jatana, Mansehej Singh, Charu Gupta, Geetika Dhand, Shaily Malik, Pankaj Dadheech, Nagender Aneja, and Sandhya Aneja
Springer Science and Business Media LLC
Sandhya Aneja and Nagender Aneja
IEEE
Browser fingerprinting is the identification of a browser through the network traffic captured during communication between the browser and the server. This can be done using the HTTP protocol, browser extensions, and other methods. This paper discusses browser fingerprinting using HTTPS over the TLS 1.3 protocol. The study observed that different browsers use a different number of messages to communicate with the server, and that the lengths of the messages also vary. To conduct the study, a network was set up using the UTM hypervisor, with one virtual machine acting as the server and separate virtual machines each running a different browser. The communication was captured, and a 30%-35% dissimilarity in the behavior of different browsers was found.
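As a hedged illustration of the kind of comparison described here, the sketch below measures how far two browsers' TLS record-length sequences diverge; the sequences and the dissimilarity measure are illustrative assumptions, not values or methods taken from the paper.

```python
# Minimal sketch: compare the TLS handshake "shapes" of two browsers.
# The record-length sequences below are illustrative placeholders; in
# practice they would be extracted from a packet capture of the
# browser-server TLS 1.3 exchange.

def dissimilarity(a: list[int], b: list[int]) -> float:
    """Fraction of positions where the two message-length sequences differ,
    counting missing messages in the shorter sequence as differences."""
    longest = max(len(a), len(b))
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return 1.0 - matches / longest

browser_a = [512, 1460, 1460, 230, 51]        # record lengths from browser A
browser_b = [517, 1460, 1460, 1460, 230, 51]  # record lengths from browser B

print(f"dissimilarity: {dissimilarity(browser_a, browser_b):.2f}")
```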
Hassaan Haider Syed, Nagender Aneja, Owais Ahmed Malik, and Mohammed Abdul Rahman Minhaj
IEEE
Traditionally, medical images have been analyzed manually. However, manual analysis can be subjective, depending on the judgment of radiologists and pathologists, which can result in inconsistency and inefficiency. Computer vision and deep learning models have revolutionized the study and diagnosis of medical images such as X-rays, CT scans, and MRIs. Despite their effectiveness, these models often act as black boxes and are difficult to interpret. In this study, we propose an ensemble of two popular deep learning models, ResNet101 and EfficientNetB7, which not only provides higher classification accuracy for CT scans but also offers superior interpretability. Our ensemble model outperforms the original models in COVID-19 CT scan classification by more than 2%. We compared the performance of three eXplainable AI (XAI) visualization methods, GradCAM, Guided-GradCAM, and LIME, against ground truth data labeled by radiologists. The proposed ensemble method provides better interpretability for the CT scan images compared to the individual models.
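A minimal sketch of such a two-backbone ensemble, averaging the softmax outputs of torchvision's ResNet101 and EfficientNetB7; the paper's exact fusion strategy is not specified in the abstract, so the probability averaging and the binary head are assumptions.

```python
# Sketch of a two-model ensemble for binary CT-scan classification,
# averaging the softmax outputs of ResNet101 and EfficientNetB7.
import torch
import torch.nn as nn
from torchvision import models

class CTEnsemble(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.resnet = models.resnet101(weights="DEFAULT")
        self.resnet.fc = nn.Linear(self.resnet.fc.in_features, num_classes)
        self.effnet = models.efficientnet_b7(weights="DEFAULT")
        self.effnet.classifier[1] = nn.Linear(
            self.effnet.classifier[1].in_features, num_classes)

    def forward(self, x):
        # Average the two models' class probabilities.
        p1 = torch.softmax(self.resnet(x), dim=1)
        p2 = torch.softmax(self.effnet(x), dim=1)
        return (p1 + p2) / 2

model = CTEnsemble().eval()
with torch.no_grad():
    probs = model(torch.randn(1, 3, 224, 224))  # dummy CT slice
print(probs)
```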
Pawan Whig, Pavika Sharma, Nagender Aneja, Ahmed A. Elngar, and Nuno Silva
CRC Press
Nishad Mathur, Ashutosh Sharma, Shubh Kaushik, V S Bakkialakshmi, Vishishta Kavadiya, and Nagender Aneja
IEEE
AI has transformed recruitment by overcoming logistical hurdles and interviewer bias, challenges that the COVID-19 pandemic compounded. Advanced AI interview systems use natural language processing and computer vision algorithms to assess candidates' verbal and non-verbal cues from audio and video. These systems offer uniform standards, high integrity, security, and scalability, and provide speech, sentiment, face, and emotion detection. Beyond recruitment applications, the same techniques could extend to online learning and telemedicine. These technologies improve feedback quality and transparency for safe, efficient, and balanced global change.
Chour Singh Rajpoot, Gajanand Sharma, Praveen Gupta, Pankaj Dadheech, Umar Yahya, and Nagender Aneja
Informa UK Limited
Sheba Rani David, Rajan Rajabalaya, and Nagender Aneja
Elsevier BV
Nagender Aneja, Sandhya Aneja, and Bharat Bhargava
Hindawi Limited
WiFi and private 5G networks, commonly referred to as P5G, give Internet of Things (IoT) devices the ability to communicate at high speeds, with low latency, and with high capacity. Will they coexist and share the burden of delivering connectivity to devices at home, on the road, in the workplace, and at a park or a stadium? Or will one replace the other to manage the growth in endpoints and traffic in enterprise, campus, and manufacturing environments? In this research, we describe IoT device testbeds for collecting network traffic in a local area network and in cyberspace, including beyond-5G/6G network traffic traces at different layers. We also describe research problems and challenges, such as traffic classification and traffic prediction from the traffic traces of devices. An AI-enabled hierarchical learning architecture for these problems, applying machine learning models to sources such as network packets, frames, and signals from the traffic traces, is also presented.
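To ground the traffic-classification problem, here is a hedged sketch that trains a random forest on flow-level features; the feature set and the synthetic two-class data are assumptions for illustration and are not drawn from the paper's testbeds.

```python
# Sketch: flow-level traffic classification with a random forest.
# Features (packet-size and inter-arrival-time statistics) and the
# synthetic two-class data are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
# Columns: [mean packet size, std packet size, mean inter-arrival time]
X = np.vstack([
    rng.normal([500, 120, 0.02], [80, 30, 0.01], size=(200, 3)),  # device class 0
    rng.normal([900, 60, 0.10], [80, 30, 0.01], size=(200, 3)),   # device class 1
])
y = np.array([0] * 200 + [1] * 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```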
Priyanka Sharma, Pankaj Dadheech, Nagender Aneja, and Sandhya Aneja
Institute of Electrical and Electronics Engineers (IEEE)
Agriculture contributes significantly to the economy of India, and a large share of the population depends on it for survival. The main obstacle to food security is population growth and the resulting rise in demand for food. Farmers must produce more on the same land to boost the supply, and technology can assist them through crop yield prediction. This paper's primary goal is to predict crop yield using the variables of rainfall, crop, meteorological conditions, area, production, and yield, factors whose variability has posed a serious threat to the long-term viability of agriculture. Crop yield prediction is a decision-support tool, built on machine learning and deep learning, that can be used to decide which crops to produce and what to do during the crop's growing season. Regardless of the distracting environment, machine learning and deep learning algorithms are utilized in crop selection to reduce losses in agricultural yield output. To estimate the agricultural yield, machine learning techniques (decision tree, random forest, and XGBoost regression) and deep learning techniques (convolutional neural network and long short-term memory network) have been used. Accuracy, root mean square error, mean square error, mean absolute error, standard deviation, and losses are compared. The other machine learning and deep learning methods fall short of the random forest and the convolutional neural network: the random forest achieves a maximum accuracy of 98.96%, a mean absolute error of 1.97, a root mean square error of 2.45, and a standard deviation of 1.23, while the convolutional neural network achieves a minimum loss of 0.00060. Consequently, a model is developed that predicts the yield quite well compared to the other algorithms. The findings are then analyzed using the root mean square error metric to better understand how the model's errors compare to those of the other methods.
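A hedged sketch of the random-forest part of this pipeline on synthetic tabular data, reporting the MAE and RMSE metrics the paper compares; the feature columns and the toy yield relation are illustrative assumptions.

```python
# Sketch: random forest regression for crop yield, reporting MAE and
# RMSE. Data is synthetic; real features would include rainfall, crop,
# meteorological conditions, and area.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, mean_squared_error

rng = np.random.default_rng(42)
rainfall = rng.uniform(200, 1500, 1000)                            # mm/season
area = rng.uniform(1, 50, 1000)                                    # hectares
yield_t = 0.02 * rainfall + 1.5 * area + rng.normal(0, 5, 1000)    # toy relation

X = np.column_stack([rainfall, area])
X_tr, X_te, y_tr, y_te = train_test_split(X, yield_t, test_size=0.2, random_state=42)
model = RandomForestRegressor(n_estimators=200, random_state=42).fit(X_tr, y_tr)
pred = model.predict(X_te)
print("MAE :", mean_absolute_error(y_te, pred))
print("RMSE:", mean_squared_error(y_te, pred) ** 0.5)
```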
Wang Xin Hui, Nagender Aneja, Sandhya Aneja, and Abdul Ghani Naim
Elsevier BV
Ajay Kumar Bansal, Virendra Swaroop Sangtani, Pankaj Dadheech, Nagender Aneja, and Umar Yahya
Informa UK Limited
Shafkat Islam, Nagender Aneja, Ruy De Oliveira, Sandhya Aneja, Bharat Bhargava, Jason Hamlet, and Chris Jenkins
IEEE
Over the years, space systems have evolved considerably to provide high-quality services for demanding applications such as navigation, communication, and weather forecasting. Modern space systems rely on extremely fast commercially available off-the-shelf (COTS) processing units, with built-in GPU, DSP, and FPGA, in lightweight, energy-efficient hardware. Since such devices are not necessarily designed with security features as a priority, there must be an adaptive controller to protect these mission-critical space systems from potential malicious attacks, such as memory leaks, packet drops, algorithmic trojans, and so on. These attacks can lead the system to substantial inefficiency or complete failure. Considering the hardware diversity in current space systems, we propose a framework that exploits both diversity and redundancy, not only of hardware but also of software, to make the overall system fault-tolerant. Our approach deploys mechanisms for monitoring and for orchestrating actions of redundancy, diversity, and randomization to render the system resilient and unpredictably dynamic, and to preserve efficiency as much as possible during abnormalities. In addition, we use rule-based and adaptive engines that keep track of the various computing units to learn the best strategies to apply when the system is under attack. The robustness of our approach lies in the fact that it makes the system highly unpredictable to potential attackers and tolerates attacks to some extent, which is crucial for any mission-critical application.
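The redundancy mechanism the framework orchestrates ultimately rests on comparing outputs from diverse units; the sketch below shows a plain majority voter over hypothetical GPU/DSP/FPGA results. It is an illustrative assumption, not the paper's controller.

```python
# Sketch: majority voting over redundant, diverse compute units, the
# basic fault-tolerance primitive such a framework builds on. Unit
# names and outputs are hypothetical.
from collections import Counter

def majority_vote(outputs: dict[str, int]) -> tuple[int, list[str]]:
    """Return the majority result and the units that disagreed with it."""
    winner, _ = Counter(outputs.values()).most_common(1)[0]
    suspects = [unit for unit, out in outputs.items() if out != winner]
    return winner, suspects

# The same task executed on three diverse units (GPU, DSP, FPGA paths).
result, suspects = majority_vote({"gpu": 42, "dsp": 42, "fpga": 17})
print(result, suspects)  # 42 ['fpga'] -> flag fpga path for the adaptive engine
```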
Kavita Sheoran, Arpit Bajgoti, Rishik Gupta, Nishtha Jatana, Geetika Dhand, Charu Gupta, Pankaj Dadheech, Umar Yahya, and Nagender Aneja
Institute of Electrical and Electronics Engineers (IEEE)
Sandhya Aneja, Nagender Aneja, and Ponnurangam Kumaraguru
IAES International Journal of Artificial Intelligence, Institute of Advanced Engineering and Science
Media news shapes a large part of public opinion and, therefore, must not be fake. News on websites, blogs, and social media must be analyzed before being published. In this paper, we present linguistic characteristics of media news items that differentiate fake news from real news using machine learning algorithms. Neural fake news generation, headlines created by machines, and semantic incongruities between text and machine-generated image captions are other types of fake news problems. These problems rely on neural networks that mainly capture distributional features rather than evidence. We propose applying the correlation between the feature set and the class, and the correlation among the features, to compute a correlation attribute evaluation metric, and a covariance metric to compute the variance of attributes over the news items. The features unique, negative, positive, and cardinal numbers, which score highly on these metrics, are observed to provide a high area under the curve (AUC) and F1-score.
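A hedged sketch of the feature-class correlation idea: rank candidate linguistic features by the absolute correlation between each feature and the fake/real label. The synthetic feature values are assumptions; only the feature names echo the abstract.

```python
# Sketch of correlation attribute evaluation: rank features by the
# absolute correlation between each feature and the fake/real label.
# Feature values are synthetic; real ones would be counts of unique,
# negative, positive, and cardinal-number words per news item.
import numpy as np

rng = np.random.default_rng(1)
labels = rng.integers(0, 2, 500)            # 0 = real, 1 = fake
features = {
    "unique_words": labels * 12 + rng.normal(40, 8, 500),
    "negative_words": labels * 3 + rng.normal(5, 2, 500),
    "cardinal_numbers": rng.normal(4, 2, 500),  # uncorrelated by construction
}

scores = {name: abs(np.corrcoef(vals, labels)[0, 1])
          for name, vals in features.items()}
for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name:18s} {score:.3f}")
```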
Sandhya Aneja, Nagender Aneja, Pg Emeroylariffion Abas, and Abdul Ghani Naim
Institute of Advanced Engineering and Science
Transfer learning allows us to exploit knowledge gained from one task to assist in solving another, related task. In modern computer vision research, the question is which architecture performs better on a given dataset. In this paper, we compare the performance of 14 pre-trained ImageNet models on the histopathologic cancer detection dataset, where each model has been configured as a naive model, a feature extractor model, or a fine-tuned model. Densenet161 is shown to have high precision, whilst Resnet101 has high recall. A high-precision model is suitable when the cost of a follow-up examination is high, whilst a low-precision but high-recall/sensitivity model can be used when the cost of a follow-up examination is low. Results also show that transfer learning helps a model converge faster.
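The three configurations compared in the paper can be sketched for a single torchvision backbone as below; showing ResNet101 here is an arbitrary choice among the 14 models, and the two-class head is an assumption.

```python
# Sketch of the three transfer-learning configurations: naive (train
# from scratch), feature extractor (frozen backbone), and fine-tuned
# (all weights trainable), shown for one torchvision backbone.
import torch.nn as nn
from torchvision import models

def build(mode: str, num_classes: int = 2) -> nn.Module:
    pretrained = mode != "naive"            # naive = random initialization
    model = models.resnet101(weights="DEFAULT" if pretrained else None)
    if mode == "feature_extractor":         # freeze the backbone
        for p in model.parameters():
            p.requires_grad = False
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # new trainable head
    return model

for mode in ("naive", "feature_extractor", "fine_tuned"):
    m = build(mode)
    trainable = sum(p.numel() for p in m.parameters() if p.requires_grad)
    print(f"{mode:18s} trainable params: {trainable:,}")
```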
Sandhya Aneja, Melanie Ang Xuan En, and Nagender Aneja
IEEE
Artificial Intelligence (AI) development has encouraged many new research areas, including the AI-enabled Internet of Things (IoT) network. AI analytics and intelligent paradigms greatly improve learning efficiency and accuracy, and applying these learning paradigms to network scenarios provides technical advantages for new networking solutions. In this paper, we propose an improved approach to IoT security from a data perspective. The network traffic of IoT devices can be analyzed using AI techniques. The Adversary Learning (AdLIoTLog) model is proposed using a Recurrent Neural Network (RNN) with an attention mechanism on sequences of network events in the network traffic. We define network events as sequences of the time-series packets of protocols captured in the log, and we consider different packet types (TCP, UDP, and HTTP) in the network log to make the algorithm robust. Distributed IoT devices can collaborate to cripple our world, which is extending to the Internet of Intelligence. The time-series packets are converted into structured data by removing noise and adding timestamps. The resulting dataset is used to train the RNN, which can detect the node pairs collaborating with each other. We used the BLEU score to evaluate model performance. Our results show that the predictive performance of the AdLIoTLog model trained by our method degrades by 3-4% in the presence of an attack, compared to the scenario when the network is not under attack. AdLIoTLog can detect adversaries because, when adversaries are present, the model gets duped by the collaborative events and therefore predicts the next event with a biased event rather than a benign event. We conclude that AI can provision ubiquitous learning for the new generation of the Internet of Things.
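A minimal, hedged sketch of next-event prediction over protocol-event sequences with an LSTM; the paper's AdLIoTLog model uses an RNN with an attention mechanism, which this toy version omits, and the event vocabulary and data are illustrative.

```python
# Minimal next-event predictor over protocol-event sequences. The
# model is untrained here, so the prediction is arbitrary; it only
# shows the sequence-in, next-event-out structure.
import torch
import torch.nn as nn

EVENTS = {"TCP": 0, "UDP": 1, "HTTP": 2}

class NextEvent(nn.Module):
    def __init__(self, n_events=3, dim=16):
        super().__init__()
        self.embed = nn.Embedding(n_events, dim)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, n_events)

    def forward(self, x):
        out, _ = self.lstm(self.embed(x))
        return self.head(out[:, -1])      # predict the event after the sequence

model = NextEvent()
seq = torch.tensor([[EVENTS["TCP"], EVENTS["HTTP"], EVENTS["TCP"]]])
logits = model(seq)
print("predicted next event:", list(EVENTS)[logits.argmax(dim=1).item()])
```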
Gagan Thakral, Sapna Gambhir, and Nagender Aneja
IEEE
Sandeep Singh, Shalini Bhaskar Bajaj, Khushboo Tripathi, and Nagender Aneja
IEEE
In this paper, the Mobile Ad Hoc Network (MANET) is considered for analyzing the performance of Destination-Sequenced Distance Vector (DSDV) from the proactive class, and Ad Hoc On-Demand Distance Vector (AODV) and Dynamic Source Routing (DSR) from the reactive class. The protocols were simulated using the NS-2 (Network Simulator 2.35) package on Linux 12.04. The paper focuses on performance parameters such as packet size, speed, packet rate, Transmission Control Protocol (TCP) variants, number of packets, and energy in the network. Simulation results show that DSR performs better than AODV and DSDV. The results were compared by inspecting packet delivery rate, percentage of lost packets, throughput, and jitter while varying packet size, TCP variant, and the number of packets in the queue. The study can be further extended by applying artificial intelligence algorithms in MANETs to improve results in the presence of various types of attacks.
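One of the compared metrics, packet delivery rate, can be computed from a simulator trace; the sketch below assumes the classic NS-2 wireless trace format (event flag in the first field, 'AGT' marking agent-level packets), which varies across NS-2 versions, and 'out.tr' is a placeholder path.

```python
# Sketch: packet delivery ratio from an NS-2 wireless trace, assuming
# the classic format where field 0 is the event (s/r/D) and 'AGT'
# marks agent-level (application) packets. Real traces vary by version.
def packet_delivery_ratio(trace_path: str) -> float:
    sent = received = 0
    with open(trace_path) as trace:
        for line in trace:
            fields = line.split()
            if len(fields) < 4 or "AGT" not in fields:
                continue
            if fields[0] == "s":
                sent += 1
            elif fields[0] == "r":
                received += 1
    return received / sent if sent else 0.0

print(f"PDR: {packet_delivery_ratio('out.tr'):.2%}")  # 'out.tr' is a placeholder
```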
Sandhya Aneja, Nagender Aneja, Bharat Bhargava, and Rajarshi Roy Chowdhury
Inderscience Publishers
Device fingerprinting is the problem of identifying a network device from its network traffic data in order to secure against cyber-attacks. Automated device classification from a large network-traffic feature space is challenging for devices connected in cyberspace. In this work, the idea is to define a device-specific unique fingerprint by analysing solely the inter-arrival time of packets as a feature to identify a device. Neural networks are universal function approximators that learn abstract, high-level, nonlinear representations of training data. A deep convolutional neural network is applied to images of inter-arrival time signatures for fingerprinting 58 non-IoT devices of 5-11 types. To evaluate performance, we compared a 50-layer ResNet and a basic 5-layer CNN architecture. We observed that device-type identification models perform better than device identification models. We also found that when the deep learning models are attacked through the device signature, they identify the change in signature but classify the device into the wrong class, so the classification performance of the models degrades. The two models differ significantly in their ability to detect attacks, though both indicate when the system is under attack.
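As a hedged illustration of turning inter-arrival times into a CNN-ready image, the sketch below bins a packet-timestamp stream into a fixed 32x32 signature; the encoding (log scaling, image size) is an assumption, since the abstract does not specify it.

```python
# Sketch: turn packet inter-arrival times into a fixed-size 2D
# "signature" suitable as CNN input. The binning and image size are
# assumptions, not the paper's exact encoding.
import numpy as np

def iat_signature(timestamps: np.ndarray, size: int = 32) -> np.ndarray:
    iats = np.diff(np.sort(timestamps))               # inter-arrival times
    iats = iats[:size * size]                         # truncate to one image
    img = np.zeros(size * size)
    img[:len(iats)] = np.log1p(iats / iats.max())     # compress dynamic range
    return img.reshape(size, size)

rng = np.random.default_rng(7)
timestamps = np.cumsum(rng.exponential(0.01, 2000))  # synthetic packet times
print(iat_signature(timestamps).shape)               # (32, 32), one channel
```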