@ubd.edu.bn
Assistant Professor, School of Digital Science
Universiti Brunei Darussalam
Dr. Nagender Aneja is an Assistant Professor in the School of Digital Science at Universiti Brunei Darussalam in Brunei Darussalam and the founder of ResearchID.
Ph.D. Computer Engineering
M.E. Computer Technology and Applications
Computer Science, Artificial Intelligence, Computer Vision and Pattern Recognition, Computer Science Applications
Deep learning needs large amounts of data for training; however, in some industrial applications, sufficient data may not be available, limiting the deep-learning approach. Modern techniques such as transfer learning and generative adversarial networks show promise in addressing this challenge. The objective of this project is to propose new techniques for training deep learning models.
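A minimal sketch of one such technique, transfer learning with a frozen ImageNet backbone in PyTorch, is shown below; the class count, batch, and data are hypothetical placeholders, not part of the project.

```python
import torch
import torch.nn as nn
from torchvision import models

num_classes = 5  # hypothetical class count for the small industrial dataset

# Load an ImageNet-pre-trained backbone and freeze its weights.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False
# Replace the classification head; only this layer is trained.
model.fc = nn.Linear(model.fc.in_features, num_classes)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One training step on a hypothetical batch (x, y) from the limited dataset.
x = torch.randn(8, 3, 224, 224)
y = torch.randint(0, num_classes, (8,))
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```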
Deep-learning networks are susceptible to a butterfly effect, wherein small alterations in the input data can lead to drastically different outcomes, making them inherently volatile. Thus, the output of a deep-learning network can be manipulated by altering its input or by adding noise. Research has shown that it is possible to fool a deep-learning network by adding an imperceptible amount of noise to the input.
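One well-known construction of such imperceptible noise is the fast gradient sign method (FGSM); the sketch below assumes a generic PyTorch classifier and is illustrative only.

```python
import torch
import torch.nn as nn

def fgsm_perturb(model, x, y, epsilon=0.01):
    """Return x plus an imperceptible adversarial perturbation (max-norm epsilon)."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(x), y)
    loss.backward()
    # Step in the direction that increases the loss; clip to the valid image range.
    return torch.clamp(x + epsilon * x.grad.sign(), 0.0, 1.0).detach()
```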
Generative adversarial networks (GANs) may have the potential to solve the text-to-image problem, but there are challenges in using GANs for NLP. Image classification has benefited from large mini-batches, and one open question (https://distill.pub/2019/gan-open-problems/#batchsize) is whether large batches can also help scale GANs.
Scopus Publications
Scholar Citations
Scholar h-index
Scholar i10-index
Nishtha Jatana, Mansehej Singh, Charu Gupta, Geetika Dhand, Shaily Malik, Pankaj Dadheech, Nagender Aneja, and Sandhya Aneja
Springer Science and Business Media LLC
Chour Singh Rajpoot, Gajanand Sharma, Praveen Gupta, Pankaj Dadheech, Umar Yahya, and Nagender Aneja
Informa UK Limited
Shafkat Islam, Nagender Aneja, Ruy De Oliveira, Sandhya Aneja, Bharat Bhargava, Jason Hamlet, and Chris Jenkins
IEEE
Over the years, space systems have evolved considerably to provide high-quality services for demanding applications such as navigation, communication, and weather forecasting. Modern space systems rely on extremely fast commercial off-the-shelf (COTS) processing units, with built-in GPU, DSP, and FPGA, in lightweight, energy-efficient hardware. Since such devices are not necessarily designed with security features as a priority, there must be an adaptive controller to protect these mission-critical space systems from potential malicious attacks, such as memory leaks, packet drops, and algorithmic trojans, which can drive the system to substantial inefficiency or complete failure. Considering the hardware diversity in current space systems, we propose a framework that exploits both diversity and redundancy, not only of hardware but also of software, to make the overall system fault-tolerant. Our approach deploys mechanisms for monitoring and orchestrating actions of redundancy, diversity, and randomization to render the system resilient and unpredictably dynamic, and to optimize efficiency as much as possible during abnormalities. In addition, we use rule-based and adaptive engines to keep track of the various computing units and learn the best strategies to take when the system is under attack. The robustness of our approach lies in the fact that it makes the system highly unpredictable to potential attackers and tolerates attacks to some extent, which is crucial for any mission-critical application.
Priyanka Sharma, Pankaj Dadheech, Nagender Aneja, and Sandhya Aneja
Institute of Electrical and Electronics Engineers (IEEE)
Agriculture contributes a significant amount to the economy of India, as human survival depends on it. The main obstacle to food security is population expansion, which leads to rising demand for food; farmers must produce more on the same land to boost the supply. Through crop yield prediction, technology can assist farmers in producing more. This paper's primary goal is to predict crop yield utilizing the variables of rainfall, crop, meteorological conditions, area, production, and yield, factors that have posed a serious threat to the long-term viability of agriculture. Crop yield prediction is a decision-support tool, built on machine learning and deep learning, that can be used to decide which crops to produce and what to do during the crop's growing season. Regardless of the distracting environment, machine learning and deep learning algorithms are utilized in crop selection to reduce agricultural yield losses. To estimate agricultural yield, the machine learning techniques of decision tree, random forest, and XGBoost regression, and the deep learning techniques of convolutional neural network and long short-term memory network, have been used. Accuracy, root mean square error, mean square error, mean absolute error, standard deviation, and losses are compared. Other machine learning and deep learning methods fall short compared to the random forest and the convolutional neural network. The random forest achieves a maximum accuracy of 98.96%, a mean absolute error of 1.97, a root mean square error of 2.45, and a standard deviation of 1.23. The convolutional neural network has been evaluated with a minimum loss of 0.00060. Consequently, a model is developed that predicts the yield considerably better than the other algorithms. The findings are then analyzed using the root mean square error metric to better understand how the model's errors compare to those of the other methods.
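As an illustration of the strongest classical model reported above, here is a minimal random-forest regression sketch in scikit-learn; the CSV file and column names are hypothetical stand-ins for the paper's data.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error
from sklearn.model_selection import train_test_split

df = pd.read_csv("crop_data.csv")  # hypothetical file with the listed variables
X = pd.get_dummies(df[["rainfall", "crop", "area", "production"]])  # one-hot the crop name
y = df["yield"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
pred = model.predict(X_test)
print("MAE :", mean_absolute_error(y_test, pred))
print("RMSE:", mean_squared_error(y_test, pred) ** 0.5)
```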
Wang Xin Hui, Nagender Aneja, Sandhya Aneja, and Abdul Ghani Naim
Elsevier BV
Kavita Sheoran, Arpit Bajgoti, Rishik Gupta, Nishtha Jatana, Geetika Dhand, Charu Gupta, Pankaj Dadheech, Umar Yahya, and Nagender Aneja
Institute of Electrical and Electronics Engineers (IEEE)
Nagender Aneja, Sandhya Aneja, and Bharat Bhargava
Hindawi Limited
WiFi and private 5G networks, commonly referred to as P5G, provide Internet of Things (IoT) devices with the ability to communicate at fast speeds, with low latency, and with high capacity. Will they coexist and share the burden of delivering a connection to devices at home, on the road, in the workplace, and at a park or a stadium? Or will one replace the other to manage the increase in endpoints and traffic in enterprise, campus, and manufacturing environments? In this research, we describe IoT device testbeds for collecting network traffic in a local area network and in cyberspace, including beyond-5G/6G network traffic traces at different layers. We also describe research problems and challenges, such as traffic classification and traffic prediction from the traffic traces of devices. An AI-enabled hierarchical learning architecture for these problems is also presented, applying machine learning models to sources such as network packets, frames, and signals from the traffic traces.
Ajay Kumar Bansal, Virendra Swaroop Sangtani, Pankaj Dadheech, Nagender Aneja, and Umar Yahya
Informa UK Limited
Sandhya Aneja, Nagender Aneja, and Ponnurangam Kumaraguru
IAES International Journal of Artificial Intelligence, Institute of Advanced Engineering and Science
Media news shapes a large part of public opinion and, therefore, must not be fake. News on websites, blogs, and social media must be analyzed before being published. In this paper, we present linguistic characteristics of media news items to differentiate between fake news and real news using machine learning algorithms. Neural fake-news generation, headlines created by machines, and semantic incongruities in machine-generated text and image captions are other types of fake-news problems. These approaches use neural networks that mainly exploit distributional features rather than evidence. We propose applying the correlation between the feature set and the class, and the correlation among the features, to compute a correlation attribute evaluation metric, and a covariance metric to compute the variance of attributes over the news items. Features such as unique words, negative words, positive words, and cardinal numbers, having high values on these metrics, are observed to provide a high area under the curve (AUC) and F1-score.
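A minimal sketch of the feature-ranking idea, using absolute Pearson correlation between each feature and the class label; the feature matrix and labels are random placeholders, not the paper's exact metric definitions.

```python
import numpy as np

def correlation_ranking(X, y):
    """Rank features by absolute Pearson correlation with the class label."""
    scores = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])]
    return np.argsort(scores)[::-1], scores

X = np.random.rand(100, 4)        # 100 news items, 4 linguistic features (placeholder)
y = np.random.randint(0, 2, 100)  # 0 = real, 1 = fake (placeholder labels)
order, scores = correlation_ranking(X, y)
print("features ranked by |corr| with the class:", order)
```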
Sandhya Aneja, Nagender Aneja, Pg Emeroylariffion Abas, and Abdul Ghani Naim
Institute of Advanced Engineering and Science
Transfer learning allows us to exploit knowledge gained from one task to assist in solving another, related task. In modern computer vision research, the question is which architecture performs better for a given dataset. In this paper, we compare the performance of 14 pre-trained ImageNet models on the histopathologic cancer detection dataset, where each model has been configured as a naive model, a feature-extractor model, or a fine-tuned model. DenseNet-161 has been shown to have high precision, whilst ResNet-101 has high recall. A high-precision model is suitable when the cost of a follow-up examination is high, whilst a low-precision but high-recall/sensitivity model can be used when the cost of a follow-up examination is low. Results also show that transfer learning helps a model converge faster.
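The three configurations can be sketched for one torchvision model as follows; the two-class head reflects the binary detection task, and "naive" is taken here to mean training from random initialization.

```python
import torch.nn as nn
from torchvision import models

def configure(mode: str, num_classes: int = 2) -> nn.Module:
    """Build DenseNet-161 in one of the three configurations compared above."""
    if mode == "naive":  # random initialization, no ImageNet weights
        model = models.densenet161(weights=None)
    else:
        model = models.densenet161(weights=models.DenseNet161_Weights.DEFAULT)
    if mode == "feature_extractor":  # freeze the backbone, train only the head
        for p in model.parameters():
            p.requires_grad = False
    # A fresh classification head; "fine_tuned" leaves every parameter trainable.
    model.classifier = nn.Linear(model.classifier.in_features, num_classes)
    return model
```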
Anand Kumar, Dharmesh Dhabliya, Pankaj Agarwal, Nagender Aneja, Pankaj Dadheech, Sajjad Shaukat Jamal, and Owusu Agyeman Antwi
Hindawi Limited
The Internet of Things (IoT) ushers in a new era of communication that depends on a broad range of things and many types of communication technologies to share information. Because all of the IoT's objects are interconnected and operate in unprotected environments, the IoT poses far more issues, constraints, and challenges than traditional computing systems, and security must therefore be prioritised in a new way. The wireless sensor network (WSN) and the mobile ad hoc network (MANET) are two technologies that play significant roles in building an IoT system; they are used in a wide variety of activities, including sensing, environmental monitoring, data collection, heterogeneous communication techniques, and data processing. Because it incorporates characteristics of both MANET and WSN, the IoT is susceptible to the same kinds of security issues that affect those networks. A Delegate Entity Attack (DEA) is a subclass of Denial of Service (DoS) attack in which the attacker sends an unacceptable number of control packets that have the appearance of being authentic. DoS attacks may take many forms, and the SD attack is one of them; it is far more difficult to recognise than a simple attack that depletes the battery's capacity. Another key challenge that arises in a network during an SD attack is the need to enhance energy management and prolong the lifespan of IoT nodes. We recommend the use of a Random Number Generator with Hierarchical Intrusion Detection System (RNGHID). The IoT ecosystem is likely to be segmented into a great number of separate sectors and clusters. The hierarchical detection system has been partitioned into two entities, the Delegate Entity (DE) and the Pivotal Entity (PE), to identify any nodes in the network that are behaving abnormally. Once the anomalies have been identified, the region of the SD attack and the damaging activities that have taken place can be pinpointed. A warning message, generated by the Malicious Node Alert System (MNAS), is broadcast across the network to inform the other nodes that the network is under attack; this message classifies the various sorts of attacks based on a machine learning algorithm. The proposed protocol displays several desirable properties, including the capacity for indivisible authentication, rapid authentication, and minimal overhead in both transmission and storage.
Gagan Thakral, Sapna Gambhir, and Nagender Aneja
IEEE
Sudhir Sharma, Kaushal Kishor Bhatt, Rimmy Chabra, and Nagender Aneja
Springer Nature Singapore
Sandeep Singh, Shalini Bhaskar Bajaj, Khushboo Tripathi, and Nagender Aneja
IEEE
In this paper, a mobile ad hoc network (MANET) is considered for analyzing the performance of the Destination-Sequenced Distance Vector (DSDV) protocol of the proactive class, and the Ad Hoc On-Demand Distance Vector (AODV) and Dynamic Source Routing (DSR) protocols of the reactive class. The protocols were simulated using the NS-2 (Network Simulator 2.35) package on Linux 12.04. The paper focuses on performance parameters, e.g., packet size, speed, packet rate, Transmission Control Protocol (TCP) variants, number of packets, and energy in the network. Simulation results show that DSR gives better performance than AODV and DSDV. The results were compared by inspecting packet delivery rate, percentage of lost packets, throughput, and jitter while varying packet size, TCP variants, and the number of packets in the queue. The study can be further extended by applying artificial-intelligence algorithms in MANETs to achieve better results, including in the presence of attacks.
Sandhya Aneja, Nagender Aneja, Bharat Bhargava, and Rajarshi Roy Chowdhury
Inderscience Publishers
Device fingerprinting is the problem of identifying a network device using network traffic data, to secure against cyber-attacks. Automated device classification from a large network-traffic feature space is challenging for devices connected in cyberspace. In this work, the idea is to define a device-specific unique fingerprint by analysing solely the inter-arrival time of packets as the feature to identify a device. Neural networks are universal function approximators that learn abstract, high-level, nonlinear representations of the training data. A deep convolutional neural network is used on images of the inter-arrival-time signature for fingerprinting of 58 non-IoT devices of 5-11 types. To evaluate the performance, we compared a 50-layer ResNet and a basic 5-layer CNN architecture. We observed that device-type identification models perform better than device identification models. We also found that when the deep learning models are attacked over the device signature, the models identify the change in signature but classify the device into the wrong class, thereby degrading classification performance. The two models differ significantly in their ability to detect attacks, though both indicate when the system is under attack.
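A rough sketch of how an inter-arrival-time signature might be rendered as a fixed-size image for a CNN; the log transform, image size, and padding scheme are assumptions for illustration, not the paper's exact pipeline.

```python
import numpy as np

def iat_signature(timestamps, side=32):
    """Map a device's packet inter-arrival times onto a side x side grayscale image."""
    iat = np.diff(np.sort(np.asarray(timestamps, dtype=float)))
    iat = np.log1p(iat)                 # compress the heavy-tailed IAT distribution
    iat = np.resize(iat, side * side)   # pad (by repetition) or trim to a fixed length
    img = iat.reshape(side, side)
    rng = img.max() - img.min()
    return (img - img.min()) / (rng + 1e-9)  # normalize to [0, 1] for the CNN
```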
Sandhya Aneja, Melanie Ang Xuan En, and Nagender Aneja
IEEE
Artificial intelligence (AI) development has encouraged many new research areas, including AI-enabled Internet of Things (IoT) networks. AI analytics and intelligent paradigms greatly improve learning efficiency and accuracy, and applying these learning paradigms to network scenarios provides technical advantages for new networking solutions. In this paper, we propose an improved approach to IoT security from a data perspective. The network traffic of IoT devices can be analyzed using AI techniques. The adversary learning (AdLIoTLog) model is proposed, using a recurrent neural network (RNN) with an attention mechanism on sequences of network events in the network traffic. We define network events as sequences of the time-series packets of protocols captured in the log. We have considered different packet types, TCP, UDP, and HTTP, in the network log to make the algorithm robust. Distributed IoT devices can collaborate to cripple our world, which is extending to the Internet of Intelligence. The time-series packets are converted into structured data by removing noise and adding timestamps. The resulting dataset is used to train the RNN, which can detect the node pairs collaborating with each other. We used the BLEU score to evaluate model performance. Our results show that the predictive performance of the AdLIoTLog model trained by our method degrades by 3-4% in the presence of an attack, in comparison to the scenario when the network is not under attack. AdLIoTLog can detect adversaries because, when adversaries are present, the model gets duped by the collaborative events and therefore predicts a biased next event rather than a benign one. We conclude that AI can provision ubiquitous learning for the new generation of the Internet of Things.
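A compact sketch in the spirit of this setup, a GRU next-event predictor over tokenized log events, omitting the attention mechanism for brevity; the vocabulary size and sequences are hypothetical.

```python
import torch
import torch.nn as nn

class NextEventRNN(nn.Module):
    """GRU that predicts the next network event given the events seen so far."""
    def __init__(self, vocab_size, emb_dim=32, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, seq):              # seq: (batch, time) of event ids
        h, _ = self.rnn(self.embed(seq))
        return self.out(h)               # next-event logits at every step

vocab = 100                              # hypothetical number of event types
model = NextEventRNN(vocab)
seq = torch.randint(0, vocab, (4, 20))   # 4 hypothetical event sequences
logits = model(seq[:, :-1])              # predict event t+1 from events up to t
loss = nn.CrossEntropyLoss()(logits.reshape(-1, vocab), seq[:, 1:].reshape(-1))
```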
Rajarshi Roy Chowdhury, Sandhya Aneja, Nagender Aneja, and Pg Emeroylariffion Abas
Data in Brief, Elsevier BV
With the growth of devices based on wireless network technology, identifying the communication behaviour of wireless-connectivity-enabled devices, e.g. Internet of Things (IoT) devices, is one of the vital aspects of managing and securing IoT networks. Initially, devices use frames to connect to the access point on the local area network and then use packets of typical communication protocols through the access point to communicate over the Internet. Toward this goal, network-packet and IEEE 802.11 media access control (MAC) frame analysis may assist in managing IoT networks efficiently and allow investigation of the inclusive behaviour of IoT devices. This paper presents network traffic trace data of D-Link IoT devices at the packet and frame levels. The data collection experiment was conducted in the Network Systems and Signal Processing (NSSP) laboratory at Universiti Brunei Darussalam (UBD). All the required devices, such as IoT devices, a workstation, a smartphone, a laptop, a USB Ethernet adapter, and a USB WiFi adapter, were configured accordingly to capture and store the network traffic traces of the 14 IoT devices in the laboratory. These IoT devices were from the same manufacturer (D-Link) but of different types, such as cameras, home hubs, door-window sensors, and smart plugs.
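A capture step of this kind might look like the following scapy sketch; the interface name and device MAC address are placeholders for the laboratory setup, not the actual configuration.

```python
from scapy.all import Ether, sniff, wrpcap

DEVICE_MAC = "aa:bb:cc:dd:ee:ff"  # placeholder MAC of one D-Link IoT device

def from_device(pkt):
    """Keep only frames sourced from the device under study."""
    return pkt.haslayer(Ether) and pkt[Ether].src.lower() == DEVICE_MAC

packets = sniff(iface="eth0", lfilter=from_device, timeout=60)  # placeholder interface
wrpcap("dlink_device_trace.pcap", packets)  # store the trace for later analysis
print(f"captured {len(packets)} packets")
```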
Nagender Aneja and Sapna Gambhir
Springer Science and Business Media LLC
Ad hoc social networks are formed by groups of nodes that share similar interests. The network establishes a two-layer hierarchical structure comprising communication within a group and joining with other groups. This paper presents a survey and future directions in four areas of establishing ad hoc social networks over mobile ad hoc networks (MANETs): architecture and implementation features, user profile management, similarity metrics, and routing protocols. The survey highlights the need for social applications over MANETs, optimized user profile-matching algorithms, and context-aware routing protocols. Future directions include multi-hop social network applications that remain useful even in airplane mode, and notification over the MANET when a user with similar interests is nearby.
Nur Umairah Ali Hazis, Nagender Aneja, Rajan Rajabalaya, and Sheba Rani David
Bentham Science Publishers Ltd.
Background: The application of nanotechnology has been considered a powerful platform for improving the current situation in drug delivery and cancer therapy, especially in targeting the desired site of action. Objective: The main objective of this patent review is to survey and review patents from the past ten years related to these two particular areas of nanomedicine. Methods: Patents related to nanoparticle-based inventions used in drug delivery and cancer treatment from 2010 onwards were browsed in databases such as USPTO, WIPO, Google Patents, and Free Patents Online. After numerous screening processes, following the PRISMA 2020 checklist, a total of 40 patents were included in the patent analysis. Results: Amongst the selected patents, an overview of various types of nanoparticles is presented in this paper, including polymeric, metallic, silica, and lipid-based nanoparticles, quantum dots, carbon nanotubes, and albumin-based nanomedicines. Conclusion: The advantages of nanomedicines include improvements in the delivery, bioavailability, solubility, penetration, and stability of drugs. It is concluded that the utilization of nanoparticles in medicines is essential in the pursuit of better clinical practice.
Nagender Aneja and Sandhya Aneja
Springer International Publishing
Fake news is intentionally written to influence individuals and their belief systems. Detection of fake news has become extremely important since it is impacting society and politics negatively. Most existing works have used supervised learning, giving importance to the words used in the dataset. That approach may work well when the dataset is huge and covers a wide domain; however, obtaining a labeled dataset of fake news is a challenging problem, and the algorithms are trained only after the news has already been disseminated. In contrast, this research emphasizes content-based prediction based on statistical language features. Our assumption of using statistical language features is relevant since fake news is written to influence human psychology, and a pattern in the language features can predict whether news is fake or not. We extracted 43 features, including parts of speech and sentiment analysis, and show that AdaBoost gave accuracy and F-score close to 1 when using all 43 features. Results also show that the top ten features, instead of all 43, give an accuracy of 0.85 and an F-score of 0.87.
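A toy sketch of the feature idea, computing a few language-statistics features (cardinal-number counts via NLTK POS tags, VADER sentiment, and a unique-word ratio) and fitting AdaBoost; the texts and labels are invented, and NLTK resource names may vary across versions.

```python
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer
from sklearn.ensemble import AdaBoostClassifier

for res in ("punkt", "averaged_perceptron_tagger", "vader_lexicon"):
    nltk.download(res)  # resource names may vary across NLTK versions
sia = SentimentIntensityAnalyzer()

def features(text):
    words = nltk.word_tokenize(text)
    tags = [tag for _, tag in nltk.pos_tag(words)]
    scores = sia.polarity_scores(text)
    return [tags.count("CD"),                       # cardinal numbers
            scores["pos"], scores["neg"],           # sentiment proportions
            len(set(words)) / max(len(words), 1)]   # unique-word ratio

texts = ["Officials confirmed the figures today.",
         "SHOCKING!!! You won't believe what happened next!!!"]
labels = [0, 1]  # 0 = real, 1 = fake (invented toy labels)
clf = AdaBoostClassifier().fit([features(t) for t in texts], labels)
```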
Rajarshi Roy Chowdhury, Sandhya Aneja, Nagender Aneja, and Emeroylariffion Abas
ACM
Device identification is the process of identifying a device on the Internet without using its assigned network or other credentials. The sharp rise in the usage of Internet of Things (IoT) devices has imposed new challenges on device identification due to the wide variety of devices, protocols, and control interfaces. In a network, conventional IoT devices identify each other by IP or MAC addresses, which are prone to spoofing. Moreover, IoT devices are low-power devices with minimal embedded security. To mitigate these issues, device fingerprinting (DFP) can be used for identification. DFP identifies a device by implicit identifiers, such as the network traffic (or packets) and radio signals that the device uses for its communication over the network. These identifiers are closely related to the device's hardware and software features. In this paper, we exploit TCP/IP packet header features to create a device fingerprint from device-originated network packets. We present a set of three metrics that select, from a packet, the features that contribute actively to device identification. To evaluate our approach, we used two publicly accessible datasets. We observed 99.37% accuracy in device genre classification and 83.35% accuracy in identifying individual devices on the IoT Sentinel dataset, while on the UNSW dataset device-type identification accuracy reached up to 97.78%.
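A sketch of turning TCP/IP header fields into per-packet feature vectors with scapy; the chosen fields and the pcap file name are assumptions for illustration, not the paper's three metrics.

```python
from scapy.all import IP, TCP, rdpcap

def header_features(pkt):
    """Collect a few TCP/IP header fields as a per-packet feature vector."""
    ip, tcp = pkt[IP], pkt[TCP]
    return [ip.ttl, ip.len, int(ip.flags), tcp.window, tcp.dataofs, len(tcp.options)]

packets = rdpcap("iot_device_trace.pcap")  # placeholder capture file
vectors = [header_features(p) for p in packets if p.haslayer(IP) and p.haslayer(TCP)]
print(f"extracted {len(vectors)} feature vectors")
```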
Sandhya Aneja, Siti Nur Afikah Bte Abdul Mazid, and Nagender Aneja
ACM International Conference Proceeding Series, ACM
Machine translation has many applications, such as news translation, email translation, and official letter translation. Commercial translators, e.g. Google Translate, lag in regional vocabulary and are unable to learn the bilingual text in the source and target languages within the input. In this paper, a regional-vocabulary-based, application-oriented neural machine translation (NMT) model is proposed over a dataset of emails used at the University for communication over a period of three years. A state-of-the-art sequence-to-sequence neural network for ML → EN (Malay to English) and EN → ML (English to Malay) translation, using a gated recurrent unit (GRU) recurrent neural network machine translation model with an attention decoder, is compared with Google Translate. The low BLEU score of Google Translate in comparison to our model indicates that application-based regional models are better. The low BLEU scores of both our model and Google Translate for English to Malay indicate that the Malay language has complex language features relative to English.
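The BLEU comparison can be illustrated with NLTK as below; the sentences are invented examples, not drawn from the email dataset.

```python
from nltk.translate.bleu_score import SmoothingFunction, sentence_bleu

reference = [["please", "submit", "the", "report", "by", "friday"]]
ours = ["please", "submit", "the", "report", "by", "friday"]
baseline = ["please", "send", "report", "until", "friday"]

smooth = SmoothingFunction().method1  # avoid zero scores on short sentences
print("our model:", sentence_bleu(reference, ours, smoothing_function=smooth))
print("baseline :", sentence_bleu(reference, baseline, smoothing_function=smooth))
```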
Nagender Aneja and Sandhya Aneja
IEEE
This paper presents an analysis of pre-trained models for recognizing handwritten Devanagari alphabets using transfer learning with deep convolutional neural networks (DCNNs). This research implements AlexNet, DenseNet, VGG, and Inception ConvNets as fixed feature extractors. We trained 15 epochs for each of AlexNet, DenseNet-121, DenseNet-201, VGG-11, VGG-16, VGG-19, and Inception V3. Results show that Inception V3 performs best in terms of accuracy, achieving 99% accuracy with an average epoch time of 16.3 minutes, while AlexNet is the fastest at 2.2 minutes per epoch, achieving 98% accuracy.
Founder and Developer, ResearchID