Ali Abdularahman Alani

@uodiyala.edu.iq

College of Sciences / Computer Department
Diyala University

https://researchid.co/alialani

RESEARCH INTERESTS

Machine Learning

METRICS

  • Scopus Publications: 11
  • Scholar Citations: 275
  • Scholar h-index: 7
  • Scholar i10-index: 7

SCOPUS PUBLICATIONS

  • A Model for Qur'anic Sign Language Recognition Based on Deep Learning Algorithms
    Hany A. AbdElghfar, Abdelmoty M. Ahmed, Ali A. Alani, Hammam M. AbdElaal, Belgacem Bouallegue, Mahmoud M. Khattab, Gamal Tharwat, and Hassan A. Youness

    Hindawi Limited
    Deaf and mute Muslims often cannot reach advanced levels of education because of the barriers that deafness places on their educational attainment. This prevents them from learning, reciting, and understanding the meanings and interpretations of the Holy Qur’an as easily as hearing people, and from performing Islamic rituals such as prayer that require learning and reading the Holy Qur’an. In this paper, we propose a new model for Qur’anic sign language recognition based on convolutional neural networks, organized into data preparation, preprocessing, feature extraction, and classification stages. The proposed model aims to recognize Arabic sign language by identifying the hand gestures that correspond to the dashed Qur’anic letters, in order to help deaf people learn their Islamic rituals. The experiments were conducted on the part of the large Arabic sign language dataset ArSL2018 that represents the 14 dashed letters of the Holy Qur’an; this subset contains 24,137 images. The experimental results demonstrate that the proposed model performs better than existing models.
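
    A minimal sketch, not taken from the paper, of the kind of CNN classifier described above; the 64x64 grayscale input size and the layer widths are assumptions, with 14 output classes for the dashed Qur'anic letters.

    ```python
    # Illustrative CNN for Qur'anic sign language recognition (architecture details assumed).
    import tensorflow as tf
    from tensorflow.keras import layers, models

    num_classes = 14           # the 14 dashed Qur'anic letters
    input_shape = (64, 64, 1)  # assumed grayscale image size

    model = models.Sequential([
        layers.Conv2D(32, (3, 3), activation="relu", input_shape=input_shape),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    ```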

  • QSLRS-CNN: Qur'anic sign language recognition system based on convolutional neural networks
    Hany A. AbdElghfar, Abdelmoty M. Ahmed, Ali A. Alani, Hammam M. AbdElaal, Belgacem Bouallegue, Mahmoud M. Khattab, and Hassan A. Youness

    Informa UK Limited

  • ArSL-CNN: A convolutional neural network for arabic sign language gesture recognition
    Ali A. Alani and Georgina Cosma

    Institute of Advanced Engineering and Science
    Sign language (SL) is a visual means of communication for people who are Deaf or have hearing impairments. In Arabic-speaking countries there are many Arabic sign languages (ArSL), and these use the same alphabet. This study proposes ArSL-CNN, a deep learning model based on a convolutional neural network (CNN) for translating Arabic sign language. Experiments were performed using a large ArSL dataset (ArSL2018) that contains 54,049 images of 32 sign language gestures, collected from forty participants. The first experiments with the ArSL-CNN model returned train and test accuracies of 98.80% and 96.59%, respectively. The results also revealed the impact of imbalanced data on model accuracy. For the second set of experiments, various re-sampling methods were applied to the dataset. Results revealed that applying the synthetic minority oversampling technique (SMOTE) improved the overall test accuracy from 96.59% to 97.29%, a statistically significant improvement (p = 0.016, α = 0.05). The proposed ArSL-CNN model can be trained on a variety of Arabic sign languages and reduce the communication barriers encountered by Deaf communities in Arabic-speaking countries.
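
    A minimal sketch, under stated assumptions, of correcting class imbalance with SMOTE before training the CNN; the toy arrays merely stand in for ArSL2018, and the 64x64 image size is an assumption.

    ```python
    # Oversample minority classes with SMOTE on flattened images, then reshape for a CNN.
    import numpy as np
    from imblearn.over_sampling import SMOTE

    rng = np.random.default_rng(0)
    X = rng.random((300, 64, 64, 1)).astype("float32")  # toy images standing in for ArSL2018
    y = np.array([0] * 250 + [1] * 50)                   # deliberately imbalanced labels

    n, h, w, c = X.shape
    X_flat = X.reshape(n, h * w * c)                     # SMOTE expects a 2-D feature matrix
    X_res, y_res = SMOTE(random_state=42).fit_resample(X_flat, y)
    X_res = X_res.reshape(-1, h, w, c)                   # back to image tensors for the CNN
    print(np.bincount(y_res))                            # both classes now have 250 samples
    ```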

  • Classifying Imbalanced Multi-modal Sensor Data for Human Activity Recognition in a Smart Home using Deep Learning
    Ali A. Alani, Georgina Cosma, and Aboozar Taherkhani

    IEEE
    In smart homes, the data generated from real-time sensors for human activity recognition are complex, noisy, and imbalanced. It is a significant challenge to create machine learning models that can classify activities which occur less commonly than others, because models trained on imbalanced data are naturally biased towards the more frequently occurring classes, which contain more records to learn from. This paper examines whether fusing real-world imbalanced multi-modal sensor data improves classification results compared with using unimodal data, and compares deep learning approaches to dealing with imbalanced multi-modal sensor data under various resampling methods and deep learning models. Experiments were carried out using a large multi-modal sensor dataset generated from the Sensor Platform for HEalthcare in a Residential Environment (SPHERE). The data comprise 16,104 samples, where each sample has 5,608 features and belongs to one of 20 activities (classes). Experimental results using SPHERE demonstrate the challenges of dealing with imbalanced multi-modal data and highlight the importance of having a suitable number of samples within each class for sufficiently training and testing deep learning models. Furthermore, when the data were fused and the Synthetic Minority Oversampling Technique (SMOTE) was used to correct class imbalance, CNN-LSTM achieved the highest classification accuracy of 93.67%, followed by CNN (93.55%) and LSTM (92.98%).
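
    A minimal sketch of a CNN-LSTM of the kind compared above; the reshaping of each 5,608-feature sample into 8 steps of 701 features, and all layer sizes, are assumptions rather than the paper's exact configuration.

    ```python
    # CNN-LSTM: 1-D convolution over fused sensor features, followed by an LSTM layer.
    import tensorflow as tf
    from tensorflow.keras import layers, models

    timesteps, features, num_classes = 8, 701, 20  # 8 x 701 = 5608 features, 20 activities

    model = models.Sequential([
        layers.Conv1D(64, 3, activation="relu", input_shape=(timesteps, features)),
        layers.MaxPooling1D(2),
        layers.LSTM(64),                           # temporal modelling on convolutional features
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    ```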


  • Activity recognition from multi-modal sensor data using a deep convolutional neural network
    Aboozar Taherkhani, Georgina Cosma, Ali A. Alani, and T. M. McGinnity

    Springer International Publishing
    Multi-modal data extracted from different sensors in a smart home can be fused to build models that recognize the daily living activities of residents. This paper proposes a Deep Convolutional Neural Network to perform the activity recognition task using the multi-modal data collected from a smart residential home. The dataset contains accelerometer data (composed of three perpendicular components of acceleration and the strength of the accelerometer signal received by four receivers), video data (15 time-series related to 2D and 3D center of mass and bounding box extracted from an RGB-D camera), and Passive Infra-Red sensor data. The performance of the Deep Convolutional Neural Network is compared to the Deep Belief Network. Experimental results revealed that the Deep Convolutional Neural Network with two pairs of convolutional and max pooling layers achieved better classification accuracy than the Deep Belief Network. The Deep Belief Network uses Restricted Boltzmann Machines for pre-training the network. When training deep learning models using classes with a high number of training samples, the DBN achieved 65.97% classification accuracy, whereas the CNN achieved 75.33% accuracy. The experimental results demonstrate the challenges of dealing with multi-modal data and highlight the importance of having a suitable number of samples within each class for sufficiently training and testing deep learning models.
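
    A minimal sketch of the "two pairs of convolutional and max pooling layers" arrangement mentioned above, applied to a fused 1-D sensor feature vector; the input length and the number of activity classes are assumptions.

    ```python
    # Deep CNN with two convolution + max-pooling pairs over fused sensor features.
    import tensorflow as tf
    from tensorflow.keras import layers, models

    model = models.Sequential([
        layers.Conv1D(32, 3, activation="relu", input_shape=(128, 1)),  # assumed feature length
        layers.MaxPooling1D(2),
        layers.Conv1D(64, 3, activation="relu"),
        layers.MaxPooling1D(2),
        layers.Flatten(),
        layers.Dense(20, activation="softmax"),    # assumed number of activity classes
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    ```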

  • A Hybrid Model for Classification of Biomedical Data Using Feature Filtering and a Convolutional Neural Network
    Sadegh Salesi, Ali A. Alani, and Georgina Cosma

    IEEE
    Deep learning is known for its ability to analyse large and complex datasets without noise-reduction preprocessing, a step that is usually necessary to improve the performance of conventional machine learning models. Indeed, the advantage of deep learning over conventional machine learning models lies in its ability to learn features directly from large datasets, without manual feature extraction. This paper, however, evaluates the hypothesis that applying feature filtering as a preprocessing step, before feeding the data into the deep learning model, improves the quality of the data and in turn yields a better-performing deep learning model. Two complex biomedical datasets, which contain a large number of features and a sufficient number of patient cases for deep learning, were selected for the evaluation. A selection of feature filtering methods was applied to identify the most important features (i.e., the top 20% of ranked features) at the input level before the data were fed into a deep learning classifier, specifically a Convolutional Neural Network tuned for the task. Experimental results demonstrate that applying feature filtering at the input level improves the performance of the deep Convolutional Neural Network, even for biomedical data as complex as those utilised in this paper. In particular, an improvement of 20% in accuracy was reported for the first dataset, PANCAN, and an improvement of 10.63% in accuracy for the second dataset, GAMETES Epistasis. The results are promising and demonstrate the benefits of feature filtering when deep learning methods are adopted for biomedical classification tasks.
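
    A minimal sketch of the feature-filtering idea described above, keeping the top 20% of ranked features before classification; the dataset and the logistic-regression classifier are placeholders for the biomedical data and the tuned CNN used in the paper.

    ```python
    # Filter-based feature selection (top 20%) followed by a downstream classifier.
    from sklearn.datasets import load_breast_cancer
    from sklearn.feature_selection import SelectPercentile, f_classif
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline

    X, y = load_breast_cancer(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    pipe = make_pipeline(
        SelectPercentile(f_classif, percentile=20),  # keep the top 20% ranked features
        LogisticRegression(max_iter=1000),           # stand-in for the tuned CNN classifier
    )
    pipe.fit(X_tr, y_tr)
    print(f"test accuracy: {pipe.score(X_te, y_te):.3f}")
    ```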

  • Hand gesture recognition using an adapted convolutional neural network with data augmentation
    Ali A. Alani, Georgina Cosma, Aboozar Taherkhani, and T. M. McGinnity

    IEEE
    Hand gestures provide a natural way for humans to interact with computers across a variety of applications. However, factors such as the complexity of hand gesture structures, differences in hand size and posture, and environmental illumination can influence the performance of hand gesture recognition algorithms. Recent advances in deep learning have significantly improved the performance of image recognition systems. In particular, the Deep Convolutional Neural Network has demonstrated superior performance in image representation and classification compared to conventional machine learning approaches. This paper proposes an Adapted Deep Convolutional Neural Network (ADCNN) suitable for hand gesture recognition tasks. Data augmentation is first applied, randomly shifting images horizontally and vertically by up to 20% of their original dimensions, in order to increase the size of the dataset and add the robustness needed for a deep learning approach. These images are input into the proposed ADCNN model, which uses network initialization (ReLU and Softmax) and L2 regularization to mitigate overfitting. With these modifications, the experimental results demonstrate that the ADCNN model is an effective way of increasing CNN performance for hand gesture recognition. The model was trained and tested using 3750 static hand gesture images that incorporate variations in scale, rotation, translation, illumination, and noise. The proposed ADCNN was compared to a baseline Convolutional Neural Network and achieved a classification accuracy of 99.73%, a 4% improvement over the baseline model (95.73%).
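
    A minimal sketch of the augmentation and regularization described above, i.e. random horizontal/vertical shifts of up to 20% and L2 weight penalties; the layer sizes and the number of gesture classes are assumptions, not the paper's exact ADCNN.

    ```python
    # Random 20% shifts via ImageDataGenerator, plus L2-regularized layers.
    import tensorflow as tf
    from tensorflow.keras import layers, models, regularizers
    from tensorflow.keras.preprocessing.image import ImageDataGenerator

    augmenter = ImageDataGenerator(width_shift_range=0.2, height_shift_range=0.2)

    model = models.Sequential([
        layers.Conv2D(32, (3, 3), activation="relu", input_shape=(64, 64, 1),
                      kernel_regularizer=regularizers.l2(1e-4)),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(128, activation="relu", kernel_regularizer=regularizers.l2(1e-4)),
        layers.Dense(10, activation="softmax"),    # assumed number of gesture classes
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # Training would then use something like:
    # model.fit(augmenter.flow(X_train, y_train, batch_size=32), epochs=20)
    ```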

  • On-line voltage stability monitoring using an Ensemble AdaBoost classifier
    Salim S. Maaji, Georgina Cosma, Aboozar Taherkhani, Ali A. Alani, and T. M. McGinnity

    IEEE
    Predictive modeling in electrical power systems is gaining momentum as Phasor Measurement Units (PMUs) are deployed in modern electrical grids to replace the Supervisory Control and Data Acquisition (SCADA) system. This paper evaluates machine learning algorithms for monitoring voltage instability for online decision making. In particular, the performance of the Naïve Bayes, K-Nearest Neighbors, Decision Tree, and ensemble classifiers (XGBoost, Bagging, Random Forest, and AdaBoost) was compared, using precision, recall, F1-score, and accuracy as evaluation measures. A number of voltage stability operating points were generated with different variations of load/generation using the PSSE Power-Voltage (PV) analysis tool, with the IEEE 39-bus system used as the test system. Sufficient training patterns capturing different operating points (OPs) at the base case and under multiple contingencies (N-k) were gathered to train the machine learning methods to identify acceptable operating conditions and near-collapse situations. Experimental results show that AdaBoost achieved the highest classification accuracy, 96.02%, compared to the other classifiers.
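
    A minimal sketch of training and scoring an AdaBoost ensemble with the evaluation measures named above; the synthetic data merely stand in for the PV-analysis operating points, which are not reproduced here.

    ```python
    # AdaBoost classifier evaluated with precision, recall, F1-score and accuracy.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.metrics import classification_report
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=2000, n_features=30, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

    clf = AdaBoostClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    print(classification_report(y_te, clf.predict(X_te)))   # per-class precision/recall/F1
    ```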

  • Fingerprint classification using a deep convolutional neural network
    Bhavesh Pandya, Georgina Cosma, Ali A. Alani, Aboozar Taherkhani, Vinayak Bharadi, and T. M. McGinnity

    IEEE
    Biometric systems establish authenticity from users' distinct physiological or behavioral characteristics for purposes of identification and access control. These pattern recognition systems are difficult to bypass compared to traditional token- or password-based systems. This paper proposes a new deep learning architecture for fingerprint recognition. The proposed architecture comprises a pre-processing stage that extracts texture features from fingerprints using histogram equalization, Gabor enhancement, and fingerprint thinning. The pre-processed fingerprints are then input into a Deep Convolutional Neural Network classifier. The proposed approach achieved 98.21% classification accuracy with 0.9 loss. The obtained accuracy is significantly higher than the previously reported result of 77% on the same dataset.
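
    A minimal sketch of the pre-processing stages listed above using OpenCV and scikit-image; the file path, Gabor parameters, and single filter orientation are illustrative assumptions.

    ```python
    # Histogram equalization, Gabor enhancement and thinning before the CNN classifier.
    import cv2
    from skimage.morphology import thin

    img = cv2.imread("fingerprint.png", cv2.IMREAD_GRAYSCALE)  # placeholder input image

    equalized = cv2.equalizeHist(img)                          # histogram equalization

    gabor = cv2.getGaborKernel(ksize=(21, 21), sigma=4.0, theta=0.0,
                               lambd=10.0, gamma=0.5, psi=0)
    enhanced = cv2.filter2D(equalized, cv2.CV_8U, gabor)       # Gabor enhancement (one orientation)

    _, binary = cv2.threshold(enhanced, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    skeleton = thin(binary > 0)                                # ridge thinning to a skeleton

    # `skeleton` (or a stack of such maps) would then be fed to the deep CNN classifier.
    ```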

  • Arabic handwritten digit recognition based on restricted Boltzmann machine and convolutional neural networks
    Ali Alani

    MDPI AG
    Handwritten digit recognition is an open problem in computer vision and pattern recognition, and solving it has elicited increasing interest. The main challenge is the design of an efficient method that can recognize handwritten digits submitted by users via digital devices. Numerous studies have been proposed, both in the past and in recent years, to improve handwritten digit recognition in various languages, but research on Arabic handwritten digit recognition remains limited. Deep learning algorithms are currently extremely popular in computer vision and are used to address important problems such as image classification, natural language processing, and speech recognition, providing computers with sensory capabilities that approach those of humans. In this study, we propose a new approach for Arabic handwritten digit recognition using the restricted Boltzmann machine (RBM) and convolutional neural network (CNN) deep learning algorithms. The approach works in two phases. First, the RBM, a deep learning technique that can extract highly useful features from raw data and that has been used as a feature extractor in several classification problems, is applied in the feature extraction phase. The extracted features are then fed to an efficient CNN architecture with a deep supervised learning architecture for training and testing. In the experiments, we used the CMATERDB 3.3.1 Arabic handwritten digit dataset for training and testing the proposed method. Experimental results show that the proposed method significantly improves the accuracy rate, reaching 98.59%. Finally, a comparison of our results with those of other studies on the CMATERDB 3.3.1 dataset shows that our approach achieves the highest accuracy rate.
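
    A minimal sketch of the two-phase idea described above, using scikit-learn's BernoulliRBM for feature extraction; the digits dataset and the logistic-regression head are stand-ins for CMATERDB 3.3.1 and the CNN used in the paper.

    ```python
    # Phase 1: RBM feature extraction; phase 2: supervised classification on RBM features.
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import BernoulliRBM
    from sklearn.pipeline import Pipeline

    X, y = load_digits(return_X_y=True)
    X = X / 16.0                                     # scale pixels to [0, 1] for the RBM
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    pipe = Pipeline([
        ("rbm", BernoulliRBM(n_components=128, learning_rate=0.06, n_iter=20, random_state=0)),
        ("clf", LogisticRegression(max_iter=1000)),  # stand-in for the CNN classification phase
    ])
    pipe.fit(X_tr, y_tr)
    print(f"test accuracy: {pipe.score(X_te, y_te):.3f}")
    ```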

RECENT SCHOLAR PUBLICATIONS

  • QSLRS-CNN: Qur'anic sign language recognition system based on convolutional neural networks
    HA AbdElghfar, AM Ahmed, AA Alani, HM AbdElaal, B Bouallegue, ...
    The Imaging Science Journal 72 (2), 254-266 2024

  • A model for qur’anic sign language recognition based on deep learning algorithms
    HA AbdElghfar, AM Ahmed, AA Alani, HM AbdElaal, B Bouallegue, ...
    Journal of Sensors 2023 2023

  • ArSL-CNN: a convolutional neural network for Arabic sign language gesture recognition
    AA Alani, G Cosma
    Indonesian journal of electrical engineering and computer science 22 2021

  • COVID-CNNnet: Convolutional Neural Network for Coronavirus Detection
    AA Alani, AA Alani, KAMAAL Ani
    International Journal of Data Science 2 (1), 9-18 2021

  • Classifying imbalanced multi-modal sensor data for human activity recognition in a smart home using deep learning
    AA Alani, G Cosma, A Taherkhani
    2020 international joint conference on neural networks (IJCNN), 1-8 2020

  • Activity recognition from multi-modal sensor data using a deep convolutional neural network
    A Taherkhani, G Cosma, AA Alani, TM McGinnity
    Intelligent Computing: Proceedings of the 2018 Computing Conference, Volume 2019

  • A hybrid model for classification of biomedical data using feature filtering and a convolutional neural network
    S Salesi, AA Alani, G Cosma
    2018 Fifth International Conference on Social Networks Analysis, Management 2018

  • Big Data Analytics for Healthcare Organizations: A Case Study of the Iraqi Healthcare Sector
    AA Alani, FD Ahmed, AM Mazlina, A Mohd Sharifuddin
    Advanced Science Letters 24 (10), 7783-7789 2018

  • On-line voltage stability monitoring using an Ensemble AdaBoost classifier
    SS Maaji, G Cosma, A Taherkhani, AA Alani, TM McGinnity
    2018 4th International Conference on Information Management (ICIM), 253-259 2018

  • Fingerprint classification using a deep convolutional neural network
    B Pandya, G Cosma, AA Alani, A Taherkhani, V Bharadi, TM McGinnity
    2018 4th international conference on information management (ICIM), 86-91 2018

  • Hand gesture recognition using an adapted convolutional neural network with data augmentation
    AA Alani, G Cosma, A Taherkhani, TM McGinnity
    2018 4th International conference on information management (ICIM), 5-12 2018

  • Arabic Handwritten Digit Recognition Based on Restricted Boltzmann Machine and Convolutional Neural Networks
    AA Alani
    Information 8 (4), 142 2017

  • Modified AODV routing protocol to detect the black hole attack in MANET
    MSAA Mahmood, TM Hasan, MSDS Ibrahim
    International Journal of Advanced Research in Computer Science and Software 2015

  • Hiding Information Using Circular Distribution
    AA Alani
    Journal of the college of education 1 (1), 101-114 2015

  • The Way Forward For Distance Learning In Countries with Low ICT Implementation: The Iraq Case
    DMBO Ali A. Alani
    2nd National Graduate Conference 2 (2), 1-4 2014

  • A Survey on Readiness and Preference in Adopting Distance Learning: Case at University Of Technology, Iraq
    DMBO Ali A. Alani
    2nd National Graduate Conference 2 (2), 1-4 2014

  • Develop A Hybrid E-Learning Model For Distance Learning Implementation: Case Study University Of Technology In Iraq
    DMBO Ali A. Alani
    The Arab Journal of Quality in Education 1 (1), 57-71 2014

MOST CITED SCHOLAR PUBLICATIONS

  • Hand gesture recognition using an adapted convolutional neural network with data augmentation
    AA Alani, G Cosma, A Taherkhani, TM McGinnity
    2018 4th International conference on information management (ICIM), 5-12 2018
    Citations: 67

  • Arabic Handwritten Digit Recognition Based on Restricted Boltzmann Machine and Convolutional Neural Networks
    AA Alani
    Information 8 (4), 142 2017
    Citations: 58

  • Fingerprint classification using a deep convolutional neural network
    B Pandya, G Cosma, AA Alani, A Taherkhani, V Bharadi, TM McGinnity
    2018 4th international conference on information management (ICIM), 86-91 2018
    Citations: 55

  • ArSL-CNN: a convolutional neural network for Arabic sign language gesture recognition
    AA Alani, G Cosma
    Indonesian journal of electrical engineering and computer science 22 2021
    Citations: 24

  • Classifying imbalanced multi-modal sensor data for human activity recognition in a smart home using deep learning
    AA Alani, G Cosma, A Taherkhani
    2020 international joint conference on neural networks (IJCNN), 1-8 2020
    Citations: 20

  • On-line voltage stability monitoring using an Ensemble AdaBoost classifier
    SS Maaji, G Cosma, A Taherkhani, AA Alani, TM McGinnity
    2018 4th International Conference on Information Management (ICIM), 253-259 2018
    Citations: 18

  • Activity recognition from multi-modal sensor data using a deep convolutional neural network
    A Taherkhani, G Cosma, AA Alani, TM McGinnity
    Intelligent Computing: Proceedings of the 2018 Computing Conference, Volume 2019
    Citations: 11

  • Modified AODV routing protocol to detect the black hole attack in MANET
    MSAA Mahmood, TM Hasan, MSDS Ibrahim
    International Journal of Advanced Research in Computer Science and Software 2015
    Citations: 6

  • A hybrid model for classification of biomedical data using feature filtering and a convolutional neural network
    S Salesi, AA Alani, G Cosma
    2018 Fifth International Conference on Social Networks Analysis, Management 2018
    Citations: 5

  • A model for qur’anic sign language recognition based on deep learning algorithms
    HA AbdElghfar, AM Ahmed, AA Alani, HM AbdElaal, B Bouallegue, ...
    Journal of Sensors 2023 2023
    Citations: 3

  • Big Data Analytics for Healthcare Organizations: A Case Study of the Iraqi Healthcare Sector
    AA Alani, FD Ahmed, AM Mazlina, A Mohd Sharifuddin
    Advanced Science Letters 24 (10), 7783-7789 2018
    Citations: 3

  • QSLRS-CNN: Qur'anic sign language recognition system based on convolutional neural networks
    HA AbdElghfar, AM Ahmed, AA Alani, HM AbdElaal, B Bouallegue, ...
    The Imaging Science Journal 72 (2), 254-266 2024
    Citations: 2

  • Develop A Hybrid E-Learning Model For Distance Learning Implementation: Case Study University Of Technology In Iraq
    DMBO Ali A. Alani
    The Arab Journal of Quality in Education 1 (1), 57-71 2014
    Citations: 2

  • COVID-CNNnet: Convolutional Neural Network for Coronavirus Detection
    AA Alani, AA Alani, KAMAAL Ani
    International Journal of Data Science 2 (1), 9-18 2021
    Citations: 1