@iau.edu.sa
College of Arts/Department of Arabic Language/Associate Professor in Literary and Critical Studies/Children’s Literature
Imam Abdulrahman Bin Faisal University: Dammam
Faculty member at Imam Abdulrahman Bin Faisal University / College of Arts / Department of Arabic Language / Associate Professor in Literary and Critical Studies / specialising in children’s literature
General Arts and Humanities, Arts and Humanities, Literature and Literary Theory, Language and Linguistics
Mona Alghamdi, Plamen Angelov, and Lopez Pellicer Alvaro
Elsevier BV
Mona Alghamdi
IEEE
This paper presents a multimodal biometric approach for person identification that uses the fingernails and knuckle creases of all five human fingers. The proposed technique consists of several phases: detection and localisation of the main components of the hand, definition of the region of interest (ROI), segmentation, feature extraction by retraining the DenseNet201 model, similarity measurement using different metrics, and, lastly, improvement of identification performance through score-level fusion. The approach offers several identification methods, combining fingernails and knuckles based on modality type, and whole hands based on different similarity metrics. Various similarity metrics are used to distinguish between individuals, including the Bray-Curtis, Cosine, and Euclidean metrics. Two main score-level fusion techniques are employed: majority voting (MV) and the weighted average (WA). Experimental results on two well-known databases, ’11k Hands’ and the Hong Kong Polytechnic University Contactless Hand Dorsal Images (’PolyU’), demonstrate the proposed algorithm’s efficiency. Using MV on the Bray-Curtis similarity measure, the fingernail-based and base-knuckle-based fusion achieved 100% identification accuracy. Moreover, the 100% identification rate achieved on hand regions and whole hands from the two datasets exceeded the performance of state-of-the-art approaches.
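As an illustrative sketch only (not the authors’ implementation), the majority-voting (MV) fusion over per-component Bray-Curtis matches described above might look like the following, with toy two-dimensional vectors standing in for DenseNet201 features and hypothetical component names ("nail", "base", "major"):

```python
import numpy as np
from collections import Counter

def braycurtis_dist(u, v):
    # Bray-Curtis dissimilarity: sum|u_i - v_i| / sum|u_i + v_i|
    return np.abs(u - v).sum() / np.abs(u + v).sum()

def identify(query_parts, gallery):
    """Identify a person by matching each hand component independently,
    then fusing the per-component decisions by majority voting (MV)."""
    votes = []
    for comp, q in query_parts.items():
        # nearest gallery identity for this component
        best = min(gallery, key=lambda pid: braycurtis_dist(q, gallery[pid][comp]))
        votes.append(best)
    return Counter(votes).most_common(1)[0][0]

# Toy gallery of two identities with three hand components each.
gallery = {
    "A": {"nail": np.array([1.0, 0.0]),
          "base": np.array([0.9, 0.1]),
          "major": np.array([1.0, 0.2])},
    "B": {"nail": np.array([0.0, 1.0]),
          "base": np.array([0.1, 0.9]),
          "major": np.array([0.2, 1.0])},
}
query = {"nail": np.array([0.95, 0.05]),
         "base": np.array([0.8, 0.2]),
         "major": np.array([0.3, 0.9])}
print(identify(query, gallery))  # two of the three components vote for "A"
```

A weighted-average (WA) fusion would instead combine the similarity scores themselves, weighting the more reliable components more heavily before selecting the closest identity.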
Mona Alghamdi, Plamen Angelov, and Bryan Williams
IEEE
Hand images are of paramount importance within critical domains such as security and criminal investigation, and can sometimes be the only available evidence of an offender's identity at a crime scene. Approaches to person identification that treat the human hand as a complex object composed of many components are rare. The approach proposed in this paper fills this gap, making use of knuckle creases and fingernail information. It introduces a framework for automatic person identification that includes localisation of the regions of interest within hand images, recognition of the detected components, segmentation of the regions of interest using bounding boxes, and similarity matching between a query image and a library of available images. The following hand components are considered: i) the metacarpophalangeal joint, commonly known as the base knuckle; ii) the proximal interphalangeal joint, commonly known as the major knuckle; iii) the distal interphalangeal joint, commonly known as the minor knuckle; iv) the interphalangeal joint, commonly known as the thumb's knuckle; and v) the fingernails. A key element of the proposed framework is similarity matching, in which feature extraction plays an important role. In this paper, we exploit end-to-end deep convolutional neural networks to extract discriminative high-level abstract features, and use the Bray-Curtis (BC) similarity for the matching process. We validated the proposed approach on two well-known benchmarks, the ‘11k Hands' dataset and the Hong Kong Polytechnic University Contactless Hand Dorsal Images (‘PolyU HD’). The results indicate that knuckle patterns and fingernails play a significant role in person identification, and that on the 11k Hands dataset the left hand yields better results than the right hand.
In both datasets, the fingernails produced consistently higher identification results than other hand components, with a rank-1 score of 93.65% on the ring finger of the left hand for the ‘11k Hands' dataset and a rank-1 score of 93.81% for the thumb from the ‘PolyU HD’ dataset.
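A minimal sketch of how a rank-1 identification rate under Bray-Curtis matching could be computed, assuming toy feature vectors in place of the CNN features (this is not the authors' code):

```python
import numpy as np

def braycurtis(u, v):
    # Bray-Curtis (BC) dissimilarity between two non-negative feature vectors
    return np.abs(u - v).sum() / np.abs(u + v).sum()

def rank1_rate(query_feats, query_labels, gallery_feats, gallery_labels):
    """Fraction of queries whose nearest gallery entry (smallest BC
    dissimilarity) carries the correct identity label."""
    hits = 0
    for q, y in zip(query_feats, query_labels):
        d = [braycurtis(q, g) for g in gallery_feats]
        if gallery_labels[int(np.argmin(d))] == y:
            hits += 1
    return hits / len(query_feats)

# Toy 3-D features for three enrolled identities and two queries.
gallery_feats = [np.array([1.0, 0.0, 0.0]),
                 np.array([0.0, 1.0, 0.0]),
                 np.array([0.0, 0.0, 1.0])]
gallery_labels = ["A", "B", "C"]
query_feats = [np.array([0.9, 0.1, 0.0]), np.array([0.1, 0.8, 0.1])]
query_labels = ["A", "B"]
print(rank1_rate(query_feats, query_labels, gallery_feats, gallery_labels))  # 1.0
```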
Mona Alghamdi, Plamen Angelov, Raul Gimenez, Mariana Rufino, and Eduardo Soares
IEEE
Machine learning has risen alongside advanced data analytics. Many factors influence crop yield, such as soil, amount of water, climate, and genotype. Determining the factors that most influence yield and identifying the most appropriate predictive methods are important in yield management, and it is critical to study combinations of different crop factors and their impact on the yield. The objectives of this paper are: (1) to use advanced data analytic techniques to precisely predict soybean crop yields; (2) to identify the most influential features that impact soybean predictions; (3) to illustrate the ability of self-organizing, self-learning, data-driven Fuzzy Rule-Based (FRB) sub-systems, using the recently developed Autonomous Learning Multiple-Model First-order (ALMMo-1) system; and (4) to compare the performance with other well-known methods. The ALMMo-1 system is a transparent model that stakeholders can easily read and interpret. The model is data-driven and composed of prototypes selected from the actual data. Many factors affect the yield, and data clouds can be formed in the feature/data space based on data density. The data clouds form the IF part of the FRB sub-systems, while the THEN part (the consequence of the IF condition) expresses the yield prediction as a linear regression model over the yield features or factors. In addition, the model can determine the most influential features of the yield prediction online. The model shows excellent prediction accuracy, with a Root Mean Square Error (RMSE) of 0.0883 and a Non-Dimensional Error Index (NDEI) of 0.0611, which is competitive with state-of-the-art methods.
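The two reported error measures can be sketched as follows. RMSE is standard; NDEI is commonly defined as RMSE normalised by the standard deviation of the observed targets (an assumption here, since the paper's exact definition is not quoted), and the yield values below are hypothetical:

```python
import numpy as np

def rmse(y_true, y_pred):
    # Root Mean Square Error
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def ndei(y_true, y_pred):
    # Non-Dimensional Error Index: RMSE normalised by the standard
    # deviation of the observed targets
    return rmse(y_true, y_pred) / float(np.std(y_true))

# Hypothetical yield observations vs. model predictions
y_true = [3.0, 2.5, 4.0, 3.5]
y_pred = [3.1, 2.4, 3.9, 3.6]
print(round(rmse(y_true, y_pred), 3))  # 0.1
print(round(ndei(y_true, y_pred), 3))  # 0.179
```

Being non-dimensional, the NDEI allows error comparisons across datasets whose targets have different scales.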