Verified email at bauc14.edu.iq
Bilad Alrafidain University
Electrical and Electronic Engineering
Scopus Publications
Mosab A. Hassan, Alaa H. Ali, and Atheer A. Sabri
EDP Sciences
The advancement of assistive communication technology for the deaf and hard-of-hearing community is an area of significant research interest. In this study, we present a Convolutional Neural Network (CNN) model tailored for the recognition of Arabic Sign Language (ArSL). Our model incorporates a meticulous preprocessing pipeline that transforms input images through grayscale conversion, Gaussian blur, histogram equalization, and resizing to standardize input data and enhance feature visibility. Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) are employed for feature extraction to retain critical discriminative information while reducing dimensionality. The proposed CNN architecture leverages a blend of one-dimensional convolutional layers, max pooling, Leaky ReLU activation functions, and Long Short-Term Memory (LSTM) layers to efficiently capture both spatial and temporal patterns within the data. Our experiments on two separate datasets, one consisting of images and the other of videos, demonstrate exceptional recognition rates of 99.7% and 99.9%, respectively. These results significantly surpass the performance of existing models referenced in the literature. This paper discusses the methodologies, architectural considerations, and the training approach of the proposed model, alongside a comparative analysis of its performance against previous studies. The research outcomes suggest that our model not only sets a new benchmark in sign language recognition but also offers a promising foundation for the development of real-time, assistive sign language translation tools. The potential applications of such technology could greatly enhance communication accessibility, fostering greater inclusion for individuals who rely on sign language as their primary mode of communication. Future work will aim to expand the model's capabilities to more diverse datasets and investigate its deployment in practical, everyday scenarios to bridge the communication gap for the deaf and hard-of-hearing community.
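A minimal sketch of the preprocessing pipeline and the Conv1D/LSTM stack described in this abstract, in Python with OpenCV and Keras. The input resolution, kernel sizes, filter counts, and class count are illustrative assumptions, not values taken from the paper:

```python
import cv2
import numpy as np
from tensorflow.keras import layers, models

IMG_SIZE = 64      # assumed input resolution; not stated in the abstract
NUM_CLASSES = 32   # illustrative, roughly the size of the ArSL alphabet

def preprocess(image_bgr: np.ndarray) -> np.ndarray:
    """Grayscale -> Gaussian blur -> histogram equalization -> resize, per the abstract."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)   # kernel size is an assumption
    equalized = cv2.equalizeHist(blurred)
    resized = cv2.resize(equalized, (IMG_SIZE, IMG_SIZE))
    return resized.astype(np.float32) / 255.0

def build_model(feature_dim: int) -> models.Model:
    """Conv1D + max pooling + Leaky ReLU + LSTM, mirroring the described blend.

    `feature_dim` is the length of the (e.g. PCA/LDA-reduced) feature vector;
    all layer widths below are placeholders.
    """
    model = models.Sequential([
        layers.Input(shape=(feature_dim, 1)),
        layers.Conv1D(64, kernel_size=3, padding="same"),
        layers.LeakyReLU(),
        layers.MaxPooling1D(pool_size=2),
        layers.Conv1D(128, kernel_size=3, padding="same"),
        layers.LeakyReLU(),
        layers.MaxPooling1D(pool_size=2),
        layers.LSTM(128),   # captures sequential/temporal structure in the features
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```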
Mosab A. Hassan, Alaa H. Ali, and Atheer A. Sabri
Walter de Gruyter GmbH
This study explores the field of sign language recognition through machine learning, focusing on the development and comparative evaluation of various algorithms designed to interpret sign language. With the prevalence of hearing impairment affecting millions globally, efficient sign language recognition systems are increasingly critical for enhancing communication for the deaf and hard-of-hearing community. We review several studies, showcasing algorithms with accuracies ranging from 63.5% to 99.6%. Building on these works, we introduce a novel algorithm that has been rigorously tested and has demonstrated an accuracy of 99.7%. Our proposed algorithm utilizes a sophisticated convolutional neural network architecture that outperforms existing models. This work details the methodology of the proposed system, which includes preprocessing, feature extraction, and a multi-layered CNN approach. The remarkable performance of our algorithm sets a new benchmark in the field and suggests significant potential for real-world application in assistive technologies. We conclude by discussing the impact of these findings and propose directions for future research to further improve the accessibility and effectiveness of sign language recognition systems.
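As a rough illustration of the feature-extraction stage mentioned above, the following sketch chains PCA and LDA using scikit-learn. The component counts, sample data, and pipeline structure are assumptions for demonstration, not details from the paper:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

# X: flattened preprocessed images, shape (n_samples, 64 * 64);
# y: integer class labels. Both are random placeholders for a real ArSL dataset.
rng = np.random.default_rng(0)
X = rng.random((200, 64 * 64)).astype(np.float32)
y = rng.integers(0, 10, size=200)

# PCA first reduces raw dimensionality; LDA then projects onto
# class-discriminative axes (at most n_classes - 1 components).
# Both component counts here are illustrative.
features = make_pipeline(
    PCA(n_components=50),
    LinearDiscriminantAnalysis(n_components=9),
).fit_transform(X, y)

print(features.shape)  # (200, 9) -> fed to the CNN as a 1-D feature vector
```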