Hanyang University
Redhwan Algabri received the B.Eng., M.Eng., and Ph.D. degrees in mechanical engineering in 2011, 2015, and 2022 from Al-Baath University, Syria; Cairo University, Egypt; and Sungkyunkwan University, South Korea, respectively. He is currently working as a researcher at Hanyang University in South Korea. His research interests lie in machine and deep learning and intelligent robotics, including person tracking and identification for mobile robots.
Redhwan Algabri, Hyunsoo Shin, and Sungon Lee
Elsevier BV
Ahmed Abdu, Zhengjun Zhai, Hakim A. Abdo, and Redhwan Algabri
Institute of Electrical and Electronics Engineers (IEEE)
Ahmed Abdu, Zhengjun Zhai, Hakim A. Abdo, Redhwan Algabri, and Sungon Lee
Computers, Materials and Continua (Tech Science Press)
Redhwan Algabri and Mun-Taek Choi
MDPI AG
It is challenging for a mobile robot to follow a specific target person in a dynamic environment containing people wearing similarly colored clothes and of the same or similar height. This study describes a novel framework for a person identification model that identifies a target person by merging multiple features into a single joint feature online. The proposed framework exploits the deep learning output to extract four features for tracking the target person without prior knowledge, making it generalizable and more robust. A modified intersection over union (IoU) between the current frame and the last frame is proposed as a feature to distinguish people, in addition to color, height, and location. To improve the performance of target identification in a dynamic environment, an online boosting method was adapted by continuously updating the features in every frame. Extensive real-life experiments demonstrated the effectiveness of the proposed method, with results showing that it outperformed previous methods.
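As a rough illustration (not the paper's implementation), the plain intersection over union that the modified feature builds on can be sketched in a few lines; the function name and the `(x1, y1, x2, y2)` box format are assumptions for the example:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Coordinates of the overlap rectangle.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

A high IoU between a detection in the current frame and the target's box in the previous frame suggests they belong to the same person; the paper's modification and the remaining color, height, and location features are then combined into the joint identifier.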
Ahmed Abdu, Zhengjun Zhai, Redhwan Algabri, Hakim A. Abdo, Kotiba Hamad, and Mugahed A. Al-antari
MDPI AG
Software defect prediction (SDP) methodology can enhance software reliability by predicting suspicious defects in its source code. However, developing defect prediction models is a difficult task, as has been demonstrated recently. Several research techniques have been proposed over time to predict source code defects. However, most previous studies focus on conventional feature extraction and modeling. Such traditional methodologies often fail to capture the contextual information of source code files, which is necessary for building reliable deep learning prediction models. Alternatively, semantic feature strategies for defect prediction have recently evolved and developed. Such strategies can automatically extract contextual information from source code files and use it to directly predict suspicious defects. In this study, a comprehensive survey is conducted to systematically present recent software defect prediction techniques based on the source code's key features. The most recent studies on this topic are critically reviewed by analyzing semantic feature methods based on source code, the domain's critical problems and challenges are described, and recent and current progress in the domain is discussed. Such a comprehensive survey can enable research communities to identify current challenges and future research directions. An in-depth literature review of 283 articles on software defect prediction and related work was performed, of which 90 are referenced.
Redhwan Algabri and Mun-Taek Choi
MDPI AG
The ability to predict a person's trajectory and recover a target person when the target moves out of the field of view of the robot's camera is an important requirement for mobile robots designed to follow a specific person in the workspace. This paper extends an online learning framework for trajectory prediction and recovery, integrated with a deep learning-based person-following system. The proposed framework first detects and tracks persons in real time using the single-shot multibox detector (SSD) deep neural network. It then estimates the real-world positions of the persons using a point cloud and identifies the target person to be followed by extracting the clothes color using the hue-saturation-value (HSV) model. The framework allows the robot to learn the target trajectory prediction online from the historical path of the target person. The global and local path planners create robot trajectories that follow the target while avoiding static and dynamic obstacles, all elaborately designed in the state machine control. We conducted intensive experiments in a realistic environment with multiple people and sharp corners behind which the target person may quickly disappear. The experimental results demonstrated the effectiveness and practicability of the proposed framework in the given environment.
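For illustration only, a toy stand-in for the trajectory-prediction step (simple linear extrapolation from the last two observed positions; the actual framework learns the prediction online from the historical path) might look like:

```python
def predict_next(positions, horizon=1):
    """Linearly extrapolate the next (x, y) position from the two most
    recent observations. A toy sketch, not the learned predictor."""
    if not positions:
        return None
    if len(positions) < 2:
        # Not enough history: assume the person has not moved.
        return positions[-1]
    (x0, y0), (x1, y1) = positions[-2], positions[-1]
    # Step forward along the last observed velocity vector.
    return (x1 + horizon * (x1 - x0), y1 + horizon * (y1 - y0))
```

Such a predicted position gives the recovery behavior a search region to steer toward when the target disappears behind a sharp corner.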
Redhwan Algabri and Mun-Taek Choi
IEEE
Tracking a specific person in environments with non-uniform illumination is a difficult task for mobile robots. Image information such as color is essential to identify a target person. However, this information is not reliable under severe illumination changes unless the system can accommodate these changes over time. In this paper, we propose a robust identifier combined with a deep learning technique to accommodate varying illumination in the ambient lighting of a scene. Moreover, an enhanced online update strategy for the person identification model is used to deal with the challenge of drift as the target person's appearance changes during tracking. Using the proposed method, the system achieves a successful tracking rate above 90% on real-world video sequences in which variations in illumination are dominant. We confirmed the effectiveness of the proposed method through target-following experiments using five different clothing colors in a real indoor environment where the lighting conditions change drastically.
Redhwan Algabri and Mun-Taek Choi
MDPI AG
Human following is one of the fundamental functions in human–robot interaction for mobile robots. This paper presents a novel framework with state-machine control in which the robot tracks the target person under occlusion and illumination changes, and navigates with obstacle avoidance while following the target to the destination. People are detected and tracked using a deep learning algorithm, the Single Shot MultiBox Detector (SSD), and the target person is identified by extracting the color feature using the hue-saturation-value (HSV) histogram. The robot follows the target safely to the destination using a simultaneous localization and mapping (SLAM) algorithm with a LIDAR sensor for obstacle avoidance. We performed intensive experiments on our human-following approach in an indoor environment with multiple people and moderate illumination changes. The experimental results indicated that the robot followed the target well to the destination, showing the effectiveness and practicability of the proposed system in the given environment.
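A minimal sketch of the clothes-color feature described above, built with the standard-library `colorsys` module rather than the paper's actual pipeline (the function name and the 8-bin hue histogram are assumptions for the example):

```python
import colorsys

def hue_histogram(pixels, bins=8):
    """Normalized hue histogram from a list of RGB pixels (0..255 each).
    A simplified stand-in for the HSV color feature of the target's clothes."""
    hist = [0.0] * bins
    for r, g, b in pixels:
        # colorsys works on 0..1 channels and returns hue in 0..1.
        h, _, _ = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
        hist[min(int(h * bins), bins - 1)] += 1
    total = sum(hist)
    return [c / total for c in hist] if total else hist
```

Comparing the histogram of a candidate's torso region against the stored target histogram (e.g., by histogram intersection) is one simple way such a color feature can discriminate the target from other people.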