Juan Pablo Vasconez Hurtado

Faculty of Engineering
Universidad Andrés Bello / Research Professor

https://researchid.co/juanpavas89

Ph.D. in Electronic Engineering from the Universidad Técnica Federico Santa María, Chile. Electronics and Control Engineer from the Escuela Politécnica Nacional (EPN), Ecuador. He is currently a researcher at the Artificial Vision and Intelligence Research Laboratory of the Escuela Politécnica Nacional. His research covers robotics and artificial intelligence, human-robot interaction (HRI), human-machine interaction (HMI), assistance and collaborative systems, and algorithms for detecting non-verbal communication cues such as gestures, actions, and human cognitive parameters. He has also worked on precision agriculture using cooperative robots capable of interacting and sharing the workspace with humans. His areas of interest are robotics, machine learning, deep learning, reinforcement learning, and computer vision.

EDUCATION

Ph.D. in Electronic Engineering - Universidad Técnica Federico Santa María - Chile
Electronics and Control Engineer - Escuela Politécnica Nacional - Ecuador

RESEARCH INTERESTS

Human-robot interaction, artificial intelligence, robotics, electronics.

27 Scopus Publications

SCOPUS PUBLICATIONS

  • Deep Learning based flower detection and counting in highly populated images: A peach grove case study
    Juan Sebastian Estrada, Juan Pablo Vasconez, Longsheng Fu, and Fernando Auat Cheein

    Elsevier BV

  • Characterization and Comparison of Maximum Isometric Strength and Vertical Jump Among Novice Runners, Long Distance Runners, and Ultramarathoners
    Mailyn Calderón Díaz, Ricardo Ulloa-Jiménez, Nicole Castro Laroze, Juan Pablo Vásconez, Jairo R. Coronado-Hernández, Mónica Acuña Rodríguez, and Samir F. Umaña Ibáñez

    Springer Nature Switzerland

  • Explainable Machine Learning Techniques to Predict Muscle Injuries in Professional Soccer Players through Biomechanical Analysis
    Mailyn Calderón-Díaz, Rony Silvestre Aguirre, Juan P. Vásconez, Roberto Yáñez, Matías Roby, Marvin Querales, and Rodrigo Salas

    MDPI AG
    There is a significant risk of injury in sports and intense competition due to the demanding physical and psychological requirements. Hamstring strain injuries (HSIs) are the most prevalent type of injury among professional soccer players and the leading cause of missed days in the sport. These injuries stem from a combination of factors, making it challenging to pinpoint the most crucial risk factors and their interactions, let alone find effective prevention strategies. Recently, there has been growing recognition of the potential of tools provided by artificial intelligence (AI). However, current studies concentrate primarily on enhancing the performance of complex machine learning models, often overlooking their explanatory capabilities. Consequently, medical teams have difficulty interpreting these models and are hesitant to trust them fully. In light of this, there is an increasing need for advanced injury detection and prediction models that can aid doctors in diagnosing or detecting injuries earlier and with greater accuracy. Accordingly, this study aims to identify the biomarkers of muscle injuries in professional soccer players through biomechanical analysis, employing several ML algorithms: decision trees (DT), discriminant methods, logistic regression, naive Bayes, support vector machines (SVM), K-nearest neighbors (KNN), ensemble methods, boosted and bagged trees, artificial neural networks (ANNs), and XGBoost. In particular, XGBoost is also used to obtain the most important features. The findings highlight that the variables that most effectively differentiate the groups, and could serve as reliable predictors for injury prevention, are the maximum muscle strength of the hamstrings and the stiffness of the same muscle. With regard to the techniques employed, a precision of up to 78% was achieved with XGBoost, indicating that by considering scientific evidence, suggestions based on various data sources, and expert opinions, it is possible to attain good precision, thus enhancing the reliability of the results for doctors and trainers. Furthermore, the obtained results strongly align with the existing literature, although further sport-specific studies are necessary to draw a definitive conclusion.
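The feature-ranking idea in the abstract above can be sketched in a few lines. This is an illustrative toy, not the study's pipeline: the feature names and synthetic data are invented, and a simple correlation score stands in for a model-based importance such as XGBoost's.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 60

# Hypothetical biomechanical measurements (names and values are illustrative,
# not the study's actual protocol or data).
hamstring_strength = rng.normal(300.0, 40.0, n)   # maximum isometric strength [N]
hamstring_stiffness = rng.normal(20.0, 4.0, n)    # muscle stiffness [N/mm]
body_mass = rng.normal(75.0, 8.0, n)              # uninformative control [kg]
X = np.column_stack([hamstring_strength, hamstring_stiffness, body_mass])

# Synthetic labels: injury risk driven by low strength plus high stiffness.
y = ((hamstring_strength < np.median(hamstring_strength)) &
     (hamstring_stiffness > np.median(hamstring_stiffness))).astype(float)

def feature_importance(X, y):
    """Absolute correlation of each feature with the label: a simple,
    model-free stand-in for XGBoost's gain-based importances."""
    return np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])])

names = ["hamstring_strength", "hamstring_stiffness", "body_mass"]
imp = feature_importance(X, y)
ranking = [names[i] for i in np.argsort(imp)[::-1]]
print(ranking)  # the hamstring variables should outrank the control feature
```

With labels constructed from strength and stiffness, those two features score far above the uninformative control, mirroring the study's finding that they best differentiate the groups.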

  • A Behavior-Based Fuzzy Control System for Mobile Robot Navigation: Design and Assessment
    Juan Pablo Vásconez, Mailyn Calderón-Díaz, Inesmar C. Briceño, Jenny M. Pantoja, and Patricio J. Cruz

    Springer Nature Switzerland

  • A Deep Q-Network based hand gesture recognition system for control of robotic platforms
    Patricio J. Cruz, Juan Pablo Vásconez, Ricardo Romero, Alex Chico, Marco E. Benalcázar, Robin Álvarez, Lorena Isabel Barona López, and Ángel Leonardo Valdivieso Caraguay

    Springer Science and Business Media LLC
    Hand gesture recognition (HGR) based on electromyography signals (EMGs) and inertial measurement unit signals (IMUs) has been investigated for human-machine applications in the last few years. The information obtained from HGR systems has the potential to help control machines such as video games, vehicles, and even robots. Therefore, the key idea of an HGR system is to identify the moment in which a hand gesture was performed and its class. Several state-of-the-art human-machine approaches use supervised machine learning (ML) techniques for the HGR system. However, the use of reinforcement learning (RL) approaches to build HGR systems for human-machine interfaces is still an open problem. This work presents an RL approach to classify EMG-IMU signals obtained using a Myo Armband sensor. For this, we create an agent based on the Deep Q-learning algorithm (DQN) to learn a policy from online experiences to classify EMG-IMU signals. The proposed HGR system reaches accuracies of up to 97.45 ± 1.02% and 88.05 ± 3.10% for classification and recognition, respectively, with an average inference time per window observation of 20 ms, and we also demonstrate that our method outperforms other approaches in the literature. Then, we test the HGR system to control two different robotic platforms. The first is a three-degree-of-freedom (DOF) tandem helicopter test bench, and the second is a virtual six-DOF UR5 robot. We employ the designed HGR system and the inertial measurement unit (IMU) integrated into the Myo sensor to command and control the motion of both platforms. The movement of the helicopter test bench and the UR5 robot is controlled under a PID controller scheme. Experimental results show the effectiveness of using the proposed DQN-based HGR system for controlling both platforms with a fast and accurate response.

  • A comparison of EMG-based hand gesture recognition systems based on supervised and reinforcement learning
    Juan Pablo Vásconez, Lorena Isabel Barona López, Ángel Leonardo Valdivieso Caraguay, and Marco E. Benalcázar

    Elsevier BV

  • Recognition of Hand Gestures Based on EMG Signals with Deep and Double-Deep Q-Networks
    Ángel Leonardo Valdivieso Caraguay, Juan Pablo Vásconez, Lorena Isabel Barona López, and Marco E. Benalcázar

    MDPI AG
    In recent years, hand gesture recognition (HGR) technologies that use electromyography (EMG) signals have been of considerable interest in developing human–machine interfaces. Most state-of-the-art HGR approaches are based mainly on supervised machine learning (ML). However, the use of reinforcement learning (RL) techniques to classify EMGs is still a new and open research topic. Methods based on RL have some advantages such as promising classification performance and online learning from the user’s experience. In this work, we propose a user-specific HGR system based on an RL-based agent that learns to characterize EMG signals from five different hand gestures using Deep Q-network (DQN) and Double-Deep Q-Network (Double-DQN) algorithms. Both methods use a feed-forward artificial neural network (ANN) for the representation of the agent policy. We also performed additional tests by adding a long–short-term memory (LSTM) layer to the ANN to analyze and compare its performance. We performed experiments using training, validation, and test sets from our public dataset, EMG-EPN-612. The final accuracy results demonstrate that the best model was DQN without LSTM, obtaining classification and recognition accuracies of up to 90.37%±10.7% and 82.52%±10.9%, respectively. The results obtained in this work demonstrate that RL methods such as DQN and Double-DQN can obtain promising results for classification and recognition problems based on EMG signals.
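The classification-as-RL formulation summarized in the abstract above can be sketched compactly: the agent observes a feature window (state), picks a gesture label (action), and receives a reward for a correct label. This is a hedged toy version, not the paper's implementation — the "EMG features" are synthetic Gaussian prototypes, and a linear Q-function stands in for the feed-forward ANN.

```python
import numpy as np

rng = np.random.default_rng(1)
n_features, n_gestures = 8, 5

# Synthetic stand-in for EMG feature windows: each gesture class has a
# characteristic prototype vector plus noise (not real sensor data).
prototypes = rng.normal(0.0, 1.0, (n_gestures, n_features))

def sample():
    g = int(rng.integers(n_gestures))
    return prototypes[g] + rng.normal(0.0, 0.3, n_features), g

# Q(s, a) = W[a] @ s — a linear stand-in for the paper's feed-forward ANN.
W = np.zeros((n_gestures, n_features))
alpha, eps = 0.05, 0.1  # learning rate and epsilon-greedy exploration

for _ in range(5000):
    s, g = sample()                     # state: one feature window
    q = W @ s
    a = int(rng.integers(n_gestures)) if rng.random() < eps else int(np.argmax(q))
    r = 1.0 if a == g else -1.0         # reward: correct label or not
    # One-step episodes, so the TD target is just the reward (no bootstrap).
    W[a] += alpha * (r - q[a]) * s

# Greedy evaluation on fresh samples.
hits = sum(int(np.argmax(W @ s) == g) for s, g in (sample() for _ in range(500)))
print(hits / 500)
```

Because each episode is a single classification step, the update reduces to regressing Q(s, a) toward the reward; the full DQN adds an experience replay buffer, a target network, and a bootstrapped multi-step target.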

  • Comparison of path planning methods for robot navigation in simulated agricultural environments
    Juan P. Vásconez, Fernando Basoalto, Inesmar C. Briceño, Jenny M. Pantoja, Roberto A. Larenas, Jhon H. Rios, and Felipe A. Castro

    Elsevier BV

  • Ensure the generation and processing of inventory transactions through a web application for ground freight transportation equipment
    Juan P. Vasconez, Jenny M. Pantoja, Roberto A. Larenas, Jhon H. Rios, Hector G. Puente, Eduardo A. Mendez, Yulineth Gómez-Charris, and Inesmar C. Briceño

    Elsevier BV

  • Proposal for the Recovery of Waste Cold Energy in Liquefied Natural Gas Satellite Regasification Plants
    Harlan J. Simonetti, Jenny M. Pantoja, Mónica Matamala, Juan P. Vasconez, Inesmar C. Briceño, Roberto A. Larenas, and Dionicio Neira-Rodado

    Elsevier BV

  • Hand Gesture Recognition Using EMG-IMU Signals and Deep Q-Networks
    Juan Pablo Vásconez, Lorena Isabel Barona López, Ángel Leonardo Valdivieso Caraguay, and Marco E. Benalcázar

    MDPI AG
    Hand gesture recognition (HGR) systems based on electromyography signals (EMGs) and inertial measurement unit signals (IMUs) have been studied for different applications in recent years. Most commonly, cutting-edge HGR methods are based on supervised machine learning. However, reinforcement learning (RL) techniques have shown promise as a viable option for classifying EMGs. Methods based on RL have several advantages, such as promising classification performance and online learning from experience. In this work, we developed an HGR system made up of the following stages: pre-processing, feature extraction, classification, and post-processing. For the classification stage, we built an RL-based agent capable of learning to classify and recognize eleven hand gestures (five static and six dynamic) using a deep Q-network (DQN) algorithm based on EMG and IMU information. The proposed system uses a feed-forward artificial neural network (ANN) for the representation of the agent policy. We carried out the same experiments with two different types of sensors, the Myo armband sensor and the G-force sensor, to compare their performance. We performed experiments using training, validation, and test set distributions, and the results were evaluated for user-specific HGR models. The final accuracy results demonstrated that the best model reached up to 97.50% ± 1.13% and 88.15% ± 2.84% for classification and recognition of static gestures, respectively, and 98.95% ± 0.62% and 90.47% ± 4.57% for classification and recognition of dynamic gestures with the Myo armband sensor. The results obtained in this work demonstrated that RL methods such as the DQN are capable of learning a policy from online experience to classify and recognize static and dynamic gestures using EMG and IMU signals.


  • Hand Gesture and Arm Movement Recognition for Multimodal Control of a 3-DOF Helicopter
    Ricardo Romero, Patricio Cruz, J. P. Vasconez, Marco E. Benalcázar, Robin Álvarez, Lorena Isabel Barona López and Ángel Leonardo Valdivieso Caraguay



  • A New Methodology for Pattern Recognition Applied to Hand Gestures Recognition Using EMG. Analysis of Intrapersonal and Interpersonal Variability
    Javier Alejandro Ordóñez Flores, Robin Gerardo Alvarez Rueda, Marco E. Benalcázar, Lorena Isabel Barona López, Ángel Leonardo Valdivieso Caraguay, Patricio Cruz and J. P. Vasconez


    Pattern recognition systems are usually divided into four stages: signal acquisition, preprocessing, feature extraction, and classification. However, the choice of algorithms in the last three stages is often not justified, and researchers select them with no criterion other than the result achieved at the end of the process. In this paper we propose a new methodology and show its application to the recognition of five hand gestures from 8 channels of electromyography using the Myo armband device placed on the forearm. If n features are extracted, they form clusters of points in n-dimensional space, and the best preprocessing algorithms and features are then selected by maximizing the distance among clusters. In addition, both intrapersonal and interpersonal variability are treated to facilitate understanding of the phenomenon. As a demonstration, the method was applied to 12 people and achieved a recognition accuracy of 97%.

  • Hand Gesture Recognition and Tracking Control for a Virtual UR5 Robot Manipulator
    Alex Chico, Patricio J. Cruz, Juan Pablo Vasconez, Marco E. Benalcazar, Robin Alvarez, Lorena Barona, and Angel Leonardo Valdivieso

    IEEE
    Human-machine interfaces (HMIs) have received increased attention during the past decades, especially due to the use of robots in many fields of industry. In this paper, we employ a hand gesture recognition (HGR) system and the inertial measurement unit (IMU) integrated in the Myo Armband sensor as an HMI to control the position and orientation of a virtual six-degree-of-freedom (DoF) UR5 robot. As part of the HMI, this work also focuses on solving the trajectory tracking problem for the UR5 robot using two different control strategies: a minimum-norm PID and a controller based on linear algebra. Both controllers are designed from the UR5 kinematic model. Finally, we test and compare their performance by computing the IAE performance criterion and the position errors obtained from two experiments: tracking pre-defined input trajectories and commanding the virtual robot through the HGR system plus the Myo Armband's IMU signals.


  • A Hand Gesture Recognition System Using EMG and Reinforcement Learning: A Q-Learning Approach
    Juan Pablo Vásconez, Lorena Isabel Barona López, Ángel Leonardo Valdivieso Caraguay, Patricio J. Cruz, Robin Álvarez, and Marco E. Benalcázar

    Springer International Publishing

  • A fuzzy-based driver assistance system using human cognitive parameters and driving style information
    Juan Pablo Vasconez, Michelle Viscaino, Leonardo Guevara, and Fernando Auat Cheein

    Elsevier BV

  • An Energy-Based Method for Orientation Correction of EMG Bracelet Sensors in Hand Gesture Recognition Systems
    Lorena Isabel Barona López, Ángel Leonardo Valdivieso Caraguay, Victor H. Vimos, Jonathan A. Zea, Juan P. Vásconez, Marcelo Álvarez, and Marco E. Benalcázar

    MDPI AG
    Hand gesture recognition (HGR) systems using electromyography (EMG) bracelet-type sensors are currently largely used over other HGR technologies. However, bracelets are susceptible to electrode rotation, causing a decrease in HGR performance. In this work, HGR systems with an algorithm for orientation correction are proposed. The proposed orientation correction method is based on the computation of the maximum energy channel using a synchronization gesture. Then, the channels of the EMG are rearranged in a new sequence which starts with the maximum energy channel. This new sequence of channels is used for both training and testing. After the EMG channels are rearranged, this signal passes through the following stages: pre-processing, feature extraction, classification, and post-processing. We implemented user-specific and user-general HGR models based on a common architecture which is robust to rotations of the EMG bracelet. Four experiments were performed, taking into account two different metrics which are the classification and recognition accuracy for both models implemented in this work, where each model was evaluated with and without rotation of the bracelet. The classification accuracy measures how well a model predicted which gesture is contained somewhere in a given EMG, whereas recognition accuracy measures how well a model predicted when it occurred, how long it lasted, and which gesture is contained in a given EMG. The results of the experiments (without and with orientation correction) executed show an increase in performance from 44.5% to 81.2% for classification and from 43.3% to 81.3% for recognition in user-general models, while in user-specific models, the results show an increase in performance from 39.8% to 94.9% for classification and from 38.8% to 94.2% for recognition. The results obtained in this work evidence that the proposed method for orientation correction makes the performance of an HGR robust to rotations of the EMG bracelet.
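The orientation-correction step described above reduces, in essence, to finding the maximum-energy channel during a synchronization gesture and rotating the channel order to start there. A minimal sketch under simplifying assumptions (a synthetic 8-channel window, with channel energy taken as the sum of squared samples):

```python
import numpy as np

rng = np.random.default_rng(2)
n_channels, n_samples = 8, 200

# Synthetic synchronization-gesture window: low-level noise on every
# channel, with the strongest muscle activity landing on channel 5
# (i.e., the bracelet is "rotated" by five positions).
emg = rng.normal(0.0, 0.05, (n_channels, n_samples))
emg[5] += rng.normal(0.0, 0.5, n_samples)

def orientation_corrected(window):
    """Rearrange channels so the sequence starts at the max-energy channel."""
    energy = np.sum(window ** 2, axis=1)      # per-channel signal energy
    start = int(np.argmax(energy))            # channel facing the active muscle
    return np.roll(window, -start, axis=0)    # rotate the channel order

corrected = orientation_corrected(emg)
# After correction the maximum-energy channel is always first, no matter
# how the bracelet was rotated on the forearm.
print(int(np.argmax(np.sum(corrected ** 2, axis=1))))  # → 0
```

Applying the same rearrangement to both training and test windows is what makes the downstream classifier insensitive to bracelet rotation.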

  • Comparison of convolutional neural networks in fruit detection and counting: A comprehensive evaluation
    J.P. Vasconez, J. Delpiano, S. Vougioukas, and F. Auat Cheein

    Elsevier BV

  • On the design of a human–robot interaction strategy for commercial vehicle driving based on human cognitive parameters
    Juan Pablo Vasconez, Diego Carvajal, and Fernando Auat Cheein

    SAGE Publications
    A proper design of human–robot interaction strategies based on human cognitive factors can help compensate for human limitations for safety purposes. This work focuses on the development of a human–robot interaction system for commercial vehicle (Renault Twizy) driving that uses driver cognitive parameters to improve driver safety during day and night tasks. To achieve this, eye blink behavior is detected using a convolutional neural network capable of operating under variable illumination conditions with an infrared camera. Percentage-of-eye-closure values along with blink frequency are used to infer the driver's sleepiness level. The algorithm is validated with experimental tests on subjects under different sleep-quality conditions. Additional cognitive parameters are also analyzed for the human–robot interaction system, such as driver sleep quality, distraction level, stress level, and the effects of not wearing glasses. Based on such driver cognitive state parameters, a human–robot interaction strategy is proposed to limit the speed of a Renault Twizy vehicle by acting on its acceleration and braking systems. The proposed human–robot interaction strategy can increase safety during driving tasks for both users and pedestrians.

  • Human–robot interaction in agriculture: A survey and current challenges
    Juan P. Vasconez, George A. Kantor, and Fernando A. Auat Cheein

    Elsevier BV

  • Social robot navigation based on HRI non-verbal communication: A case study on avocado harvesting
    J. P. Vasconez, Leonardo Guevara and F. A. Cheeín


    To date, robotic applications in agriculture remain a challenging topic, studied mainly for large farms. Groves in particular require tasks, such as picking and handling, that still demand human labor. In countries such as Chile and Peru, avocado is one of the main fruit products, but it grows in complex environments, making it difficult to fully automate the harvesting process. In this scenario, human-robot interaction (HRI) strategies can provide solutions to enhance the farming process. In this work, we propose an HRI strategy based on three visual non-verbal communication methods, with the aim of improving the avocado harvesting process and potentially decreasing human workload. Using such HRI directives, a robot motion controller is implemented for the robotic service unit to ensure that the interaction is socially acceptable during the avocado transportation task. The robot's social navigation is tested in a simulated environment where the robot interacts with field workers in three control tasks: approaching, following, and avoiding the human.

  • Toward semantic action recognition for avocado harvesting process based on single shot multibox detector
    Juan Pablo Vasconez, Jaime Salvo, and Fernando Auat

    IEEE
    To date, human action recognition is still a challenging topic and has been addressed from many perspectives. Detection of human actions can provide relevant information for improving complex processes, as is the case in agricultural applications. In this work, the detection of objects that can inform human action recognition based on semantic representations is studied. For this purpose, a convolutional neural network based on the Single Shot MultiBox Detector meta-architecture and the MobileNet feature extractor was implemented and trained to detect nine classes of objects during the process of collecting avocados on a Chilean farm. We found that the detected objects relate to seven possible actions during the avocado harvesting process. Such information could allow certain actions to be detected directly in still images, or improve conventional action detection methods during harvesting. The results show that it is possible to detect human actions during the process, with action recognition performance from 41% to 80% depending on the task. This approach can help identify how to improve the harvesting process and reduce human workload in the near future, which may be an important contribution to the search for sustainable agricultural practices.

RECENT SCHOLAR PUBLICATIONS