Faculty of Engineering
Universidad Andrés Bello / Research Professor
Ph.D. in Electronic Engineering from the Universidad Técnica Federico Santa María - Chile. Electronics and Control Engineer from the Escuela Politécnica Nacional (EPN) - Ecuador. He is currently a researcher at the Artificial Vision and Intelligence Research Laboratory of the Escuela Politécnica Nacional. His research covers robotics and artificial intelligence, Human-Robot Interaction (HRI), Human-Machine Interaction (HMI), assistance and collaborative systems, and algorithms capable of detecting non-verbal communication cues such as gestures, actions, and human cognitive parameters. He has also worked in precision agriculture, using cooperative robots capable of interacting and sharing the workspace with humans. His areas of interest are robotics, machine learning, deep learning, reinforcement learning, and computer vision.
Ph.D. in Electronic Engineering - Universidad Técnica Federico Santa María - Chile
Electronics and Control Engineer - Escuela Politécnica Nacional - Ecuador
Human-robot interaction, artificial intelligence, robotics, electronics.
Scopus Publications
Pablo Ormeño-Arriagada, Eduardo Navarro, Carla Taramasco, Gustavo Gatica, and Juan Pablo Vásconez
Springer Nature Switzerland
J.P. Vásconez, I.N. Vásconez, V. Moya, M.J. Calderón-Díaz, M. Valenzuela, X. Besoain, M. Seeger, and F. Auat Cheein
Elsevier BV
Piero Vilcapoma, Diana Parra Meléndez, Alejandra Fernández, Ingrid Nicole Vásconez, Nicolás Corona Hillmann, Gustavo Gatica, and Juan Pablo Vásconez
MDPI AG
The use of artificial intelligence (AI) algorithms has gained importance for dental applications in recent years. Analyzing sensor data such as images or panoramic radiographs (panoramic X-rays) with AI can help improve medical decisions and achieve early diagnosis of different dental pathologies. In particular, deep learning (DL) techniques based on convolutional neural networks (CNNs) have obtained promising results in image-based dental applications, in which classification, detection, and segmentation approaches are being studied with growing interest. However, several challenges remain, such as data quality and quantity, variability among categories, and the analysis of the bias and variance associated with each dataset distribution. This study compares the performance of three deep learning object detection models (Faster R-CNN, YOLO V2, and SSD) using different ResNet architectures (ResNet-18, ResNet-50, and ResNet-101) as feature extractors for detecting and classifying third molar angles in panoramic X-rays according to Winter’s classification criterion, which characterizes the third molar’s position relative to the second molar’s longitudinal axis. Each detection architecture was trained, calibrated, validated, and tested with the three ResNet feature extractors, which were the networks that best fit our dataset distribution. The detected categories for the third molars are distoangular, vertical, mesioangular, and horizontal. For training, we used a total of 644 panoramic X-rays.
The results obtained on the testing dataset reached up to 99% mean average accuracy, demonstrating that YOLO V2 achieved the highest effectiveness in solving the third molar angle detection problem. These results demonstrate that the use of CNNs for object detection in panoramic radiographs represents a promising solution in dental applications.
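Detection models like those compared above are typically scored by the overlap between predicted and ground-truth bounding boxes. A minimal intersection-over-union (IoU) sketch in Python (illustrative only, not the paper's code; boxes are assumed to be (x1, y1, x2, y2) corner tuples):

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Intersection rectangle
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

A prediction is then counted as correct when its IoU with a ground-truth box exceeds a threshold, commonly 0.5.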
Ricardo Paul Urvina, César Leonardo Guevara, Juan Pablo Vásconez, and Alvaro Javier Prado
MDPI AG
This article presents a combined route and path planning strategy to guide Skid–Steer Mobile Robots (SSMRs) in scheduled harvest tasks within expansive crop rows with complex terrain conditions. The proposed strategy integrates: (i) a global planning algorithm based on the Traveling Salesman Problem under the Capacitated Vehicle Routing approach and Optimization Routing (OR-tools from Google) to prioritize harvesting positions by minimum path length, unexplored harvest points, and vehicle payload capacity; and (ii) a local planning strategy using Informed Rapidly-exploring Random Tree (IRRT*) to coordinate scheduled harvesting points while avoiding low-traction terrain obstacles. The global approach generates an ordered queue of harvesting locations, maximizing the crop yield in a workspace map. In the second stage, the IRRT* planner avoids potential obstacles, including farm layout and slippery terrain. The path planning scheme incorporates a traversability model and a motion model of SSMRs to meet kinematic constraints. Experimental results in a generic fruit orchard demonstrate the effectiveness of the proposed strategy. In particular, the IRRT* algorithm outperformed RRT and RRT* with 96.1% and 97.6% smoother paths, respectively. The IRRT* also showed improved navigation efficiency, avoiding obstacles and slippage zones, making it suitable for precision agriculture.
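Sampling-based planners in the RRT family, including the IRRT* variant above, share the same two primitives: finding the tree node nearest a random sample and steering toward it by a bounded step. A minimal sketch of those primitives (an illustration of the general technique, not the authors' implementation):

```python
import math

def nearest(nodes, q_rand):
    """Return the tree node closest to the random sample q_rand."""
    return min(nodes, key=lambda n: math.dist(n, q_rand))

def steer(q_near, q_rand, step):
    """Move from q_near toward q_rand by at most `step`."""
    d = math.dist(q_near, q_rand)
    if d <= step:
        return q_rand
    t = step / d
    return (q_near[0] + t * (q_rand[0] - q_near[0]),
            q_near[1] + t * (q_rand[1] - q_near[1]))
```

The starred variants (RRT*, IRRT*) additionally rewire the tree around each new node to shorten paths, and the informed variant restricts sampling to an ellipse once a first solution is found, which is what yields the smoother paths reported above.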
Juan Pablo Vásconez, Elias Schotborgh, Ingrid Nicole Vásconez, Viviana Moya, Andrea Pilco, Oswaldo Menéndez, Robert Guamán-Rivera, and Leonardo Guevara
MDPI AG
Intelligent transportation and advanced mobility techniques focus on helping operators efficiently manage navigation tasks in smart cities, enhancing efficiency, increasing security, and reducing costs. Although this field has seen significant advances in large-scale monitoring of smart cities, several challenges persist concerning the practical assignment of delivery personnel to customer orders. To address this issue, we propose an architecture to optimize the task assignment problem for delivery personnel. We propose the use of different cost functions obtained with deterministic and machine learning techniques. In particular, we compared the performance of linear and polynomial regression methods to construct different cost functions represented by matrices containing order and delivery-person information. Then, we applied the Hungarian optimization algorithm to solve the assignment problem, which optimally matches delivery personnel and orders. The results demonstrate that, when used to estimate distance information, linear regression can reduce estimation errors by up to 568.52 km (1.51%) for our dataset compared to other methods. In contrast, polynomial regression proves effective in constructing a superior cost function based on time information, reducing estimation errors by up to 17,143.41 min (11.59%) compared to alternative methods. The proposed approach aims to enhance delivery personnel allocation within the delivery sector, thereby optimizing the efficiency of this process.
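The Hungarian algorithm solves exactly this kind of problem: given a cost matrix cost[i][j] (e.g., estimated travel time from courier i to order j), it finds the one-to-one assignment of minimum total cost. For small matrices the same optimum can be found by brute force, which makes the idea easy to see (an illustrative sketch with a made-up cost matrix; production code would use an O(n³) Hungarian implementation such as scipy.optimize.linear_sum_assignment):

```python
from itertools import permutations

def min_cost_assignment(cost):
    """Brute-force optimal assignment: courier i -> order perm[i]."""
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        total = sum(cost[i][perm[i]] for i in range(n))
        if total < best_cost:
            best_perm, best_cost = perm, total
    return best_perm, best_cost

# Hypothetical 3x3 cost matrix: couriers (rows) x orders (columns)
cost = [[4, 1, 3],
        [2, 0, 5],
        [3, 2, 2]]
assignment, total = min_cost_assignment(cost)
```

Here the optimum assigns courier 0 to order 1, courier 1 to order 0, and courier 2 to order 2, for a total cost of 5.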
Viviana Moya, Angélica Quito, Andrea Pilco, Juan P. Vásconez, and Christian Vargas
Ital Publication
In recent years, the accurate identification of chili maturity stages has become essential for optimizing cultivation processes. Conventional methodologies, primarily reliant on manual assessments or rudimentary detection systems, often fall short of reflecting the plant’s natural environment, leading to inefficiencies and prolonged harvest periods. Such methods may be imprecise and time-consuming. With the rise of computer vision and pattern recognition technologies, new opportunities in image recognition have emerged, offering solutions to these challenges. This research proposes an affordable solution for object detection and classification, specifically through version 5 of the You Only Look Once (YOLOv5) model, to determine the location and maturity state of rocoto chili peppers cultivated in Ecuador. To enhance the model’s efficacy, we introduce a novel dataset comprising images of chili peppers in their authentic states, spanning both immature and mature stages, all while preserving their natural settings and potential environmental impediments. This methodology ensures that the dataset closely replicates real-world conditions encountered by a detection system. Upon testing the model with this dataset, it achieved an accuracy of 99.99% for the classification task and an 84% accuracy rate for the detection of the crops. These promising outcomes highlight the model’s potential, indicating a game-changing technique for chili small-scale farmers, especially in Ecuador, with prospects for broader applications in agriculture. DOI: 10.28991/ESJ-2024-08-02-08
Oswaldo Menéndez, Juan Villacrés, Alvaro Prado, Juan P. Vásconez, and Fernando Auat-Cheein
MDPI AG
Electric-field energy harvesters (EFEHs) have emerged as a promising technology for harnessing the electric field surrounding energized environments. Current research indicates that EFEHs are closely associated with Tribo-Electric Nano-Generators (TENGs). However, the performance of TENGs in energized environments remains unclear. This work aims to evaluate the performance of TENGs in electric-field energy harvesting applications. For this purpose, TENGs of different sizes, operating in single-electrode mode, were conceptualized, assembled, and experimentally tested. Each TENG was mounted on a 1.5 HP single-phase induction motor, operating at nominal parameters of 8 A, 230 V, and 50 Hz. In addition, the contact layer was mounted on a linear motor to control kinematic stimuli. The TENGs successfully induced electric fields and provided satisfactory performance in collecting electrostatic charges in fairly variable electric fields. Experimental findings disclosed an approximate increase in energy collection ranging from 1.51% to 10.49% when utilizing TENGs compared to simple EFEHs. The observed correlation between power density and electric field highlights TENGs as a more efficient energy source in electrified environments compared to EFEHs, thereby contributing to the ongoing research objectives of the authors.
Juan Sebastian Estrada, Juan Pablo Vasconez, Longsheng Fu, and Fernando Auat Cheein
Elsevier BV
Andrea Pilco, Viviana Moya, Angélica Quito, Juan P. Vásconez, and Matías Limaico
EJournal Publishing
This study sheds light on the evolution of the agricultural industry and highlights advances in production areas. The recognition of fruit size and shape as critical quality parameters underscores the significance of this research. In response, the research introduces specialized image processing techniques designed to streamline the sorting of apples in agricultural settings, with specific emphasis on accurate apple width estimation. A purpose-built machine was designed, featuring an enclosure box housing a cost-effective camera for the vision system and a chain conveyor for classifying apples of the Malus domestica Borkh. variety. These goals were achieved by implementing image preprocessing, segmentation, and measurement techniques to facilitate sorting. The proposed methodology classifies apples into three distinct classes, attaining an accuracy of 94% in Class 1, 92% in Class 2, and 86% in Class 3. This represents an efficient and economical solution for apple classification and size estimation, promising substantial enhancements to sorting processes and pushing the boundaries of automation in the agricultural sector.
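Once the apple is segmented, width estimation reduces to measuring the widest horizontal run of foreground pixels and scaling it by the camera's millimeters-per-pixel calibration. A minimal sketch under those assumptions (the mask, calibration factor, and function name are illustrative, not taken from the paper):

```python
def apple_width_mm(mask, mm_per_px):
    """Estimate object width from a binary mask (list of rows of 0/1)."""
    widest = 0
    for row in mask:
        cols = [i for i, v in enumerate(row) if v]  # foreground columns
        if cols:
            widest = max(widest, cols[-1] - cols[0] + 1)
    return widest * mm_per_px

# Toy 3x5 mask standing in for a segmented apple
mask = [[0, 1, 1, 1, 0],
        [1, 1, 1, 1, 1],
        [0, 0, 1, 0, 0]]
```

The mm-per-pixel factor would come from calibrating the camera against an object of known size at the conveyor's working distance.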
L. Guevara, P. Wariyapperuma, H. Arunachalam, J. Vasconez, M. Hanheide, and E. Sklar
IEEE
The introduction of mobile robots to assist pickers in transporting crops during fruit harvesting operations is a promising solution to mitigate the impacts of the current labour shortage. However, having robots share the workspace with humans involves solving new challenges related to Human-Robot Interaction (HRI). For instance, effective Human-Robot Communication (HRC) methods will be crucial not only to ensure safe and efficient HRI but also to mitigate the potential stress of pickers who interact with robots for the first time. If stress or discomfort levels are not mitigated, then instead of facilitating the harvesting task, the robot can become a burden. In this context, this paper presents one of the first user studies investigating the preferences and requirements of end users (actual pickers) to improve the usability of Robot-Assisted Fruit Harvesting (RAFH) solutions. The study involves real-world experiments and a usability assessment of existing and potential future technology, whose results and lessons learned can serve as guidelines for agri-robotics companies and stakeholders on how to design and deploy their own RAFH solutions.
Juan I. Saez Rojas, Jenny M. Pantoja, Mónica Matamala, Inesmar C. Briceño, Juan Pablo Vásconez, and Alfonso R. Romero-Conrado
Elsevier BV
Mailyn Calderón Díaz, Ricardo Ulloa-Jiménez, Nicole Castro Laroze, Juan Pablo Vásconez, Jairo R. Coronado-Hernández, Mónica Acuña Rodríguez, and Samir F. Umaña Ibáñez
Springer Nature Switzerland
Mailyn Calderón-Díaz, Rony Silvestre Aguirre, Juan P. Vásconez, Roberto Yáñez, Matías Roby, Marvin Querales, and Rodrigo Salas
MDPI AG
There is a significant risk of injury in sports and intense competition due to the demanding physical and psychological requirements. Hamstring strain injuries (HSIs) are the most prevalent type of injury among professional soccer players and are the leading cause of missed days in the sport. These injuries stem from a combination of factors, making it challenging to pinpoint the most crucial risk factors and their interactions, let alone find effective prevention strategies. Recently, there has been growing recognition of the potential of tools provided by artificial intelligence (AI). However, current studies primarily concentrate on enhancing the performance of complex machine learning models, often overlooking their explanatory capabilities. Consequently, medical teams have difficulty interpreting these models and are hesitant to trust them fully. In light of this, there is an increasing need for advanced injury detection and prediction models that can aid doctors in diagnosing or detecting injuries earlier and with greater accuracy. Accordingly, this study aims to identify the biomarkers of muscle injuries in professional soccer players through biomechanical analysis, employing several ML algorithms such as decision tree (DT) methods, discriminant methods, logistic regression, naive Bayes, support vector machine (SVM), K-nearest neighbor (KNN), ensemble methods, boosted and bagged trees, artificial neural networks (ANNs), and XGBoost. In particular, XGBoost is also used to obtain the most important features. The findings highlight that the variables that most effectively differentiate the groups and could serve as reliable predictors for injury prevention are the maximum muscle strength of the hamstrings and the stiffness of the same muscle. 
Across the 35 techniques employed, a precision of up to 78% was achieved with XGBoost, indicating that by combining scientific evidence, suggestions based on various data sources, and expert opinions, good precision can be attained, thus enhancing the reliability of the results for doctors and trainers. Furthermore, the obtained results strongly align with the existing literature, although further sport-specific studies are necessary to draw a definitive conclusion.
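The core step of identifying which variables best differentiate injured from uninjured players can be illustrated with a simple class-separability measure such as the Fisher score (between-class distance over within-class spread). This is a didactic stand-in for the XGBoost feature-importance ranking used in the study, and the toy data below is invented:

```python
def fisher_score(group_a, group_b):
    """(mean difference)^2 / (sum of variances) for a single feature."""
    def mean(xs):
        return sum(xs) / len(xs)
    def var(xs):
        m = mean(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    denom = var(group_a) + var(group_b)
    return (mean(group_a) - mean(group_b)) ** 2 / denom if denom else float("inf")

# Toy biomechanical data: feature 0 separates the groups, feature 1 does not
injured   = [[1.0, 2.0], [1.1, 2.1], [0.9, 1.9]]
uninjured = [[5.0, 2.0], [5.1, 1.9], [4.9, 2.1]]
scores = [fisher_score([r[f] for r in injured], [r[f] for r in uninjured])
          for f in range(2)]
```

Ranking features by this score singles out feature 0, mirroring how the study's pipeline surfaces hamstring strength and stiffness as the most discriminative variables.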
Juan Pablo Vásconez, Mailyn Calderón-Díaz, Inesmar C. Briceño, Jenny M. Pantoja, and Patricio J. Cruz
Springer Nature Switzerland
Patricio J. Cruz, Juan Pablo Vásconez, Ricardo Romero, Alex Chico, Marco E. Benalcázar, Robin Álvarez, Lorena Isabel Barona López, and Ángel Leonardo Valdivieso Caraguay
Springer Science and Business Media LLC
Hand gesture recognition (HGR) based on electromyography (EMG) signals and inertial measurement unit (IMU) signals has been investigated for human-machine applications in the last few years. The information obtained from HGR systems has the potential to help control machines such as video games, vehicles, and even robots. Therefore, the key idea of an HGR system is to identify the moment in which a hand gesture was performed and its class. Several state-of-the-art human-machine approaches use supervised machine learning (ML) techniques for the HGR system. However, the use of reinforcement learning (RL) approaches to build HGR systems for human-machine interfaces is still an open problem. This work presents an RL approach to classify EMG-IMU signals obtained using a Myo Armband sensor. For this, we create an agent based on the Deep Q-learning algorithm (DQN) to learn a policy from online experiences to classify EMG-IMU signals. The proposed HGR system reaches accuracies of up to 97.45 ± 1.02% and 88.05 ± 3.10% for classification and recognition, respectively, with an average inference time per window observation of 20 ms, and we also demonstrate that our method outperforms other approaches in the literature. Then, we test the HGR system to control two different robotic platforms. The first is a three-degrees-of-freedom (DOF) tandem helicopter test bench, and the second is a virtual six-DOF UR5 robot. We employ the designed HGR system and the IMU integrated into the Myo sensor to command and control the motion of both platforms. The movement of the helicopter test bench and the UR5 robot is controlled under a PID controller scheme. Experimental results show the effectiveness of using the proposed DQN-based HGR system for controlling both platforms with a fast and accurate response.
Juan Pablo Vásconez, Lorena Isabel Barona López, Ángel Leonardo Valdivieso Caraguay, and Marco E. Benalcázar
Elsevier BV
Ángel Leonardo Valdivieso Caraguay, Juan Pablo Vásconez, Lorena Isabel Barona López, and Marco E. Benalcázar
MDPI AG
In recent years, hand gesture recognition (HGR) technologies that use electromyography (EMG) signals have been of considerable interest in developing human–machine interfaces. Most state-of-the-art HGR approaches are based mainly on supervised machine learning (ML). However, the use of reinforcement learning (RL) techniques to classify EMGs is still a new and open research topic. Methods based on RL have some advantages such as promising classification performance and online learning from the user’s experience. In this work, we propose a user-specific HGR system based on an RL-based agent that learns to characterize EMG signals from five different hand gestures using Deep Q-network (DQN) and Double-Deep Q-Network (Double-DQN) algorithms. Both methods use a feed-forward artificial neural network (ANN) for the representation of the agent policy. We also performed additional tests by adding a long–short-term memory (LSTM) layer to the ANN to analyze and compare its performance. We performed experiments using training, validation, and test sets from our public dataset, EMG-EPN-612. The final accuracy results demonstrate that the best model was DQN without LSTM, obtaining classification and recognition accuracies of up to 90.37%±10.7% and 82.52%±10.9%, respectively. The results obtained in this work demonstrate that RL methods such as DQN and Double-DQN can obtain promising results for classification and recognition problems based on EMG signals.
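The classification-as-RL idea above (treat each label prediction as an action and reward correct predictions) can be boiled down to a tabular Q-learning sketch: one-step episodes where the reward is +1 for the right gesture label and -1 otherwise. This is a didactic analogue with made-up states, not the paper's deep network:

```python
import random

def train_q_classifier(labels, episodes=2000, alpha=0.5, epsilon=0.2, seed=0):
    """Tabular one-step Q-learning: states are feature-vector indices,
    actions are gesture labels, reward +1 if the action matches the label."""
    rng = random.Random(seed)
    n_states, n_actions = len(labels), max(labels) + 1
    q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s = rng.randrange(n_states)
        if rng.random() < epsilon:                    # explore
            a = rng.randrange(n_actions)
        else:                                         # exploit
            a = max(range(n_actions), key=lambda x: q[s][x])
        r = 1.0 if a == labels[s] else -1.0           # terminal reward
        q[s][a] += alpha * (r - q[s][a])              # one-step Q update
    return q

# Three hypothetical EMG "states" with ground-truth gesture labels 2, 0, 1
q = train_q_classifier([2, 0, 1])
```

In the DQN and Double-DQN systems described above, the table is replaced by a neural network over EMG feature vectors, but the epsilon-greedy choice and the temporal-difference update have the same shape.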
Juan P. Vásconez, Fernando Basoalto, Inesmar C. Briceño, Jenny M. Pantoja, Roberto A. Larenas, Jhon H. Rios, and Felipe A. Castro
Elsevier BV
Juan P. Vasconez, Jenny M. Pantoja, Roberto A. Larenas, Jhon H. Rios, Hector G. Puente, Eduardo A. Mendez, Yulineth Gómez-Charris, and Inesmar C. Briceño
Elsevier BV
Harlan J. Simonetti, Jenny M. Pantoja, Mónica Matamala, Juan P. Vasconez, Inesmar C. Briceño, Roberto A. Larenas, and Dionicio Neira-Rodado
Elsevier BV
Juan Pablo Vásconez, Lorena Isabel Barona López, Ángel Leonardo Valdivieso Caraguay, and Marco E. Benalcázar
MDPI AG
Hand gesture recognition (HGR) systems based on electromyography (EMG) signals and inertial measurement unit (IMU) signals have been studied for different applications in recent years. Most commonly, cutting-edge HGR methods are based on supervised machine learning. However, reinforcement learning (RL) techniques have shown potential benefits that make them a viable option for classifying EMGs. Methods based on RL have several advantages, such as promising classification performance and online learning from experience. In this work, we developed an HGR system made up of the following stages: pre-processing, feature extraction, classification, and post-processing. For the classification stage, we built an RL-based agent capable of learning to classify and recognize eleven hand gestures (five static and six dynamic) using a deep Q-network (DQN) algorithm based on EMG and IMU information. The proposed system uses a feed-forward artificial neural network (ANN) for the representation of the agent policy. We carried out the same experiments with two different sensors, the Myo armband and the G-force sensor, to compare their performance. We performed experiments using training, validation, and test set distributions, and the results were evaluated for user-specific HGR models. The final accuracy results demonstrated that the best model reached up to 97.50%±1.13% and 88.15%±2.84% for the classification and recognition of static gestures, respectively, and 98.95%±0.62% and 90.47%±4.57% for the classification and recognition of dynamic gestures with the Myo armband sensor. The results obtained in this work demonstrated that RL methods such as the DQN are capable of learning a policy from online experience to classify and recognize static and dynamic gestures using EMG and IMU signals.
Juan P. Vásconez and Fernando A. Auat Cheein
Elsevier BV
Ricardo Romero, Patricio Cruz, J. P. Vasconez, Marco E. Benalcázar, Robin Álvarez, Lorena Isabel Barona López, and Ángel Leonardo Valdivieso Caraguay
Javier Alejandro Ordóñez Flores, Robin Gerardo Alvarez Rueda, Marco E. Benalcázar, Lorena Isabel Barona López, Ángel Leonardo Valdivieso Caraguay, Patricio Cruz, and J. P. Vasconez
Systems used for pattern recognition are usually divided into four stages: signal acquisition, preprocessing, feature extraction, and classification. However, the choice of algorithms in the last three stages is often not justified, and researchers select them with no criterion other than the result achieved at the end of the process. In this paper we propose a new methodology and show its application to the recognition of five hand gestures based on 8 channels of electromyography using the Myo armband device placed on the forearm. If n features are extracted, they form clusters of points in n-dimensional space, and the best preprocessing algorithms and features are then selected by maximizing the distance among clusters. In addition, both intrapersonal and interpersonal variability are treated to facilitate the understanding of the phenomenon. As a demonstration, the method was applied to 12 people and achieved a recognition accuracy of 97%.
Alex Chico, Patricio J. Cruz, Juan Pablo Vasconez, Marco E. Benalcazar, Robin Alvarez, Lorena Barona, and Angel Leonardo Valdivieso
IEEE
Human-machine interfaces (HMIs) have received increased attention during the past decades, especially due to the use of robots in many fields of industry. In this paper, we employ a hand gesture recognition (HGR) system and the inertial measurement unit (IMU) integrated in the Myo Armband sensor as an HMI to control the position and orientation of a virtual six-degree-of-freedom (DoF) UR5 robot. As part of the HMI, this work also focuses on solving the trajectory tracking problem for the UR5 robot using two different control strategies: a minimum-norm PID and a controller based on linear algebra. Both controllers are designed from the UR5 kinematic model. Finally, we test and compare their performance by computing the IAE performance criterion and the position errors obtained from two experiments: tracking pre-defined input trajectories and commanding the virtual robot with the HGR system plus the Myo Armband's IMU signals.
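A PID trajectory-tracking loop of the kind referenced above can be sketched as a discrete-time controller, term by term (the class interface and gains are illustrative, not taken from the paper):

```python
class PID:
    """Discrete PID: u = kp*e + ki*sum(e*dt) + kd*de/dt."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        # Accumulate the integral term and difference the error for the derivative
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

The IAE criterion used to compare the controllers is simply the accumulated absolute tracking error over the run, sum(abs(e) * dt).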