Ye Htet

@miyazaki-u.ac.jp

Interdisciplinary Graduate School of Agriculture and Engineering, Department of Materials and Informatics
University of Miyazaki




https://researchid.co/yehtetuom

Ye Htet (Graduate Student Member, IEEE) received the B.E. and M.E. degrees in Electronic Engineering from the University of Technology (Yatanarpon Cyber City), Pyin Oo Lwin, Myanmar, in 2017 and 2020, respectively. He then worked for two years as a researcher at the Graduate School of Engineering, University of Miyazaki, Miyazaki, Japan. He is currently a Ph.D. student at the Interdisciplinary Graduate School of Agriculture and Engineering, University of Miyazaki, Japan. His research interests include computer vision, artificial intelligence, deep learning, and human behavior understanding.

RESEARCH, TEACHING, or OTHER INTERESTS

Computer Vision and Pattern Recognition, Computer Science, Artificial Intelligence, Information Systems

9

Scopus Publications

69

Scholar Citations

4

Scholar h-index

1

Scholar i10-index

Scopus Publications

  • Smarter Aging: Developing a Foundational Elderly Activity Monitoring System With AI and GUI Interface
    Ye Htet, Thi Thi Zin, Pyke Tin, Hiroki Tamura, Kazuhiro Kondo, Shinji Watanabe, and Etsuo Chosa

    Institute of Electrical and Electronics Engineers (IEEE)
    The global rise in the elderly population presents challenges to healthcare systems owing to labor shortages in caregiving facilities, and necessitates innovative solutions for elderly care services. Smart aging technologies, such as robotic companions and digital home gadgets, offer a solution to these challenges by improving the elderly’s quality of life and assisting caregivers. However, limitations in data privacy, real-time processing, and reliability often hinder the effectiveness of existing technologies. Among these, privacy concerns are a major barrier to ensuring user trust and ethical implementation. Therefore, this study proposes a more effective approach to smart aging through elderly activity monitoring that prioritizes data privacy. The proposed system uses stereo depth cameras to monitor the activities of the elderly. Data were collected in real-world environments with the participation of six elderly individuals from a care center and a hospital. The system focuses on recognizing common daily actions of the elderly, including sitting, standing, lying down, and sitting in a wheelchair. Additionally, it recognizes transition states (in-between actions, such as changing from sitting to standing) that are crucial for assessing balance issues. By integrating motion information with a deep-learning architecture, the system achieved a high accuracy of 99.42% in recognizing daily actions in real time. This high accuracy was maintained even with minimal data from new environments through transfer learning, and the adaptability of the model ensures its potential for real-world applications. For intuitive interaction between caregivers and the system, a user-friendly graphical user interface (GUI) was also designed as part of the proposed approach.
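
    As a minimal illustration of the transition-state idea above (in-between actions such as changing from sitting to standing), the following Python sketch flags changes between consecutive primitive-action labels; the label set and the detection rule are hypothetical simplifications, not the system's actual deep-learning model:

    ```python
    # Illustrative set of primitive actions (names are assumptions).
    PRIMITIVE_ACTIONS = {"sitting", "standing", "lying", "wheelchair"}

    def find_transitions(labels):
        """Return (frame_index, from_action, to_action) for every change
        between consecutive primitive-action labels in a recognized stream."""
        transitions = []
        for i in range(1, len(labels)):
            prev, cur = labels[i - 1], labels[i]
            if prev != cur and prev in PRIMITIVE_ACTIONS and cur in PRIMITIVE_ACTIONS:
                transitions.append((i, prev, cur))
        return transitions
    ```

    For example, `find_transitions(["sitting", "sitting", "standing"])` reports a single sitting-to-standing transition at frame index 2.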

  • A Markov-Dependent stochastic approach to modeling lactation curves in dairy cows
    Thi Thi Zin, Ye Htet, Tunn Cho Lwin, and Pyke Tin

    Elsevier BV

  • Temporal-Dependent Features Based Inter-Action Transition State Recognition for Eldercare System
    Ye Htet, Thi Thi Zin, Hiroki Tamura, Kazuhiro Kondo, and Etsuo Chosa

    IEEE
    Elderly individuals are particularly vulnerable to accidents, with a significant number of incidents occurring during transition states between primitive actions, such as sitting to standing and sitting to lying. This paper introduces a novel machine-learning technique based on temporal-dependent features to assist the elderly. To ensure privacy, we employed stereo depth cameras for data acquisition at an elder care center and exclusively processed depth images. The first step of our approach localizes individuals using the YOLOv5 object detector. We then employed the Segment Anything Model to segment only the person masks, excluding other areas from consideration. Temporal-dependent features were extracted for every five frames from the resulting person masks, enabling the recognition of transition states between primitive actions. We tested various classification approaches and compared the results by defining norms and metrics. Our experimental findings demonstrate overall accuracy rates of 91.18% and 91.67% for classifying 2 classes and 5 classes on small segments, respectively. To validate the effectiveness of the proposed method, we conducted experiments in real-life environments inside three rooms and obtained average accuracy rates of 90.17%, 97.16%, and 77.44%, respectively. Overall, this model has the potential to enhance the safety and well-being of the elderly population.
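
    The per-five-frame temporal features described above could, in spirit, look like the following NumPy sketch; the specific features chosen here (per-step centroid displacement and relative area change of the person mask) are illustrative assumptions, not the paper's exact feature set:

    ```python
    import numpy as np

    def temporal_features(masks):
        """Compute simple temporal-dependent features from a short stack of
        binary person masks with shape (T, H, W): per-step centroid
        displacement (dy, dx) and relative area change."""
        masks = np.asarray(masks, dtype=bool)
        feats = []
        prev_c, prev_a = None, None
        for m in masks:
            ys, xs = np.nonzero(m)
            area = float(len(xs))
            cy, cx = (ys.mean(), xs.mean()) if area else (0.0, 0.0)
            if prev_c is not None:
                dy, dx = cy - prev_c[0], cx - prev_c[1]
                da = (area - prev_a) / max(prev_a, 1.0)
                feats.extend([dy, dx, da])
            prev_c, prev_a = (cy, cx), area
        return np.array(feats)
    ```

    A five-frame window yields a 12-dimensional vector (four steps times three features), which could then be handed to any of the tested classifiers.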

  • HMM-Based Action Recognition System for Elderly Healthcare by Colorizing Depth Map
    Ye Htet, Thi Thi Zin, Pyke Tin, Hiroki Tamura, Kazuhiro Kondo, and Etsuo Chosa

    MDPI AG
    Addressing the problems facing the elderly, whether living independently or in managed care facilities, is considered one of the most important applications of action recognition research. However, existing systems are not ready for automation or for effective use in continuous operation. Therefore, we have developed theoretical and practical foundations for a new real-time action recognition system. The system is based on a Hidden Markov Model (HMM) combined with colorized depth maps. The use of depth cameras provides privacy protection. Colorizing depth images in the hue color space enables compressing and visualizing depth data and detecting persons. The detector used for person detection is You Only Look Once (YOLOv5). Appearance and motion features are extracted from depth map sequences and represented with a Histogram of Oriented Gradients (HOG). These HOG feature vectors are transformed into observation sequences and fed into the HMM. Finally, the Viterbi algorithm is applied to recognize the sequential actions. The system was tested on real-world data featuring three participants in a care center. We tried three combinations of the HMM with classification algorithms and found that a fusion with a Support Vector Machine (SVM) gave the best average results, achieving an accuracy rate of 84.04%.
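
    The Viterbi decoding step is standard; below is a minimal NumPy implementation for a discrete-observation HMM. The initial, transition, and emission matrices are generic placeholders, not the paper's trained model:

    ```python
    import numpy as np

    def viterbi(obs, pi, A, B):
        """Most likely hidden-state path for a discrete-observation HMM.
        obs: sequence of observation indices, pi: (S,) initial probs,
        A: (S, S) transition probs, B: (S, O) emission probs."""
        S, T = len(pi), len(obs)
        with np.errstate(divide="ignore"):  # log(0) -> -inf is fine here
            lpi, lA, lB = np.log(pi), np.log(A), np.log(B)
        logp = np.full((T, S), -np.inf)
        back = np.zeros((T, S), dtype=int)
        logp[0] = lpi + lB[:, obs[0]]
        for t in range(1, T):
            scores = logp[t - 1][:, None] + lA      # (prev_state, cur_state)
            back[t] = scores.argmax(axis=0)
            logp[t] = scores.max(axis=0) + lB[:, obs[t]]
        path = [int(logp[-1].argmax())]             # backtrack best path
        for t in range(T - 1, 0, -1):
            path.append(int(back[t, path[-1]]))
        return path[::-1]
    ```

    In the paper's setting, the observation indices would come from the quantized HOG feature vectors rather than the toy symbols used here.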

  • Action Recognition System for Senior Citizens Using Depth Image Colorization
    Ye Htet, Thi Thi Zin, Hiroki Tamura, Kazuhiro Kondo, and Etsuo Chosa

    IEEE
    This paper describes a system for elderly action recognition at a care center using a depth camera. Depth image colorization is used for compression, visualization, and the person detection process. The YOLOv5 (You Only Look Once) algorithm is used as the object detector. Space-time features are extracted from depth sequences and recognized by a linear SVM (Support Vector Machine) classifier. Random image sequences are generated for testing to recognize six actions. The results show that the system can detect the various actions with an average accuracy of 92% across different durations.
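
    Depth image colorization through the hue channel can be sketched as below; the near-red/far-blue mapping and the `colorize_depth` helper are illustrative assumptions rather than the paper's exact encoding:

    ```python
    import colorsys
    import numpy as np

    def colorize_depth(depth, d_min, d_max):
        """Compress a single-channel depth map into an RGB image via the hue
        channel (near -> red, far -> blue); returns float RGB in [0, 1]."""
        d = np.clip((np.asarray(depth, float) - d_min) / (d_max - d_min), 0.0, 1.0)
        out = np.zeros(d.shape + (3,))
        for idx in np.ndindex(d.shape):
            # hue 0 = red (near) .. 2/3 = blue (far), full saturation/value
            out[idx] = colorsys.hsv_to_rgb(d[idx] * 2.0 / 3.0, 1.0, 1.0)
        return out
    ```

    The per-pixel loop keeps the sketch readable; a production version would use a vectorized HSV-to-RGB conversion instead.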

  • Real-time action recognition system for elderly people using stereo depth camera
    Thi Thi Zin, Ye Htet, Yuya Akagi, Hiroki Tamura, Kazuhiro Kondo, Sanae Araki, and Etsuo Chosa

    MDPI AG
    Smart technologies are necessary for ambient assisted living (AAL) to help family members, caregivers, and healthcare professionals provide care for elderly people independently. Among these technologies, the current work proposes a computer vision-based solution that monitors the elderly by recognizing actions using a stereo depth camera. In this work, we introduce a system that fuses feature extraction methods from previous works in a novel combination for action recognition. Using depth frame sequences provided by the depth camera, the system localizes people by extracting different regions of interest (ROI) from UV-disparity maps. As feature vectors, the spatial-temporal features of two action representation maps (depth motion appearance (DMA) and depth motion history (DMH), each with a histogram of oriented gradients (HOG) descriptor) are used in combination with distance-based features and fused with an automatic rounding method for action recognition over continuous long frame sequences. The experimental results were obtained on random frame sequences from a dataset collected at an elder care center, demonstrating that the proposed system can detect various actions in real time with reasonable recognition rates, regardless of the length of the image sequences.
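
    A depth motion history (DMH) map can be sketched as a motion-history-image-style accumulation over depth frames; the threshold and decay parameters below are illustrative assumptions, not the paper's values:

    ```python
    import numpy as np

    def depth_motion_history(frames, diff_thresh=0.05, tau=255.0, decay=16.0):
        """Accumulate a depth motion history map over a (T, H, W) sequence:
        pixels whose depth changed between consecutive frames are set to
        tau, while all other pixels decay toward zero."""
        frames = np.asarray(frames, dtype=float)
        H = np.zeros(frames.shape[1:])
        for prev, cur in zip(frames[:-1], frames[1:]):
            moved = np.abs(cur - prev) > diff_thresh
            H = np.where(moved, tau, np.maximum(H - decay, 0.0))
        return H
    ```

    Recently moving pixels are bright and older motion fades, so a HOG descriptor over this map captures both where and roughly when motion occurred.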


  • Elderly monitoring and action recognition system using stereo depth camera
    Thi Thi Zin, Ye Htet, Yuya Akagi, Hiroki Tamura, Kazuhiro Kondo, and Sanae Araki

    IEEE
    The proposed system uses a stereo depth camera for human action recognition and sleep monitoring in an elderly care center. Different regions of interest (ROI) are extracted using U-disparity and V-disparity maps. The main information used for recognition is the 3D height of the human centroid relative to the floor, together with the percentage of movement from frame differencing for sleep monitoring. Experimental results show that the system can effectively detect a person's location, whether they are sitting or lying, and their sleep behaviors.
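
    The percentage-of-movement cue from frame differencing can be sketched in a few lines; the difference threshold used here is an illustrative assumption:

    ```python
    import numpy as np

    def movement_percentage(prev_depth, cur_depth, diff_thresh=30.0):
        """Percentage of pixels whose depth changed by more than diff_thresh
        between two consecutive frames (a simple sleep-movement cue)."""
        diff = np.abs(np.asarray(cur_depth, float) - np.asarray(prev_depth, float))
        return 100.0 * float(np.count_nonzero(diff > diff_thresh)) / diff.size
    ```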

  • Handwritten Characters Segmentation using Projection Approach
    Thi Thi Zin, Shin Thant, Ye Htet, and Pyke Tin

    IEEE
    In the area of optical character recognition, handwritten character segmentation is still an open problem. A good segmentation result can provide better recognition accuracy. In the proposed system, segmentation is carried out mainly using labelling and projection concepts. The input word is first labelled; the modified word is then segmented with a projection approach. Experiments performed on a local dataset of approximately 1600 words show that the system achieves a segmentation accuracy of around 85.75%.
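
    The projection approach can be sketched with a vertical projection profile; the zero-column splitting rule below is a simplified illustration, not the paper's full labelling-plus-projection pipeline:

    ```python
    import numpy as np

    def segment_by_projection(binary_word):
        """Split a binary word image (foreground = 1) into character column
        ranges [start, end) wherever the vertical projection profile
        (per-column foreground count) drops to zero."""
        profile = binary_word.sum(axis=0)
        segments, start = [], None
        for x, v in enumerate(profile):
            if v > 0 and start is None:
                start = x                      # character begins
            elif v == 0 and start is not None:
                segments.append((start, x))    # character ends at a gap
                start = None
        if start is not None:
            segments.append((start, len(profile)))
        return segments
    ```

    Touching characters defeat a pure zero-gap rule, which is one reason the paper combines projection with a labelling step.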

RECENT SCHOLAR PUBLICATIONS

  • Smarter Aging: Developing A Foundational Elderly Activity Monitoring System with AI and GUI Interface
    Y Htet, TT Zin, P Tin, H Tamura, K Kondo, S Watanabe, E Chosa
    IEEE Access 2024

  • A Markov-Dependent stochastic approach to modeling lactation curves in dairy cows
    TT Zin, Y Htet, TC Lwin, P Tin
    Smart Agricultural Technology 6, 100335 2023

  • Temporal-Dependent Features Based Inter-Action Transition State Recognition for Eldercare System
    Y Htet, TT Zin, H Tamura, K Kondo, E Chosa
    2023 IEEE 13th International Conference on Consumer Electronics-Berlin (ICCE 2023

  • Artificial Intelligence Fusion in Digital Transformation Techniques for Lameness Detection in Dairy Cattle
    TT Zin, Y Htet, SC Tun, P Tin
    International Journal of Biomedical Soft Computing and Human Sciences: the 2023

  • HMM-based action recognition system for elderly healthcare by colorizing depth map
    Y Htet, TT Zin, P Tin, H Tamura, K Kondo, E Chosa
    International Journal of Environmental Research and Public Health 19 (19), 12055 2022

  • Action Recognition System for Senior Citizens Using Depth Image Colorization
    Y Htet, TT Zin, H Tamura, K Kondo, E Chosa
    2022 IEEE 4th Global Conference on Life Sciences and Technologies (LifeTech 2022

  • Artificial Intelligence topping on spectral analysis for lameness detection in dairy cattle
    TT Zin, Y Htet, CT San, P Tin
    Proceedings of the Annual Conference of Biomedical Fuzzy Systems Association 2022

  • Real-time action recognition system for elderly people using stereo depth camera
    TT Zin, Y Htet, Y Akagi, H Tamura, K Kondo, S Araki, E Chosa
    Sensors 21 (17), 5895 2021

  • Smart irrigation: An intelligent system for growing strawberry plants in different seasons of the year
    Y Htet, HK Oo, TT Zin
    ICIC Express Letters 12 (4), 359-367 2021

  • Elderly monitoring and action recognition system using stereo depth camera
    TT Zin, Y Htet, Y Akagi, H Tamura, K Kondo, S Araki
    2020 IEEE 9th Global Conference on Consumer Electronics (GCCE), 316-317 2020

  • Handwritten Characters Segmentation using Projection Approach
    TT Zin, S Thant, Y Htet, P Tin
    2020 IEEE 2nd Global Conference on Life Sciences and Technologies (LifeTech 2020

MOST CITED SCHOLAR PUBLICATIONS

  • Real-time action recognition system for elderly people using stereo depth camera
    TT Zin, Y Htet, Y Akagi, H Tamura, K Kondo, S Araki, E Chosa
    Sensors 21 (17), 5895 2021
    Citations: 43

  • HMM-based action recognition system for elderly healthcare by colorizing depth map
    Y Htet, TT Zin, P Tin, H Tamura, K Kondo, E Chosa
    International Journal of Environmental Research and Public Health 19 (19), 12055 2022
    Citations: 7

  • Handwritten Characters Segmentation using Projection Approach
    TT Zin, S Thant, Y Htet, P Tin
    2020 IEEE 2nd Global Conference on Life Sciences and Technologies (LifeTech 2020
    Citations: 6

  • Smart irrigation: An intelligent system for growing strawberry plants in different seasons of the year
    Y Htet, HK Oo, TT Zin
    ICIC Express Letters 12 (4), 359-367 2021
    Citations: 4

  • Artificial Intelligence topping on spectral analysis for lameness detection in dairy cattle
    TT Zin, Y Htet, CT San, P Tin
    Proceedings of the Annual Conference of Biomedical Fuzzy Systems Association 2022
    Citations: 3

  • Elderly monitoring and action recognition system using stereo depth camera
    TT Zin, Y Htet, Y Akagi, H Tamura, K Kondo, S Araki
    2020 IEEE 9th Global Conference on Consumer Electronics (GCCE), 316-317 2020
    Citations: 3

  • Action Recognition System for Senior Citizens Using Depth Image Colorization
    Y Htet, TT Zin, H Tamura, K Kondo, E Chosa
    2022 IEEE 4th Global Conference on Life Sciences and Technologies (LifeTech 2022
    Citations: 2

  • Temporal-Dependent Features Based Inter-Action Transition State Recognition for Eldercare System
    Y Htet, TT Zin, H Tamura, K Kondo, E Chosa
    2023 IEEE 13th International Conference on Consumer Electronics-Berlin (ICCE 2023
    Citations: 1