Andry Chowanda

@binus.ac.id

Computer Science
Bina Nusantara University

https://researchid.co/andrychowanda

RESEARCH INTERESTS

Affective Computing, Social Signal Processing, Virtual Agents, Deep Learning, Game Technology

Scopus Publications: 129

Scholar Citations: 1470

Scholar h-index: 20

Scholar i10-index: 34

Scopus Publications


  • Hyperparameter tuning for deep learning model used in multimodal emotion recognition data
    Fernandi Widardo and Andry Chowanda

    Institute of Advanced Engineering and Science
    This study addresses overfitting, a frequent problem in multimodal emotion recognition models. It proposes model optimization using several hyperparameter approaches, such as dropout layers, L2 kernel regularization, batch normalization, and a learning rate schedule, and identifies which approach is most effective at reducing overfitting. For the emotion data, the research uses the interactive emotional dyadic motion capture (IEMOCAP) dataset, drawing on its motion capture and speech audio modalities. The models used in the experiment are a convolutional neural network (CNN) for the motion capture data and a CNN-bidirectional long short-term memory network (CNN-BiLSTM) for the audio data. A smaller batch size was also applied to accommodate limited computing resources. The optimization through hyperparameter tuning raises the validation accuracy to 73.67% and the F1-score to 73% on the audio and motion capture data, respectively, from this research's base model, and is competitive with results reported by other studies. It is hoped that these optimization results will be useful for future emotion recognition research, especially where overfitting has been encountered.
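Two of the anti-overfitting techniques named in the abstract, a learning rate schedule and L2 kernel regularization, can be sketched in plain Python. The function names and constants below are illustrative assumptions, not the paper's actual configuration.

```python
def step_decay(initial_lr, drop_factor, epochs_per_drop, epoch):
    """Step-decay learning rate schedule: multiply the rate by
    drop_factor once every epochs_per_drop epochs."""
    return initial_lr * drop_factor ** (epoch // epochs_per_drop)

def l2_penalty(weights, lam=1e-4):
    """L2 kernel regularization: lam times the sum of squared weights,
    added to the training loss to discourage large weights."""
    return lam * sum(w * w for w in weights)

# Example: with these (hypothetical) settings the rate halves at epochs 10 and 20.
lrs = [step_decay(0.01, 0.5, 10, e) for e in (0, 10, 20)]
```

In a deep learning framework the same ideas appear as a scheduler callback and a per-layer regularizer; the sketch only shows the arithmetic behind them.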

  • Keystroke Dynamics on Multi-Session and Uncontrolled Settings Using CNN Bi-LSTM


  • IPerFEX-2023: Indonesian personal financial entity extraction using indoBERT-BiGRU-CRF model
    Emmanuel Dave and Andry Chowanda

    Springer Science and Business Media LLC

  • Fitcam: detecting and counting repetitive exercises with deep learning
    Ferdinandz Japhne, Kevin Janada, Agustinus Theodorus, and Andry Chowanda

    Springer Science and Business Media LLC
    Physical fitness is one of the most important traits a person can have for long-term health. Regular exercise is fundamental to maintaining physical fitness, but carries a risk of injury if not done properly. Several algorithms exist to automatically monitor and evaluate exercise using the user's pose. However, accurately monitoring and evaluating exercise poses automatically is not an easy task, and only a limited number of datasets exist in this area. In our work, we attempt to construct a neural network model that can evaluate exercise poses based on key points extracted from exercise video frames. First, we collected images of different exercise poses. We utilize the OpenPose library to extract key points from exercise video datasets and an LSTM neural network to learn exercise patterns. Our experiment shows that the methods used are quite effective for the push-up, sit-up, squat, and plank exercise types, with the neural network model achieving more than 90% accuracy for all four.
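A much-simplified version of counting repetitive exercises from pose key points, hysteresis thresholding on one joint's normalized vertical coordinate rather than the paper's LSTM model, might look like this (thresholds are assumed values):

```python
def count_reps(y_coords, low=0.3, high=0.7):
    """Count repetitions from a normalized vertical key-point signal:
    one rep is a full down-then-up swing through the two thresholds."""
    reps, went_down = 0, False
    for y in y_coords:
        if y < low:
            went_down = True          # bottom of the movement reached
        elif y > high and went_down:
            reps += 1                 # back at the top: one repetition
            went_down = False
    return reps

# A toy signal containing two full push-up cycles:
signal = [0.9, 0.5, 0.1, 0.5, 0.9, 0.4, 0.1, 0.6, 0.9]
```

The two-threshold design avoids double-counting noise near a single threshold, which is why a learned sequence model such as an LSTM is still preferable on real, jittery key-point data.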


  • Using CNN along with Transfer Learning to Predict Wildfires in Borneo's Topography
    Richard Limec, Roderick Kangson, Anderies, and Andry Chowanda

    IEEE
    Indonesia has experienced significant carbon emissions resulting from wildfires, and projections indicate that these emissions will rise over the next decade due to the effects of climate change. These wildfires are driven by environmental characteristics, especially the ombrotrophic peatlands present in wildfire-prone areas such as Borneo. To address the wildfire problem in Borneo, we leverage machine learning models to develop a wildfire prediction model that integrates real-time weather data and a fuel classification system using frameworks like TensorFlow. Given the limited resources and remote terrain that Borneo presents, we utilize satellite technology as a more efficient and cost-effective approach to wildfire prediction than ground-based devices. Combining satellite imagery with Convolutional Neural Networks (CNNs) to analyze geospatial data for identifying wildfire-affected regions, together with a Fire Weather Index adjusted for peatland areas and transfer learning, can yield more accurate predictions. These results could then inform preventative measures that reduce the severe impact of wildfires on Borneo's ecosystems and communities.

  • Exploring the Impact of Image Downscaling Algorithms on Color Perception
    Bryan Myer Setiawan, Aura Kristian Sumowidjojo, Anderies, and Andry Chowanda

    IEEE
    Several image downscaling algorithms have been developed over the years, yet their impact on color perception after downscaling has not been thoroughly investigated. This study conducts various experiments to evaluate traditional and modern image downscaling algorithms and their effects on color preservation. Using an adapted Quantitative Color Pattern Analysis (QCPA) method, we objectively measure each algorithm's performance in maintaining color perception. The algorithms compared include Bicubic Interpolation, Lanczos Resampling, Seam Carving, and Rapid Detail-Preserving Image Downscaling (RDPID), using the Imagenette and DIV2K datasets. Evaluation metrics encompass average color histogram similarity, dominant color shifts, and color scatter plots. Results reveal varying degrees of color information loss among the algorithms, with Bicubic Interpolation and RDPID showing superior color fidelity compared to Seam Carving. For the Imagenette dataset, average color histogram similarity between the original and downscaled images is 78.9% for Bicubic Interpolation, 75.8% for Lanczos Resampling, 77.6% for RDPID, and 20.8% for Seam Carving. For the DIV2K dataset, the similarities are 65.7% for Bicubic Interpolation, 64.7% for RDPID, and 48.8% for Seam Carving. This research highlights the extent of color information loss due to the downscaling process across the chosen algorithms. Future studies should explore additional algorithms and assess their performance across diverse datasets and applications, enhancing our understanding of color information loss in image downscaling and its impact on computer vision tasks.
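One plausible reading of the "average color histogram similarity" metric is histogram intersection per channel; the bin count and normalization below are assumptions for illustration, not the paper's exact procedure.

```python
def channel_histogram(values, bins=8):
    """Normalized histogram of 0-255 channel values."""
    hist = [0] * bins
    for v in values:
        hist[min(v * bins // 256, bins - 1)] += 1
    total = len(values)
    return [h / total for h in hist]

def histogram_similarity(channel_a, channel_b, bins=8):
    """Histogram intersection: 1.0 for identical color distributions,
    0.0 for completely disjoint ones."""
    ha = channel_histogram(channel_a, bins)
    hb = channel_histogram(channel_b, bins)
    return sum(min(a, b) for a, b in zip(ha, hb))
```

Averaging this score over the R, G, and B channels of the original and downscaled images gives one number per algorithm, comparable to the percentages reported above.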

  • A fine-tuned vision transformer-based on limited dataset for facial expression recognition
    Rio Febrian, Ronald Richie Huang, Nicholas Setiono, Dimas Ramdhan, and Andry Chowanda

    Elsevier BV

  • Development of a Secure Web Based Application to Automate Data Synchronization and Processing
    Hansen Artajaya, Julieta, Jose Giancarlos, Jurike V. Moniaga, and Andry Chowanda

    Elsevier BV


  • Machine Learning for Threat Detection
    Dustin Sutanto, Febri Fransisca, Steven Liem, Andry Chowanda, Yohan Muliono, and Zahra Nabila Izdihar

    IEEE
    These days, almost everyone in the world uses the Internet, and it is constantly evolving. The scale of cyber attacks has increased as cybercriminals have become more sophisticated in employing threats. Since the 1980s, Intrusion Detection Systems (IDS) have been developed to defend computer networks against malicious activity and unauthorized access. Companies implement an IDS as part of their cyber security strategy to protect their data and networks from potential threats; the IDS plays a crucial role in detecting abnormal traffic and protecting digital assets. Furthermore, a high-accuracy IDS is needed, since false positives from the IDS itself can cause various problems. To meet the need for a high-accuracy, trainable IDS, we experimented with a hybrid system that combines machine learning with a traditional IDS, resulting in higher accuracy and a lower error rate. By adjusting the parameters, the system achieved an accuracy of 99.66% in classifying the traffic.

  • Security Challenges and Issues in Cloud Computing
    Rika Zakiyyah, Daniel Permana, Devan Valencio, Andry Chowanda, Yohan Muliono, and Zahra Nabila Izdihar

    IEEE
    Managing data has changed significantly because of cloud computing, which offers scalable, flexible, and reasonably priced solutions to enterprises and individuals alike, with providers such as Amazon, Google, and Microsoft expanding their infrastructure to meet customer demand. With such advantages come disadvantages, notably security problems. This paper discusses security measures such as encryption, access control, multi-factor authentication, regular audits, secure configurations, and incident response planning, and how they address threats such as malicious insiders, data abuse, unsecured interfaces and APIs, shared technology complications, data loss or leakage, hijacking, and enigmatic risk profiles. The study examines and evaluates a range of cloud computing security concerns through a systematic analysis of the literature. The goal of the research is to raise awareness of the need for cloud computing security and to offer possible remedies. One of its contributions is to identify and classify network security threats and vulnerabilities in cloud computing through an extensive examination of prior research and empirical investigations. In the end, the study uncovers these vulnerabilities and their practical implications through a thorough analysis of network security in cloud computing.

  • Evaluation of REAL-ESRGAN Using Different Types of Image Degradation
    Kus Andriadi, Muhammad Zarlis, Yaya Heryadi, and Andry Chowanda

    IEEE
    This study evaluates the performance of the REAL-ESRGAN [1] model on images with varying levels of degradation using the DIV2K dataset [2], specifically its Wild, Mild, Difficult, and x8 subsets. REAL-ESRGAN was created to solve super-resolution problems and aims to produce high-resolution images from low-resolution ones. Experiments were conducted at scales of x2 and x4, and performance was measured using full-reference metrics (LPIPS, PSNR, SSIM) and no-reference metrics (NIQE, MANIQA, CLIPIQA, and PI). The results were good, especially at the x2 scale, which showed higher PSNR and SSIM scores, lower LPIPS and NIQE values, and enhanced visual and perceptual quality. The model faced more significant challenges with the Wild and Difficult subsets because of their more complex degradations and compression artifacts, as seen in the unstable full-reference and no-reference metric results. By contrast, the Mild and x8 subsets yielded better results on both kinds of metrics, and their computational cost outperformed the rest of the dataset as well. This study shows the strengths and limitations of REAL-ESRGAN in handling different levels of image degradation. Future research should enhance the model to tackle the degradation found in the Wild and Difficult subsets, ideally while maintaining its computational cost.
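Of the full-reference metrics listed, PSNR is the simplest to state. A minimal pixel-wise implementation for flattened 8-bit images (a sketch, not the paper's evaluation code) is:

```python
import math

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equal-length
    flattened 8-bit images; higher means closer to the original."""
    mse = sum((a - b) ** 2 for a, b in zip(original, reconstructed)) / len(original)
    if mse == 0:
        return float("inf")   # identical images
    return 10 * math.log10(peak ** 2 / mse)
```

PSNR compares pixels directly, which is why perceptual metrics such as LPIPS or NIQE are reported alongside it: a super-resolved image can score well on PSNR yet still look blurry.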

  • Research on the Utilization of Cloud Anchors in 5G Mobile Networks for Interactive Augmented Reality Display and Sculptural Art
    He-Lin Luo, Pei-Ying Lin, Tin-Kai Chen, and Andry Chowanda

    IEEE
    In recent years, Augmented Reality (AR) technology has seen widespread adoption in museum art displays, progressing from AR markers to AR SLAM, and now to the emerging trend of AR Cloud Anchor with the advent of 5G mobile networks. AR Cloud Anchor, a crucial component of this third generation of AR, allows for the synchronized display of AR virtual objects by sharing anchors in the cloud, offering innovative possibilities beyond traditional standalone AR experiences. Despite its potential, there are limited cases of AR Cloud Anchor implementation in art exhibitions. This research focuses on the technical application of AR Cloud Anchor in art exhibitions, specifically utilizing sculptures as the medium. The study's design encompasses four main aspects: the process of 3D scanning for sculptures, defining AR anchors for the sculpture medium, creator experience design, and user interaction design. The research aims to provide insights into the technical implementation of AR Cloud Anchor, offering valuable information on data storage, the artistic impact of augmented sculptures, and user preferences for AR applications extended by Cloud Anchor technology. Preliminary conclusions suggest that AR Cloud Anchor technology brings significant innovation to art exhibitions, enhancing audience engagement and making exhibitions more attractive and participatory. Additionally, there is a correlation between users' AR technology experiences and their perceptions of app usability and AR technology's impact on art exhibitions, highlighting the importance of user education and experience in promoting AR technology. Users also express interest in the intervention of virtual sculptures in public spaces, indicating substantial potential for AR technology in enriching artistic experiences.

  • Evaluating Back Translation and Misspelling Correction Utilization on Indonesian AES
    Elvina Amadea Tanaka, Steven Christian, Anderies, and Andry Chowanda

    IEEE
    This paper tackles Automatic Essay Scoring (AES), a method used to predict whether an answer to a question is correct, based on provided guidelines and criteria. AES helps teachers score student answers against the guidelines fed into the model, saving time and decreasing the human-error rate. We chose the UKARA challenge dataset, an Indonesian binary classification dataset consisting of two different questions separated into datasets A and B. Prior studies utilized a stacking approach with XGBoost and a neural network model, achieving F1-scores of 88.4% and 75.9% for problems A and B respectively. The latest study on the UKARA dataset used SBERT sentence embeddings and a neural network model, resulting in F1-scores of 89.4% for problem A and 75.7% for problem B. Although the F1-score for problem A was satisfactory, the score for problem B remained relatively low, so this research aims to improve the performance on problem B. It also discusses data augmentation techniques that can be applied to the dataset, such as back translation and misspelling correction with Peter Norvig's method. Following the latest research, we use SBERT sentence embeddings and neural networks for the model. The experiment improved the result for problem B to a maximum F1-score of 77.2%, along with 89.7% for problem A.
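Peter Norvig's misspelling-correction method mentioned above rests on generating every candidate string one edit away from a word; a compact sketch of that candidate generator (alphabet restricted to lowercase ASCII for illustration, so it would need adapting for Indonesian text) is:

```python
import string

def edits1(word):
    """All strings one edit away: deletes, transposes, replaces, inserts."""
    letters = string.ascii_lowercase
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [L + R[1:] for L, R in splits if R]
    transposes = [L + R[1] + R[0] + R[2:] for L, R in splits if len(R) > 1]
    replaces = [L + c + R[1:] for L, R in splits if R for c in letters]
    inserts = [L + c + R for L, R in splits for c in letters]
    return set(deletes + transposes + replaces + inserts)
```

The full method then ranks candidates by their frequency in a reference corpus and picks the most probable known word.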

  • PetVision: Improved MobileViTv3-Based Image Classification for Cat Breed Identification
    Jackson Ang, Shannie Tannaris, Andry Chowanda, and Anderies Anderies

    IEEE
    Numerous animals are kept as pets, and the cat is one of them. Cats come in different breeds, and each breed requires different treatment and food. However, only a few cat caretakers can identify the breed of their pet, which has led to misidentification and mistreatment with potentially fatal consequences. Technology can therefore help pet owners recognize their pet's breed and avoid serious problems. This study identifies cat breeds using a machine learning approach. Instead of using either a Convolutional Neural Network (CNN) or a Vision Transformer (ViT) alone, a hybrid model delivers the best performance. The hybrid model applied in this study is MobileViT, specifically MobileViTv3, tested on a dataset comprising images of 12 cat breeds. The methodology involves extracting the 12 cat breeds from the Oxford-IIIT Pet Dataset; these images undergo preprocessing and data augmentation to enhance training. Models including PetVision are then trained on this dataset, and performance is evaluated using the accuracy and parameter count of each model. PetVision produced an accuracy of 75% with 1.2 million parameters over 200 epochs. These results are satisfactory and can aid pet owners in providing appropriate care for their pets.

  • Implementation of Dynamic Image for Facial Expression Recognition on Indonesian Facial Expression Dataset
    Irene Anindaputri Iswanto, Andry Chowanda, Haryono Soeparno, and Widodo Budiharto

    IEEE
    In recent times, considerable attention has been directed towards Facial Expression Recognition (FER) due to its extensive utility across diverse domains. However, the universality of facial expressions has been challenged by studies suggesting that cultural background significantly influences the perception and recognition of emotions. This paper addresses the need for culturally specific datasets in FER tasks, particularly in underrepresented regions like Indonesia. The study introduces dynamic images as an alternative input representation for facial expression recognition tasks, aiming to assess their efficacy on the Indonesian Mixed Emotions Dataset (IMED). Through experimentation with the EfficientNet model, the performance of dynamic images is compared with static image and video inputs. Results indicate that dynamic images exhibit promising performance, with an accuracy of 94.28%. This outperforms static image datasets and nearly matches the performance of video-based models, which achieved an accuracy of 97.93%, despite using less data. Nonetheless, challenges such as data imbalance and the quality of generated dynamic images persist, suggesting avenues for further research and model refinement. This study provides valuable insights into methodological advancements in FER under limited dataset conditions, laying the groundwork for future developments in dynamic image-based facial expression recognition algorithms.

  • Modeling Emotion Recognition System from Facial Images Using Convolutional Neural Networks
    Jasen Wanardi Kusno and Andry Chowanda

    Universitas Bina Nusantara
    Emotion classification is the process of identifying human emotions, and implementing technology to help people with it is a relatively popular research field. To date, most work has automated the recognition of facial cues (e.g., expressions) from several modalities (e.g., image, video, audio, and text). Deep learning architectures such as Convolutional Neural Networks (CNN) demonstrate promising results for emotion recognition. The research aims to build a CNN model while improving accuracy and performance. Two models are proposed with hyperparameter tuning, trained on two datasets and compared with other existing architectures. The two datasets are Facial Expression Recognition 2013 (FER2013) and Extended Cohn-Kanade (CK+), both commonly used in FER. In addition, the proposed model is compared with a previous model using the same setting and dataset. The results show that the proposed models gain higher accuracy on the CK+ dataset, while some models on the FER2013 dataset have lower accuracy than in previous research; the FER2013-trained model's lower accuracy is due to overfitting, whereas the CK+-trained model shows no overfitting problem. The research mainly explores the CNN model due to limited resources and time.


  • Object Detection Model for Web-Based Physical Distancing Detector Using Deep Learning
    Andry Chowanda, Ananda Kevin Refaldo Sariputra, and Ricardo Gunawan Prananto

    Universitas Bina Nusantara
    The pandemic has changed the way people interact with each other in public settings. As a result, social distancing has been implemented to reduce the virus's spread, and detecting it automatically is paramount in reducing menial manual tasks. There are several ways to detect social distance in public, and one is through a surveillance camera; however, this is not an easy task, as problems such as lighting, occlusion, and camera resolution can occur during detection. The research aims to develop a physical distancing detector system adjusted to Indonesian rules and conditions, especially in Jakarta, using deep learning (i.e., the YOLOv4 architecture with the Darknet framework) and the CrowdHuman dataset. The detection is done by reading the source video, measuring the distance between individuals, and determining crowds of individuals close to each other. The training uses CSPDarknet53 and VGG16 backbones in the YOLOv4 and YOLOv4-Tiny architectures with various hyperparameters, and several explorations are made to find the best combination of architectures and fine-tune them. The research successfully detects crowds at the 16th training run, with an mAP50 of 71.59% (74.04% AP50) and 16.2 frames per second (FPS) displayed on the web. The input size is essential for determining the model's accuracy and speed, and the model can be implemented in a web-based application.
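Once individuals are detected, the distance check itself reduces to pairwise centroid distances between bounding boxes. A hypothetical sketch (the box format and threshold are assumptions, and real systems must first calibrate pixels to metres):

```python
import math
from itertools import combinations

def close_pairs(boxes, min_distance):
    """Return index pairs of bounding boxes (x, y, w, h) whose
    centroids are closer than min_distance (in calibrated units)."""
    centroids = [(x + w / 2, y + h / 2) for x, y, w, h in boxes]
    pairs = []
    for (i, a), (j, b) in combinations(enumerate(centroids), 2):
        if math.dist(a, b) < min_distance:
            pairs.append((i, j))
    return pairs
```

Pairs flagged by such a check can then be grouped into the crowds the system reports on the web interface.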

  • Job Recommendation System based on Resume using Natural Language Processing and Distance-based Algorithm
    Hansen Artajaya, Julieta, Jose Giancarlos, Jurike V. Moniaga, and Andry Chowanda

    IEEE
    This research aims to assist students struggling to determine which internship positions to apply for by providing recommendations based on the skills and abilities stated in their CVs. Employing an experimental approach, key variables such as hard skills, soft skills, organizational experience, and job positions are extracted from student CVs, as well as from job descriptions and requirements obtained from a corresponding job list. The extraction process is tested with OCR (Optical Character Recognition) and several PDF reader libraries; manual analysis determined that the PDFPlumber library handles layouts most effectively using character location data. After extraction, the variables obtained from the resumes are compared to those obtained from the job descriptions and requirements using three distance-based algorithms. Results show that with Word2Vec vectorization, Euclidean distance yields the lowest average error value (0.051), followed by Jaccard distance (0.111) and cosine similarity (0.26). Attempts to enhance Jaccard distance by adding synonyms did not yield significant improvements. Additionally, employing TF-IDF for cosine similarity improved performance, resulting in an error value of 0.117. It can be concluded that the Euclidean distance algorithm demonstrated the most effective performance.
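The distance measures compared above are standard; over extracted skill sets and embedding vectors they can be sketched as follows (illustrative only, not the paper's pipeline):

```python
import math

def jaccard_distance(skills_a, skills_b):
    """1 - |A ∩ B| / |A ∪ B| over two sets of extracted skills."""
    a, b = set(skills_a), set(skills_b)
    return 1 - len(a & b) / len(a | b)

def cosine_similarity(u, v):
    """Dot product over norms for two embedding vectors
    (e.g. averaged Word2Vec or TF-IDF vectors)."""
    dot = sum(x * y for x, y in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))
```

Jaccard works on discrete skill tokens, while cosine (and Euclidean) distance operate on the vectorized text, which is why the choice of vectorization (Word2Vec vs. TF-IDF) changes their relative performance.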

  • Portuguese Meals Image Recognition Using CNN Models
    Johanes, Devin Jonathan, Anderies, and Andry Chowanda

    IEEE
    Recent research in deep learning has played a crucial role in many areas, especially image classification. Processing and analyzing large volumes of visual data has proven a huge success for deep learning techniques, which imitate the neural networks found in the human brain. This paper aims to identify Portuguese cuisine using deep learning techniques by creating and comparing three different models: a Multi-Layer Perceptron (MLP) as an Artificial Neural Network (ANN), Long Short-Term Memory (LSTM) as a Recurrent Neural Network (RNN), and ResNet-50, a pre-trained Convolutional Neural Network (CNN). Each model is trained on a carefully chosen dataset of Portuguese meals representing many culinary types. We evaluate the models in detail using accuracy and loss performance criteria. Our experimental results indicate that the ResNet-50 model outperforms the others with a 90% test accuracy and a 97% training accuracy. Still, more investigation is needed.

  • Developing a Robust Face Recognition Algorithm with Anti Spoofing Using InceptionV3 and YOLOv8
    Anselyus Patrick Siswanto, Aaron Scott Buana, Anderies, and Andry Chowanda

    IEEE
    With the development of technology comes the rise of feature-based recognition systems, one of which is the face recognition system. Despite rapid improvements, such systems can be susceptible to spoofing. Our research addresses this issue by combining two models: one focused on face recognition and the other on detecting spoofing. Utilizing InceptionV3's strong feature extraction performance and classification accuracy and YOLOv8, known for its real-time object detection capabilities, we develop a combined model capable of accurate face recognition while dealing with spoofing attacks. The algorithm first identifies the captured input as real or fake using YOLOv8; once the input is confirmed as real, the process continues with facial recognition using InceptionV3. Testing showed that the algorithm performs accurately on both tasks. However, the integration resulted in a low frame rate due to its high computational power requirement. Future work aims to enhance the efficiency of the model, either through optimization or by utilizing hardware with higher computational power, to create a robust system that can be used, for example, in a face recognition attendance system.

  • YouTube Videos Clickbait Classification Utilizing Text Summarization and Similarity Score via LLM
    Delvin Hu, Anderies, and Andry Chowanda

    IEEE
    Clickbait detection checks videos for clickbait, preventing users from wasting their precious time on videos that might mislead them. Past research has used methods such as deep learning algorithms that take into account the video's thumbnail, its statistics, and the comment section; other research uses LLMs for detection. With this in mind, the author introduces a new approach to clickbait detection that combines YouTube statistics and LLMs, adding YouTube transcripts as a determining factor. This is done with text summarization and OpenAI's ChatGPT. The video transcript is extracted and summarized, and ChatGPT then creates a new title suitable for the summarized transcript. The generated title is compared to the original title and given a similarity score, which is used as a new feature for the model. ChatGPT is also asked to predict the presence of clickbait directly from the title and the summarized transcript. All the features are used to train a new machine learning model. The classification algorithms include Logistic Regression, Naïve Bayes, Random Forest, Multi-Layer Perceptron, and Support Vector Machine. Random Forest achieved the highest F1-score of all the models, at 87%.

RECENT SCHOLAR PUBLICATIONS

  • Using CNN and Transformer Model for Unimodal Speech Emotion Recognition on MELD and IEMOCAP
    C Aurelio, A Chowanda
    2025 International Conference on Advancement in Data Science, E-learning and 2025

  • Hyperparameter tuning for deep learning model used in multimodal emotion recognition data
    F Widardo, A Chowanda
    Bulletin of Electrical Engineering and Informatics 14 (1), 261-267 2025

  • KEYSTROKE DYNAMICS ON MULTI-SESSION AND UNCONTROLLED SETTINGS USING CNN BI-LSTM
    SR PUTRA, A CHOWANDA
    Journal of Theoretical and Applied Information Technology 103 (2) 2025

  • IMPROVING COLLECTION OF DATA TYPE EVIDENCE AND THE INTEGRITY OF EVIDENCE COLLECTED USING SHA-256 HASHING ALGORITHM FOR WEB BROWSERS
    NURHM SHAH, A ASMAWI, SMD YASIN, BP NARENDRA, AK KHAN, ...
    Journal of Theoretical and Applied Information Technology 102 (2) 2025

  • YouTube Videos Clickbait Classification Utilizing Text Summarization and Similarity Score via LLM
    D Hu, A Chowanda
    2024 6th International Conference on Cybernetics and Intelligent System 2024

  • Portuguese Meals Image Recognition Using CNN Models
    D Jonathan, A Chowanda
    2024 6th International Conference on Cybernetics and Intelligent System 2024

  • Developing a Robust Face Recognition Algorithm with Anti Spoofing Using InceptionV3 and YOLOv8
    AP Siswanto, AS Buana, A Chowanda
    2024 6th International Conference on Cybernetics and Intelligent System 2024

  • Simulation-Based Optimization of Autonomous Vehicles Using Genetic Algorithm
    KF Hanson, KK Al Biruni, A Chowanda
    2024 6th International Conference on Cybernetics and Intelligent System 2024

  • Developing Campus Digital Twin with Integrated 3D Point Clouds and 3D Modeling Techniques
    RN Utama, J Sutjiatmadja, F Osmond, L Stefan, AAS Gunawan, ...
    2024 6th International Conference on Cybernetics and Intelligent System 2024

  • Security Challenges and Issues in Cloud Computing
    R Zakiyyah, D Permana, D Valenecio, A Chowanda, Y Muliono, ...
    2024 International Symposium on Networks, Computers and Communications 2024

  • Machine Learning for Threat Detection
    D Sutanto, F Fransisca, S Liem, A Chowanda, Y Muliono, ZN Izdihar
    2024 International Symposium on Networks, Computers and Communications 2024

  • Modeling Emotion Recognition System from Facial Images Using Convolutional Neural Networks
    JW Kusno, A Chowanda
    CommIT (Communication and Information Technology) Journal 18 (2), 251-259 2024

  • IPerFEX-2023: Indonesian personal financial entity extraction using indoBERT-BiGRU-CRF model
    E Dave, A Chowanda
    Journal of Big Data 11 (1), 139 2024

  • Exploring the Impact of Image Downscaling Algorithms on Color Perception
    BM Setiawan, AK Sumowidjojo, A Chowanda
    2024 International Seminar on Application for Technology of Information and 2024

  • Evaluation of REAL-ESRGAN Using Different Types of Image Degradation
    K Andriadi, M Zarlis, Y Heryadi, A Chowanda
    2024 International Conference on ICT for Smart Society (ICISS), 1-6 2024

  • PetVision: Improved MobileViTv3-Based Image Classification for Cat Breed Identification
    J Ang, S Tannaris, A Chowanda, A Anderies
    2024 5th International Conference on Artificial Intelligence and Data 2024

  • Evaluating Back Translation and Misspelling Correction Utilization on Indonesian AES
    EA Tanaka, S Christian, A Chowanda
    2024 5th International Conference on Artificial Intelligence and Data 2024

  • Using CNN Along with Transfer Learning to Predict Wildfires in Borneo's Topography
    R Limec, R Kangson, A Chowanda
    2024 International Conference on Information Management and Technology 2024

  • Implementation of Dynamic Image for Facial Expression Recognition on Indonesian Facial Expression Dataset
    IA Iswanto, A Chowanda, H Soeparno, W Budiharto
    2024 IEEE International Conference on Artificial Intelligence in Engineering 2024

  • Fitcam: detecting and counting repetitive exercises with deep learning
    F Japhne, K Janada, A Theodorus, A Chowanda
    Journal of Big Data 11 (1), 101 2024

MOST CITED SCHOLAR PUBLICATIONS

  • Text based personality prediction from multiple social media data sources using pre-trained language model and model averaging
    H Christian, D Suhartono, A Chowanda, KZ Zamli
    Journal of Big Data 8 (1), 68 2021
    Citations: 136

  • Fast object detection for quadcopter drone using deep learning
    W Budiharto, AAS Gunawan, JS Suroso, A Chowanda, A Patrik, G Utama
    2018 3rd international conference on computer and communication systems 2018
    Citations: 93

  • GNSS-based navigation systems of autonomous drone for delivering items
    A Patrik, G Utama, AAS Gunawan, A Chowanda, JS Suroso, R Shofiyanti, ...
    Journal of Big Data 6, 1-14 2019
    Citations: 88

  • Mapping and 3D modelling using quadrotor drone and GIS software
    W Budiharto, E Irwansyah, JS Suroso, A Chowanda, H Ngarianto, ...
    Journal of Big Data 8, 1-12 2021
    Citations: 74

  • Implementation of optical character recognition using tesseract with the javanese script target in android application
    GA Robby, A Tandra, I Susanto, J Harefa, A Chowanda
    Procedia Computer Science 157, 499-505 2019
    Citations: 65

  • A review and progress of research on autonomous drone in agriculture, delivering items and geographical information systems (GIS)
    W Budiharto, A Chowanda, AAS Gunawan, E Irwansyah, JS Suroso
    2019 2nd world symposium on communication engineering (WSCE), 205-209 2019
    Citations: 61

  • Exploring text-based emotions recognition machine learning techniques on social media conversation
    A Chowanda, R Sutoyo, S Tanachutiwat
    Procedia Computer Science 179, 821-828 2021
    Citations: 58

  • Enhancing game experience with facial expression recognition as dynamic balancing
    MT Akbar, MN Ilmi, IV Rumayar, J Moniaga, TK Chen, A Chowanda
    Procedia Computer Science 157, 388-395 2019
    Citations: 53

  • Designing an emotionally realistic chatbot framework to enhance its believability with AIML and information states
    R Sutoyo, A Chowanda, A Kurniati, R Wongso
    Procedia Computer Science 157, 621-628 2019
    Citations: 50

  • Playing with social and emotional game companions
    A Chowanda, M Flintham, P Blanchfield, M Valstar
    Intelligent Virtual Agents: 16th International Conference, IVA 2016, Los 2016
    Citations: 50

  • Facial expression recognition using bidirectional LSTM-CNN
    R Febrian, BM Halim, M Christina, D Ramdhan, A Chowanda
    Procedia Computer Science 216, 39-47 2023
    Citations: 45

  • Erisa: Building emotionally realistic social game-agents companions
    A Chowanda, P Blanchfield, M Flintham, M Valstar
    Intelligent Virtual Agents: 14th International Conference, IVA 2014, Boston 2014
    Citations: 41

  • Clustering models for hospitals in Jakarta using fuzzy c-means and k-means
    KE Setiawan, A Kurniawan, A Chowanda, D Suhartono
    Procedia Computer Science 216, 356-363 2023
    Citations: 38

  • Computational models of emotion, personality, and social relationships for interactions in games
    A Chowanda, P Blanchfield, M Flintham, M Valstar
    The 2016 international conference on autonomous agents & multiagent systems 2016
    Citations: 38

  • Facial expression recognition as dynamic game balancing system
    JV Moniaga, A Chowanda, A Prima, MDT Rizqi
    Procedia Computer Science 135, 361-368 2018
    Citations: 30

  • Spatial autoregressive (SAR) model for average expenditure of Papua Province
    SD Permai, R Jauri, A Chowanda
    Procedia Computer Science 157, 537-542 2019
    Citations: 29

  • Enhancing player experience in game with affective computing
    D Setiono, D Saputra, K Putra, JV Moniaga, A Chowanda
    Procedia Computer Science 179, 781-788 2021
    Citations: 28

  • Separable convolutional neural networks for facial expressions recognition
    A Chowanda
    Journal of Big Data 8 (1), 132 2021
    Citations: 23

  • Recurrent neural network to deep learn conversation in indonesian
    A Chowanda, AD Chowanda
    Procedia computer science 116, 579-586 2017
    Citations: 23

  • Perancangan Game Edukasi Bertemakan Sejarah Indonesia (Ken Arok dan Buto Ijo) [Designing an Educational Game on Indonesian History (Ken Arok and Buto Ijo)]
    A Chowanda, YL Prasetio
    Seminar Nasional Matematika dan Teknologi Informasi & Komunikasi 2012, 151-155 2012
    Citations: 21