Erik Billing

@his.se

School of Informatics
University of Skövde

RESEARCH INTERESTS

Cognitive modeling, Human-Robot Interaction

42 Scopus Publications

SCOPUS PUBLICATIONS

  • Previous Experience Matters: An in-Person Investigation of Expectations in Human–Robot Interaction
    Julia Rosén, Jessica Lindblom, Maurice Lamb, and Erik Billing

    Springer Science and Business Media LLC
The human–robot interaction (HRI) field goes beyond the mere technical aspects of developing robots, often investigating how humans perceive robots. Human perceptions and behavior are determined, in part, by expectations. Given the impact of expectations on behavior, it is important to understand what expectations individuals bring into HRI settings and how those expectations may affect their interactions with the robot over time. For many people, social robots are not a common part of their experiences, thus any expectations they have of social robots are likely shaped by other sources. As a result, individual expectations coming into HRI settings may be highly variable. Although there has been some recent interest in expectations within the field, there is an overall lack of empirical investigation into their impact on HRI, especially in-person robot interactions. To this end, a within-subject in-person study (N = 31) was performed where participants were instructed to engage in open conversation with the social robot Pepper during two 2.5-minute sessions. The robot was equipped with a custom dialogue system based on the GPT-3 large language model, allowing autonomous responses to verbal input. Participants' affective changes towards the robot were assessed using three questionnaires: NARS and RAS, commonly used in HRI studies, and Closeness, based on the IOS scale. In addition to the three standard questionnaires, a custom question was administered to capture participants' views on robot capabilities. All measures were collected three times: before the interaction with the robot, after the first interaction with the robot, and after the second interaction with the robot. Results revealed that participants largely retained the expectations they had coming into the study and, in contrast to our hypothesis, none of the measured scales moved towards a common mean. Moreover, previous experience with robots was revealed to be a major factor in how participants experienced the robot in the study. These results could be interpreted as implying that expectations of robots are to a large degree formed before interactions with the robot, and that these expectations do not necessarily change as a result of the interaction. The results reveal a strong connection to how expectations are studied in social psychology and human-human interaction, underpinning their relevance for HRI research.

  • Kinematic primitives in action similarity judgments: A human-centered computational model
Vipul Nair, Paul Hemeren, Alessia Vignolo, Nicoletta Noceti, Elena Nicora, Alessandra Sciutti, Francesco Rea, Erik Billing, Mehul Bhatt, Francesca Odone, et al.

    Institute of Electrical and Electronics Engineers (IEEE)

  • Language Models for Human-Robot Interaction
    Erik Billing, Julia Rosén, and Maurice Lamb

    ACM
Recent advances in large-scale language models have significantly changed the landscape of automatic dialogue systems and chatbots. We believe that these models also have great potential for changing the way we interact with robots. Here, we present the first integration of the OpenAI GPT-3 language model for the Aldebaran Pepper and Nao robots. The present work transforms the text-based API of GPT-3 into an open verbal dialogue with the robots. The system will be presented live during the HRI2023 conference, and the source code of this integration is shared with the hope that it will serve the community in designing and evaluating new dialogue systems for robots.
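
    The abstract does not detail the integration itself, but the core loop of such a system is easy to sketch. Below is a minimal illustration, assuming the legacy OpenAI Completion API (current in the GPT-3 era) and the NAOqi Python SDK's ALProxy/ALTextToSpeech interface; the robot address, prompt template, and parameter values are hypothetical, not the released implementation.

    ```python
    # Minimal sketch of a GPT-3-driven spoken dialogue loop for Pepper/Nao.
    # Assumes the legacy OpenAI Completion API and the NAOqi Python SDK;
    # the robot IP, prompt template, and parameters are illustrative only.
    import openai
    from naoqi import ALProxy

    openai.api_key = "YOUR_API_KEY"
    tts = ALProxy("ALTextToSpeech", "192.168.1.10", 9559)  # hypothetical robot IP

    history = "The following is a conversation between a human and the robot Pepper.\n"

    def respond(user_utterance):
        """Append the user's utterance, query GPT-3, and speak the reply."""
        global history
        history += "Human: " + user_utterance + "\nPepper:"
        completion = openai.Completion.create(
            engine="text-davinci-003",
            prompt=history,
            max_tokens=60,
            temperature=0.7,
            stop=["Human:"],  # stop before the model invents the next user turn
        )
        reply = completion.choices[0].text.strip()
        history += " " + reply + "\n"
        tts.say(reply)  # verbal output on the robot
        return reply
    ```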

  • How to train a self-driving vehicle: On the added value (or lack thereof) of curriculum learning and replay buffers
    Sara Mahmoud, Erik Billing, Henrik Svensson, and Serge Thill

    Frontiers Media SA
Learning only from real-world collected data can be unrealistic and time-consuming in many scenarios. One alternative is to use synthetic data as learning environments to learn rare situations, and replay buffers to speed up learning. In this work, we examine the hypothesis of how the creation of the environment affects the training of a reinforcement learning agent through auto-generated environment mechanisms. We take the autonomous vehicle as an application. We compare the effect of two approaches to generate training data for artificial cognitive agents. We consider the added value of curriculum learning (just as in human learning) as a way to structure novel training data that the agent has not seen before, as well as that of using a replay buffer to train further on data the agent has seen before. In other words, the focus of this paper is on the characteristics of the training data rather than on learning algorithms. We therefore use two tasks that are commonly trained early on in autonomous vehicle research: lane keeping and pedestrian avoidance. Our main results show that curriculum learning indeed offers an additional benefit over a vanilla reinforcement learning approach (using Deep Q-learning), but the replay buffer actually has a detrimental effect in most (but not all) combinations of data generation approaches considered here. The benefit of curriculum learning does depend on the existence of a well-defined difficulty metric with which various training scenarios can be ordered. In the lane-keeping task, we can define it as a function of the curvature of the road, where steeper and more frequent curves make the task more difficult. Defining such a difficulty metric in other scenarios is not always trivial. In general, the results of this paper emphasize both the importance of considering data characterization, such as curriculum learning, and the importance of defining an appropriate difficulty metric for the task.
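
    Since the abstract's key condition is a well-defined difficulty metric, a small sketch may make it concrete. The snippet below orders lane-keeping scenarios by a curvature-based difficulty score, where sharper and more frequent curves score higher; the Scenario structure and the exact weighting are illustrative assumptions, not the paper's implementation.

    ```python
    # Sketch of curriculum ordering for lane-keeping scenarios: difficulty grows
    # with how sharp and how frequent the road's curves are. The data structure
    # and weighting below are illustrative assumptions only.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Scenario:
        curvatures: List[float]  # absolute curvature (1/m) of each road segment

    def difficulty(s: Scenario) -> float:
        """Higher for steeper and more frequent curves (the lane-keeping metric above)."""
        if not s.curvatures:
            return 0.0
        curved = [c for c in s.curvatures if c > 0.01]  # ignore near-straight segments
        if not curved:
            return 0.0
        steepness = max(curved)                      # sharpest curve in the scenario
        frequency = len(curved) / len(s.curvatures)  # fraction of curved segments
        return steepness * (1.0 + frequency)

    def curriculum(scenarios: List[Scenario]) -> List[Scenario]:
        """Order training scenarios from easy to hard before training the agent."""
        return sorted(scenarios, key=difficulty)
    ```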

  • Advantages of Multimodal versus Verbal-Only Robot-to-Human Communication with an Anthropomorphic Robotic Mock Driver
    Tim Schreiter, Lucas Morillo-Mendez, Ravi T. Chadalavada, Andrey Rudenko, Erik Billing, Martin Magnusson, Kai O. Arras, and Achim J. Lilienthal

    IEEE
    Robots are increasingly used in shared environments with humans, making effective communication a necessity for successful human-robot interaction. In our work, we study a crucial component: active communication of robot intent. Here, we present an anthropomorphic solution where a humanoid robot communicates the intent of its host robot acting as an “Anthropomorphic Robotic Mock Driver” (ARMoD). We evaluate this approach in two experiments in which participants work alongside a mobile robot on various tasks, while the ARMoD communicates a need for human attention, when required, or gives instructions to collaborate on a joint task. The experiments feature two interaction styles of the ARMoD: a verbal-only mode using only speech and a multimodal mode, additionally including robotic gaze and pointing gestures to support communication and register intent in space. Our results show that the multimodal interaction style, including head movements and eye gaze as well as pointing gestures, leads to more natural fixation behavior. Participants naturally identified and fixated longer on the areas relevant for intent communication, and reacted faster to instructions in collaborative tasks. Our research further indicates that the ARMoD intent communication improves engagement and social interaction with mobile robots in workplace settings.

  • Applying the Social Robot Expectation Gap Evaluation Framework
    Julia Rosén, Erik Billing, and Jessica Lindblom

    Springer Nature Switzerland

  • Where to from here? On the future development of autonomous vehicles from a cognitive systems perspective
    Sara Mahmoud, Erik Billing, Henrik Svensson, and Serge Thill

    Elsevier BV

  • Eye-Tracking Beyond Peripersonal Space in Virtual Reality: Validation and Best Practices
    Maurice Lamb, Malin Brundin, Estela Perez Luque, and Erik Billing

    Frontiers Media SA
Recent developments in commercial virtual reality (VR) hardware with embedded eye-tracking create tremendous opportunities for human-subjects researchers. Accessible eye-tracking in VR opens new opportunities for highly controlled experimental setups in which participants can engage with novel 3D digital environments. However, because VR-embedded eye-tracking differs from the majority of historical eye-tracking research, in both providing for relatively unconstrained movement and stimulus presentation distances, there is a need for greater discussion around methods for implementation and validation of VR-based eye-tracking tools. The aim of this paper is to provide a practical introduction to the challenges of, and methods for, 3D gaze-tracking in VR with a focus on best practices for results validation and reporting. Specifically, we first identify and define challenges and methods for collecting and analyzing 3D eye-tracking data in VR. Then, we introduce a validation pilot study with a focus on factors related to 3D gaze tracking. The pilot study provides both a reference data point for a common commercial hardware/software platform (HTC Vive Pro Eye) and illustrates the proposed methods. One outcome of this study was the observation that the accuracy and precision of collected data may depend on stimulus distance, which has consequences for studies where stimuli are presented at varying distances. We also conclude that vergence is a potentially problematic basis for estimating gaze depth in VR and should be used with caution as the field moves towards a more established method for 3D eye-tracking.
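
    To illustrate why vergence is a fragile basis for gaze depth, the sketch below estimates the 3D gaze point as the midpoint of the shortest segment between the two eye rays. Beyond peripersonal space the rays become nearly parallel, so small angular noise yields large depth errors. The geometry is the standard closest-point-between-lines construction; the function and variable names are illustrative, not the paper's method.

    ```python
    # Sketch of vergence-based gaze-depth estimation: the 3D gaze point is the
    # midpoint of the shortest segment between the two eye rays. Names and
    # approach are illustrative, not the paper's implementation.
    import numpy as np

    def gaze_point(origin_l, dir_l, origin_r, dir_r):
        """Midpoint of closest approach between left and right gaze rays (3-vectors)."""
        o_l, o_r = np.asarray(origin_l, float), np.asarray(origin_r, float)
        d_l = np.asarray(dir_l, float); d_l /= np.linalg.norm(d_l)
        d_r = np.asarray(dir_r, float); d_r /= np.linalg.norm(d_r)
        w = o_l - o_r
        a, b, c = d_l @ d_l, d_l @ d_r, d_r @ d_r
        d, e = d_l @ w, d_r @ w
        denom = a * c - b * b              # ~0 when the rays are (near-)parallel
        if abs(denom) < 1e-9:              # vergence then carries no depth signal
            return None
        t_l = (b * e - c * d) / denom      # closest-point parameter, left ray
        t_r = (a * e - b * d) / denom      # closest-point parameter, right ray
        return (o_l + t_l * d_l + o_r + t_r * d_r) / 2.0
    ```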

  • Understanding Eye-Tracking in Virtual Reality


  • A Learning Tracker using Digital Biomarkers for Autistic Preschoolers – Practice Track –
    Gurmit Sandhu, Anne Kilburg, Andreas Martin, Charuta Pande, Hans Friedrich Witschel, Emanuele Laurenzi, and Erik Billing

    EasyChair
Preschool children, when diagnosed with Autism Spectrum Disorder (ASD), often experience a long and painful journey on their way to self-advocacy. Access to standard of care is poor, with long waiting times and the feeling of stigmatization in many social settings. Early interventions in ASD have been found to deliver promising results, but have a high cost for all stakeholders. Some recent studies have suggested that digital biomarkers (e.g., eye gaze), tracked using affordable wearable devices such as smartphones or tablets, could play a role in identifying children with special needs. In this paper, we discuss the possibility of supporting neurodiverse children with technologies based on digital biomarkers which can help to a) monitor the performance of children diagnosed with ASD and b) predict those who would benefit most from early interventions. We describe an ongoing feasibility study that uses the “DREAM dataset”, stemming from a clinical study with 61 pre-school children diagnosed with ASD, to identify digital biomarkers informative for the child’s progression on tasks such as imitation of gestures. We describe our vision of a tool that will use these prediction models and that ASD pre-schoolers could use to train certain social skills at home. Our discussion includes the settings in which this usage could be embedded.

  • The Social Robot Expectation Gap Evaluation Framework
    Julia Rosén, Jessica Lindblom, and Erik Billing

    Springer International Publishing

  • Current Trends in Research and Application of Digital Human Modeling
    Lars Hanson, Dan Högberg, Erik Brolin, Erik Billing, Aitor Iriondo Pascual, and Maurice Lamb

    Springer International Publishing

  • Reporting of Ethical Conduct in Human-Robot Interaction Research
    Julia Rosén, Jessica Lindblom, and Erik Billing

    Springer International Publishing

  • Action similarity judgment based on kinematic primitives
    Vipul Nair, Paul Hemeren, Alessia Vignolo, Nicoletta Noceti, Elena Nicora, Alessandra Sciutti, Francesco Rea, Erik Billing, Francesca Odone, and Giulio Sandini

    IEEE
Understanding which features humans rely on in visually recognizing action similarity is a crucial step towards a clearer picture of human action perception from a learning and developmental perspective. In the present work, we investigate to what extent a computational model based on kinematics can determine action similarity and how its performance relates to human similarity judgments of the same actions. To this aim, twelve participants perform an action similarity task, and their performance is compared to that of a computational model solving the same task. The chosen model has its roots in developmental robotics and performs action classification based on learned kinematic primitives. The comparative experimental results show that both the model and human participants can reliably identify whether two actions are the same or not. However, the model produces more false hits and has a greater selection bias than human participants. A possible reason for this is the particular sensitivity of the model towards kinematic primitives of the presented actions. In a second experiment, human participants' performance on an action identification task indicated that they relied solely on kinematic information rather than on action semantics. The results show that both the model and human performance are highly accurate in an action similarity task based on kinematic-level features, which can provide an essential basis for classifying human actions.

  • Digital human modeling technology in virtual reality-studying aspects of users' experiences
Julia Rosén, Jessica Lindblom, Maurice Lamb, and Erik Billing


    Virtual Reality (VR) could be used to develop more representative Digital Human Modeling (DHM) simulations of work tasks for future Operators 4.0. Although VR allows users to experience the manikin ...

  • Automatic selection of viewpoint for digital human modelling
Erik Billing, Elpida Bampouni, and Maurice Lamb


    During concept design of new vehicles, work places, and other complex artifacts, it is critical to assess positioning of instruments and regulators from the perspective of the end user. One common ...

  • The DREAM Dataset: Supporting a data-driven study of autism spectrum disorder and robot enhanced therapy
Erik Billing, Tony Belpaeme, Haibin Cai, Hoang-Long Cao, Anamaria Ciocan, Cristina Costescu, Daniel David, Robert Homewood, Daniel Hernandez Garcia, Pablo Gómez Esteban, et al.

    Public Library of Science (PLoS)
We present a dataset of behavioral data recorded from 61 children diagnosed with Autism Spectrum Disorder (ASD). The data was collected during a large-scale evaluation of Robot Enhanced Therapy (RET). The dataset covers over 3000 therapy sessions and more than 300 hours of therapy. Half of the children interacted with the social robot NAO supervised by a therapist. The other half, constituting a control group, interacted directly with a therapist. Both groups followed the Applied Behavior Analysis (ABA) protocol. Each session was recorded with three RGB cameras and two RGBD (Kinect) cameras, providing detailed information about children’s behavior during therapy. This public release of the dataset comprises body motion, head position and orientation, and eye gaze variables, all specified as 3D data in a joint frame of reference. In addition, metadata including participant age, gender, and autism diagnosis (ADOS) variables are included. We release this data with the hope of supporting further data-driven studies towards improved therapy methods as well as a better understanding of ASD in general.
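
    As a rough illustration of how a release of this kind might be consumed, the sketch below reads one session's 3D streams and metadata. The file layout and field names here are entirely hypothetical; consult the actual DREAM dataset documentation for the real schema.

    ```python
    # Sketch of loading one therapy-session record of the kind described above.
    # The JSON schema (file layout, field names, units) is hypothetical; the
    # real dataset documentation defines the actual format.
    import json
    import numpy as np

    def load_session(path):
        """Load 3D skeleton, head pose, and gaze streams from one session file."""
        with open(path) as f:
            session = json.load(f)
        skeleton = np.array(session["skeleton"])       # (frames, joints, 3), metres
        head_rot = np.array(session["head_rotation"])  # (frames, 3), Euler angles
        gaze = np.array(session["eye_gaze"])           # (frames, 3), gaze direction
        meta = {k: session[k] for k in ("age", "gender", "ados_total")}
        return skeleton, head_rot, gaze, meta
    ```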

  • Robot-Enhanced Therapy: Development and Validation of Supervised Autonomous Robotic System for Autism Spectrum Disorders Therapy
Hoang-Long Cao, Pablo G. Esteban, Madeleine Bartlett, Paul Baxter, Tony Belpaeme, Erik Billing, Haibin Cai, Mark Coeckelbergh, Cristina Costescu, Daniel David, et al.

    Institute of Electrical and Electronics Engineers (IEEE)
Robot-assisted therapy (RAT) offers potential advantages for improving the social skills of children with autism spectrum disorders (ASDs). This article provides an overview of the developed technology and clinical results of the EC-FP7-funded Development of Robot-Enhanced therapy for children with AutisM spectrum disorders (DREAM) project, which aims to develop the next level of RAT from both clinical and technological perspectives, commonly referred to as robot-enhanced therapy (RET). Within this project, a supervised autonomous robotic system is collaboratively developed by an interdisciplinary consortium including psychotherapists, cognitive scientists, roboticists, computer scientists, and ethicists, which allows robot control to exceed classical remote-control methods, e.g., Wizard of Oz (WoZ), while ensuring safe and ethical robot behavior. Rigorous clinical studies are conducted to validate the efficacy of RET. Current results indicate that RET can achieve performance equivalent to that of standard human therapy for children with ASDs. We also discuss the next steps in developing RET robotic systems.

  • Social Robots in Therapy and Care
    Daniel Hernandez Garcia, Pablo G. Esteban, Hee Rin Lee, Marta Romeo, Emmanuel Senft, and Erik Billing

    IEEE
The Social Robots in Therapy workshop series aims at advancing research topics related to the use of robots in the contexts of social care and robot-assisted therapy (RAT). Robots in social care and therapy have been a long-time promise in HRI, as they offer the opportunity to improve patients' lives significantly. Multiple challenges have to be addressed for this, such as building platforms that work in proximity with patients, therapists, and health-care professionals; understanding user needs; developing adaptive and autonomous robot interactions; and addressing ethical questions regarding the use of robots with a vulnerable population. The full-day workshop follows last year's edition, which centered on how social robots can improve health-care interventions, how increasing the degree of autonomy of the robots might affect therapies, and how to overcome the ethical challenges inherent to the use of robot-assisted technologies. This second edition of the workshop focuses on the importance of equipping social robots with socio-emotional intelligence and the ability to perform meaningful and personalized interactions. The workshop aims to bring together researchers and industry experts in the fields of human-robot interaction, machine learning, and robots in health and social care. It will be an opportunity for all to share and discuss ideas, strategies, and findings to guide the design and development of robot-assisted systems for therapy and social care that can provide personalized, natural, engaging, and autonomous interactions with patients and health-care providers.

  • Sensing-enhanced therapy system for assessing children with autism spectrum disorders: A feasibility study
Haibin Cai, Yinfeng Fang, Zhaojie Ju, Cristina Costescu, Daniel David, Erik Billing, Tom Ziemke, Serge Thill, Tony Belpaeme, Bram Vanderborght, et al.

    Institute of Electrical and Electronics Engineers (IEEE)
It is evident that recently reported robot-assisted therapy systems for the assessment of children with autism spectrum disorder (ASD) lack autonomous interaction abilities and require significant human resources. This paper proposes a sensing system that automatically extracts and fuses sensory features, such as body motion features, facial expressions, and gaze features, and further assesses the children's behaviors by mapping them to therapist-specified behavioral classes. Experimental results show that the developed system is capable of interpreting characteristic data of children with ASD, and thus has the potential to increase the autonomy of robots under the supervision of a therapist and enhance the quality of the digital description of children with ASD. The research outcomes pave the way towards a feasible machine-assisted system for behavior assessment.
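
    As an illustration of the feature-fusion step described above, the sketch below concatenates per-sample features from the three sensing channels and trains a generic classifier against therapist-specified labels. The classifier choice, feature dimensions, and placeholder data are assumptions for illustration, not the system's actual design.

    ```python
    # Sketch of multimodal feature fusion: body-motion, facial-expression, and
    # gaze features are concatenated and mapped to behavioral classes. All
    # shapes, data, and the classifier choice are illustrative placeholders.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def fuse(body, face, gaze):
        """Concatenate per-sample feature vectors from the three sensing channels."""
        return np.hstack([body, face, gaze])

    # Placeholder data standing in for extracted features and therapist labels.
    rng = np.random.default_rng(0)
    n = 200
    X = fuse(rng.random((n, 30)), rng.random((n, 17)), rng.random((n, 3)))
    y = rng.integers(0, 4, size=n)  # e.g., four therapist-specified classes

    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    print(clf.predict(X[:5]))       # map fused samples to behavioral classes
    ```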

  • Conveying emotions by touch to the nao robot: A user experience perspective
    Beatrice Alenljung, Rebecca Andreasson, Robert Lowe, Erik Billing, and Jessica Lindblom

    MDPI AG
Social robots are expected to gradually be used by more and more people in a wider range of settings, domestic as well as professional. As a consequence, the feature and quality requirements on human–robot interaction will increase, including the possibility to communicate emotions and establish a positive user experience, e.g., through touch. In this paper, the focus is on depicting how humans, as users of robots, experience tactile emotional communication with the Nao robot, as well as identifying aspects affecting the experience and touch behavior. A qualitative investigation was conducted as part of a larger experiment. The major findings consist of 15 different aspects that vary along one or more dimensions, how those influence the four dimensions of user experience present in the study, and the different parts of the touch behavior of conveying emotions.

  • Affective Touch in Human–Robot Interaction: Conveying Emotion to the Nao Robot
    Rebecca Andreasson, Beatrice Alenljung, Erik Billing, and Robert Lowe

    Springer Science and Business Media LLC

  • Conceptualizing Embodied Automation to Increase Transfer of Tacit knowledge in the Learning Factory
    Asa Fast-Berglund, Peter Thorvald, Erik Billing, Adam Palmquist, David Romero, and Georg Weichhart

    IEEE
This paper discusses how cooperative agent-based systems, deployed with social skills and embodied automation features, can be used to interact with operators in order to facilitate sharing of tacit knowledge and its later conversion into explicit knowledge. The proposal is to combine social software robots (softbots) with industrial collaborative robots (co-bots) to create a digital apprentice for experienced operators in human-robot collaboration workstations. This addresses the problem within industry that experienced operators have difficulties explaining how they perform their tasks and, later, how to turn this procedural knowledge (know-how) into instructions to be shared among other operators. By using social softbots and co-bots as cooperative agents with embodied automation features, we think we can facilitate the ‘externalization’ of procedural knowledge in human-robot interaction(s). This is enabled by the capabilities of social cooperative agents with embodied automation features to continuously learn by looking over the shoulder of the operators, documenting and collaborating with them in a non-intrusive way as they perform their daily tasks.

  • Designing for a wearable affective interface for the NAO robot: A study of emotion conveyance by touch
    Robert Lowe, Rebecca Andreasson, Beatrice Alenljung, Anja Lund, and Erik Billing

    MDPI AG
    We here present results and analysis from a study of affective tactile communication between human and humanoid robot (the NAO robot). In the present work, participants conveyed eight emotions to t ...

  • Robot enhanced therapy for children with autism (DREAM): A social model of autism
    Kathleen Richardson, Mark Coeckelbergh, Kutoma Wakunuma, Erik Billing, Tom Ziemke, Pablo Gomez, Bram Vanderborght, and Tony Belpaeme

    Institute of Electrical and Electronics Engineers (IEEE)
The development of social robots for children with autism has been a growth field for the past 15 years. This article reviews studies of robots and autism as a neurodevelopmental disorder that impacts social-communication development, and the ways social robots could help children with autism develop social skills. Drawing on ethics research from the EU-funded Development of Robot-Enhanced Therapy for Children with Autism (DREAM) project (Framework 7), this paper explores how ethics evolved and developed in this European project.