Department of Computing
School of Engineering at Jönköping University
Artificial Intelligence, Theoretical Computer Science, Linguistics and Language, Philosophy
Hansi Hettiarachchi, Amna Dridi, Mohamed Medhat Gaber, Pouyan Parsafard, Nicoleta Bocaneala, Katja Breitenfelder, Gonçal Costa, Maria Hedblom, Mihaela Juganaru-Mathieu, Thamer Mecharnia, et al.
Springer Science and Business Media LLC
Automatic Compliance Checking (ACC) within the Architecture, Engineering, and Construction (AEC) sector necessitates automating the interpretation of building regulations to achieve its full potential. Converting textual rules into machine-readable formats is challenging due to the complexities of natural language and the scarcity of resources for advanced Machine Learning (ML). Addressing these challenges, we introduce CODE-ACCORD, a dataset of 862 sentences from the building regulations of England and Finland. Only self-contained sentences, which express complete rules without needing additional context, were considered, as they are essential for ACC. Each sentence was manually annotated with entities and relations by a team of 12 annotators to facilitate machine-readable rule generation, followed by careful curation to ensure accuracy. The final dataset comprises 4,297 entities and 4,329 relations across various categories, serving as a robust ground truth. CODE-ACCORD supports a range of ML and Natural Language Processing (NLP) tasks, including text classification, entity recognition, and relation extraction. It enables applying recent trends, such as deep neural networks and large language models, to ACC.
Maria M. Hedblom, Fabian Neuhaus, and Till Mossakowski
Informa UK Limited
Mihai Pomarlan, Stefano De Giorgis, Rachel Ringe, Maria M. Hedblom, and Nikolaos Tsiogkas
IOS Press
Situationally-aware artificial agents operating with competence in natural environments face several challenges: spatial awareness, object affordance detection, dynamic changes and unpredictability. A critical challenge is the agent's ability to identify and monitor environmental elements pertinent to its objectives. Our research introduces a neurosymbolic modular architecture for reactive robotics. Our system combines a neural component performing object recognition over the environment and image processing techniques such as optical flow, with symbolic representation and reasoning. The reasoning system is grounded in the embodied cognition paradigm, via integrating image schematic knowledge in an ontological structure. The ontology is operatively used to create queries for the perception system, decide on actions, and infer entities' capabilities derived from perceptual data. The combination of reasoning and image processing allows the agent to focus its perception for normal operation as well as to discover new concepts for parts of objects involved in particular interactions. The discovered concepts allow the robot to autonomously acquire training data and adjust its subsymbolic perception to recognize the parts, and make planning for more complex tasks feasible by focusing search on those relevant object parts. We demonstrate our approach in a simulated world, in which an agent learns to recognize parts of objects involved in support relations. While the agent has no concept of handle initially, by observing examples of supported objects hanging from a hook it learns to recognize the parts involved in establishing support and becomes able to plan the establishment/destruction of the support relation. This underscores the agent's capability to expand its knowledge through observation in a systematic way, and illustrates the potential of combining deep reasoning with reactive robotics in dynamic settings.
Jorge Aguirregomezcorta Aina and Maria M. Hedblom
IEEE
Autonomous robotic systems need a flexible and safe method to interact with their surroundings. When encountering unfamiliar objects, the agents should be able to identify and learn the involved affordances to apply appropriate actions. Focusing on affordance learning, we introduce a neuro-symbolic AI system with a robot simulation capable of inferring appropriate actions. The system's core is a visuo-lingual attribute detection module coupled with a probabilistic knowledge base. The system is accompanied by a Unity robot simulation used for evaluation. The system is evaluated through its caption-inferring capabilities using image captioning and machine translation metrics on a case study of opening containers. The two main affordance-action relation pairs are jar/bottle lids that are opened using either a 'twist' or a 'snap' action. The results show the system successfully opens all 50 containers in the test case, based on an accurate attribute captioning rate of 71%. The mismatch is likely due to the 'snapping' lids also opening after a twisting motion. Our system demonstrates that affordance inference can be successfully implemented using a cognitive visuo-lingual method that could be generalized to other affordance cases.
Mihai Pomarlan, Maria M. Hedblom, Laura Spillner, and Robert Porzel
Springer Nature Switzerland
Maria M. Hedblom, Fabian Neuhaus, and Till Mossakowski
Informa UK Limited
Mihai Pomarlan, Maria M. Hedblom, and Robert Porzel
Wiley
Human beings and other biological agents appear driven by curiosity to explore the affordances of their environments. Such exploration is its own reward – children have fun when playing – but it probably also serves the practical purpose of learning theories with which to predict outcomes of actions. Cognitive robots, however, have yet to match the performance of human beings at learning and reusing manipulation skills. In this paper, we implement a method that emulates the curiosity drive and uses it as a heuristic to guide (simulated) exploration of a particular task – pouring liquids. The result of this exploration is a collection of symbolic rules linking qualitative descriptions of object arrangements and the pouring action with qualitative descriptions of likely outcomes. Qualitative descriptions of object arrangements and actions are converted to numerical descriptions for simulation parametrization via probability distributions, which are themselves adjusted in the process of simulated exploration. This allows the grounding of the symbolic descriptions to adapt itself to the task. The resulting symbolic rules form a theory that, together with the probability distributions that ground it in numerical parametrizations, is intended to be used to predict qualitative outcomes or to select manners of pouring towards achieving a goal.
Sebastian Höffner, Robert Porzel, Maria M. Hedblom, Mihai Pomarlan, Vanja Sophie Cangalovic, Johannes Pfau, John A. Bateman, and Rainer Malaka
IOS Press
Going from natural language directions to fully specified executable plans for household robots involves a challenging variety of reasoning steps. In this paper, a processing pipeline to tackle these steps for natural language directions is proposed and implemented. It uses the ontological Socio-physical Model of Activities (SOMA) as a common interface between its components. The pipeline includes a natural language parser and a module for natural language grounding. Several reasoning steps formulate simulation plans, in which robot actions are guided by data gathered using human computation. As a last step, the pipeline simulates the given natural language direction inside a virtual environment. The major advantage of employing an overarching ontological framework is that its asserted facts can be stored alongside the semantics of directions, contextual knowledge, and annotated activity models in one central knowledge base. This allows for a unified and efficient knowledge retrieval across all pipeline components, providing flexibility and reasoning capabilities as symbolic knowledge is combined with annotated sub-symbolic models.
Guendalina Righetti, Daniele Porello, Nicolas Troquard, Oliver Kutz, Maria Hedblom, and Pietro Galliani
International Joint Conferences on Artificial Intelligence Organization
When considering two concepts in terms of extensional logic, their combination will often be trivial, returning an empty extension. Consider e.g. “a Fish Vehicle”, i.e., “a Vehicle which is also a Fish”. Still, people use sophisticated strategies to produce new, non-empty concepts. All these strategies involve the human ability to mend the conflicting attributes of the input concepts and to create new properties of the combination. We focus in particular on the case where a Head concept has superior ‘asymmetric’ control over steering the resulting combination (or hybridisation) with a Modifier concept. Specifically, we propose a dialogical model of the cognitive and logical mechanics of this asymmetric form of hybridisation. Its implementation is then evaluated using a combination of example ontologies.
Guendalina Righetti, Daniele Porello, Nicolas Troquard, Oliver Kutz, Maria M. Hedblom, and Pietro Galliani
IOS Press
When people combine concepts, the results are often characterised as "hybrid", "impossible", or "humorous". However, when simply considered in terms of extensional logic, the novel concept, understood as a conjunctive concept, will often lack meaning, having an empty extension (consider "a tooth that is a chair", "a pet flower", etc.). Still, people use different strategies to produce new non-empty concepts: additive or integrative combination of features, alignment of features, instantiation, etc. All these strategies involve the ability to deal with conflicting attributes and the creation of new (combinations of) properties. We here consider in particular the case where a Head concept has superior 'asymmetric' control over steering the resulting concept combination (or hybridisation) with a Modifier concept. Specifically, we propose a dialogical approach to concept combination and discuss an implementation based on axiom weakening, which models the cognitive and logical mechanics of this asymmetric form of hybridisation.
Kaviya Dhanabalachandran, Vanessa Hassouna, Maria M. Hedblom, Michaela Kümpel, Nils Leusmann, and Michael Beetz
ACM
Autonomous robots struggle with plan adaptation in uncertain and changing environments. Although modern robots can make popcorn and pancakes, they are incapable of performing such tasks in unknown settings and unable to adapt action plans if ingredients or tools are missing. Humans are continuously aware of their surroundings. For robotic agents, real-time state updating is time-consuming, and other methods for failure handling are required. Taking inspiration from human cognition, we propose a plan adaptation method based on event segmentation of the image-schematic states of subtasks within action descriptors. For this, we reuse action plans from the robotic architecture CRAM and ontologically model the involved objects and image-schematic states of the action descriptor cutting. Our evaluation uses a robot simulation of the task of cutting bread and demonstrates that the system can reason about possible solutions to unexpected failures regarding tool use.
Maria M. Hedblom
Springer International Publishing
Maria M. Hedblom
Springer International Publishing
Maria M. Hedblom
Springer International Publishing
Maria M. Hedblom
Springer International Publishing