Interaction- and Communication-based Systems


Prof. Dr. Simon Mayer

In our research group, we explore interactions among devices and people in ubiquitous computing environments. Our focus is on the integration of physical things into the Web, on increasing the autonomy of Web-enabled devices, and on making interactions of connected devices intelligible for people.

News

Real-time Collaboration in Linked Data Systems

A new paper from our group has been accepted for publication in the Proceedings of the International Semantic Web Conference 2023 (Posters, Demos and Industry Tracks), and will be presented at the 22nd International Semantic Web Conference in Athens, Greece, from November 6-10, 2023.

Click here to get to the paper directly!

Abstract: Real-time collaboration has become commonplace in centralized Web applications, but decentralized Linked Data systems still lack readily accessible mechanisms. This demo paper proposes a novel approach that provides a viable solution to implement collaborative Linked Data in the Solid ecosystem using Conflict-free Replicated Data Types (CRDTs) and hypermedia-driven interaction. Specifically, we introduce a dedicated vocabulary for describing interactions with CRDT-based resources hosted in Solid Pods, empowering software clients to dynamically discover means for collaborative editing at run time. In contrast to current solutions for collaborative RDF, our approach works in combination with industry standard CRDTs to offer a seamless co-editing experience in decentralized Linked Data systems. To demonstrate the practicality of our approach, we showcase a Solid-hosted website that utilizes the vocabulary to expose hypermedia controls and a browser extension that effectively consumes these descriptions to enable real-time collaborative editing through CRDTs. By strategically shifting intelligence to the client-side, our approach significantly lowers the entry barrier for publishing real-time collaborative resources on the (Semantic) Web.
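
Readers unfamiliar with CRDTs may find a toy example helpful: their key property is that concurrent updates can be merged in any order and still converge. Below is a minimal Python sketch of a state-based grow-only counter that illustrates this property; it shows the general CRDT idea only, not the vocabulary, the industry-standard CRDTs, or the implementation from the paper.

```python
# Minimal sketch of a state-based CRDT (a grow-only counter), for
# illustration only; the paper builds on industry-standard CRDTs.
# This toy example shows why replicas can merge without conflicts.

class GCounter:
    def __init__(self, replica_id: str):
        self.replica_id = replica_id
        self.counts: dict[str, int] = {}

    def increment(self) -> None:
        # Each replica only ever increments its own entry.
        self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + 1

    def merge(self, other: "GCounter") -> None:
        # Element-wise maximum is commutative, associative, and idempotent,
        # so replicas converge regardless of message order or duplication.
        for rid, count in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), count)

    def value(self) -> int:
        return sum(self.counts.values())

# Two replicas edit concurrently, then exchange state in either order:
a, b = GCounter("pod-a"), GCounter("pod-b")
a.increment(); a.increment()
b.increment()
a.merge(b); b.merge(a)
assert a.value() == b.value() == 3
```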

Author: Simon Mayer

Date: 26 September 2023

Embedding Autonomous Agents into Low-Power Wireless Sensor Networks

A new paper from our group has been published at the International Conference on Practical Applications of Agents and Multi-Agent Systems, which took place in Guimarães, Portugal, from July 12-14, 2023.

Click here to get to the paper directly!

Abstract: Low-power sensors are increasingly becoming available, equipped with more energy-efficient processing and networking capabilities. Still, in order to accommodate the independent deployment and intermittent availability of such constrained devices, engineers often manually reconfigure system behavior for integrating sensors and actuators into complex and context-aware systems. The Multi-Agent Systems paradigm enables engineering systems where components can be deployed more independently and operate towards achieving their design objectives. In this process, they act autonomously and interact with others to perform context-aware decision-making without human intervention at run time. In this paper, we present autonomous agents implemented as low-power nodes that perceive and act in a shared environment through sensors and actuators. The autonomous agents on these constrained devices locally reason and act on the environment, and wirelessly interact with each other to share knowledge and enable more context-aware system behavior. The capabilities of our low-power autonomous nodes are demonstrated in a light-control scenario with two Belief-Desire-Intention agents. Our experiments demonstrate that running autonomous and social agents in low-power platforms incurs little overhead, indicating their feasibility.
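
To give a flavor of the Belief-Desire-Intention cycle that such agents execute, here is a minimal Python sketch of a perceive-deliberate-act loop for a light-control agent. All sensor names and thresholds are made up for illustration, and the paper's agents run on actual low-power hardware rather than in Python.

```python
# Hedged sketch of a minimal BDI-style reasoning cycle for a light-control
# agent. Sensor names, thresholds, and data shapes are hypothetical.

def perceive(sensors):
    """Update beliefs from local sensors (and, in the paper's setting,
    from knowledge shared wirelessly by other nodes)."""
    return {"luminosity": sensors["luminosity"], "occupied": sensors["occupied"]}

def deliberate(beliefs):
    """Select an intention consistent with the agent's desire:
    keep occupied rooms lit above a comfort threshold."""
    if beliefs["occupied"] and beliefs["luminosity"] < 300:
        return "turn_light_on"
    if not beliefs["occupied"]:
        return "turn_light_off"
    return "do_nothing"

def act(intention, actuators):
    """Execute the selected intention on the local actuator."""
    if intention == "turn_light_on":
        actuators["light"] = True
    elif intention == "turn_light_off":
        actuators["light"] = False

# One pass of the cycle with simulated readings:
sensors = {"luminosity": 120, "occupied": True}
actuators = {"light": False}
act(deliberate(perceive(sensors)), actuators)
assert actuators["light"] is True
```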

Author: Simon Mayer

Date: 25 September 2023

HyperBrain: Human-inspired Hypermedia Guidance using a Large Language Model

A new paper from our group has been published at the 34th ACM Conference on Hypertext and Social Media, which took place in Rome from September 4-8, 2023.

Click here to get to the paper directly!

Abstract: We present HyperBrain, a hypermedia client that autonomously navigates hypermedia environments to achieve user goals specified in natural language. To achieve this, the client makes use of a large language model to decide which of the available hypermedia controls should be used within a given application context. In a demonstrative scenario, we show the client’s ability to autonomously select and follow simple hyperlinks towards a high-level goal, successfully traversing the hypermedia structure of Wikipedia given only the markup of the respective resources. We show that hypermedia navigation based on language models is effective, and propose that this should be considered as a step to create hypermedia environments that are used by autonomous clients alongside people.
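
Conceptually, the navigation loop is small: extract the hypermedia controls from the current resource's markup, ask the language model which control best serves the goal, and dereference its choice. The Python sketch below captures this loop; the regex-based link extraction and the word-overlap stand-in for the model call are simplifications to keep the sketch runnable, not the paper's implementation.

```python
# Hedged sketch of an LLM-guided hypermedia navigation loop in the spirit
# of HyperBrain; this is not the paper's implementation.

import re

def extract_links(markup: str) -> list[tuple[str, str]]:
    """Collect (href, anchor text) pairs from the page markup."""
    return re.findall(r'<a\s+href="([^"]+)"[^>]*>([^<]+)</a>', markup)

def pick_next_link(goal: str, links: list[tuple[str, str]]) -> str:
    """Stand-in for the large language model: a real client would hand the
    goal and the candidate hypermedia controls to an LLM and let it choose.
    Here, a trivial word-overlap heuristic keeps the sketch runnable."""
    goal_words = set(goal.lower().split())
    return max(links, key=lambda l: len(goal_words & set(l[1].lower().split())))[0]

def navigate(markup: str, goal: str, fetch, max_hops: int = 10) -> str:
    """Follow selected hyperlinks until no controls remain or the hop budget
    is spent; a real client would also let the model decide when the goal
    has been reached."""
    for _ in range(max_hops):
        links = extract_links(markup)
        if not links:
            break
        markup = fetch(pick_next_link(goal, links))  # dereference the choice
    return markup

# Tiny hypothetical hypermedia environment:
pages = {
    "/cats": '<a href="/lions">Lions</a>',
    "/lions": "Lions are big cats.",
    "/physics": '<a href="/quarks">Quarks</a>',
}
start = '<a href="/cats">Cats</a> <a href="/physics">Physics</a>'
print(navigate(start, "learn about lions", fetch=pages.get))  # Lions are big cats.
```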

Author: Simon Mayer

Date: 24 September 2023

Sharing Personalized Mixed Reality Experiences

A new paper from our group has been published at ABIS 2023, the 27th International Workshop on Personalization and Recommendation, which took place during the Mensch und Computer 2023 conference on September 3, 2023.

Abstract: Nowadays, people encounter personalized services predominantly on the Web using personal computers or mobile devices. The increasing capabilities and pervasiveness of Mixed Reality (MR) devices, however, prepare the ground for personalization possibilities that are increasingly interwoven with our physical reality, extending beyond these traditional devices. Such ubiquitous, personalized MR experiences bring the potential to make our lives and interactions with our environments more convenient, intuitive, and safe. However, these experiences will also be prone to amplify the known beneficial and, notably, harmful implications of personalization. For instance, the loss of shared world objects or the nourishing of “real-world filter bubbles” might have serious social and societal consequences as they could lead to increasingly isolated experienced realities. In this work, we envision different modes for the sharing of personalized MR environments to counteract these potential harms of ubiquitous personalization. We furthermore illustrate the different modes with use cases and list open questions towards this vision.


Link to the full paper

Author: Jannis Strecker

Date: 1 September 2023

Domain-Expert Configuration of Hypermedia Multi-Agent Systems in Industrial Use Cases

A new paper from our group has been published at the 22nd International Conference on Autonomous Agents and Multiagent Systems: Domain-Expert Configuration of Hypermedia Multi-Agent Systems in Industrial Use Cases

Abstract: Based on the analysis of two real-world use cases for agriculture and manufacturing, we suggest that Hypermedia Multi-Agent Systems (MAS) are a viable option to interconnect and coordinate devices, services, machine-learning systems, and people in industrial scenarios. We propose and implement an architecture based on three components: an infrastructure that manages Web of Things environments and executes Hypermedia MAS, a visual development environment for programming agents, and a goal specification interface for end-users. While the infrastructure manages information flows between the system components and provides an environment for agents, the visual language enables domain experts to configure the behaviour of the system leveraging agent-oriented programming abstractions both at design time and run time, and the goal specification interface permits users to delegate goals to the running Hypermedia MAS while re-using domain vocabulary.
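
As a rough illustration of the goal specification interface's role, the following Python sketch shows how a user goal, phrased in domain vocabulary, could be delegated to a running Hypermedia MAS over HTTP. The endpoint, payload shape, and vocabulary terms are all hypothetical and are not taken from the paper.

```python
# Hedged sketch of delegating a user goal to a running Hypermedia MAS.
# The endpoint URL, JSON-LD shape, and vocabulary terms are hypothetical;
# they only illustrate the idea of a goal specification interface.

import json
import urllib.request

goal = {
    "@context": {"ex": "http://example.org/agriculture#"},  # made-up vocabulary
    "@type": "ex:Goal",
    "ex:target": "ex:greenhouse-7",
    "ex:desiredState": {"ex:soilMoisture": {"ex:minPercent": 30}},
}

request = urllib.request.Request(
    "http://mas.example.org/goals",  # hypothetical goal-delegation endpoint
    data=json.dumps(goal).encode("utf-8"),
    headers={"Content-Type": "application/ld+json"},
    method="POST",
)
# urllib.request.urlopen(request)  # would hand the goal to the running MAS
```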

Author: Simon Mayer

Date: 25 May 2023

Pupillometry for Measuring User Response to Movement of an Industrial Robot

A new paper from our group has been published at the 2023 ACM Symposium on Eye Tracking Research & Applications (ETRA): Pupillometry for Measuring User Response to Movement of an Industrial Robot

Abstract: Interactive systems can adapt to individual users to increase productivity, safety, or acceptance. Previous research focused on different factors, such as cognitive workload (CWL), to better understand and improve the human-computer or human-robot interaction (HRI). We present results of an HRI experiment that uses pupillometry to measure users’ responses to robot movements. Our results demonstrate a significant change in pupil dilation, indicating higher CWL, as a result of increased movement speed of an articulated robot arm. This might permit improved interaction ergonomics by adapting the behavior of robots or other devices to individual users at run time.
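
For readers who want a concrete sense of such an analysis, here is a minimal Python sketch comparing per-participant pupil diameters between a slow and a fast movement condition with a paired t-test. The numbers are synthetic placeholders, and the paper's actual analysis pipeline may differ.

```python
# Hedged sketch of a paired comparison of pupil diameter between two robot
# speed conditions. The values below are synthetic placeholders, not data
# from the study.

from scipy import stats

# Per-participant mean pupil diameter (mm) under each movement speed:
slow = [3.1, 2.9, 3.4, 3.0, 3.2, 2.8, 3.3, 3.1]
fast = [3.4, 3.1, 3.7, 3.2, 3.5, 3.0, 3.6, 3.3]

# Paired t-test: each participant experienced both conditions.
t_stat, p_value = stats.ttest_rel(fast, slow)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```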

Author: Simon Mayer

Date: 25 May 2023

WISER: We are Online!

The website for the WISER flagship project, which our group is a part of, is now online: wiser-climate.com

On the website, we explain how the WISER project will develop a digital ecosystem to facilitate the consistent and trustworthy assessment of greenhouse gas (GHG) emissions from heterogeneous value chains. We expect that this solution, which is based on the integration of different data sources and standards, will empower companies and organizations in their search for pathways that can reduce their effects on the earth’s climate.

We thank Innosuisse for supporting the WISER project, and we thank our interdisciplinary consortium of 15 partners, who are working together to reach this ambitious goal and have already started producing first results. Stay tuned 🙂

Author: Simon Mayer

Date: 19 May 2023

GEAR: Gaze-enabled Augmented Reality for Human Activity Recognition

A new paper from our group has been published at the 2023 ACM Symposium on Eye Tracking Research & Applications (ETRA): GEAR: Gaze-enabled Augmented Reality for Human Activity Recognition

Abstract: Head-mounted Augmented Reality (AR) displays overlay digital information on physical objects. Through eye tracking, they allow novel interaction methods and provide insights into user attention, intentions, and activities. However, only a few studies have used gaze-enabled AR displays for human activity recognition (HAR). In an experimental study, we collected gaze data from 10 users on a HoloLens 2 (HL2) while they performed three activities (i.e., read, inspect, search). We trained machine learning models (SVM, Random Forest, Extremely Randomized Trees) with extracted features and achieved up to 98.7% activity-recognition accuracy. On the HL2, we provided users with AR feedback that is relevant to their current activity. We present the components of our system (GEAR) including a novel solution to enable the controlled sharing of collected data. We provide the scripts and anonymized datasets, which can be used as teaching material in graduate courses or for reproducing our findings.
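
As a hedged illustration of the classification setup, the Python sketch below trains one of the model families named in the abstract (Extremely Randomized Trees) on randomly generated stand-in features. It is not the study's dataset or pipeline, so the resulting accuracy is near chance rather than the reported 98.7%.

```python
# Hedged sketch of gaze-based activity recognition with Extremely Randomized
# Trees, one of the model families named in the abstract. The feature data
# here is randomly generated, not the study's dataset.

import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 12))    # e.g., fixation/saccade statistics per window
y = rng.integers(0, 3, size=300)  # activity labels: read, inspect, search

clf = ExtraTreesClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean accuracy: {scores.mean():.3f}")  # ~chance here; real gaze features separate far better
```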

Author: Simon Mayer

Date: 5 May 2023
