
Interaction- and Communication-based Systems


Prof. Dr. Simon Mayer

In our research group, we explore interactions among devices and people in ubiquitous computing environments. Our focus is on the integration of physical things into the Web, on increasing the autonomy of Web-enabled devices, and on making interactions of connected devices intelligible for people.

News

Domain-Expert Configuration of Hypermedia Multi-Agent Systems in Industrial Use Cases

A new paper from our group has been published at the 22nd International Conference on Autonomous Agents and Multiagent Systems: Domain-Expert Configuration of Hypermedia Multi-Agent Systems in Industrial Use Cases

Abstract: Based on the analysis of two real-world use cases for agriculture and manufacturing, we suggest that Hypermedia Multi-Agent Systems (MAS) are a viable option to interconnect and coordinate devices, services, machine-learning systems, and people in industrial scenarios. We propose and implement an architecture based on three components: an infrastructure that manages Web of Things environments and executes Hypermedia MAS, a visual development environment for programming agents, and a goal specification interface for end-users. While the infrastructure manages information flows between the system components and provides an environment for agents, the visual language enables domain experts to configure the behaviour of the system leveraging agent-oriented programming abstractions both at design time and run time, and the goal specification interface permits users to delegate goals to the running Hypermedia MAS while re-using domain vocabulary.

Author: Simon Mayer

Date: 25. May 2023

Pupillometry for Measuring User Response to Movement of an Industrial Robot

A new paper from our group has been published at the 2023 ACM Symposium on Eye Tracking Research & Applications (ETRA): Pupillometry for Measuring User Response to Movement of an Industrial Robot

Abstract: Interactive systems can adapt to individual users to increase productivity, safety, or acceptance. Previous research focused on different factors, such as cognitive workload (CWL), to better understand and improve the human-computer or human-robot interaction (HRI). We present results of an HRI experiment that uses pupillometry to measure users’ responses to robot movements. Our results demonstrate a significant change in pupil dilation, indicating higher CWL, as a result of increased movement speed of an articulated robot arm. This might permit improved interaction ergonomics by adapting the behavior of robots or other devices to individual users at run time.
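To illustrate the kind of measure the abstract refers to (this is not the paper's actual analysis pipeline), pupillometry studies commonly report dilation as a relative change from a pre-stimulus baseline. All sample values below are synthetic:

```python
# Illustrative sketch (not the paper's implementation): baseline-corrected
# relative pupil dilation, a common pupillometry proxy for cognitive workload.

def relative_dilation(samples, baseline_samples):
    """Return mean pupil dilation as a fraction of the pre-stimulus baseline."""
    baseline = sum(baseline_samples) / len(baseline_samples)
    mean_response = sum(samples) / len(samples)
    return (mean_response - baseline) / baseline

# Synthetic pupil diameters (mm): baseline before the robot moves,
# then samples recorded while the articulated arm moves at higher speed.
baseline = [3.0, 3.1, 2.9, 3.0]
during_fast_movement = [3.3, 3.4, 3.35, 3.45]

change = relative_dilation(during_fast_movement, baseline)
print(f"relative dilation: {change:.1%}")  # larger values suggest higher CWL
```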

Author: Simon Mayer

Date: 25. May 2023

WISER: We are Online!

The website for the WISER flagship project, which our group is a part of, is now online: wiser-climate.com

On the website, we explain how the WISER project will develop a digital ecosystem to facilitate the consistent and trustworthy assessment of greenhouse gas (GHG) emissions from heterogeneous value chains. We expect that this solution, which is based on the integration of different data sources and standards, will empower companies and organizations in their search for pathways that can reduce their effects on the earth’s climate.

We thank Innosuisse for their support of the WISER project and our interdisciplinary consortium of 15 partners, who are working together to reach this ambitious goal and have already started producing first results. Stay tuned 🙂

Author: Simon Mayer

Date: 19. May 2023

GEAR: Gaze-enabled Augmented Reality for Human Activity Recognition

A new paper from our group has been published at the 2023 ACM Symposium on Eye Tracking Research & Applications (ETRA): GEAR: Gaze-enabled Augmented Reality for Human Activity Recognition

Abstract: Head-mounted Augmented Reality (AR) displays overlay digital information on physical objects. Through eye tracking, they allow novel interaction methods and provide insights into user attention, intentions, and activities. However, only a few studies have used gaze-enabled AR displays for human activity recognition (HAR). In an experimental study, we collected gaze data from 10 users on a HoloLens 2 (HL2) while they performed three activities (i.e., read, inspect, search). We trained machine learning models (SVM, Random Forest, Extremely Randomized Trees) with extracted features and achieved an up to 98.7% activity-recognition accuracy. On the HL2, we provided users with AR feedback that is relevant to their current activity. We present the components of our system (GEAR) including a novel solution to enable the controlled sharing of collected data. We provide the scripts and anonymized datasets which can be used as teaching material in graduate courses or for reproducing our findings.
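The general idea of feature-based gaze classification can be sketched as follows. This is only an illustration, not the paper's pipeline: GEAR trains SVM, Random Forest, and Extremely Randomized Trees models on features from HoloLens 2 gaze data, whereas this sketch uses two hand-rolled features, a trivial nearest-centroid classifier, and synthetic traces:

```python
# Illustrative sketch only: classify a gaze trace by simple descriptive
# features. All traces and labels below are synthetic.
import math

def gaze_features(xs, ys):
    """Two simple features of a gaze trace given in screen coordinates."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    # Dispersion: average distance of gaze points from their centroid.
    dispersion = sum(math.hypot(x - mean_x, y - mean_y)
                     for x, y in zip(xs, ys)) / n
    # Mean step length: average jump between consecutive samples
    # (large for "search"-like scanning, small for "read"-like sweeps).
    step = sum(math.hypot(xs[i + 1] - xs[i], ys[i + 1] - ys[i])
               for i in range(n - 1)) / (n - 1)
    return (dispersion, step)

def nearest_centroid(train, sample):
    """Classify a feature vector by its closest per-class centroid."""
    centroids = {
        label: tuple(sum(f[i] for f in feats) / len(feats)
                     for i in range(len(feats[0])))
        for label, feats in train.items()
    }
    return min(centroids, key=lambda lbl: math.dist(centroids[lbl], sample))

# Synthetic traces: "read" sweeps along a line, "search" jumps around.
read_trace = ([0.1 * i for i in range(10)], [0.0] * 10)
search_trace = ([0.0, 0.9, 0.1, 0.8, 0.2, 0.7, 0.3, 0.6, 0.4, 0.5],
                [0.9, 0.0, 0.8, 0.1, 0.7, 0.2, 0.6, 0.3, 0.5, 0.4])

train = {"read": [gaze_features(*read_trace)],
         "search": [gaze_features(*search_trace)]}

new_trace = ([0.1 * i + 0.01 for i in range(10)], [0.02] * 10)
print(nearest_centroid(train, gaze_features(*new_trace)))  # prints "read"
```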

Author: Simon Mayer

Date: 5. May 2023

Introducing Refashion

The Refashion circular fashion system — a project by SOLVE Studio that is supported by our research group — was featured last week on the European Commission’s website as an inspiring story of change contributing to the realization of the EU Textile Strategy. Refashion is a novel fashion design strategy that uses pre-designed multifunctional fabric blocks to create garments in a wide range of styles. This fashion design strategy aims to be zero-waste and sustainable. We are looking forward to exploring the potential of industrializing this circular design strategy together with SOLVE Studio!

Author: Andrei Ciortea

Date: 17. March 2023

Dagstuhl Seminar on Agents on the Web

This week, several members of our group are attending the Dagstuhl Seminar on the topic of Agents on the Web, which was proposed by Prof. Dr. Andrei Ciortea together with a team of international researchers. This Dagstuhl Seminar aims to consolidate and further investigate the research opportunities identified in Dagstuhl Seminar 21072 (Autonomous Agents on the Web), and to continue the transfer of knowledge and results across the involved research communities. We believe this seminar can break new ground in all these areas of research – and can help pave the way for a new generation of Web-based autonomous systems composed of people and intelligent agents interacting and collaborating through the Web.

Author: Simon Mayer

Date: 20. February 2023

Signifiers as a First-class Abstraction in Hypermedia Multi-Agent Systems

A new paper from our group has been published at the 22nd International Conference on Autonomous Agents and Multiagent Systems: Signifiers as a First-class Abstraction in Hypermedia Multi-Agent Systems

Abstract: Hypermedia APIs enable the design of reusable hypermedia clients that discover and exploit affordances on the Web. However, the reusability of such clients remains limited since they cannot plan and reason about interaction. This paper provides a conceptual bridge between hypermedia-driven affordance exploitation on the Web and methods for representing and reasoning about actions that have been extensively explored for Multi-Agent Systems (MAS) and, more broadly, Artificial Intelligence. We build on concepts and methods from Affordance Theory and Human-Computer Interaction that support interaction efficiency in open and evolvable environments to introduce signifiers as a first-class abstraction in Web-based MAS: Signifiers are designed with respect to the agent-environment context of their usage and enable agents with heterogeneous abilities to act and to reason about action. We define a formal model for the contextual exposure of signifiers in hypermedia environments that aims to drive affordance exploitation. We demonstrate our approach with a prototypical Web-based MAS where two agents with different reasoning abilities proactively discover how to interact with their environment by perceiving only the signifiers that fit their abilities. We show that signifier exposure can be inherently managed based on the dynamic agent-environment context towards facilitating effective and efficient interactions on the Web.
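The core mechanism of contextual signifier exposure can be sketched in a few lines. This is an invented illustration, not the paper's formal model: a signifier recommends the abilities needed to exploit an affordance, and an agent perceives only the signifiers whose recommended abilities it possesses (all names below are hypothetical):

```python
# Illustrative sketch (not the paper's formal model): expose to each agent
# only the signifiers that fit its abilities. Affordance and ability names
# are invented for this example.

signifiers = [
    {"affordance": "toggle-lamp", "abilities": {"http"}},
    {"affordance": "plan-route", "abilities": {"http", "geo-reasoning"}},
]

def exposed_signifiers(agent_abilities):
    """Return the signifiers whose recommended abilities the agent has."""
    return [s for s in signifiers if s["abilities"] <= agent_abilities]

simple_agent = {"http"}
reasoning_agent = {"http", "geo-reasoning"}

print([s["affordance"] for s in exposed_signifiers(simple_agent)])
print([s["affordance"] for s in exposed_signifiers(reasoning_agent)])
```

With this filtering in place, a less capable agent is never distracted by affordances it cannot exploit, while a more capable agent perceives the full set.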

Author: Simon Mayer

Date: 19. February 2023

QRUco: Interactive QR Codes Through Thermoresponsive Embeddings

A new paper from our group has been published at the 2023 ACM CHI Conference on Human Factors in Computing Systems, Interactivity Track: QRUco: Interactive QR Codes Through Thermoresponsive Embeddings

Abstract: Due to their low cost and ease of deployment, fiducial markers – primarily Quick Response (QR) codes – gained widespread popularity over the past decade. Given their original use cases in logistics, these markers were created with the goal of transmitting a single static payload. We introduce QRUco as an approach to create cheap yet interactive fiducial markers. QRUco uses thermochromic paint to embed three secondary markers into QR code finder patterns. Users may interact with these markers through rubbing or pressing/touching, thereby changing the appearance of the marker while leaving the primary QR code intact. In this paper, we present the QRUco concept and demonstrate that our proposed approach is effective. We emphasize that QRUco markers can be created cheaply and that they do not require any specialized scanning equipment. We furthermore discuss limitations of the proposed approach and propose application domains that would benefit from QRUco.

Author: Simon Mayer

Date: 18. February 2023

Team

Simon Mayer

Full Professor, Interaction- and Communication-based Systems

Vachtsevanou

Research Assistant, Hypermedia Multi-agent Systems