In our research group, we explore interactions among devices and people in ubiquitous computing environments. Our focus is on the integration of physical things into the Web, on increasing the autonomy of Web-enabled devices, and on making interactions of connected devices intelligible for people.
Abstract: The increasing number of objects in ubiquitous computing environments creates a need for effective object detection and identification mechanisms that permit users to intuitively initiate interactions with these objects. While multiple approaches to such object detection are available (including visual object detection, fiducial markers, relative localization, and absolute spatial referencing), each of them suffers from drawbacks that limit its applicability. In this paper, we propose ODIF, an architecture that permits the fusion of object situation information from such heterogeneous sources and that remains vertically and horizontally modular to allow extending and upgrading systems that are constructed accordingly. We furthermore present BLEARVIS, a prototype system that builds on the proposed architecture and integrates computer-vision (CV) based object detection with radio-frequency (RF) angle-of-arrival (AoA) estimation to identify BLE-tagged objects. In our system, the front camera of a Mixed Reality (MR) head-mounted display (HMD) provides a live image stream to a vision-based object detection module, while an antenna array that is mounted on the HMD collects AoA information from ambient devices. In this way, BLEARVIS is able to differentiate between visually identical objects in the same environment and can provide an MR overlay of information (data and controls) that relates to them. We include experimental evaluations of both the CV-based object detection and the RF-based AoA estimation, and discuss the applicability of the combined RF and CV pipelines in different ubiquitous computing scenarios. This research can form a starting point for integrating diverse object detection, identification, and interaction approaches that function across the electromagnetic spectrum, and beyond.
A new paper from our group has been accepted for publication in the Proceedings of the International Semantic Web Conference 2023 (Posters, Demos and Industry Tracks), and will be presented at the 22nd International Semantic Web Conference in Athens, Greece, from November 6-10, 2023.
Abstract: Real-time collaboration has become commonplace in centralized Web applications, but decentralized Linked Data systems still lack readily accessible mechanisms. This demo paper proposes a novel approach that provides a viable solution to implement collaborative Linked Data in the Solid ecosystem using Conflict-free Replicated Data Types (CRDTs) and hypermedia-driven interaction. Specifically, we introduce a dedicated vocabulary for describing interactions with CRDT-based resources hosted in Solid Pods, empowering software clients to dynamically discover means for collaborative editing at run time. In contrast to current solutions for collaborative RDF, our approach works in combination with industry standard CRDTs to offer a seamless co-editing experience in decentralized Linked Data systems. To demonstrate the practicality of our approach, we showcase a Solid-hosted website that utilizes the vocabulary to expose hypermedia controls and a browser extension that effectively consumes these descriptions to enable real-time collaborative editing through CRDTs. By strategically shifting intelligence to the client-side, our approach significantly lowers the entry barrier for publishing real-time collaborative resources on the (Semantic) Web.
Embedding Autonomous Agents into Low-Power Wireless Sensor Networks
A new paper from our group has been published at the International Conference on Practical Applications of Agents and Multi-Agent Systems, which took place in Guimarães, Portugal, from July 12-14, 2023.
Abstract: Low-power sensors are increasingly becoming available, equipped with more energy-efficient processing and networking capabilities. Still, in order to accommodate the independent deployment and intermittent availability of such constrained devices, engineers often manually reconfigure system behavior for integrating sensors and actuators into complex and context-aware systems. The Multi-Agent Systems paradigm enables engineering systems where components can be deployed more independently and operate towards achieving their design objectives. In this process, they act autonomously and interact with others to perform context-aware decision-making without human intervention at run time. In this paper, we present autonomous agents implemented as low-power nodes that perceive and act in a shared environment through sensors and actuators. The autonomous agents on these constrained devices locally reason and act on the environment, and wirelessly interact with each other to share knowledge and enable more context-aware system behavior. The capabilities of our low-power autonomous nodes are demonstrated in a light-control scenario with two Belief-Desire-Intention agents. Our experiments demonstrate that running autonomous and social agents in low-power platforms incurs little overhead, indicating their feasibility.
Abstract: We present HyperBrain, a hypermedia client that autonomously navigates hypermedia environments to achieve user goals specified in natural language. To achieve this, the client makes use of a large language model to decide which of the available hypermedia controls should be used within a given application context. In a demonstrative scenario, we show the client’s ability to autonomously select and follow simple hyperlinks towards a high-level goal, successfully traversing the hypermedia structure of Wikipedia given only the markup of the respective resources. We show that hypermedia navigation based on language models is effective, and propose that this should be considered as a step to create hypermedia environments that are used by autonomous clients alongside people.
Nowadays, people encounter personalized services predominantly on the Web using personal computers or mobile devices. The increasing capabilities and pervasiveness of Mixed Reality (MR) devices, however, prepare the ground for personalization possibilities that are increasingly interwoven with our physical reality, extending beyond these traditional devices. Such ubiquitous, personalized MR experiences bring the potential to make our lives and interactions with our environments more convenient and intuitive, and safer. However, these experiences will also be prone to amplify the known beneficial and, notably, harmful implications of personalization. For instance, the loss of shared world objects or the nourishing of “real-world filter bubbles” might have serious social and societal consequences, as they could lead to increasingly isolated experienced realities. In this work, we envision different modes for the sharing of personalized MR environments to counteract these potential harms of ubiquitous personalization. We furthermore illustrate the different modes with use cases and list open questions towards this vision.
Abstract: Based on the analysis of two real-world use cases for agriculture and manufacturing, we suggest that Hypermedia Multi-Agent Systems (MAS) are a viable option to interconnect and coordinate devices, services, machine-learning systems, and people in industrial scenarios. We propose and implement an architecture based on three components: an infrastructure that manages Web of Things environments and executes Hypermedia MAS, a visual development environment for programming agents, and a goal specification interface for end-users. While the infrastructure manages information flows between the system components and provides an environment for agents, the visual language enables domain experts to configure the behaviour of the system leveraging agent-oriented programming abstractions both at design time and run time, and the goal specification interface permits users to delegate goals to the running Hypermedia MAS while re-using domain vocabulary.
Abstract: Interactive systems can adapt to individual users to increase productivity, safety, or acceptance. Previous research focused on different factors, such as cognitive workload (CWL), to better understand and improve the human-computer or human-robot interaction (HRI). We present results of an HRI experiment that uses pupillometry to measure users’ responses to robot movements. Our results demonstrate a significant change in pupil dilation, indicating higher CWL, as a result of increased movement speed of an articulated robot arm. This might permit improved interaction ergonomics by adapting the behavior of robots or other devices to individual users at run time.
The website for the WISER flagship project which our group is a part of is now online: wiser-climate.com
On the website, we explain how the WISER project will develop a digital ecosystem to facilitate the consistent & trustworthy assessment of greenhouse gas (GHG) emissions from heterogeneous value chains. We expect that this solution, which is based on the integration of different data sources and standards, will empower companies and organizations in their search for pathways that can reduce their effects on the Earth’s climate.
We thank Innosuisse for their support of the WISER project and our interdisciplinary consortium of 15 partners who are working together to reach this ambitious goal – and have started producing first results. Stay tuned 🙂
We research systems of autonomous agents for the Web: software agents able to meet their design objectives through flexible autonomous use of Web resources, and to support transparent and trustworthy interactions with people. We explore application areas ranging from industrial manufacturing to counteracting online disinformation.
Deep Insights into Real-World Objects and Environments
We explore how real-world objects might provide insights into their state, behavior, and provenance for users, and how users might use these insights to make smarter choices when interacting with such objects.
Enhanced and Intuitive User-Object Interactions
Geographic boundaries and economic limitations can be overcome in virtual spaces.
The increasing number of connected (IoT) devices in everyday environments calls for methods that enable users to intuitively and homogeneously interact with them. Mixed Reality head-mounted displays, such as the Microsoft HoloLens 2, are a suitable means, since they allow users to perform hands-free interactions and can augment the physical space of a user. To provide a homogeneous way to interact with a plethora of devices made by different manufacturers, we propose the usage of the Web of Things Thing Description (TD), a standardized way of describing the programming interface of a device (Thing).
In this project, you will create a Unity application for the HoloLens 2, capable of:
accessing the TD of a Thing,
parsing the TD from JSON to C#,
semantically understanding the TD's content, and
providing means (e.g., buttons, levers, text fields) to interact with a Thing through Mixed Reality in an intuitive way.
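The project itself targets Unity and C#, but the TD-consumption steps above are language-agnostic. The following Python sketch illustrates accessing the affordances described in a W3C WoT Thing Description; the lamp TD and its URLs are hypothetical placeholders, not real endpoints:

```python
import json

# A minimal W3C WoT Thing Description (TD) for a hypothetical lamp.
# The href values are illustrative placeholders, not real endpoints.
SAMPLE_TD = """
{
  "@context": "https://www.w3.org/2019/wot/td/v1",
  "title": "MyLamp",
  "properties": {
    "status": {
      "type": "string",
      "forms": [{"href": "https://lamp.example.com/status"}]
    }
  },
  "actions": {
    "toggle": {
      "forms": [{"href": "https://lamp.example.com/toggle"}]
    }
  }
}
"""

def extract_affordances(td_json: str) -> dict:
    """Parse a TD and map each affordance name to its first form's target URL."""
    td = json.loads(td_json)
    affordances = {}
    for kind in ("properties", "actions", "events"):
        for name, desc in td.get(kind, {}).items():
            forms = desc.get("forms", [])
            if forms:
                affordances[(kind, name)] = forms[0]["href"]
    return affordances

affordances = extract_affordances(SAMPLE_TD)
print(affordances[("actions", "toggle")])  # https://lamp.example.com/toggle
```

In the actual project, a mapping like this would drive the generation of MR widgets (buttons, levers, text fields) for each affordance.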
If you are a Master's student, you will enhance this application by making it additionally capable of:
remotely accessing a Thing's properties,
remotely interacting with a Thing,
live-streaming a video of the Thing, in which the interaction with the Thing is embedded, to a remote user,
granting access to different parts of the Thing's interaction possibilities based on a user's permissions.
This project is great for you if you are:
Strongly interested in or already familiar with programming Mixed/Augmented Reality applications
Interested in Semantic Technologies
Interested in participating in state-of-the-art research
Interested in publishing your research work in academic venues
The proposed project connects context-based unique IDs (CUIDs) of visually detected objects to their semantic descriptions in Thing Descriptions (TDs). The project involves a visual perturbation to distinguish objects of similar appearance (to users, the execution of the approach might feel like activity in a ghost town) and assigns a relevant TD to each instance of an object for interaction. Additionally, TDs will be created and made available in a Knowledge Graph for discovery and use in the system.
The proposed project is a continuation of an earlier project in which, through computer vision algorithms, we were able to identify and temporally track objects and their relationships, which allowed us to describe the environment based on a contextual understanding of it. The input for the system is entirely visual and has no other sensory mode. In this situation, assigning a unique ID to the objects visible in the surroundings is challenging. The difficulty becomes evident when it comes to distinguishing between two similar-looking objects present in the scene. For this, we defined and implemented the concept of CUIDs, which are assigned to objects detected in a scene according to their relationships with other objects. To familiarize you with this previous implementation, we will provide you with the recently submitted paper on this project.
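As a minimal illustration of the CUID idea (not the actual algorithm from the earlier project), one could derive a stable identifier from an object's class label together with its detected relationships to other objects; all labels and relations below are hypothetical:

```python
import hashlib

def cuid(label: str, relations: set) -> str:
    """Derive a stable, context-based ID from an object's class label and its
    relations to other detected objects, e.g. ('left_of', 'monitor')."""
    canonical = label + "|" + ";".join(sorted(f"{r}:{o}" for r, o in relations))
    return hashlib.sha256(canonical.encode()).hexdigest()[:12]

# Two visually identical cups, distinguished purely by their context:
cup_a = cuid("cup", {("on", "desk"), ("left_of", "monitor")})
cup_b = cuid("cup", {("on", "shelf")})
print(cup_a != cup_b)  # True: same appearance, different context, different IDs
```

Because the relation set is sorted before hashing, the same object observed with the same context always yields the same CUID, which is what allows a TD to be reattached to the right instance across observations.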
A computer science student or excited to work on a technology project
Familiar with machine learning / Interested in computer vision
Interested in participating in state-of-the-art research
Interested in publishing your research work in academic venues
Given ever smarter environments, we are more and more exposed to devices that process (personal) data about us. Smart devices are no longer exclusive to high-end buildings and industrial settings; they are becoming ubiquitous in our everyday environments, such as educational and private spaces (e.g., households). While, for some of these devices, specifically the ones we own and have installed ourselves, we might have consented to the processing of our data, we do not have that choice for devices installed in public spaces. Moreover, previous research has shown that users tend to accept privacy policies even if they do not completely understand them, given that the perceived value of the services they want to use outweighs their data processing concerns (if any). In this project, you will create a Digital Privacy Companion that looks after its users by making them aware of the data processing activities of smart devices and by acting on their behalf to ensure that their privacy concerns are considered in the environment they are located in. To this end, the companion will be capable of:
Recognising, through computer vision, smart devices that process (personal) user data;
Semantically understanding the privacy policies of such observed devices;
Communicating in an understandable way (through Mixed Reality) the privacy processing activities of diverse smart devices;
Identifying “legal rights” that can be exercised by users while in an environment equipped with devices that process users' (personal) data;
Utilising uniform interfaces to act upon the smart devices that process users' (personal) data (i.e., actionable privacy).
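As a toy illustration of the “actionable privacy” idea, such a companion could compare a device's declared processing purposes against a user's stated preferences; the policy format, device name, and purpose vocabulary below are invented for this sketch:

```python
def conflicting_purposes(device_policy: dict, user_prefs: dict) -> list:
    """Return the processing purposes a device declares that the user rejects."""
    declared = set(device_policy.get("purposes", []))
    rejected = {p for p, allowed in user_prefs.items() if not allowed}
    return sorted(declared & rejected)

# Hypothetical device policy and user preference profile:
camera_policy = {"device": "smart-camera-01",
                 "purposes": ["security", "analytics", "advertising"]}
prefs = {"security": True, "analytics": False, "advertising": False}

print(conflicting_purposes(camera_policy, prefs))  # ['advertising', 'analytics']
```

In the envisioned system, the detected conflicts would be surfaced through the MR interface, and a uniform device interface would then be used to object to or restrict the conflicting processing.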
A computer science or electrical engineering student interested in privacy
Excited to work in a technology project that could have a positive societal impact
Familiar with machine learning / Interested in computer vision
Interested in or familiar with Mixed Reality
Interested in participating in state-of-the-art research
Interested in publishing your research work in academic venues
In our research group, we explore interactions among devices and people in ubiquitous computing environments. We are offering Master's and Bachelor's thesis topics across a wide range of fields. Typically, a thesis at our group includes conceptual as well as implementation work, and we strive to integrate our students into the rest of our research team.
If you would like to work with us, get in touch with Simon or another researcher in our team.
Kenan Bektas, Jannis Rene Strecker, Simon Mayer, Kimberly Garcia, Jonas Hermann, Kay Erik Jenss, Yasmine Sheila Antille, Marc Elias Soler
Conference or Workshop Item
Head-mounted Augmented Reality (AR) displays overlay digital information on physical objects. Through eye tracking, they allow novel interaction methods and provide insights into user attention, intentions, and activities. However, only a few studies have used gaze-enabled AR displays for human activity recognition (HAR). In an experimental study, we collected gaze data from 10 users on a HoloLens 2 (HL2) while they performed three activities (i.e., read, inspect, search). We trained machine learning models (SVM, Random Forest, Extremely Randomized Trees) with extracted features and achieved up to 98.7% activity-recognition accuracy. On the HL2, we provided users with AR feedback that is relevant to their current activity. We present the components of our system (GEAR), including a novel solution to enable the controlled sharing of collected data. We provide the scripts and anonymized datasets, which can be used as teaching material in graduate courses or for reproducing our findings.
Due to their low cost and ease of deployment, fiducial markers, primarily Quick Response (QR) codes, have gained widespread popularity over the past decade. Given their original use cases in logistics, these markers were created with the goal of transmitting a single static payload. We introduce QRUco as an approach to create cheap yet interactive fiducial markers. QRUco uses thermochromic paint to embed three secondary markers into QR code finder patterns. Users may interact with these markers through rubbing or pressing/touching, thereby changing the appearance of the marker while leaving the primary QR code intact. In this paper, we present the QRUco concept and demonstrate that our proposed approach is effective. We emphasize that QRUco markers can be created cheaply and that they do not require any specialized scanning equipment. We furthermore discuss limitations of the proposed approach and propose application domains that would benefit from QRUco.
Danai Vachtsevanou, Andrei Ciortea, Simon Mayer, Jérémy Lemee
Hypermedia APIs enable the design of reusable hypermedia clients that discover and exploit affordances on the Web. However, the reusability of such clients remains limited since they cannot plan and reason about interaction. This paper provides a conceptual bridge between hypermedia-driven affordance exploitation on the Web and methods for representing and reasoning about actions that have been extensively explored for Multi-Agent Systems (MAS) and, more broadly, Artificial Intelligence. We build on concepts and methods from Affordance Theory and Human-Computer Interaction that support interaction efficiency in open and evolvable environments to introduce signifiers as a first-class abstraction in Web-based MAS: Signifiers are designed with respect to the agent-environment context of their usage and enable agents with heterogeneous abilities to act and to reason about action. We define a formal model for the contextual exposure of signifiers in hypermedia environments that aims to drive affordance exploitation. We demonstrate our approach with a prototypical Web-based MAS where two agents with different reasoning abilities proactively discover how to interact with their environment by perceiving only the signifiers that fit their abilities. We show that signifier exposure can be inherently managed based on the dynamic agent-environment context towards facilitating effective and efficient interactions on the Web.
To enable people to interact more efficiently with virtual and physical services in their surroundings, it would be beneficial if information could more fluently be passed across digital and non-digital spaces. To this end, we propose to combine semantic technologies with Optical Character Recognition on an Augmented Reality (AR) interface to enable the semantic integration of (written) information located in our everyday environments with Internet of Things devices. We hence present SOCRAR, a system that is able to detect written information from a user’s physical environment while contextualizing this data through a semantic backend. The SOCRAR system enables in-band semantic translation on an AR interface, permits semantic filtering and selection of appropriate device interfaces, and provides cognitive offloading by enabling users to store information for later use. We demonstrate the feasibility of SOCRAR through the implementation of three concrete scenarios.
Aurelia Tamo-Larrieux, Andrei Ciortea, Simon Mayer
Intelligent machines surprise us with unexpected behaviors, giving rise to the question of whether such machines exhibit autonomous judgment. With judgment comes (the allocation of) responsibility. While it can be dangerous or misplaced to shift responsibility from humans to intelligent machines, current frameworks for thinking about the responsible and transparent distribution of responsibility among all involved stakeholders are lacking. A more granular understanding of the autonomy exhibited by intelligent machines is needed to promote a more nuanced public discussion and to allow laypersons as well as legal experts to think about, categorize, and differentiate among the capacities of artificial agents when distributing responsibility. To tackle this issue, we propose criteria that would support people in assessing the Machine Capacity of Judgment (MCOJ) of artificial agents. We conceive MCOJ drawing from the use of Human Capacity of Judgment (HCOJ) in legal discourse, where HCOJ criteria are legal abstractions to assess when decision-making and judgment by humans must lead to legally binding actions or inactions under the law. In this article, we show in what way these criteria can be transferred to machines.
Marc Blöchlinger, Jing Wu, Simon Mayer, Klaus Fuchs, Melanie Stoll, Lia Bally
The increasingly prevalent diet-related non-communicable diseases (NCDs) constitute a modern health pandemic. Higher nutrition literacy (NL) correlates with healthier diets, which in turn have favorable effects on NCDs. Assessing and classifying people's NL is helpful in tailoring the level of education required for disease self-management/empowerment and adequate treatment strategy selection. With recently introduced regulation in the European Union and beyond, it has become easier to leverage loyalty card data and enrich it with nutrition information about bought products. We present a novel system that utilizes such data to classify individuals into high- and low-NL classes using well-known machine learning (ML) models, thereby permitting, for instance, better targeting of educational measures to support the population-level management of NCDs. An online survey (n = 779) was conducted to assess individual NL levels and divide participants into high- and low-NL groups. Our results show that there are significant differences in NL between male and female participants, as well as between overweight and non-overweight individuals. No significant differences were found for the other demographic parameters that were investigated. Next, the loyalty card data of participants (n = 11) was collected from two leading Swiss retailers with the consent of the participants, and an ML system was trained to predict high or low NL for these individuals. Our best ML model, which utilizes the XGBoost algorithm and monthly aggregated baskets, achieved a Macro-F1-score of 0.89 at classifying NL. We hence show the feasibility of identifying individual NL levels from household loyalty card data leveraging ML models; however, due to the small sample size, the results need to be further verified with a larger sample.
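The paper reports a Macro-F1-score, which averages the per-class F1-scores so that the high-NL and low-NL classes count equally regardless of their sizes. As a quick illustration of the metric itself (not of the paper's XGBoost pipeline, and with toy predictions invented for this sketch):

```python
def f1(tp, fp, fn):
    """F1-score from true-positive, false-positive, and false-negative counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

def macro_f1(y_true, y_pred, classes=(0, 1)):
    """Average the per-class F1-scores, weighting both classes equally."""
    scores = []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        scores.append(f1(tp, fp, fn))
    return sum(scores) / len(scores)

# 1 = high NL, 0 = low NL; toy labels and predictions:
print(round(macro_f1([1, 1, 0, 0], [1, 0, 0, 0]), 4))  # 0.7333
```

Macro-averaging is the appropriate choice here because a classifier that simply predicted the majority class would score poorly on the minority class and thus be penalized.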
Jaehyun Ahn, Haifa Gaza, Jincheol Oh, Klaus Fuchs, Jing Wu, Simon Mayer, Jaewook Byun
We study ways to improve the shopping experience in a mixed-reality environment. In this demonstration, participants experience the concept of mixed-reality-enabled convenience stores for healthier purchases. Using a Microsoft HoloLens 2, anywhere can become a convenience store where physical and virtual foods can be recognized and deployed. The prototype helps customers make healthier purchases: for each product, its nutrition facts and even an intuitive metric, NutriScore, are available to customers. Furthermore, the visualization of a product's journey through supply chains across the globe guarantees transparency regarding its origin. The GS1 EPCIS standard and temporal reachability computation enable the demonstration.
Yousouf Taghzouti, Danai Vachtsevanou, Simon Mayer, Andrei Ciortea
Content negotiation aims at enabling a server to provide a client with a representation of a resource that meets the client's needs. However, client and server might desire to negotiate constraints that go beyond the media type or language of the alternative representation. This is especially true in the Semantic Web, as a resource can be described with a single media type but with different vocabularies (FOAF, schema.org, etc.), and may match specific patterns. In this paper, we propose an approach to increase the flexibility when negotiating a representation between client and server. Our approach follows the goals of the World Wide Web and uses a set of existing technologies: SHACL and profile-based negotiation. We define the mechanism (in terms of protocol and algorithm) for clients to announce their expectations and for servers to react and respond to them. We then explain, through a use case, how the same approach could be used in Web-based Multi-Agent Systems to help autonomous agents achieve their goals on the Web.
A higher degree of automation (and autonomization) of agricultural processes is expected to lead to productivity gains, especially in light of more environmentally friendly farming practices, while improving the safety of agricultural processes. To exploit the potential of this development, it should be possible to flexibly integrate devices and services within service mashups, and thereby enable them to provide higher-value services together. However, current farm automation tools instead tend to reinforce vertical functional silos and tight coupling within often proprietary systems that manage the farm environment information. We propose to describe the capabilities of individual devices and services and to interlink them across components and with the description of the farm environment. We posit that this will better enable autonomous agents (software agents as well as humans) to perform complex agricultural tasks while integrating heterogeneous devices and services across multiple vendors. Concretely, we describe, and demonstrate in a laboratory setting, the usage of a Knowledge Graph to describe the environment and equipment used to perform farming tasks. We show how a multi-agent-based automation system for smart farming uses this graph to reason about the state of the environment and the agents to plan the achievement of user-specified goals. Furthermore, we show how such knowledge-driven autonomous systems may include human agents alongside artificial agents as first-class citizens, towards realizing "Social Machines" in the agriculture domain.
Clement Guitton, Aurelia Tamo-Larrieux, Simon Mayer
The field of computational law has increasingly moved into the focus of the scientific community, with recent research analysing its issues and risks. In this article, we seek to draw up a structured and comprehensive list of societal issues that the deployment of automatically processable regulation could entail. We do this by systematically exploring attributes of the law that are being challenged through its encoding and by taking stock of the issues that current projects in this field raise. This article adds to the current literature not only by providing a needed framework to structure arising issues of computational law but also by bridging the gap between theoretical literature and practical implementation. Key findings of this article are: (1) the primary benefit sought when encoding law (efficiency vs. accessibility) matters with respect to the issues such an endeavor triggers; and (2) specific characteristics of a project (project type, degree of mediation by computers, and potential for divergence of interests) each impact the overall number of societal issues arising from the implementation of automatically processable regulation.
The Chair of Interaction- and Communication-based Systems offers a series of challenging but rewarding courses on the topics of Ubiquitous Computing and Web-based Autonomous Systems as well as introductory courses to Computer Systems and to Computer Science.
This course focuses on the hardware and software components, as well as the translation processes, that together form the foundation of modern computer systems. In doing so, we develop the necessary basic knowledge and understanding of how information is represented and processed in modern computer systems.
Introduction to Computer Systems and Networks
The goal of this hands-on course is to equip participants with the fundamental knowledge and tools required to design, implement, analyze, and make decisions about distributed computer systems, in particular in the context of the Internet of Things.
This course covers fundamental concepts, technologies, drivers, trends, and implications of Ubiquitous Computing. The course provides both an overview of UbiComp methods and deeper dives into foundational technologies of UbiComp, integrated with practical exercises, a seminar, and a course project. In addition, we discuss the implications of the proliferation of UbiComp for businesses and for society as a whole.
Collaborations and Funding
Your contact person
Institute of Computer Science (ICS-HSG)
Prof. Dr. Simon Mayer
CH-9000 St. Gallen