Publications Overview

International audit standards require the direct assessment of a financial statement’s underlying accounting transactions, referred to as journal entries. Recently, driven by advances in artificial intelligence, deep-learning-inspired audit techniques have emerged for auditing vast quantities of journal entry data. Nowadays, the majority of such methods rely on a set of specialized models, each trained for a particular audit task. At the same time, when conducting a financial statement audit, audit teams are confronted with (i) challenging time-budget constraints, (ii) extensive documentation obligations, and (iii) strict model interpretability requirements. As a result, auditors prefer to harness a single, preferably ‘multi-purpose’, model throughout an audit engagement. We propose a contrastive self-supervised learning framework designed to learn audit task-invariant accounting data representations to meet this requirement. The framework encompasses deliberately interacting data augmentation policies that utilize the attribute characteristics of journal entry data. We evaluate the framework on two real-world datasets of city payments and transfer the learned representations to three downstream audit tasks: anomaly detection, audit sampling, and audit documentation. Our experimental results provide empirical evidence that the proposed framework offers the ability to increase the efficiency of audits by learning rich and interpretable ‘multi-task’ representations.
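
The abstract above describes contrastive pre-training on augmented views of journal entries. As a rough illustration of the underlying mechanics, the following Python sketch pairs a toy attribute perturbation with a standard NT-Xent contrastive loss; the encoder, dimensions, and augmentation are invented stand-ins, not the paper’s actual architecture or policies.

```python
# Minimal sketch of attribute-aware contrastive pre-training on journal
# entries (illustrative only; the augmentation and encoder are assumptions).
import torch
import torch.nn.functional as F

def augment(entries: torch.Tensor, noise_scale: float = 0.1) -> torch.Tensor:
    """Toy augmentation: perturb numeric attributes; a real policy would
    also mask or swap categorical attributes of the journal entry."""
    return entries + noise_scale * torch.randn_like(entries)

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.5):
    """NT-Xent contrastive loss over two views of the same batch."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # 2N x d
    sim = z @ z.t() / tau                                # scaled cosine similarities
    n = z1.size(0)
    sim.masked_fill_(torch.eye(2 * n, dtype=torch.bool), float("-inf"))
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)                 # positives: paired views

encoder = torch.nn.Sequential(                           # stand-in encoder
    torch.nn.Linear(64, 128), torch.nn.ReLU(), torch.nn.Linear(128, 32))
batch = torch.randn(256, 64)                             # encoded journal entries
loss = nt_xent_loss(encoder(augment(batch)), encoder(augment(batch)))
loss.backward()
```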

The only constant in our world is change. Why is there not a field of science that explicitly studies continuous change? We propose the establishment of process science, a field that studies processes: coherent series of changes, both man-made and naturally occurring, that unfold over time and occur at various levels. Process science is concerned with understanding and influencing change. It entails discovering and understanding processes as well as designing interventions to shape them into desired directions. Process science is based on four key principles; it (1) puts processes at the center of attention, (2) investigates processes scientifically, (3) embraces the perspectives of multiple disciplines, and (4) aims to create impact by actively shaping the unfolding of processes. The ubiquitous availability of digital trace data, combined with advanced data analytics capabilities, offers new and unprecedented opportunities to study processes through multiple data sources, which makes process science very timely.

Francesca Zerbato, Ronny Seiger, Gemma Di Federico, Andrea Burattin, Barbara Weber
Conference or Workshop Item
Process mining techniques rely on the availability of event logs, where events have a certain granularity that is deemed appropriate for representing business activities. In this paper, we discuss why choosing a proper granularity level during preprocessing can be challenging and reflect on the implications that such a “fixed” view over the process bears for the analysis. Then, inspired by use cases in the context of user behavior analysis, we envision possible solutions that allow exploring and mining multiple granularity levels of process activities.

Janick Spirig, Kimberly Garcia, Simon Mayer
Forthcoming
Research in proactive agents capable of anticipating users’ needs has been conducted in different application areas, from agents aiming to help users accomplish their next task while using a text editor, to digital assistants that provide support to medical professionals in hospitals. Considering these works and following the rapid development in machine learning, the growing adoption of the IoT, the prevalence of pervasive computing, and the various innovative user interfaces that are becoming mainstream, we propose the creation of expert digital companions. Expert digital companions are agents that know the specifics of an environment, the available physical equipment in a space, as well as software components that might be reachable in such an environment. These expert digital companions can interact with personal digital companions, which know the preferences of a specific user. Thus, expert and personal digital companions collaborate to provide customised assistance to a user given her environment. In this paper, we present an implementation of an expert digital companion for employees working in an office and a shop-floor environment that takes advantage of: the decoupling that the W3C WoT Thing Description provides to interact with and control devices and other tools; a Knowledge Graph to provide richer descriptions of elements in the environment; a computer vision algorithm to perceive the physical world; and Mixed Reality as the medium to deliver assistance to workers.

Over the past years, topics ranging from climate change to human rights have seen increasing importance for investment decisions. Hence, investors (asset managers and asset owners) who wanted to incorporate these issues started to assess companies based on how they handle such topics. For this assessment, investors rely on specialized rating agencies that issue ratings along the environmental, social and governance (ESG) dimensions. Such ratings allow them to make investment decisions in favor of sustainability. However, rating agencies base their analysis on subjective assessments of sustainability reports, which are not provided by every company. Furthermore, due to the human labor involved, rating agencies currently face the challenge of scaling up their coverage in a timely manner. In order to alleviate these challenges and contribute to the overall goal of supporting sustainability, we propose a heterogeneous ensemble model to predict ESG ratings from fundamental data. This model is based on feedforward neural network, CatBoost, and XGBoost ensemble members. Given the public availability of fundamental data, the proposed method would allow cost-efficient and scalable creation of initial ESG ratings (also for companies without sustainability reporting). Using our approach, we are able to explain 54% of the variation in ratings (R²) using fundamental data and outperform prior work in this area.
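
To illustrate the kind of heterogeneous ensemble described above, here is a minimal Python sketch that averages predictions from feedforward neural network, XGBoost, and CatBoost members; the placeholder data, hyperparameters, and equal-weight averaging are assumptions rather than the authors’ exact setup.

```python
# Sketch of a heterogeneous ensemble for ESG-rating prediction from
# fundamentals (illustrative placeholder data and settings).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score
from xgboost import XGBRegressor
from catboost import CatBoostRegressor

X_train, y_train = np.random.rand(500, 20), np.random.rand(500)  # placeholder fundamentals
X_test, y_test = np.random.rand(100, 20), np.random.rand(100)

members = [
    MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=500),
    XGBRegressor(n_estimators=300, max_depth=4),
    CatBoostRegressor(iterations=300, depth=4, verbose=False),
]
for m in members:
    m.fit(X_train, y_train)

# Equal-weight averaging; the paper's ensemble may combine members differently.
y_pred = np.mean([m.predict(X_test) for m in members], axis=0)
print("R2:", r2_score(y_test, y_pred))
```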

We present an annotation approach to capturing emotional and cognitive empathy in student-written peer reviews on business models in German. We propose an annotation scheme that allows us to model emotional and cognitive empathy scores based on three types of review components. Also, we conducted an annotation study with three annotators based on 92 student essays to evaluate our annotation scheme. The obtained inter-rater agreement of α = 0.79 for the components and π = 0.41 for the empathy scores indicates that the proposed annotation scheme successfully guides annotators to substantial and moderate agreement, respectively. Moreover, we trained predictive models to detect the annotated empathy structures and embedded them in an adaptive writing support system for students to receive individual empathy feedback independent of an instructor, time, and location. We evaluated our tool in a peer learning exercise with 58 students and found promising results for perceived empathy skill learning, perceived feedback accuracy, and intention to use. Finally, we present our freely available corpus of 500 empathy-annotated, student-written peer reviews on business models and our annotation guidelines to encourage future research on the design and development of empathy support systems.
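
For readers unfamiliar with the reported agreement coefficients, the following sketch shows how a Krippendorff’s alpha value can be computed in Python with the `krippendorff` package; the toy ratings matrix is fabricated, and the package choice is ours, not necessarily the authors’ tooling.

```python
# How an inter-rater agreement coefficient like the reported alpha can be
# computed (toy data; three annotators, seven units, missing codes as NaN).
import numpy as np
import krippendorff

ratings = np.array([
    [1, 2, 3, 3, 2, 1, np.nan],
    [1, 2, 3, 3, 2, 2, 1],
    [np.nan, 2, 3, 3, 2, 1, 1],
])
alpha = krippendorff.alpha(reliability_data=ratings,
                           level_of_measurement="nominal")
print(f"Krippendorff's alpha: {alpha:.2f}")
```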

Process mining enables organizations to streamline and automate their business processes. The initial phases of process mining projects often include exploration activities aimed at familiarizing with the data and understanding the process. Despite being a crucial step of many analyses, exploration can be challenging and may demand targeted guidance and support. Still, little attention has been paid to understanding how process analysts approach this exploratory phase. With this goal in mind, in this paper, we report the results of an empirical study investigating exploration practices in process mining. Our study reveals that analysts follow different behavior patterns when exploring event logs and enact various strategies to understand the data and gain new insights. The results highlight the need for a deeper understanding of process mining practices and inform future research directions to better support process analysts and explain the cognitive processes underlying the analysis.

The technical development of artificial intelligence (AI) is shaping the transformation of our time. As the currently most successful subfield of AI, deep learning has led to groundbreaking advances in a wide variety of areas of life in recent years. In auditing, too, deep learning is increasingly regarded as a disruptive key technology. The following article introduces the basic workings of deep learning and discusses possible applications in auditing.

Industrial Internet of Things (IoT) systems are becoming increasingly complex with interconnected field devices. To cope with the complexity and to facilitate further interoperability between heterogeneous field devices, it is necessary to have a holistic understanding of different types of provenance information about field devices and associated processes. In this paper, we investigate the potential integration of such provenance with the W3C Web of Things and current industrial standards to realize more explainable, efficient, and safer industrial IoT systems.

Background: Insufficient physical activity and unhealthy diets are contributing to the rise in non-communicable diseases. Preventative mobile health (mHealth) interventions may enable reversing this trend, but present bias might reduce their effectiveness. Future-self avatar interventions have resulted in behavior change in related fields, yet evidence on whether such interventions can change health behavior is lacking. Objective: Our primary objectives are to investigate the impact of a future-self avatar mHealth intervention on physical activity and food purchasing behavior, and to examine the feasibility of a novel automated nutrition tracking system. We also aim to understand how this intervention impacts related attitudinal and motivational constructs. Methods: We conducted a 12-week parallel randomized controlled trial (RCT), followed by semi-structured interviews. German-speaking smartphone users aged ≥18 years living in Switzerland, and using at least one of the two leading Swiss grocery loyalty cards, were recruited for the trial. Data were collected from November 2020 to April 2021. The intervention group received the FutureMe intervention, a physical activity and food purchase tracking mobile phone application that uses a future-self avatar as the primary interface and provides participants with personalized food basket analysis and shopping tips. The control group received a conventional, text- and graphic-based primary interface intervention. We pioneered a novel system that tracks nutrition by leveraging digital receipts from loyalty card data and analyzing food purchases in a fully automated way. Data were consolidated in 4-week intervals, and non-parametric tests were conducted to test for within- and between-group differences. Results: We recruited 167 participants; 95 eligible participants were randomized into either the intervention (n=42) or control group (n=53). The median age was 44.00 years (IQR 19.00), and the gender ratio was balanced (female 52/95, 55%). Attrition was unexpectedly high, with only 30 participants completing the intervention, which negatively impacted the statistical power of our study. The FutureMe intervention led to directional, small increases in physical activity (median +242 steps/day) and to directional improvements in the nutritional quality of food purchases (median –1.28 British Food Standards Agency Nutrient Profiling System Dietary Index points) at the end of the intervention. Intrinsic motivation significantly increased (P=.03) in the FutureMe group but decreased in the control group. Outcome expectancy directionally increased in the FutureMe group but decreased in the control group. Leveraging loyalty card data to track the nutritional quality of food purchases was found to be a feasible and accepted fully automated nutrition tracking approach. Conclusions: Preventative future-self avatar mHealth interventions promise to encourage improvements in physical activity and food purchasing behavior in healthy population groups. A fully powered RCT is needed to confirm this preliminary evidence and to investigate how future-self avatars might be modified to reduce attrition, overcome present bias, and promote sustainable behavior change.
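
The fully automated nutrition tracking described in the Methods can be pictured as a join-and-aggregate over digital receipts. The pandas sketch below is a toy version with hypothetical column names and made-up per-product scores; the study itself uses the British FSA Nutrient Profiling System.

```python
# Toy illustration of automated nutrition tracking from loyalty-card
# receipts: join purchased items to a nutrient table and aggregate a
# basket-level score (all names and numbers are hypothetical).
import pandas as pd

receipts = pd.DataFrame({            # one row per purchased item
    "product_id": [101, 102, 103],
    "quantity":   [2, 1, 3],
})
nutrients = pd.DataFrame({           # per-product FSA-style points (made up)
    "product_id": [101, 102, 103],
    "fsa_points": [4, -2, 1],
})
basket = receipts.merge(nutrients, on="product_id")
basket["weighted"] = basket["quantity"] * basket["fsa_points"]
score = basket["weighted"].sum() / basket["quantity"].sum()
print(f"Quantity-weighted basket score: {score:.2f}")
```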

The Fourth Industrial Revolution is in full progress. An increasing number of manufacturing companies are using information technology to digitize their products and services by integrating production machines and production processes with enterprise information systems and digital processes. These interconnected smart machines, products, and processes build the Industrial Internet of Things (IIoT). However, if we want to integrate the functionality and data provided by the production machines and resources with information systems such as Enterprise Resource Planning (ERP) systems and business process management systems (BPMS), several interoperability, abstraction, and interaction issues must be addressed: machines have different non-standardized programming interfaces; are programmed in low-level programming languages (e.g., G code); are very complex; and operate mostly in isolation. Nevertheless, this bi-directional integration of the IIoT devices with information systems shows a lot of potential and benefits for both areas. On the one hand, live status data about production machines and processes can be fed directly into the ERP systems, e.g., to optimize the production plans or adapt the production plans in case of machine failures or downtimes. On the other hand, the BPMS can monitor, analyze, and control production processes, e.g., to adapt or reconfigure the production processes in case of exceptions on the shop floor. Realizing these flexible and dynamic processes and production lines is among the main goals of Industry 4.0 developments. In this blog post, we present a novel software stack to reduce the gap between the production shop floor and enterprise-level management systems. First, we discuss how to abstract and integrate production machines via web services that can be called from BPMS. Then, we showcase this software stack by using the Camunda Modeler and Camunda Platform to automate and execute exemplary production processes in our own smart factory.
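
As a minimal illustration of the first step, abstracting a machine behind a web service that a BPMS service task can invoke, consider the following Flask sketch; the endpoint, payload, and machine driver are hypothetical stand-ins for the stack described above.

```python
# Minimal sketch of wrapping a production-machine operation behind a web
# service callable from a BPMS task (hypothetical endpoint and driver).
from flask import Flask, jsonify, request

app = Flask(__name__)

def drill_workpiece(depth_mm: float) -> dict:
    """Placeholder for a low-level machine command (e.g., generated G code)."""
    return {"status": "done", "depth_mm": depth_mm}

@app.post("/machines/drill")
def drill():
    params = request.get_json(force=True)
    result = drill_workpiece(float(params.get("depth_mm", 5.0)))
    return jsonify(result)           # the BPMS service task consumes this JSON

if __name__ == "__main__":
    app.run(port=8080)
```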

Journal paper
The delegation of decisions to machines has revived the debate on whether and how technology should and can embed fundamental legal values within its design. While these debates have predominantly been occurring within the philosophical and legal communities, the computer science community has been eager to provide tools to overcome some challenges that arise from ‘hardwiring’ law into code. What emerged is the formation of different approaches to code that adapts to legal parameters. Within this article, we discuss the translational, system-related, and moral issues raised by implementing legal principles in software. While our findings focus on data protection law, they apply to the interlinking of code and law across legal domains. These issues point towards the need to rethink our current approach to design-oriented regulation and to prefer ‘soft’ implementations, where decision parameters are decoupled from program code and can be inspected and modified by users, over ‘hard’ approaches, where decisions are taken by opaque pieces of program code.
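
The ‘soft’ implementation style argued for above can be pictured as decision parameters living in a user-inspectable configuration file rather than in opaque program code. The following Python sketch illustrates that idea; the policy keys and file name are invented for illustration.

```python
# Sketch of a 'soft' implementation: legally relevant decision parameters
# live in a configuration the user can inspect and modify, not in code.
import json

POLICY_FILE = "data_policy.json"     # hypothetical, user-editable policy file

default_policy = {"store_audio_remotely": False, "retention_days": 30}

def load_policy() -> dict:
    try:
        with open(POLICY_FILE) as f:
            return {**default_policy, **json.load(f)}   # user overrides defaults
    except FileNotFoundError:
        return dict(default_policy)

def handle_audio(sample: bytes, policy: dict) -> None:
    if policy["store_audio_remotely"]:
        pass                         # upload only when the user opted in
    # otherwise the sample is processed locally and discarded

handle_audio(b"\x00\x01", load_policy())
```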

This paper describes a demonstration setup that integrates cognitive agents with the latest W3C standardization efforts for the Web of Things (WoT). The conceptual foundations of the implemented system are the integration of cognitive agent abstractions with W3C Web Things, which are generic abstractions of devices and virtual services that provide agents with various interaction affordances (e.g., actions, events). Together with the W3C WoT Scripting API, which is an ECMAScript-compatible API for W3C WoT environments, these standards allow JavaScript-based agents to be deployed and to operate in heterogeneous WoT environments. The agents can then be effectively distributed across the physical-virtual space in a write once, run anywhere manner: we deploy agents across a heterogeneous information system landscape that includes Web servers, browser-based front-ends, and constrained devices (microcontrollers). The deployment only requires minor platform-specific adjustments to consider resource and performance limitations on constrained devices. As a running example, we demonstrate a semiautonomous assembly scenario with human-in-the-loop support.

Lukas Malburg, Manfred-Peter Rieder, Ronny Seiger, Patrick Klein, Ralph Bergmann
Conference or Workshop Item
The production industry is in a transformation towards more autonomous and intelligent manufacturing. In addition to more flexible production processes to dynamically respond to changes in the environment, it is also essential that production processes are continuously monitored and completed in time. Video-based methods such as object detection systems are still in their infancy and rarely used as basis for process monitoring. In this paper, we present a framework for video-based monitoring of manufacturing processes with the help of a physical smart factory simulation model. We evaluate three state-of-the-art object detection systems regarding their suitability to detect workpieces and to recognize failure situations that require adaptations. In our experiments, we are able to show that detection accuracies above 90% can be achieved with current object detection methods.
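
The paper evaluates three specific object detection systems; as a generic illustration of the inference pattern such monitoring builds on, the sketch below runs an off-the-shelf torchvision detector on a frame and keeps only confident detections. The weights, threshold, and adaptation hook are illustrative assumptions.

```python
# Sketch of workpiece detection on a camera frame with a generic detector
# (illustrative; not one of the paper's evaluated systems per se).
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
frame = torch.rand(3, 480, 640)                # stand-in for a factory video frame

with torch.no_grad():
    detections = model([frame])[0]             # dict with boxes, labels, scores

keep = detections["scores"] > 0.9              # keep confident detections only
for box, label in zip(detections["boxes"][keep], detections["labels"][keep]):
    print(label.item(), box.tolist())          # hook for failure/adaptation logic
```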

In the field of Business Process Management (BPM), modeling business processes and related data is a critical issue since process activities need to manage data stored in databases. The connection between processes and data is usually handled at the implementation level, even if modeling both processes and data at the conceptual level should help designers in improving business process models and identifying requirements for implementation. Especially in data- and decision-intensive contexts, business process activities need to access data stored both in databases and data warehouses. In this paper, we complete our approach for defining a novel conceptual view that bridges process activities and data. The proposed approach allows the designer to model the connection between business processes and database models and define the operations to perform, providing interesting insights on the overall connected perspective and hints for identifying activities that are crucial for decision support.

Kimberly Garcia, Zaira Zihlmann, Simon Mayer, Aurelia Tamo-Larrieux
Forthcoming
Smart products, such as toy robots, must comply with multiple legal requirements of the country in which they are sold and used. Currently, compliance with the legal environment requires manually customizing products for different markets. In this paper, we explore a design approach for smart products that enforces compliance with aspects of the European Union’s data protection principles within a product’s firmware through a case study on a toy robot. This endeavour has taken us through an exchange between computer scientists and legal scholars to determine the relevant data flows, their processing needs, and the implementation decisions that would allow a device to operate while complying with EU data protection law. By designing a data-minimizing toy robot, we show how the variety, amount, and quality of data that is exposed, processed, and stored outside of a user’s premises can be considerably reduced while preserving the device’s functionality. In comparison with a robot designed using a traditional approach, where 90% of the collected types of information are stored with the data controller or a remote service, our proposed design leads to the mandatory exposure of only seven out of 15 collected types of information, all of which are legally required by the data controller to demonstrate consent.

In their contribution, Tanja Schneider, Klaus Fuchs, and Simon Mayer examine the increasing datafication of everyday practices and the new consumer responsibilities that accompany it, from the perspective of Science and Technology Studies (STS). They place consumption practices at the center, specifically how consumers’ shopping behavior is digitally captured and exploited and what room for maneuver consumers have. They ask how responsibilities shift, or can be shifted, in so-called data assemblages (Kitchin and Lauriault, 2015) and pursue in particular the question of what significance data activism could have in this context. As an application of the STS perspective to the case of datafication, they present the collaborative and interdisciplinary research project “FoodCoach”, which aims to develop possible ‘alternative’ imaginaries regarding the opportunities these data offer.

Surveillance in private and public spaces provides observers with information that can enhance protection and efficiency, but it usually infringes upon the privacy of individuals and groups. These informational privacy risks are centered on users’ perceived and design-induced threats. They cannot be removed completely but can be minimized using suitable anonymization techniques. To minimize users’ informational privacy threats, we designed a privacy-aware surveillance system that gives users leverage over the anonymization filters, allowing them to physically adjust the opaqueness of the camera lens used in the prototype according to their privacy requirements. We implemented our prototype in the context of office space surveillance, where the proposed solution considers privacy requirements in such environments to improve users’ trust in the surveillance system and reduce their privacy concerns.

Nowadays, organizations collect vast quantities of sensitive information in ‘Enterprise Resource Planning’ (ERP) systems, such as accounting-relevant transactions, customer master data, or strategic sales price information. The leakage of such information poses a severe threat to companies as the number of incidents and the reputational damage to those experiencing them continue to increase. At the same time, discoveries in deep learning research revealed that machine learning models could be maliciously misused to create new attack vectors. Understanding the nature of such attacks becomes increasingly important for the (internal) audit and fraud examination practice. This holds in particular for fraudulent data leakage using deep learning-based steganographic techniques that might remain undetected by state-of-the-art ‘Computer Assisted Audit Techniques’ (CAATs). In this work, we first introduce a real-world ‘threat model’ designed to leak sensitive accounting data. Second, we show that a deep steganographic process, constituted by three neural networks, can be trained to hide such data in unobtrusive ‘day-to-day’ images. Finally, we provide qualitative and quantitative evaluations on two publicly available real-world payment datasets.

Our everyday lives are increasingly pervaded by digital assistants and smart devices forming the Internet of Things (IoT). While user interfaces to directly monitor and control individual IoT devices are becoming more sophisticated and end-user friendly, applications to connect standalone IoT devices and create more complex IoT processes for automating and assisting users with repetitive tasks still require a high level of technical expertise and programming knowledge. Related approaches for process modelling in IoT mostly suggest extensions to complex modelling languages, require high levels of abstraction and technical knowledge, and rely on unintuitive tools. We present a novel approach for end-user-oriented, no-code IoT process modelling using Mixed Reality (MR) technology: HoloFlows. Users are able to explore the IoT environment and model processes among sensors and actuators as first-class citizens by simply "drawing" virtual wires among physical IoT devices. MR technology thereby facilitates the understanding of the physical contexts and relations among the IoT devices and provides a new and more intuitive way of modelling IoT processes. The results of a user study comparing HoloFlows with classical modelling approaches show an increased user experience and a decrease in the modelling knowledge and technical expertise required to create IoT processes.

Terms of use of a digital service are often framed in a binary way: Either one agrees to the service provider's data processing practices, and is granted access to the service, or one does not, and is denied the service. Many scholars have lamented these ‘take-it-or-leave-it’ situations, as this goes against the ideals of data protection law. To address this inadequacy, computer scientists and legal scholars have tried to come up with approaches to enable more privacy-friendly products and services. In this article, we call for a right to customize the processing of user data. Our arguments build upon technology-driven approaches as well as on the ideals of privacy by design and the now codified data protection by design and default norm within the General Data Protection Regulation. In addition, we draw upon the right to repair that is propagated to empower consumers and enable a more circular economy. We propose two technologically-oriented approaches, termed ‘variants’ and ‘alternatives’ that could enable the technical implementation of a right to customization. We posit that these approaches cannot be demanded without limitation, and that restrictions will depend on how reasonable a customization demand is.

In the past decade, brain and autonomic nervous system activity measurement received increasing attention in the study of software engineering (SE). This paper presents a systematic literature review (SLR) to survey the existing NeuroSE literature. Based on a rigorous search protocol, we identified 89 papers (hereafter denoted as NeuroSE papers). We analyzed these papers to develop a comprehensive understanding of who had published NeuroSE research and classified the contributions according to their type. The 47 articles presenting completed empirical research were analyzed in detail. The SLR revealed that the number of authors publishing NeuroSE research is still relatively small. The thematic focus so far has been on code comprehension, while code inspection, programming, and bug fixing have been less frequently studied. NeuroSE publications primarily used methods related to brain activity measurement (particularly fMRI and EEG), while methods related to the measurement of autonomic nervous system activity (e.g., pupil dilation, heart rate, skin conductance) received less attention. We also present details of how the empirical research was conducted, including stimuli and independent and dependent variables, and discuss implications for future research. The body of NeuroSE literature is still small. Yet, high-quality contributions exist, constituting a valuable basis for future studies.

This work has been motivated by the needs we discovered when analyzing real-world processes from the healthcare domain that have revealed high flexibility demands and complex temporal constraints. When trying to model these processes with existing languages, we learned that none of them was able to fully address these needs. This motivated us to design TConDec-R, a declarative process modeling language enabling the specification of complex temporal constraints. Enacting business processes based on declarative process models, however, introduces a high complexity due to the required optimization of objective functions, the handling of various temporal constraints, the concurrent execution of multiple process instances, the management of cross-instance constraints, and complex resource allocations. Consequently, advanced user support through optimized schedules is required when executing the instances of such models. In previous work, we suggested a method for generating an optimized enactment plan for a given set of process instances created from a TConDec-R model. However, this approach was not applicable to scenarios with uncertain demands in which the enactment of newly created process instances starts continuously over time, as in the considered healthcare scenarios. Here, the process instances to be planned within a specific timeframe cannot be considered in isolation from the ones planned for future timeframes. To be able to support such scenarios, this article significantly extends our previous work by generating optimized enactment plans under a rolling planning horizon. We evaluate the approach by applying it to a particularly challenging healthcare process scenario, i.e., the diagnostic procedures required for treating patients with ovarian carcinoma in a women’s hospital. The application of the approach to this sophisticated scenario allows avoiding constraint violations and effectively managing shared resources, which contributes to reducing the length of patient stays in the hospital.

The use of neurophysiological measurements to advance the design, development, use, acceptance, influence and adaptivity of information systems is receiving increasing attention. Within the field of education, neurophysiological measurements have commonly been used to capture a learner’s psychological constructs such as cognitive load, attention and emotion, which play an important role in student learning. This paper systematically examines the literature on the use of neurophysiological measurements in higher education. In particular, using a well-established Systematic Literature Review (SLR) method, we identified 83 papers reporting empirical evidence about the outcome of employing neurophysiological measurements within educational technologies in higher education. The findings of the SLR are divided into three main themes discussing the employed measurements, experimental settings and constructs and outcomes. Our findings identify that (1) electroencephalography and facial expression recognition are the predominantly employed types of measurement, (2) the majority of the experiments used a pre-experimental design, (3) attention and emotion are the two foremost cognitive and non-cognitive constructs under investigation, while less emphasis is paid to meta-cognitive constructs, and (4) the reported results mostly focus on monitoring learners’ states, which is not always the same as the intended purpose, such as developing an adaptive system. In broader terms, the review of the literature provides evidence of the effective use of neurophysiological measurements by educational technologies to enhance learning; however, a number of challenges and concerns related to the accuracy and validity of the captured construct, the intrusiveness of the employed instruments as well as ethical and privacy considerations have surfaced that need to be addressed before such technologies can be employed and adopted at scale.

The major driver of global warming has been identified as the anthropogenic release of greenhouse gas (GHG) emissions from industrial activities. The quantitative monitoring of these emissions is mandatory to fully understand their effect on the Earth's climate and to enforce emission regulations on a large scale. In this work, we investigate the possibility to detect and quantify industrial smoke plumes from globally and freely available multi-band image data from ESA's Sentinel-2 satellites. Using a modified ResNet-50, we can detect smoke plumes of different sizes with an accuracy of 94.3%. The model correctly ignores natural clouds and focuses on those imaging channels that are related to the spectral absorption from aerosols and water vapor, enabling the localization of smoke. We exploit this localization ability and train a U-Net segmentation model on a labeled sub-sample of our data, resulting in an Intersection-over-Union (IoU) metric of 0.608 and an overall accuracy for the detection of any smoke plume of 94.0%; on average, our model can reproduce the area covered by smoke in an image to within 5.6%. The performance of our model is mostly limited by occasional confusion with surface objects, the inability to identify semi-transparent smoke, and human limitations to properly identify smoke based on RGB-only images. Nevertheless, our results enable us to reliably detect and qualitatively estimate the level of smoke activity in order to monitor activity in industrial plants across the globe. Our data set and code base are publicly available.
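
The Intersection-over-Union metric reported for the segmentation model has a compact definition that the following NumPy sketch makes concrete; the binary masks are toy data, not the Sentinel-2 labels.

```python
# Sketch of the Intersection-over-Union metric used to evaluate the
# segmentation model (binary smoke masks; toy data only).
import numpy as np

def iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """IoU for binary masks: |A ∩ B| / |A ∪ B|."""
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return float(inter) / float(union) if union else 1.0

pred = np.zeros((120, 120), dtype=bool);  pred[20:80, 20:80] = True
truth = np.zeros((120, 120), dtype=bool); truth[30:90, 30:90] = True
print(f"IoU = {iou(pred, truth):.3f}")   # overlap of the two toy masks
```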

Kimberly Garcia, Simon Mayer, Alessandro Ricci, Andrei Ciortea
Book Section
Artificial companions and digital assistants have been investigated for several decades, from research in the autonomous agents and social robots areas to the highly popular voice-enabled digital assistants that are already in widespread use (e.g., Siri and Alexa). Although these companions provide valuable information and services to people, they remain reactive entities that operate in isolated environments waiting to be asked for help. The Web is now emerging as a uniform hypermedia fabric that interconnects everything (e.g., devices, physical objects, abstract concepts, digital services), thereby enabling unprecedented levels of automation and comfort in our professional and private lives. However, this also results in increasingly complex environments that are becoming unintelligible to everyday users. To ameliorate this situation, we envision proactive Digital Companions that take advantage of this new generation of pervasive hypermedia environments to provide assistance and protection to people. In addition to Digital Companions perceiving a person’s environment through vision and sound, pervasive hypermedia environments provide them with means to further contextualize the situation by exploiting information from available connected devices, and give them access to rich knowledge bases that allow them to derive relevant actions and recommendations.

Digital research instruments are disruptive technologies with the potential to fundamentally change the way we conduct research in the historical image sciences, in the history of architecture and design, or in material culture studies, and even the conceptions we form of our disciplines. Artificial intelligence and machine learning methods are used in the image-oriented digital humanities for image analysis and for classifying visual features. While image analysis was long dominated by low-threshold methods such as clustering based on color distributions or local contrasts, research instruments now aim to identify, classify, or even semantically segment image features. In other words, they offer, for the first time, computational semi-automatic access to the level of representation inherent in the image itself. The Neoclassica project aims to provide researchers with access to instruments and methods for the computer-assisted analysis of material culture, more precisely of the structural features and aesthetic forms of neoclassical artifacts. In doing so, we want to support both hypothesis-driven and exploratory approaches to the medium of the image. Currently, we focus on interior art, in particular furniture and interiors, but also architecture and their respective pictorial representations. In this contribution, we present our approach to analyzing depictions of rooms, ranging from the classification of representations of individual artifacts to the identification of objects in complex contemporary depictions of interiors. Finally, we outline possible applications of the technology and perspectives for further research.

Christian Janisch, Agnes Koschmider, Massimo Mecella, Barbara Weber, Andrea Burattin, Claudio Di Ciccio, Giancarlo Fortino, Avigdor Gal, Udo Kannengiesser, Francesco Leotta, Felix Mannhardt, Andrea Marrella, Jan Mendling, Andreas Oberweis, Manfred Reichert, Stefanie Rinderle-Ma, Estefania Serral, WenZhan Song, Jianwen Su, Victoria Torres, Matthias Weidlich, Mathias Weske, Liang Zhang
Journal paper
The Internet of Things (IoT) refers to a network of connected devices collecting and exchanging data over the Internet. These things can be artificial or natural and interact as autonomous agents forming a complex system. In turn, Business Process Management (BPM) was established to analyze, discover, design, implement, execute, monitor and evolve collaborative business processes within and across organizations. While the IoT and BPM have been regarded as separate topics in research and practice, we strongly believe that, on the one hand, the management of IoT applications will strongly benefit from BPM concepts, methods and technologies; on the other hand, the IoT poses challenges that will require enhancements and extensions of the current state of the art in the BPM field. In this paper, we question to what extent these two paradigms can be combined, and we discuss the emerging challenges and intersections from a research and practitioner’s point of view in terms of complex software systems development.

This demo paper presents an infrastructure to enable real-time monitoring of process events (i.e., telemetry). The infrastructure relies on the MQTT protocol, which ensures minimum logging overhead. The paper presents a Java library for producing (i.e., logging) and consuming events, built on top of HiveMQ. Additionally, a prototype dashboard to display basic statistics is reported and described.
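
The library described here is Java on HiveMQ; as a language-neutral illustration of the producer/consumer telemetry idea over MQTT, the following Python sketch uses paho-mqtt (1.x-style API). Broker, topic, and event fields are placeholders.

```python
# Sketch of process-event telemetry over MQTT (producer and consumer);
# illustrative stand-in for the Java/HiveMQ library described above.
import json, time
import paho.mqtt.client as mqtt

BROKER, TOPIC = "localhost", "process/events"      # illustrative endpoint

def on_message(client, userdata, msg):
    event = json.loads(msg.payload)                # consume a process event
    print("received activity:", event["activity"])

consumer = mqtt.Client()                           # paho-mqtt 1.x-style client
consumer.on_message = on_message
consumer.connect(BROKER, 1883)
consumer.subscribe(TOPIC)
consumer.loop_start()

producer = mqtt.Client()
producer.connect(BROKER, 1883)
producer.publish(TOPIC, json.dumps(                # log (produce) a process event
    {"case": "order-42", "activity": "pack", "ts": "2021-06-01T10:00:00Z"}))

time.sleep(1)                                      # give the broker time to deliver
consumer.loop_stop()
```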

Ronny Seiger, Uwe Aßmann, Dominik Grzelak, Mikhail Belov, Paul Riedel, Ariel Podlubne, Wanqi Zhao, Jens Kerber, Jonas Mohr, Fabio Espinosa, Tim Schwartz
Journal paper
In the tactile Internet of the future, the physical and virtual worlds will increasingly converge, even across multiple locations. Robotic co-working, the collaboration of robots and humans, is gaining importance in the context of the Internet of Things (IoT) and cyber-physical systems (CPS). In this work, we present a case study conducted on the occasion of the 50th anniversary of computer science studies in Germany. In it, humans and robots distributed across multiple locations work together in a virtual co-working cell to construct a “physical and virtual computer science Germany campus 2069”. Using sensors, actuators, and software, the cyber-physical campus is built step by step, controlled by a workflow. With the help of a Mixed Reality application, this process can be experienced immersively, i.e., in an interactive and explorable way, independent of a specific location.

While artificial intelligence is increasingly and lastingly changing the way various professions work, such a transformation is still in its infancy in auditing. The following article outlines application opportunities and challenges of machine learning (ML), a subfield of artificial intelligence, in the context of financial statement audits.

The Internet of Things (IoT) enables software-based access to vast amounts of data streams from sensors measuring physical and virtual properties of smart devices and their surroundings. While sophisticated means for the control and data analysis of single IoT devices exist, a more process-oriented view of IoT systems is often missing. Such a lack of process awareness hinders the development of process-based systems on top of IoT environments and the application of process mining techniques for process analysis and optimization in IoT. We propose a framework for the stepwise correlation and composition of raw IoT sensor streams with events and activities on a process level based on Complex Event Processing (CEP). From this correlation, we derive refined process event logs (possibly with ambiguities) that can be used for process analysis at runtime (i.e., online). We discuss the framework using examples from a smart factory.
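
A drastically simplified version of the stream-to-activity lifting idea can be written as a windowed rule over a sensor stream, as in the Python sketch below; the window size, threshold, and activity names are invented, and a real implementation would use a CEP engine.

```python
# Toy sketch of correlating raw sensor readings with process-level events:
# aggregate readings in a time window and emit an activity when a simple
# pattern holds (all thresholds and names are invented).
from collections import deque

WINDOW, THRESHOLD = 5, 3             # window size and pattern condition

def lift_to_activity(readings):
    """Emit a process event when most readings in the window are active."""
    if sum(r["active"] for r in readings) >= THRESHOLD:
        return {"activity": "milling", "start": readings[0]["ts"]}
    return None

window = deque(maxlen=WINDOW)
for ts, active in enumerate([0, 1, 1, 1, 1, 0, 0]):   # simulated stream
    window.append({"ts": ts, "active": active})
    if len(window) == WINDOW:
        event = lift_to_activity(list(window))
        if event:
            print("process event:", event)
```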

With rapid advances in machine learning (ML), more of this technology is being deployed in the real world, interacting with us and our environment. One of the most widely applied uses of ML is facial recognition, as it runs on millions of devices. While useful for some people, others perceive it as a threat when used by public authorities. This discrepancy and the lack of policy increase the uncertainty in the ML community about the future direction of facial recognition research and development. In this paper, we present results from a cross-national survey about public acceptance, privacy, and discrimination concerns regarding the use of facial recognition technology (FRT) in public. This study provides insights into opinions towards FRT from China, Germany, the United Kingdom (UK), and the United States (US), which can serve as input for policy makers and legal regulators.

Andrei Ciortea, Simon Mayer, Simon Bienz, Fabien Gandon, Olivier Corby
Journal paper
Recent W3C recommendations for the Web of Things (WoT) and the Social Web are turning hypermedia into a homogeneous information fabric that interconnects heterogeneous resources: devices, people, information resources, abstract concepts, etc. The integration of multi-agent systems with such hypermedia environments now provides a means to distribute autonomous behavior in worldwide pervasive systems. A central problem then is to enable autonomous agents to discover heterogeneous resources in worldwide and dynamic hypermedia environments. This is a problem in particular in WoT environments that rely on open standards and evolve rapidly—thus requiring agents to adapt their behavior at run time in pursuit of their design objectives. To this end, we developed a hypermedia search engine for the WoT that allows autonomous agents to perform approximate search queries in order to retrieve relevant resources in their environment in (weak) real time. The search engine crawls dynamic WoT environments to discover and index device metadata described with the W3C WoT Thing Description, and exposes a SPARQL endpoint that agents can use for approximate search. To demonstrate the feasibility of our approach, we implemented a prototype application for the maintenance of industrial robots in worldwide manufacturing systems. The prototype demonstrates that our semantic hypermedia search engine enhances the flexibility and agility of autonomous agents in a social and ubiquitous Web.
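
To give a flavor of how an agent might use the exposed SPARQL endpoint to discover Thing Descriptions, here is a Python sketch with SPARQLWrapper; the endpoint URL is hypothetical, and only the W3C TD vocabulary namespace is real.

```python
# Sketch of an agent querying a WoT search engine's SPARQL endpoint for
# Thing Descriptions (endpoint URL is a placeholder).
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://example.org/wot-search/sparql")  # hypothetical
sparql.setQuery("""
    PREFIX td: <https://www.w3.org/2019/wot/td#>
    SELECT ?thing ?title WHERE {
        ?thing a td:Thing ; td:title ?title .
    } LIMIT 10
""")
sparql.setReturnFormat(JSON)
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["thing"]["value"], row["title"]["value"])
```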

Mobile devices with high-speed connectivity provide us with access to gigabytes of high resolution images, videos, and graphics. For instance, a head-worn display can be used to augment the real view with digitized visual information (Figure 1). Eye tracking helps us to understand how we process visual information and it allows us to develop gaze-enabled interactive systems. For instance, foveated gaze-contingent displays (GCDs) dynamically adjust the level of detail according to the user’s point-of-interest. We propose that GCDs should take users’ attention and cognitive load into account, augment their vision with contextual information and provide personalized assistance in solving visual tasks. Grounded on existing literature, we identified several research questions that need to be discussed before developing such displays.

As smart products move between jurisdictions, their program code becomes subject to various and sometimes incompatible legal environments. Manufacturers are therefore required to create customized product variants for specific markets, which induces variance management overhead and undermines economies of scale. In our article, we investigate how the legal environment of a smart product interacts with the programming of that product. Specifically, we are interested in how the General Data Protection Regulation (GDPR) principles can be mapped to legally relevant aspects of toy robots. These are of particular interest as they contain different kinds of privacy-sensitive sensors such as microphones and cameras, continuously process (personal) data, can easily be moved from one jurisdiction to another, and affect individuals, including vulnerable ones such as children, in their homes. The core goal of this article is to develop a methodology to map the GDPR’s principles to the program code of a GoPiGo3 toy robot. We describe this methodology and demonstrate a concrete mapping to GoPiGo3 (as a prototype). In this prototype, the robot’s functionality has been extended to include external face recognition services, as well as external data processing for direct advertising purposes, in order to situate it within the research domain of privacy and especially privacy by design. In this article, we describe how the mapping can be done in principle and plan to make first steps towards automating the mapping process. The main research questions we analyze are: How can we describe data protection law’s core principles in a way that system and software engineers can implement such norms into device firmware? What difficulties arise, and what implementation decisions have to be taken, in order to enable encoding data protection principles into systems? What are the benefits and limits of our methodology for mapping the data protection principles into a device’s program code, specifically regarding the automation potential of this process? To answer our research questions, we start by sketching the data flow emanating from GoPiGo3 and the fictional, yet realistic, additional services within our application scenario. We then investigate upon what “lawful grounds” the data processing of the device takes place (Art. 5(1)(a) GDPR) to determine what consent must be given, and by whom (depending on the legislation of EU member states on children’s consent), and which other legal grounds can justify the processing (Art. 6 GDPR). The GoPiGo3 provides information and obtains consent from the user in accordance with Art. 13 of the GDPR given the robot and user context (e.g., location and applicable jurisdiction, user age, etc.). We dive into (legally) contested terminologies, such as the term ‘fairness’, and determine their mapping into GoPiGo3’s program code. We then determine which data items are collected by the software and for which purposes that data is actually processed, in order to determine which data items are required and which ones are not. Upon this basis, we discuss how the principles of purpose limitation, data minimization, and storage restrictions should be implemented in device code.

We demonstrate how autonomous goal-directed agents can exploit hypermedia to acquire and execute new behaviors at run time. In addition to behaviors programmed into the agents, in our system agents can discover and reuse behaviors extracted from machine-readable resource manuals. Such manuals can be published by developers, synthesized by agents through automated planning, or even specified by human users at run time. Agents can then discover and use physical and virtual resources in flexible ways, which allows them to better cope with the rapid evolution of open and dynamic Web environments.

Thiemo Wambsganss, Christina Niklaus, Matthias Cetto, Matthias Söllner, Siegfried Handschuh, Jan Marco Leimeister
Journal paper
Recent advances in Natural Language Processing (NLP) bear the opportunity to analyze the argumentation quality of texts. This can be leveraged to provide students with individual and adaptive feedback in their personal learning journey. To test if individual feedback on students' argumentation helps them write more convincing texts, we developed AL, an adaptive IT tool that provides students with feedback on the argumentation structure of a given text. In a study with 54 students, we compared AL to a proven argumentation support tool. We found that students using AL wrote more convincing texts with better formal quality of argumentation than those using the traditional approach. The measured technology acceptance provided promising results for using this tool as a feedback application in different learning settings. The results suggest that learning applications based on NLP may be beneficial for developing better writing and reasoning skills in students in traditional learning settings.

This study examines how firms in the autonomous driving industry that pursue artificial intelligence-based innovations attempt to appropriate returns from these innovations. It intends to contribute to the literature on value appropriation from innovation by investigating the extent to which firms can and do keep the key components of AI systems (data set, training approach, and model) private versus publishing them. Using a qualitative research design, we establish that there are regulatory, technical, and enforcement aspects to the components that prompt firms to either protect or publish.

Hasan S. Bank, Naveen K. Singa, Clifford Chou, Xiaofan Wu, Giuseppe F. Rigano, Simon Mayer
Digital Resource
A computer-based system and method is disclosed for spatial programming of a robotic device. A mixed reality tool may select an object related to one or more interactive tasks for the robotic device. A spatial location of the object may be located including Cartesian coordinates and orientation coordinates of the object. An application program may be executed to operate the robotic device using the spatial location. Based on initial parameters, execution of the one or more tasks by the robotic device on the object related to a skill set may be simulated in a mixed reality environment.

Recent standards are turning hypermedia into a homogeneous information fabric that interconnects everything: physical objects, documents, abstract concepts, etc. Clients can not only browse, but also observe and act on this hypermedia fabric. We present our vision for a new class of multi-agent systems that are: 1) aligned with the Web architecture to inherit the properties of the Web as a world-wide, open, and long-lived system, and 2) transparent and accountable to support acceptance by people.

Process design artifacts have been increasingly used to guide the modeling of business processes. To support users in designing and understanding process models, different process artifacts have been combined in several ways, leading to the emergence of so-called “hybrid process artifacts”. While many hybrid artifacts have been proposed in the literature, little is known about how they can actually support users in practice. To address this gap, this work investigates the way users engage with hybrid process artifacts during comprehension tasks. In particular, we focus on a hybrid representation of DCR Graphs (DCR-HR) combining a process model, textual annotations and an interactive simulation. Following a qualitative approach, we conduct a multi-granular analysis exploiting process mining, eye-tracking techniques, and verbal data analysis to scrutinize the reading patterns and the strategies adopted by users when being confronted with DCR-HR. The findings of the coarse-grained analysis provide important insights about the behavior of domain experts and IT specialists and show how users’ backgrounds and task types change the use of hybrid process artifacts. As for the fine-grained analysis, users’ behavior was classified as goal-directed or exploratory, and different strategies of using the interactive simulation were identified. In addition, a progressive switch from an exploratory behavior to a goal-directed behavior was observed. These insights pave the way for an improved development of hybrid process artifacts and delineate several directions for future work.

Process modeling plays a central role in the development of today’s process-aware information systems both on the management level (e.g., providing input for requirements elicitation and fostering communication) and on the enactment level (providing a blue-print for process execution and enabling simulation). The literature comprises a variety of process modeling approaches proposing different modeling languages (i.e., imperative and declarative languages) and different types of process artifact support (i.e., process models, textual process descriptions, and guided simulations). However, the use of an individual modeling language or a single type of process artifact is usually not enough to provide a clear and concise understanding of the process. To overcome this limitation, a set of so-called “hybrid” approaches combining languages and artifacts have been proposed, but no common grounds have been set to define and categorize them. This work aims at providing a fundamental understanding of these hybrid approaches by defining a unified terminology, providing a conceptual framework and proposing an overarching overview to identify and analyze them. Since no common terminology has been used in the literature, we combined existing concepts and ontologies to define a “Hybrid Business Process Representation” (HBPR). Afterwards, we conducted a Systematic Literature Review (SLR) to identify and investigate the characteristics of HBPRs combining imperative and declarative languages or artifacts. The SLR resulted in 30 articles which were analyzed. The results indicate the presence of two distinct research lines and show common motivations driving the emergence of HBPRs, a limited maturity of existing approaches, and diverse application domains. Moreover, the results are synthesized into a taxonomy classifying different types of representations. Finally, the outcome of the study is used to provide a research agenda delineating the directions for future work.

Conference or Workshop Item
Process models provide a blueprint for process execution and an indispensable tool for process management. Bearing in mind their trending use for requirement elicitation, communication and improvement of business processes, the need for understandable process models becomes a must. In this paper, we propose a research model to investigate the impact of modularization on the understandability of declarative process models. We design a controlled experiment supported by eye-tracking, electroencephalography (EEG) and galvanic skin response (GSR) to appraise the understandability of hierarchical process models through measures such as comprehension accuracy, response time, attention, cognitive load and cognitive integration.

Imperative process models have become immensely popular. However, their use is usually limited to rigid and repetitive processes. Considering the inherent flexibility of most real-world processes and the increased need for managing knowledge-intensive processes, the adoption of declarative languages becomes more pertinent than ever. While the quality of imperative models has been extensively investigated in the literature, little is known about the dimensions affecting the quality of declarative models. This work takes a further stride toward investigating the quality of declarative models. Following the theory of Personal Construct Psychology (PCT), our research introduces a novel method within the Business Process Management (BPM) field to explore quality in the eyes of expert modelers. The findings of this work summarize the dimensions defining the quality of declarative models. The outcome shows the potential of PCT as a basis to discover quality dimensions and advances our understanding of quality in declarative process models.

Data visualizations are versatile tools for gaining cognitive access to large amounts of data and for making complex relationships in data understandable. This paper proposes a method for assessing data visualizations according to the purposes they fulfill in domain-specific data analysis settings. We introduce a framework that gets configured for a given analysis domain and allows choosing data visualizations in a methodically justified way, based on analysis questions that address different aspects of the data to be analyzed. Based on the concepts addressed by the analysis questions, the framework provides systematic guidance for determining which data visualizations are able to serve which conceptual analysis interests. In a second step of the method, we propose to follow a data-driven approach and to experimentally compare alternative data visualizations for a particular analytical purpose. More specifically, we propose to use eye tracking to support justified decisions about which of the data visualizations selected with the help of the framework are most suitable for assessing the analysis domain in a cognitively efficient way. We demonstrate our approach of how to come from analytical purposes to data visualizations using the example domain of Process Modeling Behavior Analysis. The analyses are performed on the background of representative analysis questions from this domain.

Thomas Hildebrandt, Amine Abbad Andaloussi, Lars Rune Christensen, Søren Debois, Nicklas Pape Healy, Hugo A. López, Morten Marquard, Naja L. Holten Møller, Anette Chelina Møller Petersen, Tijs Slaats, Barbara Weber
Conference or Workshop Item
We report on a new approach to co-creating adaptive case management systems jointly with end-users, developed in the context of the Effective co-created and compliant adaptive case Management Systems for Knowledge Workers (EcoKnow.org) research project. The approach is based on knowledge from prior ethnographic field studies and research in the declarative Dynamic Condition Response (DCR) technology for model-driven design of case management systems. The approach was tested in an operational environment jointly with the Danish municipality of Syddjurs by conducting a service-design project and implementing an open-source case manager tool and a new highlighter tool for mapping between textual specifications and the DCR notation. The design method and technologies were evaluated in understandability studies with end-users. The studies showed that the development could be done in just six months, and that the new highlighter tool, in combination with the traditional design and simulation tools, supports domain experts in formalising and providing traceability between their interpretations of textual specifications and the formal models.
