
Publications Overview

Digital research instruments are disruptive technologies with the potential to fundamentally change how we conduct research in the historical image sciences, in the history of architecture and design, or in material culture studies, and even which conceptions we form of our disciplines. In the image-related digital humanities, artificial intelligence and machine learning methods are used for image analysis and for the classification of visual features. While the field of image analysis was long dominated by low-threshold methods such as forming clusters based on color distributions or local contrasts, research instruments now aim to identify, classify, or even semantically segment image features. In other words, they offer for the first time semi-automatic computational access to the level of representation inherent in the image. The Neoclassica project aims to provide researchers with access to instruments and methods for the computer-assisted analysis of material culture, more precisely of the structural features and aesthetic forms of Neoclassical artifacts. We thereby want to support both hypothesis-driven and exploratory approaches to the medium of the image. At present, we focus on interior art, in particular furniture and interiors, but also on architecture and its pictorial representation. In this contribution, we present our approach to analyzing depictions of rooms, starting with the classification of depictions of individual artifacts and moving on to the identification of objects in complex contemporary interior scenes. Finally, we outline possible applications of the technology and perspectives for further research.
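The color-distribution clustering mentioned above as a low-threshold baseline can be sketched in a few lines. The binning scheme and the nearest-centroid assignment below are illustrative choices only, not the Neoclassica pipeline:

```python
import numpy as np

def color_histogram(image, bins=4):
    """Flatten an RGB image (H x W x 3, values 0-255) into a joint
    color histogram, the kind of low-level feature long used as a
    baseline for clustering image collections."""
    pixels = image.reshape(-1, 3)
    hist, _ = np.histogramdd(pixels, bins=(bins, bins, bins),
                             range=((0, 256),) * 3)
    hist = hist.flatten()
    return hist / hist.sum()  # normalise so images of any size compare

def nearest_cluster(hist, centroids):
    """Assign a histogram to its closest centroid (one k-means step)."""
    dists = np.linalg.norm(centroids - hist, axis=1)
    return int(np.argmin(dists))
```

Two images dominated by different colors land in different clusters, while semantically different motifs with similar palettes do not, which is exactly the limitation that motivates the feature-classification methods described above.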

Christian Janisch, Agnes Koschmider, Massimo Mecella, Barbara Weber, Andrea Burattin, Claudio Di Ciccio, Giancarlo Fortino, Avigdor Gal, Udo Kannengiesser, Francesco Leotta, Felix Mannhardt, Andrea Marrella, Jan Mendling, Andreas Oberweis, Manfred Reichert, Stefanie Rinderle-Ma, Estefania Serral, WenZhan Song, Jianwen Su, Victoria Torres, Matthias Weidlich, Mathias Weske, Liang Zhang
Journal paper
The Internet of Things (IoT) refers to a network of connected devices collecting and exchanging data over the Internet. These things can be artificial or natural and interact as autonomous agents forming a complex system. In turn, Business Process Management (BPM) was established to analyze, discover, design, implement, execute, monitor and evolve collaborative business processes within and across organizations. While the IoT and BPM have been regarded as separate topics in research and practice, we strongly believe that the management of IoT applications will strongly benefit from BPM concepts, methods and technologies on the one hand; on the other hand, the IoT poses challenges that will require enhancements and extensions of the current state-of-the-art in the BPM field. In this paper, we question to what extent these two paradigms can be combined, and we discuss the emerging challenges and intersections from a research and practitioner's point of view in terms of complex software systems development.

This demo paper presents an infrastructure to enable real-time monitoring of process events (i.e., telemetry). The infrastructure relies on the MQTT protocol, which ensures minimum logging overhead. The paper presents a Java library for producing (i.e., logging) and consuming events, built on top of HiveMQ. Additionally, a prototype dashboard to display basic statistics is reported and described.
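The core idea, publishing each process event as a lightweight MQTT message, can be sketched independently of the paper's Java library. The topic layout below (application/process/activity/lifecycle) is an assumption for illustration; the actual library defines its own scheme:

```python
import json
import time

def encode_process_event(process_id, activity, lifecycle, attributes=None):
    """Build an MQTT topic and JSON payload for one process event.
    The 'pmtelemetry/...' topic layout is a hypothetical example."""
    topic = f"pmtelemetry/{process_id}/{activity}/{lifecycle}"
    payload = json.dumps({
        "timestamp": time.time(),
        "attributes": attributes or {},
    })
    return topic, payload

# Publishing is then a one-liner with any MQTT client, e.g. paho-mqtt
# (shown as a comment so the sketch does not require a live broker):
#   client.publish(*encode_process_event("order-7", "ship", "complete"))
```

Because MQTT brokers handle fan-out, a dashboard simply subscribes to `pmtelemetry/#` and aggregates whatever arrives, which is what keeps the logging side so cheap.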

Ronny Seiger, Uwe Aßmann, Dominik Grzelak, Mikhail Belov, Paul Riedel, Ariel Podlubne, Wanqi Zhao, Jens Kerber, Jonas Mohr, Fabio Espinosa, Tim Schwartz
Journal paper
In the future Tactile Internet, the physical and virtual worlds converge ever more closely, even across multiple locations. Robotic co-working, the collaboration of robots and humans, is gaining importance in the context of the Internet of Things (IoT) and cyber-physical systems (CPS). In this work, we present a case study conducted on the occasion of the 50th anniversary of computer science education in Germany. In it, humans and robots distributed across several locations collaborate in a virtual co-working cell to construct a "physical and virtual computer science Germany campus 2069". Using sensors, actuators, and software, the cyber-physical campus is erected step by step, controlled by a workflow. With a mixed-reality application, this process can be experienced immersively, i.e., in an interactive, explorable way, independently of any specific location.

While artificial intelligence is increasingly and lastingly changing the way various professions work, such a transformation is still in its early stages in the field of financial auditing. The following contribution outlines possible applications and challenges of machine learning (ML), a subfield of artificial intelligence, in the context of financial statement audits.

The Internet of Things (IoT) enables software-based access to vast amounts of data streams from sensors measuring physical and virtual properties of smart devices and their surroundings. While sophisticated means for the control and data analysis of single IoT devices exist, a more process-oriented view of IoT systems is often missing. Such a lack of process awareness hinders the development of process-based systems on top of IoT environments and the application of process mining techniques for process analysis and optimization in IoT. We propose a framework for the stepwise correlation and composition of raw IoT sensor streams with events and activities on a process level based on Complex Event Processing (CEP). From this correlation we derive refined process event logs (possibly with ambiguities) that can be used for process analysis at runtime (i.e., online). We discuss the framework using examples from a smart factory.
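The stepwise lifting from raw readings to process-level events can be illustrated with a minimal correlator: raw events are grouped by a correlation key (a machine id here) and composed into one activity-level event once every reading of a declared pattern has been observed. Real CEP engines add time windows, ordering constraints, and the ambiguity handling mentioned above; the pattern and event names below are invented for the sketch:

```python
from collections import defaultdict

# Hypothetical pattern: these raw sensor events jointly indicate
# one execution of the process activity "produce".
PATTERNS = {"produce": {"motor_on", "item_detected", "motor_off"}}

class StreamCorrelator:
    def __init__(self):
        self.seen = defaultdict(set)   # machine id -> raw event types seen

    def observe(self, machine, event_type):
        """Consume one raw reading; return a process-level event once a
        pattern completes, else None."""
        self.seen[machine].add(event_type)
        for activity, required in PATTERNS.items():
            if required <= self.seen[machine]:
                self.seen[machine] -= required   # reset for the next instance
                return {"case": machine, "activity": activity}
        return None
```

Emitted events carry a case id and activity label, i.e., exactly the fields a process event log needs for discovery and conformance checking.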

With rapid advances in machine learning (ML), more of this technology is being deployed into the real world, interacting with us and our environment. One of the most widely deployed applications of ML is facial recognition, as it runs on millions of devices. While being useful for some people, others perceive it as a threat when used by public authorities. This discrepancy and the lack of policy increase the uncertainty in the ML community about the future direction of facial recognition research and development. In this paper we present results from a cross-national survey about public acceptance, privacy, and discrimination regarding the use of facial recognition technology (FRT) in public. This study provides insights into the opinion towards FRT from China, Germany, the United Kingdom (UK), and the United States (US), which can serve as input for policy makers and legal regulators.

Andrei Ciortea, Simon Mayer, Simon Bienz, Fabien Gandon, Olivier Corby
Journal paper
Recent W3C recommendations for the Web of Things (WoT) and the Social Web are turning hypermedia into a homogeneous information fabric that interconnects heterogeneous resources: devices, people, information resources, abstract concepts, etc. The integration of multi-agent systems with such hypermedia environments now provides a means to distribute autonomous behavior in worldwide pervasive systems. A central problem then is to enable autonomous agents to discover heterogeneous resources in worldwide and dynamic hypermedia environments. This is a problem in particular in WoT environments that rely on open standards and evolve rapidly—thus requiring agents to adapt their behavior at run time in pursuit of their design objectives. To this end, we developed a hypermedia search engine for the WoT that allows autonomous agents to perform approximate search queries in order to retrieve relevant resources in their environment in (weak) real time. The search engine crawls dynamic WoT environments to discover and index device metadata described with the W3C WoT Thing Description, and exposes a SPARQL endpoint that agents can use for approximate search. To demonstrate the feasibility of our approach, we implemented a prototype application for the maintenance of industrial robots in worldwide manufacturing systems. The prototype demonstrates that our semantic hypermedia search engine enhances the flexibility and agility of autonomous agents in a social and ubiquitous Web.
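What distinguishes the approximate search described above from exact discovery is that partial matches are kept and ranked. The toy index below stands in for crawled Thing Description metadata; the scoring by overlap of requested semantic types is an illustrative simplification of what the SPARQL endpoint computes:

```python
def match_score(required_types, thing_description):
    """Score a (toy) W3C WoT Thing Description by the fraction of
    requested semantic types (@type values) it carries."""
    types = set(thing_description.get("@type", []))
    if not required_types:
        return 0.0
    return len(set(required_types) & types) / len(required_types)

def approximate_search(required_types, index, threshold=0.5):
    """Return TD titles ranked by score, keeping partial matches."""
    scored = [(match_score(required_types, td), td["title"]) for td in index]
    return [title for score, title in sorted(scored, reverse=True)
            if score >= threshold]
```

An agent looking for a robot with a gripper still discovers a gripper-less robot as a lower-ranked fallback, which is what lets it adapt when its environment evolves.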

Mobile devices with high-speed connectivity provide us with access to gigabytes of high resolution images, videos, and graphics. For instance, a head-worn display can be used to augment the real view with digitized visual information (Figure 1). Eye tracking helps us to understand how we process visual information and it allows us to develop gaze-enabled interactive systems. For instance, foveated gaze-contingent displays (GCDs) dynamically adjust the level of detail according to the user’s point-of-interest. We propose that GCDs should take users’ attention and cognitive load into account, augment their vision with contextual information and provide personalized assistance in solving visual tasks. Grounded on existing literature, we identified several research questions that need to be discussed before developing such displays.

As smart products move between jurisdictions, their program code becomes subject to various and sometimes incompatible legal environments. Manufacturers are therefore required to create customized product variants for specific markets, which induces variance management overhead and undermines economies of scale. In our article we investigate how the legal environment of a smart product interacts with the programming of that product. Specifically, we are interested in how the General Data Protection Regulation (GDPR) principles can be mapped to legally relevant aspects of toy robots. These are of particular interest as they contain different kinds of privacy-sensitive sensors such as microphones and cameras, are continuously processing (personal) data, can easily be moved from one jurisdiction to another, and affect individuals, including vulnerable ones such as children, in their homes. The core goal of this article is to develop a methodology to map the GDPR's principles to the program code of a GoPiGo3 toy robot. We describe this methodology and demonstrate a concrete mapping to GoPiGo3 (as a prototype). In this prototype, the robot's functionality has been extended to include external face recognition services, as well as external data processing for direct advertising purposes, in order to situate it within the research domain of privacy, and especially privacy by design. In this article, we describe how the mapping can be done in principle and plan to make first steps towards automating the mapping process. The main research questions we analyze are: How can we describe data protection law's core principles in a way that system and software engineers can implement such norms into device firmware? What difficulties arise, and what implementation decisions have to be taken, in order to enable encoding data protection principles into systems? What are the benefits and limits of our methodology to map the data protection principles into a device's program code, specifically regarding the automation potential of this process? To answer our research questions, we start by sketching the data flow emanating from GoPiGo3 and the fictional, yet realistic, additional services within our application scenario. We then investigate upon what "lawful grounds" the data processing of the device takes place (Art. 5(1)(a) GDPR) to determine what consent must be given, and by whom (depending on EU member states' legislation on children's consent), and which other legal grounds can justify the processing (Art. 6 GDPR). The GoPiGo3 provides information and obtains consent from the user in accordance with Art. 13 of the GDPR, given the robot and user context (e.g., location and applicable jurisdiction, user age, etc.). We dive into (legally) contested terminologies, such as the term 'fairness', and determine their mapping into GoPiGo3's program code. We then determine which data items are collected by the software and for which purposes that data is actually processed, in order to determine which data items are required and which ones are not. On this basis we discuss how the principles of purpose limitation, data minimization, and storage restrictions should be implemented in device code.

Conference or Workshop Item
We demonstrate how autonomous goal-directed agents can exploit hypermedia to acquire and execute new behaviors at run time. In addition to behaviors programmed into the agents, in our system agents can discover and reuse behaviors extracted from machine-readable resource manuals. Such manuals can be published by developers, synthesized by agents through automated planning, or even specified by human users at run time. Agents can then discover and use physical and virtual resources in flexible ways, which allows them to better cope with the rapid evolution of open and dynamic Web environments.

Thiemo Wambsganss, Christina Niklaus, Matthias Cetto, Matthias Söllner, Siegfried Handschuh, Jan Marco Leimeister
Journal paper
Recent advances in Natural Language Processing (NLP) bear the opportunity to analyze the argumentation quality of texts. This can be leveraged to provide students with individual and adaptive feedback in their personal learning journey. To test if individual feedback on students' argumentation helps them write more convincing texts, we developed AL, an adaptive IT tool that provides students with feedback on the argumentation structure of a given text. In a study with 54 students, we compared AL to a proven argumentation support tool. We found that students using AL wrote more convincing texts with better formal quality of argumentation compared to the ones using the traditional approach. The measured technology acceptance yielded promising results for using this tool as a feedback application in different learning settings. The results suggest that learning applications based on NLP may be beneficial for developing better writing and reasoning skills in students in traditional learning settings.

This study examines how firms in the autonomous driving industry that pursue artificial intelligence-based innovations attempt to appropriate returns from these innovations. It intends to contribute to the literature on value appropriation from innovation by investigating the extent to which firms can and do keep the key components of AI systems (data set, training approach, and model) private versus publishing them. Using a qualitative research design, we establish that there are regulatory, technical, and enforcement aspects to the components that prompt firms to either protect or publish.

Hasan S. Bank, Naveen K. Singa, Clifford Chou, Xiaofan Wu, Giuseppe F. Rigano, Simon Mayer
Digital Resource
A computer-based system and method is disclosed for spatial programming of a robotic device. A mixed reality tool may select an object related to one or more interactive tasks for the robotic device. A spatial location of the object may be located including Cartesian coordinates and orientation coordinates of the object. An application program may be executed to operate the robotic device using the spatial location. Based on initial parameters, execution of the one or more tasks by the robotic device on the object related to a skill set may be simulated in a mixed reality environment.
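The claim's notion of a "spatial location" (Cartesian coordinates plus orientation coordinates) bound to an interactive task can be captured in a small data model. The field names, the roll/pitch/yaw convention, and the skill labels below are illustrative assumptions, not the patented system:

```python
from dataclasses import dataclass

@dataclass
class SpatialLocation:
    """Cartesian position plus an orientation (roll/pitch/yaw here;
    a quaternion would serve equally well)."""
    x: float
    y: float
    z: float
    roll: float = 0.0
    pitch: float = 0.0
    yaw: float = 0.0

@dataclass
class SpatialTask:
    """An interactive task bound to an object at a located pose, the
    unit a mixed-reality tool could hand to the robot program."""
    object_name: str
    location: SpatialLocation
    skill: str  # e.g. "pick" or "place" (hypothetical skill labels)
```

A mixed-reality tool would populate such records from the user's object selection, and a simulator could replay the resulting task list before the physical robot executes it.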

Recent standards are turning hypermedia into a homogeneous information fabric that interconnects everything: physical objects, documents, abstract concepts, etc. Clients can not only browse, but also observe and act on this hypermedia fabric. We present our vision for a new class of multi-agent systems that are: 1) aligned with the Web architecture to inherit the properties of the Web as a world-wide, open, and long-lived system, and 2) transparent and accountable to support acceptance by people.

Journal paper
Process design artifacts have been increasingly used to guide the modeling of business processes. To support users in designing and understanding process models, different process artifacts have been combined in several ways, leading to the emergence of so-called "hybrid process artifacts". While many hybrid artifacts have been proposed in the literature, little is known about how they can actually support users in practice. To address this gap, this work investigates the way users engage with hybrid process artifacts during comprehension tasks. In particular, we focus on a hybrid representation of DCR Graphs (DCR-HR) combining a process model, textual annotations and an interactive simulation. Following a qualitative approach, we conduct a multi-granular analysis exploiting process mining, eye-tracking techniques, and verbal data analysis to scrutinize the reading patterns and the strategies adopted by users when being confronted with DCR-HR. The findings of the coarse-grained analysis provide important insights about the behavior of domain experts and IT specialists and show how users' background and task type influence the use of hybrid process artifacts. As for the fine-grained analysis, users' behavior was classified into goal-directed and exploratory, and different strategies of using the interactive simulation were identified. In addition, a progressive switch from an exploratory behavior to a goal-directed behavior was observed. These insights pave the way for an improved development of hybrid process artifacts and delineate several directions for future work.

Process modeling plays a central role in the development of today’s process-aware information systems both on the management level (e.g., providing input for requirements elicitation and fostering communication) and on the enactment level (providing a blue-print for process execution and enabling simulation). The literature comprises a variety of process modeling approaches proposing different modeling languages (i.e., imperative and declarative languages) and different types of process artifact support (i.e., process models, textual process descriptions, and guided simulations). However, the use of an individual modeling language or a single type of process artifact is usually not enough to provide a clear and concise understanding of the process. To overcome this limitation, a set of so-called “hybrid” approaches combining languages and artifacts have been proposed, but no common grounds have been set to define and categorize them. This work aims at providing a fundamental understanding of these hybrid approaches by defining a unified terminology, providing a conceptual framework and proposing an overarching overview to identify and analyze them. Since no common terminology has been used in the literature, we combined existing concepts and ontologies to define a “Hybrid Business Process Representation” (HBPR). Afterwards, we conducted a Systematic Literature Review (SLR) to identify and investigate the characteristics of HBPRs combining imperative and declarative languages or artifacts. The SLR resulted in 30 articles which were analyzed. The results indicate the presence of two distinct research lines and show common motivations driving the emergence of HBPRs, a limited maturity of existing approaches, and diverse application domains. Moreover, the results are synthesized into a taxonomy classifying different types of representations. Finally, the outcome of the study is used to provide a research agenda delineating the directions for future work.

Conference or Workshop Item
Process models provide a blueprint for process execution and an indispensable tool for process management. Bearing in mind their trending use for requirement elicitation, communication and improvement of business processes, the need for understandable process models becomes a must. In this paper, we propose a research model to investigate the impact of modularization on the understandability of declarative process models. We design a controlled experiment supported by eye-tracking, electroencephalography (EEG) and galvanic skin response (GSR) to appraise the understandability of hierarchical process models through measures such as comprehension accuracy, response time, attention, cognitive load and cognitive integration.

Data visualizations are versatile tools for gaining cognitive access to large amounts of data and for making complex relationships in data understandable. This paper proposes a method for assessing data visualizations according to the purposes they fulfill in domain-specific data analysis settings. We introduce a framework that gets configured for a given analysis domain and allows choosing data visualizations in a methodically justified way, based on analysis questions that address different aspects of the data to be analyzed. Based on the concepts addressed by the analysis questions, the framework provides systematic guidance for determining which data visualizations are able to serve which conceptual analysis interests. In a second step of the method, we propose to follow a data-driven approach and to experimentally compare alternative data visualizations for a particular analytical purpose. More specifically, we propose to use eye tracking to support justified decisions about which of the data visualizations selected with the help of the framework are most suitable for assessing the analysis domain in a cognitively efficient way. We demonstrate our approach of how to come from analytical purposes to data visualizations using the example domain of Process Modeling Behavior Analysis. The analyses are performed on the background of representative analysis questions from this domain.

Thomas Hildebrandt, Amine Abbad Andaloussi, Lars Rune Christensen, Søren Debois, Nicklas Pape Healy, Hugo A. López, Morten Marquard, Naja L. Holten Møller, Anette Chelina Møller Petersen, Tijs Slaats, Barbara Weber
Conference or Workshop Item
We report on a new approach to co-creating adaptive case management systems jointly with end-users, developed in the context of the Effective co-created and compliant adaptive case Management Systems for Knowledge Workers (EcoKnow.org) research project. The approach is based on knowledge from prior ethnographic field studies and research in the declarative Dynamic Condition Response (DCR) technology for model-driven design of case management systems. The approach was tested in an operational environment jointly with the Danish municipality of Syddjurs by conducting a service-design project and implementing an open source case manager tool and a new highlighter tool for mapping between textual specifications and the DCR notation. The design method and technologies were evaluated in understandability studies with end-users. The study showed that the development could be done in just six months, and that the new highlighter tool, in combination with the traditional design and simulation tools, supports domain experts in formalising their interpretations of textual specifications and provides traceability between those interpretations and the formal models.

Understanding how developers interact with different software artifacts when performing comprehension tasks has the potential to improve developers' productivity. In this paper, we propose a method to analyze eye-tracking data using process mining to find distinct reading patterns of how developers interacted with the different artifacts. To validate our approach, we conducted an exploratory eye-tracking study involving 11 participants. We applied our method to investigate how developers interact with different artifacts during domain and code understanding tasks. To contextualize the reading patterns and to better understand the perceived benefits and challenges participants associated with the different artifacts and their choice of reading patterns, we complemented the eye-tracking data with data obtained from think-aloud. The study used behavior-driven development, a development practice that is increasingly used in Agile software development contexts, as a setting. The study shows that our method can be used to explore developers' behavior at an aggregated level and identify behavioral patterns at varying levels of granularity.
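The bridge from eye tracking to process mining can be sketched as follows: once raw gaze points have been mapped to artifacts, each participant's fixation sequence becomes a trace, and counting directly-follows pairs yields the basic relation most process-discovery algorithms start from. This is an illustrative reduction, not the paper's full method:

```python
from collections import Counter

def directly_follows(fixation_traces):
    """Turn per-participant fixation sequences over artifacts into
    directly-follows counts. Consecutive fixations on the same
    artifact are collapsed, since they encode dwelling, not a switch."""
    dfg = Counter()
    for trace in fixation_traces:
        for a, b in zip(trace, trace[1:]):
            if a != b:
                dfg[(a, b)] += 1
    return dfg
```

Frequent edges in the resulting graph (e.g., repeated switching between a user story and its test code) are exactly the aggregated reading patterns the study examines.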

Josep Sanchez-Ferreres, Luis Delicado, Amine Abbad Andaloussi, Andrea Burattin, Guillermo Calderon-Ruiz, Barbara Weber, Josep Carmona, Lluís Padró
Journal paper
The creation of a process model is primarily a formalization task that faces the challenge of constructing a syntactically correct entity which accurately reflects the semantics of reality and is understandable to the model reader. This paper proposes a framework called Model Judge, focused on the two main actors in the process of learning process model creation: novice modelers and instructors. For modelers, the platform enables the automatic validation of process models created from a textual description, providing explanations about quality issues in the model. Model Judge can provide diagnostics regarding model structure, writing style, and semantics by aligning annotated textual descriptions to models. For instructors, the platform facilitates the creation of modeling exercises by providing an editor to annotate the main parts of a textual description, which is empowered with natural language processing (NLP) capabilities so that the annotation effort is minimized. So far, around 300 students in process modeling courses at five different universities around the world have used the platform. The feedback gathered from some of these courses shows good potential in helping students improve their learning experience, which might, in turn, impact process model quality and understandability. Moreover, our results show that instructors can benefit from getting insights into the evolution of modeling processes, including arising quality issues of single students, but also from discovering tendencies in groups of students. Although the framework has been applied to process model creation, it could be extrapolated to other contexts where the creation of models based on a textual description plays an important role.

The audit of financial statements is designed to collect reasonable assurance that an issued statement is free from material misstatement ('true and fair presentation'). International audit standards require the assessment of a statement's underlying accounting-relevant transactions, referred to as 'journal entries', to detect potential misstatements. To efficiently audit the increasing quantities of such journal entries, auditors regularly conduct an 'audit sampling', i.e., a sample-based assessment of a subset of these journal entries. However, the task of audit sampling is often conducted early in the overall audit process, where the auditor might not be aware of all generative factors and their dynamics that resulted in the journal entries in scope of the audit. To overcome this challenge, we propose the use of Vector Quantised-Variational Autoencoder (VQ-VAE) neural networks to learn a representation of journal entries able to provide a comprehensive 'audit sampling' to the auditor. We demonstrate, based on two real-world city payment datasets, that such artificial neural networks are capable of learning a quantised representation of accounting data. We show that the learned quantisation (i) uncovers the latent factors of variation and (ii) can be utilised as a highly representative audit sample in financial statement audits.
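The quantisation at the core of a VQ-VAE, and how it supports sampling, can be sketched without the neural network around it: each encoded journal entry is snapped to its nearest codebook vector, and picking one entry per discrete code yields a sample that covers every learned latent factor. The tiny codebook and entry ids below are invented for illustration:

```python
import numpy as np

def quantise(latents, codebook):
    """Map each encoder output (N x D) to the index of its nearest
    codebook vector (K x D), the vector-quantisation step of a VQ-VAE."""
    d = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    codes = d.argmin(axis=1)
    return codes, codebook[codes]

def representative_sample(entry_ids, codes):
    """Audit-sampling sketch: keep the first journal entry seen for
    every discrete code, so each latent factor is represented once."""
    sample = {}
    for eid, c in zip(entry_ids, codes):
        sample.setdefault(int(c), eid)
    return sample
```

In the real model the codebook is learned jointly with encoder and decoder; here it is fixed so the assignment step stays inspectable.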

Artificial intelligence offers companies new opportunities to innovate processes, products, services, and business models, and to transform existing ones. Professionally managing artificial intelligence in companies is therefore becoming a central task in order to realize the new value propositions with productive systems. This contribution presents the St. Gallen Management Model for AI (SGMM-KI) and identifies seven fields of action for the operational use of AI: (1) management of artificial intelligence, (2) organization of operations, (3) legal design, (4) regulation and compliance, (5) life-cycle management, (6) management of the technology infrastructure, and (7) cybersecurity. The article provides concrete first steps and is aimed primarily at members of executive boards, IT and innovation managers, and project leads who want to realize the new value propositions of AI in operational practice.

Imperative process models have become immensely popular. However, their use is usually limited to rigid and repetitive processes. Considering the inherent flexibility of most real-world processes and the increased need for managing knowledge-intensive processes, the adoption of declarative languages becomes more pertinent than ever. While the quality of imperative models has been extensively investigated in the literature, little is known about the dimensions affecting the quality of declarative models. This work takes a significant stride toward investigating the quality of declarative models. Following the theory of Personal Construct Psychology (PCT), our research introduces a novel method within the Business Process Management (BPM) field to explore quality through the eyes of expert modelers. The findings of this work summarize the dimensions defining the quality of declarative models. The outcome shows the potential of PCT as a basis to discover quality dimensions and advances our understanding of quality in declarative process models.

In this paper, we present a novel annotation approach to capture claims and premises of arguments and their relations in student-written persuasive peer reviews on business models in German. We propose an annotation scheme, based on annotation guidelines, that allows modeling claims and premises as well as support and attack relations for capturing the structure of argumentative discourse in student-written peer reviews. We conduct an annotation study with three annotators on 50 persuasive essays to evaluate our annotation scheme. The obtained inter-rater agreement of α = 0.57 for argument components and α = 0.49 for argumentative relations indicates that the proposed annotation scheme successfully guides annotators to moderate agreement. Finally, we present our freely available corpus of 1,000 persuasive student-written peer reviews on business models and our annotation guidelines to encourage future research on the design and development of argumentative writing support systems for students.
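The annotation scheme described above (components as text spans, relations as directed support/attack edges) maps naturally onto a small data model. The field names and the example review text are illustrative, not taken from the corpus:

```python
from dataclasses import dataclass

@dataclass
class ArgumentComponent:
    """A claim or premise marked as a character span in a peer review."""
    cid: str
    kind: str      # "claim" or "premise"
    start: int     # character offsets into the review text
    end: int

@dataclass
class ArgumentRelation:
    """A directed support/attack edge between two components."""
    source: str    # cid of the premise doing the supporting/attacking
    target: str    # cid of the component it relates to
    kind: str      # "support" or "attack"

def component_text(review, comp):
    """Recover the annotated span from the raw review text."""
    return review[comp.start:comp.end]
```

Stand-off spans like these keep the original text untouched, which is what makes inter-rater agreement such as Krippendorff's α computable over the annotators' span and edge sets.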

Nowadays, organizations collect vast quantities of accounting-relevant transactions, referred to as 'journal entries', in 'Enterprise Resource Planning' (ERP) systems. The aggregation of those entries ultimately defines an organization's financial statement. To detect potential misstatements and fraud, international audit standards demand that auditors directly assess journal entries using 'Computer Assisted Audit Techniques' (CAATs). At the same time, discoveries in deep learning research revealed that machine learning models are vulnerable to 'adversarial attacks'. It also became evident that such attack techniques can be misused to generate 'Deepfakes' designed to directly attack the perception of humans by creating convincingly altered media content. The research of such developments and their potential impact on the finance and accounting domain is still in its early stage. We believe that it is of vital relevance to investigate how such techniques could be maliciously misused in this sphere. In this work, we show an adversarial attack against CAATs using deep neural networks. We first introduce a real-world 'threat model' designed to camouflage accounting anomalies such as fraudulent journal entries. Second, we show that adversarial autoencoder neural networks are capable of learning a human-interpretable model of journal entries that disentangles the entries' latent generative factors. Finally, we demonstrate how such a model can be maliciously misused by a perpetrator to generate robust 'adversarial' journal entries that mislead CAATs.
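The mechanics of an adversarial perturbation can be shown on a deliberately simple stand-in: a fast-gradient-sign step against a linear anomaly scorer playing the role of a CAAT. This is a named simplification, the paper attacks autoencoder-based detection with learned generative factors, not a linear model, and all numbers below are toy values:

```python
import numpy as np

def fgsm_perturb(entry, weights, bias, eps=0.1):
    """Fast-gradient-sign sketch: nudge each numeric feature of a
    journal entry against the gradient of the anomaly score so the
    entry looks less suspicious to the (linear) detector."""
    score = float(weights @ entry + bias)   # higher = more anomalous
    grad = weights                          # d(score)/d(entry) for a linear scorer
    return entry - eps * np.sign(grad), score
```

The point of the illustration is that a tiny, hard-to-spot change in the entry's features suffices to lower the detector's score, which is exactly the risk the threat model above formalizes.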

User interfaces rarely adapt to specific user preferences or the task at hand. We present a method that allows users to quickly and inexpensively create personalized interfaces from plain paper. Users can cut out shapes and assign control functions to these paper snippets via a simple configuration interface. After configuration, control takes place entirely through the manipulation of the paper shapes, providing the experience of a tailored tangible user interface. The shapes and assignments can be dynamically changed during use. Our system is based on markerless tracking of the user's fingers and the paper shapes on a surface using an RGBD camera mounted above the interaction space, which is the only hardware sensor required. Our approach and system are backed up by two studies in which we determined what shapes and interaction abstractions users prefer, and verified that users can indeed employ our system to build real applications with paper snippet interfaces.

The integration of systems of autonomous agents in Web of Things (WoT) environments is a promising approach to provide and distribute intelligence in world-wide pervasive systems. A central problem then is to enable autonomous agents to discover heterogeneous resources in large-scale, dynamic WoT environments. This is true in particular if an environment relies on open standards and evolves rapidly, requiring agents to adapt their behavior to achieve their goals. To this end, we developed a search engine for the WoT that allows autonomous agents to perform approximate search queries in order to find relevant resources in their environments in (weak) real time. The search engine crawls dynamic WoT environments to discover and index device metadata described with the W3C WoT Thing Description, and exposes a SPARQL endpoint that agents can use for approximate search. To demonstrate the feasibility of our approach, we implemented a prototype application for the maintenance of industrial robots in world-wide manufacturing systems. The prototype demonstrates that our semantic hypermedia search engine enhances the flexibility and agility of autonomous agents in the WoT.

There is some theoretical evidence that deep neural networks with multiple hidden layers have a potential for more efficient representation of multidimensional mappings than shallow networks with a single hidden layer. The question is whether it is possible to exploit this theoretical advantage for finding such representations with the help of numerical training methods. Tests using prototypical problems with a known mean square minimum did not confirm this hypothesis. Minima found with the help of deep networks have always been worse than those found using shallow networks. This does not directly contradict the theoretical findings: it is possible that the superior representational capacity of deep networks is genuine, while finding the mean square minimum of such deep networks is a substantially harder problem than with shallow ones.

The detection of fraud in accounting data is a long-standing challenge in financial statement audits. Nowadays, the majority of applied techniques refer to handcrafted rules derived from known fraud scenarios. While fairly successful, these rules exhibit the drawback that they often fail to generalize beyond known fraud scenarios, and fraudsters gradually find ways to circumvent them. In contrast, more advanced approaches inspired by the recent success of deep learning often lack seamless interpretability of the detected results. To overcome this challenge, we propose the application of adversarial autoencoder networks. We demonstrate that such artificial neural networks are capable of learning a semantically meaningful representation of real-world journal entries. The learned representation provides a holistic view on a given set of journal entries and significantly improves the interpretability of detected accounting anomalies. We show that such a representation, combined with the network's reconstruction error, can be utilized as an unsupervised and highly adaptive anomaly assessment. Experiments on two datasets and initial feedback received from forensic accountants underpin the effectiveness of the approach.
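The final step, turning reconstruction errors into an anomaly assessment, can be sketched independently of the network itself. The helpers below are illustrative only; the paper's assessment also draws on the learned latent representation, which is not modeled here, and the threshold rule is our own simplification:

```python
import statistics

def anomaly_scores(originals, reconstructions):
    """Per-entry reconstruction error: squared L2 distance over the feature vector."""
    return [sum((a - b) ** 2 for a, b in zip(x, x_hat))
            for x, x_hat in zip(originals, reconstructions)]

def flag_anomalies(scores, z=3.0):
    """Flag entries whose error lies more than z standard deviations above the mean."""
    mu = statistics.mean(scores)
    sigma = statistics.pstdev(scores)
    return [i for i, s in enumerate(scores) if s > mu + z * sigma]
```

Entries the autoencoder reconstructs poorly are, by construction, those that do not fit the regular structure of the journal, which is what makes the score usable as an unsupervised anomaly signal.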

Conference or Workshop Item
Complex industrial processes produce a multitude of information during the product/service lifecycle. Those data are often stored, but rarely used in the context of overall process optimization, due to their unstructured format and the inability to integrate them with stored formal knowledge about the domain. This paper proposes a way to mitigate this problem, by extending the standard SPARQL query language to enable the integration of formal knowledge and unstructured data, as well as their joint processing. The paper constitutes an initial definition of the proposed SPARQL extension and demonstrates its applicability in the context of selected examples.

Bernhard Bermeitinger, Sebastian Gassner, Siegfried Handschuh, Gernot Howanitz, Erik Radisch, Malte Rehbein
Journal paper
A large number of digital humanities projects focus on text. This medial limitation may be attributed to the abundance of well-established quantitative methods applicable to text. Cultural Studies, however, analyse cultural expressions in a broad sense, including different non-textual media, physical artefacts, and performative actions. It is, to a certain extent, possible to transcribe these multi-medial phenomena in textual form; however, this transcription is difficult to automate and some information may be lost. Thus, quantitative approaches which directly access media-specific information are a desideratum for Cultural Studies. Visual media constitute a significant part of cultural production. In our paper, we propose Deep Watching as a way to analyze visual media (films, photographs, and video clips) using cutting-edge machine learning and computer vision algorithms. Unlike previous approaches, which were based on generic information such as frame differences (Howanitz 2015) or color distribution (Burghardt/Wolff 2016), or relied on manual annotation altogether (Dunst/Hartel 2016), Deep Watching makes it possible to automatically identify visual information (symbols, objects, persons, body language, visual configuration of the scene) in large image and video corpora. To a certain extent, Tilton and Arnold's Distant-Viewing Toolkit uses a comparable approach (Tilton/Arnold 2018). However, by means of our customized training of state-of-the-art convolutional neural networks for object detection and face recognition we can, in comparison to this toolkit, automatically extract more information about individual frames and their contexts.

We present an approach for recursively splitting and rephrasing complex English sentences into a novel semantic hierarchy of simplified sentences, with each of them presenting a more regular structure that may facilitate a wide variety of artificial intelligence tasks, such as machine translation (MT) or information extraction (IE). Using a set of hand-crafted transformation rules, input sentences are recursively transformed into a two-layered hierarchical representation in the form of core sentences and accompanying contexts that are linked via rhetorical relations. In this way, the semantic relationship of the decomposed constituents is preserved in the output, maintaining its interpretability for downstream applications. Both a thorough manual analysis and automatic evaluation across three datasets from two different domains demonstrate that the proposed syntactic simplification approach outperforms the state of the art in structural text simplification. Moreover, an extrinsic evaluation shows that when applying our framework as a preprocessing step the performance of state-of-the-art Open IE systems can be improved by up to 346% in precision and 52% in recall. To enable reproducible research, all code is provided online.
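To give a minimal, hypothetical flavor of such a hand-crafted transformation rule (the actual framework uses a much richer rule set, recursion, and a layer of rhetorical relations, none of which is reproduced here), consider splitting off a non-restrictive relative clause:

```python
import re

def split_relative_clause(sentence):
    """Split 'X, who/which Y, Z' into a core sentence and a context sentence.

    Toy rule for illustration only: it handles exactly one non-restrictive
    relative clause introduced by 'who' or 'which'.
    """
    m = re.match(r"^(.*?), (who|which) (.*?), (.*)$", sentence)
    if not m:
        return [sentence]          # rule does not apply: leave sentence unchanged
    subject = m.group(1)
    core = f"{subject} {m.group(4)}"       # core sentence without the clause
    context = f"{subject} {m.group(3)}."   # clause promoted to its own sentence
    return [core, context]
```

In the full approach, the link between core and context is additionally labeled with a rhetorical relation (e.g. elaboration), so that the semantic relationship of the decomposed constituents is preserved for downstream applications.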

In this paper, we present a patch-based land use and land cover classification approach using Sentinel-2 satellite images. The Sentinel-2 satellite images are openly and freely accessible, and are provided in the earth observation program Copernicus. We present a novel dataset based on these images that covers 13 spectral bands and comprises ten classes with a total of 27,000 labeled and geo-referenced images. Benchmarks are provided for this novel dataset with its spectral bands using state-of-the-art deep convolutional neural networks. An overall classification accuracy of 98.57% was achieved with the proposed novel dataset. The resulting classification system opens a gate toward a number of earth observation applications. We demonstrate how this classification system can be used for detecting land use and land cover changes, and how it can assist in improving geographical maps. The geo-referenced dataset EuroSAT is made publicly available at https://github.com/phelber/eurosat.

Journal paper
Artificial Intelligence models are becoming increasingly more powerful and accurate, supporting or even replacing humans' decision making. But with increased power and accuracy also comes higher complexity, making it hard for users to understand how the model works and what the reasons behind its predictions are. Humans must explain and justify their decisions, and so do the AI models supporting them in this process, making semantic interpretability an emerging field of study. In this work, we look at interpretability from a broader point of view, going beyond the machine learning scope and covering different AI fields such as distributional semantics and fuzzy logic, among others. We examine and classify the models according to their nature and also based on how they introduce interpretability features, analyzing how each approach affects the final users and pointing to gaps that still need to be addressed to provide more human-centered interpretability solutions.

Conference or Workshop Item
Interaction Affordances are central to the W3C Web of Things (WoT): they allow Consumers to identify and use the capabilities provided by Web Things. Ideally, interaction affordances would allow consumers to arrive-and-operate in any W3C WoT environment: given an entry URI, consumers would be able to achieve their tasks in an autonomous manner by navigating the hypermedia and deciding among the various options presented to them at run time. A central challenge then, which is not typically within the scope of Web engineering, is how to design, program, debug, monitor, and regulate such autonomous consumers of Web Things. The engineering of similar autonomous systems has been studied to a large extent in research on multi-agent systems (MAS), and we believe that tapping into the large body of MAS research holds great promise for unlocking the full potential of the W3C WoT. In this position paper, we motivate and present our vision for autonomous systems in the WoT, and support this vision with a prototype for industrial manufacturing. We then discuss some of the challenges and opportunities raised by bringing autonomy to the WoT.

The World Wide Web has evolved drastically over the past decade -- and the proliferation of Web APIs has turned it into the middleware of choice for most distributed systems. The recent focus on hypermedia-driven APIs together with initiatives such as the Web of Things and Linked Data are now promoting and advancing the development of a new generation of dynamic, open, and long-lived systems on the Web. These systems require agent-based solutions to the point that Web researchers have started to build autonomous systems on their own. It is thus both timely and necessary to investigate and align the latest developments in Web research and multi-agent systems (MAS) research. In this paper, we analyze in hindsight the factors that hindered the widespread acceptance of early Web-based MAS. We argue that the answer lies equally in a lack of practical use cases as well as the premature development and alignment of Web and agent technologies. We then present our vision for a new generation of autonomous systems on the Web, which we call hypermedia MAS, together with the research opportunities and challenges they bring.

Hypermedia is increasingly used in Web service design, particularly in Linked Data and Web of Things systems where the use of static service contracts is not practical. This evolution raises new challenges: to discover, consume, and integrate hypermedia services at runtime, clients have to become increasingly autonomous in pursuit of their design goals. Such autonomous systems have been studied to a large extent in research on multi-agent systems (MAS). To consolidate the evolution of hypermedia services, it is now necessary to have comprehensive discussions on integrating hypermedia systems and MAS, henceforth hypermedia MAS. The goal of this initiative is to establish a common forum for both the Web and MAS communities, and to create social, conceptual and technological bridges across these fields. We invite researchers and practitioners to imagine, design, build, evaluate, and share their thoughts and visions on what the future of the Web for both people and autonomous agents will be.

Conference or Workshop Item
In MAS programming and engineering, the environment and the organisation can be exploited as first-class design and programming abstractions besides the agent one. A main example of a platform implementing this view is JaCaMo, which allows the programming of a MAS in terms of an organisation of cognitive agents sharing a common artifact-based environment. However, MAS models and platforms in general do not provide a satisfactory approach for MAS developers to uniformly deal with distribution at multiple dimensions — agent, environment, and organisation. Typically, environments are either centralised in a single node, or composed of parts that run on different nodes but with poor support at the programming and execution levels to deal with that. In this paper, we tackle this problem by proposing a model for engineering world-wide distributed environments and organisations for MAS. The approach integrates the A&A (Agents and Artifacts) conceptual model with a web/resource-oriented view of distributed systems as proposed by the REST architectural style. To evaluate the approach, an extension of the JaCaMo open-source platform has been developed implementing the proposed model.

Conference or Workshop Item
With head-mounted displays (HMDs), users can access and interact with a broad range of applications and data. Although some of this information is privacy-sensitive or even confidential, no intuitive, unobtrusive and secure authentication technique is available yet for HMDs. We present LookUnlock, an authentication technique for HMDs that uses passwords that are composed of spatial and virtual targets. Through a proof-of-concept implementation and security evaluation, we demonstrate that this technique can be efficiently used by people and is resistant to shoulder-surfing attacks.

The Internet is abundant with opinions, sentiments, and reflections of society about products, brands, and institutions, hidden under tons of irrelevant and unstructured data. This work addresses the contextual augmentation of events in social media streams in order to fully leverage the knowledge present in social multimedia, making three major contributions. First, a global study of the Twitter Firehose is presented. To our knowledge, this is the first study of this kind and scope, providing valuable insights about the variability of tweets with respect to multimedia content. The results for more than one billion tweets show the great potential of the stream for many application domains. As a second key contribution, a fully automated system was developed for the augmentation of social multimedia with contextual information on a large scale. The system trawls multimedia content from Twitter and performs a multi-modal analysis on it. The analysis considers temporal, visual, textual, geographical, and user-specific dimensions. Third, we present a near-duplicate detection approach based on deep learning to detect the most frequent images being propagated through Twitter during events.

The increased availability of high-resolution satellite imagery makes it possible to sense very detailed structures on the surface of our planet and opens up new directions in the analysis of remotely sensed imagery. While deep neural networks have achieved significant advances in semantic segmentation of high-resolution images, most of the existing approaches tend to produce predictions with poor boundaries. In this paper, we address the problem of preserving semantic segmentation boundaries in high-resolution satellite imagery by introducing a novel multi-task loss. The loss leverages multiple output representations of the segmentation mask and biases the network to focus more on pixels near boundaries. We evaluate our approach on the large-scale Inria Aerial Image Labeling Dataset. Our results outperform existing methods with the same architecture by about 3% on the Intersection over Union (IoU) metric without additional post-processing steps. Source code and all models are available at https://github.com/bbischke/MultiTaskBuildingSegmentation.
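To make the boundary-biasing idea concrete, here is a hypothetical miniature: a mask of boundary pixels is derived from the ground truth, and the per-pixel loss is up-weighted on those pixels. This is only a conceptual sketch in pure Python, not the paper's multi-task loss, which operates on multiple output representations of the mask:

```python
import math

def boundary_mask(mask):
    """1 where a pixel differs from any 4-neighbour, i.e. lies on a class boundary."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] != mask[y][x]:
                    out[y][x] = 1
    return out

def boundary_weighted_loss(pred, target, lam=2.0):
    """Mean binary cross-entropy, up-weighted by (1 + lam) on boundary pixels."""
    b = boundary_mask(target)
    total, n = 0.0, 0
    for y in range(len(target)):
        for x in range(len(target[0])):
            p = min(max(pred[y][x], 1e-7), 1 - 1e-7)  # clamp for numerical stability
            t = target[y][x]
            ce = -(t * math.log(p) + (1 - t) * math.log(1 - p))
            total += (1 + lam * b[y][x]) * ce
            n += 1
    return total / n
```

Because boundary pixels contribute more to the loss, gradient descent is pushed to resolve exactly the regions where segmentation predictions tend to be poorest.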

Today's customers are characterized by individual requirements that lead the manufacturing industry to increased product variety and volume reduction. Manufacturing systems, and more specifically assembly systems (ASs), should allow quick adaptation of manufacturing assets so as to respond to the evolving market requirements that lead to mass customization. Meanwhile, the manufacturing era is changing due to the fourth industrial revolution, i.e., Industry 4.0, which will change the traditional manufacturing environment to an IoT-based one. In this context, this paper introduces the concept of the cyber-physical microservice in the manufacturing and assembly systems domain and presents the Cyber-Physical microservice and IoT-based (CPuS-IoT) framework. The CPuS-IoT framework exploits the benefits of the microservice architectural style and IoT technologies, but also utilizes the huge existing investment in this domain based on traditional technologies, to support the life cycle of evolvable ASs in the age of Industry 4.0. It provides a solid basis to capture domain knowledge that is used by a model-driven engineering (MDE) approach to semi-automate the development, evolution, and operation of ASs, as well as to establish a common vocabulary for assembly system experts and IoT ones. The CPuS-IoT approach and framework effectively combine MDE with IoT and the microservice architectural paradigm. A case study for the assembly of an everyday-life product is adopted to demonstrate the approach even to non-experts of this domain.

Markel Iglesias-Urkia, Diego Casado-Mansilla, Simon Mayer, Josu Bilbao, Aitor Urbieta
Journal paper
Electrical substations are crucial elements of Smart Grids, where they are mainly responsible for voltage transformations. However, due to the integration of distributed energy resources in the grid, substations now have to provide additional grid management capabilities, which in turn require supervision and automation solutions for large low-voltage grids. A recurring challenge in such deployments is siloed systems that result from non-interoperable communication protocols across substations: although most substations' communication is based on the International Electrotechnical Commission (IEC) 61850 standard, deployed legacy protocols lag behind modern communication technologies in terms of performance, hindering the full transition to lightweight protocols. This paper demonstrates that IEC 61850 can be fully mapped to the Constrained Application Protocol (CoAP) in combination with the Concise Binary Object Representation (CBOR) format while improving system performance compared to existing alternatives (e.g. WS-SOAP and HTTP). On average, CoAP+CBOR needs 44% and 18% of the message size and 71% and 85% of the time compared to systems based on HTTP and WS-* Web Services, respectively; this is especially relevant for resource-constrained devices and networks in electrical grids. In addition, CoAP is based on the Representational State Transfer (REST) architectural style, which supports system integration and interoperability through uniform identification and interaction. This approach fosters the standard-compliant integration of legacy platforms with modern substations as well as current IoT systems in neighboring domains such as building management and infrastructure automation systems.

Simon Donig, Bernhard Bermeitinger, Maria Christoforaki, Siegfried Handschuh
Journal paper
In recent years, the application of computer vision methods in digital art history and object research has gained considerably in importance. In this context, closing the semantic gap poses a central challenge for (semi-)automated algorithmic methods. Here we propose a multimodal approach, in which we see a promising solution to this problem and which we are developing in the context of the Neoclassica project.
