
Publications Overview

While artificial intelligence is increasingly and lastingly changing the way various professions work, this transformation is still in its early stages in the field of auditing. The following article outlines potential applications and challenges of machine learning (ML), a subfield of artificial intelligence, in the context of financial statement audits.

With rapid advances in machine learning (ML), more of this technology is being deployed into the real world, interacting with us and our environment. One of the most widely applied applications of ML is facial recognition, which runs on millions of devices. While useful for some people, others perceive it as a threat when used by public authorities. This discrepancy and the lack of policy increase the uncertainty in the ML community about the future direction of facial recognition research and development. In this paper we present results from a cross-national survey about public acceptance, privacy, and discrimination concerns regarding the use of facial recognition technology (FRT) in public. The study provides insights into opinions towards FRT in China, Germany, the United Kingdom (UK), and the United States (US), which can serve as input for policy makers and legal regulators.

This study examines how firms in the autonomous driving industry that pursue artificial intelligence-based innovations attempt to appropriate returns from these innovations. It contributes to the literature on value appropriation from innovation by investigating the extent to which firms can and do keep the key components of AI systems (data set, training approach, and model) private versus publishing them. Using a qualitative research design, we establish that regulatory, technical, and enforcement aspects of these components prompt firms to either protect or publish them.

The audit of financial statements is designed to obtain reasonable assurance that an issued statement is free from material misstatement ('true and fair presentation'). International audit standards require the assessment of a statement's underlying accounting-relevant transactions, referred to as 'journal entries', to detect potential misstatements. To efficiently audit the increasing quantities of such journal entries, auditors regularly conduct 'audit sampling', i.e. a sample-based assessment of a subset of these journal entries. However, audit sampling is often conducted early in the overall audit process, when the auditor might not be aware of all generative factors, and their dynamics, that resulted in the journal entries in scope of the audit. To overcome this challenge, we propose the use of Vector Quantised-Variational Autoencoder (VQ-VAE) neural networks to learn a representation of journal entries able to provide a comprehensive 'audit sampling' to the auditor. We demonstrate, based on two real-world city payment datasets, that such artificial neural networks are capable of learning a quantised representation of accounting data. We show that the learned quantisation (i) uncovers the latent factors of variation and (ii) can be utilised as a highly representative audit sample in financial statement audits.
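
A minimal sketch of a vector-quantised autoencoder over encoded journal entries, to make the quantisation idea concrete (the input encoding, layer sizes, codebook size, and the per-code sampling rule are illustrative assumptions, not the paper's implementation):

```python
# Illustrative VQ-VAE for one-hot encoded journal entries (a sketch, not the
# paper's implementation); layer and codebook sizes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VQVAE(nn.Module):
    def __init__(self, input_dim, latent_dim=32, num_codes=64, beta=0.25):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, 128), nn.ReLU(),
                                     nn.Linear(128, latent_dim))
        self.codebook = nn.Embedding(num_codes, latent_dim)
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                     nn.Linear(128, input_dim))
        self.beta = beta

    def forward(self, x):
        z_e = self.encoder(x)                                    # continuous code
        idx = torch.cdist(z_e, self.codebook.weight).argmin(dim=1)
        z_q = self.codebook(idx)                                 # nearest codebook vector
        z_q_st = z_e + (z_q - z_e).detach()                      # straight-through estimator
        x_hat = self.decoder(z_q_st)
        loss = (F.mse_loss(x_hat, x)                             # reconstruction
                + F.mse_loss(z_q, z_e.detach())                  # codebook update
                + self.beta * F.mse_loss(z_e, z_q.detach()))     # commitment
        return x_hat, idx, loss
```

After training, every journal entry maps to one of the discrete codebook indices, so a representative audit sample could, for instance, be assembled by drawing a few entries per occupied code.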

Artificial intelligence offers companies new opportunities to innovate processes, products, services, and business models, and to transform existing ones. Professionally managing artificial intelligence therefore becomes a central task for companies that want to realise the new value propositions with productive systems. The article presents the St. Gallen Management Model for AI (SGMM-KI) and outlines seven fields of action for the operational use of AI: (1) management of artificial intelligence, (2) organisation of operations, (3) legal design, (4) regulation and compliance, (5) lifecycle management, (6) management of the technology infrastructure, and (7) cybersecurity. The article provides guidance on concrete first steps and is primarily aimed at executive board members, IT and innovation managers, and project leaders who want to realise the new value propositions of AI in operational practice.

Nowadays, organizations collect vast quantities of accounting-relevant transactions, referred to as 'journal entries', in 'Enterprise Resource Planning' (ERP) systems. The aggregation of those entries ultimately defines an organization's financial statement. To detect potential misstatements and fraud, international audit standards require auditors to directly assess journal entries using 'Computer Assisted Audit Techniques' (CAATs). At the same time, discoveries in deep learning research revealed that machine learning models are vulnerable to 'adversarial attacks'. It also became evident that such attack techniques can be misused to generate 'Deepfakes' designed to directly attack human perception by creating convincingly altered media content. Research on such developments and their potential impact on the finance and accounting domain is still in its early stages. We believe it is of vital relevance to investigate how such techniques could be maliciously misused in this sphere. In this work, we show an adversarial attack against CAATs using deep neural networks. We first introduce a real-world 'threat model' designed to camouflage accounting anomalies such as fraudulent journal entries. Second, we show that adversarial autoencoder neural networks are capable of learning a human-interpretable model of journal entries that disentangles the entries' latent generative factors. Finally, we demonstrate how such a model can be maliciously misused by a perpetrator to generate robust 'adversarial' journal entries that mislead CAATs.
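
The paper itself builds on adversarial autoencoders; purely as an illustration of the underlying attack principle, the sketch below shows a generic gradient-based evasion step against a differentiable anomaly detector (the detector interface, step size, and iteration count are assumptions):

```python
# Generic gradient-based evasion sketch (illustration of the attack principle only;
# not the paper's adversarial-autoencoder approach). `detector` is assumed to be any
# differentiable model returning a scalar anomaly score for an encoded entry.
import torch

def evade(detector, entry, step=0.01, iters=50):
    """Perturb a numerically encoded journal entry to lower its anomaly score."""
    x = entry.clone().detach().requires_grad_(True)
    for _ in range(iters):
        score = detector(x)            # higher score = more suspicious
        score.backward()
        with torch.no_grad():
            x -= step * x.grad.sign()  # move against the gradient of the score
        x.grad.zero_()
    return x.detach()
```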

The detection of fraud in accounting data is a long-standing challenge in financial statement audits. Nowadays, the majority of applied techniques rely on handcrafted rules derived from known fraud scenarios. While fairly successful, these rules exhibit the drawback that they often fail to generalize beyond known fraud scenarios, and fraudsters gradually find ways to circumvent them. In contrast, more advanced approaches inspired by the recent success of deep learning often lack seamless interpretability of the detected results. To overcome this challenge, we propose the application of adversarial autoencoder networks. We demonstrate that such artificial neural networks are capable of learning a semantically meaningful representation of real-world journal entries. The learned representation provides a holistic view of a given set of journal entries and significantly improves the interpretability of detected accounting anomalies. We show that such a representation, combined with the network's reconstruction error, can be utilized as an unsupervised and highly adaptive anomaly assessment. Experiments on two datasets and initial feedback received from forensic accountants underpinned the effectiveness of the approach.
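
A minimal sketch of an adversarial autoencoder's three training signals, as a rough illustration of the architecture named above (layer sizes, the two-dimensional latent space, and the Gaussian prior are assumptions, not the paper's configuration):

```python
# Illustrative adversarial autoencoder components (a sketch only; sizes and prior
# are assumptions). The encoder/decoder reconstruct entries while a discriminator
# pushes the latent codes towards a chosen prior distribution.
import torch
import torch.nn as nn

input_dim, latent_dim = 400, 2        # e.g. a 2-D latent space for visual inspection

encoder = nn.Sequential(nn.Linear(input_dim, 128), nn.ReLU(), nn.Linear(128, latent_dim))
decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, input_dim))
discriminator = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                              nn.Linear(64, 1), nn.Sigmoid())
bce, mse = nn.BCELoss(), nn.MSELoss()

def training_losses(x):
    z = encoder(x)
    recon_loss = mse(decoder(z), x)                            # reconstruction phase
    z_prior = torch.randn(x.size(0), latent_dim)               # samples from the prior
    d_loss = (bce(discriminator(z_prior), torch.ones(x.size(0), 1))
              + bce(discriminator(z.detach()), torch.zeros(x.size(0), 1)))
    g_loss = bce(discriminator(z), torch.ones(x.size(0), 1))   # encoder fools discriminator
    return recon_loss, d_loss, g_loss                          # optimised with separate optimisers
```

The low-dimensional codes from the encoder give the holistic view of the entries, while the decoder's reconstruction error supplies the per-entry anomaly signal.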

In this paper, we present a patch-based land use and land cover classification approach using Sentinel-2 satellite images. The Sentinel-2 satellite images are openly and freely accessible, provided within the Earth observation program Copernicus. We present a novel dataset based on these images that covers 13 spectral bands and comprises ten classes with a total of 27,000 labeled and geo-referenced images. Benchmarks are provided for this novel dataset with its spectral bands using state-of-the-art deep convolutional neural networks. An overall classification accuracy of 98.57% was achieved with the proposed novel dataset. The resulting classification system opens a gate toward a number of Earth observation applications. We demonstrate how this classification system can be used for detecting land use and land cover changes, and how it can assist in improving geographical maps. The geo-referenced dataset EuroSAT is made publicly available at https://github.com/phelber/eurosat.
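
A sketch of how a standard ImageNet-pretrained CNN could be fine-tuned on the EuroSAT RGB images (the folder layout, image size, ResNet-50 backbone, and hyperparameters below are assumptions for illustration, not the paper's exact benchmark setup):

```python
# Fine-tuning an ImageNet-pretrained ResNet-50 on EuroSAT RGB images (sketch).
# Folder path and hyperparameters are assumptions; dataset must be downloaded first.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((64, 64)),          # EuroSAT patches are 64x64 pixels
    transforms.ToTensor(),
])
dataset = datasets.ImageFolder("EuroSAT/2750", transform=transform)  # ten class folders
loader = torch.utils.data.DataLoader(dataset, batch_size=64, shuffle=True)

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, len(dataset.classes))     # new 10-way head

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:             # one epoch of fine-tuning
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```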

The Internet is abundant with opinions, sentiments, and reflections of society about products, brands, and institutions, hidden under tons of irrelevant and unstructured data. This work addresses the contextual augmentation of events in social media streams in order to fully leverage the knowledge present in social multimedia, making three major contributions. First, a global study of the Twitter Firehose is presented. To our knowledge, this is the first study of this kind and scope, providing valuable insights about the variability of tweets with respect to multimedia content. The results for more than one billion tweets show the great potential of the stream for many application domains. As a second key contribution, a fully automated system was developed for the augmentation of social multimedia with contextual information on a large scale. The system trawls multimedia content from Twitter and performs a multi-modal analysis on it. The analysis considers temporal, visual, textual, geographical, and user-specific dimensions. Third, we present a near-duplicate detection approach based on deep learning to detect the most frequent images being propagated through Twitter during events.
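
One way near-duplicate detection with deep features can work, sketched here as an illustration (the backbone, preprocessing, and similarity threshold are assumptions, not necessarily the system's actual configuration):

```python
# Illustrative near-duplicate check via deep feature embeddings and cosine similarity.
# Backbone, preprocessing, and threshold are assumptions.
import torch
from PIL import Image
from torchvision import models, transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
backbone.fc = torch.nn.Identity()                 # keep the 2048-d feature vector
backbone.eval()

@torch.no_grad()
def embed(path):
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    return torch.nn.functional.normalize(backbone(x), dim=1)

def near_duplicate(path_a, path_b, threshold=0.95):
    # cosine similarity of L2-normalised embeddings
    return float(embed(path_a) @ embed(path_b).T) >= threshold
```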

The increased availability of high-resolution satellite imagery makes it possible to sense very detailed structures on the surface of our planet and opens up new directions in the analysis of remotely sensed imagery. While deep neural networks have achieved significant advances in semantic segmentation of high-resolution images, most existing approaches tend to produce predictions with poor boundaries. In this paper, we address the problem of preserving semantic segmentation boundaries in high-resolution satellite imagery by introducing a novel multi-task loss. The loss leverages multiple output representations of the segmentation mask and biases the network to focus more on pixels near boundaries. We evaluate our approach on the large-scale Inria Aerial Image Labeling Dataset. Our results outperform existing methods with the same architecture by about 3% on the Intersection over Union (IoU) metric without additional post-processing steps. Source code and all models are available under https://github.com/bbischke/MultiTaskBuildingSegmentation.
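
A rough sketch of what such a multi-task loss can look like, combining the segmentation objective with an auxiliary boundary-related output (the distance-transform head and the fixed weighting below are illustrative assumptions; the paper's actual output representations and weighting may differ):

```python
# Sketch of a boundary-aware multi-task loss: per-pixel classification plus an
# auxiliary distance-to-boundary regression head. Heads and weighting are assumptions.
import torch.nn as nn

seg_criterion = nn.CrossEntropyLoss()   # (N, C, H, W) logits vs (N, H, W) class indices
dist_criterion = nn.MSELoss()           # (N, 1, H, W) predicted vs target distance map

def multitask_loss(seg_logits, seg_target, dist_pred, dist_target, weight=1.0):
    """dist_target: distance transform of the ground-truth mask (truncated near boundaries)."""
    return (seg_criterion(seg_logits, seg_target)
            + weight * dist_criterion(dist_pred, dist_target))
```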

Detecting erroneous and fraudulent journal entries in large volumes of financial accounting data is a complex challenge for risk-oriented financial statement audits and forensic fraud examinations. The majority of analysis techniques applied today are rule-based and thus hypothesis-driven tests, so-called "red flags", e.g. the analysis of short-term changes to vendor bank details or postings outside regular working hours. Practical experience shows, however, that internal perpetrators in particular often manage to conceal damaging postings through adaptive behaviour and thus remain undetected. Novel methods of artificial intelligence, such as deep machine learning ("deep learning"), are a helpful complement to classical test procedures. This holds especially against the background of the advancing digitisation of corporate and business processes and the resulting changes in the requirements placed on audit procedures.
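
To make the "red flag" style of test concrete, a minimal sketch of one such rule, postings outside regular working hours (the column name and the 06:00–20:00 weekday window are assumptions for illustration):

```python
# Sketch of a classical "red flag" test: postings outside regular working hours.
# Column name and the working-hours window are illustrative assumptions.
import pandas as pd

def postings_outside_working_hours(journal: pd.DataFrame) -> pd.DataFrame:
    ts = pd.to_datetime(journal["posting_timestamp"])
    outside = (ts.dt.hour < 6) | (ts.dt.hour >= 20) | (ts.dt.weekday >= 5)
    return journal[outside]                      # entries flagged for follow-up
```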

Recently, sound recognition has been used to identify sounds such as the sound of a car or a river. However, sounds have nuances that may be better described by adjective-noun pairs such as “slow car” and verb-noun pairs such as “flying insects,” which are underexplored. Therefore, this work investigates the relationship between audio content and both adjective-noun pairs and verb-noun pairs. Due to the lack of datasets with these kinds of annotations, we collected and processed the AudioPairBank corpus, consisting of a combined total of 1,123 pairs and over 33,000 audio files. In this paper, we include previously unavailable documentation of the challenges and implications of collecting audio recordings with these types of labels. We also show the degree of correlation between the audio content and the labels through classification experiments, which yielded 70% accuracy. The results and study in this paper encourage further exploration of the nuances in sounds and are meant to complement similar research performed on images and text in multimedia analysis.
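
A small sketch of the kind of baseline experiment that relates audio content to pair labels (the MFCC features, SVM classifier, and helper names are assumptions, not the paper's setup):

```python
# Illustrative baseline: MFCC features plus an SVM to predict pair labels.
# Feature choice, classifier, and function names are assumptions.
import librosa
import numpy as np
from sklearn.svm import SVC

def mfcc_features(path):
    y, sr = librosa.load(path, sr=22050)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)              # one fixed-length vector per clip

def train_pair_classifier(files, labels):
    """files: audio file paths; labels: adjective-noun or verb-noun pair labels."""
    X = np.stack([mfcc_features(f) for f in files])
    return SVC(kernel="rbf").fit(X, labels)
```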

Palacio Sebastian, Folz Joachim, Raue Federico, Borth Damian, Dengel Andreas
Conference or Workshop Item
We propose a novel way to measure and understand convolutional neural networks by quantifying the amount of input signal they let in. To do this, an autoencoder (AE) was fine-tuned on gradients from a pre-trained classifier with fixed parameters. We compared the reconstructed samples from AEs that were fine-tuned on a set of image classifiers (AlexNet, VGG16, ResNet-50, and Inception v3) and found substantial differences. The AE learns which aspects of the input space to preserve and which ones to ignore, based on the information encoded in the backpropagated gradients. Measuring the changes in accuracy when the signal of one classifier is used by a second one, a relation of total order emerges. This order depends directly on each classifier's input signal, but it does not correlate with classification accuracy or network size. Further evidence of this phenomenon is provided by measuring the normalized mutual information between original images and auto-encoded reconstructions from different fine-tuned AEs. These findings break new ground in the area of neural network understanding, opening a new way to reason about, debug, and interpret their results. We present four concrete examples in the literature where observations can now be explained in terms of the input signal that a model uses.
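
A sketch of the fine-tuning setup described above, in which an autoencoder is trained only through the gradients of a frozen, pre-trained classifier (the tiny placeholder autoencoder, the choice of VGG16, and the optimiser settings are assumptions):

```python
# Fine-tuning an autoencoder through a frozen classifier's gradients (sketch).
# The placeholder AE, VGG16 backbone, and hyperparameters are assumptions.
import torch
import torch.nn as nn
from torchvision import models

classifier = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
classifier.eval()
for p in classifier.parameters():
    p.requires_grad = False                      # the classifier stays fixed

autoencoder = nn.Sequential(                     # placeholder encoder-decoder
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1),
)
optimizer = torch.optim.Adam(autoencoder.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def finetune_step(images, labels):
    """images: ImageNet-preprocessed batch; labels: class indices."""
    optimizer.zero_grad()
    reconstruction = autoencoder(images)
    loss = criterion(classifier(reconstruction), labels)  # gradients only reach the AE
    loss.backward()
    optimizer.step()
    return loss.item()
```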

Learning to detect fraud in large-scale accounting data is one of the long-standing challenges in financial statement audits and fraud investigations. Nowadays, the majority of applied techniques rely on handcrafted rules derived from known fraud scenarios. While fairly successful, these rules exhibit the drawback that they often fail to generalize beyond known fraud scenarios, and fraudsters gradually find ways to circumvent them. To overcome this disadvantage, and inspired by the recent success of deep learning, we propose the application of deep autoencoder neural networks to detect anomalous journal entries. We demonstrate that the trained network's reconstruction error for a journal entry, regularized by the entry's individual attribute probabilities, can be interpreted as a highly adaptive anomaly assessment. Experiments on two real-world datasets of journal entries show the effectiveness of the approach, resulting in high f1-scores of 32.93 (dataset A) and 16.95 (dataset B) and fewer false positive alerts compared to state-of-the-art baseline methods. Initial feedback received from chartered accountants and fraud examiners underpinned the quality of the approach in capturing highly relevant accounting anomalies.
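
A minimal sketch of the kind of anomaly score described above, combining a journal entry's reconstruction error with how rare its individual attribute values are (the multiplicative combination below is an assumption, not the paper's formula):

```python
# Sketch of an anomaly score combining reconstruction error with attribute rarity.
# The exact combination is an illustrative assumption.
import numpy as np

def anomaly_score(x, x_hat, attribute_probs):
    """
    x, x_hat: original and reconstructed one-hot encoded entry (1-D arrays).
    attribute_probs: empirical probabilities of the entry's attribute values.
    """
    reconstruction_error = np.mean((x - x_hat) ** 2)
    rarity = np.mean(-np.log(attribute_probs))   # rare attribute values raise the score
    return reconstruction_error * rarity
```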

Thomas Bieger, Paul Achleitner, Urs Landolf, Luca Serratore, Yannik Breitenstein, Bernadette Dilger, Marc Angélil, Sou Fujimoto, Andreas Böhm, Miriam Meckel, Damian Borth, Urs Gasser, Melinda Florina Lohmann, Heike Bruch, Sandra Berenbold, Oliver Gassmann, Barbara Josef, Andrea Back, Falk Uebernickel, Marc Kohler, Jennifer Hehn, Jan Marco Leimeister, Matthias Söllner, Walter Brenner, Thomas Gutzwiller, Markus Frank, Wolfgang Schürer, Henry Kissinger
Book
What will the HSG Learning Center look like, and what will it offer? How will the digital transformation change our lives and our learning? How can we shape it? The book shows the potential of the HSG Learning Center and paints a panorama of research and teaching activities in the context of digital change.

Bischke Benjamin, Helber Patrick, Schulze Christian, Srinivasan Venkat, Dengel Andreas, Borth Damian
Conference or Workshop Item
This paper provides a description of the MediaEval 2017 Multimedia Satellite Task. The primary goal of the task is to extract and fuse content of events which are present in satellite imagery and social media. Establishing a link from satellite imagery to social multimedia can yield a comprehensive event representation, which is vital for numerous applications. Focusing on natural disaster events this year, the main objective of the task is to leverage the combined event representation within the context of emergency response and environmental monitoring. In particular, the task focuses on flooding events and consists of two subtasks. The first, Disaster Image Retrieval from Social Media, requires participants to retrieve images from social media which show direct evidence of a flooding event. The second, Flood Detection in Satellite Images, aims to extract regions in satellite images which are affected by a flooding event. Extracted content from both tasks can be fused by means of the geographic information. The task seeks to go beyond state-of-the-art flooding map generation towards recent deep learning approaches while augmenting the satellite information with rich social multimedia at the same time.
