
Publications Overview

Andrei Ciortea, Simon Mayer, Simon Bienz, Fabien Gandon, Olivier Corby
Journal paper
Recent W3C recommendations for the Web of Things (WoT) and the Social Web are turning hypermedia into a homogeneous information fabric that interconnects heterogeneous resources: devices, people, information resources, abstract concepts, etc. The integration of multi-agent systems with such hypermedia environments now provides a means to distribute autonomous behavior in worldwide pervasive systems. A central problem then is to enable autonomous agents to discover heterogeneous resources in worldwide and dynamic hypermedia environments. This is a problem in particular in WoT environments that rely on open standards and evolve rapidly—thus requiring agents to adapt their behavior at run time in pursuit of their design objectives. To this end, we developed a hypermedia search engine for the WoT that allows autonomous agents to perform approximate search queries in order to retrieve relevant resources in their environment in (weak) real time. The search engine crawls dynamic WoT environments to discover and index device metadata described with the W3C WoT Thing Description, and exposes a SPARQL endpoint that agents can use for approximate search. To demonstrate the feasibility of our approach, we implemented a prototype application for the maintenance of industrial robots in worldwide manufacturing systems. The prototype demonstrates that our semantic hypermedia search engine enhances the flexibility and agility of autonomous agents in a social and ubiquitous Web.
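The abstract above describes agents performing approximate search over indexed W3C WoT Thing Descriptions via a SPARQL endpoint. As an illustrative sketch only (the endpoint URL, the use of `td:title`, and the CONTAINS-based filter as a stand-in for the engine's actual approximate matching are all assumptions), such a query might be built and prepared for dispatch as follows:

```python
# Hypothetical sketch: build an approximate-search SPARQL query over
# indexed W3C WoT Thing Descriptions and prepare it for an HTTP GET.
# The endpoint URL and the CONTAINS-based matching are illustrative
# assumptions, not the search engine's actual query interface.
import urllib.parse

TD = "https://www.w3.org/2019/wot/td#"  # W3C WoT Thing Description ontology

def build_td_query(keyword: str, limit: int = 10) -> str:
    """Return a SPARQL query for Things whose title contains `keyword`
    (a simple stand-in for approximate matching)."""
    return f"""PREFIX td: <{TD}>
SELECT ?thing ?title WHERE {{
  ?thing a td:Thing ;
         td:title ?title .
  FILTER(CONTAINS(LCASE(?title), LCASE("{keyword}")))
}} LIMIT {limit}"""

def endpoint_url(base: str, keyword: str) -> str:
    """URL-encode the query for a GET request to a SPARQL endpoint."""
    return base + "?" + urllib.parse.urlencode({"query": build_td_query(keyword)})

# e.g., an agent searching for robot-related Things:
url = endpoint_url("http://search.example.org/sparql", "robot")
```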

As smart products move between jurisdictions, their program code becomes subject to various and sometimes incompatible legal environments. Manufacturers are therefore required to create customized product variants for specific markets, which induces variance management overhead and undermines economies of scale. In our article, we investigate how the legal environment of a smart product interacts with the programming of that product. Specifically, we are interested in how the General Data Protection Regulation (GDPR) principles can be mapped to legally relevant aspects of toy robots. These are of particular interest as they contain different kinds of privacy-sensitive sensors such as microphones and cameras, are continuously processing (personal) data, can easily be moved from one jurisdiction to another, and affect individuals, including vulnerable ones such as children, in their homes. The core goal of this article is to develop a methodology to map the GDPR’s principles to the program code of a GoPiGo3 toy robot. We describe this methodology and demonstrate a concrete mapping to GoPiGo3 (as a prototype). In this prototype, the robot’s functionality has been extended to include external face recognition services, as well as external data processing for direct advertising purposes, in order to situate the prototype within the research domain of privacy and especially privacy by design. In this article, we describe how the mapping can be done in principle and plan to take first steps towards automating the mapping process. The main research questions we analyze are: How can we describe data protection law’s core principles in a way that system and software engineers can implement such norms into device firmware? What difficulties arise and what implementation decisions have to be taken in order to enable encoding data protection principles into systems?
What are the benefits and limits of our methodology for mapping the data protection principles into a device’s program code, specifically regarding the automation potential of this process? To answer our research questions, we start by sketching the data flow emanating from GoPiGo3 and the fictional, yet realistic, additional services within our application scenario. We then investigate upon what “lawful grounds” the data processing of the device takes place (Art. 5(1)(a) GDPR) to determine what consent must be given and by whom (depending on the legislation of EU member states on children’s consent), and which other legal grounds can justify the processing (Art. 6 GDPR). The GoPiGo3 provides information and obtains consent from the user in accordance with Art. 13 GDPR, given the robot and user context (e.g., location and applicable jurisdiction, user age, etc.). We dive into (legally) contested terminologies, such as the term ‘fairness’, and determine their mapping into GoPiGo3’s program code. We then determine which data items are collected by the software and for which purposes these data are actually processed, in order to establish which data items are required and which ones are not. On this basis, we discuss how the principles of purpose limitation, data minimization, and storage restriction should be implemented in device code.

Conference or Workshop Item
We demonstrate how autonomous goal-directed agents can exploit hypermedia to acquire and execute new behaviors at run time. In addition to behaviors programmed into the agents, in our system agents can discover and reuse behaviors extracted from machine-readable resource manuals. Such manuals can be published by developers, synthesized by agents through automated planning, or even specified by human users at run time. Agents can then discover and use physical and virtual resources in flexible ways, which allows them to better cope with the rapid evolution of open and dynamic Web environments.

Hasan S. Bank, Naveen K. Singa, Clifford Chou, Xiaofan Wu, Guiseppe F. Rigano, Simon Mayer
Digital Resource
A computer-based system and method is disclosed for spatial programming of a robotic device. A mixed reality tool may select an object related to one or more interactive tasks for the robotic device. A spatial location of the object may be located including Cartesian coordinates and orientation coordinates of the object. An application program may be executed to operate the robotic device using the spatial location. Based on initial parameters, execution of the one or more tasks by the robotic device on the object related to a skill set may be simulated in a mixed reality environment.

User interfaces rarely adapt to the specific user preferences or the task at hand. We present a method that allows users to quickly and inexpensively create personalized interfaces from plain paper. Users can cut out shapes and assign control functions to these paper snippets via a simple configuration interface. After configuration, control takes place entirely through the manipulation of the paper shapes, providing the experience of a tailored tangible user interface. The shapes and assignments can be dynamically changed during use. Our system is based on markerless tracking of the user’s fingers and the paper shapes on a surface using an RGBD camera mounted above the interaction space, which is the only hardware sensor required. Our approach and system are backed up by two studies where we determined what shapes and interaction abstractions users prefer, and verified that users can indeed employ our system to build real applications with paper snippet interfaces.

The integration of systems of autonomous agents in Web of Things (WoT) environments is a promising approach to provide and distribute intelligence in world-wide pervasive systems. A central problem then is to enable autonomous agents to discover heterogeneous resources in large-scale, dynamic WoT environments. This is true in particular if an environment relies on open standards and evolves rapidly, requiring agents to adapt their behavior to achieve their goals. To this end, we developed a search engine for the WoT that allows autonomous agents to perform approximate search queries in order to find relevant resources in their environments in (weak) real time. The search engine crawls dynamic WoT environments to discover and index device metadata described with the W3C WoT Thing Description, and exposes a SPARQL endpoint that agents can use for approximate search. To demonstrate the feasibility of our approach, we implemented a prototype application for the maintenance of industrial robots in world-wide manufacturing systems. The prototype demonstrates that our semantic hypermedia search engine enhances the flexibility and agility of autonomous agents in the WoT.

Conference or Workshop Item
Complex industrial processes produce a multitude of information during the product/service lifecycle. Those data are often stored, but rarely used in the context of overall process optimization, due to their unstructured format and the inability to integrate them with stored formal knowledge about the domain. This paper proposes a way to mitigate this problem, by extending the standard SPARQL query language to enable the integration of formal knowledge and unstructured data, as well as their joint processing. The paper constitutes an initial definition of the proposed SPARQL extension and demonstrates its applicability in the context of selected examples.

Conference or Workshop Item
Interaction Affordances are central to the W3C Web of Things (WoT): they allow Consumers to identify and use the capabilities provided by Web Things. Ideally, interaction affordances would allow consumers to arrive-and-operate in any W3C WoT environment: given an entry URI, consumers would be able to achieve their tasks in an autonomous manner by navigating the hypermedia and deciding among the various options presented to them at run time. A central challenge then, which is not typically within the scope of Web engineering, is how to design, program, debug, monitor, and regulate such autonomous consumers of Web Things. The engineering of similar autonomous systems has been studied to a large extent in research on multi-agent systems (MAS), and we believe that tapping into the large body of MAS research holds great promise for unlocking the full potential of the W3C WoT. In this position paper, we motivate and present our vision for autonomous systems in the WoT, and support this vision with a prototype for industrial manufacturing. We then discuss some of the challenges and opportunities raised by bringing autonomy to the WoT.

The World Wide Web has evolved drastically over the past decade -- and the proliferation of Web APIs has turned it into the middleware of choice for most distributed systems. The recent focus on hypermedia-driven APIs together with initiatives such as the Web of Things and Linked Data are now promoting and advancing the development of a new generation of dynamic, open, and long-lived systems on the Web. These systems require agent-based solutions to the point that Web researchers have started to build autonomous systems on their own. It is thus both timely and necessary to investigate and align the latest developments in Web research and multi-agent systems (MAS) research. In this paper, we analyze in hindsight the factors that hindered the widespread acceptance of early Web-based MAS. We argue that the answer lies equally in a lack of practical use cases and in the premature development and alignment of Web and agent technologies. We then present our vision for a new generation of autonomous systems on the Web, which we call hypermedia MAS, together with the research opportunities and challenges they bring.

Hypermedia is increasingly used in Web service design, particularly in Linked Data and Web of Things systems where the use of static service contracts is not practical. This evolution raises new challenges: to discover, consume, and integrate hypermedia services at runtime, clients have to become increasingly autonomous in pursuit of their design goals. Such autonomous systems have been studied to a large extent in research on multi-agent systems (MAS). To consolidate the evolution of hypermedia services, it is now necessary to have comprehensive discussions on integrating hypermedia systems and MAS, henceforth hypermedia MAS. The goal of this initiative is to establish a common forum for both the Web and MAS communities, and to create social, conceptual and technological bridges across these fields. We invite researchers and practitioners to imagine, design, build, evaluate, and share their thoughts and visions on what the future of the Web for both people and autonomous agents will be.

Conference or Workshop Item
In MAS programming and engineering, the environment and the organisation can be exploited as first-class design and programming abstractions alongside the agent abstraction. A main example of a platform implementing this view is JaCaMo, which allows the programming of a MAS in terms of an organisation of cognitive agents sharing a common artifact-based environment. However, MAS models and platforms in general do not provide a satisfactory approach for MAS developers to uniformly deal with distribution along multiple dimensions: agent, environment, and organisation. Typically, environments are either centralised in a single node or composed of parts that run on different nodes, but with poor support at the programming and execution levels for dealing with this. In this paper, we tackle this problem by proposing a model for engineering world-wide distributed environments and organisations for MAS. The approach integrates the A&A (Agents and Artifacts) conceptual model with a web/resource-oriented view of distributed systems as proposed by the REST architectural style. To evaluate the approach, an extension of the JaCaMo open-source platform has been developed that implements the proposed model.

Conference or Workshop Item
With head-mounted displays (HMDs), users can access and interact with a broad range of applications and data. Although some of this information is privacy-sensitive or even confidential, no intuitive, unobtrusive and secure authentication technique is available yet for HMDs. We present LookUnlock, an authentication technique for HMDs that uses passwords that are composed of spatial and virtual targets. Through a proof-of-concept implementation and security evaluation, we demonstrate that this technique can be efficiently used by people and is resistant to shoulder-surfing attacks.

Markel Iglesias-Urkia, Diego Casado-Mansilla, Simon Mayer, Josu Bilbao, Aitor Urbieta
Journal paper
Electrical substations are crucial elements of Smart Grids where they are mainly responsible for voltage transformations. However, due to the integration of distributed energy resources in the grid, substations now have to provide additional grid management capabilities which in turn require supervision and automation solutions for large low-voltage grids. A recurring challenge in such deployments are siloed systems that are due to non-interoperable communication protocols across substations: although most substations’ communication is based on the International Electrotechnical Commission (IEC) 61850 standard, deployed legacy protocols lag behind modern communication technologies in terms of performance, hindering the full transition to lightweight protocols. This paper demonstrates that IEC 61850 can be fully mapped to the Constrained Application Protocol (CoAP) in combination with the Concise Binary Object Representation (CBOR) format while improving system performance compared to existing alternatives (e.g. WS-SOAP and HTTP). On average, CoAP+CBOR needs 44% and 18% of the message size and 71% and 85% of the time compared to systems based on HTTP and WS-* Web Services, respectively – this is especially relevant for resource-constrained devices and networks in electrical grids. In addition, CoAP is based on the Representational State Transfer (REST) architectural style, which supports system integration and interoperability through uniform identification and interaction. This approach fosters the standard-compliant integration of legacy platforms with modern substations as well as current IoT systems in neighboring domains such as building management and infrastructure automation systems.
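The message-size advantage of CoAP+CBOR reported above can be made tangible with a toy comparison. The sketch below hand-rolls a minimal subset of CBOR (RFC 7049) and compares it against compact JSON for an invented measurement report loosely modeled on an IEC 61850 measured value; the field names and values are illustrative assumptions, and a production system would use a full CBOR library rather than this fragment:

```python
# Toy CBOR (RFC 7049) encoder covering unsigned ints, floats, short
# text strings, and small maps -- just enough to compare encoded sizes.
import json
import struct

def cbor_encode(obj) -> bytes:
    """Encode a small subset of values as CBOR."""
    if isinstance(obj, int) and not isinstance(obj, bool):
        if 0 <= obj < 24:
            return bytes([obj])                      # tiny uint, 1 byte
        if 0 <= obj < 256:
            return bytes([0x18, obj])                # uint8
        if 0 <= obj < 65536:
            return b"\x19" + obj.to_bytes(2, "big")  # uint16
        if 0 <= obj < 2**32:
            return b"\x1a" + obj.to_bytes(4, "big")  # uint32
    if isinstance(obj, float):
        return b"\xfb" + struct.pack(">d", obj)      # IEEE-754 double
    if isinstance(obj, str):
        data = obj.encode("utf-8")
        if len(data) < 24:
            return bytes([0x60 + len(data)]) + data  # short text string
    if isinstance(obj, dict) and len(obj) < 24:
        out = bytes([0xA0 + len(obj)])               # small map header
        for k, v in obj.items():
            out += cbor_encode(k) + cbor_encode(v)
        return out
    raise TypeError(f"unsupported value: {obj!r}")

# Invented measurement report (fields are illustrative, not IEC 61850 syntax):
report = {"ref": "MMXU1.TotW", "mag": 1523.5, "q": 0, "t": 1700000000}

json_size = len(json.dumps(report, separators=(",", ":")).encode("utf-8"))
cbor_size = len(cbor_encode(report))
```

Even for this tiny payload, the self-describing CBOR encoding is noticeably smaller than compact JSON, which matters most on the resource-constrained devices and networks the paper targets.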

Michele Nati, Simon Mayer, Angelo Capossele, Paolo Missier
Journal paper
The number of connected devices is quickly approaching 20 billion thanks to the increasing availability of connectivity technologies and decreasing hardware costs, and driven by a growing ability to mine the generated data. This market is expected to grow even larger if access to devices and their data were open, with a flourishing ecosystem of applications and services being created around them. Web of Things APIs are paving the way to standardize this process; however, with more access and more revenue come more responsibilities for device manufacturers and providers of add-on services alike: misuse of data and devices needs to be prevented and trust systems need to be in place to reinforce customer confidence in the resulting service systems. In this paper we discuss the opportunities and risks associated with opening up connected devices and propose a technical framework for regulating the resulting service ecosystems.

Components in industrial plants are connected by a multitude of physical and virtual dependencies. We present a system that makes these dependencies – physical, functional, virtual, and regulatory in nature, as well as regarding a component's runtime behavior – intuitively visible to users and enables them to pose queries about the components and their interactions and interdependencies. The presented approach and our prototypes are based on relational information models that prepare this information for user interfaces and make it easily accessible, as well as on a mixed reality interface for visualizing dependencies between components and query results. Our approach is suitable not only for already digitalized "smart" components, but also for "dumb" objects such as work surfaces (floors, shelf boards, and compartments), their materials, and the processed workpieces themselves.

Simon Mayer, Andrei Ciortea, Alessandro Ricci, Maria Ines Robles, Matthias Kovatsch, Angelo Croatti
Journal paper
Current standardization efforts of the Web of Things provide a unique opportunity to integrate technologies from the research domain of multiagent systems and the human‐computer interaction field that could take us closer to creating intuitively usable distributed artificial intelligence. In this paper, we discuss what bridges are missing between these largely detached research communities: we propose to use the Hypermedia As The Engine Of Application State principle that is at the core of the Web architecture to underpin systems that integrate multiagent planning and acting with semantic technologies and with interoperable mixed reality interfaces. This should enable the creation of highly augmented environments in private as well as commercial and industrial environments where physical and digital things coexist and interact with one another.

Multi-agent systems (MAS) have long been envisioned as a key enabling technology in manufacturing, but this promise is yet to be realized: the lack of proper models, architectures, tooling, and the high level of expertise required for designing and programming agent-based manufacturing systems have hindered their large-scale acceptance. The emerging Web of Things (WoT), now being standardized at the W3C and IETF, provides new research opportunities that could help MAS enter the mainstream. In this paper, we present an approach to design scalable and flexible agent-based manufacturing systems that integrates automated planning with multi-agent oriented programming for the WoT: autonomous agents synthesize production plans using semantic descriptions of Web-based artifacts and coordinate with one another via multi-agent organizations; engineers can program and repurpose the systems on the fly via an intuitive Web user interface. The systems use the Web as an application architecture (and not just as a transport layer), which facilitates the seamless integration of geographically distributed production cells. To demonstrate our approach, we implemented a prototypical production cell that uses industry-grade robots and an augmented reality interface for human workers. Together, these contributions demonstrate a means to achieve an intriguing vision for the forthcoming fourth industrial revolution: a global collective intelligence for manufacturing.

The advent of distributed energy resources in conjunction with recent developments in the context of smart embedded devices has fostered the increased integration of intelligent generators and prosumers into the energy system. As a consequence, previously passively operated grids have become highly dynamic environments, which require active monitoring and control. To allow for such monitoring and control, grid operators require lightweight SCADA-like functionality that can be flexibly adapted to dynamic grid contexts and is suitable for resource-constrained embedded systems. Grid Watch Dog enables system operators to create monitoring and control rules for low-voltage field devices on the fly, and to immediately deploy them in the field. The system combines a stream reasoner on field devices with an intuitive user interface for creating and managing rules. Grid Watch Dog has been implemented as a proof of concept in a smart grid testbed and we demonstrate its application in the context of monitoring a battery energy storage system.
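To make the idea of operator-defined, on-the-fly monitoring rules concrete, the following sketch shows one simple way such a rule could be represented in code. The rule structure, the reading fields, and the threshold are invented assumptions for illustration, not Grid Watch Dog's actual rule language or stream-reasoning machinery:

```python
# Illustrative sketch of an operator-defined monitoring rule of the kind
# deployed to field devices. The rule representation, reading fields,
# and threshold are assumptions, not the system's actual rule language.
def make_rule(field, predicate, action):
    """Return a rule that fires `action` when `predicate` holds for
    `field` in an incoming reading, and returns None otherwise."""
    def rule(reading):
        value = reading.get(field)
        if value is not None and predicate(value):
            return action(reading)
        return None
    return rule

# e.g., alert when a battery's state of charge drops below 20 %:
low_soc = make_rule(
    "soc",
    lambda v: v < 20.0,
    lambda r: f"ALERT: battery SoC {r['soc']}% below threshold",
)
```

A set of such rules could then be evaluated against every incoming reading on the device itself, which is the kind of lightweight, locally deployed monitoring the abstract describes.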

In the current Internet of Things (IoT), all sorts of devices and objects with diverse capabilities are being connected to the Internet and the Web. Consequently, new lightweight network protocols are also being developed to connect resource-constrained devices or networks with each other. One of these protocols is the Constrained Application Protocol (CoAP), which provides a REST architecture to resource- and network-constrained devices. Besides the RESTful client-server paradigm, CoAP supports a publish-subscribe model thanks to its Observe extension, which mimics some of MQTT's functionalities. However, this extension has some limitations in the subscription mechanism. Hence, this paper proposes several enhancements that define new CoAP options and response codes for this mechanism. Furthermore, a theoretical comparison of the current CoAP capabilities against the new possibilities is also provided. For that, a specific use case is proposed, and a comparison is made in terms of the overhead required to exchange payloads and to subscribe to state change notifications on resources.

This paper explores how the functional safety of industrial deployments can be improved through emerging Industrie 4.0 approaches. We discuss how new sources of data, that are becoming accessible through advancing digitalization, can be used for this purpose, and how principles from predictive maintenance systems can be applied to industrial fail-safe applications: based on data from the industrial components themselves and from their environment as well as on metadata about interactions between these systems and people, we propose to create a model-based monitoring and controlling system that focuses on preserving the functional safety of the installation as a whole. We expect such a Predictive Fail-Safe system to mitigate or even prevent unsafe consequences of failures even in highly dynamic "smart factories", thereby reducing or preventing harm to other equipment, the environment, and the involved people.

In the operation of electrical distribution systems, an increased shift towards smart grid operation can be observed. This shift provides operators with a high level of reliability and efficiency when dealing with highly dynamic distribution grids. Technically, this implies that the support for a bidirectional flow of data is critical to realizing smart grid operation, culminating in the demand for equipping grid entities (such as sensors) with communication and processing capabilities. Unfortunately, the retrofitting of brown-field electric substations in distribution grids with these capabilities is not straightforward – this scenario requires a solution that provides "industry-grade" Internet of Things capabilities at "consumer-grade" prices (e.g., off-the-shelf communication standards and hardware). In this paper, we discuss the particular challenge of precisely time-synchronized wireless data collection in secondary substations that at the same time supports on-site configuration by authorized maintenance personnel through a mobile application: to achieve this, we propose a combined implementation of IPv6 over Bluetooth Low Energy.

One of the most critical challenges in human-robot collaborative work settings is ensuring the health and safety of the involved human workers. We propose to integrate task-level planning with semantically represented workplace safety rules that are published by regulatory bodies, meaning that our system can adapt to produce different variants of a product while respecting workplace safety regulation. Our prototype system interacts with human workers and machine agents via Activity Streams and a speech synthesis interface, and we have shown that its SPIN reasoning engine can scale to scenarios that incorporate complex products and many agents. The current system state and action logs of the agents and products are easily observable using a dashboard interface. The semantic models were evaluated by five experts in workplace safety and process engineering who expressed confidence about using, maintaining, and even extending the models themselves after only negligible training, a crucial factor for the real-world adoption of such systems.

The creation of open APIs is one of the core concepts advocated by the Web of Things community. However, radically open APIs to the basic functional components of physical devices are found only in research projects and seem not to have found favor with a wider audience yet. With this paper, I would like to initiate a discussion about the value of open APIs for the producers and users of smart devices, and for society as a whole: How can manufacturers be motivated to outfit their products with open APIs? How can consumers be educated about the benefits of not being locked into manufacturer ecosystems? And might the decision of opening up a device API even be subject to antitrust law?

We present HoloMiracle, a system that enables operators of industrial equipment and beyond to pose queries about physical, virtual, regulatory, and functional relationships between components of the equipment and that visualizes the responses to their queries in-situ, as a holographic overlay. We report on HoloMiracle's system architecture and discuss a concrete use case in the automotive manufacturing domain.

Simon Mayer, Jack Hodges, Dan Yu, Mareike Kritzler, Florian Michahelles
Journal paper
Production processes in factories increasingly involve dynamic interactions between individual manufacturing devices that allow processes to evolve and enable rapid reconfigurations for the mass-customization of products. When participating machines start to be aware of their own functionality, processes can even be dynamically reconfigured at run time. We propose a system that supports operators within such environments by enabling them to directly observe which devices interact with each other and what data is transmitted between them at any given moment, using computer-mediated reality technology. We demonstrate the application of this system in a goal-driven manufacturing use case and discuss its applicability to the monitoring of smart environments in users’ homes.

One of the central research challenges in the Internet of Things and Ubiquitous Computing domains is how users can be enabled to "program" their personal and industrial smart environments by combining services that are provided by devices around them. We present a service composition system that enables the goal-driven configuration of smart environments for end users by combining semantic metadata and reasoning with a visual modeling tool. In contrast to process-driven approaches where service mashups are statically defined, we make use of embedded semantic API descriptions to dynamically create mashups that fulfill the user’s goal. The main advantage of our system is its high degree of flexibility, as service mashups can adapt to dynamic environments and are fault-tolerant with respect to individual services becoming unavailable. To support users in expressing their goals, we integrated a visual programming tool with our system that allows users to graphically model the desired state of a smart environment, thereby hiding the technicalities of the underlying semantics. Possible applications of the presented system include the management of smart homes to increase individual well-being, and reconfigurations of smart environments, for instance in the industrial automation or healthcare domains.
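The goal-driven composition described above can be illustrated with a deliberately simplified model in which each service is a precondition/effect pair over state facts and a plan is found by forward chaining. The service names and facts below are invented; the actual system operates on embedded semantic API descriptions and a reasoner rather than this toy planner:

```python
# Toy sketch of goal-driven service composition: services modeled as
# precondition/effect pairs over state facts, composed by forward
# chaining. Service names and facts are invented for illustration.
from collections import namedtuple

Service = namedtuple("Service", "name preconditions effects")

def compose(services, initial, goal):
    """Apply any service whose preconditions hold until all goal facts
    are reached; return the ordered plan, or None if unreachable."""
    state, plan = set(initial), []
    while not goal <= state:
        progressed = False
        for s in services:
            if s.preconditions <= state and not s.effects <= state:
                state |= s.effects
                plan.append(s.name)
                progressed = True
        if not progressed:
            return None  # no applicable service adds new facts
    return plan

services = [
    Service("presence-sensor", {"user-home"}, {"presence-known"}),
    Service("light-controller", {"presence-known"}, {"lights-adapted"}),
]
plan = compose(services, {"user-home"}, {"lights-adapted"})
```

Because the plan is recomputed from whatever services are currently available, a mashup built this way can route around services that become unavailable, which is the fault-tolerance property the abstract highlights.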

In this paper, we present the application of an automatic service composition system in the context of collaborative industrial manufacturing. We discuss how our system could support a production-as-a-service paradigm and present the architecture we created to realize this potential. The basis of our approach is embedded semantic descriptions of a production cell's available resources (machines and human workers). Using these descriptions and given a specification of the product to be manufactured, our system is able to derive a collaborative plan that composes its available resources to achieve that manufacturing goal. This ability to flexibly reconfigure industrial equipment in a goal-driven way is becoming increasingly important in the industrial domain, where mass customization leads to ever smaller lot sizes and potentially even to a lot-size-one world. It can revolutionize the way we think about manufacturing: instead of highly optimized integrated high-volume production lines, this would allow individual manufacturing cells to offer their capabilities as a service and compete with other cells in a paradigm that we refer to as UberManufacturing.

Erik Wilhelm, Joshua Siegel, Simon Mayer, Leyna Sadamori, Sohan Dsouza, Chi-Kin Chau, Sanjay Sarma
Journal paper
We present a novel approach to developing a vehicle communication platform consisting of low-cost, open-source hardware for moving vehicle data to a secure server, a Web Application Programming Interface (API) for the provision of third-party services, and an intuitive user dashboard for access control and service distribution. The CloudThink infrastructure promotes the commoditization of vehicle telematics data by facilitating easier, flexible, and more secure access. It enables drivers to confidently share their vehicle information across multiple applications to improve the transportation experience for all stakeholders, as well as to potentially monetize their data. The foundations for an application ecosystem have been developed which, taken together with the fair value for driving data and low barriers to entry, will drive adoption of CloudThink as the standard method for projecting physical vehicles into the cloud. The application space initially consists of a few fundamental and important applications (vehicle tethering and remote diagnostics, road-safety monitoring, and fuel economy analysis) but as CloudThink begins to gain widespread adoption, the multiplexing of applications on the same data structure and set will accelerate its adoption.

The Internet of Things (IoT) envisions cross-domain applications that combine digital services with services provided by resource-constrained embedded devices that connect to the physical world. Such smart environments can comprise a large number of devices from various different vendors. This requires a high degree of decoupling and neither devices nor user agents can rely on a priori knowledge of service APIs. Semantic service descriptions are applicable to heterogeneous application domains due to their high level of abstraction and can enable automatic service composition. This paper shows how the RESTdesc description format and semantic reasoning can be applied to create Web-like mashups in smart environments. Our approach supports highly dynamic environments with resource-constrained IoT devices where services can become unavailable due to device mobility, limited energy, or network disruptions. The concepts are backed by a concrete system architecture whose implementation is publicly available. It is used to evaluate the semantics-based approach in a realistic IoT-related scenario. The results show that current reasoners are able to produce medium-sized IoT mashups, but struggle with state space explosion when physical states become part of the proofing process.

We present a system that allows drivers and fleet managers to interact with their connected vehicles both by means of direct control and indirect goal-setting. The ability to move data from vehicles to a remote server is established by the flexible and secure open vehicle telematics platform "CloudThink." Based on this platform, we present several prototypes of how people can be enabled to conveniently interact with connected vehicles: First, we demonstrate a system that allows users to select and interact with vehicles using object recognition methods and automatically generated user interfaces on smartphones or personal wearable devices. Second, we show how functional semantic metadata can be used to smooth the boundaries for interacting with vehicles in the physical and virtual worlds. Finally, we present a method for monitoring interactions between vehicles and remote services which increases safety and security by enhancing driver oversight and control over the data that leaves and enters their vehicle.

Social and socially-enabled applications have established themselves as a large and important set of scenarios that make sense in personal as well as in enterprise settings. However, it is still not clear how to best promote an open and extensible ecosystem of social applications. This makes it hard to design and implement applications that are good SOA citizens and can participate in an open and growing set of social interactions. Our work presents an architecture and implementation based on Activity Streams, which uses and slightly extends the format to work as a foundation for a SOA that allows applications to bridge vertical silos in the Internet of Things. In this paper, we discuss the application of our architecture in several different domains and our findings from mashing up applications across silos using Activity Streams. In addition to demonstrating the feasibility of this approach, we believe that our work can serve as illustration and guidance in the ongoing effort to standardize the next version of Activity Streams.
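An activity shared across silos might look like the sketch below, loosely following the Activity Streams vocabulary: an actor, a verb-like type, and an object, serialized as JSON. The device name, object content, and custom fields are assumptions for illustration.

```python
import json

# Illustrative activity loosely following the Activity Streams vocabulary:
# a smart device (actor) publishes a measurement (object). The names and
# values here are assumptions for this sketch.
activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Create",
    "actor": {"type": "Application", "name": "living-room-thermostat"},
    "object": {"type": "Note", "content": "temperature: 21.5 C"},
    "published": "2015-06-01T12:00:00Z",
}

# Any application that understands the shared vocabulary can consume the
# stream, regardless of which vertical silo produced it.
serialized = json.dumps(activity)
parsed = json.loads(serialized)
print(parsed["actor"]["name"])
```

Because every silo emits the same activity shape, mashing up streams reduces to merging JSON documents rather than writing per-silo adapters.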

We present a tool for visualizing device interactions in smart environments as a magic lens by augmenting the live camera view of a tablet with relevant connections between recognized devices in the camera's field of view.

Keeping track of device interactions in smart environments is a challenging task for everyday users. Given the expected high number of communicating devices in future smart homes, it will become increasingly important to put users more in control of their smart environments by providing tools to monitor and control the interactions between smart objects and remote services. We present a system for collecting and visualizing interactions of Web-enabled smart things and Web services in an intuitive and visually appealing way. Our tool displays device interactions both using a Web-based visualization application and in the form of a "magic lens" by augmenting the camera view of a tablet with relevant connections between recognized devices in the camera's field of view.

We present an approach that combines semantic metadata and reasoning with a visual modeling tool to enable the goal-driven configuration of smart environments for end users. In contrast to process-driven systems where service mashups are statically defined, this approach makes use of embedded semantic API descriptions to dynamically create mashups that fulfill the user's goal. The main advantage of the presented system is its high degree of flexibility, as service mashups can adapt to dynamic environments and are fault-tolerant with respect to individual services becoming unavailable. To support end users in expressing their goals, we integrated a visual programming tool with our system. This tool enables users to model the desired state of their smart environment graphically and thus hides the technicalities of the underlying semantics and the reasoning. Possible application scenarios of the presented system include the configuration of smart homes to increase individual well-being, and reconfigurations of smart environments, for instance in the industrial automation or healthcare domains.

For the Internet of Things to be adopted in people’s homes and at their workplaces, it is important to provide mechanisms that support them when controlling and monitoring smart things in their surroundings. We present the concept of user interface beaming, where the capabilities of different personal wearable computers are combined to allow users to conveniently interact with smart things in their environment. Smartglasses are used to select a target smart thing by means of current object recognition technologies. Then, an appropriate user interface for the target is rendered on the user’s smartwatch. This interface is continuously updated to reflect state changes of the target and can be used to interact with that smart thing using different interaction modalities.

The configuration of smart homes represents a difficult task for end-users. We propose a goal-driven approach to this challenge, where users express their needs using a graphical configuration environment. Our system then uses semantic descriptions of devices in the user's surroundings to derive a plan to reach the desired situation. We are able to satisfy complex demands using only first-order logic, which makes this system flexible yet fast. The focus of this paper is to demonstrate how to achieve high usability of the proposed system without burdening users with the underlying semantic technologies. Our initial demo supports setting the ambient temperature, alarms, and media playback, but the use of semantics allows the system to be extended with many different kinds of services in a decentralized way.

A fundamental paradigm shift is currently taking place in the field of computing: due to the miniaturization of computing devices and the proliferation of embedded systems, tiny, networked computers can now be easily integrated into everyday objects, turning them into smart things. In the resulting Internet of Things, physical items are no longer disconnected from the virtual world but rather become accessible through computers and other networked devices, and can even make use of protocols that are widely deployed in the World Wide Web, in a paradigm that we call the Web of Things. Eventually, smart things will be able to communicate, analyze, decide, and act - and thereby provide an invisible background assistance that should make life more enjoyable, entertaining, and also safer. However, in an environment that is populated by hundreds of Web-enabled smart things, it will become increasingly difficult for humans to interact with devices that are relevant to their current needs, and to find, select, and control them. The objective of this thesis is to investigate how human users could be enabled to conveniently interact with individual smart objects in their surroundings and to interconnect devices and configure the resulting physical mashups to perform higher-level tasks on their behalf. To achieve basic interoperability between devices, we rely on the World Wide Web with its proven protocols and architectural patterns which emphasize scalability, generic interfaces, and loose coupling between components. As a first step to facilitate the interaction with smart things on top of the basic Web principles, we propose the embedding of metadata for automatically generating user interfaces for smart devices. 
Our specific approach enables not only the generation of more intuitive graphical widgets but also the mapping of interactive components to gesture-based, speech-based, and physical interfaces by describing the high-level interaction semantics of smart devices instead of specifying purely interface-specific information. The provisioning of an interaction mechanism with a smart object is thus reduced to the embedding of simple interaction information into the representation of the smart thing. Before users can start interacting with a smart device, it must, however, first be selected. To permit users to choose which of the many smart objects in their surroundings should be involved in an interaction, we propose to use technologies for optical image recognition. The visual selection of smart things and automatically generated user interfaces enable end users to conveniently interact with individual services in their surroundings that are embodied as specific physical objects. To complement the direct interaction with smart devices, the second part of this thesis focuses on more complex use cases where multiple smart objects must collaborate to achieve the user's goal. Such situations arise, for instance, in home or office automation scenarios, or in smart factories, where machines or assembly lines could adjust to better support the operator. To put users more in control of entire environments of smart devices, we present a system that records interactions between smart things and with remote services and displays this data to users in real time. To do this, we use an augmented reality overlay on the camera feed of handheld or wearable devices such as smartphones and smartglasses. Next, we propose a management infrastructure for smart things that makes the services they offer discoverable and composable, and fully integrates them with more traditional Web-based information providers.
This system enables humans to find and use data and functionality provided by physical devices and allows machines to support users in finding services within densely populated smart environments and even to discover and use required services themselves, on behalf of the user. The basis for these applications is a generic mechanism that allows smart devices to provide semantic descriptions of the services they offer. Specifically, our infrastructure supports the embedding of functional semantic metadata into smart things that describes which functionality a concrete object provides and how to invoke it. Based on this metadata, a semantic reasoning component can find out which composite tasks can be achieved by a user's smart environment and can provide instructions about how to reach concrete goals, thus enabling the configuration of entire smart environments for end users. As a concrete use case, we present a platform that applies our proposed interaction modes with smart things to automobiles: a mobile application recognizes cars, downloads information about them from a back-end server, and displays this information - as well as interaction capabilities with the car and its services - on the user's interface device. The back-end server furthermore exposes functional metadata about the capabilities of individual cars to make their services automatically usable within physical mashups. Finally, it records client interactions to enable car owners to monitor in real time who accesses which kind of data and services on their vehicles. The overarching objective of this thesis is to show how current technologies could support the interaction of end users with Web-enabled smart devices. To achieve this, we make use of a number of technologies from different areas of the computer science discipline: A management infrastructure makes smart things discoverable for human users and machines and builds upon current research in the distributed systems domain. 
State-of-the-art computer vision technologies allow users to select devices in their environment using handheld or wearable computers such as smartphones or smartglasses. Novel methods from the field of human-computer interaction enable the embedding of metadata that allows for automatically generating user interfaces. Finally, semantic technologies enable flexible compositions of smart things that collaborate to achieve the user's goal.

With ever more devices being connected to the Internet and everyday objects becoming "smart" due to embedded processors and communication capabilities, the provisioning of intuitive user interfaces to control smart things is quickly gaining importance. To address this issue, we present a model-based interface description scheme that enables automatic, modality-independent user interface generation. User interface description languages based on our approach carry enough information to suggest appropriate and intuitive interfaces. Still, they are simple enough to enable developers to describe the interaction semantics of a smart thing using very little, easily producible markup. This is enabled by describing the atomic interactive components of a device rather than the device as a whole, and capturing the high-level semantics of an interaction. As a concrete language based on this approach, we propose a taxonomy of abstract sensing and actuation primitives and present a smartphone application that can act as a ubiquitous device controller. An evaluation of our approach in a laboratory setup, home environments, and a lecture hall automation system as well as the results of a user study highlight the accessibility of the proposed description scheme for application developers, its suitability for controlling smart devices, and its generality with respect to describing heterogeneous smart things.
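Modality-independent generation from abstract primitives can be sketched as a simple lookup from (primitive, modality) pairs to concrete widgets. The primitive names, widget names, and device description format below are invented for illustration and are not the taxonomy proposed in the paper.

```python
# Map abstract interaction primitives to concrete widgets per modality.
# Primitive names, widget names, and the description format are
# illustrative assumptions, not the paper's actual taxonomy.
WIDGETS = {
    ("binary-actuation", "graphical"): "toggle switch",
    ("binary-actuation", "speech"):    "on/off voice command",
    ("range-actuation",  "graphical"): "slider",
    ("range-actuation",  "speech"):    "numeric voice prompt",
    ("value-sensing",    "graphical"): "gauge",
    ("value-sensing",    "speech"):    "spoken read-out",
}

def generate_ui(description, modality):
    """Render a device description (a list of atomic components) for one modality."""
    return [(component["name"], WIDGETS[(component["primitive"], modality)])
            for component in description]

# A lamp described by its atomic interactive components, not as a whole:
lamp = [
    {"name": "power",      "primitive": "binary-actuation"},
    {"name": "brightness", "primitive": "range-actuation"},
]

print(generate_ui(lamp, "graphical"))
# → [('power', 'toggle switch'), ('brightness', 'slider')]
```

The same two-line device description also yields a speech interface via generate_ui(lamp, "speech"), which is the sense in which the markup stays modality-independent.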

Nemanja Memarovic, Simon Mayer, Paul Baumann, Marco Pasch
Book
Facilitating the interaction of human users and machines with smart devices is important to drive the successful adoption of the Internet of Things in people's homes and at their workplaces. In this poster contribution, we present an approach to support users controlling smart devices in their environment. To do this, we propose to embed semantic metadata in the representations of smart things. By means of this metadata and a semantic reasoning service, our system enables users to specify a desirable state of their smart environment and produces a machine-readable description that details which steps are necessary to reach this state, where each step corresponds to a Web request to a smart device. A client application that could, for instance, run on the user's smartphone, can distill the necessary steps required to reach the user's goal state from this description and execute them to modify the smart environment on behalf of the user.

Facilitating the interaction of human users and machines with smart devices is important to drive the successful adoption of the Internet of Things in people's homes and at their workplaces. In this paper, we present a system that helps users control their smart environment, by embedding semantic metadata in the representations of smart things. The system enables users to specify a desirable state of their smart environment and produces a machine-readable description that details which steps are necessary to reach this state, where each step corresponds to a Web request to a smart device. A client application that, for instance, runs on the user's smartphone, then implements these steps to reconfigure the user's smart environment. We report on our experiences when integrating semantic technologies with smart devices and on two use cases from the home and office automation domains that we implemented in our office space.
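Such a machine-readable description can be thought of as an ordered list of Web requests that the client executes one by one. The step format and device URLs below are assumptions for illustration; a real client would issue the HTTP requests over the network instead of updating an in-memory table.

```python
# Execute a machine-readable plan in which each step corresponds to one
# Web request to a smart device. Step format and URLs are illustrative;
# the dictionary below stands in for the actual devices.
plan = [
    {"method": "PUT", "url": "http://blinds.local/state",  "body": "closed"},
    {"method": "PUT", "url": "http://heater.local/target", "body": "22"},
]

environment = {}  # simulated device state, keyed by resource URL

def execute(plan, environment):
    """Carry out each step of the plan in order to reach the goal state."""
    for step in plan:
        if step["method"] == "PUT":
            environment[step["url"]] = step["body"]  # simulate the request
    return environment

execute(plan, environment)
print(environment["http://heater.local/target"])
# → 22
```

On a smartphone client, the loop body would be replaced by real HTTP calls, leaving the plan itself unchanged.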
