Tentative program

The Talks

Presentations should be limited to approximately 10 minutes, followed by 15 minutes of discussion.

Schedule

Monday, September 4

12:00 - 13:30 Arrival, registration and lunch
13:30 - 15:00 Session 1: Model-Based Engineering
Chair: Luciana Rebelo
13:30 - 13:55 Co-verification for robotics: from simulation to verification of hybrid systems
Pedro Ribeiro (University of York, UK)
Robots are expected to play important roles in furthering prosperity; however, providing formal guarantees on their (safe) behaviour is not yet fully within grasp, given the multifaceted nature of such cyber-physical systems. Simulation, favoured by practitioners, provides an avenue for experimenting with different scenarios before committing to expensive tests and proofs. In this talk, I will discuss how models may be brought together for (co-)verification of system properties, with simulation complementing verification. This will be cast using the model-driven RoboStar framework, which clearly identifies models of the software, hardware, and scenario, and has a heterogeneous formal semantics amenable to verification using state-of-the-art model checkers and theorem provers, such as Isabelle/UTP.
13:55 - 14:20 Specification and Task Scheduling of Mission Tasks for Heterogeneous Robot Systems
Gricel Vazquez Flores (University of York, UK)
The specification of mission requirements for multi-robot systems poses a set of challenges, since such requirements involve multiple spatial and time constraints as well as optimization objectives, such as minimizing battery consumption while maximizing the probability of mission success. To ease the formalization of requirements and avoid ambiguity, we present QUARTET, a catalogue of robotic mission specifications in probabilistic temporal logic. Moreover, we introduce a formal tasK AllocatioN and scheduling apprOAch for multi-robot missions (KANOA). To mitigate the complexity of robotic mission planning, KANOA handles the allocation of mission tasks to robots and the scheduling of the allocated tasks separately. To that end, the task allocation problem is formalized in first-order logic and resolved using the Alloy model analyzer, and the task scheduling problem is encoded as a Markov decision process and resolved using the PRISM probabilistic model checker.
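To make the allocate-then-schedule split concrete, the toy Python sketch below balances hypothetical task durations across robots and then orders each robot's tasks; it is illustrative only and stands in for the actual Alloy-based allocation and PRISM-based MDP scheduling used by KANOA.

```python
from itertools import product

# Hypothetical tasks (with durations) and robots (with the tasks they can perform).
tasks = {"t1": 3, "t2": 2, "t3": 4}
robots = {"r1": {"t1", "t2"}, "r2": {"t2", "t3"}}

def allocate(tasks, robots):
    """Stage 1 (stand-in for the first-order-logic/Alloy step): exhaustively search
    for a capability-respecting allocation that minimises the largest robot workload."""
    best, best_cost = None, float("inf")
    for assignment in product(robots, repeat=len(tasks)):
        alloc = dict(zip(tasks, assignment))
        if any(t not in robots[r] for t, r in alloc.items()):
            continue  # a robot was given a task it cannot perform
        load = {r: sum(tasks[t] for t, rr in alloc.items() if rr == r) for r in robots}
        cost = max(load.values())
        if cost < best_cost:
            best, best_cost = alloc, cost
    return best

def schedule(alloc, tasks):
    """Stage 2 (stand-in for the MDP/PRISM step): order each robot's allocated
    tasks, here simply longest first."""
    per_robot = {}
    for t, r in alloc.items():
        per_robot.setdefault(r, []).append(t)
    return {r: sorted(ts, key=lambda t: -tasks[t]) for r, ts in per_robot.items()}

allocation = allocate(tasks, robots)
print(allocation)                   # e.g. {'t1': 'r1', 't2': 'r1', 't3': 'r2'}
print(schedule(allocation, tasks))  # per-robot task order
```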
14:20 - 14:45 Runtime Monitoring of Robotic Applications
Sven Peldszus (Ruhr University Bochum, Germany)
In my talk, I will present how to trace non-functional properties throughout the development process of robotic applications to monitor and verify these at runtime.
14:45 - 15:00 Discussion
15:00 - 15:30 Coffee/Tea Break
15:30 - 17:00 Session 2: Testing
Chair: Luciana Rebelo
15:30 - 15:55 A Study in Software Testing for Robotics
Argentina Ortega (Hochschule Bonn-Rhein-Sieg and Ruhr University Bochum, Germany)
15:55 - 16:20 A Framework for Test Model Refinement
Hugo Araujo (King's College London, UK)
Model-based testing (MBT) is a testing method that employs (mathematical) models of a system under test (SUT) in order to generate test cases. Choosing the right level of abstraction for the test model is crucial to the success of a model-based testing strategy. We argue that the level of abstraction of the models impacts the strategy's effectiveness and precision. To investigate this hypothesis, we conduct an experiment that employs several distinct test models developed using our process for step-wise enrichment of test models for robots and autonomous systems (RAS). This work is intended as a step towards guidelines for those who build behaviour models for the purpose of testing.
16:20 - 16:45 Conformance Testing for Trustworthy Autonomous Systems
Mohammad Mousavi (King's College London, UK)
16:45 - 17:00 Discussion

Tuesday, September 5

09:00 - 10:00 Keynote: What Should I Verify?
Marie Farrell (University of Manchester, UK)
The harsh environment, the distances involved, and the communication issues surrounding space exploration make it both necessary and difficult to deploy autonomous robotic systems for these missions. However, to ensure the success of such expensive and mission-critical systems, we need a fundamental step change in the way that these systems are verified and assessed. Formal methods, combined with testing and simulation-based approaches to verification, all play a role in analysing the robustness of these systems. However, none of these techniques can be used effectively unless the requirements of the system are clearly and unambiguously defined. In fact, requirements specification for autonomous systems is an especially difficult part of the development process. This talk will provide an overview of my recent work on examining requirements for software that learns. Once the requirements are specified, complex robotic systems typically benefit from a heterogeneous and integrated approach to verification, and I will provide some examples of this.
10:00 - 10:30 Coffee/Tea Break
10:30 - 12:00 Session 3: Adaptation and Human-Robot Interactions
Chair: Patrizio Pelliccione
10:30 - 10:55 Model-Driven Software Composition as Enabler for Cognitive Robotic Systems
Christian Schlegel (Technische Hochschule Ulm, Germany)
Robots carry the promise of being universal machines: flexible and versatile in use, independent in fulfilling tasks, easily adaptable to new tasks, and even able to select on their own the skills best suited to fulfil assigned jobs in an adequate manner. This comes with a tremendous complexity of their software systems, the need for introspection by the robot itself, and the need for semantic interaction between the user and the robot. A model-driven approach for robotic software systems based on service-oriented components, following the principles of separation of roles and of composition, has already proved successful in (a) reducing the overall effort of composing and modifying complex robotics software systems, (b) matching task-level plots with available skills, and (c) using intuitive graphical representations for semantic configuration (e.g., placing a 4-way-stop policy for robots at a crossing). A core asset is the so-called digital data sheet, which describes the various building blocks (software components, skills, task plots, etc.) as blocks with ports, from an outside view. Digital data sheets are abstracted representations that are not suitable for generating a building block, but they inform the user about the required and provided ports of the asset, the operating modes it possesses, its configurations and their impact on its behaviour and qualities, etc. Digital data sheets are directly derived from the workflow in the model-driven toolchain.
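As a rough illustration of the outside view that a digital data sheet provides, the hypothetical Python structure below lists the provided/required ports, operating modes, and configurations of a fictitious component; the field names and example values are invented for this sketch and are not the toolchain's actual format.

```python
from dataclasses import dataclass, field

@dataclass
class Port:
    name: str
    service: str     # e.g. the communicated data type or service
    direction: str   # "provided" or "required"

@dataclass
class DigitalDataSheet:
    block_name: str
    kind: str                                   # "software component", "skill", "task plot", ...
    ports: list[Port] = field(default_factory=list)
    operating_modes: list[str] = field(default_factory=list)
    configurations: dict[str, str] = field(default_factory=dict)  # setting -> effect on behaviour/quality

# A fictitious component described from the outside only.
sheet = DigitalDataSheet(
    block_name="LaserObstacleAvoid",
    kind="software component",
    ports=[Port("laser_in", "LaserScan", "required"),
           Port("cmd_vel_out", "VelocityCommand", "provided")],
    operating_modes=["active", "standby"],
    configurations={"max_speed=0.5": "slower but safer navigation"},
)
print(sheet)
```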
10:55 - 11:20 Humans versus Machines: The Battle for Supporting Cyber-Physical Systems Design
Claudio Menghi (University of Bergamo, Italy)
Developing robotics and cyber-physical systems requires engineers to detect and fix design flaws before their deployment. This activity is complex, error-prone, and expensive. Engineers often rely on machines for support in system design since, unlike humans, they can perform massive computations in a limited time. However, machines do not possess the reasoning capabilities typical of humans. This talk will reflect on the "Humans versus Machines" dilemma. It will argue that developing effective techniques that combine human and machine capabilities is necessary to support the design of the robotics and cyber-physical systems applications of the future. The talk will present recent techniques that rely on this idea, discuss results, and describe lessons learned.
11:20 - 11:45 Towards Adaptive Planning of Assistive-care Robot Tasks
Ioannis Stefanakos (University of York, UK)
I will present an adaptive path planning framework for robotic mission execution in assistive-care applications. The framework provides a graph-based environment modelling approach, with dynamic path finding performed using Dijkstra’s algorithm. A predictive module is used to estimate the human’s movement through the environment, allowing replanning of the robot’s path. I will demonstrate the use of the framework in a simulated assistive-care case study in which a mobile robot navigates through the environment and monitors an end user with mild physical or cognitive impairments.
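A minimal sketch of this kind of graph-based planning is shown below: Dijkstra's algorithm computes a shortest path over a hypothetical room graph, and the path is recomputed when the predictive module expects a person to occupy a waypoint (illustrative Python, not the presented framework's code).

```python
import heapq

def dijkstra(graph, start, goal, blocked=frozenset()):
    """graph: node -> {neighbour: edge_cost}; returns (cost, path), avoiding blocked nodes."""
    queue, seen = [(0, start, [start])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph[node].items():
            if nxt not in seen and nxt not in blocked:
                heapq.heappush(queue, (cost + w, nxt, path + [nxt]))
    return float("inf"), None

# Hypothetical environment graph: waypoints and traversal costs.
rooms = {"dock": {"hall": 1},
         "hall": {"dock": 1, "kitchen": 1, "lounge": 5},
         "kitchen": {"hall": 1, "lounge": 1},
         "lounge": {"hall": 5, "kitchen": 1}}

_, initial_path = dijkstra(rooms, "dock", "lounge")
# If the predictive module expects the person to occupy the kitchen, replan around it:
_, replanned_path = dijkstra(rooms, "dock", "lounge", blocked={"kitchen"})
print(initial_path)    # ['dock', 'hall', 'kitchen', 'lounge']
print(replanned_path)  # ['dock', 'hall', 'lounge']
```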
11:45 - 12:00 Discussion
12:00 - 13:30 Lunch
13:30 - 15:00 Session 4: Domain Engineering and Variability
Chair: Thorsten Berger
13:30 - 13:55 On Modelling and Analysing Autonomous Underwater Robots as Probabilistic Featured Transition Systems
Juliane Päßler (University of Oslo, Norway)
13:55 - 14:20 Exploring the Architecture and Development Process of Open-Source ML-Enabled Software Systems
Yorick Sens (Ruhr University Bochum, Germany)
14:20 - 14:45 Tracing Security Features in Robotic Systems
Kevin Hermann (Ruhr University Bochum, Germany)
Maintaining security features is crucial for any robotic system, especially if a security incident has occurred. Once a security breach is detected, it is important to quickly provide a fix to avoid damage or costs. To this end, developers need to find the vulnerable code within the system as fast as possible. For that, tracing links can be leveraged to locate the code fragments corresponding to the affected security features. In my presentation, I will talk about methods for tracing security features in robotic systems and about the importance of doing so.
14:45 - 15:00 Discussion
15:00 - 15:30 Coffee/Tea Break
15:30 - 17:00 Session 5: Formal Methods for Robotics
Chair: Marie Farrell
15:30 - 15:55 Mapping Properties of Control Theory to Software Engineering Properties with TCTL and Property Specification Pattern
Nils Chur (Ruhr University Bochum, Germany)
Nowadays, self-adaptive software is becoming increasingly popular, with many implementations relying on embedded controllers and feedback loops. Verifying that these systems are functionally correct is an essential part of the verification process, which is traditionally done through intensive simulation-based testing using test cases. However, properties such as stability and performance are often difficult to prove analytically, which is why formal verification methods are used. Since control theory (CT) provides a well-established theoretical background for designing feedback loops and guarantees certain properties, it has received increased attention. Until now, it has remained an open research question how software engineering (SE) approaches for self-adaptive software take control-theoretic properties into account; as a result, most software implementations of such controllers lack this theoretical grounding. Formulating these properties in a language understandable to model checkers, and mapping them between the two worlds, is a challenging task. To address this challenge, this thesis maps control-theoretic properties into a formal language with the help of Property Specification Patterns (PSP). We take a bottom-up approach, defining a control design (in Simulink) for an adaptive cruise control, which is then transferred to code (C). Next, we define properties using PSP, which allow us to show whether the CT properties still hold at the code level. Finally, we provide evidence that these properties are fulfilled by the system using the UPPAAL model checker. The specified properties are intended to provide reusable patterns that can be applied to similar applications.
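As an illustration of such a mapping, a settling-time requirement can be phrased with the bounded-response specification pattern and read as a TCTL formula; the atomic propositions overshoot and settled and the bound T are hypothetical placeholders, not taken from the case study:

\[
\mathrm{AG}\,\big(\mathit{overshoot} \rightarrow \mathrm{AF}_{\le T}\,\mathit{settled}\big)
\]

Informally: whenever the controlled variable overshoots, it settles again within T time units.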
15:55 - 16:20 A Formal Model of Liability for Autonomous Vehicles
Kaveh Aryan (King's College London, UK)
16:20 - 16:45 Review on Architectures in the Space Domain to support AI and DevOps: Midterm Findings
Luciana Rebelo (Gran Sasso Science Institute, Italy)
In this study, we aim to survey the state of the art in electrical, hardware, and software architectures in the space domain that are able to support Artificial Intelligence (AI) and/or promote DevOps, i.e., Continuous Integration and Deployment (CI/CD). We are specifically interested in the use of AI on board space devices and in solutions enabling the development of in-orbit frameworks with DevOps technologies that allow the development, testing, and deployment of flight software even when the software platform is in orbit. Aspects such as key architectural solutions, developed techniques/methods, experiments, analytical and simulation results, and challenges related to the peculiarities of the space domain are investigated.
17:00 Social event (walk to Westminster, ride on the London Eye)

Wednesday, September 6

09:00 - 10:00 Keynote: Learning in RoboStar
Ana Cavalcanti (University of York, UK)
The RoboStar framework for Software Engineering for Robotics includes a collection of domain-specific notations to model various artefacts, and techniques for model transformation, simulation, testing, and proof. In this presentation, we give an overview of the RoboStar approach and focus on our support for automated testing. It covers techniques for the automatic generation of software tests, both for reactive systems and for cyclic systems (simulations and code that uses a cyclic executive). We explain the techniques for fault-based testing and test conversion, which ensures traceability.
10:00 - 10:30 Coffee/Tea Break
10:30 - 12:00 Session 6: Testing and Verification
Chair: Mohammad Mousavi
10:30 - 10:55 Building Digital Twins for Robots
Mirgita Frasheri (Aarhus University, Denmark)
10:55 - 11:20 Neural Radiance Fields for Testing Vision Components in Autonomous Underwater Vehicles
Laura Weihl (IT University of Copenhagen, Denmark)
Navigation for autonomous underwater vehicles (AUVs) is challenging. If underwater visibility permits, incorporating input from onboard cameras into the robotic navigation stack can provide a rich source of information for navigating small, GPS-denied environments. Testing vision algorithms for navigation is traditionally done in simulation. However, simulators are fundamentally limited by the skill and domain knowledge of the engineers and, specifically in the underwater setting, fail to model the long tail of safety-critical scenarios. To generate realistic test data for AUV vision components, I train Neural Radiance Fields (NeRFs) on video data of small underwater environments. Incorporating NeRFs into systematic testing procedures can address the challenges of AUV navigation for safety-critical missions.
11:20 - 11:45 How to Specify Properties of Probabilistic RoboChart Models
Jim Woodcock (University of York, UK)
RoboChart is a core notation in the RoboStar framework that brings modern modelling and formal verification technologies to software engineering for robotics. It is a timed, probabilistic domain-specific language that provides UML-like architectural and state-machine modelling. This work presents RoboCertProb, a notation for specifying quantitative properties of probabilistic robotic systems modelled in RoboChart. RoboCertProb is the probabilistic extension of RoboCert, a property specification language for robotics. It is based on PCTL* and is a subset of the discrete-time part of the Prism property language, with extensions to support RoboChart. In addition to property specification, RoboCertProb configures loose constants and uninterpreted functions and operations in RoboChart models. It lets us set up environmental inputs to verify reactive probabilistic systems, which are not directly supported in Prism because of its closed-world assumption. The grammar of RoboCertProb is flexible. We implement RoboCertProb in RoboTool for specifying properties and automatically generating Prism properties. We have used it to analyse the behaviour of software controllers for two real-world robots: an industrial painting robot and an agricultural robot for treating plants with UV light.
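For a flavour of the kind of property involved, a PCTL-style bounded reachability requirement might read as follows; the proposition plantTreated and the numeric bounds are hypothetical, and the Prism-style rendering P>=0.95 [ F<=120 "plantTreated" ] is only indicative of the target notation:

\[
P_{\ge 0.95}\,\big[\, \mathrm{F}^{\le 120}\ \mathit{plantTreated} \,\big]
\]

Informally: with probability at least 0.95, the plant is treated within 120 time steps.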
11:45 - 12:00 Discussion
12:00 - 13:30 Lunch
13:30 - 15:00 Session 7: Robotic Applications
Chair: Claudio Menghi
13:30 - 13:55 From chaos to structure: Unifying software architecture of robots
Lenka Mudrich (University of Leeds, UK)
In the rapidly evolving field of robotics, software architecture plays a pivotal role in enabling intelligent behaviour and efficient operation. However, the increasing complexity and heterogeneity of robot systems have led to a proliferation of diverse software architectures, resulting in fragmented development processes and limited interoperability. This talk explores the critical need for software architecture unification in the realm of robotics. I will examine the concept of software architecture unification, which seeks to establish a common framework and set of principles for designing and implementing software architectures in robots. Furthermore, I will discuss the role of modularisation, standardisation, and open-source initiatives in fostering a unified ecosystem. Overall, this talk aims to shed light on the importance of software architecture unification for robots, exploring its benefits, challenges, and potential solutions. By embracing a unified approach, the robotics community can propel the field forward, enabling faster innovation, wider adoption, and more robust and versatile robot systems.
13:55 - 14:20 Runtime reconfiguration of robot control systems
Davide Brugali (University of Bergamo, Italy)
14:20 - 14:45 Imitation Learning for creating 3D twin models using a manipulator robot arm
Juan Antonio Pinera Garcia (Gran Sasso Science Institute, Italy)
Obtaining a precise description of the materials used in ancient artworks has proven to be a challenging task, and while progress has been made in employing X-ray technology to create detailed twin models of two-dimensional paintings, 3D objects remain a considerable challenge. This work describes the idea of using imitation learning techniques to allow a human-in-the-loop algorithm to create high-precision models of three-dimensional artworks. Although the technology exists to determine with high precision the atomic composition of very small segments of an object, the algorithm must automatically detect the different object shapes and perform efficient motion planning in order to cover the object entirely at a desirable distance.
14:45 - 15:00 Discussion
15:00 - 15:30 Coffee/Tea Break
15:30 - 17:00 Break-out discussions, planning collaboration
Chair:
18:30 Dinner at Strand Palace Hotel (Google Maps link, directions from the King's Building, directions from the Stamford Apartments)

Thursday, September 7

09:00 - 10:00 Causal Temporal Reasoning for Markov Decision Processes
Nicola Paoletti (King's College London, UK)
10:00 - 10:30 Coffee/Tea Break
10:30 - 12:30 Session 8: Robotic Applications
Chair:
10:30 - 10:55 Ethical-aware robots
Patrizio Pelliccione (Gran Sasso Science Institute, Italy)
10:55 - 11:20 Enabling Adaptation at Runtime: A Knowledge-Based Framework for Task and Architecture Co-Adaptation in Autonomous Robots
Gustavo Rezende Silva (TU Delft, The Netherlands)
While operating, robots are subject to uncertainties arising both from internal system factors (e.g., sensor failures) and from external environment variables (e.g., different terrains). To overcome these uncertainties, robots must be able to adapt their mission execution at runtime (e.g., change the task being performed) as well as their software architecture (e.g., deactivate and activate software components). This presentation introduces a knowledge-based self-adaptation framework designed to enable robots to adapt their mission and architecture at runtime. The proposed framework builds upon the Metacontrol framework, offering a novel contribution in the form of a new metamodel for the knowledge base. The proposed metamodel aims to capture essential information about the robotic system's architecture, encompassing all available configuration variants, relationships and constraints among configurations, attributes of the system and environment, and interdependencies between the robot's tasks and configurations. The metamodel is modeled as a hypergraph, enabling a more intuitive representation of these relationships. To guide the design phase and enable evaluation, we defined a set of requirements for robotic self-adaptive systems. Then, in the evaluation step, we verify whether the framework fulfills the defined requirements and use this assessment as a criterion for comparing our approach against other existing self-adaptation frameworks. Furthermore, we showcase the applicability of our framework through its implementation and application to a use case involving an underwater robot deployed for pipeline inspection.
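The hypergraph idea can be pictured with the small Python sketch below, in which a single hyperedge relates a task, a configuration, and the components and environment conditions it depends on; the node names and the query are invented for illustration and do not reproduce the actual metamodel.

```python
# Illustrative only (not the actual metamodel): a tiny hypergraph-style knowledge
# base in which each hyperedge connects an arbitrary set of nodes and carries a meaning.
hyperedges = [
    ({"task:inspect_pipeline", "config:thruster_full", "component:camera"},
     "task achievable with this configuration"),
    ({"task:inspect_pipeline", "config:thruster_low_power", "component:sonar",
      "env:low_visibility"},
     "fallback configuration under low visibility"),
]

def configurations_for(task, context):
    """Return the configurations of hyperedges that cover the task and contain
    every condition in the current context (a crude stand-in for the framework's
    Metacontrol-style reasoning over the knowledge base)."""
    options = []
    for members, _meaning in hyperedges:
        if task in members and context <= members:
            options.extend(n for n in members if n.startswith("config:"))
    return options

# Under low visibility, only the sonar-based low-power configuration qualifies.
print(configurations_for("task:inspect_pipeline", context={"env:low_visibility"}))
```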
11:20 - 11:45 Teaching autonomous vehicle engineering
Thorsten Berger (Ruhr University Bochum, Germany)
11:45 - 12:10 Robotics Software Engineering in the SESAME H2020 Project
Simos Gerasimou (University of York, UK)
Within the European project SESAME, we develop a model-based approach supporting the systematic engineering of dependable learning-enabled robotic systems. In this talk, we will give an overview of recent advances made by the project team to provide assurances for the trustworthy, robust and explainable operation of robotic systems, focusing particularly on techniques for testing the robotic systems and their AI-based components. The talk will be complemented by an overview of the diverse SESAME use cases from the domains of healthcare, manufacturing, agriculture and emergency response.
12:10 - 12:30 Discussion and Closing
12:30 - 14:00 Lunch