Robotics and Software Engineering 2024 (RSE’24) is the third edition of an unofficial, annual meeting that promotes discussion and interaction between researchers. The main objective is to strengthen cooperation and dialogue by bringing together the Robotics and Software Engineering communities. It is an ideal opportunity to exchange ideas on topics of interest, including (but not limited to): development of systems, software architecture, dependability, software reuse, software validation and verification, robot modeling, robot control architectures, autonomous systems, and multi-robot systems.
The format of the RSE’24 meeting consists of short presentations from each participant, with ample time for discussion. Researchers at all career stages are welcome to present their research, give and receive feedback from peers, engage in discussions, and establish new collaborations. RSE is not a publication venue: participants can present previously published work as well as unpublished work, including early ideas and work in progress, be it a published paper, a research idea, or a master's thesis. The main point is to encourage discussions, to give and receive feedback, and to build a network for new collaborations.
RSE’24 will take place at the University of Bremen in Germany, from September 2nd until September 5th, 2024.
In this edition, we plan for a four-day event, from Monday at 11:00 to Thursday at 15:00.
RSE’24 does not require a paper submission; however, an abstract describing the research to be presented is required to apply.
The program is advertised in advance but remains flexible: a talk may be moved up if it fits well as a follow-up to another talk. Everyone should therefore be prepared to give their talk at any time, which adds to the synergy of the event.
Application
Please note that every participant is expected to give a talk; this is the rule of the event.
Important Dates
Meeting
September 2nd-5th
Application
June 1st (extended deadline: June 22nd)
Announcement: Applications are now closed. If you have any questions, please send an email to the organizers.
Due to organizational reasons, there is a limit of 50 participants. We will notify the selected participants shortly after the deadline. A registration fee of €210 per participant is required to confirm your participation. The registration fee includes lunch, coffee and snacks during the coffee breaks, the social activity, and the RSE dinner.
Schedule
The tentative schedule runs from Monday, September 2nd at 11:00 until Thursday, September 5th at 15:00. The program includes keynote talks, tours of industrial and research labs in and around Bremen, and social activities in the evenings.
Depending on the number of participants, we might be able to offer additional activities or discussions on Friday, September 6th.
Overview
Monday 2nd
Check in
Lunch
Welcome talk
Sessions & Discussions
Coffee Break
Sessions & Discussions
Round Table Discussions
Tuesday 3rd
Sessions & Discussions
Coffee Break
Sessions & Discussions
Lunch
Sessions & Discussions
Coffee Break
Keynote
Guided Visit
Round Table Discussions
Wednesday 4th
Keynote
Coffee Break
Sessions & Discussions
Lunch
Sessions & Discussions
Coffee Break
Sessions & Discussions
Round Table Discussions
Social Event
Dinner
Thursday 5th
Sessions & Discussions
Coffee Break
Sessions & Discussions
Closing Remarks
Lunch
Schedule
Monday 2nd
-
Check in
-
Lunch
-
Welcome talk
Nico Hochgeschwender
Professor of Computer Science,
University of Bremen
Bio: Nico Hochgeschwender is Full Professor of Software Engineering for Cognitive Robots and Systems at the University of Bremen. His research interests lie at the intersection of AI-enabled Robotics and Software Engineering, with a focus on assuring the dependability, transparency, and explainability of robotics and autonomous systems, benchmarking and performance evaluation, and domain-specific modelling and languages for robotics. He holds a PhD from the University of Luxembourg, is Co-Chair of the IEEE RAS Technical Committee on Software Engineering for Robotics and Automation, and is currently PI of the EU-funded research project SOPRANO (Socially-Acceptable and Trustworthy Human-Robot Teaming for Agile Industries).
Software Engineering for Robotics: Research Challenges and Opportunities
Abstract: This talk will start with a broad overview of our research on software engineering for cognitive robots. I will review challenges and insights gained from interacting with industrial robotics partners in various research projects. I argue that we need to fundamentally rethink how we develop, maintain, test, and deploy robotic software by considering scale from day one, that is, by anticipating the change, openness, and uncertainty of robotic applications in our overall development process.
-
Sessions & Discussions
Gianluca Filippone, Gran Sasso Science Institute
Towards the realization of service-oriented multi-robot systems
Abstract: In recent years, multi-robot systems (MRSs) have gained interest in many areas as versatile means for addressing complex tasks in various domains. MRSs realized through service robots that cooperatively perform useful tasks will increasingly operate within society and collaborate with humans to support everyday life tasks in various domains. However, it will soon be impractical to have robots that are each specialized in a specific task; instead, robots will be multi-purpose, i.e., they will have the capabilities to accomplish various tasks and their mission will be specified only after production, in a subsequent “programming” phase.
To realize cooperative missions, robots, possibly from different vendors, are required to interoperate in such a way that they can be provided with a mission specification and communicate with each other through interfaces that suitably "hide" the internal robot technology stack. Moreover, by exploiting the interface layer of the robots, missions can be defined by users who are not domain experts, since the robots' skills are exposed at a higher abstraction level.
Given this setting, solving this "interoperability problem" of robots is a challenging task with many similarities to the service interoperability problems that have led to service-oriented architectures.
Abstracting robots' skills and exposing them as REST APIs will allow robots to interoperate in a service-oriented fashion and system designers to compose robots within missions as in service-oriented architectures.
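As a rough illustration of the service-oriented idea above, the sketch below exposes a robot skill via a REST endpoint; the framework (Flask), endpoint paths, and skill names are illustrative assumptions, not part of the presented work.

```python
# Minimal, hypothetical sketch: exposing robot skills as REST resources so that a
# mission layer can compose robots without knowing their internal technology stack.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/skills", methods=["GET"])
def list_skills():
    # Advertise the robot's capabilities at a vendor-neutral abstraction level.
    return jsonify(["move_to", "pick", "place"])

@app.route("/skills/move_to", methods=["POST"])
def move_to():
    goal = request.get_json()  # e.g. {"x": 1.0, "y": 2.5}
    # Here the vendor-specific stack (ROS actions, proprietary SDK, ...) would be
    # invoked; the REST interface hides those details from the mission designer.
    return jsonify({"status": "accepted", "goal": goal}), 202

if __name__ == "__main__":
    app.run(port=8080)
```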
Ruichao Wu, Fraunhofer IPA
Enhancing Robotic Systems: Elevating From Reliability To Robustness And Resilience In The Lifecycle Of Software Fault Management
Abstract: As robots are increasingly deployed across diverse scenarios, they require advanced technology and a broad range of hardware to meet the demands of applications. However, as robot systems grow more complex, the likelihood of failure increases. The traditional focus on reliability in software fault management is no longer sufficient. This presentation explores the paradigm shift towards robustness and resilience in managing software faults throughout the lifecycle of robotic systems.
To achieve that, I propose a framework that includes two synergistic processes: robot self-adaptation and human evolution. The framework performs monitoring and anomaly awareness to identify issues and perform cause analysis. If the cause can be autonomously addressed by the robot, it enters the self-adaptation process, applying self-recovery strategies to resolve the issue. This process enables the robot system to recover from failures, minimizing downtime and mitigating adverse impacts, thereby achieving resilience. Conversely, if the fault requires human intervention, it transitions to the evolution process, where developers resolve the problem based on the cause analysis provided by the framework, leading to software evolution and improved robustness.
I will also dive into the methodology behind the proposed framework by modelling the representation of mutual relationships among entities in the robot system. Attendees will gain insights into how these two interconnected processes can lead to more reliable, robust, and resilient robotic systems.
Lastly, I welcome discussion and feedback since this research builds on the concept of my PhD topic.
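A minimal control-flow sketch of the two processes described above (illustrative names only, not the speaker's framework):

```python
# Hypothetical sketch: route an anomaly either to robot self-adaptation (resilience)
# or to the human-driven evolution process (robustness), based on cause analysis.
def analyse_cause(anomaly):
    # Stand-in for the framework's cause-analysis step.
    return anomaly.get("probable_cause", "unknown")

def manage_fault(anomaly, recovery_strategies, notify_developers):
    cause = analyse_cause(anomaly)
    if cause in recovery_strategies:
        recovery_strategies[cause]()      # self-adaptation: apply a self-recovery strategy
        return "recovered"
    notify_developers(cause)              # evolution: hand the issue over to developers
    return "escalated"

# Example usage
strategies = {"sensor_timeout": lambda: print("restarting sensor driver")}
print(manage_fault({"probable_cause": "sensor_timeout"}, strategies, print))
```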
-
Coffee Break
-
Sessions & Discussions
Anna-Maria Meer, Fraunhofer IPA
Enhancing Mobile Robot Safety and Interaction in Public Spaces: A Testing Framework to Bridge the Gap Between Industrial Environments and Urban Deployments
Abstract: With advancements in mobile robot technology enabling deployment in public spaces, it has become crucial to ensure performance, safety, and seamless Human-Robot Interaction (HRI). Although mobile robots are frequently used in controlled industrial environments, their introduction to public settings remains uncommon due to complex, unforeseen scenarios and ambiguous public perceptions. To address these challenges, gain insights into the factors driving human-robot interaction quality, and derive future guidelines for robot providers and users for the successful use of robots in public spaces, we propose a modular testing environment comprising both hardware blocks and software modules. To ensure adaptability to various environments, test cases can be composed through an interactive GUI that allows the input of different categories (e.g., sensors, test standards, conditions) to create customized test scenarios. A set of modular hardware building blocks and 3D tracking equipment allows matching physical test scenarios to be constructed and interaction situations to be analysed. To address design and behavioral aspects of the robots, the GUI also includes recommendations on these, which can enhance the public perception and perceived safety of pedestrians. The test environment is developed based on the expertise of the consortium members of the "Roboter Kompetenz- und Interaktionscluster RoKit" project [1]. This interdisciplinary project aims to enhance the deployability of robots in public spaces by addressing various challenges, including legal, ethical, technical, economic, and safety aspects. While we first investigate the topics of perceived safety and norm-based test cases, our long-term goal is to create a test environment that can guide robot manufacturers and users to deploy safe, meaningful, and well-received robots in public spaces.
Sam Wiest, University of Bremen
Simplifying Testing through Automated and Varied Environment Generation
Abstract: The testing and validation of robots in simulation is often limited due to the challenge inherent in creating simulated environments and specifying the robot's tasks. This challenge is compounded by the broad variance of scenarios that the robot may have to face, forcing robot developers to consider variants of environments and tasks, which in turn increases the effort necessary for robust testing.
The Floorplan DSL aims to simplify this process by enabling developers to automatically generate varied simulated environments, to which task specifications are applied to reduce the effort of simulated testing. The composable and modular nature of the Floorplan DSL allows developers to include both static and dynamic features, building upon simple scenarios to create more complex ones. Related Gazebo plugins enable developers to easily apply and test on these features.
This tutorial will demonstrate how the Floorplan DSL and its plugins can be used to perform automated navigation testing, using a digital twin of the SECORO lab. Starting with a simple floorplan, static and dynamic obstacles will be added to modify the complexity of the scenario.
Future work on the Floorplan DSL includes additional Gazebo plugins that implement new door types and behavior.
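As a loose, hypothetical illustration of the underlying idea (plain Python, not the actual Floorplan DSL syntax): many environment variants can be derived from one base specification by composing static and dynamic features.

```python
# Hypothetical sketch: generate varied test environments from a single base floorplan.
import itertools
import json

base_floorplan = {"rooms": ["lab", "corridor"], "doors": [["lab", "corridor"]]}
static_variants = [{"obstacles": n} for n in (0, 3, 6)]        # e.g. boxes in the corridor
dynamic_variants = [{"moving_people": n} for n in (0, 2)]      # e.g. pedestrians walking

def compose(base, static, dynamic):
    scenario = dict(base)
    scenario.update(static)
    scenario.update(dynamic)
    return scenario

for i, (s, d) in enumerate(itertools.product(static_variants, dynamic_variants)):
    with open(f"scenario_{i}.json", "w") as f:
        json.dump(compose(base_floorplan, s, d), f, indent=2)
```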
Argentina Ortega, University of Bremen
Composable and Executable Scenarios for Simulation-Based Testing of Mobile Robots
Abstract: A few mobile robot developers already test their software on simulated robots in virtual environments or sceneries. However, the majority still shy away from simulation-based test campaigns because it remains challenging to specify and execute suitable testing scenarios, that is, models of the environment and the robots' tasks. Through developer interviews, we identified that managing the enormous variability of testing scenarios is a major barrier to the application of simulation-based testing in robotics. Furthermore, traditional CAD or 3D-modelling tools such as SolidWorks, 3ds Max, or Blender are not suitable for specifying sceneries that vary significantly and serve different testing objectives. For some testing campaigns, it is required that the scenery replicates the dynamic (e.g., opening doors) and static features of real-world environments, whereas for others, simplified scenery is sufficient. Similarly, the task and mission specifications used for simulation-based testing range from simple point-to-point navigation tasks to more elaborate tasks that require advanced deliberation and decision-making. We propose the concept of composable and executable scenarios and associated tooling to support developers in specifying, reusing, and executing scenarios for the simulation-based testing of robotic systems. Our approach differs from traditional approaches in that it offers a means of creating scenarios that allow the addition of new semantics (e.g., dynamic elements such as doors or varying task specifications) to existing models without altering them. Thus, we can systematically construct richer scenarios that remain manageable.
We evaluated our approach in a small simulation-based testing campaign, with scenarios defined around the navigation stack of a mobile robot. The scenarios gradually increased in complexity, composing new features into the scenery of previous scenarios. Our evaluation demonstrated how our approach can facilitate the reuse of models and revealed the presence of errors in the configuration of the publicly available navigation stack of our SUT, which had gone unnoticed despite its frequent use.
-
Round Table Discussions
Tuesday 3rd
-
Sessions & Discussions
Marie Farrell, University of Manchester
Robotics: A New Mission for FRET Requirements
Abstract: Mobile robots are used to support planetary exploration and operations in safety-critical environments such as nuclear plants. Central to the development of mobile robots is the specification of complex required behaviors known as missions. This talk will summarise how we use NASA’s Formal Requirements Elicitation Tool (FRET) to specify functional robotic mission requirements.
Marco Autili, University of L'Aquila
RobEthiChor: automated context-aware ethics-based negotiation in the robot domain
Abstract: The presence of autonomous decision-making systems is growing at a fast pace and it is impacting many aspects of our daily life. Designed to learn and act independently, they are capable of autonomous decision-making without human assistance. A major challenge for their successful deployment in our lives is integrating human-like ethical values into their decision-making process. Introducing ethics in the decision-making process raises a new challenge: how may systems interact should their ethical preferences differ? The absence of universal ethics implies that they need to reach an ethical agreement. To address this challenge, the RobEthiChor architecture supports a novel context-aware ethics-based negotiation approach in which autonomous robots utilize ethical profiles together with contextual factors and user status conditions to control their autonomy while collaboratively negotiating to reach an ethical agreement that satisfies the ethical beliefs of all parties involved. In this talk, I describe the RobEthiChor architecture and its full implementation in the robot domain, illustrating its practical applicability to showcase its relevance in real-world scenarios.
-
Coffee Break
-
Sessions & Discussions
Kishan Ravindra Sawant, University of Bremen
Knowledge-based Adaptation of Robotic Control and Estimation Architectures in the Presence of Uncertainty
Abstract: Uncertainty in the knowledge about the environment is often unavoidable, necessitating robot controllers that adapt to varying or partially unknown environment conditions. For instance, a robot lifting an object of unknown weight should adjust the lifting force by estimating the physical properties of the object. Similarly, if a joint motor fails, the robot should still be able to complete the task. The control architecture facilitates this by monitoring expected behavior, modifying strategies, estimating uncertainties, and adapting accordingly.
General-purpose languages often require explicit formulation of adaptation rules and lack support for interpreting decisions due to limited semantic understanding of functions and variables. Although existing DSLs and tool-chains offer functionalities like component composition, automatic controller parameter tuning, and code generation, they do not fully leverage the advantages of modelling for adapting the control architecture based on abstract task descriptions.
This work explores strategies of coordination and reconfiguration of control architecture by integrating controllers with monitors and estimation components using model-driven development. We present metamodels supporting various adaptation policies and demonstrate their concrete implementation in a box-lifting use case.
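A minimal sketch of the box-lifting adaptation idea, with hypothetical numbers and interfaces (not the presented metamodels): the payload is estimated from a force measurement and the lifting controller is reconfigured accordingly.

```python
# Hypothetical sketch: estimate an unknown payload and adapt the commanded lift force.
GRAVITY = 9.81  # m/s^2

def estimate_payload_mass(measured_force_n: float, arm_mass_kg: float) -> float:
    """Estimate the object's mass from the force needed to hold arm plus object still."""
    return max(measured_force_n / GRAVITY - arm_mass_kg, 0.0)

def adapt_lift_force(measured_force_n: float, arm_mass_kg: float, margin: float = 1.2) -> float:
    """Reconfigure the controller's feedforward lifting force with a safety margin."""
    payload = estimate_payload_mass(measured_force_n, arm_mass_kg)
    return (arm_mass_kg + payload) * GRAVITY * margin

# Example: the wrist force sensor reads 34 N while holding a 2 kg arm plus an unknown box.
print(adapt_lift_force(measured_force_n=34.0, arm_mass_kg=2.0))
```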
Juliane Päßler, University of Oslo
Template Decision Diagrams for Meta Control
Abstract: Decision tree classifiers (DTs) provide an effective machine learning model, well known for its intuitive interpretability. However, they still miss opportunities well established in software engineering that could further improve their explainability: separation of concerns, encapsulation, and reuse of behaviors. To enable these concepts, we introduce templates in decision diagrams (DDs) as an extension of multi-valued DDs. Templates allow common decision-making patterns to be encapsulated and reused. We use a case study from the autonomous underwater robotics domain to illustrate the benefits of template DDs for modeling and explaining meta controllers, i.e., hierarchical control structures with underspecified entities. Further, we implement a template-generating refactoring method for DTs. Our evaluation on standard controller benchmarks shows that template DDs can improve the explainability of controller DTs by reducing their sizes by more than one order of magnitude.
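A toy sketch of the template idea in plain Python (illustrative only, not the paper's formalism): a common decision pattern is encapsulated once and reused by several controllers instead of being duplicated as subtrees.

```python
# Hypothetical sketch: a reusable "template" decision pattern shared by two controllers.
def battery_template(state, on_ok):
    """Encapsulated battery-handling pattern, reused wherever battery checks are needed."""
    if state["battery"] < 0.1:
        return "abort_and_dock"
    if state["battery"] < 0.3:
        return "return_to_base"
    return on_ok(state)

def navigation_controller(state):
    return battery_template(state, lambda s: "follow_waypoints")

def manipulation_controller(state):
    return battery_template(
        state, lambda s: "grasp_object" if s["object_visible"] else "search_object"
    )

print(navigation_controller({"battery": 0.8}))                            # follow_waypoints
print(manipulation_controller({"battery": 0.2, "object_visible": True}))  # return_to_base
```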
Maksym Figat, Warsaw University of Technology
Synthesis of robot system controllers based on formal specification
Abstract: Despite the availability of numerous tools, developing controllers for robotic systems remains a significant challenge, requiring expertise in software engineering, robotics, control, and artificial intelligence. To reduce the complexity of robotic systems, the field has moved from object-oriented to component-based approaches. However, the latter often neglect the formal specification of the system, resulting in architectures that are difficult to reuse. The model-driven approach was introduced as a solution to this problem.
During the seminar, Dr. Maksym Figat will present insights from his Ph.D. thesis (awarded by the Prime Minister of Poland), which focused on the automatic generation of controllers for robotic systems from a formal specification. The essence of his research was the development of a methodology for robotic systems based on a parameterized meta-model. This meta-model, when provided with appropriate parameters expressed in the Robotic System Specification Language (RSSL), is transformed into a specific system model. This methodology aims to establish a universal specification method for robotic systems using the Embodied Agent Approach (EAA) and Model-Driven Engineering (MDE), integrating the structure and activities of the system (expressed using Petri nets) into the architecture of robotic systems. This approach supports different stages of development, including verification and code generation, while balancing the level of detail in specifications to ensure cost-effectiveness and provide clear implementation guidance.
The seminar will further discuss design guidelines for robotic systems, emphasizing a "black box" approach to balance design flexibility with constraints, thereby improving the design process. It will show how this methodology can be extended and explore its potential development into a framework for the design of safety-critical robotic systems. The presentation will summarize the research methodology, the results, and their implications for robotic system design.
Nils Chur, Ruhr University Bochum
A Study of Controller Engineering in Robotics Software
Abstract: Embedded systems such as robots, coupling software components and physical hardware, are often safety-critical in, e.g., transportation, healthcare and manufacturing. A robot's behavior depends heavily on its controller, which bridges between software and hardware. Whereas control theory provides principles to guarantee stability and robustness when designing controllers under uncertainties, the design and testing of controllers in tools such as Matlab/Simulink do not account for the challenges of implementation. This leaves a notable gap in research: it is an open question how controllers are actually implemented and whether the guarantees offered by control theory remain valid for these implementations.
To start filling this gap, we study real-world controller implementations to understand their characteristics. We examine controller implementations in ROS2 (Robot Operating System) repositories on GitHub, provide an overview of common controller applications, analyze the implementations in terms of discretization, and gain insights into the verification and validation techniques used.
Our study reveals that many applications in open-source repositories fall into standardized control design tasks. The majority of control laws are based on continuous systems, and the manner of handling discretization appears to be "ad hoc". Testing practices found in our dataset are insufficient: less than 50% conduct any testing at all, whether code-based or simulation-based.
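To make the discretization issue concrete, here is a textbook sketch (not taken from the studied repositories) of a continuous-time PID control law implemented with an explicit, fixed sample time:

```python
# Illustrative sketch: explicit discretization of a PID controller with a fixed sample time.
class DiscretePID:
    def __init__(self, kp: float, ki: float, kd: float, dt: float):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint: float, measurement: float) -> float:
        error = setpoint - measurement
        self.integral += error * self.dt                   # forward-Euler integral term
        derivative = (error - self.prev_error) / self.dt   # backward-difference derivative
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: a velocity controller running at 50 Hz (dt = 0.02 s).
controller = DiscretePID(kp=1.2, ki=0.4, kd=0.05, dt=0.02)
command = controller.update(setpoint=1.0, measurement=0.8)
```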
-
Lunch
-
Sessions & Discussions
Davide Brugali, University of Bergamo
Future Directions in Software Engineering for Autonomous Robots: A Starting Point for Trustworthiness
Abstract: We aim at raising awareness of software engineering for robotics with the objective of building bridges between the communities of software engineering and robotics. Specifically, we suggest a range of possible directions with new challenges for robot software engineering to be explored. We base these on recent studies on the state of the practice in software development for robotics and on the discussion among the participants of the 2023 IEEE International Conference on Robotics and Automation (ICRA 23) Workshop on Robot Software Architectures.
Patrizio Pelliccione, Gran Sasso Science Institute
Democratising the programming and use of robots
Abstract: In this talk, I will focus on the democratization of the programming and use of autonomous systems in everyday-life scenarios. Specifically, I will describe our experience and current projects in making robots accessible to domain experts without expertise in robotics.
-
Coffee Break
-
Keynote
Malte Langosz
Team Lead Software Backbone,
German Research Center for Artificial Intelligence
Bio: Dr. Langosz studied computer science at the University of Bremen. He joined the DFKI (German Research Center for Artificial Intelligence) in 2007. His work focused on software development for robotic simulations, motion controllers for legged robots, learning frameworks, and evolutionary algorithms. His research focus was on utilizing evolutionary methods to evolve and optimize robotic kinematic structures and their controllers. In 2019 he finished his Ph.D. with the title “Evolutionary Robotics”. Since then, he has been the lead of the software backbone team, whose goal is to collect, maintain, and provide software solutions developed in research projects.
Software development for kinematically complex, autonomous robots at DFKI-RIC
Abstract: The presentation will introduce the Robotics Innovation Center (RIC) of the German Research Center for Artificial Intelligence (DFKI). It will focus on challenges that arise in the software development of kinematically complex robots in a scientific environment. On the one hand, the presentation will cover aspects of the low-level control responsible for the motion control of robotic systems. On the other hand, it will introduce the high-level robotic frameworks X-Rock and KiMMI SF. Finally, concepts that are implemented at the RIC to ensure software quality and robustness in a scientific environment are presented.
-
Guided Visit
Tour of the German Research Center for Artificial Intelligence (Deutsches Forschungszentrum für Künstliche Intelligenz - DFKI)
-
Round Table Discussions
Wednesday 4th
-
Keynote
Georg Bartels
Co-Founder and CTO,
Ubica Robotics
Bio: Georg Bartels studied computer engineering at the Technical University of Berlin and Shanghai Jiao Tong University. In 2012, he joined Prof. Michael Beetz at the Technical University of Munich to pursue a Ph.D. and followed Prof. Beetz when he moved his research group to the University of Bremen. Bartels' doctoral research focused on software development for autonomous, mobile service robots, specifically on the synergetic combination of knowledge-based motion and action representations with methods for constraint-based robot motion control. In 2019, he left research to co-found Ubica Robotics GmbH, which develops autonomous scanning robots for brick-and-mortar retail stores. Since then, he has been leading Ubica’s product development as CTO, with a strong focus on software development.
Working in the dark – On taking the prototype of an autonomous mobile robot to productive mass deployment
Abstract: This presentation will recall the journey that the development team of Ubica Robotics took when it endeavored to turn the research prototype of a robot developed during an EU-funded research project into a mass-deployed robotic product. The talk will introduce the company Ubica Robotics and its autonomous mobile shelf-scanning robots for brick-and-mortar retail stores, identifying key and unique technical development challenges of this particular robotic product. The core of the presentation will describe the CI/CD and monitoring processes and services that Ubica adopted to reliably ship new iterations of its software to its fleet of robots in the field. As a conclusion, several key takeaways will be formulated as brief lessons learned.
-
Coffee Break
-
Sessions & Discussions
Jude Gyimah, Ruhr University Bochum
featX: Controlled Robotic Configurations with Flexible Feature Binding
Abstract: ROS (Robot Operating System) is the de facto middleware for implementing robotic systems. Such systems often need to be customized, or configured, for different execution environments, hardware, or non-functional properties, such as energy consumption.
Configuration options, a.k.a. features, can control a wide range of system functionalities. They can enable or disable, as well as calibrate (e.g., tweak or fine-tune), different parts of the software, ranging from whole subsystems over components to lines of code. Many features, i.e., their implementations in code, are also scattered across the codebase. Unfortunately, features often have intricate dependencies, which need to be managed. While simple configuration mechanisms exist in ROS, they are far from the state of the art in configuration mechanisms, which centrally manage large configuration spaces supported by intelligent tooling. In addition, robotic systems require the consistent management of bindings. For instance, dynamic components can change at runtime, while static ones can only be bound once, leading to intricate dependencies that need to be assured to avoid undefined situations. We present the configuration-management system featX, which extends feature models, the state of the art in software configuration, with support for flexible feature binding. We contribute a novel feature-modeling language, a configurator, and tooling integrated with ROS.
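A small, hypothetical sketch of the binding-time issue (illustrative data model and rule, not featX's actual language or API): statically bound features should not depend on features that are only bound dynamically.

```python
# Hypothetical sketch: features with binding times and a simple consistency check.
FEATURES = {
    "slam":         {"binding": "static",  "requires": ["lidar_driver"]},
    "lidar_driver": {"binding": "static",  "requires": []},
    "energy_saver": {"binding": "dynamic", "requires": []},
    "night_mode":   {"binding": "static",  "requires": ["energy_saver"]},  # inconsistent
}

def check_binding_consistency(features):
    problems = []
    for name, spec in features.items():
        for dep in spec["requires"]:
            if spec["binding"] == "static" and features[dep]["binding"] == "dynamic":
                problems.append(f"{name} (static) depends on {dep} (dynamic)")
    return problems

print(check_binding_consistency(FEATURES))
```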
Momina Rizwan, Lund University
Integrating Recovery Strategies in Autonomous Robots: Challenges and Research Insights
Abstract: Ensuring the safety of autonomous robots is a challenging task. Runtime safety monitors help keep robots safe by monitoring externally observable software or hardware properties, such as sensor values, and taking actions to keep these values within a safe range. Inspired by DeROS, we developed a domain-specific language, ROSSMARie, for writing safety specifications for robots from which a safety monitor is generated. In DeROS, the response to any safety rule violation is to bring the system to a complete stop. We improve on this by enabling more elaborate recovery actions to ensure the robot maintains safety while staying operational. For instance, when the system approaches the boundary of the safety range, corrective actions, such as slowing down, switching controllers, or modifying the task plan, are initiated.
In this talk, I will present various robot scenarios to highlight the practical challenges of introducing recovery strategies. I will also discuss key research questions to determine the optimal level of information shared between a safety monitor and the high-level planner of the robot to affect autonomous robot decisions, ensuring both safety and mission success.
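A minimal sketch of graduated recovery (hypothetical thresholds and action names, not ROSSMARie syntax): the monitor reacts earlier and more gently as a monitored value approaches the safety boundary, instead of always stopping the robot.

```python
# Hypothetical sketch: graduated recovery actions for a monitored speed value.
def monitor_speed(speed: float, limit: float = 1.0) -> str:
    if speed >= limit:
        return "emergency_stop"             # hard safety boundary violated
    if speed >= 0.9 * limit:
        return "switch_to_safe_controller"  # stronger corrective action
    if speed >= 0.8 * limit:
        return "slow_down"                  # robot stays operational
    return "continue"

for v in (0.5, 0.85, 0.95, 1.1):
    print(v, monitor_speed(v))
```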
Discussion on Future Directions and Challenges
-
Lunch
-
Sessions & Discussions
Jan Sollmann, Ruhr-University Bochum
Introduction to Root Cause Analysis
Abstract: Root Cause Analysis (RCA) is a critical method for identifying and addressing the underlying causes of failures in systems. This talk will delve into the definitions and essential importance of RCA, focusing on its role in maintaining the reliability, safety, and efficiency of distributed systems such as robotics and industrial automation. We will trace the evolution of RCA methodologies, starting from Reiter's foundational Theory of Diagnosis to their adaptation across various engineering disciplines. Attendees will gain valuable insights into how RCA can be applied to solve issues, enhance system performance, and prevent future failures.
Vicente Romeiro de Moraes, Universidade de Brasilia
Event-Based Runtime Verification for Robotic Systems
Abstract: Robotic systems deal with many uncertainties which may hamper the operability and achievability of a mission. Robots need to be autonomous but also be able to reliably achieve an objective. In that sense, robotic systems offer great scenarios for requirement specification and verification. The challenge lies in precisely modelling adaptability within the system and verifying adaptable requirements at runtime. To that end, we propose an event-based approach to runtime verification for ROS2-based systems that aims to bridge the gap between dynamic requirements specification and runtime verification by monitoring the state of the system and creating discrete events which are sent to a mission controller.
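A minimal rclpy sketch of the event-based idea (topic names, thresholds, and event labels are illustrative assumptions): a monitor node observes system state and emits discrete events that a mission controller could consume.

```python
# Hypothetical sketch: a ROS 2 monitor node that turns state observations into discrete events.
import rclpy
from rclpy.node import Node
from std_msgs.msg import Float32, String

class EventMonitor(Node):
    def __init__(self):
        super().__init__("event_monitor")
        self.sub = self.create_subscription(Float32, "/battery_level", self.on_battery, 10)
        self.pub = self.create_publisher(String, "/mission_events", 10)
        self.reported = False

    def on_battery(self, msg):
        if msg.data < 0.2 and not self.reported:
            event = String()
            event.data = "LOW_BATTERY"      # discrete event sent to the mission controller
            self.pub.publish(event)
            self.reported = True

def main():
    rclpy.init()
    node = EventMonitor()
    rclpy.spin(node)
    rclpy.shutdown()

if __name__ == "__main__":
    main()
```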
Diana Carolina Benjumea Hernandez, University of Manchester
A Verifiable Architecture for Robotic Autonomous Systems in Critical Domains
Abstract: This research presents an innovative approach to ensuring the safe operation of Robotic Autonomous Systems (RAS) in critical domains, focusing specifically on the UK nuclear safety regime as a case study. Motivated by the growing need to deploy robots in hazardous environments to enhance safety and efficiency, we aim to develop a systematic approach that demonstrates the feasibility of RAS deployment while ensuring that operational risk is tolerable and As Low as Reasonably Practicable (ALARP). The proposed framework involves deriving formal properties from functional safety requirements for RAS, developing rules-based Safety Instrumented Functions (SIFs), implementing them in the Safety System, and formally verifying compliance with these derived safety properties. Integration of these SIFs into operational RAS is facilitated by a system architecture that ensures the independence and functionality of both the safety and control systems. This approach is demonstrated through a real-world application in the UK nuclear industry, underscoring the importance of safety assurance throughout the entire life cycle of RAS, from hazard analysis to operational deployment and beyond.
-
Coffee Break
-
Sessions & Discussions
Hannan Ejaz Keen, Xitaso Gmbh
Giving helicopters the ability to see: Robust Multi-Sensor Perception Systems for Safe Take-off and Landing of Aerial Vehicles
Abstract: The take-off and landing of helicopters in unknown terrain during rescue missions present significant challenges and risks for pilots, often leading to accidents or time-consuming assessments of safe landing sites. To address this issue, the ENGEL project (started in January 2024) aims to automate the helicopter's cruise-to-landing and take-off phases, which are currently performed manually. Central to this automation is a robust multi-sensor perception system integrated with a Human Machine Interface (HMI). This system recommends safe landing spots by evaluating factors such as terrain type and slope, weather conditions, and ground obstacles, while suggesting alternative routes to assist pilot decision-making. Safety is the core requirement of the system, demanding exceptional robustness in the perception components, which include cameras, LiDAR, and RADAR. This study investigates the challenges, design requirements, and open questions essential for developing an effective perception system for safe helicopter take-off and landing in various scenarios.
Yorick Sens, Ruhr University Bochum
ML-Engineering for Robotics: State of the Art and Challenges
Abstract: Recent advances in Deep Learning have unlocked a wide range of new opportunities across various software domains. Among these, robotic systems represent one of the most complex forms of such ML-enabled software systems, integrating multiple ML models and operating within safety-critical environments.
Despite the significant progress in ML technology, software developers often struggle to incorporate ML components into their software systems. This challenge arises because ML development originates from data science, relying on different tools and workflows compared to traditional Software Engineering.
In this talk I will discuss these challenges in detail and present some preliminary results from an empirical study of 3,000 open-source ML-enabled software systems on GitHub. In this study we extracted architectural patterns for the integration of ML models and gathered insights about code reuse and testing practices, as well as particularities for robotic systems.
Henriette Knopp, Ruhr-University Bochum
ML for Robotics: Model Training on a System Level
Abstract: The rise of machine learning (ML) has led to the widespread adoption of ML technology in a number of domains, including robotics. Robotics represents one of the most complex forms of applied ML, where multiple models are combined to solve complex tasks such as sensor fusion.
The development of new ML technology typically focuses on the performance of models on a training and test data set. These models are then integrated into a system. This is often difficult and results in challenges and anti-patterns, as the two fields, data science and traditional software engineering, employ different processes and tooling. One concrete challenge is how the performance of ML models can be evaluated, validated, and even trained or fine-tuned when integrated into such a robotics system.
This talk will explore how ML models can be integrated into a robotic system. I will focus on the training of ML models in a system context and how we can use knowledge from the application domain to improve ML for a specific task. This includes preliminary results and challenges.
-
Round Table Discussions
-
Social Event
Tour of the city of Bremen
Meeting point: Roland
Tram stop: Domsheide (Google Maps)
Thursday 5th
-
Sessions & Discussions
Engineering Security Features - Practices, Challenges, and Insights from Industry Professionals
Abstract: Cyber-physical systems process a substantial amount of sensitive data and operate in safety-critical environments. As such, they are valuable assets for adversaries who threaten the security of such systems. Implementing security features - functionalities that mitigate these threats - is of utmost importance to reduce potential costs and harm. Consequently, the research community invests substantial effort into developing new security technologies that can keep pace with adversarial actors who continuously discover and exploit new vulnerabilities. We therefore significantly rely on assumptions that are largely based on common sense or individual examples, since we currently lack an empirical understanding of how security features are engineered in practice. However, common sense or intuition is not enough to better target our research efforts and make them more effective in practice.
Developers need to systematically select, plan, design, implement, and especially maintain and evolve security features. While there have been plenty of studies on the use of libraries (e.g., APIs of cryptography libraries), surprisingly little is known about how developers engineer security features. How do they select what security feature to implement in a given system? What is the process to design and implement security features? What challenges arise and what must be done to overcome them?
We interviewed knowledgeable industrial professionals involved in the engineering process of security features, exploring how they select and engineer them in practice. We identified key challenges and practices they use to overcome them.
In this talk, I will dive into the practices and challenges voiced by practitioners to give an overview over the current state of the engineering of security features in the industry while highlighting differences identified in the engineering of cyber-physical systems.
Sara Pettinari, Gran Sasso Science Institute
Process-driven Development and Analysis of Multi-Robot SystemsAbstract: Multi-robot systems are emerging to perform complex tasks in different application domains, by supporting or automatizing human tasks. The behavioral workflow of these systems, referred to as mission, can be seen as a sequence of tasks enabling both robots’ actions and inter-robot interactions. Hence, considering robots’ behavior at a high level allows for conceptualizing their missions as process models. In the context of process models, Business Process Management (BPM) is the reference discipline that enables their usage to describe the overall workflow of a system, including its design, enactment, monitoring, analysis, and refinement.
Using process models in the robotics domain can leverage techniques from the BPM discipline. This talk will present a top-down approach, which involves designing processes to model robotic missions and facilitate their execution according to the planned sequence. Additionally, it will discuss a bottom-up approach, which uses data generated during robot operations to discover and analyze mission executions by applying process mining techniques to automatically extract insights related to multi-robot execution.
Matteo Morelli, CEA LIST
A tooled approach to programming and execution of skill-based robotic behaviorsAbstract: In the current practice, roboticists use manual programming to develop, integrate and orchestrate navigation, manipulation and vision capabilities (skills), which is tedious, error prone and implies maintenance efforts. This talk presents my group's research on a tooled approach to programming and execution of skill-based robotic behaviors. I will describe our tool Papyrus for Robotics (https://www.eclipse.org/papyrus/components/robotics/), an open-source, Eclipse-based, low-code environment that supports code generation for and reverse engineering of ROS 2 based software systems. In addition to the tool, I will discuss the current developments towards supporting a low-code approach for the design, deployment and re-configuration of software stacks bringing situation understanding and deliberation capabilities to future intelligent robots.
Thorsten Berger, Ruhr University Bochum
TBA
-
Coffee Break
-
Sessions & Discussions
Christoph Reichenbach, Lund University
Finding Launch-Time Bugs with EzSkiROS
Abstract: When we develop general-purpose robot software components, we rarely know the full context that they will execute in. This limits our ability to make predictions, including our ability to detect program bugs early. We propose a strategy for bug detection at launch time that can incorporate configuration and world-model information to identify bugs in (mostly) declarative parts of robot code, including code implemented in dynamic languages like Python. We have implemented this strategy in EzSkiROS, a checker tool that sits on top of the skill-based knowledge integration tool SkiROS2.
In this talk, I will present a number of design patterns that we have identified as part of EzSkiROS and discuss their capabilities and limitations, as well as practical considerations on robot software engineering and launch-time bug detection.
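A deliberately simplified, hypothetical sketch of launch-time checking (not EzSkiROS's actual API): declarative skill parameters are validated against configuration and world-model information before the robot starts executing.

```python
# Hypothetical sketch: validate a declarative skill description at launch time.
WORLD_MODEL_TYPES = {"Location", "Product", "Gripper"}  # assumed world-model classes

skill_declaration = {
    "name": "pick_product",
    "params": {"target": "Product", "source": "Shelf"},  # "Shelf" is not in the world model
}

def check_skill_at_launch(skill, known_types):
    errors = []
    for param, declared_type in skill["params"].items():
        if declared_type not in known_types:
            errors.append(
                f"{skill['name']}: parameter '{param}' uses unknown type '{declared_type}'"
            )
    return errors

for err in check_skill_at_launch(skill_declaration, WORLD_MODEL_TYPES):
    print("launch-time error:", err)
```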
Tobias John, University of Oslo
Mutation-Based Integration Testing of Knowledge Graph Applications
Abstract: Robots operating in complex domains rely on knowledge about this domain. One representation for this knowledge is knowledge graphs (KGs). We present a novel testing approach for the integration of control software with KGs.
As the KGs are expected to change during the run- and lifetime of the control software, we must ensure the robustness of the whole system w.r.t. changes in the KG. Starting with a single KG, we mutate its content and test the unchanged software with the original test oracle. To address the specific challenges of KGs, we introduce two additional concepts. First, as generic mutations on single triples are too fine-grained to reliably generate a different, consistent KG, we employ domain-specific mutation operators that manipulate subgraphs in a domain-adherent way. Second, we need to specify the parts of the knowledge that the control software relies on for correctness. We introduce the notion of a robustness mask as graph shapes that the KG must conform to. We evaluate our approach on two software applications from the robotics and simulation domains that tightly integrate with their respective KGs.
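A small rdflib sketch of the idea (domain, namespace, and mutation operator are illustrative assumptions, not the authors' tooling): a domain-specific mutation rewrites a subgraph consistently, and the unchanged test oracle is re-run on the mutant.

```python
# Hypothetical sketch: mutate a knowledge graph with a domain-specific operator and re-test.
from rdflib import Graph, Namespace, URIRef

EX = Namespace("http://example.org/lab#")

def load_kg() -> Graph:
    g = Graph()
    g.add((EX.shelf1, EX.hasLocation, EX.roomA))
    g.add((EX.cup1, EX.storedOn, EX.shelf1))
    return g

def mutate_move_furniture(g: Graph, item: URIRef, new_room: URIRef) -> Graph:
    """Domain-specific mutation: relocate a piece of furniture as a whole, consistently."""
    mutant = Graph()
    for triple in g:
        mutant.add(triple)
    mutant.remove((item, EX.hasLocation, None))
    mutant.add((item, EX.hasLocation, new_room))
    return mutant

def software_finds_cup(g: Graph) -> bool:
    """Stand-in for the control software under test: locate the cup via its shelf."""
    for _, _, shelf in g.triples((EX.cup1, EX.storedOn, None)):
        return (shelf, EX.hasLocation, None) in g
    return False

original = load_kg()
mutant = mutate_move_furniture(original, EX.shelf1, EX.roomB)
assert software_finds_cup(original)   # original test oracle ...
assert software_finds_cup(mutant)     # ... reused unchanged on the mutated KG
```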
Minh Nguyen, University of Bremen
Automated Behaviour-Driven Acceptance Testing of Robotic Systems
Abstract: The specification and validation process of robotics applications requires bridging the gap between the formulation of requirements and their systematic testing. This process frequently entails manual work known to be prone to errors, which becomes increasingly complex as the requirements, system design, and implementation evolve during development. To address this challenge systematically, we propose to extend behavior-driven development (BDD) as an effective method to define and verify acceptance criteria for robotic systems. To be effective in the robotics context, we employ domain-specific modeling and represent composable BDD models as knowledge graphs for robust model querying and manipulation, supporting the generation of executable testing models. For solution builders, a domain-specific language facilitates the efficient specification of robotic acceptance criteria. We discuss the potential for automated generation and execution of acceptance tests using a software architecture combining a BDD framework, Isaac Sim, and accompanying model transformations along the acceptance criteria for pick-and-place applications. This research paves the way for more rigorous and automated evaluation of robotic systems, ensuring they meet user-defined acceptance criteria.
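A minimal, hypothetical illustration of BDD-style acceptance criteria for a pick-and-place task using the behave framework (scenario text, step names, and the in-memory "simulator" are assumptions, not the presented models):

```python
# Hypothetical sketch: behave step definitions for a pick-and-place acceptance scenario.
#
# features/pick_and_place.feature (Gherkin):
#   Scenario: Pick an object and place it at the target
#     Given the object "cube_1" is on table "table_A"
#     When the robot executes the pick-and-place task
#     Then the object "cube_1" is on table "table_B"
from behave import given, when, then

@given('the object "{obj}" is on table "{table}"')
def step_setup(context, obj, table):
    context.world = {"cube_1": "table_A"}   # stand-in for a simulator / world model
    assert context.world[obj] == table

@when("the robot executes the pick-and-place task")
def step_execute(context):
    context.world["cube_1"] = "table_B"     # stand-in for running the robot skill

@then('the object "{obj}" is on table "{table}"')
def step_check(context, obj, table):
    assert context.world[obj] == table
```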
-
Closing Remarks
-
Lunch
Venue
Directions
The meeting will be held in the Mehrzweckhochhaus building, more commonly known as (and easier to pronounce) MZH, which is centrally located on the university campus.
You can download a map of the university here.
The sessions will take place in room MZH 1090, on the first floor of the MZH building.
Travel Information
Transportation
Plane
Bremen has an airport about 3.5km away from the city center.
Other nearby airports include Hamburg and Hannover.
Trains
You can reach Bremen’s Central Station (Bremen Hauptbahnhof in German) very easily by train. Check timetables and book tickets for long distance and regional trains at Deutsche Bahn.
Local transportation
Once in Bremen, you can move around the city via buses and trams.
Tickets and time tables are available on the VBN website or its mobile apps.
Bremen is a city well-known for its biking infrastructure! Bike sharing is available with companies like WK-Bike, or you can rent bikes by the hour, day, or week with MyFiets (website in German) right at the central station.
Deutschland Ticket
If you plan to arrive in Hamburg or Hannover, or are staying a few days before or after the meeting, you should consider buying a Deutschland Ticket, which will allow you to use local transportation and regional trains all over Germany.
Accommodation
Hotels near the university:
7 things, starting at 89€ double/queen room, approx. 5min walking from the university.
You can also choose a hotel near the city centre. The meeting venue is easily reachable with tram line 6 in about 20 minutes.
Some hotels include:
Motel One Bremen, starting at 89€ double/queen room, approx. 20min by public transport and 15min by bike
ibis Bremen City, starting at 149€ double/queen room, approx. 20min by public transport and 15min by bike.
Select Hotel City Bremen, starting at 93€ single room, approx. 20min by public transport and 16min by bike
InterCityHotel, starting at 133€ double/queen room, approx 12min by public transport and 12min by bike.
Prizeotel Bremen City, starting at 74€ double/queen room, approx. 15min by public transport, 12min by bike.
B&B Bremen City, starting at 71€ double/queen room, approx. 20min by public transport and 12min by bike
Make sure you use your institution’s address for your invoice and indicate that you are travelling for business to avoid paying the 5% city tax on your accommodation.
Contact
For any questions or suggestions about the meeting, please email us at rsemeeting@gmail.com.