Robotics and Software Engineering 2025 (RSE’25) is the fourth edition of an annual meeting to promote discussion and interaction between researchers. The main objective is to strengthen cooperation and dialogue by bringing together the Robotics and Software Engineering communities. It is an ideal opportunity to exchange ideas on topics of interest, including (but not limited to): systems development, software architecture, dependability, software reuse, software validation and verification, robot modeling, robot control architectures, autonomous systems, and multi-robot systems. For this reason, we encourage anyone to participate, particularly early-career researchers, including master's students doing research.
The RSE’25 meeting consists of short presentations from each participant, with ample time for discussion. Researchers at different career stages are welcome to present their research, give and receive feedback from peers, engage in discussions, and establish new collaborations. RSE is not a publication venue. Participants can present previously published work as well as unpublished work, including early ideas and work in progress: a published paper, an idea, a master's thesis, etc. The main point is to encourage discussion, to give and receive feedback, and to create a network for new collaborations.
RSE’25 will take place at the University of Southern Denmark, Odense campus, located on the island of Funen in Denmark, from September 9 to September 11, 2025.
RSE’25 does not require a paper submission; however, an abstract of the research to be presented is required to apply.
Application
Please note that every participant needs to give a talk; this is a rule of the event.
Important Dates
Meeting
September 9th - 11th 2025
Application
March 31st, 2025 (extended deadline: May 4th, 2025)
Announcement: Applications are now closed. Please send an email to the organizers.
For organizational reasons, there is a limit of 50 participants. We will notify the selected participants shortly after the deadline. A registration fee of 1800 DKK per participant (ca. €250) will be required to confirm your participation. The registration fee includes lunch, coffee and snacks during the coffee breaks, the RSE dinner, and the visits to our partners on Wednesday.
Schedule
Overview
Tuesday 9th
Check-in
Keynote
Coffee Break
Sessions & Discussions
Lunch
I40Lab Tour
Coffee Break
Sessions & Discussions
Coffee Break
Sessions & Discussions
Round Table Discussions
Social Event
Wednesday 10th
Airport Transportation
Welcome / Overview
Keynote
Tour
Coffee Break
Sessions & Discussions
Round Table Discussions
Lunch
UR/MIR Transportation
Executive Welcome
Keynote
Keynote
Coffee Break and meet the experts from UR
Showroom and Quality gate tour
Networking
Transportation back to city center
Dinner
Thursday 11th
Sessions & Discussions
Coffee Break
Sessions & Discussions
Lunch
Sessions & Discussions
Coffee Break
Sessions & Discussions
Round Table Discussions / Closing Remarks
Schedule
Tuesday 9th
-
Check-in
-
Keynote
Mikkel Baun Kjærgaard & Miguel Campusano, SDU
Bio: Mikkel Baun Kjærgaard is a professor at the Software Engineering Section at the University of Southern Denmark. His research focuses on software engineering and data science with applications in ubiquitous computing and cyber-physical systems. He has contributed new methods for human sensing and data analytics, and methods for the optimization of software qualities, including performance, usability, integration, energy efficiency, and privacy protection. This has, among other things, led to important contributions to data science methods within robotics, logistics, and activity recognition.
Miguel Campusano is an associate professor at the Software Engineering Section at the University of Southern Denmark. He conducts research in the area of software engineering for robotic development, specifically focusing on programming languages, model-driven development, and programming experience. He has developed languages and tools to program mobile robots, drones, and cobots. The focus of these tools is to increase both developers' productivity and their understanding of different robotic systems.
Welcome Talk / SDU Keynote
Abstract: Software engineering in robotic systems from the SDU perspective
-
Coffee Break
-
Sessions & Discussions
Mirgita Frasheri, Aarhus University
Building a Digital Twin for the Desktop Robotti
Abstract: This talk will provide an overview of the current state of the Digital Twin built for a mobile robot
Andreas Wiedholz, XITASO GmbH
Self-adaptive systems based on ROS2: Current research plan
Abstract: Self-adaptive systems enable modifications of a system's behavior at runtime, which improves the ability to react to changes in the environment and to failures of the system. Human involvement necessitates systems that adapt to actions while maintaining transparent decision-making. With more humans collaborating with robots over the last years
Till Schallau, TU Dortmund University
STARS: Classification of Scenarios and Checking of Functional Requirements of Automated Robotic Systems
Abstract: Automated robotic systems are designed to operate correctly in complex environments, making their testing a critical aspect of development. However, statistically systematic testing is infeasible due to the immense number of test cases required. Early-stage validation of these systems often relies on scenario-based testing combined with simulations. While simulations provide controlled environments for testing, real-world deployments can reveal unforeseen situations and scenarios that were not considered during development.
To address this challenge, it is crucial to detect encountered scenarios in real-world data and calculate the extent to which all possible scenarios are covered. Additionally, verifying the correct execution of system functionalities under various conditions is essential for ensuring safety and reliability.
Our approach leverages tree-based scenario classification (TSC) combined with temporal logics to systematically analyze recorded data from automated robotic systems. The hierarchical TSC structure organizes features into semantic layers, enabling efficient classification of observed scenarios while reducing combinatorial complexity. By computing metrics such as scenario class coverage, feature occurrence distributions, and identifying missing feature combinations, our method provides detailed insights into gaps in tested and observed scenarios. Furthermore, we integrate requirement monitoring through formalized predicates to validate system behavior against predefined functional requirements.
This framework has been applied successfully both in simulation environments - such as CARLA - and using real-world experimental setups involving scaled vehicle platooning controllers. Our method not only identifies failed requirements but also traces them back to specific triggering conditions within classified scenarios. This enables targeted debugging and refinement of the system behavior while enhancing dependability across diverse operational design domains (ODDs). By bridging simulated and real-world analyses, our approach contributes significantly to improving robustness and safety assurance for autonomous systems operating across dynamic environments.
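To make the idea of requirement monitoring over recorded data concrete, here is a minimal sketch (an editorial illustration, not part of the STARS tooling): it checks a formalized "always" requirement over a recorded scenario trace. The trace fields, values, and the requirement itself are invented for illustration.

```python
# Illustrative sketch (not the STARS implementation): evaluating a formalized
# requirement predicate over a recorded scenario trace. Trace fields and the
# requirement are hypothetical.
trace = [
    {"t": 0.0, "speed": 1.2, "obstacle_distance": 3.0},
    {"t": 0.1, "speed": 1.0, "obstacle_distance": 1.4},
    {"t": 0.2, "speed": 0.2, "obstacle_distance": 0.9},
]

def always(trace, predicate):
    """G(predicate): the predicate must hold in every recorded state."""
    return all(predicate(state) for state in trace)

# Requirement: whenever an obstacle is closer than 1.5 m, speed stays below 0.5 m/s.
requirement = lambda s: s["obstacle_distance"] >= 1.5 or s["speed"] < 0.5

print(always(trace, requirement))  # False: the state at t=0.1 violates it
```

A failed check like this can then be traced back to the specific states (and hence scenario classes) that triggered the violation, which is the debugging loop the abstract describes.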
Kevin Hermann, Ruhr University Bochum
A Taxonomy of Functional Security Features and How They Can Be Located
Abstract: Security must be considered in almost every software system, including cyber-physical systems. Unfortunately, selecting and implementing security features remains a challenge due to the wide variety of security threats and possible countermeasures. While security standards are intended to help developers, they are usually too abstract and vague to help implement security features for robotic systems, or they merely help with configuring them. A resource that describes security features at an abstraction level between high-level (i.e., rather too general) and low-level (i.e., rather too specific) security standards could facilitate secure systems development. This resource should support the selection of appropriate security features to achieve high-level security goals, allow easy retrieval of relevant low-level details, and provide pointers to suitable ways to realize the security features. To realize security features, developers typically use external security libraries or frameworks to minimize implementation mistakes. Even when using libraries, developers still make mistakes when writing code to integrate them, often resulting in security vulnerabilities. When security incidents occur or the system needs to be audited or maintained, it is essential to know what security features have been implemented and, more importantly, where they are located. This task, commonly referred to as feature location, is often tedious and error-prone. While dedicated feature location techniques exist, they require significant manual effort or adherence to strict development processes, preventing their use. Therefore, we have to support long-term tracking of implemented security features.
We present a study of security features presented in the literature and their coverage in popular security frameworks. We contribute (1) a taxonomy of 68 functional implementation-level security features including a mapping to widely used security standards, (2) an examination of 21 popular security frameworks concerning which of these security features they provide, and (3) a discussion on the representation of security features in source code. Our taxonomy aims to aid developers in selecting appropriate security features and security frameworks, as well as relating them to security standards when they need to choose and implement security features for a software system.
-
Lunch
-
I40Lab Tour
-
Coffee Break
-
Sessions & Discussions
Nils Chur, Ruhr University Bochum
Beyond the Control Equations: An Artifact Study of Implementation Quality in Robot Control Software
Abstract: Robotic systems tightly integrate software with physical hardware, and often operate in safety-critical domains such as transportation, healthcare, and manufacturing. A key component in these systems is the controller, a software component responsible for managing hardware behavior to ensure properties like stability and robustness. While control theory provides guarantees about system behavior, the practical implementation of controllers in software introduces complexities that are often overlooked. Control theory typically assumes that controllers operate in a continuous domain, whereas software operates in a discrete domain, requiring careful discretization to maintain theoretical guarantees. Despite extensive research on control theory and controller modeling, little attention has been given to the actual implementation of controllers and how their theoretical guarantees are ensured in real-world software systems.
In this study, we investigate 184 real-world controller implementations in open-source robot software to bridge this gap. We examine the application of these controllers, their implementation characteristics, and the testing techniques employed to ensure correctness. Our analysis reveals that many controller implementations handle discretization in an ad hoc manner, leading to potential issues with real-time reliability. Additionally, challenges such as timing inconsistencies, lack of proper error handling, and inadequate consideration of real-time constraints further complicate implementations. Furthermore, testing practices are superficial and often lack systematic verification of theoretical guarantees, leaving possible inconsistencies between expected and actual system behavior. Our findings highlight the need for improved implementation guidelines and rigorous verification techniques to ensure the reliability and safety of robotic controllers in practice.
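As a concrete illustration of the discretization and timing concerns raised above, the following minimal sketch (not taken from the study's artifacts; all names are hypothetical) shows a discrete-time PID controller that makes the sampling period explicit instead of assuming a fixed nominal rate.

```python
# Illustrative sketch: a discrete-time PID controller with an explicit
# sampling period, rather than a hard-coded nominal rate. Hypothetical code,
# not from the artifacts analyzed in the talk.

class DiscretePID:
    def __init__(self, kp: float, ki: float, kd: float):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error: float, dt: float) -> float:
        """Compute the control output for one sampling step of length dt."""
        if dt <= 0.0:
            raise ValueError("Sampling period must be positive")
        # Integrate and differentiate with the measured dt, so timing jitter
        # does not silently distort the integral and derivative terms.
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```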
Felix Reuter, University of Southern Denmark
Simulation and Analysis of Vision-Based Welding Operations of Micro Panels in Shipbuilding
Abstract: As part of the ShipWeldFlow project, two robot programming techniques are combined to improve the welding of so-called micro panels, a common type of subassembly in shipbuilding. While CAD-based programming offers reliable, easy-to-simulate robot programs, it also requires more human interaction and a system for data exchange between the engineering and production departments. Vision-based programming, on the other hand, is simpler and more flexible since robot programs are generated on the fly, but the procedure is less predictable.
In this talk I will present a simulation-based approach that aims to increase the reliability of vision-based welding robots by executing the process steps on a digital twin beforehand. By combining this with planning information from a ship design database intended for CAD-based programming, the behavior of the vision-based approach can be compared and validated.
To provide realistic sensor and robot simulations, a co-simulation approach is implemented in a visual programming interface. The resulting zero-programming workflow allows for automatic processing of whole ship blocks. The operator is provided with automatic reports, as well as data for manual review and more detailed planning of the production schedule.
Ricardo Diniz Caldas, Gran Sasso Science Institute
Runtime Verification and Field-based Testing for ROS-based Robotic Systems
Abstract: Robotic systems are becoming pervasive and adopted in increasingly many domains
Ola Rønning, University of Copenhagen
Real-Time Bayesian Filtering for Pose Estimation with Stein Mixtures
Abstract: Robots operating in unstructured environments must reliably determine their pose (position and orientation) from diverse sensors such as lidar, radar, sonar, and cameras, much like humans rely on sight, hearing, and touch. Achieving high accuracy and real-time performance is critical for navigation and mapping. However, in real-world modeling, accuracy and uncertainty are inherently in tension. Traditional approaches like Kalman and particle filters can quantify uncertainty but often rely on assumptions about sensor errors and dimensionality that undermine both accuracy and real-time feasibility. Deep-learning-based methods, while efficient and accurate, can become dangerously overconfident in unexpected scenarios. As an alternative, Stein mixture inference strikes a balance between accuracy and uncertainty, and it has already demonstrated scalability for six-dimensional problems in other domains, precisely the dimensionality of a standard robot pose. Building on this, we propose a transport map approach to Bayesian filtering using Stein mixture inference to propagate the current pose belief. Although constructing a transport map requires solving an optimization problem at every sensor reading, we emphasize rapid convergence at low cost to ensure real-time performance. By effectively managing both uncertainty and speed, this approach aims to improve the safety and reliability of robotic navigation and mapping in dynamic, unpredictable environments.
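For readers less familiar with Bayesian filtering, the sketch below illustrates the generic predict/update recursion such pose filters build on. It deliberately uses plain importance weighting and resampling rather than Stein mixture inference or transport maps, and the motion and measurement models are hypothetical placeholders supplied by the caller.

```python
# Illustrative sketch (not the speakers' method): one predict/update step of a
# generic sample-based Bayes filter over pose hypotheses. Motion and
# measurement models are placeholders.
import numpy as np

def bayes_filter_step(samples, control, measurement, motion_model, likelihood):
    """One recursion of a sample-based Bayes filter over pose hypotheses."""
    # Predict: propagate each pose hypothesis through the motion model.
    predicted = np.array([motion_model(s, control) for s in samples])
    # Update: weight hypotheses by how well they explain the measurement.
    weights = np.array([likelihood(measurement, s) for s in predicted])
    weights = weights / weights.sum()
    # Resample to concentrate hypotheses on the high-probability region.
    idx = np.random.choice(len(predicted), size=len(predicted), p=weights)
    return predicted[idx]
```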
-
Coffee Break
-
Sessions & Discussions
Mehran Rostamnia, Gran Sasso Science Institute (GSSI)
Towards Adaptable and Uncertainty-aware Behavior Trees
Abstract: Space robotic missions are undertaken on highly uncertain ground
Sune Lundø Sørensen, SDU Software Engineering
A Wearable Real-Time 2D/3D Eye-Gaze Interface to Realize Robot Assistance
Henriette Knopp, Ruhr-University Bochum
On Developing ML-based systems: Thinking Machine Learning on a Software Scale
Abstract: The rise of machine learning (ML) has led to the widespread adoption of these technologies in a number of domains, including robotics, which represents one of the most complex forms of applied ML, where multiple models are combined to solve complex tasks such as perception and sensor fusion. However, integrating models into systems, and managing the many different artifacts involved, is far from trivial and still remains an open challenge.
This talk will summarize results from an empirical study of 3,000 open-source ML-enabled systems. It will discuss findings on the integration of ML models from that study, as well as ongoing case studies on best practices and patterns for integrating and training ML models in a system context.
-
Round Table Discussions
-
Social Event
Storms Pakhus
We have reserved a table at the local street food market Storms Pakhus, where we can have dinner and hang out together.
Be aware that this event is not included in the registration fee.
Keynote 2: Odense Robotics
Abstract: Odense Robotics is Denmark's national cluster for robot automation and drone technology. Learn about Odense Robotics, our collaboration with international partners, and how we nurture the talent driving tomorrow's innovations.
-
Coffee Break and meet the experts from UR
-
Showroom and Quality gate tour
-
Networking
-
Transportation back to city center
-
Dinner
Oluf Bagers Gaard
Dinner at a modern Danish restaurant.
Payment is included in the registration fee.
Mashal Afzal Memon and Anjali Santhosh, University of L'Aquila
Adaptive Coordination of Multi-Robot Systems: Addressing Uncertainty while Embracing Ethical Awareness
Vicente Romeiro Moraes, Ruhr Universität Bochum
Property Specification and Verification for Behaviour Trees
Diana Carolina Benjumea Hernandez, The University of Manchester
Instantiating an Architecture for Autonomous Robots in Highly Regulated Domains
Abstract: Deploying autonomous robots in highly regulated domains requires architectures that demonstrably ensure operational effectiveness and compliance with safety requirements. This work presents an architecture that integrates autonomous control systems with a safety oversight mechanism to address this challenge. The architecture consists of two key components: the Safety-Related Autonomous System (SRAS)
Gianluca Filippone, Gran Sasso Science Institute
MULTI-3: Empowering Multi-Mission Multi-Robot and Multi-Instance Task Execution
Abstract: Multi-Robot
-
Coffee Break
-
Sessions & Discussions
Taiga Suda and Takashi Yoshimi, Shibaura Institute of Technology
Application and evaluation of model predictive control (MPC) to a polishing robot system
Abstract: Many polishing robots use PID control to control the position of the hand tool and the force applied to the workpiece. However, it takes time and effort to adjust the control parameters in order to process workpieces of various shapes and materials with high accuracy, and the operator needs experience to properly adjust the PID control gains.
In this study, we aim to realize a polishing robot control system with high control performance while reducing the burden of adjusting PID control parameters by applying model predictive control (MPC). Conventional MPC requires an accurate model of the controlled object, but it is difficult to accurately model the physical phenomena in an actual robot system, and using a complex model increases the computational load of the MPC. Therefore, we expressed the relationship between position and force in the system, in which the tool held by the robot is pressed against the workpiece, using a simple spring model. Furthermore, we accurately modeled and incorporated response characteristics such as dead time in the robot controller. By applying MPC using our constructed model, we were able to improve the control performance while reducing the time required to adjust the control parameters of the polishing robot.
We will report on the above efforts and the evaluation results of the constructed control system in our presentation.
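To illustrate the kind of spring-model-based prediction the abstract describes, here is a minimal one-dimensional sketch (not the authors' implementation): a receding-horizon controller that picks a position increment so that the force predicted by a simple spring model tracks a target contact force. All parameter names and values are hypothetical.

```python
# Illustrative sketch: one-dimensional receding-horizon force control using a
# simple spring model F = k * (x - x_contact) as the prediction model.
# Hypothetical parameters; not the speakers' implementation.
import numpy as np

def spring_mpc_step(x, x_contact, k, f_target, horizon=10, dx_candidates=None):
    """Pick the position increment whose predicted force trajectory
    best tracks the target force over the horizon."""
    if dx_candidates is None:
        dx_candidates = np.linspace(-0.001, 0.001, 21)  # metres per step
    best_dx, best_cost = 0.0, float("inf")
    for dx in dx_candidates:
        cost, xi = 0.0, x
        for _ in range(horizon):
            xi += dx
            f_pred = k * max(xi - x_contact, 0.0)  # spring model, no tension
            cost += (f_pred - f_target) ** 2
        if cost < best_cost:
            best_dx, best_cost = dx, cost
    return best_dx  # apply only the first move, then re-plan (receding horizon)
```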
Thorsten Berger, Ruhr University Bochum
Teaching Autonomous Driving
Abstract: I will present our experiences and future plans on teaching autonomous vehicles
Gianluca Bardaro, Politecnico di Milano
Robot software in the era of large models
Abstract: In today’s robotics landscape, two distinct paradigms coexist. On one side are classical techniques grounded in control theory. These methods prioritize reliability, robustness, and predictability. They rely on well-established mathematical models, real-time control algorithms, and structured frameworks (such as ROS) to ensure that robots perform consistently and safely in controlled environments. For applications where safety and precision are the priority, these methods provide verifiable performance and are backed by empirical and theoretical validation. However, their major limitation is flexibility. When faced with unforeseen environmental changes or novel tasks, these systems require substantial human intervention or a complete redesign of the control strategy. This rigidity means that while classical approaches excel in stable, predictable conditions, they can be slow to adapt in dynamic or unstructured settings.
In parallel, a second stream of research focuses on end-to-end, data-driven methods. Leveraging recent advancements in deep reinforcement learning, transformers, and large language models (LLMs), this approach learns behaviors directly from data. Such models have demonstrated impressive achievements, exhibiting remarkable flexibility and adaptability to varied and unanticipated scenarios. This paradigm allows robots to generalize from past experiences and quickly adapt to new tasks, which is especially useful in rapidly changing environments or when task specifications evolve. However, the data-driven approach comes with significant drawbacks. Collecting the vast amounts of data required for robust training in robotics is notoriously challenging, as real-world robotic interactions are both expensive and time-consuming to capture. Moreover, the computational power needed for training these models and deploying them in real-time on robots can be challenging, limiting their practical use in many embedded systems and safety-critical applications.
In this divided landscape, what is the role of software? Until now, software has been the fundamental enabler of robot autonomy, ranging from low-level functionalities to complex behaviors. Learned solutions are more flexible and adaptable, achieving results that are impossible to encode algorithmically. However, the downsides and limitations of such approaches cannot be ignored.
An interesting solution is to combine the two approaches. By decoupling low-level robotic operations from higher-level decision-making, engineers can develop systems where control software remains predictable and verifiable, while AI components provide flexibility and adaptability. In practice, this means that while core functions like perception, motion control, and sensor integration are implemented as well-tested modules, AI algorithms select and sequence these modules to perform real-world tasks.
This hybrid philosophy minimizes the risks of opaque, purely black-box models by anchoring them in reusable, atomic actions. In doing so, it offers a pathway toward scalable, intelligent, and resilient robotic systems that can benefit from the strengths of both classical engineering and modern AI. A key factor for success is access to reliable and robust software components that can be managed by an AI orchestrator without causing unexpected side effects. In this context, now more than ever, verification and validation represent indispensable requirements. The role of software is to provide a robust foundation that can be configured and utilized by intelligent agents to enable robots that are not only safe and reliable in predictable conditions but also capable of adapting to the uncertainties and complexities of the real world.
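The hybrid philosophy described above can be pictured with a small sketch (hypothetical names and logic, not the speaker's system): an AI planner emits a plan, but only registered, well-tested skill modules can actually be executed.

```python
# Illustrative sketch: a high-level orchestrator that can only invoke
# registered, well-tested skill modules. Skill names, the plan, and the
# orchestration logic are hypothetical.

def perceive_object():       # well-tested perception module
    return {"name": "cup", "pose": (0.4, 0.1, 0.02)}

def move_to(pose):           # well-tested motion module
    print(f"moving to {pose}")

def grasp():                 # well-tested manipulation module
    print("closing gripper")

SKILLS = {"perceive": perceive_object, "move_to": move_to, "grasp": grasp}

def execute_plan(plan):
    """Run a plan produced by an AI planner; only registered skills can be
    invoked, keeping the low-level behavior predictable and verifiable."""
    ctx = {}
    for step, arg in plan:
        skill = SKILLS[step]                       # unknown steps fail fast
        result = skill(ctx[arg]) if arg else skill()
        if step == "perceive":
            ctx["object_pose"] = result["pose"]
    return ctx

# A plan an LLM-based planner might emit for "pick up the cup".
execute_plan([("perceive", None), ("move_to", "object_pose"), ("grasp", None)])
```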
William Appleton Coolidge, SDU Software Engineering
Robot program structure implications from the use of semantic data
Abstract: Ontologies are useful for modeling multiple dimensions of concerns of a robot in its world.
When applied, the dimensional coverage of ontologically defined types results in a graph of semantic data instances.
These graphs of semantic data pervade the system, as the intent is not to lump or reduce the graph at each stage, but rather to exploit the graph of type instances throughout.
This talk is on the implications of programming with semantic data given the requirements of handling composability, errors, events, and states. The use of monads is an obvious candidate for structuring computations of semantic data. This talk will discuss the match between semantic data and monads and how to exploit this in robotics.
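As a rough illustration of the monadic structuring the talk refers to, the sketch below (an editorial example; the graph, predicates, and Result type are invented) chains lookups over a small graph of semantic data instances so that a missing instance short-circuits the computation instead of raising midway.

```python
# Illustrative sketch: a minimal Result monad used to chain lookups over a
# graph of semantic data instances. Graph structure and predicate names are
# hypothetical; not the speaker's code.

class Result:
    def __init__(self, value=None, error=None):
        self.value, self.error = value, error

    def bind(self, f):
        """Apply f to the value unless an error already occurred."""
        return self if self.error is not None else f(self.value)

def lookup(graph, subject, predicate):
    obj = graph.get((subject, predicate))
    if obj is None:
        return Result(error=f"no '{predicate}' for '{subject}'")
    return Result(value=obj)

# Toy graph of (subject, predicate) -> object
graph = {("gripper1", "mountedOn"): "arm1", ("arm1", "reachableZone"): "zoneA"}

zone = (Result(value="gripper1")
        .bind(lambda s: lookup(graph, s, "mountedOn"))
        .bind(lambda arm: lookup(graph, arm, "reachableZone")))
print(zone.value or zone.error)  # -> "zoneA"
```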
-
Lunch
-
Sessions & Discussions
Yoganata Kristanto, SDU Software Engineering
Visual Representation Techniques for Comprehension of Cobotic Program Execution and Future Reuse
Abstract: The growing complexity of High Mix Low Volume (HMLV) manufacturing in Small and Medium Enterprises (SMEs) has increased the need for advanced techniques and methodologies to develop, analyze, and optimize Cobotic programs.
In this work, we explored the application of dynamic visualization methods as a means to enhance the understanding of how Cobotic programs execute at runtime. By leveraging these visualization techniques, we aimed to provide deeper insights into the interactions between robots and humans during program execution, enabling users to better comprehend the program flow, identify potential issues, and optimize performance for more efficient and effective Cobotic operations.
These visualizations will also enable users to easily identify and isolate key components of the program, such as specific functions, sequences, or robotic tasks, that can be reused or adapted in future Cobotic programs. By clearly showcasing how different parts of the program interact and contribute to the overall execution, users will be able to recognize reusable parts of the program, streamline the development process, and improve the efficiency and flexibility of designing future Cobotic applications.
Juan Antonio Pinera Garcia, Gran Sasso Science Institute
Adaptation in Heterogeneous Multirobot Systems via LLMs
Abstract: Uncertainty is an inherent challenge in planning and executing missions within dynamic environments
Jude Gyimah, Ruhr University Bochum
Quality Aspects of Robot Mission Requirements for Mission Modeling and Synthesis
Abstract: Task-performing robots
Forough Zamani, Delft University of Technology (TU Delft) Cognitive Robotics Department
A Causal Modeling Approach for Self-Tuning the ROS 2 Navigation Stack
-
Coffee Break
-
Sessions & Discussions
Yorick Sens, Ruhr University Bochum
Safeguarding ML-enabled Systems
Abstract: Robotic systems increasingly rely on machine learning (ML) for different tasks
Taichi Ishikawa and Takashi Yoshimi, Shibaura Institute of Technology
Consideration and verification of a decoration sticker attachment motion to the candle side surface by a robot arm
Abstract: Candle manufacturing companies still rely on manual tasks, and the shortage of workers is becoming serious. In particular, the task of attaching decoration stickers to the sides of square prism candles requires accurate placement of thin, easily deformed stickers at a specified position, making it one of the tasks that is difficult to automate with a robot. We therefore aimed to automate this task with a robot. We first observed and analyzed the motions of workers with different levels of skill, from experts to beginners, attaching decoration stickers to the flat sides of candles, and considered the possibility of automating the task with a robot. As a result, we found that the working method differs depending on the worker's level of skill, and that the task can be automated by moving a robot based on the working method of beginners. Based on these results, we proposed working methods suitable for automation with robots, generated a program to execute them, and realized the automation of the task with a robot. In addition, since there is a high demand for automating the task of attaching decoration stickers to the ridges of the sides of candles, we also considered a method to automate this task by partially improving the motion for attaching decoration stickers to the flat side of a candle.
We introduce the working motion for applying a sticker to the flat side of a candle using a robot arm, the program that executes this motion, and the automated system developed for this task based on that program. We also report the details and trial results of two types of decoration sticker application motions for the ridge lines of the candle sides, which were made possible through partial improvements of the flat-side sticker attachment program.
Sara Pettinari, Gran Sasso Science Institute
Enhancing Robotic Mission Analysis via Process Mining and Visual Analytics
Abstract: Robotic systems are increasingly deployed across various domains to automate complex activities, often operating autonomously and interacting with dynamic environments. Understanding and analyzing their mission execution is crucial for optimizing performance, ensuring reliability, and preventing failures. Process mining has emerged as a promising approach to extract meaningful insights from event logs, uncovering behavioral patterns and mission execution flows. However, the vast amounts of low-level data generated by robotic systems pose challenges for effective mission analysis. To address this, integrating process mining with visual analytics provides a powerful solution, combining automated analysis with interactive visualizations to enhance interpretability.
This talk will discuss the application of process mining and visual analytics to analyze robotic system missions, highlighting current challenges and opportunities.
Mukelabai Mukelabai, Ruhr-University Bochum
Data-Driven Fault Localization in Practice: A Survey
Abstract: Fault localization is a critical yet challenging aspect of software development, essential for identifying the root causes of failures and enhancing software quality. Despite its significance, fault localization remains a complex, time-consuming task, particularly in modern software systems characterized by expansive codebases and complex component interactions. A primary challenge is efficiently extracting relevant information from runtime data, specifically logs, to accurately trace faults back to faulty code.
While substantial research has been conducted on fault localization techniques, there is a significant gap in understanding how developers actually implement and utilize these techniques in practice, especially in large-scale industrial settings in the automotive domain. Existing studies have focused primarily on theoretical approaches, general software systems, or small-scale evaluations, with limited insight into real-world practices, challenges, and collaborative aspects of fault localization in the automotive industry.
To address this gap, we conducted a comprehensive survey of 68 software developers at an automotive company in Germany to investigate current practices, challenges, and strategies employed for logging, log analysis, and fault localization. Our study examines how developers leverage logs for fault localization, particularly when faults stem from interactions between features or components, and how they coordinate with colleagues during this process.
Key findings reveal that developers spend approximately 22% of their time analyzing logs, primarily for root-cause analysis, with significant variation (5-60%) across teams. Most analysis involves collaboration (67%), often with senior-junior pairings or component experts. Major challenges include log volume and noise (68%), format inconsistency (58%), and lack of context (52%). Our study also highlights the gap between logging implementation and analysis needs, with developers mostly using semi-structured log formats and ad hoc processes for analysis, and relying mostly on informal guidelines. These insights provide valuable direction for improving fault localization tools and practices in industrial software development, particularly in the automotive domain.
-
Round Table Discussions / Closing Remarks
Venue
The meeting will be held at the University of Southern Denmark.
Discover the charm of Odense, Denmark’s hidden gem! Step into the enchanting city that inspired the timeless fairy tales of Hans Christian Andersen. Wander through picturesque streets, explore historic landmarks, and immerse yourself in a rich cultural scene. Whether you’re strolling through lush parks, visiting world-class museums, or enjoying vibrant cafes, Odense offers a perfect blend of tradition and modernity. Come experience the warmth, creativity, and welcoming spirit of Odense – a city where stories come to life!
Accommodation
You can choose a hotel near the city centre. The meeting venue is easily reachable by tram in about 20 minutes.
This Privacy Policy explains how Syddansk Universitet (the “Data Controller”) (“we” or “us”) processes your personal data.
1. DATA CONTROLLER
The legal entity responsible for processing your personal information is:
Syddansk Universitet Corp ID: 29283958
Campusvej 55
5230 Odense
Denmark
2. DESCRIPTION OF THE PROCESSING
In connection with RSE 2025, SDU collects information about you. SDU is the Data Controller and will ensure that the data is processed in accordance with the GDPR.
In connection hereto, SDU is obligated to inform you about the processing of your personal data.
The following personal data is processed: first name, surname, address, company/institution, position, VAT number, food preferences, IP addresses, payment information, photos, and videos.
Purposes of the Processing
We process your personal data to register your participation in RSE 2025, as well as to send you relevant information on important topics, such as changes to the program. Furthermore, we process the provided data in order to determine whether you are eligible for any participant-specific ticket discounts.
Lawfulness of Processing
The information is to be processed in compliance with the General Data Protection Regulation Art. 6(1)(b) and (e).
This is How we Use Personal Data
SDU is responsible for processing your personal data and will keep your information confidential under existing laws. Your information will only be used for the purpose described above and will not be accessible to unauthorized persons.
SDU will delete the information when it is no longer relevant to keep it. Your information will be deleted at SDU no later than March 11th, 2026.
Your Rights
You have the right to request access to, rectification of, or deletion of your personal data
You have the right to object to the processing of your personal data and to have the processing of your personal data restricted
You have an unconditional right to object to your personal data being used for marketing
If the processing of your personal data is based on your consent, you have the right to withdraw that consent at any time. Your withdrawal has no influence on the lawfulness of processing carried out prior to your withdrawal.
You have the right to receive the personal data you have provided in a structured, commonly used, and machine-readable format (data portability)
Publication
Photos and videos in which you may be identifiable will be published.
Disclosure
Your data may be disclosed to third parties, including but not limited to RSE organizers, participants, sponsors, exhibitors, restaurants, and transportation companies.
Further Information
If you have any questions, you can contact mica@mmmi.sdu.dk at any time.
If you have any questions about data protection and your rights, please contact our DPO, Simon Kamber, by email: dpo@sdu.dk.
If you want to complain about the processing of personal data, you may contact the Danish Data Protection Agency via www.datatilsynet.dk.
Contact
For any questions or suggestions about the meeting, please email us at rsemeeting@gmail.com.