ReScaLe
Responsible and Scalable Learning for Robots Assisting Humans

Empowering Robots to Learn by Observation
The ReScaLe project (Responsible and Scalable Learning for Robots Assisting Humans) investigates advanced training methodologies for artificial intelligence (AI)-enabled robotic systems. Such systems hold significant potential to contribute to a broad range of societal functions — from providing assistance in daily life to optimising industrial production processes. Despite considerable recent progress in robotics and AI research, the integration of such systems into everyday contexts remains limited.
ReScaLe seeks to bridge this gap through a dual focus. On the technical side, the project addresses key challenges in machine learning that currently restrict the applicability of autonomous robotic systems. In parallel, it systematically analyses the social, ethical, and legal dimensions of robotics in order to promote public trust and societal acceptance.
Bridging Disciplines:
Our Holistic Scientific Approach

Part A:
Methods for robot skill learning in human-centered environments through imitation and reinforcement learning
WP1: Active exploration in learning by demonstration
WP2: Continual learning for assistive robots
As part of the broader research initiative, Part A investigates how autonomous robotic systems can acquire and refine practical skills through observation and interaction within human-centered environments. Using advanced mobile platforms equipped with integrated sensing and manipulation capabilities, the research focuses on representative domestic activities such as table setting, spatial organization, and object sorting. The goal is to develop adaptive learning frameworks that enable robots to efficiently acquire, generalize, and improve task performance, thereby enhancing their capacity to provide effective assistance in everyday life.
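For readers unfamiliar with learning by demonstration, the sketch below illustrates one of its simplest forms, behavior cloning, in which a policy network is fitted to recorded state-action pairs. It is purely illustrative: the network sizes, data, and names are hypothetical placeholders, not the methods developed in WP1 and WP2.

```python
# Minimal behavior-cloning sketch (illustrative only): a policy network is
# fit to state-action pairs recorded from human demonstrations.
# All dimensions and the random stand-in data are hypothetical placeholders.
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 32, 7   # e.g. proprioception + object poses -> arm command

policy = nn.Sequential(
    nn.Linear(STATE_DIM, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, ACTION_DIM),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

# Stand-in for demonstrations of a table-setting task: (state, action) pairs.
demo_states = torch.randn(1024, STATE_DIM)
demo_actions = torch.randn(1024, ACTION_DIM)

for epoch in range(100):
    pred = policy(demo_states)                       # imitate the demonstrator
    loss = nn.functional.mse_loss(pred, demo_actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```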
Part B:
Generalization of skills across different tasks and environments using meta-learning techniques
WP3: Federated learning in diverse environments
WP4: Meta-learning with meta-features and augmentation
WP5: Meta-learning with dynamic algorithm configuration for reinforcement learning
In Part B, we investigate methods enabling robots to transfer learned skills across multiple, distinct real-world settings. Several additional kitchen environments will be established, each with variations in objects, contextual conditions, and task goals, while overall task types remain comparable. Identical robotic platforms, as described in Part A, will operate in parallel across these environments, acquiring and refining skills while sharing experiential knowledge. The research aims to develop meta-learning approaches that produce robust, generalizable robot policies capable of adapting to new environments without the need for retraining, thereby enhancing overall learning efficiency and adaptability.
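One widely used way to share knowledge across parallel environments without pooling raw experience is federated averaging: each robot trains a local copy of a shared policy on its own data, and only the resulting model weights are aggregated. The sketch below is a schematic illustration under that assumption; the architecture, data, and round counts are hypothetical and do not represent the algorithms developed in WP3 to WP5.

```python
# Schematic federated-averaging sketch (illustrative assumption, not the
# project's actual algorithm): each kitchen trains a local copy of a shared
# policy on its own data, and only the model weights are averaged centrally,
# so raw experience never leaves the individual environment.
import copy
import torch
import torch.nn as nn

def local_update(policy, states, actions, steps=50, lr=1e-3):
    """Train a local copy of the shared policy on one kitchen's demonstrations."""
    local = copy.deepcopy(policy)
    opt = torch.optim.Adam(local.parameters(), lr=lr)
    for _ in range(steps):
        loss = nn.functional.mse_loss(local(states), actions)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return local.state_dict()

def federated_average(state_dicts):
    """Average the parameters returned by all kitchens into one global policy."""
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        avg[key] = torch.stack([sd[key] for sd in state_dicts]).mean(dim=0)
    return avg

# Hypothetical setup: three kitchens, same policy architecture, different data.
policy = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 7))
kitchens = [(torch.randn(256, 32), torch.randn(256, 7)) for _ in range(3)]

for communication_round in range(10):
    local_weights = [local_update(policy, s, a) for s, a in kitchens]
    policy.load_state_dict(federated_average(local_weights))
```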
Part C:
Assessment of uncertainty in learned models, risk management, ethical and legal basis and implications
WP6: Estimating uncertainty in learned models
WP7: Risk management and safety in human-robot interaction
WP8: Governing responsible AI-based robot behavior
In Part C, we focus on establishing the foundations for safe and responsible interaction between humans and AI-driven robotic systems by enhancing uncertainty estimation in deep learning models. By accurately quantifying what these models do not know, particularly in the context of sequential decision-making, the research aims to improve the risk management strategies that govern robot behavior in human-centered environments. Alongside these technical developments, the project systematically examines legal and ethical requirements to ensure a responsible research and innovation approach. Empirical data gathered through participatory ethics research with diverse stakeholders will inform the creation of a multi-level framework integrating technical, legal, and ethical perspectives. This framework is designed to guide the development of AI-based robotic systems that align with human rights, mitigate unforeseen risks, and enable responsible operation.
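As a concrete illustration of quantifying what a model does not know, the sketch below uses a deep ensemble: several independently initialized networks are trained on the same task, and their disagreement on a new input serves as an uncertainty signal that could trigger cautious robot behavior. This is a common baseline technique, shown here with hypothetical dimensions and thresholds, and not necessarily the approach pursued in WP6 and WP7.

```python
# Deep-ensemble sketch (an illustrative, commonly used technique): several
# independently initialized models are trained on the same data, and their
# disagreement on a new input is used as an uncertainty estimate that can
# trigger cautious robot behavior. Dimensions and threshold are hypothetical.
import torch
import torch.nn as nn

def make_model():
    return nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1))

ensemble = [make_model() for _ in range(5)]   # independent random initializations

# ... each member would be trained separately on the same (or bootstrapped) data ...

x = torch.randn(1, 16)                        # a new, possibly unfamiliar observation
with torch.no_grad():
    predictions = torch.stack([m(x) for m in ensemble])

mean_prediction = predictions.mean(dim=0)     # point estimate
uncertainty = predictions.std(dim=0)          # disagreement as epistemic uncertainty

UNCERTAINTY_THRESHOLD = 0.5                   # hypothetical safety threshold
if uncertainty.item() > UNCERTAINTY_THRESHOLD:
    print("High uncertainty: fall back to a safe behavior or ask for help.")
```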
Part D:
Technology acceptance research and outreach
WP9: NEXUS Experiments
A platform for mutual knowledge transfer
This subproject promotes a bidirectional exchange of knowledge between scientific and non-scientific stakeholders. Preliminary and ongoing research outcomes from all work packages will be synthesized and disseminated through innovative formats of public outreach and knowledge transfer. The objective is to foster open and constructive dialogue between researchers and the public on the societal and ethical dimensions of assistive AI systems, while also enabling feedback and perspectives from society to inform academic research. This participatory process will serve as the foundation for developing a novel model of technology acceptance in the context of human–robot interaction, advancing technology acceptance research as a participatory and transdisciplinary practice.
Principal Investigators
Project Management
External Cooperation Partners:
Bosch Center for Artificial Intelligence, Franka Emika, Siemens, Sick AG, Toyota Motor Europe