We are now hiring Postdocs and PhD students!
Define the Algorithmic Foundations of Intelligent Human-centered Robots!
Join the PEARL Lab to bridge Foundation Models, Robot Learning, and Reasoning with Real-World Bimanual Mobile Manipulation
At PEARL, we are building the principles, representations, and learning algorithms that will enable holistic robotic embodied intelligence. Our goal is not just to train larger models, but to develop structured, physically grounded agents that understand, act, and adapt in the real world.
We are looking for researchers excited to push theory and systems forward at the intersection of perception, action, learning, and reasoning. Our research agenda includes:
Interactive Mobile Manipulation & Whole-Body Intelligence
Robots must move through and act within the world as a unified body, not as separate navigation and manipulation modules.
You will develop algorithms that integrate reachability-aware control, task-space reasoning, and whole-body optimization, enabling robots to perform fluent, human-centered tasks in dynamic environments.
This work directly advances SIREN’s vision of embodied intelligence driven by structure-aware perception and action models.
Structured Planning & Long-Horizon Decision Making
Generalist robots must reason beyond immediate actions.
You will create hierarchical and structured planning frameworks that combine the semantic abstraction capabilities of LLMs with the predictive guarantees of geometric and trajectory optimization.
Your work will contribute to the SIREN ambition of robust, interpretable long-horizon autonomy anchored in physical law.
Multimodal Perception & 3D Representation Learning
Robots need representations that support action, not just recognition.
You will advance self-supervised 3D perception, fusing vision, touch, proprioception, and interaction signals to learn task-relevant geometry, affordances, and contact opportunities.
This is central to SIREN’s mission of interactive perception as the driver of generalization.
Grounding Foundation Models in the Physical World
Foundation models bring semantic knowledge but lack physical grounding.
You will develop algorithms that anchor Foundation Models, such as Vision–Language–Action (VLA) models, in the constraints of physics, kinematics, and embodiment, ensuring that high-level reasoning translates into safe, feasible, and precise actions.
This advances SIREN’s goal of bridging semantic intelligence and robotic feasibility.
Efficient Reinforcement Learning with Structure & Priors
Generalization cannot rely on scale alone.
You will design sample-efficient RL algorithms that leverage symmetry, geometry, contact structure, and other physical priors to learn skilled behaviors with orders of magnitude less data.
This aligns with SIREN’s core objective of structured learning and supports the Krupp narrative of developing algorithmically principled AI methods.
Dexterous Bimanual Manipulation & Coordinated Embodiment
Real-world tasks require coordinated, adaptive embodiment.
You will develop algorithms for dexterous, symmetric, and cooperative control of dual-arm systems such as the TIAGo++, enabling human-level bimanual tasks that require precise coordination, compliant interaction, and dynamic skill composition.
This area exemplifies SIREN’s overarching philosophy: structured embodiment as the foundation of general robotic skill.
Safe and Trustworthy Human–Robot Interaction
Embodied intelligence must be human-aware.
You will build models that predict and respond to human motion, intent, and uncertainty using structured interactive perception, probabilistic modeling, and safety-constrained control.
Your work ensures robots can collaborate, assist, and coordinate with people safely and transparently in homes, workplaces, and public environments.
This is central to the SIREN ambition of structured, human-centric interaction and essential to the vision of AI systems that benefit society.
