Offered topics

Disclaimer: If you want to propose a thesis topic that falls within the general area of the listed research directions, contact Prof. Chalvatzaki directly to discuss a topic that suits your interests:

  • Perception/Computer Vision for robot manipulation and interaction;
  • Robot learning for embodied agents such as mobile manipulators;
  • Graph Neural Network representations for Robot Learning;
  • Uncertainty-aware robot learning for (mobile) manipulation;
  • Human behavior understanding and uncertainty quantification;
  • Human-Robot Interaction;
  • Long-horizon reasoning, with a focus on combining Graph-based Reinforcement Learning and Planning.

We offer these current topics directly for Bachelor and Master students at TU Darmstadt. Note that we cannot provide funding for any of these thesis projects.

We highly recommend that you take robotics and machine learning lectures (Robot Learning, Statistical Machine Learning, Reinforcement Learning, Grundlagen der Robotik, Probabilistic Graphical Models, and/or Deep Learning) before applying for a thesis. Even more important to us is that you have taken our Intelligent Robotic Manipulation Seminar and our Project Lab, or both Robot Learning: Integrated Project, Part 1 (Literature Review and Simulation Studies) and Part 2 (Evaluation and Submission to a Conference), before doing a thesis with us.

When you contact the advisor, please mention (1) WHY you are interested in the topic (dreams, parts of the problem, etc.), and (2) WHAT makes you a good fit for the project (e.g., class work, project experience, special programming or math skills, prior work, etc.). Supplementary materials (CV, grades, etc.) are highly appreciated. Such materials are not mandatory, but they help the advisor judge whether the topic is too easy, just about right, or too hard for you.

FOR FB16+FB18 STUDENTS: If you are a student from another department at TU Darmstadt (e.g., ME, EE, IST), you need an additional formal supervisor who officially issues the topic. Please do not try to arrange your home department advisor yourself; let the supervisor contact that person instead!

Topic 1: Large Vision-Language Neural Networks for Open-Vocabulary Robotic Manipulation

Scope: Master thesis
Advisor: Snehal Jauhri, Ali Younes
Added: June 28, 2023
Start: ASAP
Topic (in detail): Attach:Theses/OpenTopics/irosa_master_thesis_doc.pdf

Topic (in brief): Robots are expected to soon leave their factory/laboratory enclosures and operate autonomously in everyday unstructured environments such as households. Semantic information is especially important when considering real-world robotic applications where the robot needs to re-arrange objects as per a set of language instructions or human inputs (as shown in the figure).
Many sophisticated semantic segmentation networks exist [1]. However, a challenge when using such methods in the real world is that the semantic classes rarely align perfectly with the language input received by the robot. For instance, a human language instruction might request a ‘glass’ or ‘water’, but the semantic classes detected might be ‘cup’ or ‘drink’.
Nevertheless, with the rise of large language and vision-language models, we now have capable segmentation models that do not directly predict semantic classes but instead use learned associations between language queries and classes to give us 'open-vocabulary' segmentation [2]. Some of these models are especially powerful since they can be used with arbitrary language queries.
In this thesis, we aim to build on advances in 3D vision-based robot manipulation and large open-vocabulary vision models [2] to build a full pick-and-place pipeline for real-world manipulation. We also aim to find synergies between scene reconstruction and semantic segmentation to determine if knowing the object semantics can aid the reconstruction of the objects and, in turn, aid manipulation.
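The core idea behind open-vocabulary segmentation is that class labels and language queries live in a shared embedding space, so an arbitrary query (e.g., 'glass') can be matched to the nearest known class (e.g., 'cup') by similarity rather than exact name matching. The toy sketch below illustrates only this matching step, with made-up 3-D embeddings standing in for the embeddings a real vision-language model such as CLIP would produce; the function name and all values are hypothetical.

```python
import numpy as np

def open_vocab_match(query_emb, class_embs, class_names):
    """Return the class whose embedding is most similar (cosine) to the query.

    Cosine similarity stands in for the learned image-text alignment of a
    vision-language model; the embeddings here are toy values, not real ones.
    """
    q = query_emb / np.linalg.norm(query_emb)
    c = class_embs / np.linalg.norm(class_embs, axis=1, keepdims=True)
    sims = c @ q                      # cosine similarity per class
    return class_names[int(np.argmax(sims))]

# Toy 3-D "embeddings": the query 'glass' lies closest to 'cup', not 'drink'.
classes = ["cup", "drink", "bowl"]
embs = np.array([[0.9, 0.1, 0.0],
                 [0.1, 0.9, 0.0],
                 [0.0, 0.1, 0.9]])
query = np.array([0.8, 0.2, 0.1])    # hypothetical embedding of 'glass'
result = open_vocab_match(query, embs, classes)  # → 'cup'
```

In a full pipeline, the per-class embeddings would come from the segmentation model's text encoder, and the matched mask would then feed the grasping and placement stages.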

Highly motivated students can apply by sending an e-mail expressing their interest to Snehal Jauhri (email: snehal.jauhri@tu-darmstadt.de) or Ali Younes (email: ali.younes@tu-darmstadt.de), attaching a letter of motivation and possibly a CV.

Requirements: Enthusiasm, ambition, and a curious mind go a long way. There will be ample supervision provided to help the student understand basic as well as advanced concepts. However, prior knowledge of computer vision, robotics, and Python programming would be a plus.

References:
[1] Y. Wu, A. Kirillov, F. Massa, W.-Y. Lo, and R. Girshick, “Detectron2”, https://github.com/facebookresearch/detectron2, 2019.
[2] F. Liang, B. Wu, X. Dai, K. Li, Y. Zhao, H. Zhang, P. Zhang, P. Vajda, and D. Marculescu, “Open-vocabulary semantic segmentation with mask-adapted clip,” in CVPR, 2023, pp. 7061–7070. [Online]. Available: https://github.com/facebookresearch/ov-seg

Topic 2: On designing reward functions for robotic tasks

Scope: Bachelor’s thesis/Master’s thesis
Advisor: Davide Tateo, Georgia Chalvatzaki
Start: ASAP

Topic: Defining a proper reward function to solve a robotic task is a complex and time-consuming process. Reinforcement Learning algorithms are sensitive to reward function definitions, and an improper design of the reward function may lead to suboptimal performance of the robotic agent, even in simple low-dimensional environments. This issue makes it complex to design novel reinforcement learning environments, as the reward tuning procedure takes too much time and leads to overcomplicated and algorithm-specific reward functions.

The objective of this thesis is to study and develop a set of guidelines for building Reinforcement Learning environments representing robotics simulated tasks. We will analyze in-depth the impact of different types of reward functions on very simple tasks for continuous control such as navigation, manipulation, and locomotion. We will consider how the state space affects learning (e.g., dealing with rotations) and how we should deal with these issues in a standard Reinforcement Learning setting. Furthermore, we will verify how to design a reward function that leads to a policy producing smooth actions, to minimize the issues of the sim-to-real transfer of the learned behavior.
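To make the design space concrete: a typical dense reward for a continuous-control task combines a goal-progress term, a control-effort penalty, and an action-smoothness penalty, and the weighting between them is exactly the kind of tuning knob this thesis would study. The sketch below is an illustrative example for a point-mass navigation task, not a recommended design; the function name and all weights are hypothetical.

```python
import numpy as np

def navigation_reward(pos, goal, action, prev_action,
                      w_dist=1.0, w_ctrl=0.1, w_smooth=0.05):
    """Illustrative dense reward for a point-mass navigation task.

    All weights are hypothetical and would need tuning per task and
    per algorithm, which is precisely the difficulty discussed above.
    """
    dist_term = -w_dist * np.linalg.norm(goal - pos)               # progress
    ctrl_term = -w_ctrl * np.sum(action ** 2)                      # effort
    smooth_term = -w_smooth * np.sum((action - prev_action) ** 2)  # smoothness
    return dist_term + ctrl_term + smooth_term
```

The smoothness term penalizes abrupt changes between consecutive actions, which is one common way to encourage policies whose actions transfer better from simulation to a real robot.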

Requirements:

  • Curriculum Vitae (CV);
  • A motivation letter explaining your reasons for applying for this thesis and your academic/career objectives.

Minimum knowledge:

  • Good Python programming skills;
  • Basic knowledge of Reinforcement Learning.

Preferred knowledge:

  • Knowledge of the PyTorch library;
  • Knowledge of the MuJoCo and PyBullet libraries;
  • Knowledge of the MushroomRL library.

The accepted candidate will:

  • Port some classical MuJoCo/PyBullet locomotion environments into MushroomRL;
  • Design a set of simple manipulation tasks using PyBullet;
  • Design a set of simple navigation tasks;
  • Analyze the impact of different reward function definitions in these environments;
  • Verify whether the insights from the simple tasks still hold in more complex (already existing) environments.

Topic 3: Adaptive Human-Robot Interaction with Human Trust Maximization

Scope: Master thesis
Advisor: Kay Hansel, Georgia Chalvatzaki
Start: April 2023

Topic: Building trust between humans and robots is a major goal of Human-Robot Interaction (HRI). Usually, trust in HRI has been associated with risk aversion: a robot is trustworthy when its actions do not put the human at risk. However, we believe that trust is a bilateral concept that governs the behavior of both interacting parties and their participation in collaborative tasks. On the one hand, the human has to trust the robot's actions, e.g., that it delivers the requested object, acts safely, and interacts within a reasonable time horizon. On the other hand, the robot should trust the human's actions, e.g., maintain a reliable belief about the human's next action that will not lead to task failure, and certainty about the requested task. However, providing a computational model of trust is extremely challenging.

Therefore, this thesis explores trust maximization as a partially observable problem, where trust is considered as a latent variable that needs to be inferred. This consideration results in a dual optimization problem for two reasons: (i) the robot behavior must be optimized to maximize the human’s latent trust distribution; (ii) an optimization of the human’s prediction model must be performed to maximize the robot’s trust. To address this challenging optimization problem, we will rely on variational inference and metrics like Mutual Information for optimization.
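To give a minimal sense of what "trust as a latent variable that needs to be inferred" can mean, the toy sketch below maintains a Beta-distributed belief over the probability that the interaction partner acts as expected, updated from observed successes and failures. This is a deliberately simplified stand-in for the variational-inference formulation described above; the function name and prior are hypothetical.

```python
def update_trust(alpha, beta, success):
    """Conjugate Bayesian update of a Beta(alpha, beta) belief over trust,
    i.e., over the probability that the partner acts as expected.
    A simplified stand-in for the latent-variable inference in the thesis."""
    return (alpha + 1, beta) if success else (alpha, beta + 1)

alpha, beta = 1.0, 1.0               # uniform prior over trust
for outcome in [True, True, False, True]:
    alpha, beta = update_trust(alpha, beta, outcome)
mean_trust = alpha / (alpha + beta)  # posterior mean after 3 successes, 1 failure
```

The thesis itself targets a much richer setting, where both parties' latent trust distributions are optimized jointly via variational bounds on mutual information [4] rather than a simple conjugate update.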

Highly motivated students can apply by sending an e-mail expressing their interest to kay.hansel@tu-darmstadt.de, attaching a letter of motivation and possibly a CV.

Requirements:

  • Good knowledge of Python and/or C++;
  • Good knowledge of Robotics and Machine Learning;
  • Good knowledge of Deep Learning frameworks, e.g., PyTorch.

References:
[1] Xu, Anqi, and Gregory Dudek. “Optimo: Online probabilistic trust inference model for asymmetric human-robot collaborations.” ACM/IEEE HRI, IEEE, 2015;
[2] Kwon, Minae, et al. “When humans aren’t optimal: Robots that collaborate with risk-aware humans.” ACM/IEEE HRI, IEEE, 2020;
[3] Chen, Min, et al. “Planning with trust for human-robot collaboration.” ACM/IEEE HRI, IEEE, 2018;
[4] Poole, Ben et al. “On variational bounds of mutual information”. ICML, PMLR, 2019.
