Vignesh Prasad

Vignesh Prasad joined the PEARL group as a Postdoc in February 2024 under Prof. Georgia Chalvatzaki. His research interests span the fields of Robot Learning, Computer Vision, and Human-Robot Interaction.

Vignesh defended his Ph.D. in Computer Science at TU Darmstadt under the supervision of Prof. Jan Peters, Ph.D., and Prof. Dr. Dr. Ruth Stock-Homburg with a thesis titled “Learning Human-Robot Interaction: A Case Study on Human-Robot Handshaking”. His Ph.D. was done in collaboration with Prof. Georgia Chalvatzaki and Dr.-Ing. Dorothea Koert.

Prior to this, Vignesh worked as a researcher in the Machine Vision Group at TCS Innovation Labs, Kolkata under Dr. Brojeshwar Bhowmick, where he worked on Deep Learning for Monocular 3D Reconstruction and Computer Vision. During this time, Vignesh’s work won the Best Paper Award at the 2018 Indian Conference on Computer Vision, Graphics and Image Processing (ICVGIP). Vignesh pursued his Bachelor’s and Master’s in Computer Science and Engineering at IIIT Hyderabad, India. His Master’s thesis, titled “Learning Effective Navigational Strategies for Active Monocular Simultaneous Localization and Mapping”, was done at the Robotics Research Center under Dr. K. Madhava Krishna in collaboration with Prof. Balaraman Ravindran.

Research Interests

Robot Learning from Demonstration, Computer Vision, Human-Robot Interaction and Machine Learning

News

July 2025 – 2 Papers accepted at ICCV 2025:
2HandedAfforder: Learning Precise Actionable Bimanual Affordances from Human Videos
6DOPE-GS: Online 6D Object Pose Estimation using Gaussian Splatting

July 2025 – 2 Papers accepted at Humanoids 2025:
The Role of Embodiment in Intuitive Whole-Body Teleoperation for Mobile Manipulation
Neural Multi-Axis Grasp Generation for Industrial Objects

June 2025 – Paper accepted for publication at IEEE T-RO:
Learning Multimodal Latent Dynamics for Human-Robot Interaction

Oct 2024 – Presented & won a Best Workshop Paper Award 🏆 for MoVEInt at the Workshop on Nonverbal Cues for Human-Robot Cooperative Intelligence at IROS 2024!

July 2024 – Won a Best Workshop Paper Award 🏆 for “ActionFlow: Equivariant, Accurate, and Efficient Policies with Spatially Symmetric Flow Matching” at the Workshop on Structural Priors as Inductive Biases for Learning Robot Dynamics at R:SS 2024

July 2024 – Co-Organizing the Workshop on Generative Modeling Meets Human-Robot Interaction at R:SS 2024

April 2024 – Paper accepted for publication at IEEE RA-L & to be presented at IROS 2024:
MoVEInt: Mixture of Variational Experts for Learning Human-Robot Interactions from Demonstrations

February 2024 – Joined PEARL Lab as a Post Doc

December 2023 – Successfully defended my PhD titled “Learning Human-Robot Interaction: A Case Study on Human-Robot Handshaking”

Key References

6DOPE-GS: Online 6D Object Pose Estimation using Gaussian Splatting, Y. Jin, V. Prasad, S. Jauhri, M. Franzius, G. Chalvatzaki,
IEEE/CVF International Conference on Computer Vision (ICCV) 2025.
[Webpage] [Code] (Coming soon)

The Role of Embodiment in Intuitive Whole-Body Teleoperation for Mobile Manipulation, S. B. Moyen, R. Krohn, S. C. Lueth, K. Pompetzki, J. Peters, V. Prasad, G. Chalvatzaki, IEEE International Conference on Humanoid Robots (Humanoids) 2025.

Neural Multi-Axis Grasp Generation for Industrial Objects, S. Đonlić, V. Prasad, M. Rupp, G. Chalvatzaki, IEEE International Conference on Humanoid Robots (Humanoids) 2025.

Learning Multimodal Latent Dynamics for Human-Robot Interaction, V. Prasad, L. Heitlinger, D. Koert, R. Stock-Homburg, J. Peters, G. Chalvatzaki, IEEE Transactions on Robotics (T-RO) 2025.
MILD: Multimodal Interactive Latent Dynamics for Learning Human-Robot Interaction, V. Prasad, D. Koert, R. Stock-Homburg, J. Peters, G. Chalvatzaki, IEEE International Conference on Humanoid Robots (Humanoids) 2022.
[Website] [Code]

I3: Interactive Iterative Improvement for Few-Shot Action Segmentation, M. Gassen, F. Metzler, E. Prescher, V. Prasad, L. Scherf, F. Kaiser, D. Koert, IEEE International Conference on Robot & Human Interactive Communication (RO-MAN) 2023.

Learning Human-like Hand Reaching for Human-Robot Handshaking, V. Prasad, R. Stock-Homburg, J. Peters, IEEE International Conference on Robotics & Automation (ICRA) 2021.
[Code]

Human-Robot Handshaking: A Review, V. Prasad, R. Stock-Homburg, J. Peters, International Journal of Social Robotics (IJSR) 2022.
Advances in Human-Robot Handshaking, V. Prasad, R. Stock-Homburg, J. Peters, International Conference on Social Robotics (ICSR) 2020.

Variational Clustering: Leveraging Variational Autoencoders for Image Clustering, V. Prasad, D. Das, B. Bhowmick, IEEE International Joint Conference on Neural Networks (IJCNN) 2020.

SfMLearner++: Learning Monocular Depth & Ego-Motion using Meaningful Geometric Constraints, V. Prasad, B. Bhowmick, IEEE Winter Conference on Applications of Computer Vision (WACV) 2019.
Epipolar Geometry based Learning of Multi-view Depth and Ego-Motion from Monocular Sequences, V. Prasad, D. Das, B. Bhowmick, Indian Conference on Computer Vision, Graphics and Image Processing (ICVGIP) 2018.
Best Paper Award at ICVGIP 2018

Learning to Prevent Monocular SLAM Failure using Reinforcement Learning, V. Prasad, K. Yadav, R. S. Saurabh, S. Daga, N. Pareekutty, K. M. Krishna, B. Ravindran, B. Bhowmick, Indian Conference on Computer Vision, Graphics and Image Processing (ICVGIP) 2018.

Data-driven Strategies for Active Monocular SLAM using Inverse Reinforcement Learning, V. Prasad, R. Jangir, B. Ravindran, K. M. Krishna, International Conference on Autonomous Agents and Multiagent Systems (AAMAS) 2017.
[Code]

Contact

> Email: vignesh.prasad@tu-darmstadt.de
> Address:
Vignesh Prasad
TU Darmstadt,
Landwehrstr. 50A, 64293 Darmstadt
Office: Room 108, Building S4|2