Best Workshop Paper Finalist @ CoRL 2025 Workshop


One of our ICCV papers, "2HandedAfforder", was a Best Paper finalist at the Human-to-Robot (H2R) Workshop on Sensorizing, Modeling, and Learning from Humans at CoRL 2025.

TL;DR: We propose (i) a framework for extracting bimanual affordance data from human activity video datasets and (ii) a novel VLM-based bimanual affordance prediction model that predicts actionable bimanual affordance regions from task-related text prompts.


Marvin Heidinger, Snehal Jauhri, Vignesh Prasad, Georgia Chalvatzaki, "2HandedAfforder: Learning Precise Actionable Bimanual Affordances from Human Videos", IEEE/CVF International Conference on Computer Vision (ICCV), 2025.
