ICRA 2020 ViTac Workshop

ViTac 2020: Closing the Perception-Action Loop with Vision and Tactile Sensing

The workshop will take place online (live) via Zoom on Sunday 31 May 2020, 09:00–18:30 (Paris time).

Detailed program available here.

Accepted papers available here.


To participate in the workshop, please register here for the Zoom meeting.

Participation in the workshop is free of charge.

Join the workshop Slack channel to ask questions to the speakers and take part in live discussions.

Recordings of the workshop have been uploaded to our YouTube channel. We had around 500 registrations!

Chat transcripts from the meeting: morning session and afternoon session.

This workshop will cover recent progress in combining vision and touch sensing for an integrated perception-action loop. The full-day workshop aims to foster active collaboration and discussion of methods for fusing vision and touch in both perception and action, of the challenges in this important topic, and of its applications.

Scope

The workshop will be divided into three themes: development of touch sensors for perception-action tasks (hardware focused); multimodal robot perception and action using vision and tactile sensing; and inspirations from how humans integrate visual and haptic information for perception-action tasks.

Topics of Interest

  • trends in combining visual and tactile sensing for robot perception and actions
  • development of optical tactile sensors (using visual cameras or optical fibres)
  • integration of tactile sensing and vision for robot tasks, e.g., manipulation and grasping
  • roles of vision and touch sensing in different tasks, e.g., object recognition, localization, object exploration, planning, learning and action selection
  • interplay between perception and actions with touch sensing and vision
  • bio-inspired approaches for fusion of vision and touch sensing in perception and actions
  • psychophysics and neuroscience of combining vision and tactile sensing in humans and animals
  • computational methods for processing vision and touch data in robot learning
  • deep learning for optical tactile sensing and relation/interaction with deep learning for robot vision
  • the use of vision and touch for safe human-robot interaction/collaboration

Invited Speakers

  • Peter Allen (Columbia University) – recognised for his pioneering work on the integration of vision and tactile sensing, especially in robot grasping;
  • Dieter Fox (University of Washington and Nvidia) – world-renowned roboticist whose group has produced numerous works on using vision and tactile sensing for different tasks;
  • Vincent Hayward (UPMC Univ Paris) – distinguished for his research on human perception, especially touch and haptics;
  • Alberto Rodriguez (MIT) – an expert in grasping and manipulation whose team won several Amazon Picking Challenges using both vision and tactile sensing;
  • Roberto Calandra (Facebook) – an expert in machine learning and reinforcement learning, with applications in tactile sensing and dynamics modeling;
  • Kaspar Althoefer (Queen Mary University of London) – an expert in developing optical (fibre) based tactile sensors and soft robotics for different applications;
  • Huaping Liu (Tsinghua University) – recognised for his contributions to multimodal perception with vision and tactile sensing;
  • Robert Haschke (Bielefeld University) – a renowned expert on tactile-servoing-based manipulation;
  • Lorenzo Natale (Istituto Italiano di Tecnologia) – recognised for his pioneering work in active touch and visual perception on the iCub robot.

Call for Contributions

Posters and live demonstrations will be selected from a call for extended abstracts, reviewed by the organizers and invited reviewers. The authors of the best posters will be invited to give talks at the workshop. All submissions will be reviewed in a single-blind process. Accepted contributions will be presented at the workshop as posters. Submissions must be in PDF, follow the IEEE conference style (two columns, no more than two pages), and be submitted via the EasyChair system (submission link).

Organizers

Shan Luo (University of Liverpool)

Nathan Lepora (Univ Bristol & Bristol Robotics Lab)

Wenzhen Yuan (Carnegie Mellon University)

Gordon Cheng (Technische Universität München)

We are grateful for the support of the following organisations: