Overview

Computer vision research has recently seen impressive success in creating implicit neural representations of scenes and objects, such as neural radiance fields (NeRFs) and deep signed distance functions (DeepSDFs). Similarly, implicit representations (not necessarily learned), such as SDFs and Riemannian Motion Policies (RMPs), have been used with great success in robotic motion planning, manipulation, and perception. In this workshop, we seek to explore the future of implicit neural representations (INRs) in robotics. In such a quickly changing space, our aim is to provide the robotics community with a cohesive, unified event to discuss the impacts and possibilities of implicit neural representations.

Through invited talks and a poster session, we hope to explore:

  • how to quickly learn INRs online from sensor data,
  • how to tractably reason about uncertainty in these representations,
  • how to leverage INRs for robotic localization, motion planning, manipulation, and locomotion tasks,
  • how to embed semantic understanding in such representations, and
  • how to adapt INRs to simulate and predict dynamic and uncertain scenes.

We hope that this workshop will fuel interest and promote research in learning implicit scene representations for applications in robotics.


Key Details

  • Workshop date: Friday, May 27, 2022.
  • Workshop format: The workshop will be held in a hybrid format, with the in-person session in Room 115C from 8:30am to 5:45pm EDT. Virtual participants can join the session remotely using the Zoom link provided through Infovaya.

Workshop Recordings

All talks at the workshop were recorded on Zoom; we are working with our speakers and the conference organizers to obtain permission to post the recordings to YouTube for indefinite viewing. We will post links to the workshop recordings here as they become available.


Speakers

Photo of Andrew Davison.
Andrew Davison

Imperial College London

Photo of Chad Jenkins.
Chad Jenkins

University of Michigan

Photo of Jiajun Wu.
Jiajun Wu

Stanford University

Photo of Ransalu Senanayake.
Ransalu Senanayake

Stanford University

Photo of Shuran Song.
Shuran Song

Columbia University


Workshop Schedule

Time (EDT)    Event
08:30 - 08:45 Intro by organizers
08:45 - 09:15 Invited Talk (Andrew Davison)
09:15 - 09:45 Invited Talk (Chad Jenkins)
09:45 - 10:15 Young Researcher Spotlights
10:15 - 11:00 Poster Session + Coffee Break
11:00 - 11:30 Invited Talk (Jiajun Wu)
11:30 - 12:00 Invited Talk (Aleksandra Faust)
12:00 - 12:30 Morning Panel Discussion
12:30 - 14:00 Lunch Break
14:00 - 14:30 Invited Talk (Shuran Song)
14:30 - 15:00 Invited Talk (Angjoo Kanazawa)
15:00 - 15:30 Invited Talk (Ransalu Senanayake)
15:30 - 16:00 Afternoon Panel Discussion

Accepted Papers

Simultaneously Learning Contact and Continuous Dynamics, Bibit Bianchini and Michael Posa.
SDF-based RGB-D Camera Tracking in Neural Scene Representations, Leonard Bruns, Fereidoon Zangeneh, and Patric Jensfelt.
Learning Multi-Object Dynamics with Compositional NeRFs, Danny Driess, Zhiao Huang, Yunzhu Li, Russ Tedrake, and Marc Toussaint.
Neural Cost-to-Go Function Representation for High Dimensional Motion Planning, Jinwook Huh, Daniel D. Lee, and Volkan Isler.
Learning-Based Motion Planning for High-Speed Quadrotor Flight, Elia Kaufmann and Davide Scaramuzza.
Learning Prior Mean Function for Gaussian Process Implicit Surfaces, Tarik Kelestemur, Taskin Padir, Robert Platt, and David Rosen.
Implicit Distance Functions: Learning and Applications in Control, Mikhail Koptev, Nadia Figueroa, and Aude Billard.
NeRF-ysics: A Differentiable Pipeline for Enriching NeRF-Represented Objects with Dynamical Properties, Simon Le Cleac’h, Taylor Howell, Mac Schwager, and Zachary Manchester.
ReDSDF: Regularized Deep Signed Distance Fields for Robotics, Puze Liu, Kuo Zhang, Davide Tateo, Snehal Jauhri, Jan Peters, and Georgia Chalvatzaki.
iSDF: Real-Time Neural Signed Distance Fields for Robot Perception, Joseph Ortiz, Alexander Clegg, Jing Dong, Edgar Sucar, David Novotny, Michael Zollhoefer, and Mustafa Mukadam.
Sampling-free obstacle gradients and reactive planning in Neural Radiance Fields, Michael Pantic, Cesar Cadena, Roland Siegwart, and Lionel Ott.
Self-supervised implicit shape reconstruction and pose estimation for video prediction, Diego Patiño*, Karl Schmeckpeper*, Hita Gupta, Georgios Georgakis, and Kostas Daniilidis.
Improving Sample-based MPC with Normalizing Flows & Out-of-distribution Projection, Thomas Power and Dmitry Berenson.
VIRDO: Visio-tactile Implicit Representations of Deformable Objects, Youngsun Wi, Pete Florence, Andy Zeng, and Nima Fazeli.

Organizers

Photo of Shreyas Kousik.
Shreyas Kousik

Stanford University

Photo of Preston Culbertson.
Preston Culbertson

Stanford University

Photo of Mac Schwager.
Mac Schwager

Stanford University

Photo of Jeannette Bohg.
Jeannette Bohg

Stanford University


Contact Information

If you have any questions, please do not hesitate to contact Preston Culbertson or Shreyas Kousik for more information.