Overview

The workshop will be held on 4 November 2019 at The Venetian Macao, Macau, China, in the context of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), one of the largest and highest-impact robotics research conferences worldwide.

Manipulating objects autonomously in unstructured environments is one of the basic skills robots need to support people in everyday life, outside industrial cages. The study of autonomous manipulation in robotics aims at transferring human-like perceptive skills to robots so that, combined with state-of-the-art control techniques, they can achieve comparable performance in manipulating objects.

The great complexity of this task makes autonomous manipulation one of the open problems in robotics, and it has drawn considerable interest from the community in recent years. Conventional approaches attempt to reconstruct the scene using 3D vision and compute a grasping pose that satisfies force-closure constraints, or query a database of precomputed or learned poses. More recently, grasping has been addressed using end-to-end learning methods with impressive performance. However, these methods require robots to perform thousands of trials. For this reason, their application is often restricted to simple grippers and to scenarios in which images are acquired from a top-down view. Manipulation with multi-finger hands and mobile robots unfortunately remains out of the scope of these techniques due to the complexity of the problem.
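To make the conventional, geometry-based side of this spectrum concrete, below is a minimal sketch (our own illustration, not taken from any of the talks) of the classical antipodal force-closure test for a two-finger grasp with Coulomb friction; the function name and example values are illustrative assumptions:

import numpy as np

# Illustrative sketch only: a two-contact grasp with Coulomb friction is
# force closure iff the line connecting the contacts lies inside both
# friction cones, whose half-angle is arctan(mu).
def antipodal_force_closure(p1, n1, p2, n2, mu):
    half_angle = np.arctan(mu)            # friction cone half-angle
    line = p2 - p1
    line = line / np.linalg.norm(line)    # unit vector from contact 1 to 2
    n1 = n1 / np.linalg.norm(n1)          # inward surface normal at contact 1
    n2 = n2 / np.linalg.norm(n2)          # inward surface normal at contact 2
    a1 = np.arccos(np.clip(np.dot(line, n1), -1.0, 1.0))
    a2 = np.arccos(np.clip(np.dot(-line, n2), -1.0, 1.0))
    return a1 <= half_angle and a2 <= half_angle

# Example: parallel-jaw grasp on opposite faces of a box with mu = 0.5.
p1, n1 = np.array([0.0, -0.05, 0.0]), np.array([0.0, 1.0, 0.0])
p2, n2 = np.array([0.0, 0.05, 0.0]), np.array([0.0, -1.0, 0.0])
print(antipodal_force_closure(p1, n1, p2, n2, mu=0.5))  # True

Full force-closure analysis of multi-contact grasps is considerably more involved (it requires reasoning about the grasp wrench space); this two-contact condition is only the simplest instance of the idea.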


Objectives

The aim of this workshop is to present and discuss the different techniques proposed for addressing the same problem: object manipulation.

More than a comparison, this workshop is designed to encourage researchers from different fields, such as robotics and deep learning, to share their approaches, ideas, and problems regarding autonomous manipulation.


Topics of interest

The following list provides a set of topics (keywords) addressed in the workshop.



Workshop Program

Time Activity
09:00 – 09:15 Introduction
09:15 – 09:45 Speaker 1: Lorenzo Natale
09:45 – 10:15 Speaker 2: Oliver Brock
10:15 – 10:45 Spotlight Presentations
10:45 – 11:15 Coffee Break: Poster Session 1
11:15 – 11:45 Speaker 3: Tamim Asfour
11:45 – 12:15 Speaker 4: Robert Platt
12:15 – 12:45 Speaker 5: Markus Vincze
13:00 – 14:00 Lunch Break
14:15 – 14:45 Speaker 6: Juxi Leitner
14:45 – 15:15 Speaker 7: Abhishek Gupta
15:15 – 15:45 Speaker 8: Lorenzo Jamone
15:45 – 16:15 Coffee Break: Poster Session 2
16:15 – 17:15 Q&A

Q&A Session

The Q&A session gives the audience the opportunity to ask the speakers questions about the topics addressed in the workshop.

To facilitate the process, questions will be collected on a Q&A and polling platform. The most popular questions will be selected for discussion during the workshop.


Invited Speakers

Lorenzo Natale, Istituto Italiano di Tecnologia (IIT)

Lorenzo Natale is a Tenured Senior Researcher at the Italian Institute of Technology. He received his degree in Electronic Engineering (with honours) and his Ph.D. in Robotics from the University of Genoa. He was later a postdoctoral researcher at the MIT Computer Science and Artificial Intelligence Laboratory. He has also been an invited professor at the University of Genoa, where he taught courses on Natural and Artificial Systems and Anthropomorphic Robotics.

Grasping and benchmarking on the iCub humanoid robot

Grasping of unknown objects, or of objects whose pose is uncertain, is still an open problem in robotics. Missing or noisy information on object models and poses strongly affects manipulation performance. On the other hand, research on grasping is made difficult by the lack of methodologies for comparing results obtained on different robotic platforms. In the past few years we have developed a framework for grasping unknown objects of various shapes using superquadric models. We initially proposed to model objects using a single superquadric function and, more recently, extended this approach to use multiple superquadrics to model objects with finer detail. In the first part of this talk I will review our work, showing experiments with the iCub humanoid robot on the YCB dataset. In the second part of the talk I will describe a benchmarking protocol and software suite called GRASPA, specifically devised to test the effectiveness of grasp planners on real robots, proposing various metrics that take into account the features and limits of the specific platform.
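As general background (our addition; the abstract does not specify the exact formulation used in this work), a superquadric is commonly described by its inside-outside function:

F(x, y, z) = \left( \left| \frac{x}{a_x} \right|^{2/\epsilon_2} + \left| \frac{y}{a_y} \right|^{2/\epsilon_2} \right)^{\epsilon_2/\epsilon_1} + \left| \frac{z}{a_z} \right|^{2/\epsilon_1}

where a_x, a_y, a_z are the semi-axis lengths and \epsilon_1, \epsilon_2 control how round or box-like the shape is. Since F(x, y, z) < 1 inside the object, F = 1 on its surface, and F > 1 outside, the representation is convenient both for fitting to partial point clouds and for reasoning about candidate grasp poses.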

Oliver Brock, TU Berlin

Oliver Brock is the Alexander-von-Humboldt Professor of Robotics in the School of Electrical Engineering and Computer Science at the Technische Universität Berlin, a German University of Excellence. He received his Ph.D. from Stanford University in 2000 and held postdoctoral positions at Rice University and Stanford University. He was an Assistant Professor and Associate Professor in the Department of Computer Science at the University of Massachusetts Amherst before moving back to the Technische Universität Berlin in 2009. The research of Brock's lab, the Robotics and Biology Laboratory, focuses on robot intelligence, mobile manipulation, interactive perception, grasping, manipulation, soft material robotics, interactive machine learning, deep learning, motion generation, and the application of algorithms and concepts from robotics to computational problems in structural molecular biology. Oliver Brock is the coordinator of the Research Center of Excellence "Science of Intelligence". He is an IEEE Fellow and was president of the Robotics: Science and Systems Foundation from 2012 until 2019.

Everything is Better With Understanding, Even Learning Manipulation

The most appropriate methodology for solving a problem follows from the problem itself. This, of course, also holds for manipulation. Consequently, the discussion about the most appropriate methodology for manipulation, whether it be learning-free or learning-based (supervised, imitation, or deep), should be informed by an understanding of manipulation. In this talk, I will discuss what I believe to be our current and relevant understanding of manipulation, and how we can make manipulation versatile, dexterous, and robust. I will also attempt to infer from this which of the possible methodologies might be most appropriate for particular aspects of manipulation. I hope to convince you that making progress in manipulation is not a question of mutual exclusivity between methodologies but one of clever combination.

Tamim Asfour, Karlsruhe Institute of Technology (KIT)

Tamim Asfour is Full Professor of Humanoid Robotics at the Institute for Anthropomatics and Robotics at the Karlsruhe Institute of Technology (KIT). His research focuses on the engineering of high-performance 24/7 humanoid robotics, as well as on the mechano-informatics of humanoids: the synergetic integration of informatics, artificial intelligence, and mechatronics into humanoid robot systems that are able to predict, act, and interact in the real world. In his research, he reaches out and connects to neighboring areas through large-scale national and European interdisciplinary projects combining robotics with machine learning and computer vision. Tamim is the developer of the ARMAR humanoid robot family. He is scientific spokesperson of the KIT Center "Information · Systems · Technologies (KCIST)", president of the Executive Board of the German Robotics Society (DGR), the Founding Editor-in-Chief of the IEEE-RAS Humanoids Conference Editorial Board, and Deputy Editor-in-Chief and Editor of the IEEE Robotics and Automation Letters.

Manipulation: The Grand Challenge of Robotics

The ability to manipulate objects is fundamental and requires a holistic consideration of different approaches in perception, learning, knowledge representation, control, and robot design. Despite the progress of recent years, a breakthrough in robotic manipulation has not yet been achieved. Solutions are often limited to sunshine environments and rarely transfer to different tasks. In this talk, I will discuss our current progress and address the most relevant open problems on the way towards versatile robotic manipulation systems. I will present the grasping and manipulation system underlying the ARMAR humanoid robots, together with experimental results on single and bimanual grasping and manipulation tasks in kitchen and industrial environments as well as in hazardous environments.

Robert Platt, Northeastern University

Rob Platt is an associate professor in the Khoury College of Computer Sciences at Northeastern University. He is interested in developing robots that can perform complex manipulation tasks alongside humans in the uncertain everyday world. Much of his work is at the intersection of robotic policy learning, planning, and perception. Prior to coming to Northeastern, he was a Research Scientist at MIT and a technical lead at NASA Johnson Space Center, where he led the development of the control and autonomy subsystems of Robonaut 2, the first humanoid robot in space. Professor Platt is an inventor on more than 21 US patents or patent applications and an author of more than 45 papers.

Five different approaches to solving pick and place

I was recently surprised to discover that members of my research group were working on five different approaches to exactly the same problem: prehensile pick and place. I think this reflects the fact that robotics is changing quickly in response to new machine learning methods. There are a lot of new algorithms out there and it's not yet clear which ones will turn out to be most helpful to robotics. In this talk, I will (briefly) describe the five different approaches to the prehensile pick and place problem currently in progress in our lab. Interestingly, most of these approaches seek to use elements from more traditional planning to speed up policy learning. This could be the key to successful application of policy learning to robotics. The question is how best to accomplish this.

Markus Vincze, TU Wien

Markus founded the V4R group in 1996 with the intention of making robots see, and the group is still working to improve robot perception. Markus received his diploma in mechanical engineering from the Technical University Wien (TUW) in 1988 and an M.Sc. from Rensselaer Polytechnic Institute, USA, in 1990. He finished his PhD at TUW in 1993. With a grant from the Austrian Academy of Sciences he worked at HelpMate Robotics Inc. and at the Vision Laboratory of Gregory Hager at Yale University. In 2004, he obtained his habilitation in robotics. He presently leads the "Vision for Robotics" laboratory at TUW, with the vision of making robots see. V4R regularly coordinates EU projects (e.g., ActIPret, robots@home, HOBBIT, Squirrel) and national research projects (e.g., vision@home) and contributes to research (e.g., CogX, STRANDS, ALOOF) and innovation projects (e.g., Redux, FloBot). With Gregory Hager he edited a book on Robust Vision for IEEE, and he is (co-)author of 42 peer-reviewed journal articles and over 300 other reviewed publications. He was the program chair of ICRA 2013 in Karlsruhe and organised HRI 2017 in Vienna together with Astrid Weiss and Manfred Tscheligi. Markus' special interests are cognitive computer vision techniques for robotics solutions situated in real-world environments, and especially homes.

Detecting and Handling Objects for Future Service and Industrial Robots

In the near future, robots will operate more and more alongside humans. Robots will be expected to know about all the objects in the domains where they work, in homes as well as in industry. This will require methods to rapidly learn new objects and to recognise and manipulate them. The talk will review recent advances such as learning objects from CAD models, learning object relations and parts, the use of semantic knowledge related to objects, and the detection of learned object classes from mobile robots.

Juxi Leitner, Australian Centre for Robotic Vision (ACRV)

Juxi Leitner is a co-founder of LYRO Robotics and leads the robotic manipulation efforts at the Australian Centre for Robotic Vision (ACRV) in Brisbane, Australia. In 2017, he led Team ACRV to victory in the Amazon Robotics Challenge with their purpose-built robot Cartman. He is interested in combining computer vision, machine learning, and robotics to create embodied agents that can robustly interact with the real world. He holds a PhD from IDSIA (the Swiss AI lab), a Joint European Master in Space Science and Technology, and a BSc in Software and Information Engineering from TU Wien.

Integrating Intent into Grasp Learning

This talk will describe our recent work on extending our reactive grasping research to what we refer to as grasping with intent (GwI). Breakthroughs in deep learning, specifically around object classification and feature tracking, have also increased the capabilities of robotic systems. This can be seen in the growth in capabilities over the three years of the Amazon Robotics/Picking Challenge (ARC). So far, though, progress has been limited mainly to grasping, i.e., focusing on how to create a robotic system that is able to pick up an object. In most cases these systems are highly optimised for top-down grasps, yet they do not include information about the task to be performed in the grasp selection. We see GwI as the necessary intermediate step between current grasping solutions and object manipulation. What are good metrics to use at that task level? And how can we create more formalised approaches to manipulation that allow better comparison and can also be used to focus our research as a manipulation community?

Abhishek Gupta, Berkeley Artificial Intelligence Research (BAIR) Lab, UC Berkeley

Abhishek Gupta is a fifth-year PhD student at UC Berkeley working with Pieter Abbeel and Sergey Levine. He is interested in algorithms that leverage reinforcement learning to solve real-world robotic manipulation tasks. Currently he is pursuing the directions of effective reward supervision in reinforcement learning, learning from demonstrations, meta-reinforcement learning, and multi-task reinforcement learning. In the past, he has explored ideas from model-based reinforcement learning, hierarchical reinforcement learning, dexterous manipulation, multi-task learning, and meta-reinforcement learning. He has spent time at Google Brain working on multi-task hierarchical reinforcement learning. He is the recipient of the NDSEG and NSF graduate research fellowships, and several of his works have been presented as spotlight presentations at top-tier machine learning and robotics conferences such as NeurIPS, ICML, ICLR, AAAI, RSS, CoRL, IROS, and ICRA. His work has been covered by popular news outlets such as the New York Times and VentureBeat.

Towards real world deep reinforcement learning for manipulation

Reinforcement learning has the potential to be a general-purpose, powerful way to acquire complex robotic manipulation behaviors. However, applications of reinforcement learning in robotic manipulation have largely been limited to very simple lab settings or to simulation. In this talk, I will discuss the paradigm of real-world reinforcement learning and the challenges in making this learning paradigm work. I will present some of our efforts towards making these challenges more tractable and explain how they apply to robotic manipulation. In addition, I will discuss how this paradigm of real-world RL can benefit from other approaches to robotic manipulation.

Lorenzo Jamone, Queen Mary University of London

Lorenzo Jamone is Lecturer in Robotics and Director of the CRISP group at the School of Electronic Engineering and Computer Science (EECS) of the Queen Mary University of London (QMUL). The CRISP group is part of ARQ (Advanced Robotics at Queen Mary). Lorenzo's main interest is in Cognitive Robotics: building intelligent robots by taking inspiration from humans, and validating theories of human cognition by testing computational models on robots. Topics include: dexterous manipulation, visuo-haptic perception and exploration, object affordances, tool use, body schema, eye-hand coordination, human-robot interaction and collaboration, tactile and force sensing.

Some useful ingredients for robotic manipulation

Robot, can you make a sandwich? Unfortunately, the answer is no. In this talk I will discuss some of the challenges of robotic manipulation in unstructured environments, and some possible "ingredients" for tackling the many open problems.


Selected Contributions

All accepted contributions will be presented during the workshop in two poster sessions. The authors will also provide an overview of their work during the spotlight presentation session.

Poster Session 1 - Morning

Poster Session 2 - Afternoon


Important Dates

