Human-to-Robot Handovers
(Essential Skills Sub-Track 4)
The competition was held in Yokohama on Thursday, May 16. Teams need to execute the human-to-robot handover configurations, each only once, with unknown objects that the organizers provide on-site.
Objects
The set of objects for the competition consists of 6 containers that are unknown to the teams, to test the capability of the systems to generalise to novel and unseen objects. These containers may range from drinking cups to food containers, and they may be filled with contents other than rice and in amounts different from those used in the preparation phase.
Logistics
- Organizers will provide a table (dimensions: W1800xD600xH700 mm; W: width, D: depth, H: height; thickness: 15 mm).
- Two UR5 robotic arms will be available on-site (only for teams that declared they cannot bring their own robotic arm). Only one of these robotic arms is equipped with a 2-finger parallel Robotiq gripper.
- Teams are expected to bring their own setup. This includes a robotic arm, an end effector (e.g., a 2-finger gripper such as the Robotiq), a laptop/workstation (tower PC with monitor), cameras (with tripods and USB-C cables), a digital scale, a calibration pattern, a socket extension lead, an Ethernet cable connecting the PCs and the robotic arm, and a socket adapter.
- Teams are expected to mount their own setup configuration.
- For safety, a second person (e.g., from the team) must be ready to stop the robot in case of anomalous behaviour.
Setting up instructions
The setup includes a robotic arm with at least 6 degrees of freedom (e.g., UR5, KUKA) equipped with a 2-finger parallel gripper (e.g., Robotiq 2F-85); a table where the handover takes place and on which the robot is placed; the selected containers and contents; up to two cameras (e.g., Intel RealSense D435i); and a digital scale to weigh the container. The two cameras should be placed 40 cm from the robotic arm, e.g., using tripods, and oriented so that they both view the centre of the table. The illustration below shows the 3D layout of the setup within a space of 4x4 metres.
- Teams must prepare the sensing setup such that the cameras are synchronised, calibrated, and localised with respect to a calibration board. We recommend that the cameras record RGB sequences at 30 Hz with a resolution of 1280 × 720 pixels (based on the setup used in the CORSMAL Benchmark); a minimal configuration sketch is given after these instructions.
- Teams should verify the behaviour of the robotic arm prior to executing the task (e.g., end effector, speed, kinematics).
- Teams must measure the mass of the container and content, if any, for each configuration before and after executing the handover to the robot, using the digital scale.
- Teams will execute each handover configuration only once.
- A volunteer from another team will hand the container over to the robot, using a random/natural grasp for each configuration.
- Any initial robot pose can be chosen with respect to the environment setup; however, the volunteer is expected to stand on the opposite side of the table from the robot.
These instructions have been revised from the CORSMAL Human-to-Robot Handover Benchmark document.
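As a reference for preparing the sensing setup, the sketch below shows one possible way to record the colour stream of two Intel RealSense D435i cameras at the recommended 1280 × 720 resolution and 30 Hz, and to estimate each camera's pose with respect to a checkerboard calibration pattern. This is only a minimal sketch under stated assumptions (pyrealsense2 and OpenCV are installed; the device serial numbers, checkerboard size, and square length are placeholders to be replaced with your own); it is not part of the official benchmark code and does not handle synchronisation between the two streams.

    # Minimal sketch: two D435i colour streams at 1280x720 @ 30 Hz, plus the pose of a
    # checkerboard calibration board in each camera frame. Serial numbers, board size,
    # and square length below are placeholder assumptions.
    import numpy as np
    import cv2
    import pyrealsense2 as rs

    SERIALS = ["CAM1_SERIAL", "CAM2_SERIAL"]   # replace with the actual device serials
    BOARD = (9, 6)                             # inner corners of the checkerboard (assumed)
    SQUARE_M = 0.025                           # checkerboard square size in metres (assumed)

    # 3D coordinates of the board corners in the board reference frame.
    objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2) * SQUARE_M

    pipelines = []
    for serial in SERIALS:
        cfg = rs.config()
        cfg.enable_device(serial)
        cfg.enable_stream(rs.stream.color, 1280, 720, rs.format.bgr8, 30)
        pipe = rs.pipeline()
        profile = pipe.start(cfg)
        pipelines.append((serial, pipe, profile))

    for serial, pipe, profile in pipelines:
        frames = pipe.wait_for_frames()
        color = np.asanyarray(frames.get_color_frame().get_data())

        # Camera intrinsics reported by the device, used to estimate the board pose.
        intr = profile.get_stream(rs.stream.color).as_video_stream_profile().get_intrinsics()
        K = np.array([[intr.fx, 0, intr.ppx], [0, intr.fy, intr.ppy], [0, 0, 1]])
        dist = np.array(intr.coeffs)

        gray = cv2.cvtColor(color, cv2.COLOR_BGR2GRAY)
        found, corners = cv2.findChessboardCorners(gray, BOARD)
        if found:
            # rvec/tvec give the pose of the calibration board in this camera's frame.
            ok, rvec, tvec = cv2.solvePnP(objp, corners, K, dist)
            print(serial, "board pose:", rvec.ravel(), tvec.ravel())

    for _, pipe, _ in pipelines:
        pipe.stop()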
Procedure
For each configuration:
- Prepare the container either empty or filled with its predefined content type and level
- Weigh the (filled) container before the execution of the task
- Place the container at the centre of the table, at a distance not reachable by the robotic arm (safety)
- The volunteer grasps the container from its location with a natural grasp
- The volunteer carries the container with the intention of handing it over to the robot
- The robot should track and predict the pose of the container to move the arm towards the handover area (see the prediction sketch after this list)
- The volunteer hands the container over to the robot
- The robot closes the end effector and grasps the container
- The robot delivers the container upright within the predefined area.
- Measure the distance between the initial location (e.g., the centre of the table) and the delivery location of the container, unless the handover failed (see the measurement sketch after this procedure)
- Weigh the (filled) container after the execution of the task
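How the robot tracks and predicts the container pose is left to each team; the fragment below is only a naive illustration of the prediction step, assuming that a perception module already provides timestamped estimates of the container position and using a constant-velocity extrapolation over a short horizon. The function and variable names here are hypothetical.

    import numpy as np

    def predict_position(timestamps, positions, t_ahead=0.3):
        """Naive constant-velocity prediction of the container position.

        timestamps: (N,) observation times in seconds.
        positions:  (N, 3) estimated container positions in metres.
        t_ahead:    prediction horizon in seconds beyond the last observation.
        """
        # Fit position = intercept + velocity * t independently on each axis.
        velocity, intercept = np.polyfit(timestamps, positions, 1)
        return intercept + velocity * (timestamps[-1] + t_ahead)

    # Hypothetical usage with three observations of the container centroid:
    t = np.array([0.0, 0.1, 0.2])
    p = np.array([[0.80, 0.00, 0.90],
                  [0.75, 0.00, 0.90],
                  [0.70, 0.00, 0.90]])
    print(predict_position(t, p))   # container keeps moving along -x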
Note that the volunteer should avoid assisting the robot (i.e., remaining still at a location until the robot can pick up the container) or adopting an adversarial behaviour (i.e., making it harder for the robot to reach the object).
This procedure has been revised from the CORSMAL Human-to-Robot Handover Protocol document.
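For the distance and mass measurements listed in the procedure, a simple helper like the one below may be used. It assumes the delivery error is the planar Euclidean distance between two (x,y) positions in millimetres and that the difference between the masses measured before and after the handover corresponds to content lost (e.g., spilled) during the execution; the example values are hypothetical.

    import math

    def delivery_distance_mm(initial_xy, final_xy):
        """Planar Euclidean distance [mm] between the initial (or target)
        location and the final location of the container base."""
        return math.hypot(final_xy[0] - initial_xy[0], final_xy[1] - initial_xy[1])

    def mass_difference_g(initial_mass_g, final_mass_g):
        """Mass difference [g] between the measurements taken before and after
        the handover (e.g., content spilled during the execution)."""
        return initial_mass_g - final_mass_g

    # Hypothetical measurements for one configuration:
    print(delivery_distance_mm((0.0, 0.0), (30.0, 40.0)))   # 50.0 mm
    print(mass_difference_g(512.0, 498.0))                  # 14.0 g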
Submission
After completing the set of configurations, teams should send a .csv file with the results for each configuration by email to Alessio Xompero.
Template for submitting the results: here.
The .csv file must be named according to the following format: rgmc2024_est4_phase2_teamname_submission_vX.csv (replace teamname with the name of your team, and X with the submission number).
For each configuration, the file must provide the results for the following columns (a writer sketch is given after this list):
- Target location [mm]: the position (x,y) where the robot should deliver the object. We recommend providing a picture clearly showing the location of this point.
- Final location [mm]: the position (x,y) of the centre of the base of the container at the end of the task.
- Handover time [ms]: the total execution time from the moment the person is instructed to grasp the container to the moment the robot releases the gripper at the delivery location to place the container (unless the handover failed).
- Initial mass [g]: the measured mass of the (filled) container before the execution of the configuration.
- Final mass [g]: the measured mass of the (filled) container after the execution of the configuration.
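The official template linked above defines the exact headers; the snippet below only illustrates how such a file could be written, assuming one row per configuration and column names derived from the list in this section (these names, and the example values, are assumptions rather than the template's actual headers).

    import csv

    # Column names assumed from the list above; check the official template
    # for the exact headers expected by the organizers.
    FIELDNAMES = [
        "configuration",
        "target_x_mm", "target_y_mm",
        "final_x_mm", "final_y_mm",
        "handover_time_ms",
        "initial_mass_g", "final_mass_g",
    ]

    results = [
        # Hypothetical values for a single configuration.
        {"configuration": 1,
         "target_x_mm": 0, "target_y_mm": 0,
         "final_x_mm": 30, "final_y_mm": 40,
         "handover_time_ms": 12500,
         "initial_mass_g": 512.0, "final_mass_g": 498.0},
    ]

    # Replace teamname and the version number as instructed above.
    with open("rgmc2024_est4_phase2_teamname_submission_v1.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDNAMES)
        writer.writeheader()
        writer.writerows(results)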