Code

The project has just started; we will update this page as our software is released.

In the meantime, you will find below recent software from the CORSMAL team that is related to the project.


Coordinated multi-arm motion planning

Code libraries for a unified framework for coordinated multi-arm motion planning: Code

The code is a centralised inverse kinematics solver with self-collision avoidance, as described in S. S. Mirrazavi Salehian, N. Figueroa, A. Billard, A unified framework for coordinated multi-arm motion planning, The International Journal of Robotics Research, Vol. 37, Issue 10, pp. 1205-1232, April 2018.
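
As a rough illustration of the idea only (not the authors' implementation), a centralised inverse kinematics step can be sketched as a damped least-squares update that maps a stacked task-space error for all arms to a single joint-space update; the function name, Jacobian, and dimensions below are hypothetical.

```python
import numpy as np

def dls_ik_step(J, task_error, damping=0.1):
    """One damped least-squares inverse kinematics step.

    J          : (m, n) task Jacobian, rows stacked for all arms
    task_error : (m,)   desired task-space displacement
    damping    : regulariser keeping the update well-conditioned
                 near singularities
    Returns a joint-space displacement of shape (n,).
    """
    m = J.shape[0]
    # dq = J^T (J J^T + lambda^2 I)^-1 e
    return J.T @ np.linalg.solve(J @ J.T + damping**2 * np.eye(m), task_error)

# Toy example: two 2-DOF planar arms treated as one 4-DOF system,
# so both arms are resolved in a single (centralised) solve.
J = np.array([
    [1.0, 0.5, 0.0, 0.0],   # arm 1 rows
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.3],   # arm 2 rows
    [0.0, 0.0, 0.0, 1.0],
])
e = np.array([0.02, -0.01, 0.01, 0.00])  # small task-space errors
dq = dls_ik_step(J, e)
```

Solving for both arms in one system is what allows constraints coupling the arms (such as self-collision avoidance) to be handled centrally rather than per arm.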

Load share estimation

This ROS package computes the load share of an object jointly supported by a robot and a third party (such as a person): Code

This package is an open-source implementation of the load share estimation module in the following paper: J. Medina, F. Duvallet, M. Karnam, A. Billard, A human-inspired controller for fluid human-robot handovers, Proc. of Int. Conference on Humanoid Robots, Cancun, Mexico, 15-17 November 2016.
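
To give a sense of the quantity being estimated (a minimal static sketch, not the package's API; the function and constant names are hypothetical), the load share can be read as the fraction of the object's weight carried by the robot, from the vertical force measured at its wrist force/torque sensor.

```python
GRAVITY = 9.81  # m/s^2

def load_share(robot_vertical_force, object_mass):
    """Fraction of an object's weight carried by the robot.

    robot_vertical_force : upward force (N) measured at the robot's
                           wrist force/torque sensor
    object_mass          : known mass of the object (kg)
    The remainder (1 - share) is carried by the third party.
    Clamped to [0, 1] to absorb sensor noise.
    """
    weight = object_mass * GRAVITY
    return min(max(robot_vertical_force / weight, 0.0), 1.0)

# A 2 kg object weighs 19.62 N; the robot measures 9.81 N upward,
# so it carries half the load.
share = load_share(9.81, 2.0)  # -> 0.5
```

In a handover, this share moving from 1 towards 0 (or vice versa) is the signal that the other party is taking over (or releasing) the object.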

Kuka-lwr-ros

A ROS package to control the KUKA LWR 4, both in simulation and on the physical robot: Code


First person vision activities

A long short-term memory convolutional neural network for first-person vision activity recognition: Code

The code classifies activities in first-person videos, as described in G. Abebe, A. Cavallaro, A long short-term memory convolutional neural network for first-person vision activity recognition, Proc. of ICCV Workshop on Assistive Computer Vision and Robotics (ACVR), Venice, October 28, 2017.
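
To illustrate the general recipe (a toy sketch, not the paper's architecture; all sizes and weights below are made up), per-frame CNN features can be fed through an LSTM whose final hidden state is classified into an activity.

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM step over a per-frame feature vector x.

    Gate pre-activations are stacked as [input, forget, cell, output].
    """
    n = h.size
    z = W @ x + U @ h + b
    i = 1 / (1 + np.exp(-z[:n]))          # input gate
    f = 1 / (1 + np.exp(-z[n:2*n]))       # forget gate
    g = np.tanh(z[2*n:3*n])               # candidate cell state
    o = 1 / (1 + np.exp(-z[3*n:]))        # output gate
    c = f * c + i * g
    return o * np.tanh(c), c

# Toy run: 8 frames of 16-dim "CNN" features, 4 hidden units,
# final hidden state fed to a linear classifier over 3 activities.
rng = np.random.default_rng(0)
d, n, T, k = 16, 4, 8, 3
W = rng.normal(size=(4*n, d))
U = rng.normal(size=(4*n, n))
b = np.zeros(4*n)
h, c = np.zeros(n), np.zeros(n)
for x in rng.normal(size=(T, d)):
    h, c = lstm_step(x, h, c, W, U, b)
scores = rng.normal(size=(k, n)) @ h      # per-activity scores
label = int(np.argmax(scores))
```

The recurrence is what lets the classifier use the temporal order of frames rather than treating the video as a bag of images.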

Inertial-Vision

Inertial-Vision: cross-domain knowledge transfer for wearable sensors: Code

The code performs activity recognition with cross-domain knowledge transfer for wearable sensors, as described in G. Abebe, A. Cavallaro, Inertial-Vision: cross-domain knowledge transfer for wearable sensors, Proc. of ICCV Workshop on Assistive Computer Vision and Robotics (ACVR), Venice, October 28, 2017.

Multi-agent visual tracking

Active visual tracking in multi-agent scenarios: Code

The code performs the active target tracking described in Y. Wang, A. Cavallaro, Active visual tracking in multi-agent scenarios, Proc. of IEEE Int. Conference on Advanced Video and Signal-based Surveillance (AVSS), Lecce, 29 August - 1 September 2017.
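
As a minimal sketch of the "active" part of active tracking (not the paper's controller; the function name and gain are hypothetical), a camera can keep a tracked target centred by issuing pan/tilt rates proportional to the target's pixel offset from the image centre.

```python
def pan_tilt_command(target_px, image_size, gain=0.002):
    """Proportional pan/tilt rates that re-centre a tracked target.

    target_px  : (u, v) pixel position of the target in the image
    image_size : (width, height) of the image in pixels
    gain       : rad/s per pixel of offset (hypothetical tuning)
    Returns (pan_rate, tilt_rate) in rad/s.
    """
    u, v = target_px
    w, h = image_size
    return (gain * (u - w / 2), gain * (v - h / 2))

# Target to the right of centre -> positive pan rate.
pan, tilt = pan_tilt_command((400, 240), (640, 480))
# pan is approx. 0.16 rad/s, tilt is 0.0
```

In a multi-agent setting, each camera runs such a control loop while the agents additionally coordinate which target each one follows.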


