At the Department of Robotics we’ve spent several weeks testing the new ROS-Industrial driver for the SDA10F since its announcement on Dec 10th 2014. As mentioned in the original post, the driver was developed by Fraunhofer IPA in cooperation with the Yaskawa Smart Robotics Center in Japan and Yaskawa Motoman Robotics, and is designed to control dual-arm Motoman robots. Even though only the hydro version of the driver has officially been released so far, we have also successfully managed to test the current indigo branch in combination with Ubuntu 14.04 LTS.
The Motoman SDA10F support and MoveIt config packages follow the standard ROS-Industrial naming convention, so all config and xacro files are located as usual. Roslaunching "test_sda10f.launch" from the "motoman_sda10f_support" package provides a simple interface to check the basic behaviour of the robot model and the orientation of particular axes.
Simple test of SDA10F URDF model
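Beyond eyeballing the model in RViz, a quick sanity check is to count the actuated joints in the generated URDF; for the dual-arm SDA10F we expect 15 (7 per arm plus the torso). The snippet below is a minimal stdlib-only sketch; the embedded URDF stub is illustrative, and on a real setup you would feed in the output of `rosrun xacro xacro` on the support package's xacro file instead.

```python
import xml.etree.ElementTree as ET

def count_actuated_joints(urdf_xml):
    """Count non-fixed joints in a URDF string."""
    root = ET.fromstring(urdf_xml)
    return sum(1 for joint in root.iter("joint")
               if joint.get("type") not in (None, "fixed"))

# Illustrative stub only; the real URDF comes from motoman_sda10f_support.
example_urdf = """
<robot name="sda10f_stub">
  <joint name="arm_left_joint_1" type="revolute"/>
  <joint name="torso_joint" type="revolute"/>
  <joint name="camera_mount" type="fixed"/>
</robot>
"""
print(count_actuated_joints(example_urdf))  # -> 2 (fixed joints are skipped)
```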
"motoman_sda10f_moveit_config" includes all the important features traditionally found in ROS-I MoveIt packages. The main difference compared to single-arm robots is of course the possibility to change the planning group. You can choose from: arm_left, arm_right, torso, arms or sda10f. The MoveIt config uses the standard KDL plugin, which in combination with seven arm joints provides suitable and fast IK solutions. We have also tested the OpenRAVE "ikfast" plugin, but the results were not as good as with KDL.
Simple test of SDA10F moveit_config package
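For reference, the five planning groups can be sketched as a plain mapping from group name to joint list. The joint names below are placeholders of our own; the authoritative list lives in the SRDF inside motoman_sda10f_moveit_config.

```python
# Hypothetical joint names -- consult the SRDF for the real ones.
TORSO = ["torso_joint_b1"]
ARM_LEFT = ["arm_left_joint_%d" % i for i in range(1, 8)]    # 7 joints
ARM_RIGHT = ["arm_right_joint_%d" % i for i in range(1, 8)]  # 7 joints

PLANNING_GROUPS = {
    "arm_left": ARM_LEFT,
    "arm_right": ARM_RIGHT,
    "torso": TORSO,
    "arms": ARM_LEFT + ARM_RIGHT,           # both arms, 14 joints
    "sda10f": TORSO + ARM_LEFT + ARM_RIGHT, # whole robot, all 15 joints
}

for group, joints in sorted(PLANNING_GROUPS.items()):
    print("%-10s %2d joints" % (group, len(joints)))
```

In a MoveIt script, switching between these groups then only requires constructing a `moveit_commander.MoveGroupCommander` with the desired group name.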
Getting the new ROS-Industrial driver running on the real robot wasn’t completely straightforward, but with the helpful support of Ted Miller from Yaskawa Motoman Robotics, we finally managed to run it. As shown in the following video, the arms can be controlled directly by 6D interactive markers, but torso movement requires some predefined poses. The trajectory computation gap between clicking "Plan & Execute" and the real robot movement takes approximately 2 seconds on our hardware.
Testing on real SDA10F robot
To complete the initial driver testing, we prepared a simple workplace for a basic pick&place operation using a Kinect and a single suction cup. The goal was to move objects from random planar positions to a blue bin using MoveIt and the point cloud processing pipeline from the basic ROS-Industrial tutorial. We used Cartesian planning, so right after the object centroids were detected, the complete trajectory for all reachable objects was generated and executed in a single shot. To collect the remaining objects, the process was repeated with a slight torso rotation.
First simple pick&place application
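The "single shot" sequencing above can be sketched in pure Python: for every detected centroid we append an approach, grasp, lift and drop-off waypoint, and hand the whole list to the Cartesian planner at once. Poses are simplified to (x, y, z) tuples here, and the approach/grasp heights are made-up parameters; the actual path interpolation is left to MoveIt's `compute_cartesian_path`.

```python
def pick_place_waypoints(centroids, bin_pose, approach=0.10, grasp_z=0.02):
    """Build one Cartesian waypoint list visiting every detected object:
    hover above it, descend to grasp height (suction on), lift, then
    move over the bin (suction off). Poses are (x, y, z) tuples."""
    waypoints = []
    bx, by, bz = bin_pose
    for (x, y) in centroids:
        waypoints += [
            (x, y, grasp_z + approach),  # approach from above
            (x, y, grasp_z),             # descend, enable suction
            (x, y, grasp_z + approach),  # lift straight up
            (bx, by, bz + approach),     # move over the bin, release
        ]
    return waypoints

# Two detected centroids and an assumed bin position.
wps = pick_place_waypoints([(0.4, 0.1), (0.5, -0.2)], bin_pose=(0.3, 0.5, 0.1))
print(len(wps))  # 4 waypoints per object -> 8
```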
Six months ago, we managed to control a single SDA10F arm using the prior motoman driver without dual-arm support, so we really appreciate the current release, which provides full control of all 15 joints. What is still missing, though, is MoveIt support for synchronized movement of both arms, or of the arms and torso together. It works fine in simulation, but due to some kind of MoveIt-related error, the application fails when running on the real robot. The only way to perform synchronized movements with the current implementation is therefore to bypass the MoveIt library, which on the other hand complicates the whole programming considerably.
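Bypassing MoveIt in practice means assembling a trajectory for all 15 joints yourself and handing it to the driver. The sketch below builds such a trajectory as plain dictionaries rather than ROS messages, so the structure is visible without a running ROS system; the joint names are assumptions on our part, and on the real robot you would fill a `trajectory_msgs/JointTrajectory` the same way.

```python
# Hypothetical joint names; take the real ones from /joint_states.
JOINT_NAMES = (["torso_joint_b1"]
               + ["arm_left_joint_%d" % i for i in range(1, 8)]
               + ["arm_right_joint_%d" % i for i in range(1, 8)])

def synchronized_point(positions, time_from_start):
    """One trajectory point commanding all 15 joints at once, which is
    what makes the motion synchronized across arms and torso."""
    if len(positions) != len(JOINT_NAMES):
        raise ValueError("expected %d joint positions" % len(JOINT_NAMES))
    return {"positions": list(positions),
            "velocities": [0.0] * len(positions),
            "time_from_start": time_from_start}

trajectory = {
    "joint_names": JOINT_NAMES,
    "points": [synchronized_point([0.0] * 15, 2.0),
               synchronized_point([0.1] * 15, 4.0)],
}
print(len(trajectory["joint_names"]))  # 15
```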
We have also noticed missing support for dual-arm configurations in the current release of the Extrinsic Calibration Toolbox. The toolbox wasn’t able to read the extended /joint_states message coming from the FS100 controller. This is most probably related to the simple_message implementation inside the toolbox, but from the application point of view it wasn’t a critical issue, and we calibrated the Kinect position by hand in several steps using the live point cloud.
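For tools that choke on the extended message, one conceivable workaround is to split the combined /joint_states into per-group states before they are consumed. A minimal pure-Python sketch, with assumed joint-name prefixes:

```python
def split_joint_states(names, positions,
                       prefixes=("arm_left", "arm_right", "torso")):
    """Split a combined joint_states message (parallel name/position
    lists) into one {name: position} dict per group prefix."""
    groups = {prefix: {} for prefix in prefixes}
    for name, pos in zip(names, positions):
        for prefix in prefixes:
            if name.startswith(prefix):
                groups[prefix][name] = pos
                break
    return groups

# Hypothetical names; the real ones come from the FS100's /joint_states.
names = ["arm_left_joint_1", "arm_right_joint_1", "torso_joint_b1"]
groups = split_joint_states(names, [0.1, -0.2, 0.3])
print(sorted(groups["torso"].items()))  # [('torso_joint_b1', 0.3)]
```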