README-localization.md

Launch the camera node

  1. In terminal 1, run the following command to start the RealSense camera with ROS, with the option to publish the point cloud:
roslaunch realsense2_camera rs_aligned_depth.launch filters:=pointcloud

Check that the package react_inria is installed and go to the folder react_inria/launch.

Running react_inria localization node in simulation

  1. In terminal 2, from the react_inria/launch folder, provide the initial pose of the part in simulation:
roslaunch ./react_learning.launch use_sim_time:=true

Follow the on-screen instructions to generate a new pose and save it.
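The saved pose is commonly stored as a translation plus a unit quaternion. As a hedged illustration (the actual layout of transform_wMo.yaml is not documented here, so the representation below is an assumption), such a pose can be converted into a 4x4 homogeneous matrix like this:

```python
# Hypothetical sketch: build a 4x4 homogeneous transform from a pose
# given as translation (x, y, z) and unit quaternion (qx, qy, qz, qw).
# This is standard pose math, not code from the react_inria package.

def quaternion_to_matrix(qx, qy, qz, qw):
    """Return the 3x3 rotation matrix of a unit quaternion."""
    return [
        [1 - 2*(qy*qy + qz*qz), 2*(qx*qy - qz*qw),     2*(qx*qz + qy*qw)],
        [2*(qx*qy + qz*qw),     1 - 2*(qx*qx + qz*qz), 2*(qy*qz - qx*qw)],
        [2*(qx*qz - qy*qw),     2*(qy*qz + qx*qw),     1 - 2*(qx*qx + qy*qy)],
    ]

def pose_to_homogeneous(translation, quaternion):
    """Stack rotation and translation into a 4x4 homogeneous matrix."""
    rot = quaternion_to_matrix(*quaternion)
    return [rot[i] + [translation[i]] for i in range(3)] + [[0.0, 0.0, 0.0, 1.0]]

# Identity pose: zero translation, identity quaternion.
wMo = pose_to_homogeneous((0.0, 0.0, 0.0), (0.0, 0.0, 0.0, 1.0))
```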

  2. In terminal 2, run the localization node in simulation:
roslaunch ./react_localizer.launch use_wMo_filename:=$HOME/.ros/transform_wMo.yaml use_oMo_filename:=$HOME/.ros/transform_oMo.yaml use_aligned_depth:=false use_sim_time:=true
  3. In terminal 3, from the agimus-demos/script folder, launch the script that publishes the frames used in the demo:
./publish_frames.py 
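The localizer above is given two transform files. From the file names, wMo plausibly maps the world frame to the object and oMo applies an object-level correction (this reading of the names is an assumption). Chaining two such poses is a plain 4x4 matrix product, sketched here in pure Python:

```python
# Hedged sketch of transform chaining (wMo followed by oMo), as the two
# file names suggest the localizer does internally. Matrices are
# row-major lists of lists; this is illustrative math, not package code.

def compose(a, b):
    """Return the 4x4 matrix product a @ b."""
    return [
        [sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
        for i in range(4)
    ]

IDENTITY = [[float(i == j) for j in range(4)] for i in range(4)]

# With an identity correction, the corrected pose equals the learned one.
wMo = [[1.0, 0.0, 0.0, 0.5],
       [0.0, 1.0, 0.0, 0.2],
       [0.0, 0.0, 1.0, 0.0],
       [0.0, 0.0, 0.0, 1.0]]
corrected = compose(wMo, IDENTITY)
```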

Running react_inria localization node on the robot

  1. In terminal 2, from the react_inria/launch folder, provide the initial pose of the part on the real robot:
roslaunch ./react_learning.launch

Follow the on-screen instructions to generate a new pose and save it. This needs to be done every time the part is moved with respect to the robot base.
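Because the learned pose must be regenerated whenever the part moves, a small pre-flight check (a workflow suggestion, not part of the react_inria package) can confirm that the transform files from the learning step exist before launching the localizer:

```python
# Workflow suggestion: verify the transform files produced by
# react_learning.launch exist before starting the localizer.
import os

def missing_transform_files(home=None):
    """Return the expected transform files that do not exist."""
    home = home or os.path.expanduser("~")
    expected = [
        os.path.join(home, ".ros", "transform_wMo.yaml"),
        os.path.join(home, ".ros", "transform_oMo.yaml"),
    ]
    return [path for path in expected if not os.path.isfile(path)]

if missing_transform_files():
    print("Run react_learning.launch first to generate the transform files.")
```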

  2. In terminal 2, run the localization node:
roslaunch ./react_localizer.launch use_wMo_filename:=$HOME/.ros/transform_wMo.yaml use_oMo_filename:=$HOME/.ros/transform_oMo.yaml use_aligned_depth:=true

To use the neural-network version of the localizer:

roslaunch ./react_localizer.launch use_wMo_filename:=$HOME/.ros/transform_wMo.yaml use_oMo_filename:=$HOME/.ros/transform_oMo.yaml use_dnn_localizer:=true
  3. In terminal 3, from the agimus-demos/script folder, launch the script that publishes the frames used in the demo:
./publish_frames.py
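If publish_frames.py is unavailable, a single fixed frame can also be broadcast with the stock tf2_ros static_transform_publisher. A minimal launch-file sketch, where the frame names are placeholders and not necessarily the frames the demo uses:

```xml
<!-- Hypothetical alternative to publish_frames.py for one fixed frame.
     args: x y z qx qy qz qw parent_frame child_frame -->
<launch>
  <node pkg="tf2_ros" type="static_transform_publisher"
        name="demo_frame_publisher"
        args="0 0 0 0 0 0 1 camera_color_optical_frame demo_frame"/>
</launch>
```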