Seds Notes
J. Stober
D. Grollman
Last updated: 8.10.2011
== What's New ==
Wam and PR2 drivers both inherit from a generic driver.py module. See
driver.py for some more documentation. Both wam and pr2 modules scale
the velocities given by SEDS to account for the difference in time
between samples used for the model and the rate of the control
loop. This is the tscale parameter in each driver.
tscale = dt_control / dt_sample.
[If SEDS assumes
dT = 1 and x_t + dx_t = x_{t+1},
but dT = 0.5 when controlling the robot,
then we want x_t + 0.5 dx_t = x_{t+0.5},
where 0.5 = dt_control / dt_sample in this example.
In fact we also have a velocity multiplier hack, so
x_t + vm * tscale * dx_t = x_{t+tscale}.]
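As a sketch (function and variable names hypothetical, not the actual driver.py API), the scaling the drivers apply amounts to:

```python
def next_pose(x, dx, dt_control, dt_sample, vm=1.0):
    """Integrate one control step.

    x, dx: current pose and the model's velocity (per model sample).
    tscale rescales the velocity because the control loop runs at a
    different rate than the sampling rate used to train the model.
    vm is the velocity multiplier hack described above.
    """
    tscale = dt_control / dt_sample
    return [xi + vm * tscale * dxi for xi, dxi in zip(x, dx)]

# Model trained at dt_sample = 1.0 s, control loop at dt_control = 0.5 s:
# the commanded step is half the model's per-sample displacement.
next_pose([0.0, 0.0], [1.0, 2.0], dt_control=0.5, dt_sample=1.0)  # -> [0.5, 1.0]
```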
PR2 driver translates between the model source frame (like an object's
frame) and the source frame for the controller. See the section on
Object Frames below.
== Running Seds (PR2) ==
The main "walk-through" script is in pr2/master_pr2.sh. This script
prompts the user to record example trajectories, performs optimization
using SEDS, then drives the robot using pr2_driver.py and the learned
parameters. For running everything manually (without prompts) see below.
== Running Seds (WAM) ==
There is now a wam/master_wam.sh script for running SEDS using the WAM
robot. While both seds_node and ds_node are written to be generic
(performing operations on SedsMessages and SedsModels), the wam2seds.py
and wam_driver.py scripts needed robot-specific customization (compare
with tf2seds.py and pr2_driver.py).
tf2seds.py does not work with the current WAMros node since that node
does not publish to /tf. In other respects, this part of the toolchain
is fairly generic as well. The robot driver requires some
customization of the control topic and is otherwise the same.
=== Short Version (Manual) ===
1. Collect some sample bagfiles of tf data in a directory.
2. In the pr2 directory run:
./train.sh <location of directory containing bagfiles>
(e.g. ./train.sh ../data/bags/pr2/20110615/ )
3. In the pr2 directory run: *WILL MOVE THE ROBOT*
./drive.sh
(e.g. ./drive.sh)
(Alt.) To run the ds_server and record the results in a bagfile instead of sending them to the robot:
./run_test.sh <location of bagfile to write>
(e.g. ./run_test.sh ../data/bags/pr2/seds/seds_model.bag)
The bagfile can then be opened in rxbag to give a sense of how the trajectories would look from a few sample starting positions.
=== Long Version ===
There are three major components to this demonstration
project:
1. Collecting sample trajectories.
The easiest way to collect data on the PR2 seems to be to turn off
all arm controllers, run "rosbag record /tf", then move the arm
manually. Doing this several times from different starting
positions (and recording separate /tf bagfiles) will create the set
of trajectories that SEDS will use as samples for learning.
The trajectories should be stored in a single directory and should
be the only bagfiles in that directory.
If using vision to train an object-relative motion, please make sure the robot sees the object before collecting data.
1a. Translating rosbags into a usable form.
- nodes/tf2seds.py
Rosbags with tf messages need to be translated into a form that
SEDS can understand. This translation process requires iterating
through the set of tf messages, loading the tf transforms, then
computing the desired transform. In the case of the PR2 robot, the
(source,target) for computing the desired transform is
(torso_lift_link, r_gripper_tool_frame). The result of this
computation is a set of end effector Cartesian coordinates and
orientations. This data is then written to a new bag as a
sequence of SedsMessages.
A SedsMessage contains:
# trajectory poses
float64[] x
# trajectory derivatives
float64[] dx
# time span for dx calculation
duration dt
# time of the current x value
time t
# trajectory number
uint32 index
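Conceptually (ignoring the actual rosbag API, and with hypothetical names), tf2seds.py turns each recorded trajectory into this message sequence by finite-differencing consecutive poses:

```python
def poses_to_seds(times, poses, index):
    """Convert one trajectory of (time, pose) samples into SedsMessage-like
    dicts: x is the pose, dx the difference to the next pose, dt the time
    span used for that difference, t the sample time, and index the
    trajectory number (so several bags can share one output bag)."""
    msgs = []
    for i in range(len(poses) - 1):
        dt = times[i + 1] - times[i]
        dx = [b - a for a, b in zip(poses[i], poses[i + 1])]
        msgs.append({"x": poses[i], "dx": dx, "dt": dt,
                     "t": times[i], "index": index})
    return msgs
```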
Note that multiple rosbags of tf information need to be written
into a single SedsMessage bag. The python script nodes/tf2seds.py
does this for bags collected from real or simulated PR2 robots. It
is not set up as a ros node.
Example usage:
rosrun seds tf2seds.py -b <path_to_tf_bag_files> -o <path_to_seds_bagfile> -s <source_frame_id> -t <target_frame_id>
NOTE: source_frame_id can be something perceptual like objectXXXX!
target_frame_id should be set to whatever is used for JTTeleop
(r_gripper_*).
Note that now you can set the source and target frames to something
else. This will be recorded in the resulting bagfile as
seds/source_fid and seds/target_fid messages. The seds optimization
process will read these frame_ids and record them as part of the
model parameters.
The source and target frames must, of course, be present in the recorded bagfiles. Currently, tf2seds is set up to treat the source frame (object) as more slowly changing than the target frame (the hand). As such, it performs some tf-based time-traveling to compute the position of the target in source-frame coordinates, and uses the torso_lift_link as an unchanging base.
2. Running SEDS to learn model parameters.
Running SEDS is as easy as "rosrun seds seds_node". This starts the
seds_node which makes available a single service "seds/optimize"
which takes arguments of the following form:
rosservice call seds/optimize seds.bag
You probably want to save this with a call to:
rosservice call seds/save_file FILE
3. Driving the robot using the learned dynamical system (DS).
- ds_node
To actually drive the robot requires running two nodes. The first
is ds_node:
rosrun seds ds_node
The ds_node provides two services. The first is load_model which
loads parameters from a running seds_node.
rosservice call /ds_node/load_model
You can also load from a file (generated using seds/save_file on the seds_node).
rosservice call /ds_node/load_file <file>
The next service is /ds_node/ds_server and is meant to be called from within
the main loop of a controlling node. This service takes as
arguments an array of floats x (representing the current Cartesian
coordinates) and returns another array of floats dx. See the DSSrv
srv file for further details.
For debugging purposes, this service can be called on the command
line with an array of sample coordinates:
rosservice call /ds_node/ds_server "[0.1, 0.1, 0.1]"
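The real service evaluates the learned SEDS model; purely for intuition about the in/out shape (x in, dx out), a toy stand-in might look like this (names and the linear dynamics are illustrative, not the package's code):

```python
def toy_ds(x, target=(0.0, 0.0, 0.0), gain=1.0):
    """Toy linear stand-in for /ds_node/ds_server: given current Cartesian
    coordinates x, return a velocity dx pointing toward an attractor.
    The real node evaluates the learned SEDS model instead, but has the
    same interface: floats in, floats out."""
    return [gain * (t - xi) for xi, t in zip(x, target)]

toy_ds([0.1, 0.1, 0.1])  # velocity pointing back toward the origin
```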
- pr2/pr2_driver.py
pr2_driver.py is the script that actually drives the robot. The
mainloop does three things: 1. Reads the current coordinates using a
TF Listener. 2. Sends these to /ds_node/ds_server and reads the
response. 3. Composes new coordinates x+dx and publishes these to the
r_cart/command_pose topic.
If the model is correct, this loop should drive the robot in much
the same way as the example trajectories.
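Stripped of the ROS plumbing, one iteration of that loop can be sketched as follows (function names hypothetical; the real driver uses a TF listener and publishes to r_cart/command_pose):

```python
def driver_step(read_pose, ds_server, publish_pose, vm, tscale):
    """One iteration of the pr2_driver main loop:
    1. read the current Cartesian coordinates,
    2. ask the DS service for a velocity,
    3. integrate (with the velocity multiplier and tscale) and
       publish the new commanded pose."""
    x = read_pose()
    dx = ds_server(x)
    x_new = [xi + vm * tscale * dxi for xi, dxi in zip(x, dx)]
    publish_pose(x_new)
    return x_new
```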
You can run this manually or use a launch script. The pr2_driver.py
will correctly transform source frames if seds uses a different
source frame. Note that the target frame for seds_node and
pr2_driver.py must be the same.
rosrun seds pr2_driver.py
Make sure you have a ds_node running with the proper parameters
already loaded.
Parameters ---
There are several parameters in pr2_driver, which can be set on the command line or changed during operation. Important ones are:
Velocity multiplier: To overcome friction/stiction and timing errors, you may want to multiply all computed velocities by a constant. Set this with --vm=# or a call to pr2_driver/set_vm #.
Source and Target fids: To apply a known motion to a new object/arm,
you can change the source and target with --source=FID --target=FID,
or while running, call pr2_driver/change_object FID, or
pr2_driver/change_hand (which will automatically swap
r_gripper_tool_frame for l_gripper_tool_frame, and vice versa).
Feedback: Just running the full ('hard') feedback controller tends
not to work, as friction prevents the robot from moving in between
timesteps. Setting the feedback to 'none' will instead assume that
the robot has performed exactly as commanded. 'adaptive' feedback
merges these two, and uses no feedback as long as the robot is
within some threshold (athresh) of where we think it should be, and
uses feedback when perturbations drag the robot outside of that
area. Feedback is set with --feedback or a call to
pr2_driver/change_feedback [none,hard,adaptive], and the threshold
is set with --athresh or a call to pr2_driver/set_threshold.
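The three feedback modes can be summarized as follows (a sketch; the function name, the Euclidean distance, and the default athresh are simplifications, not pr2_driver's exact code):

```python
import math

def pose_for_integration(measured, predicted, mode, athresh=0.05):
    """Choose which pose the driver integrates the next step from.

    'hard':     always trust the measured pose (tends to stall, since
                friction keeps the robot from moving between timesteps),
    'none':     assume the robot did exactly as commanded,
    'adaptive': trust the prediction while the robot stays within
                athresh of it, fall back to the measurement when a
                perturbation drags the robot outside that area."""
    if mode == "hard":
        return measured
    if mode == "none":
        return predicted
    err = math.dist(measured, predicted)
    return predicted if err <= athresh else measured
```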
== Current Issues ==
There's some funkiness in the handling of time-traveling tfs in tf2seds
and the driver. This is partially due to my learning about it while I
coded this. I believe the link to torso_lift_link can be removed by
judicious use of rospy.Time(0) instead of now (or t in tf2seds).
== Other Tools ==
Some other code that might be useful:
1. pr2/run_tf_bag.py -- This script reads the /tf_old topic and publishes to r_cart/command_pose. The idea is that you can replay bags with it by running
rosbag play bag_with_tf.bag tf:=tf_old
To get this to work in simulation usually requires playing the bag back at a slower rate than it was recorded.
2. pr2/publish_rcart.py -- Takes Cartesian coordinates on the command line and publishes them continually to r_cart/command_pose. Useful for moving the simulated robot to a set pose.
3. nodes/run_test.sh -- Continually calls /ds_node/ds_server from various starting points and records the resulting trajectories in a bag file that you can analyze using rxbag. This is like running the system but without the robot.
Ex. ./run_test.sh ../data/bags/pr2/seds/seds_model.bag
== TODO ==
* Smooth dx values.
* Incorporate new SEDS optimization code.
* Friction trap prevention.
* Publish wam positions to tf.
* Fix errors in tracking perceptual objects.
== References ==
=== PR2 Remote Lab URLs ===
* http://mjpeg.pr2-remotelab.com:8080/stream?topic=/remote_lab_cam1/image_raw
* http://pr2-remotelab.com/demo/wviz.html
* http://pr2-remotelab.com/demo/wviz_vizonly.html