The Next Big Push

MoveIt

Starting Stances, loaded from config files – DONE

KickIt states (or function mechanism) – The idea is that MoveIt often needs to be asked more than once in order to do something correctly (like compute a trajectory), so we need a way to retry on abort up to x times. – See MoveIt3
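A minimal sketch of the retry-on-abort idea, assuming a hypothetical `retryOnAbort` helper (not an existing SMACC or MoveIt API): wrap the planning call and re-issue it until it succeeds or the retry budget runs out.

```cpp
#include <functional>

// Hypothetical helper (not real SMACC/MoveIt API): call `attempt` until it
// succeeds or the retry budget is exhausted. `attempt` returns true on
// success, false on abort.
bool retryOnAbort(const std::function<bool()>& attempt, int maxRetries)
{
    for (int i = 0; i <= maxRetries; ++i)  // first try + maxRetries retries
    {
        if (attempt())
            return true;
    }
    return false;
}
```

Usage would look like `retryOnAbort([&]{ return planCartesianPath(); }, 3);`, where the wrapped lambda asks MoveIt for a trajectory.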

Navigation Realignment SuperStates: if the pose is not reached (ABORT), then navigate backwards 0.1 m. – This needs a superstate with its own loops.

  • case too close (collision),
  • case too far (no valid path)
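The loop inside that superstate could be sketched as follows; `sendGoal` and `backUp` are hypothetical stand-ins for the real navigation goal and the 0.1 m backwards motion, not SMACC API:

```cpp
#include <functional>

// Hypothetical sketch (not SMACC API) of the realignment superstate loop:
// on abort, navigate backwards 0.1 m and retry, up to `maxAttempts`.
// `sendGoal` returns true when the pose is reached; `backUp` commands the
// backwards motion by the given distance in metres.
bool navigateWithRealignment(const std::function<bool()>& sendGoal,
                             const std::function<void(double)>& backUp,
                             int maxAttempts)
{
    for (int attempt = 0; attempt < maxAttempts; ++attempt)
    {
        if (sendGoal())
            return true;   // pose reached
        backUp(0.1);       // realign: navigate backwards 0.1 m, then retry
    }
    return false;          // still aborted after all attempts
}
```

The two listed cases (too close / too far) would decide *how* the backoff motion is computed before each retry.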

Starting Stance to Cartesian Motion Motif.

The Wait Problem. – In MoveIt, we need to wait in order to get a plan back (see cb_move_cartesian_relative.cpp), because we’re in onEntry().

We can use either a new thread or an update function. – We decided to go with the update-function method first, so that we can keep tighter control of our threads…
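The update-function approach could look like this minimal sketch (plain C++, not the SMACC API): `onEntry()` only requests the plan, and a periodically called `update()` polls for the result instead of blocking.

```cpp
// Hypothetical sketch (not the SMACC API) of the update-function answer to
// the Wait Problem: onEntry() kicks off planning without blocking, and a
// periodically called update() polls until the plan comes back.
struct CartesianMoveBehavior
{
    bool planRequested = false;
    bool planReady = false;

    void onEntry()
    {
        planRequested = true;  // request the plan; do NOT block here
    }

    // Called periodically by the state machine's update loop.
    void update()
    {
        if (planRequested && checkPlanArrived())
        {
            planReady = true;  // plan is back; now execute / post an event
        }
    }

    // Stand-in for the real asynchronous plan query.
    bool checkPlanArrived() { return true; }
};
```

The thread-based alternative would block inside `onEntry()` on a worker thread instead; the polling version keeps all work on the state machine's own thread, which is the tighter-control property mentioned above.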


Next example: 6 tables, 6 cube colors on one table; sort the cubes onto all the other tables…

Next example after that, Picking the cubes up off the floor (searching via a navigation plan)…

Panda example

Goal is to make the examples bulletproof.


We need to be able to take messages from MoveIt (collision, plan not found, controller loses track of the trajectory) and then respond (via a transition) to a new state. Using moveit_msgs/MoveItErrorCodes?
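A sketch of that mapping. The numeric values below mirror moveit_msgs/MoveItErrorCodes (verify against the installed message definition); the event names are our own invention, not existing SMACC events:

```cpp
#include <string>

// Values mirroring moveit_msgs/MoveItErrorCodes (verify against the
// installed message definition before relying on them).
namespace error_codes
{
constexpr int SUCCESS = 1;
constexpr int PLANNING_FAILED = -1;  // plan not found
constexpr int CONTROL_FAILED = -4;   // controller lost track of trajectory
}

// Returns the (hypothetical) transition event we would post for a given
// MoveIt result code, so each failure mode can drive its own transition.
std::string eventForErrorCode(int code)
{
    switch (code)
    {
        case error_codes::SUCCESS:         return "EvMoveSucceeded";
        case error_codes::PLANNING_FAILED: return "EvPlanNotFound";
        case error_codes::CONTROL_FAILED:  return "EvControlFailed";
        default:                           return "EvMoveFailed";
    }
}
```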


VisionClient/MoveEyes

Server node goes here…https://github.com/reelrbtx/MoveEyes, client stuff goes in SMACC Client Library

Big idea is to be able to import TensorFlow models, like… https://www.tensorflow.org/lite/models, or https://sthalles.github.io/deep_segmentation_network/, with deeplab as the default model.

Built on top of cppflow https://github.com/serizba/cppflow – Not really a library, more like the design pattern we’ll use, see TensorFlow C API.

Server also needs to supply an API so that the client can have it load different models, kind of like ros_control, move_group_interface.

Need to support 3 default image sources…

  • Point clouds
  • Stereo cameras
  • RGB cameras: color, depth & three-point

Each model in the server should have a corresponding component in the client that contains code translating the output of the MoveEyes model into information that can be used to make decisions (if you see a blue blob (a car), then run away). The component reads the names and shapes of the inputs, and the names and shapes of the outputs…
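The metadata such a component reads could be sketched as a small struct; all names and shapes below are illustrative placeholders, not a real model's tensors:

```cpp
#include <string>
#include <vector>

// Hypothetical sketch of the per-model client component metadata: the
// tensor names and shapes the component needs in order to feed the model
// and interpret its output. Example values are illustrative only.
struct TensorSpec
{
    std::string name;        // e.g. "image_tensor"
    std::vector<int> shape;  // e.g. {1, 513, 513, 3}
};

struct ModelComponentSpec
{
    std::string modelName;           // e.g. "deeplab"
    std::vector<TensorSpec> inputs;
    std::vector<TensorSpec> outputs;
};
```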


PerceptionClient

Any Occupancy Map needs to be interoperable with the MoveIt occupancy map, see Occupancy Map Updater for MoveIt perception client…

https://github.com/ros-planning/moveit/blob/master/moveit_ros/occupancy_map_monitor/include/moveit/occupancy_map_monitor/occupancy_map_updater.h

Three costmap-like structures…

  • Occupancy Map
  • Object Map
  • Constraint Map

Pose estimation is a major goal for the Vision library. I think we’ll have two flavors of components for this…

  • One that just uses the camera information and maybe does a table lookup for size (max/min height)
  • One that can incorporate some type of range finder.

The plan for a DARPA SubT Challenge team would be to train a TensorFlow model using the Gazebo models…

And then we would import the model into MoveEyes, and write a corresponding component for the SmaccMoveEyesClient.


Head Orthogonal

There is also a need for an orthogonal that controls the movement of the head.

  • Autocentering functionality that keeps the target in the center of the screen
  • Pan scan
  • Tilt scan

Direct control via joint trajectories. Create a trajectory action server goal and fill it with a vector of joint states over time,
e.g. a vector that sweeps 10, 11, 12, … 90 degrees.

control_msgs/FollowJointTrajectoryAction

With a new client: a direct controller client for ros_control.

Fetch head_controller client


Rangefinder Orthogonal

Radar, Lidar…