Dimitar Simeonov's profile - activity

2018-11-09 02:42:11 -0500 received badge  Favorite Question (source)
2018-05-11 08:39:03 -0500 received badge  Great Question (source)
2015-04-11 22:50:06 -0500 received badge  Great Question (source)
2014-07-16 22:29:57 -0500 received badge  Good Question (source)
2014-04-20 06:48:06 -0500 marked best answer object detection with color

Hello,

I am using the tabletop object detector to give the robot a semantic map of the environment it is in. http://www.ros.org/wiki/tabletop_obje...

I would also like to extract color information about the detected objects. For example, if the robot detects a coca-cola can, the object's tag should look something like ['can','red','white','metallic'] or ['can','red'].

Does anybody know of a tool that does anything similar, maybe using the object detector positions as a part of the input?
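
A hedged sketch of one possible approach (not an existing ROS tool, as far as I know): crop the camera image around the detected object, cluster its pixels with k-means, and map the dominant cluster centers to a small hand-made color-name table. The color table, the crop, and the color values are all illustrative assumptions; OpenCV's cv2.kmeans does the clustering.

    import cv2
    import numpy as np

    # Naive color-name lookup table (illustrative values only).
    COLOR_NAMES = {
        'red':      (200, 30, 30),
        'green':    (30, 160, 60),
        'blue':     (40, 70, 200),
        'white':    (240, 240, 240),
        'black':    (20, 20, 20),
        'metallic': (170, 170, 170),  # grey stands in for "metallic"
    }

    def dominant_color_tags(bgr_crop, k=3):
        """Cluster the crop's pixels and name the k dominant colors."""
        pixels = bgr_crop.reshape(-1, 3).astype(np.float32)
        criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
        # OpenCV 3+ argument order for cv2.kmeans.
        _, labels, centers = cv2.kmeans(pixels, k, None, criteria, 5,
                                        cv2.KMEANS_RANDOM_CENTERS)
        tags = []
        for center in centers:
            rgb = (center[2], center[1], center[0])  # BGR -> RGB
            name = min(COLOR_NAMES, key=lambda n: sum(
                (c - p) ** 2 for c, p in zip(COLOR_NAMES[n], rgb)))
            if name not in tags:
                tags.append(name)
        return tags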

2014-04-20 06:48:04 -0500 marked best answer Is the gazebo simulator deterministic

Can anybody confirm whether the gazebo simulator and the simulated sensors, as used in ROS, are deterministic?

For example, is the PR2 guaranteed to get the same sensor measurements if instantiated the same way?

Edit: As a concrete example, suppose the PR2 is instantiated with a table in front of it and a few objects on the table (all created using spawn_model from the same launch file). Example sensor use is the pr2_tabletop_object_detector, and example motor control is the pr2_pick_and_place_manager. Assume that those two are deterministic given the same information from the basic robot sensors. Can we expect the sensor information, and the effect of robot actions, to be the same?
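
For what it's worth, here is a minimal probe one could use to check this empirically: hash the first N messages on a sensor topic and diff the digests across two identical runs. The topic name (/base_scan, the PR2 base laser) and the message count are assumptions.

    import hashlib
    import rospy
    from sensor_msgs.msg import LaserScan

    digests = []

    def callback(msg):
        # Hash the raw range readings so two runs can be diffed exactly.
        digests.append(hashlib.md5(str(msg.ranges).encode()).hexdigest())
        if len(digests) >= 100:
            rospy.signal_shutdown('collected enough scans')

    rospy.init_node('determinism_probe')
    rospy.Subscriber('/base_scan', LaserScan, callback)
    rospy.spin()
    print('\n'.join(digests))  # diff this output between two runs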

2014-04-20 06:48:03 -0500 marked best answer initialize pr2 posture in gazebo

Does anybody know of a way to initialize the PR2 posture in gazebo to something specific? Currently the robot spawns with both hands pointing to the front, and I then move the arms and the head into place, but this is quite slow in simulation and will be repeated many times automatically.

Any suggestions about how to save the robot position? If I save the gazebo world, do you think I will be able to re-summon the PR2?
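
One possible route, sketched under the assumption that your Gazebo version exposes the SetModelConfiguration service (names here follow gazebo_msgs; older releases kept them in the gazebo package): set the joint positions directly instead of driving the controllers. Pausing physics while doing this is usually advisable. The joint names and angles are illustrative.

    import rospy
    from gazebo_msgs.srv import SetModelConfiguration

    rospy.init_node('init_pr2_posture')
    rospy.wait_for_service('/gazebo/set_model_configuration')
    set_config = rospy.ServiceProxy('/gazebo/set_model_configuration',
                                    SetModelConfiguration)
    # Set two shoulder joints as an example; extend the lists as needed.
    resp = set_config(model_name='pr2',
                      urdf_param_name='robot_description',
                      joint_names=['r_shoulder_pan_joint',
                                   'l_shoulder_pan_joint'],
                      joint_positions=[-1.5, 1.5])
    print(resp.success, resp.status_message)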

2014-04-20 06:47:59 -0500 marked best answer how to make pr2 grasping more robust

Hello, I am using pr2_pick_and_place_manager explained here, in order to pick up objects. I'm calling it from code.

Object detection works very robustly, and I am using objects from the household database. However, grasping often returns UNFEASIBLE - which means it couldn't find a feasible grasp for the object. Sometimes grasping works if the object is centered on a table in front of the PR2, but not always.

Do you have any suggestions about how to reduce the number of times that no feasible grasp is found, so that the robot can pick up objects more robustly?

Turning the robot towards the table? Moving the arm closer to the object before attempting a grasp?

EDIT: Here is the relevant part of the output:

  [INFO] 1311008555.087295: tabletop detection reports success
  [INFO] 1311008602.329595: detection finished, finding bounding boxes for 
  clusters and sorting objects
  [INFO] 1311008602.330844: object 0, red: recognized object with id 18724
  [INFO] 1311009564.034749: attempting to grasp an object, whicharm = 0
  sending object collision name of: graspable_object_0
  [INFO] 1311008668.999249: grasp result: UNFEASIBLE
  [INFO] 1311008668.999630: no feasible grasp for this object with the r arm
  [INFO] 1311008668.999930: pick-up failed.

The object marker seemed to be in the right place. (Screenshot: object markers correctly recognized.)

EDIT2: I copied the grasp goal and result into this text file http://web.mit.edu/mitko/Public/rosde...

EDIT3: Indeed, if I try a different object there, I see a bunch of grasps evaluated for feasibility. (Screenshot of the evaluated grasps.)

UPDATE: When I send the grasp goal to

 rosservice call /objects_database_node/database_grasp_planning

I get

 grasps: []
 error_code: 
   value: 0

Value 0 corresponds to success according to http://www.ros.org/doc/api/object_man... so I guess for some reason the grasp planner returns no grasps. Is there a way to check why?
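
For reference, this is roughly how I call the planner from Python to inspect its raw output. The request fields follow the object_manipulation_msgs GraspPlanning service of that era and may differ in other releases; detection.detected_object is a placeholder for the GraspableObject returned by the tabletop detector.

    import rospy
    from object_manipulation_msgs.srv import GraspPlanning, GraspPlanningRequest

    rospy.init_node('grasp_planner_debug')
    srv_name = '/objects_database_node/database_grasp_planning'
    rospy.wait_for_service(srv_name)
    plan_grasps = rospy.ServiceProxy(srv_name, GraspPlanning)

    req = GraspPlanningRequest()
    req.arm_name = 'right_arm'
    req.target = detection.detected_object  # placeholder: GraspableObject
                                            # from the tabletop detection
    resp = plan_grasps(req)
    print('error code:', resp.error_code.value)
    print('grasps returned:', len(resp.grasps))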

2014-04-20 06:47:54 -0500 marked best answer interactive launch

Does ROS have a way to interactively start and stop nodes? For example, taking a bunch of rosnodes and having a start/stop button next to each and showing their output?
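
Absent a ready-made GUI, here is a minimal sketch of the underlying mechanics: wrap each node in a subprocess handle that a start/stop button could drive. The package and node names are illustrative.

    import subprocess

    class NodeHandle(object):
        """Start/stop one node via rosrun; output goes to this terminal."""

        def __init__(self, pkg, node_type):
            self.cmd = ['rosrun', pkg, node_type]
            self.proc = None

        def start(self):
            if self.proc is None:
                self.proc = subprocess.Popen(self.cmd)

        def stop(self):
            if self.proc is not None:
                self.proc.terminate()
                self.proc.wait()
                self.proc = None

    talker = NodeHandle('rospy_tutorials', 'talker')
    talker.start()
    # ... later, e.g. from a button callback ...
    talker.stop()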

2014-04-20 06:47:53 -0500 marked best answer shadows of the household objects in gazebo

Hello, does anybody know how to spawn complex objects in gazebo and still render the shadows nicely?

I am using the meshes from the household objects database and spawning URDFs in gazebo. However, I get the following error:

gazebo: /tmp/buildd/ros-cturtle-visualization-common-1.2.0/debian/ros-cturtle-
visualization-common/opt/ros/cturtle/stacks/visualization_common/ogre/build
/ogre_src_v1-7-1/OgreMain/src/OgreShadowCaster.cpp:348: 
virtual void Ogre::ShadowCaster::generateShadowVolume(Ogre::EdgeData*, const   
Ogre::HardwareIndexBufferSharedPtr&, const Ogre::Light*,  
std::vector<Ogre::ShadowRenderable*, Ogre::STLAllocator<Ogre::ShadowRenderable*,  
 Ogre::CategorisedAllocPolicy<(Ogre::MemoryCategory)0u> > >&, long unsigned int): Assertion 
`numIndices <= indexBuffer->getNumIndexes() && "Index buffer overrun while generating 
shadow volume!! " "You must increase the size of the shadow index buffer."' failed.
[gazebo-1] process has died [pid 2190, exit code -6].
log files: /afs/athena.mit.edu/user/m/i/mitko/.ros/log/7bb452c2-ad85-11e0-9e9f-0025900a1f7d
/gazebo-1*.log

By asking around I found that turning the shadows off in the world file fixes this (I tried it, and it does), but then the rendering is quite ugly, and I actually need the rendering to be good (in order to export some videos). Does anybody know how I can increase the shadow index buffer, or use a different shadowing technique? I tried:

<shadowTechnique>textureAdditive</shadowTechnique>

but the error persists.

Also, any ideas on how I can turn the shadows off for only the household objects?

Thank you.

2014-04-20 06:47:48 -0500 marked best answer Programmatically set the camera pose in gazebo or rviz.

Hello, I am trying to automatically generate a bunch of videos of a robot manipulating objects. I figured out a way to record the screen from gazebo or rviz, but I can't find any tutorial on how to programmatically move the camera around the robot.

Do you know if the viewpoint can be set programmatically in either gazebo or rviz? Gazebo shows XYZ and RPY as you move with the mouse, so it seems reasonable to think that one can set those. Maybe in rviz I should set the fixed frame to be a frame offset from the head of the robot?
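
A sketch of that tf-frame idea: broadcast a camera_view frame at a fixed offset from the robot's head and point the rviz view at it. The offsets and the head_pan_link parent are assumptions.

    import rospy
    import tf

    rospy.init_node('camera_view_frame')
    br = tf.TransformBroadcaster()
    rate = rospy.Rate(30)
    while not rospy.is_shutdown():
        # 1.5 m in front of the head, 0.5 m up, facing back at the robot.
        br.sendTransform((1.5, 0.0, 0.5),
                         tf.transformations.quaternion_from_euler(0, 0, 3.14159),
                         rospy.Time.now(),
                         'camera_view',     # child frame for rviz to track
                         'head_pan_link')   # parent: PR2 head
        rate.sleep()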

2014-04-20 06:47:41 -0500 marked best answer custom subset of the household objects database

Is it possible to use the household objects database with a custom object set? What I mean is to use the same objects already in the database, but instead of loading all of them, to be able to specify which subset I want (by giving a list of model ids). Right now the only options I know of are "REDUCED_MODEL_SET" and "", which gives me all the objects in the database.

Here is part of a launch file where this is set:

  <include file="$(find tabletop_object_detector)/launch/tabletop_node.launch"/>
  <param name="/tabletop_node/use_database" value="true"/>
  <param name="/tabletop_node/model_set" value="REDUCED_MODEL_SET" />
  <param name="/tabletop_node/get_model_list_srv" value="/objects_database_node/get_model_list" />
  <param name="/tabletop_node/get_model_mesh_srv" value="/objects_database_node/get_model_mesh" />

  <node pkg="tabletop_collision_map_processing" name="tabletop_collision_map_processing"
        type="tabletop_collision_map_processing_node" respawn="false" output="screen"/>
  <param name="tabletop_collision_map_processing/get_model_mesh_srv"
         value="/objects_database_node/get_model_mesh" />
2014-04-20 06:47:38 -0500 marked best answer need topological map online, graph_slam doesn't work

Has anybody had success using the graph_slam package described at http://www.ros.org/wiki/graph_slam ? I tried checking out the source and building it, but the dependencies do not build.

Or do you have any suggestions on how I can build a topological map out of a SLAM gridmap fairly efficiently on the fly?

Edit: I am using the cturtle-pr2all stack
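
To make the second question concrete, here is a minimal sketch (independent of graph_slam) of turning an occupancy grid into a coarse topological graph: downsample the grid into blocks and connect adjacent fully-free blocks. The block size and free threshold are assumptions.

    import numpy as np

    def grid_to_graph(grid, cell=10, free_thresh=0.2):
        """grid: 2D array of occupancy probabilities in [0, 1]."""
        h, w = grid.shape
        nodes, edges, index = [], [], {}
        for i in range(0, h - cell, cell):
            for j in range(0, w - cell, cell):
                if grid[i:i + cell, j:j + cell].max() < free_thresh:
                    index[(i, j)] = len(nodes)          # block is all free
                    nodes.append((i + cell // 2, j + cell // 2))
        for (i, j), n in index.items():
            for di, dj in ((cell, 0), (0, cell)):       # 4-connectivity
                if (i + di, j + dj) in index:
                    edges.append((n, index[(i + di, j + dj)]))
        return nodes, edges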

2014-04-20 06:47:37 -0500 marked best answer tf spits a lookup exception

I'm using the tf package, specifically the tf.TransformerROS class in Python. If I call its methods transformPose or transformPointCloud, it raises a LookupException:

tf.LookupException: Frame id /map does not exist! When trying to transform between /base_link and /map.

However, if I run tf_monitor right after that, I see that tf has knowledge of the frame:

~>rosrun tf tf_monitor base_link map 
Waiting for transform chain to become available between /base_link and /map 

RESULTS: for /base_link to /map 
Chain is: /base_link -> /base_footprint -> /odom_combined -> /map 
Net delay     avg = 0.00684423: max = 0.054 

Frames: 
Frame: /base_footprint published by /robot_pose_ekf Average Delay: 0.0185 Max Delay: 0.026 
Frame: /base_link published by /robot_state_publisher Average Delay: 0.0075 Max Delay: 0.008 
Frame: /odom_combined published by /slam_gmapping Average Delay: 0 Max Delay: 0 

All Broadcasters: 
Node: /robot_pose_ekf 33.5196 Hz, Average Delay: 0.0185 Max Delay: 0.026 
Node: /robot_state_publisher 55.5556 Hz, Average Delay: 0.0075 Max Delay: 0.008 
Node: /slam_gmapping 25 Hz, Average Delay: 0 Max Delay: 0

I'm running a gazebo simulation and have ROBOT=sim set. Any ideas how to get transformPointCloud to work?

Edit: Here is a (simplified) version of the part of the code that deals with tf:

import roslib
roslib.load_manifest(PKG)
import rospy
import tf

class Class():
    def __init__(self):
        #...
        self.tfTrasformer = tf.TransformerROS()
        #...


    # somewhere in the methods
    obj.object.cluster = self.tfTrasformer.transformPointCloud("/map",obj.object.cluster)
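
For context on the snippet above: as far as I understand, tf.TransformerROS only stores transforms that are explicitly fed to it and does not subscribe to /tf, so it never learns about /map. Its subclass tf.TransformListener does subscribe, e.g.:

    import rospy
    import tf

    rospy.init_node('tf_example')
    listener = tf.TransformListener()  # subscribes to /tf, unlike TransformerROS
    listener.waitForTransform('/map', '/base_link',
                              rospy.Time(0), rospy.Duration(4.0))
    # cloud = listener.transformPointCloud('/map', cloud)
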
2014-04-20 06:47:33 -0500 marked best answer how to decrease move_arm speed during execution

When I tell the PR2 to perform some tabletop manipulation, it moves its arm to the side very fast - beyond the speed I consider safe.

I looked at the info here, http://answers.ros.org/question/641/h... which describes how to set up the joint limits (through a global YAML file).

However, I'm sharing the use of the robot with a lot of people and would not like to mess with the global config file. Is there any way to set these joint limits during execution? Maybe through a ROS message, or by setting a parameter on the ROS core?

Also, any ideas of what are good joint limits will be really appreciated.

The safety speed I'm looking for is for handling household objects.

Edit: I am using a high-level function to move the arm: from the pr2_pick_and_place_demo Python class PickAndPlaceManager (papm), I am calling papm.move_arm_to_side(). Some people in the lab mentioned that this looks like a zero-time movement - the robot tries to move the arm as fast as it can, which moves the object too fast. I'd prefer not to mess with the internals of that function, for portability reasons.
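
One hedged fallback if no parameter-level solution exists: command the arm-to-side motion myself as an explicit joint trajectory whose generous time_from_start makes the controller interpolate slowly. The action name is the standard PR2 right-arm trajectory controller; the goal angles are illustrative, not the exact move_arm_to_side posture.

    import rospy
    import actionlib
    from pr2_controllers_msgs.msg import (JointTrajectoryAction,
                                          JointTrajectoryGoal)
    from trajectory_msgs.msg import JointTrajectoryPoint

    rospy.init_node('slow_arm_to_side')
    client = actionlib.SimpleActionClient(
        'r_arm_controller/joint_trajectory_action', JointTrajectoryAction)
    client.wait_for_server()

    goal = JointTrajectoryGoal()
    goal.trajectory.joint_names = [
        'r_shoulder_pan_joint', 'r_shoulder_lift_joint',
        'r_upper_arm_roll_joint', 'r_elbow_flex_joint',
        'r_forearm_roll_joint', 'r_wrist_flex_joint', 'r_wrist_roll_joint']
    point = JointTrajectoryPoint()
    point.positions = [-2.0, 0.0, -1.5, -2.0, 0.0, -0.1, 0.0]  # illustrative
    point.time_from_start = rospy.Duration(8.0)  # larger = slower motion
    goal.trajectory.points = [point]
    client.send_goal(goal)
    client.wait_for_result()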

2014-04-20 06:47:32 -0500 marked best answer quaternions orientation representation

Hi, I'm trying to understand geometry_msgs/Pose.msg. Is there a way to non-ambiguously explain how quaternions are used to represent orientation? I understand how quaternions represent rotation: you take some initial orientation and rotate it around the vector (x,y,z) by 2*acos(w). However, when I try to imagine an orientation, I can't understand what the initial orientation is. Is it set to some constant in ROS?

More specifically, if I have an orientation (w,x,y,z) in 3D, what orientation does this correspond to in 2D?
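
A small worked example of the convention as I understand it: the identity quaternion (x=y=z=0, w=1) is the reference orientation, i.e. the frame's axes aligned with its parent frame, and for a planar robot the 2D heading is the yaw recovered from the quaternion:

    import tf.transformations as tft

    q = tft.quaternion_from_euler(0, 0, 1.57)  # 90 deg yaw; returns (x, y, z, w)
    roll, pitch, yaw = tft.euler_from_quaternion(q)
    print(yaw)  # ~1.57 rad: the heading in 2D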

Thanks in advance!

2014-04-20 06:47:24 -0500 marked best answer run gazebo remotely

Hello, I am experimenting with the pick and place demo: http://www.ros.org/wiki/pr2_simulator... I can run it on two machines: a dual-core and a 16-core machine. However, if I try to connect to the 16-core machine via SSH and run it there, Gazebo dies. I can see the gazebo window for a couple of seconds and then it fails.

I have tried connecting with both "ssh -X" and "ssh -Y"; neither works. Also, I do not have direct root access to the 16-core machine, but I know someone who does.

Edit: I was told that I can run gazebo headless, but that is not going to be enough.