
jdorich's profile - activity

2022-02-26 12:16:36 -0500 received badge  Guru (source)
2022-02-26 12:16:36 -0500 received badge  Great Answer (source)
2017-02-07 19:57:16 -0500 received badge  Good Answer (source)
2017-02-07 19:57:16 -0500 received badge  Enlightened (source)
2016-03-14 08:28:10 -0500 received badge  Nice Answer (source)
2013-10-30 11:55:44 -0500 commented question How to create a header declaration for the following function

Sorry, I meant the "#include <ros/ros.h>" and those

2013-10-30 10:24:39 -0500 commented question How to create a header declaration for the following function

Where do you include the ROS headers?

2013-10-16 16:07:25 -0500 commented answer nao control rate

You only need to call spin() or spinOnce() if you have subscribed to a topic. The loop_rate.sleep() performs the delay you require in this situation.

2013-10-10 02:58:37 -0500 commented answer extract SensorMessages/LaserScan from SensorMessages/PointCloud2 created in simulation

Great! See updated answer. I forgot that you could set standalone as an arg rather than loading a standalone nodelet manager.

2013-10-08 13:46:50 -0500 commented answer Storing kinect depth data from turtlebot in an array.

Yes, if you just need the depth at a single coordinate in the depth image, this will work fine.

2013-10-08 13:45:00 -0500 received badge  Commentator
2013-10-08 13:45:00 -0500 commented question Unused argument problem when launching /etc/ros/openni_head.launch on PR2 Hydro

Is there a reason you are not using "openni_launch openni.launch" similar to here: https://github.com/turtlebot/turtlebot/blob/hydro-devel/turtlebot_bringup/launch/3dsensor.launch

2013-10-08 13:21:46 -0500 answered a question Storing kinect depth data from turtlebot in an array.

This function should do the trick and handles several cases, such as endianness and both the raw and rectified depth images.

Of course, if you know the endianness of your system (and the image endianness doesn't change) as well as which depth image you are subscribed to (raw or rect) then this can be simplified a lot.

Note: the function returns the depth in millimeters.

#include <sensor_msgs/Image.h>

// Union for reinterpreting 4 raw bytes as a float
typedef union U_FloatParse {
    float float_data;
    unsigned char byte_data[4];
} U_FloatConvert;

// Returns the depth at (height_pos, width_pos) in millimeters, or -1 if invalid
int ReadDepthData(unsigned int height_pos, unsigned int width_pos, sensor_msgs::ImageConstPtr depth_image)
{
    // If position is invalid
    if ((height_pos >= depth_image->height) || (width_pos >= depth_image->width))
        return -1;
    int index = (height_pos*depth_image->step) + (width_pos*(depth_image->step/depth_image->width));
    // If data is 4 byte floats (rectified depth image)
    if ((depth_image->step/depth_image->width) == 4) {
        U_FloatConvert depth_data;
        int i, endian_check = 1;
        // If the image and machine endianness match, copy the bytes in order
        if ((depth_image->is_bigendian && (*(char*)&endian_check != 1)) ||  // Both big endian
           ((!depth_image->is_bigendian) && (*(char*)&endian_check == 1))) { // Both little endian
            for (i = 0; i < 4; i++)
                depth_data.byte_data[i] = depth_image->data[index + i];
            // Make sure data is valid (check if NaN)
            if (depth_data.float_data == depth_data.float_data)
                return int(depth_data.float_data*1000);
            return -1;  // If depth data invalid
        }
        // Otherwise the endianness differs: copy the bytes in reverse order
        for (i = 0; i < 4; i++)
            depth_data.byte_data[i] = depth_image->data[3 + index - i];
        // Make sure data is valid (check if NaN)
        if (depth_data.float_data == depth_data.float_data)
            return int(depth_data.float_data*1000);
        return -1;  // If depth data invalid
    }
    // Otherwise, data is 2 byte integers (raw depth image)
    int temp_val;
    // If big endian
    if (depth_image->is_bigendian)
        temp_val = (depth_image->data[index] << 8) + depth_image->data[index + 1];
    // If little endian
    else
        temp_val = depth_image->data[index] + (depth_image->data[index + 1] << 8);
    // In the raw image a depth of 0 marks an invalid reading
    if (temp_val != 0)
        return temp_val;
    return -1;  // If depth data invalid
}
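To see what the union trick is doing, here is a small self-contained sketch (no ROS required) of the byte-order handling; FloatBytes and DecodeDepth are names I made up for this illustration, not part of the function above.

```cpp
#include <cassert>

// Illustration of the union trick used in ReadDepthData above:
// the same 4 bytes are viewed either as raw bytes or as a float.
union FloatBytes {
    float f;
    unsigned char b[4];
};

// Copy 4 bytes into the union, reversing them when the source byte order
// differs from this machine's -- the same job as the two copy loops above.
float DecodeDepth(const unsigned char *bytes, bool reversed)
{
    FloatBytes out;
    for (int i = 0; i < 4; i++)
        out.b[i] = reversed ? bytes[3 - i] : bytes[i];
    return out.f;
}
```

With matching byte order the float round-trips unchanged; with mismatched order the reversal recovers it, which is why the answer only swaps bytes when exactly one of the image and the machine is big endian.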

Hopefully this helps.

2013-10-08 13:04:11 -0500 answered a question extract SensorMessages/LaserScan from SensorMessages/PointCloud2 created in simulation

I believe you can run pointcloud_to_laserscan on any sensor_msgs/PointCloud2 data with the following (in this case I've matched what you have put above for the vrep):

<launch>
  <!-- Fake Laser Standalone nodelet-->
  <node pkg="nodelet" type="nodelet" name="vrep_laser" args="standalone pointcloud_to_laserscan/CloudToScan" respawn="true">
    <param name="output_frame_id" value="/vrep_frame"/>
    <param name="min_height" value="-0.15"/>
    <param name="max_height" value="0.15"/>
    <remap from="cloud" to="/vrep/PointCloud2"/>
  </node> 

</launch>

Adjust the "output_frame_id", "min_height", and "max_height" as necessary for the vrep sensor_msgs/PointCloud2 data.

This tutorial has information on nodelets and running standalone nodelets, and this turtlebot file shows an example of how to run the pointcloud_to_laserscan nodelet. The code I have above follows these.

Hopefully this helps.

2013-10-07 16:04:10 -0500 commented answer How to retrieve data from Kinect using rviz

Agreed, lsusb probably wouldn't have reported that Kinects were connected if that was the issue. Take a look at the bottom half of the answer by dan in this answer: http://answers.ros.org/question/9252/openni_camera-compiling-problem/ The USB permissions might be wrong, or it could be a libusb issue.

2013-10-07 13:45:23 -0500 commented question extract SensorMessages/LaserScan from SensorMessages/PointCloud2 created in simulation

You may just need to adjust some parameters. The pointcloud_to_laserscan essentially just takes a horizontal slice from the point cloud and makes that the "laser scan". The points in the point cloud could be out of that height range.
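To give some intuition for that slice, the sketch below (plain Python, names mine — this is not the nodelet's actual code) keeps only the points whose z falls inside the height band and turns each one into a bearing/range pair, which is essentially what pointcloud_to_laserscan does:

```python
import math

# Hypothetical illustration of the "horizontal slice" idea
def cloud_to_scan(points, min_height=-0.15, max_height=0.15):
    """Keep points inside the height band and convert each to (angle, range)."""
    scan = []
    for x, y, z in points:
        if min_height <= z <= max_height:
            # Project the surviving point onto the plane: bearing and distance
            scan.append((math.atan2(y, x), math.hypot(x, y)))
    return scan

cloud = [(1.0, 0.0, 0.0),    # inside the band -> kept
         (0.0, 2.0, 0.5),    # too high        -> dropped
         (3.0, 0.0, -0.1)]   # inside the band -> kept
print(cloud_to_scan(cloud))  # two (angle, range) pairs
```

If your cloud's points all sit outside min_height/max_height, the resulting "scan" is empty — which is why adjusting those parameters can fix an apparently missing laser scan.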

2013-10-07 13:14:28 -0500 answered a question How can I subscribe joint_controller/state from dynamixel

In the callback

current_position = data.current_pos

should do the trick. Also note that this:

rospy.Subscriber("/joint6_controller/state", current_pos, callback)

should be this:

rospy.Subscriber("/joint6_controller/state", JointState, callback)

The dynamixel controller Joint State message documentation is here. If you want to access any other member, just call "data.[member name goes here]".

Hope this helps.

2013-10-07 12:59:41 -0500 answered a question nao control rate

You could use ros::Rate()

ros::NodeHandle n;
ros::Publisher cmd_pub = n.advertise<geometry_msgs::Twist>("/cmd_vel", 1000);
ros::Rate loop_rate(2);  // 2 Hz

geometry_msgs::Twist msg;  // fill in the desired velocities

while (ros::ok()) {
    cmd_pub.publish(msg);
    loop_rate.sleep();
}

The "loop_rate" input is a frequency in Hertz so you can adjust that as needed. See this tutorial for more details.

2013-10-06 17:13:34 -0500 answered a question how to edit the ~/.bashrc file

To edit it:

vi ~/.bashrc

and add that line. You could also use (as SaiHV answered)

gedit ~/.bashrc

Make sure to

source ~/.bashrc

after editing it.

2013-10-06 14:28:47 -0500 commented answer How to retrieve data from Kinect using rviz

I'm out of ideas. Those are all the issues I've run into with the Kinect. It does seem, though, that the problem is ROS detecting the Kinect.

2013-10-06 14:11:06 -0500 commented answer How to retrieve data from Kinect using rviz

Try: "killall XnSensorServer" then "roslaunch openni_launch openni.launch" and see if you get the same error as before.

2013-10-06 12:35:35 -0500 commented answer How to retrieve data from Kinect using rviz

Try running the command "lsusb" on the computer that the Kinect is plugged into. There should be three USB devices listed that are associated with the Kinect.

2013-10-06 11:38:41 -0500 received badge  Teacher (source)
2013-10-06 11:23:17 -0500 received badge  Editor (source)
2013-10-06 11:22:31 -0500 answered a question Any one knows how to apply the camera calibration matrix to OpenNI

If you are using "openni_launch"

roslaunch openni_launch openni.launch

with the rgb_camera_info_url and depth_camera_info_url arguments. See the openni_launch documentation: http://wiki.ros.org/openni_launch#Arguments

2013-10-06 10:42:04 -0500 commented question Using two roscreate-pkg packages and being able to read the src file code
2013-10-06 10:29:28 -0500 commented answer How to retrieve data from Kinect using rviz

When you launch "turtlebot_bringup 3dsensor.launch", does it report any errors?

2013-10-06 10:25:03 -0500 commented answer How to retrieve data from Kinect using rviz

I believe in ROS fuerte a breaker (breaker 1 possibly?) had to be enabled to have the turtlebot base power the Kinect, but I'm not sure if this is the case in groovy and later. Also, the Kinect doesn't work with USB 3.0, so make sure it is plugged into a 2.0 port.

2013-10-06 10:22:29 -0500 commented question Using two roscreate-pkg packages and being able to read the src file code

See this ROS answer. In the case you mention, you have to export the cflags and lflags in package a.

2013-10-06 10:14:56 -0500 received badge  Supporter (source)
2013-10-06 09:33:59 -0500 commented question I use this syntax "roscreate-pkg pkgname depend1 depend2 depend3" ...but the depends are not getting created

Do you mean that the depends are not put into the manifest?