
sudhanshu_mittal's profile - activity

2018-01-02 14:11:49 -0500 received badge  Necromancer (source)
2016-08-08 01:43:59 -0500 marked best answer Is gmapping possible with tilted kinect using turtlebot ?

I am trying to use the TurtleBot both for gmapping (autonomous exploration) and for capturing point clouds of the ground. On the TurtleBot, gmapping with the Kinect is implemented by the "turtlebot_navigation" package along with "turtlebot_bringup". The map is built from laser_scan data, which is actually derived from the point cloud by selecting a particular plane of points. When the Kinect is mounted at a greater height with zero tilt/pitch angle, the point cloud of the nearby ground (roughly the first 2 m) is not observed. So my question is: is it possible to build the map with the Kinect tilted downwards, i.e. by selecting a different plane of points (at some angle) from the point cloud to use as the laser scan?

2016-04-27 02:04:54 -0500 marked best answer Not able to visualize or record point cloud using openni_viewer

I am not able to visualize the point cloud data using openni_viewer, and consequently I cannot capture a .pcd image. There is no error message anywhere, although the point cloud stream is displayed fine in rviz. I am using a Kinect sensor with the pcl17 library. To view the live point cloud stream I use the prebuilt binary ./pcl_openni_viewer, and to capture/record I use ./pcl_openni_pcd_recorder. Any suggestions on what I might be missing?

2015-06-11 20:42:19 -0500 answered a question map_server error when loading map

In the launch file, instead of

    <node pkg="map_server" name="map_server" type="map_server" args="`rospack find follow_me_2dnav`/launch/mylaserdata_1503201800.yaml" />

use this

    <node pkg="map_server" name="map_server" type="map_server" args="$(find follow_me_2dnav)/launch/mylaserdata_1503201800.yaml" />

For more details: link text

$(find pkg) is the right way to resolve a package path inside a launch file, and it correctly respects package overlays.
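
For reference, a minimal complete launch file using this pattern might look like the sketch below (the follow_me_2dnav package and map file names are taken from the question and are assumed to exist in your workspace):

    <launch>
      <!-- map_server resolves the map YAML via $(find ...), so the path
           keeps working no matter where the package is installed -->
      <node pkg="map_server" name="map_server" type="map_server"
            args="$(find follow_me_2dnav)/launch/mylaserdata_1503201800.yaml" />
    </launch>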

2015-06-01 06:43:05 -0500 received badge  Famous Question (source)
2015-03-11 17:14:16 -0500 answered a question Image subscriber - window popup does not appear

The code given in the tutorial seems to be working perfectly fine.

This looks like a configuration problem on your machine. I hope you have tried this: http://stackoverflow.com/questions/12...

2015-03-11 16:33:48 -0500 commented question depth and RGB registration error in Kinect pointcloud data

Check this link. The colors are shifted towards the left even in the depth_registered point clouds.

2015-03-11 16:00:43 -0500 received badge  Notable Question (source)
2015-02-18 22:44:32 -0500 received badge  Popular Question (source)
2015-02-11 14:37:54 -0500 asked a question depth and RGB registration error in Kinect pointcloud data

I have set the parameter 'depth_registration' to TRUE in the openni.launch file of the openni_launch package. Still, the point cloud stream shows a mismatch between the depth and RGB data. I observed the point cloud data using rviz.

I am using Ubuntu 12.04 with ROS fuerte.

I would appreciate any tips on how to correct this mismatch. One way is to adjust the transformation manually by trial and error, but it would be really helpful if some kind of automatic procedure is available in ROS.

2014-05-25 21:53:16 -0500 received badge  Nice Answer (source)
2014-04-23 19:52:35 -0500 received badge  Famous Question (source)
2014-01-08 05:42:39 -0500 received badge  Notable Question (source)
2013-12-14 06:28:37 -0500 answered a question kinect extrinsic parameters

The extrinsic parameters for the tf describing the Kinect's position on the TurtleBot are defined in the turtlebot_calibration.xacro file. I use ROS Fuerte; on my system this file is at /opt/ros/fuerte/stacks/turtlebot/turtlebot_description/urdf/. Locate the file correspondingly on your system.

Using this file, the x, y, z, yaw, pitch, and roll of the Kinect can be set, as sketched below.
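
As a rough sketch of what such settings look like (the property names and values below are illustrative only — check the actual turtlebot_calibration.xacro on your install for the exact names):

    <!-- Illustrative only: Kinect mounting offsets expressed as xacro
         properties (metres and radians); names/values may differ on your system -->
    <property name="cam_px" value="-0.087" />  <!-- x offset -->
    <property name="cam_py" value="-0.0125" /> <!-- y offset -->
    <property name="cam_pz" value="0.287" />   <!-- z offset -->
    <property name="cam_or" value="0.0" />     <!-- roll -->
    <property name="cam_op" value="0.0" />     <!-- pitch -->
    <property name="cam_oy" value="0.0" />     <!-- yaw -->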

2013-12-12 12:56:58 -0500 commented answer Turtlebot gmapping demo not generating map, any help?

When you launch gmapping_demo, do you get anything in rviz from the Kinect? Is it just a boundary problem, or are you getting nothing from the Kinect at all? I hope you have already checked the topics being visualized in rviz.

2013-12-12 12:43:14 -0500 answered a question Turtlebot gmapping demo not generating map, any help?

There is a problem with the depth_registration setting. For using the Kinect as a laser in an autonomous exploration application, you only need the depth data.

To solve the above problem, you need to change the parameter depth_registration to false in the openni.launch file. I use ROS Fuerte; on my system "openni.launch" is found at /opt/ros/fuerte/stacks/openni_launch/launch. Check correspondingly for your system.

If this does not work for you, change the parameter depth_registration to false in the kinect.launch file as well and launch kinect.launch separately. A sketch is given below.
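
As a minimal sketch (assuming your version of openni.launch exposes depth_registration as a launch argument), you could also disable it when including the file from your own launch file:

    <launch>
      <!-- Bring up the Kinect with depth registration turned off -->
      <include file="$(find openni_launch)/launch/openni.launch">
        <arg name="depth_registration" value="false" />
      </include>
    </launch>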

Thank you

2013-11-24 19:50:21 -0500 received badge  Popular Question (source)
2013-11-20 19:00:01 -0500 answered a question Improving the map or odometry for slam_gmapping

I would like to suggest a few measures which you should try out:

1. One possible problem could be stalling of CPU cycles. First, check the CPU load on your laptop while the gmapping package is running on the TurtleBot, and simultaneously observe the TurtleBot's movement in rviz. If you see the CPU usage hitting close to 100%, or if rviz hangs or goes offline for a short interval every time the TurtleBot moves, then one possible solution is to increase the value of the "map_update_interval" parameter in the gmapping package to around 25-30.

2. If the problem still exists, another solution is to increase the "maxUrange" parameter of the gmapping package according to the environment. In my experience, if your room is about 10 m by 5 m, then your range should be at least around 5 m.

3. Another solution might be to decrease the angular acceleration and angular velocity parameters of the robot. You may find these parameters in the "base_local_planner_params" file of your navigation configuration.

4. Finally, you can try decreasing the values of the "linearUpdate" and "angularUpdate" parameters and increasing the number of particles in the gmapping launch file, for better robot localization; see the sketch after this list.
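
To make the parameter names concrete, here is a minimal sketch of a slam_gmapping launch file setting the values discussed above; the numbers are only example starting points and need tuning for your robot and environment:

    <launch>
      <node pkg="gmapping" type="slam_gmapping" name="slam_gmapping" output="screen">
        <param name="map_update_interval" value="25.0" /> <!-- update the map less often to save CPU -->
        <param name="maxUrange"           value="5.0" />  <!-- usable scan range in metres -->
        <param name="linearUpdate"        value="0.2" />  <!-- process a scan every 0.2 m of travel -->
        <param name="angularUpdate"       value="0.2" />  <!-- process a scan every 0.2 rad of rotation -->
        <param name="particles"           value="80" />   <!-- more particles improve localization at higher CPU cost -->
      </node>
    </launch>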

This worked out for me.

2013-11-19 11:14:37 -0500 answered a question Where is directory of the pcl source codes?

It will be in /opt/ros/groovy/share/ if it is not in /opt/ros/groovy/stacks/.

Or, wherever it is, just find it with the command locate extract_indices.cpp in Ubuntu.

2013-11-19 10:09:31 -0500 answered a question Local Planner does not plan through inflated costs?

There is a ROS package called costmap_2d. The costmap_2d package provides a configurable structure that maintains information about obstacle occupancy at every position in the map in the form of an occupancy grid, which helps the robot navigate.

In particular for your problem, there is a parameter called "inflation_radius", whose default is 0.55 m. I am using ROS Fuerte; on my system this parameter is found in Costmap2DConfig.h under /opt/ros/fuerte/stacks/navigation/costmap_2d/cfg/cpp/costmap_2d/.
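
In practice, rather than editing the generated header, you would normally override this parameter in your own navigation configuration. A minimal sketch (assuming the usual move_base setup, where the costmaps live in the global_costmap and local_costmap namespaces) could look like this:

    <launch>
      <node pkg="move_base" type="move_base" name="move_base" output="screen">
        <!-- Example only: shrink the inflation radius so plans can pass closer to obstacles -->
        <param name="global_costmap/inflation_radius" value="0.30" />
        <param name="local_costmap/inflation_radius"  value="0.30" />
      </node>
    </launch>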

For more info, you should refer to the costmap_2d wiki page.

2013-11-19 09:53:50 -0500 answered a question In PointCloud2 what is Position Transformer property

The Position Transformer property lets you choose how the position of each point in the XYZ coordinate system is derived so that the cloud is displayed sensibly in 3D. It is similar to the Color Transformer, which lets you choose among different ways of assigning colors to the points for sensible perception.

2013-11-14 06:40:47 -0500 asked a question mismatch in pointcloud2 color and depth data

I am using ROS Fuerte with Ubuntu 12.04.

I am capturing the registered colored point cloud. In the captured pcd file, the depth data looks great, but the color of the complete point cloud is shifted downwards by some distance. As a result, the top part of any object carries the background color, and some of the object's color overlaps onto the floor surface when the object is placed on the floor.

I first ran roslaunch openni_launch openni.launch and then viewed the topic /camera/depth_registered/points in rviz to watch the live stream. The same error is present there as well.

Am I using the incorrect topic? Maybe some frame transformation is going wrong. I am not able to figure out the error. Please help!

Thank you

2013-11-12 09:00:17 -0500 answered a question Moving Turtlebot to a goal without a map

For simply moving the robot, without any information about the environment (a map), run the following command:

Topic name: "/move_base_simple/goal"

Message type: "geometry_msgs/PoseStamped"

    rostopic pub /move_base_simple/goal geometry_msgs/PoseStamped '{ header: { frame_id: "base_link" }, pose: { position: { x: -1.0, y: 0 }, orientation: { x: 0, y: 0, z: 0, w: 1.0 } } }'

2013-11-12 07:34:53 -0500 answered a question rviz - storing point cloud selection

There is no direct method for storing a point cloud selection in rviz itself, as far as I know. However, there is an open-source annotation tool called "LabelMe" available for labeling point clouds. This toolbox was created by the CSAIL lab at MIT and is widely used for providing ground-truth values for segmentation problems.

LabelMe3D Toolbox link to website

2013-11-12 02:20:34 -0500 received badge  Necromancer (source)
2013-11-11 23:20:44 -0500 commented answer Problem with gmapping

http://wiki.ros.org/gmapping

2013-11-11 11:59:36 -0500 commented answer How to combine a camera image and a laser pointcloud to create a color pointcloud?

There are no standard instructions for this calibration, as far as I know. I can explain how I would approach the task: suppose I have an image of size 640x480 and a laser point cloud of the same size in the x-y direction (the minimum requirement); this is the case for Kinect data. The depth point cloud is available as a matrix of dimension [cloud.size()] x 1, i.e. its height is 1 and its length equals the total number of points. Through manual matrix handling (reshaping the point cloud matrix into a 640x480 matrix), I can trace the XYZ values from the laser point cloud and the RGB values from the image into an output color point cloud of type PointXYZRGB.

2013-11-10 04:52:22 -0500 received badge  Commentator
2013-11-10 04:52:22 -0500 commented question Navigation by kinect, Point Cloud to Laser Scan

Did you check your rostopic list after running the launch file? Does it show the "/scan" topic?

2013-11-10 04:44:48 -0500 commented question Navigation by kinect, Point Cloud to Laser Scan

Which version of ROS are you using?

2013-11-09 10:37:41 -0500 answered a question How to combine a camera image and a laser pointcloud to create a color pointcloud?

Yes, it is very much possible to combine the depth data from the laser and the RGB data from the image to form an RGB-D/color point cloud using the PCL library.

I hope you can define an x-y correspondence between each point of the laser point cloud and each pixel of the image using tf, given that the laser point cloud and the image have the same size in the x and y directions, as is the case for the Kinect (640x480). When writing the combined point cloud, use the PointXYZRGB point type. The following is a small example:

    #include <pcl/point_cloud.h>
    #include <pcl/point_types.h>
    pcl::PointCloud<pcl::PointXYZRGB> cloud;
    cloud.points.resize(1);                                  // make room for one point
    uint8_t r = 255, g = 0, b = 0;                           // example: red color
    uint32_t rgb = ((uint32_t)r << 16 | (uint32_t)g << 8 | (uint32_t)b);
    cloud.points[0].rgb = *reinterpret_cast<float*>(&rgb);   // pack the RGB bytes into the float field

For how to write a simple PCD file, you can find the details here.

2013-10-30 15:08:51 -0500 answered a question Unable to check Turtlebot Cable

There is a set of instructions you need to follow:

1. Check that initially no USB device is connected to the system.

2. The first USB device connected to your system should be the TurtleBot cable, which will then occupy the ttyUSB0 port.

3. Given the problem you describe above, you need to grant access rights on your ttyUSB0 port, using the command "sudo chmod 0777 /dev/ttyUSB0".

4. Now run roscore and check whether the problem still exists.

2013-10-30 14:48:56 -0500 commented answer Which technique for 2D SLAM is easy to learn?

@sarkar: First of all, if you have a ROS-supported robot like the TurtleBot, then you just have to launch a few files. If you are working with a self-made robot, then a few extra steps need to be performed before using the built-in ROS packages. Let me know which robot you are using.

2013-10-30 14:43:38 -0500 commented answer Which technique for 2D SLAM is easy to learn?

@sarkar: I have no experience with the BeagleBone Black, but I can definitely say that it is not possible to perform 2D SLAM/gmapping with a simple IR proximity sensor, since it only gives a single-point distance and the measured data is raw.

2013-10-29 13:34:08 -0500 received badge  Necromancer (source)
2013-10-29 12:20:37 -0500 commented answer Which technique for 2D SLAM is easy to learn?

2D SLAM is definitely much easier to implement than 3D SLAM. I have implemented 2D SLAM with the TurtleBot and have seen people implement 2D SLAM on self-built robots using the navigation stack. From my experience, 3D SLAM is harder to implement and computationally very expensive.

2013-10-29 12:08:50 -0500 answered a question PCL openni_grabber to ROS fuerte

Hi, if you are looking for a way to watch the live point cloud stream in ROS, the best and simplest way is through rviz. Follow these instructions:

1. roscore

2. roslaunch openni_launch openni.launch (in another tab)

3. rosrun rviz rviz (in another tab)

Once rviz is open, change the Fixed Frame in Global Options to "/camera_depth_optical_frame". Next, add a PointCloud2 display and change its topic to the points topic (sensor_msgs/PointCloud2).

2013-10-29 11:49:47 -0500 answered a question Which technique for 2D SLAM is easy to learn?

Hi, 2D SLAM gives a 2D map of the environment, often called a gmap (refer to http://wiki.ros.org/gmapping). The gmap gives you detailed information about object occupancy in that 2D plane. The occupancy grid generated by gmapping gives accurately scaled information about the real world: each x and y coordinate in this 2D grid is accurate, just scaled to a particular resolution. In contrast, an image captured by a camera is a perspective image, so there is no transformation possible from image coordinates to real-world coordinates.

2D SLAM is a technique that can quite easily be performed using lasers. Since you are a "newbie", I would suggest you use a Kinect, which is the cheapest multi-sensor available.

I am not saying that it is impossible to do 2D SLAM with a webcam, but it would require a lot of knowledge and effort.

2013-09-26 22:59:33 -0500 commented answer gmapping resolution

Can you check the following parameter values in your gmapping launch file?

2013-09-26 21:02:28 -0500 received badge  Supporter (source)