
Sudeep's profile - activity

2019-08-14 03:28:34 -0500 received badge  Nice Answer (source)
2018-05-08 13:27:05 -0500 received badge  Nice Answer (source)
2016-06-14 20:51:40 -0500 received badge  Guru (source)
2016-06-14 20:51:40 -0500 received badge  Great Answer (source)
2016-03-13 05:05:04 -0500 commented answer robot_localization/ekf_localization when working with namespaces

As mentioned in the answer, the major changes would be to odom0, imu0, odom_frame, base_link_frame, and world_frame. @Omer

2016-03-13 05:01:56 -0500 commented answer help installing pcl on ROS Indigo

Sorry, my bad, I thought he could install it from source instead of via rosdep.

2016-03-12 13:01:00 -0500 answered a question rosdep init failed

This may happen when the network needs particular environment variables (e.g., a proxy) to be set. Try sudo -E rosdep init, which keeps your environment variables when running under sudo.
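
A minimal sketch of that pattern, assuming the network needs a proxy (the proxy address here is a placeholder):

export http_proxy=http://proxy.example.com:8080
export https_proxy=http://proxy.example.com:8080
sudo -E rosdep init   # -E preserves these environment variables for the root shell
rosdep update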

2016-03-12 12:55:32 -0500 answered a question help installing pcl on ROS Indigo

One way is to install it from source.

I assume that you want to use a newer version of PCL (one that was not installed with ROS). You would need to install PCL from source: download the desired PCL release from this link, then compile it while setting the USE_ROS option to true.

mkdir build
cd build
cmake .. -DUSE_ROS=1   # build PCL with ROS support enabled
make -j4               # compile first
sudo make install

Next, include the standalone PCL directories in your package's CMakeLists.txt using find_package(PCL 1.7 REQUIRED) and include_directories(${PCL_INCLUDE_DIRS}), and link the executable with target_link_libraries(exec_name ${catkin_LIBRARIES} ${PCL_LIBRARIES}); see the sketch below.
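
A minimal CMakeLists.txt sketch along those lines (the package name, source file, and executable name are placeholders):

cmake_minimum_required(VERSION 2.8.3)
project(my_pcl_package)

find_package(catkin REQUIRED COMPONENTS roscpp sensor_msgs)
find_package(PCL 1.7 REQUIRED)   # picks up the standalone PCL install

catkin_package()

include_directories(${catkin_INCLUDE_DIRS} ${PCL_INCLUDE_DIRS})
link_directories(${PCL_LIBRARY_DIRS})
add_definitions(${PCL_DEFINITIONS})

add_executable(exec_name src/exec_name.cpp)
target_link_libraries(exec_name ${catkin_LIBRARIES} ${PCL_LIBRARIES})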

hope this helps

2016-03-12 12:45:04 -0500 answered a question perception_pcl compile issue

Hey,

I assume that you want to use a newer version of PCL (one that was not installed with ROS). You would need to install PCL from source: download the desired PCL release from this link, then compile it while setting the USE_ROS option to true.

mkdir build
cd build
cmake .. -DUSE_ROS=1   # build PCL with ROS support enabled
make -j4               # compile first
sudo make install

Another way is to include the standalone PCL directories in your package's CMakeLists.txt using find_package(PCL 1.7 REQUIRED) and include_directories(${PCL_INCLUDE_DIRS}), and to link the executable with target_link_libraries(exec_name ${catkin_LIBRARIES} ${PCL_LIBRARIES}).

hope this helps

2016-03-12 12:16:46 -0500 answered a question robot_localization/ekf_localization when working with namespaces

Hi Omer,

What I understand from your question is that you want to launch two instances of robot_localization for two different robots by launching them under different namespaces.

To launch the robot_localization node under a namespace you can use the <group> tag, the ns attribute, or pass the __ns argument (args="__ns=name_space"). After this your launch file would include:

<node ns="bot1" pkg="robot_localization" type="ekf_localization_node" name="ekf_localization">
  <rosparam command="load" file="$(find jackal_control)/config/robot_localization_bot1.yaml" />
</node>   
<node ns="bot2" pkg="robot_localization" type="ekf_localization_node" name="ekf_localization">
 <rosparam command="load" file="$(find jackal_control)/config/robot_localization_bot2.yaml" />
</node>

After this you would need to modify your YAML files accordingly:

frequency: 50
odom0: /jackal_velocity_controller/odom   # odom source for the respective robot
odom0_config: [false, false, false,
               false, false, false,
               true,  true,  true,
               false, false, true,
               false, false, false]
odom0_differential: false
imu0: /imu/data   # IMU source for the respective robot
imu0_config: [false, false, false,
              true,  true,  true,
              false, false, false,
              true,  true,  true,
              false, false, false]
imu0_differential: false
odom_frame: odom            # odom frame for the robot (this can be the same for both)
base_link_frame: base_link  # body frame of the robot (this should be different for the two robots)
world_frame: odom           # again, the same is fine

Make sure that you publish static transforms between the IMU frame and base_link for both robots; an example is shown below.
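
For example, in a launch file a static IMU-to-base_link transform for one of the robots could be published like this (the frame names and the zero offsets are placeholders):

<node pkg="tf" type="static_transform_publisher" name="imu_broadcaster_bot1"
      args="0 0 0 0 0 0 bot1/base_link bot1/imu_link 100" />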

Hope this helps.

2015-07-03 09:04:25 -0500 commented answer unable to extract images from bag file

Run rosbag play bagfile.bag in another terminal window.

2015-07-03 02:44:56 -0500 answered a question unable to extract images from bag file

There are several ways to do this. The best and simplest one is to write your own package for this, as per your requirement. Here you need two exactly synchronized images at any given moment, so subscribe to both topics using message_filters (see section 7.2). Once you are in the callback, convert both images to cv::Mat and save them using imwrite("filename", mat); this way you can be sure about their timestamps. A minimal sketch is given below.
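
A minimal sketch of such a node, assuming the topics are /stereo/left/image_raw and /stereo/right/image_raw:

#include <ros/ros.h>
#include <message_filters/subscriber.h>
#include <message_filters/time_synchronizer.h>
#include <sensor_msgs/Image.h>
#include <cv_bridge/cv_bridge.h>
#include <opencv2/highgui/highgui.hpp>
#include <boost/bind.hpp>
#include <sstream>

// Save every exactly-synchronized left/right pair to disk.
void callback(const sensor_msgs::ImageConstPtr& left,
              const sensor_msgs::ImageConstPtr& right)
{
  cv::Mat l = cv_bridge::toCvCopy(left,  "bgr8")->image;
  cv::Mat r = cv_bridge::toCvCopy(right, "bgr8")->image;
  std::ostringstream stamp;
  stamp << left->header.stamp.toNSec();
  cv::imwrite("left_"  + stamp.str() + ".png", l);   // PNG is lossless, unlike JPEG
  cv::imwrite("right_" + stamp.str() + ".png", r);
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "stereo_pair_saver");
  ros::NodeHandle nh;
  message_filters::Subscriber<sensor_msgs::Image> left_sub(nh,  "/stereo/left/image_raw",  1);
  message_filters::Subscriber<sensor_msgs::Image> right_sub(nh, "/stereo/right/image_raw", 1);
  // Exact-time synchronizer: the callback fires only when both stamps match.
  message_filters::TimeSynchronizer<sensor_msgs::Image, sensor_msgs::Image> sync(left_sub, right_sub, 10);
  sync.registerCallback(boost::bind(&callback, _1, _2));
  ros::spin();
  return 0;
}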

Another way is to use the image_view package, which comes with image_pipeline.

rosrun image_view extract_images _sec_per_frame:=0.01 image:=(image_topic_name)

You will have to run this node for both topics separately, and make sure that both times you get the same number of images (in your case 171 images); otherwise they are not proper stereo pairs. One issue here is that this node saves the images in JPEG format, which can introduce some data loss due to compression.

2015-06-03 04:21:55 -0500 commented answer ROS stereo disparity issues

Hi, publish them under your namespace as /(your_namespace)/left/camera_info and /(your_namespace)/right/camera_info as camera_info msgs (link).

2015-06-03 04:18:26 -0500 commented answer viso2 stereo disparity

But it will not improve the performance of viso2; viso2 computes its own sparse, feature-based correspondences. :P

2015-06-03 04:17:08 -0500 commented answer viso2 stereo disparity

@^^ If you are not planning to mount the stereo camera on a fast-moving robot like a car, then approximate_sync is fine. Set the minimum disparity to 0 and the disparity range to 128. To get a better disparity map, use the SGBM algorithm instead of BM (the one you are currently using); this will give you a denser reconstruction.

2015-06-01 03:05:14 -0500 received badge  Good Answer (source)
2015-06-01 01:11:02 -0500 received badge  Nice Answer (source)
2015-06-01 00:44:22 -0500 answered a question ROS stereo disparity issues

Hi,

It seems that your cameras are not calibrated properly. Any stereo matching code assumes that the epipolar lines are horizontal in the left and right images, which means that if you take any point (u, v) in the left image, the corresponding point should lie on the same row, i.e., at (u1, v). To make sure this happens, you need to calibrate the cameras; once you calibrate them, you will get the parameters M1, M2 (camera matrices), R, T, P1, P2 (projection matrices), etc. With these parameters you then rectify the input images using image_proc.

If you have already done the calibration, redo it and make sure that your calibration and rectification are accurate. If the calibration/rectification fails again, then use OpenCV's calibration procedure (it produces slightly different results than ROS).

To check the rectification, save the rectified images and run a SIFT matching code on the stereo pairs; plot the lines between matched points in the two images, and these lines should come out straight and perfectly horizontal.

Links:
1. ROS calibration: wiki.ros.org/camera_calibration
2. OpenCV calibration: http://docs.opencv.org/modules/calib3...
3. Plot SIFT matches: http://www.mathworks.com/matlabcentra...
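
For reference, a typical stereo calibration invocation from the camera_calibration tutorial looks roughly like this (the checkerboard size, square size, and topic names are assumptions for your setup):

rosrun camera_calibration cameracalibrator.py --size 8x6 --square 0.108 \
    left:=/stereo/left/image_raw right:=/stereo/right/image_raw \
    left_camera:=/stereo/left right_camera:=/stereo/right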

Tell me in case of any doubt.

Hope I helped, Sudeep

2015-06-01 00:30:52 -0500 answered a question Implemention and doubt regarding base controller for a custom robot

Hi, if you have code which can generate motor commands (PWM for left and right) then there is no need to use a base_controller. The available base_controller uses a PID approach to generate these commands efficiently.
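
Purely as an illustration of that PID idea (this is not the actual base_controller code; the gains and PWM limits are placeholders):

#include <algorithm>
#include <cstdio>

// One PID step: turn a wheel-velocity error into a PWM command.
double pidStep(double target_vel, double measured_vel, double dt,
               double& integral, double& prev_error)
{
  const double kp = 1.0, ki = 0.1, kd = 0.01;   // placeholder gains
  double error = target_vel - measured_vel;
  integral += error * dt;
  double derivative = (error - prev_error) / dt;
  prev_error = error;
  double pwm = kp * error + ki * integral + kd * derivative;
  return std::min(255.0, std::max(-255.0, pwm));   // clamp to the PWM range
}

int main()
{
  double integral = 0.0, prev_error = 0.0;
  // One control step: target 0.5 m/s, measured 0.3 m/s, 20 ms loop.
  std::printf("PWM command: %f\n", pidStep(0.5, 0.3, 0.02, integral, prev_error));
  return 0;
}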

Hope I helped, Sudeep

2015-06-01 00:12:51 -0500 answered a question viso2 stereo disparity

Hi,

Does the distance between the cameras matter much?

Yes, it does matter. As you increase the baseline B, your ability to reconstruct distant points increases. For indoor use the baseline should be around ~12-13 cm (this depends on the camera lenses (zoom level) and the focal length too); for outdoor use take 25 to 30 cm.

Is this setup good enough?

The setup is good enough. Make sure that the cameras provide data at the same time; there should not be more than 30 ms of delay between the two camera (left/right) frames (again, this depends on how fast the camera moves).

The above are my calibration results. Is that fine?

Your calibration looks fine, and so does the disparity map.

Are the disparity maps ok? Any suggestions on improving them? Is it okay to have grey background?

You are getting grey parts in the background because the algorithm is not able to find disparity values for that region. If you want to improve it, you should use SGBM (again, speed is the deciding factor here).
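
As a rough illustration of SGBM outside the stereo_image_proc pipeline, a minimal OpenCV 3 sketch (the file names and parameter values are placeholders):

#include <opencv2/calib3d.hpp>
#include <opencv2/imgcodecs.hpp>

int main()
{
  // Rectified left/right images (placeholder file names).
  cv::Mat left  = cv::imread("left_rect.png",  cv::IMREAD_GRAYSCALE);
  cv::Mat right = cv::imread("right_rect.png", cv::IMREAD_GRAYSCALE);

  // min disparity 0, 128 disparity levels, block size 5.
  cv::Ptr<cv::StereoSGBM> sgbm = cv::StereoSGBM::create(0, 128, 5);

  cv::Mat disparity;
  sgbm->compute(left, right, disparity);   // 16-bit fixed-point disparity (scaled by 16)
  cv::imwrite("disparity.png", disparity);
  return 0;
}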

I didn't get any reprojection error as mentioned in this post. It just showed ? under epi. Is that okay? Around 180 samples were used in calibration.

After the calibration, when you bring your board into the frame, the '?' will be replaced by the reprojection error. Try it again :) And 180 samples are more than enough; usually 20 samples will do the job :P

Now the actual question,

Viso2 is a very good algorithm for pose estimation; it computes sparse features to get the changes in pose. The code is meant for high-resolution, wide-baseline cameras and for outdoor use, though it should work on low-resolution cameras too. Your TF tree and other system-level things are fine as far as I can see, but the code is not able to find a sufficient number of feature matches; that is the reason for the "odom lost" message.

Try running this package outdoors once and see how the performance is; the number of matches will improve. Make sure that you experiment with this package in an environment where the number of features is high. Try not to perform in-place rotations of the camera; in that case the visual odometry algorithm might not work properly.

Hope I helped, Sudeep

2015-05-07 17:19:38 -0500 received badge  Guru (source)
2015-05-07 17:19:38 -0500 received badge  Great Answer (source)
2014-05-27 22:21:02 -0500 received badge  Enlightened (source)
2014-05-27 05:17:55 -0500 commented answer How can I remap topic cloud_in to my sensor's PointCloud2 ???

It would be better if you could post more error details. Where is this XML file located? Post a new question! :)

2014-05-27 05:13:12 -0500 commented answer How to understand the depth of a pixel with the kinect

Luke, the code is a bit unclear. First, if depth is a vector (std::vector<float> depth) then use depth.push_back((float)depth_value), and also update the value of call_count while printing. It should work.

2014-05-26 06:58:21 -0500 answered a question how to extract a topic data from a text file using matlab?

This will help: http://www.cs.utah.edu/~germain/PPS/T... For this I don't think ROS is required!

2014-05-26 06:41:49 -0500 answered a question hokuyo_node. Why does Fixed Frame [laser] does not exist?

Hi, I faced a similar issue and couldn't find any permanent solution (I think the issue is with rviz). Here is a temporary one: if /map is set as your default fixed frame, then publish a static transformation between /map and /laser. You can use rosrun tf static_transform_publisher x y z yaw pitch roll frame_id child_frame_id period_in_ms.

The idea is to publish a transformation between the default frame and the sensor frame; it's fine if it's not static!
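
For example, with an identity transform and a 100 ms publishing period:

rosrun tf static_transform_publisher 0 0 0 0 0 0 /map /laser 100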

NOTE: But you wrote that you are launching the navigation stack; the transformation between /map and /laser should already be getting published in that case. THIS IS A TEMPORARY SOLUTION (works for me) :P

2014-05-26 06:32:28 -0500 commented answer How to understand the depth of a pixel with the kinect

This should work; can you attach your function to get depth? Is the RGB image part working?

2014-05-26 04:15:02 -0500 received badge  Good Answer (source)
2014-05-26 01:16:02 -0500 commented answer How to understand the depth of a pixel with the kinect

If you subscribe to the point cloud topic with ros::Subscriber sub = nh.subscribe<pcl::PointCloud<pcl::PointXYZRGB> >("/points", 1, callback); then in the callback use a point cloud pointer: void callback(const pcl::PointCloud<pcl::PointXYZRGB>::ConstPtr& input). This will work :)

2014-05-26 01:16:02 -0500 received badge  Commentator
2014-05-25 02:05:41 -0500 received badge  Citizen Patrol (source)
2014-05-24 05:55:35 -0500 answered a question How can I remap topic cloud_in to my sensor's PointCloud2 ???

You can remap your PointCloud2 input topic to cloud_in by including the following line in your launch file.

E.g., if you are using this file, octomap_mapping/octomap_server/launch/octomap_mapping.launch, write:

  <remap from="cloud_in" to="/(point cloud topic) />

inside the <node> ... </node> element.

(RGBD-SLAM is also a nice option for mapping with the Kinect.)

2014-05-24 05:45:14 -0500 received badge  Nice Answer (source)
2014-05-24 01:39:05 -0500 answered a question How to understand the depth of a pixel with the kinect

Hi luke,

To get an RGB image from the cloud data:

cv::Mat imageFrame;
if (cloud->isOrganized())
{
    imageFrame = cv::Mat(cloud->height, cloud->width, CV_8UC3);

    for (int h = 0; h < imageFrame.rows; h++)
    {
        for (int w = 0; w < imageFrame.cols; w++)
        {
            pcl::PointXYZRGBA point = cloud->at(w, h);

            Eigen::Vector3i rgb = point.getRGBVector3i();

            // PCL stores RGB, OpenCV expects BGR, so swap the channels.
            imageFrame.at<cv::Vec3b>(h, w)[0] = rgb[2];
            imageFrame.at<cv::Vec3b>(h, w)[1] = rgb[1];
            imageFrame.at<cv::Vec3b>(h, w)[2] = rgb[0];
        }
    }
}

But first you might need to convert the cloud data into a PCL-supported form; for this you can use:

pcl::PointCloud<pcl::PointXYZRGBA>::Ptr cloud (new pcl::PointCloud<pcl::PointXYZRGBA>);

pcl::fromROSMsg (*input, *cloud);

Once you get the image, you can apply your detection algorithm using the OpenCV libraries.

To get the depth of a point (x, y) in the image:

int i = x + y * cloud->width;              // index into the organized cloud
float depth = (float)cloud->points[i].z;   // z value of that point (depth in metres)

Hope this helped :)

2014-05-18 03:02:06 -0500 commented question Sending people_msgs from Kinect

Hi, have you found any robust solution? It would be great if you could share :)

2014-05-16 16:08:37 -0500 commented question Navigation: Can I make the robot rotate around another point?

Have you solved this? I am looking for something similar to this! :)

2014-05-16 15:40:46 -0500 answered a question Depth image from a Disparity image

Hi, as you are looking for the depth of each point, you can use the nodelets of the stereo_image_proc package. They compute PointCloud2 and PointCloud messages, and from these the depth of each point can be obtained (look at the PCL examples for this, and at the sketch below).
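
A minimal sketch of reading the depth out of the organized cloud published by stereo_image_proc (the topic name /stereo/points2 is an assumption for your setup):

#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>
#include <pcl_conversions/pcl_conversions.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <opencv2/core/core.hpp>

void cloudCallback(const sensor_msgs::PointCloud2ConstPtr& msg)
{
  pcl::PointCloud<pcl::PointXYZ> cloud;
  pcl::fromROSMsg(*msg, cloud);

  if (!cloud.isOrganized())
    return;   // stereo_image_proc publishes an organized (width x height) cloud

  // Build a depth image where each pixel holds the z value of the corresponding 3D point.
  cv::Mat depth(cloud.height, cloud.width, CV_32FC1);
  for (unsigned int v = 0; v < cloud.height; ++v)
    for (unsigned int u = 0; u < cloud.width; ++u)
      depth.at<float>(v, u) = cloud.at(u, v).z;   // NaN where no disparity was found
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "depth_from_points");
  ros::NodeHandle nh;
  ros::Subscriber sub = nh.subscribe("/stereo/points2", 1, cloudCallback);
  ros::spin();
  return 0;
}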

Link: look at points 5.2/5.3, stereo_image_proc/point_cloud2, here.

-Sudeep

2014-05-16 10:10:55 -0500 received badge  Nice Answer (source)
2014-05-16 10:07:15 -0500 received badge  Necromancer (source)
2014-05-16 10:07:03 -0500 received badge  Nice Answer (source)
2014-05-16 10:06:48 -0500 received badge  Nice Answer (source)
2014-05-16 10:03:43 -0500 answered a question SBPL Lattice planner goal problem

Hi, if you feel that's the issue, then you can increase the value of the goal tolerance in this file:

navigation_experimental/sbpl_lattice_planner/launch/move_base/base_local_planner_params_close.yaml

Also try changing other parameters: This tutorial will help.

-Sudeep

2014-05-14 11:27:26 -0500 received badge  Critic (source)