
krishna43's profile - activity

2020-09-30 05:31:00 -0500 received badge  Teacher (source)
2020-09-30 05:31:00 -0500 received badge  Necromancer (source)
2020-05-03 05:45:36 -0500 received badge  Famous Question (source)
2019-01-11 12:42:33 -0500 marked best answer realsense depthimage to laserscan

Hello all,

I recently got a RealSense R200 sensor. I was able to follow this tutorial and get data from the RealSense, but I also want laser scan data from it, so I followed this tutorial.

This is the command I used to convert depthimage to laserscan

rosrun depthimage_to_laserscan depthimage_to_laserscan image:=/camera/depth/image_raw camera_info:=/camera/depth/camera_info

I am able to see the scan data, but there are way too many NaN values in it. I tried to figure out why this was happening, but I could not. Can someone please help me?
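
A quick way to quantify the problem is to count the invalid ranges per scan. Below is a minimal rospy sketch, assuming the converted scan is published on /scan:

#!/usr/bin/env python
# Minimal sketch: report what fraction of each LaserScan's ranges are NaN/inf.
# Assumes the converted scan is published on /scan.
import math
import rospy
from sensor_msgs.msg import LaserScan

def callback(scan):
    total = len(scan.ranges)
    bad = sum(1 for r in scan.ranges if math.isnan(r) or math.isinf(r))
    rospy.loginfo("%d/%d ranges invalid (%.1f%%)", bad, total, 100.0 * bad / total)

if __name__ == '__main__':
    rospy.init_node('scan_nan_counter')
    rospy.Subscriber('/scan', LaserScan, callback)
    rospy.spin()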

2018-09-19 14:32:08 -0500 marked best answer conversion of PointCloud2 to pcl pointcloud using python

Hello ROS gurus,

I am working on python-pcl and point clouds. Is there a way to convert a sensor_msgs.msg/PointCloud2 message to a PCL point cloud using Python, so I can use PCL functions on the data?

Any help is appreciated.

Thanks
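
One common approach, sketched below under the assumption that the python-pcl bindings are installed, is to read the x, y, z fields out of the PointCloud2 with sensor_msgs.point_cloud2 and load them into a pcl.PointCloud:

import numpy as np
import pcl  # python-pcl bindings, assumed to be installed
import sensor_msgs.point_cloud2 as pc2

def ros_to_pcl(cloud_msg):
    # Pull x, y, z out of the PointCloud2, skipping NaN points.
    points = list(pc2.read_points(cloud_msg, field_names=("x", "y", "z"),
                                  skip_nans=True))
    # python-pcl expects an Nx3 float32 array.
    cloud = pcl.PointCloud()
    cloud.from_array(np.array(points, dtype=np.float32))
    return cloud

PCL operations such as voxel-grid downsampling or segmentation can then be applied to the returned cloud.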

2018-09-19 14:19:01 -0500 marked best answer velocity smoother

Hello,

I am working on my navigation project, where I am using a turtlebot, Ubuntu 14.04, and ROS Indigo. Everything is working fine, but sometimes my robot makes jerky movements, which I want to avoid. I came across velocity_smoother in ROS, but I am not sure how I can use it to smooth my robot's velocity. How can I integrate it into my launch file, or should I remap my velocity topics?

Can anyone help me with this?

Thanks, Sai Krishna Allani
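
A velocity smoother simply bounds how fast the commanded velocity may change between cycles. The real yocs_velocity_smoother node is configured through its launch file and topic remappings, but the core idea can be sketched in a few lines (topic names below are assumptions):

#!/usr/bin/env python
# Toy velocity-smoother sketch: clamp the change in linear velocity per command.
# Topic names are assumptions; use yocs_velocity_smoother on a real robot.
import rospy
from geometry_msgs.msg import Twist

MAX_DELTA = 0.05  # assumed limit on the change in m/s between commands
last = Twist()

def callback(cmd):
    global last
    step = cmd.linear.x - last.linear.x
    if abs(step) > MAX_DELTA:
        # Too big a jump: move toward the target velocity gradually.
        cmd.linear.x = last.linear.x + MAX_DELTA * (1 if step > 0 else -1)
    last = cmd
    pub.publish(cmd)

if __name__ == '__main__':
    rospy.init_node('toy_velocity_smoother')
    pub = rospy.Publisher('cmd_vel_smoothed', Twist, queue_size=1)
    rospy.Subscriber('cmd_vel_raw', Twist, callback)
    rospy.spin()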

2018-09-19 14:17:09 -0500 marked best answer How can i save data into bag file so that i can perform gmapping?

I am trying to save data into a bag file to run gmapping later. I don't want to do it through the command line using rosbag; rather, I want to write a Python script to save the data into the bag file.

I tried saving just LaserScan messages into a bag file by following this tutorial, and it worked like a charm. :)

But in order to simulate gmapping I have to save data from multiple topics, i.e. /tf and /scan, so I tried using message_filters to get data from both tf and scan, but it keeps throwing errors.

I think most people have saved data into a bag file before simulating gmapping. Can someone tell me how they did it?

Thanks.
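
For recording, message_filters is not actually needed: the rosbag Python API can write several topics into one bag as the messages arrive. A minimal sketch, with the output filename as an assumption:

#!/usr/bin/env python
# Minimal sketch: record /scan and /tf into one bag with the rosbag Python API.
# Each message is written as it arrives; no synchronization is required.
import rospy
import rosbag
from sensor_msgs.msg import LaserScan
from tf2_msgs.msg import TFMessage

bag = rosbag.Bag('gmapping_data.bag', 'w')  # output filename is an assumption

def make_writer(topic):
    def write(msg):
        bag.write(topic, msg, rospy.Time.now())
    return write

if __name__ == '__main__':
    rospy.init_node('bag_recorder')
    rospy.Subscriber('/scan', LaserScan, make_writer('/scan'))
    rospy.Subscriber('/tf', TFMessage, make_writer('/tf'))
    rospy.on_shutdown(bag.close)
    rospy.spin()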

2018-09-18 19:48:11 -0500 marked best answer ROS maps generation from pdf files

Hello all,

I have a blueprint of my building in a PDF file. How can I generate a map from this PDF so I can use it for navigation?

Thanks
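
map_server only needs a bitmap (e.g. PGM or PNG) plus a small YAML descriptor, so the usual route is to rasterize the PDF page first and then write the YAML by hand. A rough sketch assuming the pdf2image package is available; the resolution below is a placeholder that must be calibrated against the real building:

# Rough sketch: rasterize a PDF blueprint and write a map_server YAML for it.
# Assumes pdf2image (and poppler) is installed; filenames are placeholders.
from pdf2image import convert_from_path

page = convert_from_path('blueprint.pdf', dpi=150)[0]
page.convert('L').save('map.pgm')  # grayscale bitmap for map_server

# The resolution (metres per pixel) must be measured, not guessed.
with open('map.yaml', 'w') as f:
    f.write("image: map.pgm\n"
            "resolution: 0.05\n"
            "origin: [0.0, 0.0, 0.0]\n"
            "occupied_thresh: 0.65\n"
            "free_thresh: 0.196\n"
            "negate: 0\n")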

2018-09-14 17:42:08 -0500 marked best answer Trying to understand PointCloud2 msg

Hello,

Can someone please post a clear explanation of how to understand a PointCloud2 message? I want to get x, y, z points from the message, and I found a solution to do that, but I am not able to understand why there are so many numbers in the data field.

Can someone please explain?

Thanks.
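
In short, data is a packed byte buffer: the fields list gives each channel's name, byte offset, and datatype; point_step is the number of bytes per point and row_step per row. A sketch that decodes x, y, z by hand, assuming little-endian float32 fields (the common case):

import struct

def iter_xyz(cloud):
    # Byte offset of each coordinate, taken from the message's fields list.
    offsets = {f.name: f.offset for f in cloud.fields}
    for i in range(cloud.width * cloud.height):
        base = i * cloud.point_step
        # Assumes x, y, z are little-endian float32 (datatype 7).
        x = struct.unpack_from('<f', cloud.data, base + offsets['x'])[0]
        y = struct.unpack_from('<f', cloud.data, base + offsets['y'])[0]
        z = struct.unpack_from('<f', cloud.data, base + offsets['z'])[0]
        yield x, y, z

In practice, sensor_msgs.point_cloud2.read_points performs exactly this unpacking for you.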

2018-07-25 14:07:50 -0500 marked best answer Open question on autonomous navigation/mapping.

Hello everyone,

I am trying to solve a problem where I want my robot to go around our office space without hitting anything and create a map of the office that I can use in the future. One way to create a map is to teleoperate the robot, which is straightforward, but this process becomes very tedious if your office is very large. I am trying to automate the process.

The main difficulty I am facing is that I don't want my robot to go into any cubicles while mapping; I just want it to stay in the aisles. If I use LaserScan data and write a simple obstacle-avoidance algorithm, the robot goes into the cubicles and gets stuck, hits small objects, or disturbs the people who are working, and I don't want that to happen.

I thought I could use point cloud data to make my robot stay in the aisles and not go into the cubicles. At present, I am able to get point clouds from the Kinect and use a plane model segmentation algorithm to find planes.

But I have no idea what to do next, or how to use these planes to make the robot move/navigate so that it stays in the aisles.

Any ideas on how I can approach the problem?

Thanks.

2018-07-10 04:40:43 -0500 received badge  Famous Question (source)
2018-05-08 05:43:36 -0500 received badge  Notable Question (source)
2018-05-08 05:43:36 -0500 received badge  Famous Question (source)
2018-01-30 21:35:49 -0500 marked best answer hydro and indigo in parallel.

Hello all,

At present I have Ubuntu 14.04 with ROS Indigo on it. Can I run ROS Hydro alongside it? If yes, can anyone please tell me how to do it?

Thanks.

2018-01-25 11:09:36 -0500 received badge  Famous Question (source)
2018-01-08 16:52:31 -0500 received badge  Famous Question (source)
2017-12-31 13:10:04 -0500 received badge  Famous Question (source)
2017-11-23 06:38:06 -0500 received badge  Famous Question (source)
2017-11-04 11:51:27 -0500 received badge  Notable Question (source)
2017-10-26 08:03:41 -0500 received badge  Notable Question (source)
2017-10-26 08:03:41 -0500 received badge  Famous Question (source)
2017-10-18 15:55:26 -0500 commented question Trying to understand PointCloud2 msg

@jayess I need an explanation of the PointCloud2 data format. I got the result I want by blindly copying the code snippet, b…

2017-10-18 15:45:36 -0500 commented question Trying to understand PointCloud2 msg

https://answers.ros.org/question/202787/using-pointcloud2-data-getting-xy-points-in-python/ look at that question. tha…

2017-10-17 01:32:01 -0500 received badge  Popular Question (source)
2017-10-16 13:33:59 -0500 asked a question Trying to understand PointCloud2 msg

Trying to understand PointCloud2 msg Hello, Can someone please post a clear explanation of how to understand pointcloud…

2017-10-13 17:50:13 -0500 received badge  Notable Question (source)
2017-10-11 19:26:12 -0500 received badge  Popular Question (source)
2017-10-11 12:17:53 -0500 asked a question How to merge two laserscans?

How to merge two laserscans? Hello all, I have two sensors (Kinect and Asus) and I am able to get laserscan data from b…

2017-10-08 05:10:35 -0500 received badge  Notable Question (source)
2017-09-18 03:20:27 -0500 marked best answer How is odometry used in gmapping?

Hello,

I am working on gmapping to map a huge area. I have basic knowledge of ROS, i.e. how to publish, subscribe, and write launch files. Since I have to map a huge area I don't want to drive the robot manually, so I wrote a simple obstacle-avoidance routine to move around the space without hitting anything. While the robot moves I save the laser scan data and tf data into a bag file, and after that I follow this tutorial to make a map. As a trial run, I just made the robot go around a small cubicle (a rectangle in space) and mapped the environment offline. You can see the output below:

image description

As you can see, the map is REALLY REALLY bad. :(

I did some research and played with the gmapping parameters, and I was able to improve the map a little bit. Output after changing the gmapping parameters:

image description

As you can see, it improved a little bit. :) But I am not satisfied with the result. I want to make it better, so I kept on reading tutorials and watching videos on gmapping, but most of the information doesn't make sense to me, especially the tf frames. I have the following questions:

1) When I record data into a bag file I only record laserscan and tf. So how is odometry data taken into account when mapping the environment?

2) Is there any other high-level way (rather than playing with gmapping parameters) to improve gmapping?

3) Whenever I collect data I go around with my robot, and I can see that sometimes it bumps into things and makes sudden movements. I think these movements affect the quality of the data. Is there any way I can make my robot move more smoothly?

4) And what does map -> odom -> base_link mean? I know it's a transformation from one frame to another, but at a low level how is the map frame different from odom and base_link, and which frame is used in gmapping?

Can someone please throw some light on these questions?

P.S.: My understanding of odom and base_link is that odom drifts over time, but base_link doesn't. Please correct me if I am wrong.

Thanks.
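
One note on question 1: gmapping does not subscribe to an odometry topic at all; it reads the odom -> base_link transform out of /tf, so recording /tf is what captures the odometry. A small sketch for checking that a recorded bag actually contains that transform (the bag filename is an assumption):

import rosbag

# Look through the recorded /tf stream for the odom -> base_link transform
# that gmapping relies on. The bag filename is an assumption.
found = False
with rosbag.Bag('gmapping_data.bag') as bag:
    for _, msg, _ in bag.read_messages(topics=['/tf']):
        for t in msg.transforms:
            if 'odom' in t.header.frame_id and 'base' in t.child_frame_id:
                print('found %s -> %s' % (t.header.frame_id, t.child_frame_id))
                found = True
                break
        if found:
            break
if not found:
    print('no odom -> base_link transform recorded')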

2017-09-18 03:18:44 -0500 marked best answer help with leg_detection

Hello,

I am working on leg_detection using this. I am using Ubuntu 14.04, ROS Indigo, and a turtlebot. Whenever I run:

roslaunch leg_detector leg_detector.launch

I am getting the following error:

[ WARN] [1466630966.002926837]: MessageFilter [target=odom_combined ]: Dropped 100.00% of messages so far. Please turn the [ros.leg_detector.message_notifier] rosconsole logger to DEBUG for more information.

I went through the old questions related to this error and changed my fixed_frame from:

odom_combined to odom

When I changed my fixed frame and ran:

rostopic echo /camera_tracker_measurements

it outputs nothing. Can someone help me with this?

Thanks.
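
A MessageFilter dropping 100% of its messages usually means the fixed frame never appears in tf. A quick sketch for checking whether the transform exists at all (frame names are assumptions; adjust them to your robot):

#!/usr/bin/env python
# Quick check: is the transform the filter needs actually being published?
# Frame names are assumptions; adjust them to your robot.
import rospy
import tf

if __name__ == '__main__':
    rospy.init_node('tf_frame_check')
    listener = tf.TransformListener()
    try:
        listener.waitForTransform('odom', 'base_link', rospy.Time(0),
                                  rospy.Duration(5.0))
        rospy.loginfo('odom -> base_link is available')
    except tf.Exception as e:
        rospy.logwarn('transform not available: %s', e)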

2017-09-03 14:06:25 -0500 received badge  Popular Question (source)
2017-09-01 15:37:29 -0500 commented question realsense depthimage to laserscan

When you guys moved to the TK1, were you still using the depthimage_to_laserscan node to get the scan data?

2017-09-01 15:00:46 -0500 commented question realsense depthimage to laserscan

Oh wow! I thought it was a problem with the depthimage_to_laserscan code I am using. Thanks.

2017-09-01 14:54:35 -0500 commented question realsense depthimage to laserscan

How did you solve it? @jayess

2017-09-01 12:44:53 -0500 asked a question realsense depthimage to laserscan

realsense depthimage to laserscan Hello all, I recently got a realsense R200 sensor. I was able to follow this tutorial…

2017-08-08 17:18:29 -0500 received badge  Famous Question (source)
2017-04-03 14:15:25 -0500 received badge  Notable Question (source)
2017-04-03 14:15:25 -0500 received badge  Popular Question (source)
2017-03-02 12:14:34 -0500 asked a question How to use your custom map for turtlebot navigation in simulation?

Hello all,

I have built a custom map of my building and I want to use it in simulation (RViz) to test my code. In simple words, I want something like turtlebot_stage's turtlebot_in_stage.launch, but with my map and without Stage.

I am trying to do this, but I am getting a lot of errors. This is another question I have posted regarding the errors I am facing when I launch my custom launch file.

This is my launch file

<launch>
<arg name="base"       value="$(optenv TURTLEBOT_BASE kobuki)"/>  <!-- create, rhoomba -->
  <arg name="stacks"     value="$(optenv TURTLEBOT_STACKS hexagons)"/>  <!-- circles, hexagons -->
  <arg name="3d_sensor"  value="$(optenv TURTLEBOT_3D_SENSOR kinect)"/>  <!-- kinect, asus_xtion_pro -->

  <include file="$(find turtlebot_bringup)/launch/includes/robot.launch.xml">
    <arg name="base" value="$(arg base)" />
    <arg name="stacks" value="$(arg stacks)" />
    <arg name="3d_sensor" value="$(arg 3d_sensor)" />
  </include>

  <node name="joint_state_publisher" pkg="joint_state_publisher" type="joint_state_publisher">
    <param name="use_gui" value="true"/>
  </node>
  <node pkg="tf" type="static_transform_publisher" name="base_footprint_to_map" args="0 0 0 0 0 0 /map /base_footprint 100"/>

  <node name="rviz1" pkg="rviz" type="rviz" args="-d $(find turtlebot_rviz_launchers)/rviz/navigation.rviz"/>
</launch>

I am also serving the map with a map_server node, and running amcl with rosrun amcl amcl.

In the terminal where I ran the amcl node, I get an error saying no messages were received on the /scan topic. So I am not sure how to publish simulated /scan messages.

Has anyone used a custom map in simulation? (I am sure a lot of people have.) If yes, can you please help me?

Thanks, Sai Krishna Allani
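
On the missing /scan: without Stage or Gazebo nothing simulates the laser, so amcl gets no input. One way to exercise the rest of the pipeline is a throwaway fake scan publisher; every value below (topic, frame id, ranges) is an assumption:

#!/usr/bin/env python
# Throwaway fake laser for testing amcl without a simulator.
# Topic, frame id, and range values are all assumptions.
import rospy
from sensor_msgs.msg import LaserScan

if __name__ == '__main__':
    rospy.init_node('fake_scan')
    pub = rospy.Publisher('/scan', LaserScan, queue_size=1)
    rate = rospy.Rate(10)
    while not rospy.is_shutdown():
        scan = LaserScan()
        scan.header.stamp = rospy.Time.now()
        scan.header.frame_id = 'base_scan'
        scan.angle_min, scan.angle_max = -1.57, 1.57
        scan.angle_increment = 3.14 / 180.0
        scan.range_min, scan.range_max = 0.1, 10.0
        scan.ranges = [5.0] * 181  # flat fake "wall" 5 m away
        pub.publish(scan)
        rate.sleep()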

2017-03-02 11:45:33 -0500 commented question not able to display robot model

@AA To test the navigation code I have written. It looks like everything is correct, but the robot is not moving when I give it a goal. I know it has something to do with TF, but I don't know how to solve this. Can you throw some light on this?

2017-03-02 11:44:19 -0500 commented question not able to display robot model

@AA When I added the static tf I am able to see the robot model, but when I give it a goal pose using RViz it doesn't move (I am running amcl). This is what I am trying to solve: I have a map of my building and I want to bring it up in RViz (just like turtlebot_stage but without Stage) and (1/2)

2017-03-02 09:18:54 -0500 received badge  Popular Question (source)
2017-03-01 17:58:16 -0500 asked a question not able to display robot model

Hello all,

I am trying to display the robot model in RViz and I always get the following error:

No transform from [base_footprint] to [map]

This is my launch file

<!-- 
  Turtlebot navigation simulation:
  - stage
  - map_server
  - move_base
  - static map
  - amcl
  - rviz view
 -->
<launch>
  <arg name="base"       default="$(optenv TURTLEBOT_BASE kobuki)"/>  <!-- create, rhoomba -->
  <arg name="stacks"     default="$(optenv TURTLEBOT_STACKS hexagons)"/>  <!-- circles, hexagons -->
  <arg name="3d_sensor"  default="$(optenv TURTLEBOT_3D_SENSOR kinect)"/>  <!-- kinect, asus_xtion_pro -->

  <!-- Name of the map to use (without path nor extension) and initial position -->
  <arg name="map_file"       default="$(find hp_simulators)/maps/map.yaml"/> <!-- robopark_plan -->
  <arg name="initial_pose_x" default="2.0"/>
  <arg name="initial_pose_y" default="2.0"/>
  <arg name="initial_pose_a" default="0.0"/>

  <param name="/use_sim_time" value="true"/>
   <param name="robot_description" command="cat $(find hp_autonomous_navigation)/robots/turtlebot.urdf"/>

   <!--  ***************** Robot Model *****************  -->
  <include file="$(find turtlebot_bringup)/launch/includes/robot.launch.xml">
    <arg name="base" value="$(arg base)" />
    <arg name="stacks" value="$(arg stacks)" />
    <arg name="3d_sensor" value="$(arg 3d_sensor)" />
  </include>
  <node name="joint_state_publisher" pkg="joint_state_publisher" type="joint_state_publisher">
    <param name="use_gui" value="true"/>
  </node>

  <!-- Command Velocity multiplexer -->
  <node pkg="nodelet" type="nodelet" name="mobile_base_nodelet_manager" args="manager"/>
  <node pkg="nodelet" type="nodelet" name="cmd_vel_mux" args="load yocs_cmd_vel_mux/CmdVelMuxNodelet mobile_base_nodelet_manager">
    <param name="yaml_cfg_file" value="$(find turtlebot_bringup)/param/mux.yaml"/>
    <remap from="cmd_vel_mux/output" to="mobile_base/commands/velocity"/>
  </node>

  <!--  ************** Navigation  ***************  -->
  <include file="$(find turtlebot_navigation)/launch/includes/move_base.launch.xml"/>

  <!--  ****** Maps ***** -->
  <node name="map_server" pkg="map_server" type="map_server" args="$(arg map_file)">
    <param name="frame_id" value="/map"/>
  </node>  
</launch>

Can anyone tell me what's wrong with my launch file?

Thanks.
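
That error means nothing publishes a complete transform chain from map down to base_footprint: amcl only provides map -> odom once it receives laser scans, and nothing in this launch file provides odom -> base_footprint. For pure visualization, one stopgap is to broadcast the missing transform yourself (frame names are assumptions):

#!/usr/bin/env python
# Stopgap for visualization only: broadcast a fixed map -> base_footprint
# transform so RViz can render the robot model. Frame names are assumptions.
import rospy
import tf

if __name__ == '__main__':
    rospy.init_node('fake_map_to_base')
    br = tf.TransformBroadcaster()
    rate = rospy.Rate(20)
    while not rospy.is_shutdown():
        # sendTransform(translation, rotation, time, child_frame, parent_frame)
        br.sendTransform((0.0, 0.0, 0.0), (0.0, 0.0, 0.0, 1.0),
                         rospy.Time.now(), 'base_footprint', 'map')
        rate.sleep()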