2022-01-21 01:38:54 -0500 | received badge | ● Good Question (source) |
2021-08-19 04:53:59 -0500 | received badge | ● Nice Question (source) |
2020-11-04 16:11:45 -0500 | received badge | ● Good Question (source) |
2020-06-02 04:19:25 -0500 | marked best answer | undefined reference to `ros::console::initializeLogLocation()' in turtlebot_arm_block_manipulation package Hi, I added What should I add in CMakeLists.txt, in catkin_package, or in package.xml? I use ROS Hydro. Thanks |
2020-06-02 04:19:25 -0500 | received badge | ● Self-Learner (source) |
2018-10-31 01:27:43 -0500 | received badge | ● Nice Answer (source) |
2018-10-31 01:27:13 -0500 | received badge | ● Nice Question (source) |
2018-07-12 09:18:27 -0500 | received badge | ● Taxonomist |
2018-05-21 15:47:43 -0500 | marked best answer | How to transform a PointStamped from one frame to another? Hi, I get a PointStamped in a specific frame and I want to transform it into coordinates in my base_link frame. I tried to use tf::Transformer::transformPoint. Is that correct? I still get errors when trying to use it because of the input types. So I used tf::pointStampedMsgToTF, but it still does not compile. What is wrong? Thanks! |
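For context on the question above, a minimal numpy sketch of the math that tf's transformPoint performs, assuming a known rotation R and translation t from the source frame into base_link (the rotation and translation values here are hypothetical; in ROS you would let tf::TransformListener::transformPoint look them up from the TF tree instead of hard-coding them):

```python
import numpy as np

def transform_point(R, t, p):
    """Map a point p (expressed in the source frame) into the target
    frame, given a 3x3 rotation matrix R and a translation vector t."""
    return R @ np.asarray(p, dtype=float) + np.asarray(t, dtype=float)

# Example: source frame rotated 90 degrees about Z and shifted 1 m along X
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([1.0, 0.0, 0.0])
print(transform_point(R, t, [1.0, 0.0, 0.0]))  # -> approximately [1. 1. 0.]
```

In actual tf code the compile errors usually come from mixing the message type (geometry_msgs::PointStamped) with the tf type (tf::Stamped&lt;tf::Point&gt;); transformPoint has overloads for both, but the input and output arguments must be of the same family.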
2018-05-21 15:47:43 -0500 | received badge | ● Self-Learner (source) |
2017-07-11 02:05:16 -0500 | received badge | ● Famous Question (source) |
2016-09-20 05:20:34 -0500 | received badge | ● Famous Question (source) |
2016-08-17 10:22:07 -0500 | received badge | ● Famous Question (source) |
2016-07-21 07:48:31 -0500 | commented answer | Unable to connect camera Basler TOF with pylon_camera yes, the camera was selected with the correct ID that I set. See: |
2016-07-21 06:48:09 -0500 | commented answer | Unable to connect camera Basler TOF with pylon_camera I get this message (GetDeviceDiscoveryInfo: bFunctionClass = 14, bFunctionSubClass = 3, bFunctionProtocol = 0. Device is not an U3V device.) while running the Pylon viewer too, even when I am connected to the Basler TOF camera |
2016-07-21 06:46:16 -0500 | commented answer | Unable to connect camera Basler TOF with pylon_camera Really? So why did it change with the MTU size? I still don't see the output in the PylonViewerApp... |
2016-07-21 06:23:12 -0500 | commented question | Unable to connect camera Basler TOF with pylon_camera It is a Basler camera, Time of Flight, Es-ToF (is that what you were asking?) |
2016-07-21 06:20:18 -0500 | received badge | ● Notable Question (source) |
2016-07-21 06:18:11 -0500 | commented question | Unable to connect camera Basler TOF with pylon_camera Hi, I updated the question. I had an error of MTU |
2016-07-21 05:48:12 -0500 | received badge | ● Popular Question (source) |
2016-07-21 03:26:21 -0500 | commented question | Unable to connect camera Basler TOF with pylon_camera Hi, thanks for your answer. I set the device ID in the parameters, but I get the error that I posted in my question |
2016-07-21 02:38:54 -0500 | asked a question | Unable to connect camera Basler TOF with pylon_camera Hi, I am using a Basler camera with an Ethernet cable and changed the IP of the camera. In the pylon SDK, I have to set the IP of the camera to find it. How can I do that in the ROS driver pylon_camera? Here is what I get when I don't set the IP: Does "Device is not an U3V device" mean that the driver works only with a USB connection? When I run /opt/pylon5/bin/PylonViewerApp to see the output of the camera, I only get a black screen, and the lights on the Basler camera don't light up. Any idea? What am I doing wrong? EDIT My MTU was too small, so the jumbo frames didn't get through. I increased my MTU to 8192 and now I get a window without an image in the PylonViewerApp, and the lights on the Basler camera still don't light up. In the terminal I still get: The output of the Grab is: It seems that the image is received somewhere, right? When trying to launch the ROS driver, I now get: |
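For reference, the MTU change mentioned in the edit above can be applied (until the next reboot) with the standard `ip` tool; `eth0` is a placeholder for whichever interface the camera is attached to:

```shell
# Raise the MTU so the camera's jumbo frames are not dropped
# (replace eth0 with the interface connected to the camera)
sudo ip link set eth0 mtu 8192

# Verify the new setting
ip link show eth0 | grep mtu
```

To make the change persistent, the MTU has to be set in the network configuration of your distribution as well.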
2016-07-10 02:26:34 -0500 | answered a question | Can't set up keys - server times out Hi, I had the same issue; it was due to the router's security settings. Changing networks helped :-) Hope this helps |
2016-06-09 20:38:04 -0500 | received badge | ● Notable Question (source) |
2016-05-08 16:07:45 -0500 | marked best answer | adding "good" IMU data causes trouble for robot_pose_ekf Hi, I am using a MMP for navigation and mapping. I ran into trouble while rotating with move_base and debugged the robot_pose_ekf data. Here are my strange results: Can the error come from the move_base parameters? I checked the frames; they look fine too. Here is husky_ekf.yaml (the GPS is disabled): And here is the launch file: |
2016-04-14 21:56:15 -0500 | received badge | ● Popular Question (source) |
2016-04-14 21:56:15 -0500 | received badge | ● Notable Question (source) |
2016-04-04 08:09:27 -0500 | commented answer | TF can't look up transform |
2016-03-10 04:08:23 -0500 | received badge | ● Popular Question (source) |
2016-03-08 13:54:02 -0500 | received badge | ● Notable Question (source) |
2016-03-08 13:54:02 -0500 | received badge | ● Famous Question (source) |
2016-02-22 02:27:11 -0500 | commented answer | catkin_make fails weirdly after trying to create a new package Hi, I use ROS Indigo too and I have the same problem, but image-geometry is installed on my computer. It appears only when I declare the node as a C++ library (not as an executable). Any idea? |
2016-02-15 07:44:51 -0500 | asked a question | publish disparity from depth - Kinect Hi, I want to publish disparity from the depth topic of type sensor_msgs/Image and the camera info topic of type sensor_msgs/CameraInfo, using the nodelet depth_image_proc/disparity. I then want to save it as a PNG file. When I launch the nodelet depth_image_proc/disparity and the disparity_view, I get only an empty window. Here is the launch file: The depth and the camera info are good. Any idea why I don't see anything in the disparity_view? I want to save the disparity as a PNG file and use it with OpenCV in Python. Thanks! |
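On the second part of the question above (saving the disparity as a PNG for OpenCV in Python): stereo_msgs/DisparityImage carries a 32-bit float image, so the values have to be scaled to an integer range before writing a PNG. A minimal sketch of that scaling using only numpy; the actual save via cv2.imwrite is shown as a comment since the message plumbing (e.g. through cv_bridge) depends on your setup:

```python
import numpy as np

def disparity_to_uint8(disp, min_d, max_d):
    """Scale a float32 disparity image into [0, 255] for PNG export.
    min_d and max_d correspond to the DisparityImage min_disparity
    and max_disparity fields."""
    disp = np.clip(np.asarray(disp, dtype=np.float32), min_d, max_d)
    scaled = (disp - min_d) / max(max_d - min_d, 1e-6) * 255.0
    return scaled.astype(np.uint8)

# Hypothetical 2x2 disparity image with values between 0 and 64
disp = np.array([[0.0, 32.0], [64.0, 64.0]], dtype=np.float32)
img8 = disparity_to_uint8(disp, 0.0, 64.0)
# import cv2; cv2.imwrite('disparity.png', img8)  # assumes OpenCV is installed
```

An empty disparity_view window is often a topic-remapping issue rather than a conversion issue; checking `rostopic hz` on the nodelet's output topic tells you whether anything is being published at all.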
2016-02-09 10:19:47 -0500 | asked a question | kinect2 depth calculation of 0 - luminosity Hi, I get a depth of 0 when an object is at a distance of 0 to 1 meter; the value then grows to "234" or "326" up to a distance of 120 cm and then stays at about "234" or "326". I get this kind of result with any depth topic from kinect2_bridge. Does anyone get this type of error? What is it due to? The depth from the Kinect 2 is inverted (when visualizing with kinect2_bridge and also in Protonect). Thanks! EDIT: When repeating the test with different luminosity (a darker environment), I get a depth of 0 from about 0 to 50 cm (out of range?), then the depth increases from 0 to about 300 between 50 cm and 1.5 m, then stays at about the same value (out of range?). Any idea? Was it only due to luminosity? What is the optimal luminosity to work with the Kinect 2? Thanks |
2016-02-09 10:11:36 -0500 | answered a question | How to interpret depth data from sensor_msgs/Image Hi, Here is a link to the documentation of sensor_msgs/Image. About the 16UC1 encoding: 16U means the value is coded as a 16-bit unsigned integer per channel, and C1 means there is one channel. Then convert your values to hexadecimal and see the table with some 16U colors at this link. |
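As an illustration of the answer above, the raw byte buffer of a 16UC1 sensor_msgs/Image can be decoded with numpy: each pixel is one unsigned 16-bit value, with byte order given by the message's is_bigendian field. For Kinect-style depth images the value is commonly the depth in millimeters, but that unit is a driver convention, not part of the encoding:

```python
import numpy as np

def decode_16uc1(data, height, width, is_bigendian=False):
    """Decode the raw data buffer of a 16UC1 sensor_msgs/Image
    into a (height, width) uint16 array."""
    dtype = np.dtype(np.uint16).newbyteorder('>' if is_bigendian else '<')
    return np.frombuffer(bytes(data), dtype=dtype).reshape(height, width)

# Two pixels: 0x01F4 = 500 and 0x03E8 = 1000 (0.5 m and 1.0 m, if in mm)
raw = (500).to_bytes(2, 'little') + (1000).to_bytes(2, 'little')
depth = decode_16uc1(raw, 1, 2)
print(depth)  # -> [[ 500 1000]]
```

With cv_bridge the same result comes from `imgmsg_to_cv2(msg, '16UC1')`, which performs this decoding for you.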
2015-12-09 10:49:50 -0500 | received badge | ● Famous Question (source) |
2015-12-09 10:49:50 -0500 | received badge | ● Notable Question (source) |
2015-10-05 05:10:50 -0500 | received badge | ● Famous Question (source) |
2015-08-24 05:01:25 -0500 | received badge | ● Famous Question (source) |