2019-10-21 14:09:22 -0500 | received badge | ● Famous Question (source) |
2019-02-20 15:24:33 -0500 | received badge | ● Notable Question (source) |
2018-07-04 15:24:44 -0500 | commented answer | Naming log folder That sounds like it will work. Seems a shame that there isn't a more elegant (in-built) solution as it's quite an obvious…
2018-07-04 15:23:09 -0500 | marked best answer | Naming log folder Hi everyone; Is there any way to specify in my python program the name of the folder to which logs for respective nodes are saved? As in, when I launch my experiment via … My question is whether I can programmatically set the name of the folder…
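The accepted answer's code isn't preserved in this preview, but one common way to name the log folder programmatically is to point ROS at a custom directory through the `ROS_LOG_DIR` environment variable before the nodes start. A minimal sketch (the helper name is hypothetical):

```python
import os

def env_with_log_dir(log_dir):
    """Return a copy of the current environment with ROS_LOG_DIR overridden.

    ROS_LOG_DIR is the standard environment variable ROS consults when
    deciding where to write node logs (instead of the default ~/.ros/log).
    Pass the result as the env= argument when spawning roslaunch, e.g.
    subprocess.Popen(["roslaunch", "pkg", "exp.launch"], env=env).
    """
    env = os.environ.copy()
    env["ROS_LOG_DIR"] = log_dir  # e.g. "/tmp/experiment_42"
    return env
```

This keeps the naming logic in the Python program while leaving roslaunch itself unmodified.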
2018-07-04 14:58:01 -0500 | received badge | ● Popular Question (source) |
2018-07-04 05:10:33 -0500 | edited question | Naming log folder Hi everyone; Is there any way to specify in my python program the name of the folder to which log…
2018-07-04 04:52:53 -0500 | asked a question | Naming log folder Hi everyone; Is there any way to specify in my python program the name of the folder to which log…
2018-07-01 21:49:40 -0500 | received badge | ● Taxonomist |
2018-05-02 05:05:50 -0500 | marked best answer | Simulated 3-axis camera gimbal for UAV in gazebo Hi there, I'm currently trying to simulate a "perfect" (no friction) 3-axis camera gimbal attached to the bottom of a UAV. I'm using the hector quadrotor gazebo/ROS project to try and achieve this. The ultimate goal is that the simulated image plane stays continuously perpendicular to the world plane. To achieve this, I've attached a simulated camera to 3 orthogonal, continuous joints via links with no length (see the URDF/xacro below). However, the image feed produced is still as if the camera were rigidly affixed to the UAV. I'm fairly new to URDF/xacro/etc. so please excuse me if this is just a basic misunderstanding on my part! Here's the gimbal URDF/xacro file I've created: Which is instantiated in the following file: (more)
2018-04-11 10:59:03 -0500 | received badge | ● Famous Question (source) |
2018-04-11 10:59:03 -0500 | received badge | ● Notable Question (source) |
2017-09-04 03:48:17 -0500 | received badge | ● Popular Question (source) |
2017-09-01 08:43:55 -0500 | asked a question | Simulated 3-axis camera gimbal for UAV in gazebo Hi there, I'm currently trying to simulate a "perfect" (no friction)…
2016-09-19 16:54:20 -0500 | received badge | ● Famous Question (source) |
2016-08-08 04:45:28 -0500 | received badge | ● Notable Question (source) |
2016-08-04 05:27:15 -0500 | answered a question | py-faster-rcnn network detection runs on CPU through ROS callback function Solved the issue by having the callback function change a global variable which is checked by the main loop/thread as follows: A bit hacky but it does the job. |
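The actual snippet isn't preserved in this preview, but the pattern described — the ROS callback only flags new data while the main thread performs the GPU work — can be sketched as follows (class and method names are illustrative):

```python
import threading

class DetectionNode:
    """Sketch of the fix above: the rospy subscriber callback stores the
    latest message and raises a flag, and the heavy CNN detection runs in
    the main thread. This keeps GPU work in the thread that initialised
    the CUDA context, avoiding the CPU-fallback behaviour seen when
    detection is invoked directly from the callback thread."""

    def __init__(self):
        self.latest_msg = None
        self.new_data = False
        self.lock = threading.Lock()

    def callback(self, msg):
        # Runs in the rospy spinner thread: do no heavy work here.
        with self.lock:
            self.latest_msg = msg
            self.new_data = True

    def poll(self):
        # Runs in the main loop: returns the pending message, or None.
        with self.lock:
            if not self.new_data:
                return None
            self.new_data = False
            return self.latest_msg
```

The main loop would then look something like `while not rospy.is_shutdown(): msg = node.poll(); if msg is not None: run_detection(msg)`.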
2016-08-04 05:23:53 -0500 | commented question | Callback with GPU processing You can ignore my comment/question, I completely forgot that you can do the following |
2016-08-04 04:21:13 -0500 | commented question | Callback with GPU processing Are you achieving this in Python? I'd like to implement the same but I'm not sure how to check a global variable in the main thread once…
2016-08-04 04:13:25 -0500 | received badge | ● Popular Question (source) |
2016-08-03 09:56:38 -0500 | commented question | Callback with GPU processing Hello, did you ever find a solution to this problem? I'm experiencing the same issue.
2016-08-03 06:48:09 -0500 | received badge | ● Editor (source) |
2016-08-03 06:45:58 -0500 | asked a question | py-faster-rcnn network detection runs on CPU through ROS callback function Hi there, I'm trying to test my trained py-faster-rcnn network for object detection through ROS. I have a node running with many similarities to demo.py. Included in the class are commands to execute the network detection on the GPU (given below); however, looking at the detection times (~26s) and the system profiler, it's fairly clear that the network is running on the CPU. Is there a way to get around this? Interestingly, if I run demo.py normally (not through ROS), it executes on the GPU in ~2s. UPDATE: after some more detective work, it appears to be an issue with calling CNN detection from within a callback function. Any suggestions? (Other people seem to be experiencing a similar problem - Callback with GPU processing) Thanks, Will
2015-07-29 02:57:36 -0500 | received badge | ● Famous Question (source) |
2015-07-07 03:24:02 -0500 | received badge | ● Enthusiast |
2015-07-06 09:23:28 -0500 | commented answer | CvBridge conversion problem: Asus Xtion depth image to OpenCV Works really well, thanks! |
2015-07-06 03:31:12 -0500 | received badge | ● Scholar (source) |
2015-07-06 03:30:12 -0500 | commented answer | CvBridge conversion problem: Asus Xtion depth image to OpenCV "encoding: 32FC1" for the Xtion |
2015-07-04 13:34:44 -0500 | received badge | ● Notable Question (source) |
2015-07-03 05:21:47 -0500 | received badge | ● Popular Question (source) |
2015-07-03 03:43:15 -0500 | commented answer | CvBridge conversion problem: Asus Xtion depth image to OpenCV Thanks! Unfortunately neither suggestion worked (32FC1 resulted in the same black-and-white image; 16UC1 was just completely black…)
2015-07-02 07:52:16 -0500 | commented answer | Format of depth value openni2_camera Thanks! This fixed the issues I've been having.
2015-07-02 06:31:51 -0500 | asked a question | CvBridge conversion problem: Asus Xtion depth image to OpenCV Hi, I have an Asus Xtion sensor being driven by the openni2_launch package. I subscribe to the "/camera/depth/image" topic and attempt to convert the ImageConstPtr& data type to an OpenCV Mat in a callback function utilising the cv_bridge package. Visualising the "/camera/depth/image" ROS topic in RQT, I get a high quality depth image as expected: However, I can't seem to find an image type encoding (e.g. "8UC1") which OpenCV/cv_bridge accepts that results in an image similar to the one above. As an example, with the encoding "32FC1", this is the type of quality I get. I'm not entirely sure what I'm doing wrong! And here's the source code Thanks in advance!
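The screenshots and source code aren't preserved here, but the usual recipe (consistent with the "encoding: 32FC1" comment above) is to request the depth image from cv_bridge with `desired_encoding="32FC1"`, which yields a float image in metres, and then scale it for display; showing the raw float image directly is what produces the washed-out or black result described. A sketch of the scaling step, assuming a 5 m usable range:

```python
import numpy as np

def depth_to_display(depth_m, max_range=5.0):
    """Convert a 32FC1 depth image (metres, NaN where the sensor has no
    reading) into an 8-bit image suitable for cv2.imshow.

    depth_m would come from e.g.
        bridge.imgmsg_to_cv2(msg, desired_encoding="32FC1")
    """
    depth = np.nan_to_num(depth_m, nan=0.0)   # invalid pixels -> 0
    depth = np.clip(depth, 0.0, max_range)    # limit to the usable range
    return (depth / max_range * 255.0).astype(np.uint8)
```

The 5 m figure is an assumption about the Xtion's working range; adjust `max_range` to whatever span of depths the scene actually contains to maximise display contrast.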
2015-06-24 07:00:50 -0500 | received badge | ● Supporter (source) |