2023-03-26 14:53:09 -0500 | received badge | ● Taxonomist |
2019-03-09 00:14:40 -0500 | received badge | ● Nice Answer (source) |
2018-07-23 07:34:40 -0500 | marked best answer | Transform between /openni_depth_frame and /camera_link Hi, I'm a bit confused since I'm new to ROS. I'm using the openni_tracker and ar_kinect packages together. Rviz complains that it can't transform from any of the skeleton topics (e.g. /head1) to /camera_depth_optical_frame (which is the fixed frame). So I'm wondering: how do I transform from /openni_depth_frame to /camera_link? /openni_depth_frame is the root of the tf tree for the skeleton produced by openni_tracker, and /camera_link is the root of the tf tree for ar_kinect. This is the output from "rosrun tf view_frames" so you can see the transforms that are running. Any help would be much appreciated :) James
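A common way to connect the two tf trees described in the question above is to publish a static transform between their roots. A minimal launch-file sketch, assuming the two frames are physically coincident (the zero offsets and the node name are placeholders; adjust the x y z yaw pitch roll arguments to match the real sensor mounting):

```xml
<!-- Hypothetical snippet: bridges the openni_tracker tree into the ar_kinect
     tree by publishing a fixed camera_link -> openni_depth_frame transform
     every 100 ms. The zero offsets are placeholders, not measured values. -->
<launch>
  <node pkg="tf" type="static_transform_publisher" name="openni_to_camera_link"
        args="0 0 0 0 0 0 camera_link openni_depth_frame 100" />
</launch>
```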
2015-02-25 16:08:22 -0500 | received badge | ● Famous Question (source) |
2014-12-28 23:25:00 -0500 | received badge | ● Notable Question (source) |
2014-11-03 04:14:20 -0500 | received badge | ● Famous Question (source) |
2014-10-28 18:46:26 -0500 | received badge | ● Critic (source) |
2014-10-06 21:23:26 -0500 | commented answer | Kinect openni_launch TF2 exception warning No worries :) |
2014-10-06 17:55:20 -0500 | answered a question | Kinect openni_launch TF2 exception warning It's because the launch file in openni_launch hasn't been updated to remove the leading slashes from the camera's frame_ids. You can edit openni.launch and remove the "/" from "/camera_rgb_optical_frame" (I can't remember exactly, but I think that launch file actually includes another file with a whole lot of parameters, which is the one you want to edit).
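A sketch of the fix described in the answer above: the frame_id arguments in the openni_launch include file should not start with "/", since tf2 rejects frame names with a leading slash. The exact argument names here are assumptions and may differ between openni_launch versions:

```xml
<!-- Illustrative: frame_id arguments with the leading "/" removed.
     Argument names are a guess at the openni_launch include-file layout. -->
<arg name="rgb_frame_id" default="camera_rgb_optical_frame" />
<arg name="depth_frame_id" default="camera_depth_optical_frame" />
```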
2014-09-03 10:23:37 -0500 | commented question | Projecting image coordinates into world coordinates So I'm going to try just with K, which has the variables for the raw images that the SoftKinetic driver provides...
2014-09-03 10:23:04 -0500 | commented question | Projecting image coordinates into world coordinates Hey Corot! Thanks for your answer. Yeah I was using the equations that you would use to rectify an image (I think) by using the variables from P in camera info, but it always seemed like the transformed (u,v) points didn't match up correctly with the point cloud. |
2014-09-01 07:11:47 -0500 | received badge | ● Popular Question (source) |
2014-08-31 08:43:00 -0500 | asked a question | Projecting image coordinates into world coordinates Hi, I'm using a Creative Senz3D for human-robot interaction perception tasks. I want to project an image point (u,v) into world coordinates, but where there is no corresponding point in the point cloud. E.g. the sensed pixel may be further away than the range of the depth sensor, but not the color camera. So I'm using these equations, where DEPTH is the estimated depth of the point. But I'm not sure whether I should be using the unrectified image or the rectified image, and whether I should be using cx, cy, fx and fy from K or P in CameraInfo. I'm guessing that if I use the rectified image then the image points will be distorted away from their corresponding depth points... And then if I did cloud.at(u,v) on a point that did have a depth value, it might not actually find the point. Thanks Jamie
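The back-projection the question above refers to can be sketched with the standard pinhole model. This is a minimal sketch, not the Senz3D driver's API: the function name is made up, and the intrinsics in the example are illustrative values, not from a real CameraInfo message (use K for raw images, P for rectified ones):

```python
def project_pixel_to_3d(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) at an estimated depth (metres) into 3D
    camera-frame coordinates using the pinhole camera model.

    fx, fy, cx, cy are the camera intrinsics from CameraInfo
    (K for raw images, P for rectified images).
    """
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# Example with made-up intrinsics: a pixel near the optical centre,
# assumed to lie 2 m from the camera.
X, Y, Z = project_pixel_to_3d(320.0, 240.0, 2.0,
                              fx=525.0, fy=525.0, cx=319.5, cy=239.5)
```

Note the point at the optical centre (u = cx, v = cy) maps to (0, 0, depth), i.e. straight down the optical axis, which is a quick sanity check for the intrinsics you pick.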
2014-08-28 03:52:36 -0500 | commented answer | Cannot locate launch node of type + Can't locate node Yeah, I think it's a bit confusing too :S
2014-08-22 11:46:17 -0500 | received badge | ● Notable Question (source) |
2014-08-22 06:49:13 -0500 | received badge | ● Commentator |
2014-08-22 06:49:13 -0500 | commented answer | Creative senz3d urdf or measurements I'll change the camera xacro link to whatever the official repo becomes :P
2014-08-22 06:48:14 -0500 | commented answer | Creative senz3d urdf or measurements Here's how I'm parametrising the xacro for the camera now if you want to see: https://github.com/jdddog/zoidstein_r... |
2014-08-22 04:20:38 -0500 | received badge | ● Popular Question (source) |
2014-08-22 03:41:08 -0500 | commented answer | Creative senz3d urdf or measurements Yeah that's a great idea, it would make it a lot easier if you could just depend on a package and then link to the xacro of the sensor you were using. |
2014-08-22 03:19:53 -0500 | commented answer | Creative senz3d urdf or measurements Awesome! Thanks for answering so fast :) |
2014-08-22 02:40:02 -0500 | asked a question | Creative senz3d urdf or measurements Hi, has anyone made a URDF for the Creative Senz3D, or does anyone know where to find one? I'm looking for measurements that describe the relationship between the different frames of the Creative Senz3D. It's also known as the SoftKinetic DS325V2_M. Here's an example of a URDF file describing the Kinect's frames of reference: Thanks! Jamie
2014-08-01 05:02:35 -0500 | received badge | ● Famous Question (source) |
2014-07-10 18:12:34 -0500 | received badge | ● Famous Question (source) |
2014-07-09 16:53:10 -0500 | received badge | ● Student (source) |
2014-07-09 16:52:14 -0500 | received badge | ● Notable Question (source) |
2014-07-09 13:40:08 -0500 | received badge | ● Popular Question (source) |
2014-07-09 03:07:03 -0500 | asked a question | Package 'ros-indigo-urdfdom-headers' has no installation candidate Hi, I'm having trouble installing ROS Indigo on Ubuntu 14.04. I've followed all of the instructions on this page, but when I run sudo apt-get install ros-indigo-desktop-full I get the following errors: I've narrowed the problem down: it won't install because it can't find the package ros-indigo-urdfdom-headers. Is there a solution to this? Thanks Jamie
2014-05-09 22:47:53 -0500 | commented answer | leg_detector and kinect You're welcome Ken! |
2014-05-04 20:16:53 -0500 | received badge | ● Enthusiast |
2014-05-02 18:46:40 -0500 | received badge | ● Teacher (source) |
2014-05-02 18:46:40 -0500 | received badge | ● Necromancer (source) |
2014-05-02 17:59:29 -0500 | commented answer | leg_detector and kinect No worries, did you clone the entire repository? e.g. git clone https://github.com/jdddog/people.git And make the stack (not just one package): rosmake people It also needs to be in your rosbuild workspace because I haven't had time to port it to catkin.
2014-04-28 17:25:38 -0500 | answered a question | leg_detector and kinect Hi Grega I just ported the leg_detector2 package to work with Hydro and the Kinect: https://github.com/jdddog/people Thanks Jamie |
2014-04-20 13:29:43 -0500 | marked best answer | Nao inverse kinematics Hi, what options do I have in ROS for controlling a Nao robot's body parts using inverse kinematics (e.g. its arms)? Or would I need to write a wrapper so that ROS can use Naoqi's inverse kinematics solver? Kind Regards Jamie