get world coordinates from image coordinates
hi everyone, I know someone may have asked this question before, but I couldn't find an answer in this forum.
I have a calibrated virtual single camera in Gazebo 8 and ROS Kinetic. After applying some OpenCV filters, an object is detected at x = 79.0, y = 23.0, w = 28.0, h = 127.0 in image coordinates, and this info is published to the /robot0/object topic.
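(The message type on /robot0/object isn't shown in the question; a minimal sketch of how such a detection could be published, assuming a std_msgs/Float32MultiArray carrying [x, y, w, h], might look like this:)

import rospy
from std_msgs.msg import Float32MultiArray

# Hypothetical publisher for the detection; Float32MultiArray is only an
# assumption, the real node's message type is not given in the question.
pub = rospy.Publisher('/robot0/object', Float32MultiArray, queue_size=1)
msg = Float32MultiArray()
msg.data = [79.0, 23.0, 28.0, 127.0]   # x, y, w, h of the bounding box
pub.publish(msg)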
I'm using the image_geometry package to get the position of that object in the Gazebo simulated world. I also know the position of that object, in this case x = 245, y = 269. I'm using the gmapping package, and the map size is set to 500 x 500.
This is my code that gets the object info from the /robot0/object topic and tries to estimate the object position in the Gazebo simulated world:
def get_object(array):
    global img_geo, robot_odometry
    # Pixel at the bottom centre of the detected bounding box [x, y, w, h]
    u = array.data[0] + (array.data[2] // 2)
    v = array.data[1] + array.data[3]
    time = rospy.Time(0)
    listener = tf.listener.TransformListener()
    # Rectify the pixel and project it to a unit 3D ray in the camera frame
    camera_point = img_geo.projectPixelTo3dRay(img_geo.rectifyPoint((u, v)))
    point_msg.pose.position.x = camera_point[0] + robot_odometry.pose.pose.position.x
    point_msg.pose.position.y = camera_point[1] + robot_odometry.pose.pose.position.y
    point_msg.pose.position.z = 0
    point_msg.pose.orientation.x = 0
    point_msg.pose.orientation.y = 0
    point_msg.pose.orientation.z = 0
    point_msg.pose.orientation.w = 1
    point_msg.header.frame_id = img_geo.tfFrame()
    point_msg.header.stamp = time
    try:
        # Transform the pose from the camera frame into the map frame
        listener.waitForTransform(img_geo.tfFrame(), 'map', time, rospy.Duration(1))
        tf_point = listener.transformPose('map', point_msg)
        print(convert_from_robot_to_map(tf_point.pose.position.y, tf_point.pose.position.x))
    except Exception:
        pass


def convert_from_robot_to_map(robot_y, robot_x):
    global map_info
    # Convert world coordinates (metres) into occupancy grid cell indexes
    map_x = (robot_x - map_info.info.origin.position.x) / map_info.info.resolution
    map_y = (robot_y - map_info.info.origin.position.y) / map_info.info.resolution
    return map_y, map_x
But unfortunately the printed result isn't correct. The printed info (print(convert_from_robot_to_map(...))) is y = 250, x = 249, which is almost the robot's position in the map (the robot position in this case is x = 250, y = 250). The camera_point obtained from projecting the pixel to a 3D ray in this case is (-0.23228537548412242, 0.0925648488771315, 0.9682330572174006), and the tf_point values are (-2.3449200226706126, 0.910935376894018).
I also passed my camera info to img_geo.fromCameraInfo().
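For reference, this is roughly how img_geo would be set up, assuming a PinholeCameraModel fed by the camera's CameraInfo topic (the topic name '/robot0/camera/camera_info' is just a placeholder):

import rospy
from sensor_msgs.msg import CameraInfo
from image_geometry import PinholeCameraModel

img_geo = PinholeCameraModel()

def camera_info_callback(msg):
    # Load the calibration (K, D, P, frame_id, ...) into the model
    img_geo.fromCameraInfo(msg)

# Guessed topic name for this setup; adjust to the actual camera namespace
rospy.Subscriber('/robot0/camera/camera_info', CameraInfo, camera_info_callback)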
What is wrong in this code? What should I do to get the correct coordinates of the object in the Gazebo world?
Asked by ashkan_abd on 2018-10-11 16:18:07 UTC
Comments
I think you misunderstand something here. When you check the map size, you get (500, 500), which means you have 500 cells (or pixels; they are squares whose side equals the resolution) along the width and 500 cells along the height. So if resolution is 0.01 (1 cm), your map width is 5 m and its height is 5 m.
Asked by Delb on 2018-10-12 02:23:19 UTC
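(For concreteness, with gmapping's default resolution of 0.05 m per cell, an assumed value since the question doesn't state the delta parameter, a 500 x 500 grid covers 25 m x 25 m:)

width_cells, height_cells = 500, 500
resolution = 0.05                       # metres per cell (assumed value)
print(width_cells * resolution,         # 25.0 m map width
      height_cells * resolution)        # 25.0 m map height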
So your robot coordinates aren't (245, 269); it means that the robot is on the 245th cell along the width and the 269th cell along the height. The true coordinates of your robot are (245*resolution, 269*resolution). In your function you think you return map_x/y, but in reality you return cell indexes.
Asked by Delb on 2018-10-12 02:26:49 UTC
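A small sketch of both directions of the conversion Delb describes, using the same map_info (a nav_msgs/OccupancyGrid) as in the question's code:

def world_to_cell(world_x, world_y, map_info):
    # World metres -> grid cell indexes (what convert_from_robot_to_map returns)
    res = map_info.info.resolution
    cell_x = int((world_x - map_info.info.origin.position.x) / res)
    cell_y = int((world_y - map_info.info.origin.position.y) / res)
    return cell_x, cell_y

def cell_to_world(cell_x, cell_y, map_info):
    # Grid cell indexes -> world metres (centre of the cell)
    res = map_info.info.resolution
    world_x = map_info.info.origin.position.x + (cell_x + 0.5) * res
    world_y = map_info.info.origin.position.y + (cell_y + 0.5) * res
    return world_x, world_y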
Hi @Delb, thanks for replying. (245, 269) is my object's position in the map, and I found these coordinates by monitoring gmapping. I know my convert_from_robot_to_map function gives me the cell indexes of the position passed to it. How can I get these indexes for the object from the code above?
Asked by ashkan_abd on 2018-10-12 09:35:36 UTC