
swethmandava's profile - activity

2023-06-09 08:05:38 -0500 received badge  Good Answer (source)
2023-06-09 08:05:38 -0500 received badge  Enlightened (source)
2022-07-22 05:30:29 -0500 received badge  Nice Answer (source)
2020-03-07 22:38:28 -0500 received badge  Necromancer (source)
2019-04-23 01:14:23 -0500 received badge  Famous Question (source)
2018-07-19 10:25:06 -0500 received badge  Self-Learner (source)
2017-11-29 06:35:57 -0500 received badge  Famous Question (source)
2017-07-26 03:23:42 -0500 received badge  Notable Question (source)
2017-07-25 16:49:12 -0500 answered a question [Octomap] Not updating the map in real time?

If you're publishing the updated point cloud, can you manually call the function in octomap to convert point cloud to oc…

2017-07-25 16:43:24 -0500 answered a question PCD to Octree

First I used pcl to publish a point cloud from the PCD file and then used octomap to get the octree: <launch> <no…

2017-07-25 16:12:06 -0500 commented answer dynamicEDTOctomap.h: No such file or directory

I was following the tutorial given here http://wiki.ros.org/mallasrikanth/octomap but I think that is for an older version…

2017-07-25 16:07:01 -0500 marked best answer Collision detection using point clouds

Hi all,

I'm presently using the dynamicEDT3D library to compute the distance from a point to its closest obstacle. If this distance is greater than a certain clearance, I say that the point is collision free for the robot. However, say I'm planning to use a robot of dimensions 4 x 2 x 0.5 m: my clearance would then have to be 4 m, which is far too conservative. How can I collision-check efficiently while taking the robot's dimensions into account? I'm guessing that intersecting the point cloud with a point cloud of the robot's shape is not the most efficient way.

Any and all suggestions are welcome. Thank you for your time!

2017-07-25 16:06:57 -0500 answered a question Collision detection using point clouds

Distance transforms with multiple resolutions in each axis might be the fastest and simplest solution to this problem. H…
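
A minimal sketch of that idea, assuming the occupancy data has been rasterised into a dense numpy grid; the grid contents, resolution and robot half-extents below are invented example values, and scipy's distance transform stands in for whichever implementation you actually use:

import numpy as np
from scipy.ndimage import distance_transform_edt

resolution = 0.1                             # grid cell size in metres (example value)
half_extents = np.array([2.0, 1.0, 0.25])    # half of a 4 x 2 x 0.5 m robot

# occupancy: True where a cell is occupied (would come from the octomap)
occupancy = np.zeros((200, 200, 50), dtype=bool)
occupancy[100:110, 80:90, 10:20] = True      # a made-up obstacle block

# Distance transform of the free space.  'sampling' scales each axis so that a
# single threshold corresponds to an ellipsoid whose semi-axes are sqrt(3) times
# the robot half-extents; that ellipsoid encloses the whole robot box, so the
# check stays conservative while being much tighter than one 4 m sphere.
sampling = resolution / (np.sqrt(3.0) * half_extents)
scaled_dist = distance_transform_edt(~occupancy, sampling=sampling)

def is_collision_free(point_xyz):
    """Conservative check: the robot box centred at point_xyz misses all
    occupied cells if the scaled clearance is at least 1."""
    idx = tuple(np.round(np.asarray(point_xyz) / resolution).astype(int))
    return scaled_dist[idx] >= 1.0

print(is_collision_free((5.0, 5.0, 2.5)))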

2017-07-25 16:05:01 -0500 commented answer Collision detection using point clouds

Thank you, I checked it out and it seems like a perfect solution for a complicated robot structure. Since I'm only using…

2017-07-20 01:40:45 -0500 received badge  Popular Question (source)
2017-07-19 14:11:28 -0500 asked a question Collision detection using point clouds

Collision detection using point clouds Hi all, I'm presently using dynamic edt3d library to compute the closest obstacl…

2017-07-19 14:05:31 -0500 received badge  Notable Question (source)
2017-07-19 14:05:31 -0500 received badge  Popular Question (source)
2017-02-28 11:46:06 -0500 asked a question dynamicEDTOctomap.h: No such file or directory

I installed the dynamicedt3d library using sudo apt-get install ros-indigo-dynamicedt3d and included it in my CMakeLists.txt as:

set(DYNAMICEDT3D_LIBRARIES "/opt/ros/indigo/lib/libdynamicedt3d.so")
include_directories( ${DYNAMICEDT3D_LIBRARIES} )
add_executable(get_occupancy src/get_occupancy.cpp)
target_link_libraries(get_occupancy $(DYNAMICEDT3D_LIBRARIES))

But when I run catkin_make, I still get the error:

fatal error: dynamicEDTOctomap.h: No such file or directory
 #include <dynamicEDTOctomap.h>

Can somebody help me figure this out?

2016-12-14 10:54:12 -0500 marked best answer Communicate between my python library and ros

I have a Python class called primitive_surfaces that uses the script primitive_shapes.py to find waypoints. Initially I had a main that would instantiate a primitive_surfaces object with the required arguments and print out the waypoints. I now want to reuse this code and publish the waypoints variable using ROS. How can I communicate between non-ROS Python scripts and ROS?

Apologies if this question has been asked many times; I'm fairly new to ROS and couldn't seem to find anything relevant. Thanks :)
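
The usual pattern is to keep the library free of ROS code and write a thin node that imports it and publishes whatever it computes. Below is a minimal sketch of such a wrapper; the import path, constructor arguments, get_waypoints() accessor and (x, y, z) waypoint format are assumptions for illustration, not part of the actual library:

#!/usr/bin/env python
# Thin ROS wrapper: all ROS-specific code lives here, while the
# primitive_surfaces class stays importable on its own.
import rospy
from geometry_msgs.msg import Pose, PoseArray

# Hypothetical import path and accessor -- substitute whatever your
# primitive_shapes.py actually defines.
from primitive_shapes import primitive_surfaces


def main():
    rospy.init_node('waypoint_publisher')
    pub = rospy.Publisher('waypoints', PoseArray, queue_size=1)

    surfaces = primitive_surfaces()            # plain Python, no ROS inside
    waypoints = surfaces.get_waypoints()       # hypothetical accessor

    rate = rospy.Rate(1)
    while not rospy.is_shutdown():
        msg = PoseArray()
        msg.header.frame_id = 'surface'
        msg.header.stamp = rospy.Time.now()
        for wp in waypoints:                   # assuming each wp is (x, y, z)
            pose = Pose()
            pose.position.x, pose.position.y, pose.position.z = wp[:3]
            pose.orientation.w = 1.0
            msg.poses.append(pose)
        pub.publish(msg)
        rate.sleep()


if __name__ == '__main__':
    main()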

2016-12-14 10:24:58 -0500 asked a question headers not exported from package to be used in another package

I'm trying to use a library from the package localizablity in another package my_package, but I get an undefined reference error, so I'm guessing there is a linking problem. Also, the include folder in devel is empty. I have followed many tutorials, but I seem to be missing something in the CMakeLists.txt attached below.

When I catkin_make only localizablity the build succeeds, but when my_package is included as well, the build fails with an undefined reference to a function declared in a localizablity header:

CMakeFiles/get_occupancy.dir/src/get_occupancy.cpp.o: In function `main':

get_occupancy.cpp:(.text+0x3be): undefined reference to `Interpolator::Interpolator()'

get_occupancy.cpp:(.text+0x3d6): undefined reference to `Interpolator::get_localizablity(float, float, float)'

collect2: error: ld returned 1 exit status

cmake_minimum_required(VERSION 2.8.3)
project(localizablity)

find_package(catkin REQUIRED COMPONENTS
  cmake_modules
  roscpp
  roslib
  std_msgs
  sensor_msgs
  visualization_msgs
  pcl_ros
  pcl_conversions
)

find_package(PCL REQUIRED)
find_package(Eigen REQUIRED)
include_directories(${Eigen_INCLUDE_DIRS}
${PCL_INCLUDE_DIRS}
)

link_directories(
${Eigen_LIBRARY_DIRS}
${PCL_LIBRARY_DIRS}
)


catkin_package(
  INCLUDE_DIRS include
  LIBRARIES calculate_localizablity
  CATKIN_DEPENDS roscpp roslib std_msgs sensor_msgs visualization_msgs
                 pcl_ros pcl_conversions
  DEPENDS Eigen PCL
)

include_directories(
  include
  ${catkin_INCLUDE_DIRS}
)

add_library(calculate_localizablity src/calculate_localizablity.cpp)
target_link_libraries(calculate_localizablity ${catkin_LIBRARIES} ${PCL_LIBRARIES})

add_executable(localizablity_calculator src/localizablity_calculator.cpp)
target_link_libraries(localizablity_calculator calculate_localizablity view_map
  ${catkin_LIBRARIES} ${PCL_LIBRARIES})

 install(TARGETS calculate_localizablity
   ARCHIVE DESTINATION ${CATKIN_PACKAGE_LIB_DESTINATION}
   LIBRARY DESTINATION ${CATKIN_PACKAGE_LIB_DESTINATION}
   RUNTIME DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION}
 )

install(DIRECTORY include/${PROJECT_NAME}/
   DESTINATION ${CATKIN_PACKAGE_INCLUDE_DESTINATION}
   #FILES_MATCHING PATTERN "*.h"
   PATTERN ".svn" EXCLUDE
 )

2016-11-08 05:13:31 -0500 received badge  Famous Question (source)
2016-11-04 21:06:25 -0500 received badge  Famous Question (source)
2016-05-19 08:09:58 -0500 received badge  Teacher (source)
2016-05-19 08:09:58 -0500 received badge  Necromancer (source)
2016-05-19 07:13:37 -0500 commented answer How can I simulate a camera model for software testing?

@JohnDoe2991 I also had to add the plugin to get the entry on rviz. http://www.generationrobots.com/en/co... this helped me.

2016-05-12 09:27:40 -0500 received badge  Notable Question (source)
2016-05-12 06:39:33 -0500 commented answer How can I simulate a camera model for software testing?

Hey thanks for the edit. I'm trying to follow your steps but can't find the camera image on rviz.

2016-05-11 08:00:07 -0500 received badge  Popular Question (source)
2016-05-11 00:38:47 -0500 commented answer How can I simulate a camera model for software testing?

Thanks @JohnDoe2991. I have never worked with Gazebo, so I have a couple of doubts: can I move the camera model around and get images from all pose positions? And how do I compare the corresponding images to the 3D model being inspected to make sure the entire surface area is covered?

2016-05-10 22:10:35 -0500 asked a question How can I simulate a camera model for software testing?

I have a set of camera poses intended to fully inspect a 3D structure. I would like to test this with a simulated camera model and mark its field of view somehow in order to measure coverage. Is there any way I can do this?

2016-04-13 08:57:52 -0500 received badge  Famous Question (source)
2016-04-13 04:23:10 -0500 answered a question How to keep Pose Markers in RViz?

You can use a PoseArray:

import rospy
from geometry_msgs.msg import PoseArray

def talker():
    obj = get_poses()   # your own function returning a list of Pose messages
    pub = rospy.Publisher('topic', PoseArray, queue_size=len(obj))
    rospy.init_node('pose_array')
    rate = rospy.Rate(60)  # 60 Hz
    while not rospy.is_shutdown():
        poses = PoseArray()
        poses.header.frame_id = "/surface"
        poses.header.stamp = rospy.Time.now()
        for i in range(len(obj)):
            poses.poses.append(obj[i])
        pub.publish(poses)
        rate.sleep()

2016-04-13 04:14:26 -0500 marked best answer Quaternion of a 3D vector

My drone has to reach position (x, y, z) and orient itself along the 3D vector (ax, ay, az) with respect to the origin. To visualize this in rviz I have to represent the orientation in quaternion form. I understand that quaternions and vectors don't represent the same thing, but if I take the x-axis (the vector [1, 0, 0]) as my arbitrary reference axis and calculate the quaternion between the two vectors, I get this result:

[rviz screenshot of the resulting poses]

As you can see, in rviz the poses aren't all normal to the faces of the cube (my vectors are!). What must I do to simply get the vector representation in quaternion form so I can visualize it?

2016-04-13 04:14:26 -0500 received badge  Scholar (source)
2016-04-13 04:14:21 -0500 answered a question Quaternion of a 3D vector

import math
from numpy import asarray, array_equal, cross, dot, linalg, negative
from geometry_msgs.msg import Pose

def pose_from_vector3D(waypoint):
    # waypoint is [x, y, z, ax, ay, az]; the orientation comes from the direction part
    # http://lolengine.net/blog/2013/09/18/beautiful-maths-quaternion-from-vectors
    pose = Pose()
    pose.position.x = waypoint[0]
    pose.position.y = waypoint[1]
    pose.position.z = waypoint[2]
    # calculating the half-way vector between the x axis and the target direction
    u = [1, 0, 0]
    norm = linalg.norm(waypoint[3:])
    v = asarray(waypoint[3:]) / norm
    if array_equal(u, v):
        pose.orientation.w = 1
        pose.orientation.x = 0
        pose.orientation.y = 0
        pose.orientation.z = 0
    elif array_equal(u, negative(v)):
        pose.orientation.w = 0
        pose.orientation.x = 0
        pose.orientation.y = 0
        pose.orientation.z = 1
    else:
        half = [u[0] + v[0], u[1] + v[1], u[2] + v[2]]
        pose.orientation.w = dot(u, half)
        temp = cross(u, half)
        pose.orientation.x = temp[0]
        pose.orientation.y = temp[1]
        pose.orientation.z = temp[2]
    norm = math.sqrt(pose.orientation.x * pose.orientation.x + pose.orientation.y * pose.orientation.y +
                     pose.orientation.z * pose.orientation.z + pose.orientation.w * pose.orientation.w)
    if norm == 0:
        norm = 1
    pose.orientation.x /= norm
    pose.orientation.y /= norm
    pose.orientation.z /= norm
    pose.orientation.w /= norm
    return pose

I realized my mistake: I hadn't normalized the quaternion. This code works!

[rviz screenshot of the corrected poses]

2016-03-16 03:58:52 -0500 commented answer Quaternion of a 3D vector

That makes sense! But since I only want a visual representation of the vector in rviz, I don't think I need a unique quaternion. As long as the x-axis of the frame always points along the vector, the y and z axes shouldn't matter, right? But again, let me know if I'm way off here @robustify
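
A quick way to sanity-check that, assuming the pose_from_vector3D function from the answer above is in scope: rotate [1, 0, 0] by the returned quaternion and compare it with the normalised direction (the waypoint values below are arbitrary examples):

import numpy as np
from tf.transformations import quaternion_matrix  # ships with ROS (tf)

waypoint = [1.0, 2.0, 3.0, 0.5, -1.0, 2.0]   # x, y, z, ax, ay, az (example values)
pose = pose_from_vector3D(waypoint)

q = [pose.orientation.x, pose.orientation.y,
     pose.orientation.z, pose.orientation.w]
rotated_x = quaternion_matrix(q)[:3, :3].dot([1.0, 0.0, 0.0])

direction = np.asarray(waypoint[3:]) / np.linalg.norm(waypoint[3:])
print(np.allclose(rotated_x, direction))     # True: the frame's x axis points along the vector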

2016-03-14 12:48:44 -0500 received badge  Notable Question (source)
2016-03-13 23:26:44 -0500 received badge  Popular Question (source)
2016-03-13 07:38:53 -0500 received badge  Notable Question (source)
2016-03-13 03:26:49 -0500 edited question Visualize collada file on rviz

I exported a Collada file from SketchUp but am unable to figure out what the problem is. Here is my code! The status in rviz is OK, but I cannot see any display. Please help!

#!/usr/bin/env python
import rospy
from visualization_msgs.msg import Marker

marker = Marker()
marker.header.frame_id = "/surface"
marker.header.stamp = rospy.Time()
marker.ns = "primitive_surfaces"
marker.id = 0
marker.type = marker.MESH_RESOURCE
marker.action = marker.ADD
marker.pose.position.x = 0
marker.pose.position.y = 0
marker.pose.position.z = 0
marker.pose.orientation.x = 0.0
marker.pose.orientation.y = 0.0
marker.pose.orientation.z = 0.0
marker.pose.orientation.w = 1.0
marker.scale.x = 1
marker.scale.y = 1
marker.scale.z = 1
marker.color.a = 1.0
marker.color.r = 0.0
marker.color.g = 1.0
marker.color.b = 0.0
marker.mesh_resource = "file:///home/swethmandava/catkin_ws/src/planning_inspection/planning_inspection_ros/nodes/cuboid.dae"
pub = rospy.Publisher('surface', Marker, queue_size=1)
rospy.init_node('vis_surface')
rate = rospy.Rate(60)
while not rospy.is_shutdown():
  pub.publish(marker)
  rate.sleep()

Edit: Included rviz screenshot and frame information

image description

This is my tf frame:

#!/usr/bin/env python 
import roslib
roslib.load_manifest('planning_inspection_ros')
import rospy
import tf
if __name__ == '__main__':
    rospy.init_node('tf_broadcaster')
    br = tf.TransformBroadcaster()
    rate = rospy.Rate(10)
    while not rospy.is_shutdown():
        br.sendTransform((0.0,0.0,0.0), (0.0,0.0,0.0,1.0), rospy.Time.now(),"map", "surface")
        rate.sleep()