
BadRobot's profile - activity

2017-09-23 07:11:43 -0600 received badge  Good Question
2014-10-21 03:32:33 -0600 received badge  Famous Question
2014-01-28 17:30:20 -0600 marked best answer rosserial & sending str_msg

Another really basic question, but I cannot seem to figure it out:

I am trying to transmit data from a ROS node on an Arduino, and I figured that the best way to do this was through a str_msg. I have set str_msg.data and I pass that message to my publisher, which is set up to take in a str_msg. I am wondering if this is all that I need to do, or if there are other fields that I need to populate before I even attempt to publish?
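
For reference, here is a minimal publisher sketch in the style of the rosserial_arduino tutorials (the topic name "chatter" and the payload are placeholders). A std_msgs/String has data as its only field, so nothing else needs to be populated before publishing:

// Minimal rosserial_arduino publisher sketch; "chatter" is a placeholder topic.
#include <ros.h>
#include <std_msgs/String.h>

ros::NodeHandle nh;

std_msgs::String str_msg;
ros::Publisher chatter( "chatter", &str_msg );

char hello[] = "hello world!";

void setup()
{
  nh.initNode();            // bring up the rosserial link to the host
  nh.advertise( chatter );  // register the publisher with the host side
}

void loop()
{
  str_msg.data = hello;     // data is the only field in std_msgs/String
  chatter.publish( &str_msg );
  nh.spinOnce();            // let rosserial service the serial connection
  delay( 1000 );
}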

2014-01-28 17:30:13 -0600 marked best answer Launch File Highlighting in Eclipse

I am wondering if anyone knows how to get Eclipse to see .launch files as if they were XML and perform syntax highlighting based on the XML style.

2014-01-28 17:30:01 -0600 marked best answer ECTO + Kinect Issue

When trying to run a VERY basic ECTO cell, before I integrate any of my own functionality, I am getting the following error:

python example.py 
Connecting to device.
 attempting to turn on registration...
Failed to start generating.Bad Parameter sent to the device!
303:/tmp/buildd/ros-fuerte-ecto-openni-0.3.8-0precise-20130313-1459/cells/Capture.cpp

Connected to device.
Failed to update all contexts.A timeout has occurred when waiting for new data!
430:/tmp/buildd/ros-fuerte-ecto-openni-0.3.8-0precise-20130313-1459/cells/Capture.cpp

Segmentation fault (core dumped)

I am not sure what is causing this, as I am simply using ECTO to access the Kinect and display the point cloud. I have attached the source for the example.py file below:

#!/usr/bin/env python

import ecto
import ecto_openni
import ecto_pcl

plasm = ecto.Plasm()

# capture from the Kinect and view the resulting point cloud
device = ecto_openni.Capture( 'device' )
cloud_generator = ecto_pcl.NiConverter('cloud_generator')
viewer = ecto_pcl.CloudViewer( "viewer", window_name="Clouds!" )

graph = [ device[:] >> cloud_generator[:],
          cloud_generator[:] >> viewer[:]
        ]

plasm.connect( graph )

if __name__ == "__main__":
    from ecto.opts import doit
    doit(plasm, description='test.')

Everything above this line has been resolved.


I have updated my Python code to include the following:

#!/usr/bin/env python

import ecto, ecto_pcl, ecto_ros, ecto_pcl_ros

from ecto_openni import Capture

plasm = ecto.Plasm()

# subscribe to the Kinect cloud topic and convert incoming messages to PCL clouds
cloud_sub = ecto_sensor_msgs.Subscriber_PointCloud2("cloud_sub", topic_name='/camera/depth_registered/points')
msg2cloud = ecto_pcl_ros.Message2PointCloud("msg2cloud", format=ecto_pcl.XYZRGB)

cloud2msg = ecto_pcl_ros.PointCloud2Message("cloud2msg")
cloud_pub = ecto_sensor_msgs.Publisher_PointCloud2("cloud_pub",topic_name='/ecto_pcl/sample_output')

plasm.connect( cloud_sub['output'] >> msg2cloud[:],
               msg2cloud[:] >> cloud2msg[:],
               cloud2msg[:] >> cloud_pub[:] )

ecto_ros.init(sys.argv, "get_point_cloud")

if __name__ == "__main__":
    from ecto.opts import doit
    doit(plasm, description='Read a pcd through ROS.')

When I run this code I get the following error:

Traceback (most recent call last):
  File "ros_example.py", line 10, in <module>
    cloud_sub = ecto_sensor_msgs.Subscriber_PointCloud2("cloud_sub", topic_name='/camera/depth_registered/points')
NameError: name 'ecto_sensor_msgs' is not defined

I am also not sure if I am performing the call to ecto_ros.init() properly.

2014-01-28 17:29:54 -0600 marked best answer usb_cam + Microsoft Lifecam Studio Rendering Issues

I am trying to get the Microsoft Lifecam up and running in ROS using the usb_cam node. It is able to open the camera fine and display an image, but the image is extremely bright (see the image at the bottom of the post), which makes it almost impossible to detect anything in the image when there is natural sunlight around. The usb_cam node is also outputting the following error to the console when I try to use the camera:

[swscaler @ 0x1851a20] No accelerated colorspace conversion found from yuv422p to rgb24.

I am not sure whether this is part of the reason the web camera displays improperly, but if you could help with either of these issues it would be greatly appreciated!

[image: extremely bright, washed-out webcam frame]

2014-01-28 17:29:51 -0600 marked best answer Connecting plasms in ECTO

I am creating a repository for a new visual processing pipeline which should be able to accommodate several different functions (data capture, filtering, segmentation, etc.), and each function should also be able to support several different implementations, so that when the system is up and running it can select whichever implementation best suits the current process.

I am wondering if I should implement each function as its own plasm, but this raises the question: can I connect multiple plasms in ECTO to make the final pipeline, or must everything be connected together in one plasm?

I have gotten my cell structure set up the way that we want it; I am now just trying to figure out how best to set up the plasms.

Any help would be appreciated.

2014-01-28 17:25:01 -0600 marked best answer rosserial_arduino Processing Messages

We are trying to set up a publisher/subscriber pair that will pull in data from a computer (the publisher) sending drive commands (direction/speed) to the Arduino to control a UGV. We are also looking to have the Arduino send back information (battery status, turn angle, speed, etc.), and we are having issues getting everything to work.

I know this is probably a really stupid question, but is there a way that I can print out my str_msg.data to the Arduino serial monitor to make sure that I am actually getting that data and that I am not messing up on that first basic step?
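
One caveat: on most boards rosserial claims the Arduino's hardware serial port for itself, so Serial.print() debugging would corrupt the rosserial stream. The usual alternative is nh.loginfo(), which relays text to the host console instead of the serial monitor. A minimal sketch, with the topic name "drive_commands" as an invented placeholder:

// Echo received data back through rosserial's logging, since the USB
// serial line is already claimed by rosserial itself.
#include <ros.h>
#include <std_msgs/String.h>

ros::NodeHandle nh;

void driveCallback( const std_msgs::String& msg )
{
  // Shows up on the host side (serial_node output / rosout), confirming
  // the data actually arrived on the Arduino.
  nh.loginfo( msg.data );
}

ros::Subscriber<std_msgs::String> sub( "drive_commands", &driveCallback );

void setup()
{
  nh.initNode();
  nh.subscribe( sub );
}

void loop()
{
  nh.spinOnce();
  delay( 10 );
}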

2014-01-28 17:23:31 -0600 marked best answer gps + navigation

Hello,

I am working with a USB GPS unit which I have gotten working with the gpsd_client in order to read in the GPS data from the unit. I am trying to implement VERY simple navigation in a 2D world from one waypoint to another. I would have to imagine that there are packages already out there to take in the current GPS information and provide even a straight-line path to the next waypoint.

For some background, this is for a UGV project which will eventually use more complex navigation, but for now we just want to get the thing up and running quickly, and we were hoping to keep the project within the ROS world. Any help the community could provide would be super helpful.
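
For reference, the straight-line leg between two fixes can be computed directly from latitude/longitude; below is a minimal self-contained sketch (plain C++, no ROS dependency) using an equirectangular approximation, which is adequate over short UGV distances. The coordinates in main() are arbitrary example values.

// Straight-line distance and bearing between two GPS fixes.
// Equirectangular approximation: adequate over short distances.
#include <cmath>
#include <cstdio>

struct Fix { double lat_deg; double lon_deg; };

const double kEarthRadiusM = 6371000.0;
const double kDegToRad = 3.14159265358979323846 / 180.0;

// distance_m: metres to the waypoint; bearing_rad: 0 = north, clockwise positive.
void waypointVector( const Fix& from, const Fix& to,
                     double& distance_m, double& bearing_rad )
{
  double dlat = ( to.lat_deg - from.lat_deg ) * kDegToRad;
  // shrink the east-west span as latitude increases
  double dlon = ( to.lon_deg - from.lon_deg ) * kDegToRad
                * std::cos( from.lat_deg * kDegToRad );
  distance_m = kEarthRadiusM * std::sqrt( dlat * dlat + dlon * dlon );
  bearing_rad = std::atan2( dlon, dlat );
}

int main()
{
  Fix here = { 50.7700, 7.1800 };      // arbitrary example fixes
  Fix waypoint = { 50.7800, 7.1900 };
  double d, b;
  waypointVector( here, waypoint, d, b );
  std::printf( "%.1f m at %.1f deg\n", d, b / kDegToRad );
  return 0;
}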

Thanks in advance!

2014-01-12 13:35:48 -0600 received badge  Famous Question
2013-12-05 16:32:41 -0600 marked best answer Beginning with ECTO

Hello, I am fairly new to the world of ECTO and I am trying to create something similar to http://wg-perception.github.com/object_recognition_core/ (the Object Recognition Kitchen) for our own purposes. The idea is to create an adaptive pipeline which is able to change which implementation is being used to solve a problem, based on the input from the environment and the feedback from various nodes.

The basic concept was to have the system essentially broken down into three initial steps:

  • Data Capture
  • Filtering Step(s)
  • Segmentation Step(s)

This is where my confusion begins. I am trying to set up the framework and get everything organized, and I am not sure I am grasping the point of ECTO. We want to leverage ECTO for its ability to dynamically connect and disconnect various cells, as well as its built-in life cycle for cells. What I am not sure of is how to organize the file structure to best represent this.

Should I have each one of the above tasks as its own "repository", with the cells and module inside of it, or should everything be in one "perception pipeline" repository whose "cells" folder contains the different modules, which will be able to communicate with each other? I have seen several different layouts among the GitHub repositories, and I am wondering if there is a best-practices guide or something out there to help me in setting all of this up.

I have tried to find out this basic information by going through the documentation, but it is sadly lacking in what I assume to be fairly basic information; either that, or I have missed it entirely. It would be great if someone could point out how this should be set up, as I need all of the modules to be able to talk to each other and to interact with the scheduler, which will be added at a later date.

Any pointers you could provide would be very helpful!

All the best, Mat

2013-11-11 13:14:25 -0600 received badge  Stellar Question
2013-11-04 03:13:13 -0600 received badge  Famous Question
2013-09-25 22:18:55 -0600 received badge  Famous Question
2013-08-14 12:45:08 -0600 received badge  Teacher
2013-08-14 12:45:08 -0600 received badge  Necromancer
2013-08-14 01:56:05 -0600 answered a question accessing kinect data

You could try looking at the following tutorial from PCL. If that doesn't help, but you are already printing this information out to the terminal, you could simply create an intermediary node that takes in all of the information and publishes a series of messages containing the coordinates you are looking for. Once you have that node, you can "record" the information either by writing it to the ROS log file or by writing it to your own log file if you want to.
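
A minimal sketch of such an intermediary node in roscpp; the topic names and the geometry_msgs/Point payload here are placeholders for illustration:

// Intermediary node: subscribe to the incoming data, log it, and
// republish just the coordinates of interest.
#include <ros/ros.h>
#include <geometry_msgs/Point.h>

ros::Publisher pub;

void pointCallback( const geometry_msgs::Point::ConstPtr& msg )
{
  // ROS_INFO output also ends up in the ROS log file.
  ROS_INFO( "point: %.3f %.3f %.3f", msg->x, msg->y, msg->z );
  pub.publish( *msg );  // relay for downstream recording (e.g. rosbag)
}

int main( int argc, char** argv )
{
  ros::init( argc, argv, "coordinate_relay" );
  ros::NodeHandle nh;
  pub = nh.advertise<geometry_msgs::Point>( "coordinates_out", 10 );
  ros::Subscriber sub = nh.subscribe( "coordinates_in", 10, pointCallback );
  ros::spin();
  return 0;
}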

Hope this helps!

2013-08-14 01:48:13 -0600 answered a question How to use pointclouds from Kinect?

I think that your best start would be to look into the openni_launch package; this will help you get the raw data from the Kinect. The next thing I would recommend is to look into the Point Cloud Library (PCL). It is very heavily tied to ROS and provides a very robust library for handling RGB-D data.
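
As a starting point, here is a minimal roscpp sketch that subscribes to the openni_launch output and converts each message to a PCL cloud. The topic name is openni_launch's default registered-points topic; the conversion header location varies between ROS/PCL releases:

// Subscribe to the Kinect point cloud and convert it to a PCL cloud.
#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl_conversions/pcl_conversions.h>

void cloudCallback( const sensor_msgs::PointCloud2::ConstPtr& msg )
{
  pcl::PointCloud<pcl::PointXYZRGB> cloud;
  pcl::fromROSMsg( *msg, cloud );  // ROS message -> PCL cloud
  ROS_INFO( "received cloud with %zu points", cloud.points.size() );
}

int main( int argc, char** argv )
{
  ros::init( argc, argv, "kinect_listener" );
  ros::NodeHandle nh;
  ros::Subscriber sub =
      nh.subscribe( "/camera/depth_registered/points", 1, cloudCallback );
  ros::spin();
  return 0;
}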

Finally, PCL offers many tutorials that discuss the topics you want to look into; check out the following:

  • ICP
  • Localization (map building)
  • As for robot_pose_ekf, I do not know this package myself, but there is a ROS page here that seems to cover the concept and the use of the node pretty well, so please check that out.

Hope this helps. BadRobot

2013-08-14 01:37:18 -0600 asked a question Displaying VTK mesh in RVIZ

I am working with ROS Groovy and Ubuntu 12.04.02.

I have used the PCL library to gather and pre-process data coming from a PrimeSense device, which in the last pre-processing step I have turned into a VTK mesh, as there are operations I want to perform that VTK supports but PCL does not. I have generated the mesh through the following:

ROS_WARN_STREAM( "Normal Estimation" );
pcl::NormalEstimationOMP<PointT, pcl::Normal> ne;
ne.setNumberOfThreads( 8 );
ne.setInputCloud( input_cloud.makeShared() );
ne.setRadiusSearch( 0.01 );
Eigen::Vector4f centroid;
compute3DCentroid( input_cloud, centroid );
ne.setViewPoint( centroid[0], centroid[1], centroid[2] );

pcl::PointCloud<pcl::Normal>::Ptr cloud_normals( new pcl::PointCloud<pcl::Normal>() );
ne.compute( *cloud_normals );

for (size_t i = 0; i < cloud_normals->size (); ++i)
{
  cloud_normals->points[i].normal_x *= -1;
  cloud_normals->points[i].normal_y *= -1;
  cloud_normals->points[i].normal_z *= -1;
}

PointCloudWithNormals::Ptr cloud_smoothed_normals (new PointCloudWithNormals );
concatenateFields( input_cloud, *cloud_normals, *cloud_smoothed_normals   );

// Create search tree
pcl::search::KdTree<PointNormalT>::Ptr tree2 (new pcl::search::KdTree<PointNormalT>);
tree2->setInputCloud( cloud_smoothed_normals );

// Initialize objects
pcl::GridProjection<PointNormalT> gbpolygon;
PCLMesh triangles;

// Set parameters
gbpolygon.setResolution( 0.005 );
gbpolygon.setPaddingSize( 3 );
gbpolygon.setNearestNeighborNum( 100 );
gbpolygon.setMaxBinarySearchLevel( 10 );

// Get result
gbpolygon.setInputCloud( cloud_smoothed_normals );
gbpolygon.setSearchMethod( tree2 );
gbpolygon.reconstruct( triangles );

pcl::io::saveVTKFile( file_name + "mesh.vtk", triangles );
ROS_INFO_STREAM( "Mesh saved to: " << file_name << "mesh.vtk" );

Now I am trying to output the mesh to RViz. I tried using a marker, as the RViz ROS page describes, but it is not showing up. Here is the code for doing so:

bool HelperFunctions::PublishMeshMarker( ros::Publisher mesh_publisher, std::string file_name )
{
  ROS_WARN_STREAM( "Starting Mesh Conversion & Publication" );
  visualization_msgs::Marker mesh_marker;

  mesh_marker.header.frame_id = "\base_link";
  mesh_marker.header.stamp = ros::Time();
  mesh_marker.ns = "ObjectMesh";
  mesh_marker.id = 0;
  mesh_marker.type = visualization_msgs::Marker::MESH_RESOURCE;
  mesh_marker.action = visualization_msgs::Marker::ADD;

  //only if using a MESH_RESOURCE marker type:
  mesh_marker.mesh_resource = file_name + "mesh.vtk";
  mesh_publisher.publish( mesh_marker );
  return true;
}

I have the feeling that the answer to this problem is probably pretty simple. Any assistance anyone could provide would be very appreciated.
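
For comparison, here is a sketch of the marker fields that commonly keep a MESH_RESOURCE marker from rendering: scale and color alpha default to zero (an invisible, fully transparent mesh), and mesh_resource is resolved as a URI (package:// or file://) rather than a bare path. As far as I know RViz's mesh loader also does not read .vtk, so the mesh would need exporting to a format such as .stl or .dae first (the .stl name below is that assumption). The frame_id here also drops the stray backslash:

// MESH_RESOURCE marker with the fields that default to invisible values
// filled in. Assumes the mesh was exported to a format RViz can load.
#include <string>
#include <ros/ros.h>
#include <visualization_msgs/Marker.h>

visualization_msgs::Marker makeMeshMarker( const std::string& file_name )
{
  visualization_msgs::Marker m;
  m.header.frame_id = "base_link";  // no backslash
  m.header.stamp = ros::Time();
  m.ns = "ObjectMesh";
  m.id = 0;
  m.type = visualization_msgs::Marker::MESH_RESOURCE;
  m.action = visualization_msgs::Marker::ADD;

  m.scale.x = m.scale.y = m.scale.z = 1.0;  // scale defaults to 0 (invisible)

  m.color.a = 1.0;                          // alpha defaults to 0 (transparent)
  m.color.r = m.color.g = m.color.b = 0.7;

  // mesh_resource is a URI, not a bare filesystem path
  m.mesh_resource = "file://" + file_name + "mesh.stl";
  return m;
}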

2013-08-11 16:09:56 -0600 received badge  Famous Question
2013-07-14 04:04:21 -0600 received badge  Notable Question
2013-07-04 23:22:53 -0600 received badge  Notable Question
2013-06-27 05:20:51 -0600 received badge  Notable Question
2013-06-24 00:30:22 -0600 received badge  Popular Question
2013-06-21 21:56:16 -0600 asked a question Roslaunch across multiple machines failing when calling nodes.

I currently have a two-machine robotic set-up: one machine operating internally on the robot itself, and an "external" machine which is used for heavier computations and graphical processing. We are trying to launch everything from one computer and have set up the following launch file:

<?xml version="1.0"?>
<launch>
    <group>
        <machine name="pc1" address="youbot-hbrs1-pc1" default="true"/>

        <!-- base components -->
        <include file="$(find raw_bringup)/robot.launch" />

        <!-- base placement -->
        <include file="$(find raw_base_placement)/ros/launch/placement_wrt_workspace.launch" />

        <!-- motion controller -->
        <include file="$(find raw_relative_movements)/launch/base_controller.launch" />

        <!-- manipulation -->
        <include file="$(find raw_arm_navigation)/ros/launch/simple_manipulation.launch" />
    </group>
    <group>
        <machine name="pc2" address="youbot-hbrs2-pc2" default="true" env-loader="/home/b-it-bots/RoboCupAtWork/raw_env_loader.sh"/>

        <!-- Kinect -->
        <include file="$(find raw_bringup)/components/cam3d.launch" />

        <!-- 3d perception pipeline -->
        <include file="$(find raw_object_detection)/ros/launch/perception_pipeline.launch" />

        <include file="$(find raw_visual_servoing)/ros/launch/microsoft_lifecam_720_online.launch"/>

        <!-- navigation -->
        <include file="$(find raw_2dnav)/ros/launch/2dnav.launch" />
    </group>
</launch>

When I load this file, everything up until "raw_visual_servoing" runs with no problem. The visual servoing application uses the usb_cam package to grab a live video stream from a webcam and performs blob detection on it using OpenCV libraries. It is run using the following command:

$ rosservice call /raw_visual_servoing/do_visual_servoing

It is normally launched from the second (external) computer and run there, as the internal one cannot run it. When we do this using one launch file we get the following errors:

PC2

$ rosservice call /raw_visual_servoing/do_visual_servoing 
ERROR: transport error completing service call: unable to receive data from sender, check sender's logs for details

From the node "raw_visual_servoing"

more raw_visual_servoing-raw_visual_servoing-26-stderr.log 

(GRAY:32551): Gtk-WARNING **: cannot open display: 

** (GRAY:32551): WARNING **: Command line `dbus-launch --autolaunch=0dcb19185f6f6b6fddfd12cb00000002 --binary-syntax --close-stderr' exited with non-zero exit status 1: Autolaunch error: X11 initialization failed.\n

I should also mention that if I move each group into its own launch file and launch them from each machine separately, everything seems to work fine. The failure happens ONLY when all of the nodes are started using the single launch file.

No errors are present or detectable on PC1. I know that you should be able to start everything from one launch file, so anyone's assistance in getting this to work would be VERY appreciated!
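
The Gtk-WARNING suggests the node is trying to open a GUI window, and when PC2's nodes are started through the remote roslaunch there is no X display in their environment. If the node's source is under your control, one possible workaround is to guard the display calls; a sketch, where the "show_gui" parameter is an invented placeholder and not part of raw_visual_servoing:

// Skip OpenCV GUI calls when no X display is available, so the node
// survives being started by a remote roslaunch.
#include <cstdlib>
#include <ros/ros.h>
#include <opencv2/highgui/highgui.hpp>

bool guiUsable( ros::NodeHandle& private_nh )
{
  bool show_gui = true;
  private_nh.param( "show_gui", show_gui, true );
  // DISPLAY is unset when a node is launched remotely without X forwarding.
  return show_gui && std::getenv( "DISPLAY" ) != NULL;
}

void showDebugImage( ros::NodeHandle& private_nh, const cv::Mat& image )
{
  if ( guiUsable( private_nh ) )
  {
    cv::imshow( "blob detection", image );
    cv::waitKey( 1 );
  }
}

Alternatively, exporting a suitable DISPLAY (for example DISPLAY=:0) inside raw_env_loader.sh may let the windows open on PC2's own screen.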

Thanks, Mat