2017-09-23 07:11:43 -0600 | received badge | ● Good Question (source) |
2014-10-21 03:32:33 -0600 | received badge | ● Famous Question (source) |
2014-01-28 17:30:20 -0600 | marked best answer | rosserial & sending str_msg Another really basic question, but I cannot seem to figure it out: I am trying to transmit data from a ROS node on an Arduino. I figured that the best way to do this was through a str_msg. I set str_msg.data and pass that to my publisher, which is set up to take in a str_msg. Is this all I need to do, or are there other fields I need to populate before I even attempt to publish? |
2014-01-28 17:30:13 -0600 | marked best answer | Launch File Highlighting in Eclipse Does anyone know how to get Eclipse to treat .launch files as XML and perform XML-style syntax highlighting on them? |
2014-01-28 17:30:01 -0600 | marked best answer | ECTO + Kinect Issue When trying to run a VERY basic ECTO cell, before I integrate any of my own functionality, I get the following error: I am not sure what is causing this, as I am simply using ECTO to access the Kinect and display the point cloud. I have attached the source for the example.py file below: Everything above this line has been resolved. I have updated my Python code to include the following: When I run this code I get the following error: I am also not sure whether I am calling ecto_ros.init() properly. |
2014-01-28 17:29:54 -0600 | marked best answer | usb_cam + Microsoft Lifecam Studio Rendering Issues I am trying to get the Microsoft Lifecam up and running in ROS using the usb_cam node. It opens the camera fine and displays an image, but the image is so bright that it is almost impossible to detect anything in it when there is natural sunlight around. The usb_cam node also outputs the following error to the console when I try to use the camera: I am not sure whether this is part of the reason the image displays improperly, but if you could help with either of these issues it would be greatly appreciated! |
2014-01-28 17:29:51 -0600 | marked best answer | Connecting plasms in ECTO I am creating a repository for a new visual processing pipeline. It should accommodate several different functions (data capture, filtering, segmentation, etc.), and each function should support several different implementations, so that when the system is up and running it can select whichever implementation best suits the current process. Should I implement each function as its own plasm? That raises the question: can I connect multiple plasms in ECTO to make the final pipeline, or must everything be connected together in one plasm? I have my cell structure set up the way we want it; I am just trying to figure out how best to set up the plasms. Any help would be appreciated. |
2014-01-28 17:25:01 -0600 | marked best answer | rosserial_arduino Processing Messages We are trying to set up a publisher/subscriber pair in which a computer publishes drive commands (direction/speed) to an Arduino controlling a UGV, and the Arduino sends back information (battery status, turn angle, speed, etc.). We are having issues getting everything to work. I know this is probably a really basic question, but is there a way to print my str_msg.data to the Arduino serial monitor, so I can make sure that I am actually receiving that data and not messing up that first basic step? |
2014-01-28 17:23:31 -0600 | marked best answer | gps + navigation Hello, I am working with a USB GPS unit, which I have gotten working with the gpsd_client in order to read in the GPS data. I am trying to implement VERY simple navigation in a 2D world from one waypoint to another. I would imagine there are already packages out there that take in current GPS information and provide even a straight-line path to the next waypoint. For background, this is for a UGV project which will eventually use more complex navigation, but for now we just want to get the thing up and running quickly, and we were hoping to keep the project within the ROS world. Any help the community could provide would be super helpful. Thanks in advance! |
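For the straight-line waypoint case described in the entry above, the heading and distance to the next waypoint can be computed directly from two lat/lon fixes. A minimal sketch in plain Python, independent of any ROS package (the function name is my own, not from a library):

```python
import math

def distance_and_bearing(lat1, lon1, lat2, lon2):
    """Great-circle distance (metres) and initial bearing (degrees,
    clockwise from true north) from fix 1 to fix 2."""
    R = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    # haversine formula for the great-circle distance
    a = math.sin(dlat / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
    dist = 2 * R * math.asin(math.sqrt(a))
    # initial bearing toward the waypoint
    y = math.sin(dlon) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
    bearing = (math.degrees(math.atan2(y, x)) + 360.0) % 360.0
    return dist, bearing
```

A trivial controller can then steer toward `bearing` and stop when `dist` falls below a threshold; for anything beyond "get it running quickly", the ROS navigation stack takes over once odometry and a map are available.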
2014-01-12 13:35:48 -0600 | received badge | ● Famous Question (source) |
2013-12-05 16:32:41 -0600 | marked best answer | Beginning with ECTO Hello, I am fairly new to the world of ECTO, and I am trying to create something similar to http://wg-perception.github.com/object_recognition_core/ (Object Recognition Kitchen) for our own purposes. The idea is to create an adaptive pipeline which can change which implementation is used to solve a problem, based on input from the environment and feedback from various nodes. The basic concept was to break the system down into three initial steps:
This is where my confusion begins. I am trying to set up the framework and get everything organized, and I am not sure I am grasping the point of ECTO. We want to leverage ECTO for its ability to dynamically connect and disconnect cells, as well as for its built-in cell life cycle. What I am not sure of is how to organize the file structure to best represent this. Should each of the above tasks be its own "repository", with the cells and module inside it, or should everything live in one "perception pipeline" repository whose "cells" folder contains the different modules, which can then communicate with each other? I have seen several different versions in the GitHub repositories, and I am wondering if there is a best-practices guide or something out there to help me set all of this up. I have tried to find this basic information by going through the documentation, but it is sadly lacking in what I assume to be fairly basic material, or else I have missed it entirely. If anyone could point out how this should be set up, it would help greatly, as I need all of the modules to be able to talk to each other and to interact with the scheduler, which will be added at a later date. Any pointers you could provide would be very helpful! All the best, Mat |
2013-11-11 13:14:25 -0600 | received badge | ● Stellar Question (source) |
2013-11-04 03:13:13 -0600 | received badge | ● Famous Question (source) |
2013-09-25 22:18:55 -0600 | received badge | ● Famous Question (source) |
2013-08-14 12:45:08 -0600 | received badge | ● Teacher (source) |
2013-08-14 12:45:08 -0600 | received badge | ● Necromancer (source) |
2013-08-14 01:56:05 -0600 | answered a question | accessing kinect data You could try looking at the following tutorial from PCL. If that doesn't help, but you are already printing this information to the terminal, you could simply create an intermediary node that takes in all of the information and publishes a series of messages containing the coordinates you are looking for. Once you have that node, you can "record" the information either by writing it to the ROS log file or by writing it to your own log file. Hope this helps! |
2013-08-14 01:48:13 -0600 | answered a question | How to use pointclouds from Kinect? I think your best start would be to look into the openni_launch package; it will help you get the raw data from the Kinect. The next thing I would recommend is the Point Cloud Library (PCL). It is very heavily tied to ROS and provides a very robust library for handling RGB-D data. Finally, PCL offers many tutorials that discuss the topics you want to look into; check out the following:
Hope this helps. BadRobot |
2013-08-14 01:37:18 -0600 | asked a question | Displaying VTK mesh in RVIZ I am working with ROS Groovy and Ubuntu 12.04.02. I have used the PCL library to gather and pre-process information coming from a PrimeSense device, and in the last pre-processing step I have turned it into a VTK mesh, as there are operations I want to perform that VTK supports but PCL does not. I obtained the mesh through the following: Now I am trying to output it to RViz, and I tried using a marker as the RViz ROS page describes, but it is not showing up. Here is the code for doing so: I have the feeling that the answer to this problem is probably pretty simple. Any assistance anyone could provide would be very appreciated. |
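For anyone hitting the same symptom as the question above: the most common reasons a Marker never appears in RViz are an empty `header.frame_id`, a zero `scale` component, `color.a` left at 0, or a non-normalized orientation quaternion. A small pre-flight check, sketched in plain Python (the dict keys mirror `visualization_msgs/Marker` field names; the helper itself is hypothetical, not part of ROS):

```python
import math

def marker_problems(marker):
    """Return a list of problems that would make an RViz Marker invisible.

    `marker` is a plain dict standing in for visualization_msgs/Marker fields.
    """
    problems = []
    if not marker.get("frame_id"):
        problems.append("header.frame_id is empty; RViz cannot place the marker")
    if any(marker.get(k, 0.0) <= 0.0 for k in ("scale_x", "scale_y", "scale_z")):
        problems.append("scale has a zero component; the marker has no size")
    if marker.get("color_a", 0.0) <= 0.0:
        problems.append("color.a is 0; the marker is fully transparent")
    q = [marker.get(k, 0.0) for k in ("qx", "qy", "qz", "qw")]
    if abs(math.sqrt(sum(c * c for c in q)) - 1.0) > 1e-3:
        problems.append("orientation quaternion is not normalized; use (0, 0, 0, 1) for identity")
    return problems
```

Checking these four fields (and confirming the Fixed Frame in RViz matches the marker's frame_id) resolves most "marker not showing up" reports.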
2013-08-11 16:09:56 -0600 | received badge | ● Famous Question (source) |
2013-07-14 04:04:21 -0600 | received badge | ● Notable Question (source) |
2013-07-04 23:22:53 -0600 | received badge | ● Notable Question (source) |
2013-06-27 05:20:51 -0600 | received badge | ● Notable Question (source) |
2013-06-24 00:30:22 -0600 | received badge | ● Popular Question (source) |
2013-06-21 21:56:16 -0600 | asked a question | Roslaunch across multiple machines failing when calling nodes. I currently have a two-machine robotic set-up: one machine operating internally on the robot itself, and an "external" machine used for heavier computation and graphical processing. We are trying to launch everything from one computer and have set up the following launch file: When I load this file, everything up until "raw_visual_servoing" runs with no problem. The visual servoing application uses the usb_cam package to grab a live video stream from a webcam and performs blob detection on it using OpenCV libraries. It is run using the following command: It is normally launched from the 2nd (external) computer and run there, as the internal one cannot run it. When we do this using the single launch file, we get the following errors on PC2 from the node "raw_visual_servoing": I should also mention that if I move each group into its own launch file and launch them from each machine separately, they seem to work fine, with no errors present or detectable on PC1. It is ONLY when all of the nodes are started using the single launch file that this fails. I know that you should be able to start everything from one launch file, so anyone's assistance in getting this to work would be VERY appreciated! Thanks, Mat |
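For others hitting the same error pattern: when a single launch file drives two machines, each remote node needs a `<machine>` entry whose env-loader points at a script on that machine sourcing ROS and the workspace, and the node then references that machine by name. A minimal hedged sketch (hostnames, user, package names, and the script path are placeholders, not taken from the setup above; the usb_cam node name is real):

```xml
<launch>
  <!-- PC1: the robot's internal computer (roslaunch is run from here) -->
  <machine name="pc1" address="robot-internal" />

  <!-- PC2: the external computer; env-loader must source ROS plus the
       workspace that contains the visual servoing package on PC2 -->
  <machine name="pc2" address="external-host" user="robot"
           env-loader="/home/robot/ros_ws/env.sh" />

  <node machine="pc1" pkg="some_driver_pkg" type="driver_node" name="driver" />
  <node machine="pc2" pkg="usb_cam" type="usb_cam_node" name="usb_cam" />
  <node machine="pc2" pkg="visual_servoing_pkg" type="raw_visual_servoing"
        name="raw_visual_servoing" />
</launch>
```

A common cause of exactly this failure mode (remote nodes dying while local ones run, and everything working when launched separately on each machine) is an env-loader script on the remote machine that does not source the workspace containing the failing node, so roslaunch cannot find its executable or its dependencies there.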