
Rouno's profile - activity

2019-10-15 02:11:05 -0500 marked best answer Detecting non-static objects while a robot is moving

Hi everyone,

I have a TurtleBot 2.0 (Groovy) with a laser rangefinder (Hokuyo URG-04LX) and a Kinect. The robot performs navigation and mapping very well. Now I'd like it to recognize moving objects, especially people, while it is moving as well. I have made some considerations I'd like to share with you; any feedback would be appreciated:

  • Using "Detecting people on a ground plane with RGB-D data" from pcl. But this might be overkill.

  • Using "Canonical Scan Matcher" outliers somehow. Then stop the robot and launch NITE user tracker to remove any doubt. A lot more efficient I guess, but I don't know how to publish outliers.

  • Same kind of approach as above but with a own implementation. Using for instance obstacles in the local map and comparing prediction and observation just like this paper: "Dynamic object detection using Laser data and transforming the points to lines". But I'm sure there is an easier solution than reinventing the wheel.

Which approach should I consider first? Do you see anything else even easier than these suggestions?
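
To make the third option concrete: a minimal sketch of the prediction-vs-observation idea, assuming a /scan topic, a tf from the laser frame to odom, and a made-up 0.2 m motion threshold (the nearest-neighbor test is naive and untuned):

    #!/usr/bin/env python
    # Minimal sketch: express two consecutive scans in the fixed odom
    # frame and flag points that moved. Topic name, frame names and the
    # 0.2 m threshold are assumptions, not settled choices.
    import math
    import rospy
    import tf
    from sensor_msgs.msg import LaserScan

    prev_points = []

    def scan_to_odom(scan):
        """Project valid scan returns into the odom frame as (x, y) pairs."""
        try:
            listener.waitForTransform('odom', scan.header.frame_id,
                                      scan.header.stamp, rospy.Duration(0.1))
            trans, rot = listener.lookupTransform('odom', scan.header.frame_id,
                                                  scan.header.stamp)
        except tf.Exception:
            return []
        yaw = tf.transformations.euler_from_quaternion(rot)[2]
        points = []
        for i, r in enumerate(scan.ranges):
            if scan.range_min < r < scan.range_max:
                a = scan.angle_min + i * scan.angle_increment
                x, y = r * math.cos(a), r * math.sin(a)
                points.append((trans[0] + x * math.cos(yaw) - y * math.sin(yaw),
                               trans[1] + x * math.sin(yaw) + y * math.cos(yaw)))
        return points

    def callback(scan):
        global prev_points
        points = scan_to_odom(scan)
        if prev_points and points:
            # Naive O(n^2) nearest-neighbor check: a point with no close
            # match in the previous scan is a candidate dynamic point.
            moved = [p for p in points
                     if min(math.hypot(p[0] - q[0], p[1] - q[1])
                            for q in prev_points) > 0.2]
            rospy.loginfo('%d candidate dynamic points', len(moved))
        prev_points = points

    rospy.init_node('dynamic_scan_sketch')
    listener = tf.TransformListener()
    rospy.Subscriber('scan', LaserScan, callback)
    rospy.spin()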

Thank you very much.

2019-02-28 20:15:35 -0500 received badge  Famous Question (source)
2017-09-18 03:14:03 -0500 received badge  Famous Question (source)
2017-07-12 05:00:37 -0500 received badge  Famous Question (source)
2017-06-05 03:28:33 -0500 received badge  Nice Question (source)
2017-03-22 03:25:57 -0500 received badge  Famous Question (source)
2017-02-19 14:25:50 -0500 received badge  Famous Question (source)
2017-02-19 14:25:50 -0500 received badge  Notable Question (source)
2017-02-19 14:25:50 -0500 received badge  Popular Question (source)
2016-10-07 09:38:55 -0500 received badge  Notable Question (source)
2016-10-03 14:25:34 -0500 received badge  Popular Question (source)
2016-10-03 13:47:40 -0500 commented answer Easiest way to set 2 cameras' relative transformation?

Hey Lucasw, this is just the perfect answer I was looking for. I'll give it a try ASAP!

Thanks a lot for your help ;)

2016-10-03 11:00:55 -0500 asked a question Easiest way to set 2 cameras' relative transformation?

Hi guys,

I've set up a system with a downward-facing camera and a visual odometry algorithm running on it. I would like to plug in a second camera, facing forward, that would benefit from the first camera's localization.

The idea is to be able to see some AR markers in an rviz camera view from the second camera's perspective.

I read a couple of tutorials on tf, urdf, robot models, etc., but can't figure out the easiest way to set this up. What would you recommend?

My system runs ROS Kinetic on an Odroid C2.
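
From what I've read so far, a fixed tf between the two camera frames seems to be the usual approach, either with tf's static_transform_publisher in a launch file or from a small broadcaster node. A minimal sketch, with frame names and offsets as placeholders to be measured on the real mount:

    #!/usr/bin/env python
    # Minimal sketch: broadcast a fixed transform between the two camera
    # frames so detections from the forward camera can be expressed in
    # the localized downward camera's frame. The frame names and numeric
    # offsets below are placeholders -- measure them on the actual mount.
    import rospy
    import tf

    rospy.init_node('camera_extrinsics_sketch')
    broadcaster = tf.TransformBroadcaster()
    rate = rospy.Rate(20)
    while not rospy.is_shutdown():
        broadcaster.sendTransform(
            (0.05, 0.0, 0.02),                                            # x, y, z in meters
            tf.transformations.quaternion_from_euler(0.0, -1.5708, 0.0),  # pitch to face forward
            rospy.Time.now(),
            'forward_cam',     # child frame
            'downward_cam')    # parent frame
        rate.sleep()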

Thanks !

2016-08-08 00:40:01 -0500 marked best answer Problem with leg_detector and people_tracking_filter

Hi ROS users!

I'd like to simulate a human wandering around in the MORSE simulator and have the robot detect it in rviz. I managed to install the people stack from David Lu, but ran into a couple of problems when running it.

First I run filter.launch in the people_tracking_filter package. Since I don't have any odom_combined frame to use as the fixed frame, I replaced it with odom instead. /people_tracker_filter_visualization seems to be updated 10 times/s, but is empty, as shown with rostopic:

header: 
  seq: 375
  stamp: 
    secs: 0
    nsecs: 0
  frame_id: odom
points: []
channels: 
  - 
    name: rgb
    values: []

Then I run leg_detector.launch in the corresponding package, but I get the following error:

[ERROR] [1378816996.919008120]: Client [/people_tracker] wants topic /people_tracker_measurements to have datatype/md5sum [people_msgs/PositionMeasurement/54fa938b4ec28728e01575b79eb0ec7c], but our version has [people_msgs/PositionMeasurementArray/59c860d40aa739ec920eb3ad24ae019e]. Dropping connection.
[ WARN] [1378817011.705600206]: MessageFilter [target=/odom_combined ]: Dropped 100.00% of messages so far. Please turn the [ros.leg_detector.message_notifier] rosconsole logger to DEBUG for more information.
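
For context, the first error means the tracker subscribes expecting people_msgs/PositionMeasurement while the leg detector publishes a people_msgs/PositionMeasurementArray, which usually points to mismatched versions of the two packages. Matching versions would be the proper fix, but purely as an illustration, a shim that unpacks the array (output topic name made up) could look like:

    #!/usr/bin/env python
    # Illustration only: republish each element of the detector's
    # PositionMeasurementArray as a single PositionMeasurement, for a
    # tracker built against the older message type. The output topic
    # name is made up; matching package versions is the proper fix.
    import rospy
    from people_msgs.msg import PositionMeasurement, PositionMeasurementArray

    def callback(array_msg):
        for measurement in array_msg.people:
            pub.publish(measurement)

    rospy.init_node('leg_measurement_shim')
    pub = rospy.Publisher('people_tracker_measurements_single',
                          PositionMeasurement, queue_size=10)
    rospy.Subscriber('people_tracker_measurements', PositionMeasurementArray, callback)
    rospy.spin()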

Second thing: the launch file refers to the base_scan topic, but mine is empty; all the data is on my scan topic. However, replacing base_scan with scan doesn't change anything, and there are still some things related to odom_combined...

Any guess?

Thank you very much for your time.

I'm running Groovy on Ubuntu 12.04, with the MORSE 1.1 simulator, a differential-drive robot, and a 2D LIDAR.

2016-08-05 02:55:06 -0500 received badge  Notable Question (source)
2016-08-03 04:48:05 -0500 commented answer Publishing compressed images directly for 120fps streaming

I have edited the topic title and content to be more specific.

2016-08-03 04:42:22 -0500 received badge  Popular Question (source)
2016-08-02 18:52:03 -0500 commented answer Publishing compressed images directly for 120fps streaming

I will indeed have a look at my CPU performance. But in any case, shouldn't we be able to send MJPEG from a camera directly over the network without re-encoding overhead? There are a few words about this in the last section of the compressed image plugin wiki, but I'm not sure it's the right way to do it.

2016-08-02 18:52:03 -0500 received badge  Commentator
2016-08-02 12:56:27 -0500 commented question Publishing compressed images directly for 120fps streaming

To be more specific, I run the usb_cam driver and would like to stream the MJPEG camera output to a remote node at the camera's framerate.

2016-08-02 12:54:16 -0500 commented answer Publishing compressed images directly for 120fps streaming

Thanks for the answer. I have no problem changing the usb_cam/image_raw framerate. My issue is with its compressed topic (through the image_transport plugin).

I would like to stream the 120 fps MJPEG stream to a remote node, but it seems that ROS is re-encoding the USB camera output at a fixed framerate.

2016-08-02 07:02:12 -0500 asked a question Publishing compressed images directly for 120fps streaming

Hi,

I have a USB camera able to output MJPEG at up to 120 fps. I would like to send this video stream over a WiFi network to a remote node. So far, using the image_raw/compressed topic re-encodes the video stream, which is far from optimal for my purpose.

According to http://wiki.ros.org/compressed_image_... :

"a quick and dirty approach is to simply copy the JPEG data into a sensor_msgs/CompressedImage message and publish it on a topic of the form image_raw/compressed"

But this is "dirty" and may not be the best way to do it. What do you think ?

Thanks

2016-08-01 07:38:18 -0500 received badge  Notable Question (source)
2016-07-31 18:57:31 -0500 received badge  Popular Question (source)
2016-07-31 09:42:23 -0500 received badge  Citizen Patrol (source)
2016-07-31 08:32:27 -0500 answered a question ethzasl_ptam + Kinetic input image problem

OK, so I was giving the wrong image format. Feeding a mono image through image_proc solved the issue.

2016-07-19 09:23:21 -0500 commented answer Incomplete packages for kinetic armhf jessie

My bad, I meant to refer to ARM64 instead.

2016-07-19 03:29:01 -0500 commented answer Incomplete packages for kinetic armhf jessie

Hi, is there any release scheduled yet for Kinetic armhf + Ubuntu 16.04?

2016-07-18 09:42:06 -0500 answered a question svo_ros + kinetic, no topic publishing

FYI, I found a temporary fix for the problem (I'm sure there are still some topics not being published).

I made last_frame_ public in svo/frame_handler_mono.h instead of protected.

Somehow the function lastFrame() always returned NULL otherwise.

Now rviz displays svo/image and tf properly.

2016-07-18 05:23:27 -0500 asked a question svo_ros + kinetic, no topic publishing

Hi everyone,

I'm trying to run SVO with Ubuntu 16.04 and Kinetic on my MacBook Air. Compiling is OK and tracking seems to work well on the SVO rosbag dataset, from what I see in the rqt_svo GUI ("GOOD TRACKING" displayed).

However, no tf, pose, or image topics are published from the svo_ros node. In fact, vo_->last_frame_ always seems to be NULL.

I posted the issue on the SVO GitHub ( https://github.com/uzh-rpg/rpg_svo/is... ) but I'm reposting it here because I think it is more related to ROS.

Because pointers to cv::Mat are involved, I was wondering if the problem comes from OpenCV (version 3 in Kinetic).

What do you think?

2016-07-18 05:16:26 -0500 received badge  Enthusiast
2016-07-12 11:02:51 -0500 asked a question ethzasl_ptam + Kinetic input image problem

Hi everyone,

Below is a capture from usb_cam/image_raw, followed by the same picture as seen by the ethzasl_ptam calibrator:

[screenshot: the usb_cam/image_raw capture]

[screenshot: the same image in the ethzasl_ptam calibrator]

I'm using ROS Kinetic on Ubuntu 16.04 and modified ethzasl_ptam to compile against OpenCV 3.

Any guess?

2016-07-08 11:19:54 -0500 answered a question Garbled image problem on usb_cam

Hi everyone, I'm quite new to ROS and have experienced the same problem with Indigo & Ubuntu 14.04. Is there a simple, step-by-step way of solving this? I tried the libuvc_camera package, but my camera module doesn't seem to be UVC-compliant.

Thx

2016-07-08 06:35:34 -0500 commented answer Garbled image problem on usb_cam

Hi Allen, I'm also facing the exact same problem. Can you elaborate on the solution of rewriting the mjpeg2rgb function? Best regards

2014-01-29 11:21:57 -0500 received badge  Notable Question (source)
2013-11-03 21:30:59 -0500 received badge  Popular Question (source)
2013-10-29 22:34:31 -0500 received badge  Famous Question (source)
2013-10-17 22:06:45 -0500 answered a question Detecting non-static objects while a robot is moving

@ctguell, I guess you're using the ground-plane people detector from PCL, aren't you? We also gave it a try, but since our robot's camera points downward, the algorithm didn't work very well. For now we're using laser-based people detection, but I'm sure that taking care of ICP outliers would definitely be the best option.

2013-10-17 06:39:11 -0500 received badge  Famous Question (source)