
face_detector not detecting with RGBD camera

asked 2015-06-07 09:58:31 -0500 by Pinchi, updated 2015-06-07 10:49:13 -0500

I recently installed the face_detector node and compiled it successfully. However, when I run the .launch file provided with the package, it does not detect any faces. I'm using Ubuntu 12.04, ROS Hydro, rosbuild, and a Kinect for both its RGB camera and its depth sensor. I have already checked the dependency libraries and I think they're okay. Whenever I run:

rostopic echo face_detector/status

It shows that the node is working, but nothing is being published on the face detection topics. Also, even if I set the do_display parameter to true, no OpenCV window appears.
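For reference, I checked the detection topics directly with commands along these lines (assuming the default topic names):

$ rostopic list | grep face_detector                              # list the detector's topics
$ rostopic echo /face_detector/faces_cloud                        # stays silent when no faces are detected
$ rostopic hz /face_detector/people_tracker_measurements_array    # shows the publish rate, if anything is coming through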

Here's the .launch file I'm using to run the node. I run openni_launch before this:

<launch>

  <arg name="camera" default="camera" />
  <arg name="image_topic" default="image_rect_color" />
  <arg name="depth_topic" default="image_rect_raw" />
  <arg name="fixed_frame" default="camera_rgb_optical_frame" />
  <arg name="depth_ns" default="depth_registered" />

  <!--include file="$(find openni_launch)/launch/openni.launch"/-->
  <node name="$(anon dynparam)" pkg="dynamic_reconfigure" type="dynparam" args="set_from_parameters $(arg camera)/driver">
    <param name="depth_registration" type="bool" value="true" />
  </node>

 <node pkg="face_detector" type="face_detector" name="face_detector" output="screen">
    <remap from="camera" to="$(arg camera)" />
    <remap from="image_topic" to="$(arg image_topic)" />
    <remap from="depth_topic" to="$(arg depth_topic)" />
    <remap from="depth_ns" to="$(arg depth_ns)" />
    <param name="fixed_frame" type="string" value="$(arg fixed_frame)" />
    <param name="classifier_name" type="string" value="frontalface" />
    <param name="classifier_filename" type="string" value="$(env ROS_ROOT)/../OpenCV/haarcascades/haarcascade_frontalface_alt.xml" />
    <param name="classifier_reliability" type="double" value="0.9"/>
    <param name="do_continuous" type="bool" value="true" />
    <param name="do_publish_faces_of_unknown_size" type="bool" value="true" />  
    <param name="do_display" type="bool" value="true" />
    <param name="face_size_min_m" type="double" value="0.05" />
    <param name="face_size_max_m" type="double" value="0.8" />
    <param name="max_face_z_m" type="double" value="8.0" />
    <param name="use_rgbd" type="bool" value="true" />
    <param name="approximate_sync" type="bool" value="true" />

  </node>

</launch>

EDIT: Thanks to Dan Lazewatsky's suggestion, I wrote a sample node that subscribes to the topics /face_detector/faces_cloud and /face_detector/people_tracker_measurements_array. However, it is still not detecting faces. Here's my code:

#include <ros/ros.h>
#include <cv_bridge/cv_bridge.h>
#include <opencv/cv.h>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/objdetect/objdetect.hpp>
#include <people_msgs/PositionMeasurementArray.h>
#include <sensor_msgs/PointCloud.h>
#include <sensor_msgs/image_encodings.h>

// Fires whenever face_detector publishes a cloud of detected face positions.
void face_cloud_Callback(const sensor_msgs::PointCloud& a)
{
  ROS_INFO("I received some point cloud info!");
}

// Fires whenever face_detector publishes position measurements for detected faces.
void pos_meas_Callback(const people_msgs::PositionMeasurementArray& a)
{
  ROS_INFO("I received some position info!");
}

int main(int argc, char **argv)
{
  ros::init(argc, argv, "faces");
  ros::NodeHandle nh;

  // Subscribing to these topics also gives face_detector the subscriber it needs
  // before it starts publishing detections.
  ros::Subscriber sub = nh.subscribe("/face_detector/faces_cloud", 1000, face_cloud_Callback);
  ros::Subscriber sub2 = nh.subscribe("/face_detector/people_tracker_measurements_array", 1000, pos_meas_Callback);
  ros::spin();
  return 0;
}
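Once this node is running, I believe the subscriptions can be confirmed from the command line; something like this should list /face_detector as a publisher and this node (/faces) as a subscriber:

$ rostopic info /face_detector/faces_cloud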

2 Answers


answered 2015-06-07 10:17:44 -0500 by Dan Lazewatsky, updated 2015-06-07 12:29:53 -0500

The face detector won't do anything unless something is subscribing to its output on face_detector/people_tracker_measurements or face_detector/faces_cloud, and if no faces are detected, no messages will be published (rather than publishing messages with zero faces in them). Additionally, /face_detector/status is part of the action interface, so I wouldn't trust anything published on that topic when running in continuous mode.
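One quick way to give it a subscriber without writing any code is to echo one of its output topics from the command line, for example (assuming the default topic names):

$ rostopic echo /face_detector/faces_cloud

While that command is running, the detector has a subscriber and should start processing frames.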

Edit: more suggestions.

Check to make sure face_detector is subscribing to the topics your camera is actually producing. Take a look at the output of rosnode info /face_detector. In the subscriptions section, you should see something like:

Subscriptions: 
 * /camera/depth_registered/image_rect [sensor_msgs/Image]
 * /tf [tf/tfMessage]
 * /camera/rgb/image_rect [sensor_msgs/Image]
 * /camera/rgb/camera_info [sensor_msgs/CameraInfo]
 * /camera/depth_registered/camera_info [sensor_msgs/CameraInfo]

If any of the topics have [unknown type] next to them, the subscription isn't working (possibly because the topic name is wrong). If you see any of those, check which topics your camera is actually publishing data on.
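For example, assuming the default openni_launch namespaces, something like this will show which camera topics exist and whether images are actually flowing on them:

$ rostopic list | grep camera
$ rostopic hz /camera/rgb/image_rect                  # should report a steady frame rate if the driver is publishing
$ rostopic hz /camera/depth_registered/image_rect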


Comments

Thanks for your answer. I followed your suggestion and wrote a node that subscribes to both topics. However, it's still not detecting faces (the same faces are detected by the plain OpenCV face detector, so I don't think the faces themselves are the problem). I edited my question with the new info. Thank you for your support.

Pinchi (2015-06-07 10:51:38 -0500)

I've updated my answer with more suggestions.

Dan Lazewatsky (2015-06-07 12:30:07 -0500)

answered 2015-09-01 16:50:11 -0500 by 130s, updated 2015-09-01 16:55:20 -0500

In addition to @Dan Lazewatsky's thorough answer, I have summarized what I found out about some of the argument names in the launch files on the package's wiki page. I had to correct the topic names in the args of my launch file.

For the record, here is what works for me with the camera output from roslaunch turtlebot_gazebo turtlebot_world.launch (I used this human model in Gazebo).

<launch>

  <arg name="camera" default="camera" />
  <arg name="depth_ns" default="depth" />
  <arg name="image_topic" default="image_raw" />
  <arg name="depth_topic" default="image_raw" />
  <arg name="fixed_frame" default="/camera_rgb_frame" />

  <!--include file="$(find openni_launch)/launch/openni.launch"/-->
  <node name="$(anon dynparam)" pkg="dynamic_reconfigure" type="dynparam" args="set_from_parameters $(arg camera)/driver">
    <param name="depth_registration" type="bool" value="true" />
  </node>

 <node pkg="face_detector" type="face_detector" name="face_detector" output="screen">
    <remap from="camera" to="$(arg camera)" />
    <remap from="image_topic" to="$(arg image_topic)" />
    <remap from="depth_topic" to="$(arg depth_topic)" />
    <remap from="depth_ns" to="$(arg depth_ns)" />
    <param name="fixed_frame" type="string" value="$(arg fixed_frame)" />
    <param name="classifier_name" type="string" value="frontalface" />
    <rosparam command="load" file="$(find face_detector)/param/classifier.yaml"/>
    <param name="classifier_reliability" type="double" value="0.9"/>
    <param name="do_continuous" type="bool" value="true" />
    <param name="do_publish_faces_of_unknown_size" type="bool" value="false" /> 
    <param name="do_display" type="bool" value="false" />
    <param name="use_rgbd" type="bool" value="true" />
    <param name="approximate_sync" type="bool" value="true" />

  </node>

</launch>

Topics I was getting from the TurtleBot Gazebo simulation:

$ rosnode info /face_detector
--------------------------------------------------------------------------------
Node [/face_detector]
Publications: 
 * /face_detector/result [face_detector/FaceDetectorActionResult]
 * /face_detector/status [actionlib_msgs/GoalStatusArray]
 * /face_detector/faces_cloud [sensor_msgs/PointCloud]
 * /face_detector/people_tracker_measurements_array [people_msgs/PositionMeasurementArray]
 * /rosout [rosgraph_msgs/Log]
 * /face_detector/feedback [face_detector/FaceDetectorActionFeedback]

Subscriptions: 
 * /camera/rgb/image_raw [sensor_msgs/Image]
 * /tf [tf2_msgs/TFMessage]
 * /camera/depth/camera_info [sensor_msgs/CameraInfo]
 * /tf_static [unknown type]
 * /face_detector/cancel [unknown type]
 * /camera/rgb/camera_info [sensor_msgs/CameraInfo]
 * /face_detector/goal [unknown type]
 * /camera/depth/image_raw [sensor_msgs/Image]
 * /clock [rosgraph_msgs/Clock]


Result:

$ rostopic echo /face_detector/people_tracker_measurements_array 
header: 
  seq: 202
  stamp: 
    secs: 3317
    nsecs: 190000000
  frame_id: camera_depth_optical_frame
people: 
  - 
    header: 
      seq: 0
      stamp: 
        secs: 3317
        nsecs: 190000000
      frame_id: camera_depth_optical_frame
    name: frontalface
    object_id: 3
    pos: 
      x: -0.815850454715
      y: -1.44855080735
      z: 3.6913383007
    reliability: 0.9
    covariance: [0.04, 0.0, 0.0, 0.0, 0.04, 0.0, 0.0, 0.0, 0.04]
    initialization: 1
cooccurrence: []

Comments

How did you know what to write in the launch file? I'm using a Raspberry Pi camera. Can you help?

Nelle (2018-08-15 01:48:52 -0500)

@Nelle Please open a new question; the comment section is not meant for asking new questions.

130s (2018-08-15 23:52:35 -0500)
