
face_detector not detecting faces

asked 2019-03-20 12:07:32 -0500

ZeroSan

updated 2019-03-21 12:36:42 -0500

I have successfully built the face_detector package with catkin_make and fixed the launch file's depth and RGB topics based on the Face Detector Example Usage.

Besides this, I made a face_detection_behavior node which subscribes to the topics /face_detector/people_tracker_measurements_array and /face_detector/faces_cloud.

rostopic info /face_detector/people_tracker_measurements_array 
Type: people_msgs/PositionMeasurementArray

Publishers: 
 * /face_detector (http://192.168.31.164:43851/)

Subscribers: 
 * /face_detection_behavior (http://192.168.31.164:45399/)

rostopic info /face_detector/faces_cloud
Type: sensor_msgs/PointCloud

Publishers: 
 * /face_detector (http://192.168.31.164:43851/)

Subscribers: 
 * /face_detection_behavior (http://192.168.31.164:45399/)

However, no faces can be detected.

rostopic echo /face_detector/faces_cloud
rostopic echo /face_detector/people_tracker_measurements_array

Neither of the above commands returns any messages.

This is my face_detector launch file:

<launch>

  <arg name="camera" default="camera" />
  <arg name="depth_ns" default="depth_registered" />
  <arg name="rgb_ns" default="rgb" />
  <arg name="image_topic" default="image_rect_color" />
  <arg name="depth_topic" default="image_rect_raw" />
  <arg name="fixed_frame" default="/camera_rgb_optical_frame" />

  <!--include file="$(find openni_launch)/launch/openni.launch"/-->
  <node name="$(anon dynparam)" pkg="dynamic_reconfigure" type="dynparam" args="set_from_parameters $(arg camera)/driver">
    <param name="depth_registration" type="bool" value="true" />
  </node>

  <node pkg="face_detector" type="face_detector" name="face_detector" output="screen">
    <remap from="camera" to="$(arg camera)" />
    <remap from="image_topic" to="$(arg image_topic)" />
    <remap from="depth_topic" to="/hw_registered/$(arg depth_topic)" />
    <remap from="depth_ns" to="$(arg depth_ns)" />
    <remap from="rgb_ns" to="$(arg rgb_ns)" />
    <param name="fixed_frame" type="string" value="$(arg fixed_frame)" />
    <param name="classifier_name" type="string" value="frontalface" />
    <rosparam command="load" file="$(find face_detector)/param/classifier.yaml"/>
    <param name="classifier_reliability" type="double" value="0.9"/>
    <param name="do_continuous" type="bool" value="true" />
    <param name="do_publish_faces_of_unknown_size" type="bool" value="true" />
    <param name="do_display" type="bool" value="true" />
    <param name="use_rgbd" type="bool" value="true" />
    <param name="approximate_sync" type="bool" value="true" />
  </node>

</launch>

Here is what I have checked. The face_detector node:

rosnode info /face_detector
--------------------------------------------------------------------------------
Node [/face_detector]
Publications: 
 * /face_detector/faces_cloud [sensor_msgs/PointCloud]
 * /face_detector/feedback [face_detector/FaceDetectorActionFeedback]
 * /face_detector/people_tracker_measurements_array [people_msgs/PositionMeasurementArray]
 * /face_detector/result [face_detector/FaceDetectorActionResult]
 * /face_detector/status [actionlib_msgs/GoalStatusArray]
 * /rosout [rosgraph_msgs/Log]

Subscriptions: 
 * /camera/depth_registered/camera_info [sensor_msgs/CameraInfo]
 * /camera/depth_registered/hw_registered/image_rect_raw [sensor_msgs/Image]
 * /camera/rgb/camera_info [sensor_msgs/CameraInfo]
 * /camera/rgb/image_rect_color [sensor_msgs/Image]
 * /face_detector/cancel [unknown type]
 * /face_detector/goal [unknown type]
 * /tf [tf2_msgs/TFMessage]
 * /tf_static [unknown type]

Services: 
 * /face_detector/get_loggers
 * /face_detector/set_logger_level

The camera depth image_rect_raw topic:

rostopic info /camera/depth_registered/hw_registered/image_rect_raw
Type: sensor_msgs/Image

Publishers: 
 * /camera/camera_nodelet_manager (http://192.168.31.135:38017/)

Subscribers: 
 * /face_detector (http://192.168.31.164:43851/)

The camera rgb image_rect_color topic:

rostopic info /camera/rgb/image_rect_color
Type: sensor_msgs/Image

Publishers: 
 * /camera/camera_nodelet_manager (http://192.168.31.135:38017/)

Subscribers: 
 * /face_detector (http://192.168.31.164:43851/)

The cascade file I'm using is the frontal-face one from the Haar cascades collection. What could be the problem here?

Edit: I am using a Kinect with the openni_launch package.

Edit 2: I could not find the cause of the problem, but I have manually applied a Haar cascade using the RGB and depth topics published from ...


1 Answer


answered 2019-03-26 09:56:01 -0500

ZeroSan

updated 2019-08-26 06:03:13 -0500

I couldn't find the root cause of the problem. However, I created my own face detector ROS package that uses a Haar cascade and CvBridge.

This approach works for my use case.


Edit: Solution

I created a node that subscribes to /camera/rgb/image_raw and /camera/depth/image_raw.

The ROI of the detected face can be obtained by applying a Haar cascade to the Kinect's RGB image channel. The same ROI is then used to crop the depth image of the detected face, and the cropped depth image is used to compute the 3D point cloud (x, y, z) of the face.

RGB image callback:

 1. Convert the RGB image message to a cv2 image.
 2. Apply a Haar cascade to the converted cv2 image to obtain the ROI of the detected face.

Since you now know the ROI of the face, you can apply the same ROI to the depth image channel.

Depth image callback:

 1. Convert the depth image message to a cv2 image.
 2. Apply the ROI of the detected face to the converted cv2 image.
 3. Convert the cropped depth image to 3D point cloud data.
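Steps 1 and 2 amount to cropping the depth array with the RGB ROI. Here is a rough sketch of a helper I use (my own naming, not a library function), assuming the depth image has already been converted to a float array in metres and that the RGB and depth images are registered, i.e. pixel-aligned:

```python
import numpy as np

def depth_in_roi(depth_image, roi):
    """Crop a depth array with an (x, y, w, h) ROI and return the median
    valid depth in metres, or None if the ROI contains no valid pixels."""
    x, y, w, h = roi
    patch = depth_image[y:y + h, x:x + w]
    # Kinect depth images contain NaN / zero where no depth was measured.
    valid = patch[np.isfinite(patch) & (patch > 0.0)]
    if valid.size == 0:
        return None
    return float(np.median(valid))
```

Taking the median over the ROI rather than a single centre pixel makes the face depth robust to dropouts at the face boundary.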

Use this CV Bridge guide to convert raw image message data to a cv2 image.

As for converting the depth image to a point cloud, I referred to this Conference Paper. The multiplier constants M_x and M_y were not given in the paper, but I found them some time ago on a forum:

Multiplier of X coordinate (M_x) = 1.12032
Multiplier of Y coordinate (M_y) = 0.84024
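For what it's worth, a common way such multipliers are used for Kinect back-projection is to scale the normalised pixel coordinates by depth. The formula below is my assumption of how M_x and M_y are applied, not the paper's exact equation, so verify it against the paper before relying on it:

```python
def pixel_to_3d(u, v, z, width=640, height=480, m_x=1.12032, m_y=0.84024):
    """Back-project pixel (u, v) with depth z (metres) to a 3D point.

    Assumed model (check against the paper): the multipliers scale the
    normalised image coordinates, i.e. x = (u/width - 0.5) * m_x * z.
    """
    x = (u / width - 0.5) * m_x * z
    y = (v / height - 0.5) * m_y * z
    return (x, y, z)
```

Under this model the image centre maps to x = y = 0, and the ratio M_x/M_y = 4/3 matches the Kinect's 640x480 aspect ratio.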

If successful, you should get three coordinates (x, y, z) per detected face, which together make up the point cloud of the face.


Comments

Hi, can you share the package (or at least a small guide)? Thanks!

SometimesButMaybe ( 2019-08-25 05:31:01 -0500 )

@SometimesButMaybe: I edited my answer.

ZeroSan ( 2019-08-26 06:05:52 -0500 )
