I am really sorry for answering a question with another (bunch of) question(s), but I think we need some more info.

First, you state that your classroom is square, but the map captured by your Asus is not. Is that right?

  • How big is the room? What are its dimensions?
  • How far, on average, is the robot from the walls when it captures and processes the readings? Remember that Kinect-like devices have a limited operating range (roughly 1.2 m to 3.5 m) and that the farther the object, the greater the error in your reading (see the sketch after this list).
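
To get a feel for how fast that error grows, here is a minimal back-of-the-envelope sketch. It assumes the commonly cited quadratic random-error model for Kinect-class structured-light sensors, where the depth standard deviation grows roughly with the square of the range; the coefficient below is an assumption taken from published Kinect studies, not a calibrated value for your Asus.

```python
# Rough random-error estimate for a structured-light depth sensor.
# Assumption: sigma_z ~ K * z^2, with K borrowed from Kinect
# literature (~2.85e-3 1/m). Your device will differ; calibrate it.

K = 2.85e-3  # error coefficient in 1/m (assumed, not measured)

def depth_stddev(z_m):
    """Approximate depth standard deviation (m) at range z_m (m)."""
    return K * z_m ** 2

for z in (1.2, 2.0, 3.0, 3.5, 5.0):
    print(f"range {z:4.1f} m -> sigma ~ {depth_stddev(z) * 100:5.2f} cm")
```

Under this model the error is a few millimeters at the near end of the range but several centimeters at the far end, which is more than enough to bend a straight wall in your map.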

Now, from articles around the net, the Kinect technology uses IR "dots" that are projected by the IR emitter and read back by the IR camera: small dots mean the object is close, large dots mean it is far. BUT this is based on light reflection, so the color, texture, and material of the wall can interfere a lot. Can you imagine the "damage" a cork wall could do to your readings? Or matte black paint, for that matter. High-gloss paint can be a disaster as well.
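
For intuition, such sensors recover depth essentially by triangulation between the IR projector and the IR camera, from the apparent shift (disparity) of the dot pattern. A minimal sketch of that relation, with made-up focal length and baseline values (the real Kinect/Xtion intrinsics differ), shows why a surface that corrupts the dots hurts much more at long range:

```python
# Triangulation intuition: z = f * b / d
# f: focal length in pixels, b: projector-camera baseline in meters,
# d: measured disparity in pixels. Values below are assumptions.

F_PX = 580.0   # focal length in pixels (assumed)
B_M = 0.075    # baseline in meters (assumed, Kinect-class)

def depth_from_disparity(d_px):
    """Depth in meters from dot-pattern disparity in pixels."""
    return F_PX * B_M / d_px

# A one-pixel disparity error matters far more at long range:
for d in (40.0, 20.0, 10.0):
    z = depth_from_disparity(d)
    z_err = depth_from_disparity(d - 1.0) - z
    print(f"disparity {d:4.1f} px -> z ~ {z:4.2f} m, "
          f"+1 px error -> +{z_err * 100:4.1f} cm")
```

A wall that scatters or absorbs the IR pattern badly (cork, matte black, high gloss) degrades exactly that disparity measurement.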

Now, it all depends on your implementation, of course.

It could also be odometry. How do you collect your odometry data? On a wheel? If you have a robot with three wheels, two of them motor-driven, but you read your odometry from only one (motor-driven) wheel, then whenever you drive a curve, one wheel travels a shorter distance than the other. Additionally, there is wheel skidding, plus encoder-reading errors and rounding errors in the math, which can have a snowball effect on your readings.
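
To make that concrete, here is a minimal dead-reckoning sketch for a differential-drive base (standard kinematics, not any specific ROS package; the wheel-separation and travel values are made up). Notice how the single-encoder estimate sees no heading change at all during a turn, while the two-encoder version does:

```python
import math

# Differential-drive dead reckoning (standard kinematics sketch).
# dl, dr: left/right wheel travel in meters per step.
TRACK = 0.30  # wheel separation in meters (assumed; measure yours)

def step(x, y, th, dl, dr):
    """Advance pose (x, y, th) by one encoder step."""
    d = (dl + dr) / 2.0       # distance traveled by the base center
    dth = (dr - dl) / TRACK   # heading change
    x += d * math.cos(th + dth / 2.0)
    y += d * math.sin(th + dth / 2.0)
    return x, y, th + dth

# The robot drives a curve: the right wheel travels farther.
pose_two = (0.0, 0.0, 0.0)  # using both encoders
pose_one = (0.0, 0.0, 0.0)  # using only the right encoder for both wheels
for _ in range(100):
    pose_two = step(*pose_two, dl=0.009, dr=0.011)
    pose_one = step(*pose_one, dl=0.011, dr=0.011)

print("two encoders:", pose_two)  # heading has rotated ~38 degrees
print("one encoder :", pose_one)  # thinks it drove perfectly straight
```

If your mapper is fed the one-encoder pose, every turn gets baked into the map as a distortion, which would square-ness of a room pretty quickly.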

So, here you have a nice set of theories. Let's see if they help.
