
parzival's profile - activity

2023-08-10 15:59:19 -0500 received badge  Nice Answer (source)
2023-04-09 05:20:42 -0500 received badge  Famous Question (source)
2023-04-09 05:20:42 -0500 received badge  Popular Question (source)
2023-04-09 05:20:42 -0500 received badge  Notable Question (source)
2022-11-03 12:07:00 -0500 marked best answer What is the difference between min_vel_x, min_trans_vel and trans_stopped_vel?

This might be a stupid question, but in the DWA planner I've seen a group of three parameters which all seem to indicate the same thing to me.

My understanding: say the minimum velocity in the x direction is 0.1 m/s. Then, by definition, that is also the minimum translational velocity. For trans_stopped_vel, when I hover over the parameter in rqt_reconfigure, I see a definition along the lines of "minimum velocity below which the robot is assumed to be not translating", which again seems to say the same thing.

If these three parameters really do mean the same thing, it would be a waste to set the same value three times, so I'm assuming there is a good reason they are not the same. Please enlighten me if that is the case and explain what each of them does.

Secondly, if I want the robot to move forward whenever possible, back off when it cannot move forward, and otherwise not prefer moving backwards, what values should I set these three parameters to?

Related third question: is the min_in_place_vel_theta parameter supported in Kinetic and later? I've seen some people mention it in previous answers, but I haven't noticed any difference by including/excluding it. And again, how are min_in_place_vel_theta, rot_stopped_vel and min_rot_vel different?

Similar groups of seemingly equivalent parameters also appear for the minimum rotational velocity and the maximum velocities.

I'm happy to read the DWA paper if more background is needed, and I'd appreciate pointers to literature for an in-depth understanding, but I also think a short summary would help me and others get this working while keeping the DWA algorithm itself abstract, because I think that's the true advantage of ROS.

Robot details: a non-holonomic, two-wheeled differential drive robot with DC geared motors (175 rpm, 8.4 kgf·cm).
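
(Not part of the original question: the following is purely an illustrative sketch for the second question above, using the Kinetic-era DWAPlannerROS parameter names and placeholder values loaded before move_base starts. The values are assumptions, not recommendations.)

#!/usr/bin/env python
import rospy

rospy.init_node("dwa_param_loader")

# Illustrative values only; parameter names follow the Kinetic-era dwa_local_planner.
# A small negative min_vel_x allows the planner to back off while still preferring
# forward motion (max_vel_x is much larger in magnitude).
rospy.set_param("/move_base/DWAPlannerROS", {
    "max_vel_x": 0.4,           # forward speed limit
    "min_vel_x": -0.1,          # allow slow reversing
    "min_trans_vel": 0.1,       # slowest translational speed the planner will command
    "trans_stopped_vel": 0.05,  # below this the robot is considered "stopped"
})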

2022-10-12 03:00:03 -0500 received badge  Good Answer (source)
2022-09-14 17:43:54 -0500 received badge  Famous Question (source)
2022-09-14 17:43:54 -0500 received badge  Notable Question (source)
2022-08-23 00:51:32 -0500 received badge  Nice Answer (source)
2022-06-16 09:56:47 -0500 commented question rosbag2 remap topic

For reference, here's how to do it in ROS 1: https://answers.ros.org/question/9248/how-do-you-remap-a-topic/?comment=4024

2022-06-07 23:49:36 -0500 marked best answer How to publish +Inf and -Inf in sensor_msgs/Range?

From the documentation, +Inf should be published if the sonar reading is above the max range, and similarly -Inf if it is below the min range. But how do I do that? The script only accepts float values.
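
(For reference, and not from the original thread: a minimal rospy sketch showing that Python's float('inf') and float('-inf') can be assigned to the float32 range field. The topic name and sensor limits are placeholders.)

#!/usr/bin/env python
import rospy
from sensor_msgs.msg import Range

rospy.init_node("sonar_publisher")
pub = rospy.Publisher("sonar", Range, queue_size=10)

msg = Range()
msg.radiation_type = Range.ULTRASOUND
msg.field_of_view = 0.5
msg.min_range = 0.02
msg.max_range = 4.0

reading = 5.0  # raw sensor value in metres (placeholder)
if reading > msg.max_range:
    msg.range = float('inf')    # too far: publish +Inf
elif reading < msg.min_range:
    msg.range = float('-inf')   # too close: publish -Inf
else:
    msg.range = reading

msg.header.stamp = rospy.Time.now()
pub.publish(msg)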

2022-06-05 20:02:06 -0500 received badge  Nice Question (source)
2022-05-17 08:20:50 -0500 marked best answer Issues with Arduino: Wrong checksum, Mismatched protocol version

I am trying to run a base controller on an Arduino Uno, which drives the motors by subscribing to topics for the left and right motor speeds (derived from the Twist messages) and also publishes encoder values to the topics /lwheel and /rwheel. When I connect the Arduino to my PC over rosserial, there is no problem and everything works fine. However, when I connect the Uno to my Raspberry Pi, which is connected to my PC, problems occur. I see the following messages and errors in the terminal running rosrun rosserial_python serial_node.py /dev/ttyACM0:

[INFO] [1577335551.127150]: Connecting to /dev/ttyACM0 at 57600 baud
[INFO] [1577335553.249219]: Requesting topics...
[INFO] [1577335553.290434]: Note: publish buffer size is 512 bytes
[INFO] [1577335553.298687]: Setup publisher on lwheel [std_msgs/Float32]
[INFO] [1577335553.321495]: Setup publisher on rwheel [std_msgs/Float32]
[INFO] [1577335553.348951]: Note: subscribe buffer size is 512 bytes
[INFO] [1577335553.357481]: Setup subscriber on left_wheel_speed [std_msgs/Float32]
[INFO] [1577335553.386113]: Setup subscriber on right_wheel_speed [std_msgs/Float32]
[INFO] [1577335556.177063]: wrong checksum for topic id and msg
[INFO] [1577335559.040959]: wrong checksum for topic id and msg
[ERROR] [1577335561.888282]: Mismatched protocol version in packet ('\x00'): lost sync or rosserial_python is from different ros release than the rosserial client
[INFO] [1577335561.896526]: Protocol version of client is unrecognized, expected Rev 1 (rosserial 0.5+)
[INFO] [1577335564.716741]: wrong checksum for topic id and msg
[ERROR] [1577335579.781605]: Lost sync with device, restarting...
[INFO] [1577335579.790889]: Requesting topics...
[INFO] [1577335579.826395]: Setup publisher on lwheel [std_msgs/Float32]
[INFO] [1577335579.839117]: Setup publisher on rwheel [std_msgs/Float32]
[INFO] [1577335582.842464]: wrong checksum for msg length, length 4
[INFO] [1577335582.850964]: chk is 0

These messages are repeated over and over. I also notice weird behavior in the robot itself. There is random latency between my key press and actual movement, and the longer I press the key, the longer it continues that movement "after" I release the key.

I thought it might be an issue with the Uno's dynamic memory size, so I switched to an Arduino Mega. It uses 28% of its dynamic memory, but I face the same issue with the Mega. I don't think it's a buffer issue either, because it works well when connected directly to the PC. I am running Ubuntu 16.04 with ROS Kinetic on the PC, and Ubuntu MATE 18.04 with ROS Melodic on a Raspberry Pi 3 Model B. Can that be the cause? If so, why isn't it causing problems when I have no publishers? (The Uno works perfectly well when the code just subscribes to the speed topics and runs the motors, even when connected to the Pi.)

Arduino Code:

#include <ros.h>
#include <std_msgs/Float32.h>
#include "Arduino.h"

ros::NodeHandle nh;
// Left encoder

int Left_Encoder_PinA = 2;
int Left_Encoder_PinB = 9;

volatile long Left_Encoder_Ticks = 0;

//Variable to read current state of left encoder pin
volatile bool LeftEncoderBSet;

//Right Encoder

int Right_Encoder_PinA = 3;
int Right_Encoder_PinB = 10;
volatile ...
(more)
2022-05-16 20:01:25 -0500 received badge  Famous Question (source)
2022-04-22 07:00:17 -0500 commented question when i start navigation every thing is active but my robot doesnt move i got this

There is a problem with your tf tree. This can also be due to limited hardware resources. Check out the navigation wiki for more…

2022-03-09 08:36:38 -0500 commented question How does TF determine the transforms between different sensors?

Have you gone through the tf tutorials?

2022-03-09 08:33:07 -0500 answered a question hector mapping tf issue: TF_NAN_INPUT and TF_DENORMALIZED_QUATERNION

The problem arose due to a low-power board. A Raspberry Pi 3B wasn't enough to handle the load of all the scripts we were…

2022-03-09 08:31:41 -0500 commented answer Can view topics/nodes but can't subscribe when using sytemd services

Thanks for taking the time to share what worked for you, but if you read the comments, I did try this early on but didn't s…

2021-12-13 01:23:01 -0500 received badge  Popular Question (source)
2021-11-26 02:29:26 -0500 received badge  Famous Question (source)
2021-10-25 12:59:15 -0500 received badge  Notable Question (source)
2021-10-19 03:14:30 -0500 asked a question Loading dynamic reconfigure params/ rosparam from a ROS node

Loading dynamic reconfigure params / rosparam from a ROS node: I want to change the move_base params (costmap inflation ra…
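
(The summary above is truncated; the following is only a sketch of one common way to change such parameters from a Python node, assuming the global costmap has an inflation layer named inflation_layer. The server path and value are assumptions.)

#!/usr/bin/env python
import rospy
from dynamic_reconfigure.client import Client

rospy.init_node("param_tuner")

# The server name depends on how the costmap layers are named in your configuration.
client = Client("/move_base/global_costmap/inflation_layer", timeout=5)
client.update_configuration({"inflation_radius": 0.6})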

2021-10-18 11:21:47 -0500 marked best answer Getting deeper into map (Occupancy Grid)

I'm developing a navigation algorithm, for which I need to find the corners and walls present in the room. I've been playing around with the TurtleBot simulation. I've made a very simple rectangular room of about 3.0 x 2.4 meters, and I've saved the map I generated using gmapping.

To find the corners and walls, I'm writing my own Python script using OpenCV functions like Harris corner detection, contours, and Canny. I'll feed it the map.pgm file and get back the pixels containing corners and walls.
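
(Not from the original post: a minimal sketch of the kind of OpenCV processing described above, assuming the map was saved by map_server as a grayscale PGM. Thresholds are placeholders that need tuning.)

import cv2
import numpy as np

# Load the saved occupancy grid image as grayscale.
img = cv2.imread("map.pgm", cv2.IMREAD_GRAYSCALE)

# Wall edges via Canny (illustrative thresholds).
edges = cv2.Canny(img, 50, 150)

# Harris corner response; pixels with a strong response are corner candidates.
harris = cv2.cornerHarris(np.float32(img), blockSize=2, ksize=3, k=0.04)
corner_pixels = np.argwhere(harris > 0.01 * harris.max())  # (row, col) pairs

print(len(corner_pixels), "corner candidates, e.g.", corner_pixels[:3])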

Now, I need to know where these are in relation to the bot. That's where the problem is.

I am not able to understand: where does the map set its origin?

And is it the same for every map file?

Does the robot's start pose, end pose, or trajectory during gmapping affect the map origin?

Also, if the origin isn't at the top-left corner, how can I compute the distance between a given pixel and my robot? My plan was to identify the pixel, find its location with respect to the map frame, and then find the map's origin with respect to base_footprint. That way I'd know the vector joining the pixel and base_footprint, and could drive to the point if I wanted to. I'm using the default 0.05 resolution, so each pixel should be 5 cm.

According to my observations, the map origin depends on the start pose of the robot during the mapping process. But if someone experienced can answer these questions, it would be a great help.

Also, if you think this isn't a good approach and have a better one, please let me know. Thanks!
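
(Not part of the original question: a sketch of the pixel-to-map conversion being described, assuming the standard map_server convention that the origin in the map YAML is the pose of the lower-left pixel of the image. The origin and resolution values are placeholders from a hypothetical map.yaml.)

import cv2

resolution = 0.05                # metres per pixel (from map.yaml)
origin_x, origin_y = -1.2, -2.3  # pose of the lower-left image pixel in the map frame

img = cv2.imread("map.pgm", cv2.IMREAD_GRAYSCALE)
height = img.shape[0]

def pixel_to_map(row, col):
    # Image row 0 is the top of the image, while the YAML origin refers to the
    # lower-left pixel, hence the vertical flip on the row index.
    x = origin_x + (col + 0.5) * resolution
    y = origin_y + (height - row - 0.5) * resolution
    return x, y

# The resulting point (in the map frame) can then be transformed to
# base_footprint with tf to get the vector from the robot to the pixel.
print(pixel_to_map(100, 150))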

As the following images show, the map origin isn't the same, and is not necessarily at the top left.

Map 1:

[image]

Map 2:

[image]

2021-10-18 11:21:17 -0500 marked best answer Why was PGM format selected to save map?

I was wondering why people at Willow Garage chose the .pgm format to save maps via map_server. I've seen a pull request to let users change the format to PNG, but I guess it wasn't accepted, and it isn't present in current versions (why?).

I am aware that the map server does load maps saved in PNG format, but I'm curious about the design decision to only let maps be saved as .pgm.

It would be great if someone familiar with the matter could answer!

(This is regarding ROS 1, as I'm only familiar with ROS 1. I would also like to know whether ROS 2 has or will have this feature.)

2021-09-16 05:05:46 -0500 commented question Doubt regarding the likelihood field in measurement model

Robotics Stack Exchange could be the right place to ask this question.

2021-09-16 05:01:47 -0500 commented question Robot jumps after the initial estimate with gmapping

Glad I could help :)

2021-09-14 07:38:28 -0500 received badge  Self-Learner (source)
2021-09-14 07:25:47 -0500 commented question Robot jumps after the initial estimate with gmapping

What is your odometry source?

2021-09-14 07:24:29 -0500 commented question How to use/activate move_slow_and_clear?

I have edited the wiki page to help others: http://wiki.ros.org/move_slow_and_clear#Implementation

2021-09-14 07:11:24 -0500 commented question How to use/activate move_slow_and_clear?

I asked some of the folks I know. Thanks for helping :)

2021-09-14 04:09:52 -0500 marked best answer How to use/activate move_slow_and_clear?

I want to use the move_slow_and_clear recovery behavior, but there is very limited documentation on it. Could someone help me with the setup required to activate or use this recovery behaviour?

2021-09-14 04:09:43 -0500 received badge  Rapid Responder (source)
2021-09-14 04:09:43 -0500 answered a question How to use/activate move_slow_and_clear?

To use move_slow_and_clear or any custom recovery behavior, one needs to either append the following lines to the move_base…
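
(The answer summary above is truncated; as an illustration only, and not the answer's actual lines: recovery behaviors are enabled through move_base's recovery_behaviors parameter, which could for example be set from Python before move_base starts. The plugin names/types below are the standard navigation ones; adjust them to your setup.)

import rospy

rospy.init_node("recovery_config")

# Must be set before move_base loads its parameters for it to take effect.
rospy.set_param("/move_base/recovery_behaviors", [
    {"name": "conservative_reset", "type": "clear_costmap_recovery/ClearCostmapRecovery"},
    {"name": "move_slow_and_clear", "type": "move_slow_and_clear/MoveSlowAndClear"},
])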

2021-09-14 00:23:08 -0500 commented question How to use/activate move_slow_and_clear?

These are parameters which can be adjusted, I know. But how can one enable this recovery behaviour? Moreover, it is not…

2021-09-13 12:18:40 -0500 commented answer Mouse Teleop TurtleSim

The package really needs to be better documented. I don't think it will be of any use to a beginner.

2021-09-13 12:10:06 -0500 commented question Robot jumps after the initial estimate with gmapping

What computer are you running gmapping on? Do you have a network setup, or are all the scripts run on a single computer?

2021-09-13 11:51:01 -0500 asked a question How to use/activate move_slow_and_clear?

How to use/activate move_slow_and_clear? I want to use the move_slow_and_clear recovery behavior but there is very limited d…

2021-08-25 08:06:19 -0500 received badge  Famous Question (source)
2021-07-30 08:11:35 -0500 received badge  Necromancer (source)
2021-07-11 03:56:24 -0500 received badge  Notable Question (source)
2021-07-11 03:56:24 -0500 received badge  Famous Question (source)
2021-06-15 09:59:43 -0500 received badge  Popular Question (source)
2021-06-15 00:54:34 -0500 received badge  Notable Question (source)
2021-06-15 00:54:34 -0500 received badge  Popular Question (source)
2021-06-13 14:59:14 -0500 received badge  Nice Answer (source)
2021-06-13 01:13:35 -0500 received badge  Rapid Responder (source)
2021-06-13 01:13:35 -0500 answered a question Will layer standoffs cause problem with YDLIDAR X4 for SLAM?

It should not be a problem. If it causes any issues, you can use laser_filters; more specifically, look here: http://wik