Atom's profile - activity

2021-05-20 02:49:03 -0600 received badge  Good Question (source)
2020-02-17 01:35:06 -0600 received badge  Popular Question (source)
2020-02-17 01:35:06 -0600 received badge  Famous Question (source)
2020-02-17 01:35:06 -0600 received badge  Notable Question (source)
2018-11-21 01:36:10 -0600 received badge  Good Answer (source)
2015-03-10 17:51:56 -0600 commented answer Ubuntu 14.04.2 unmet dependencies (similar for 14.04.3)

Thanks, this worked for me installing on a fresh 14.04.2 64-bit...

2015-01-29 07:23:17 -0600 received badge  Nice Answer (source)
2014-08-28 19:00:50 -0600 received badge  Nice Answer (source)
2014-01-28 17:26:58 -0600 marked best answer how to publish kinect laser scan 2d image in a browser

Hello. I am interested in publishing a kinect laser scan in a browser so it can be broadcast over the internet. Something close to a Player laser scan image or an rviz 2dnav image would be perfect. I am currently using WebUi to control my turtlebot, with buttons for cmd_vel and a webcam feed to view the environment. But I would also like to transmit the kinect 2dnav scan, like in rviz, to get an idea of where I might be in a saved map, with a view of the current laser scan in the browser. I was thinking about broadcasting a screen capture of the rviz window in place of the webcam image, but I hope someone here can point me to a better solution.
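One route is to convert each sensor_msgs/LaserScan into 2D points server-side and push them to the browser for drawing on a canvas. Here is a minimal sketch of the polar-to-Cartesian step using the standard LaserScan fields (the function itself is my own illustration, not part of WebUi):

```python
import math

def scan_to_points(ranges, angle_min, angle_increment, range_max=10.0):
    """Convert LaserScan ranges (polar) to (x, y) points in the laser
    frame, dropping readings outside the valid range."""
    points = []
    for i, r in enumerate(ranges):
        if 0.0 < r < range_max:
            angle = angle_min + i * angle_increment
            points.append((r * math.cos(angle), r * math.sin(angle)))
    return points

# Three beams: to the right, straight ahead, and to the left of the laser
pts = scan_to_points([1.0, 1.0, 1.0], -math.pi / 2, math.pi / 2)
```

Those (x, y) pairs are trivial to serialize as JSON and plot on an HTML canvas at whatever rate the browser can handle.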

2014-01-28 17:26:57 -0600 marked best answer Webui.. How to publish to turtlebot_node/cmd_vel ?

I'm trying to add a publish statement with gPump.publish in WebUi, but I'm having issues because of the message type. I can publish a string easily enough, but I'm having trouble understanding the message format for cmd_vel. I was also wondering whether gPump.publish keeps publishing until I send another command, or whether it stops when I require. My main goal is to make buttons in WebUi to control the turtlebot's drive. Any advice would be appreciated...
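geometry_msgs/Twist is just a pair of 3D vectors, so whatever object gets handed to gPump.publish has to carry all six fields. A minimal sketch of the message shape in Python (the helper is mine; only linear.x and angular.z matter for a differential-drive base like the turtlebot):

```python
def make_twist(linear_x, angular_z):
    """Build a dict shaped like geometry_msgs/Twist for a diff-drive base."""
    return {
        "linear":  {"x": linear_x, "y": 0.0, "z": 0.0},
        "angular": {"x": 0.0, "y": 0.0, "z": angular_z},
    }

forward = make_twist(0.2, 0.0)  # drive forward at 0.2 m/s
spin    = make_twist(0.0, 0.5)  # rotate in place at 0.5 rad/s
stop    = make_twist(0.0, 0.0)  # explicit stop command
```

On the repeat-publish question: most ROS base drivers expect cmd_vel to be republished continuously (often around 10 Hz) and stop the motors when messages cease, so a button would typically start a publish timer rather than send a single message.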

2014-01-28 17:24:48 -0600 marked best answer How to Adjust Turtlebots odometry manually ?

I would like to adjust the encoder information coming from my turtlebot so that it is accurate for building maps in gmapping, etc. I'm trying to build a better robot base without reinventing the wheel, as I am very pleased with the turtlebot's performance overall. Basically, I am fitting custom wheels (5X larger) and motors to a larger robot base, but I would still like to use the Create with its encoders and base sensors (the Create base stays onboard the robot). I would attach the Create's encoders to the new shafts, but instead of using gears to tune the encoders to turtlebot specs, I figured it would be easier to adjust the code. Is this possible manually, and if so, where should I start playing with the numbers in turtlebot_node?

self.odom_angular_scale_correction = rospy.get_param('~odom_angular_scale_correction', 1.0)
self.odom_linear_scale_correction = rospy.get_param('~odom_linear_scale_correction', 1.0)

Also, might turtlebot_calibration theoretically work for this task as well, along with some manual fine-tuning of the code? I would like to understand this a bit better before breaking down the Create base. Thanks for viewing.

I thought this might be a simple hardware hack to get bigger motors and wheels to carry larger payloads. I know the Create's encoders might not be the most accurate, but I have found them reliable for my needs. Maybe later I could try writing new code for a more advanced base.
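Since the new wheels are 5X the diameter, each encoder tick covers roughly five times the distance, so the linear correction would start near 5.0 while the angular correction depends on the new track width. A minimal sketch of how the two parameters scale raw odometry (the helper name and the starting values 5.0/1.0 are my assumptions, not turtlebot_node internals):

```python
def correct_odometry(raw_distance, raw_angle,
                     linear_scale=5.0,    # assumed: 5X wheel diameter -> 5X distance per tick
                     angular_scale=1.0):  # tune against turtlebot_calibration results
    """Apply the same style of correction that turtlebot_node's
    ~odom_linear_scale_correction / ~odom_angular_scale_correction do."""
    return raw_distance * linear_scale, raw_angle * angular_scale

# A motion the stock firmware reports as 0.1 m really covered about 0.5 m:
dist, ang = correct_odometry(0.1, 0.2)
```

In practice you would set rough values from the wheel geometry, then refine them by driving a known distance and rotation and comparing the reported odometry.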

2014-01-28 17:23:32 -0600 marked best answer Can we add wifi signal to turtlebot_dashboard...

I would like to monitor the wifi signal of the asus 1215n laptop on my turtlebot from the dashboard node. Is anyone else interested in this option? If no one is willing to work on it, where should I start? Maybe there is some code for the PR2 I could look at to point me in the right direction. Thanks in advance :)
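On Linux the laptop's link quality is readable without extra dependencies from /proc/net/wireless, so a small diagnostics publisher could feed the dashboard. A sketch of the parsing step (the file format is the standard kernel layout; the function is my own, not existing dashboard code):

```python
def parse_wireless(proc_text, iface="wlan0"):
    """Pull (link quality, signal level) for iface out of /proc/net/wireless text."""
    for line in proc_text.splitlines():
        line = line.strip()
        if line.startswith(iface + ":"):
            fields = line.split()
            # fields: iface, status, link quality, signal level, noise level, ...
            return float(fields[2].rstrip(".")), float(fields[3].rstrip("."))
    return None  # interface not found

sample = (
    "Inter-| sta-|   Quality        |   Discarded packets\n"
    " face | tus | link level noise |  nwid  crypt   frag\n"
    " wlan0: 0000   54.  -56.  -256        0      0      0\n"
)
quality, level = parse_wireless(sample)
```

Wrapping this in a node that publishes a diagnostics message would let the dashboard display it alongside battery state.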

2014-01-28 17:23:17 -0600 marked best answer Using Webui...

I have recently been trying to interface the turtlebot with the WebUi packages to create some telepresence over an internet web browser. I was able to get everything installed and working, but I can only launch apps under localhost, not through my external IP. I believe my network is configured properly with port forwarding, since I can load the GUI from the internet and read some information about roscore, but I can only see nodes and topics under localhost. Is this normal operation for WebUi? I understand the stack may be deprecated or no longer under development, but I was hoping to get some help on this issue. My main goal is a web interface similar to RoboRealm's, where I can control the turtlebot and broadcast video over the internet. Any ideas appreciated...

2014-01-28 17:23:06 -0600 marked best answer How to adjust speed of turtlebot in Gmapping?

I was looking through some of the launch and node files and wasn't sure where to begin with adjusting the turtlebot's speed when executing a goal in gmapping or amcl. I want the robot to move a bit slower to clean up some of the jerkiness. I should also mention that I am using Diamondback on 10.10 with updated and basically stock turtlebot files (except for adjusting the teleop packages to a slower speed). I am assuming gmapping handles speed differently than teleop does. Am I wrong? Any advice appreciated.
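The speed when driving to a goal is not set by gmapping or amcl themselves: the driving is done by move_base, and its velocity limits live in the local planner parameters. A hedged example of the kind of entries to lower in base_local_planner_params.yaml (the parameter names are from base_local_planner's TrajectoryPlannerROS; the values here are illustrative, not the turtlebot defaults):

```yaml
TrajectoryPlannerROS:
  max_vel_x: 0.2            # cap forward speed (m/s)
  max_vel_theta: 0.5        # cap rotation speed (rad/s)
  min_in_place_vel_theta: 0.3
  acc_lim_x: 0.5            # lower accelerations also reduce jerkiness
  acc_lim_theta: 1.0
```

Teleop speed is separate, which is why slowing the teleop packages had no effect on autonomous goals.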

2013-11-10 05:07:14 -0600 received badge  Favorite Question (source)
2013-11-04 06:02:08 -0600 marked best answer How to use Simple_navigation_goals ?

I am trying to figure out how to read the navigation position of the turtlebot in rviz (on a saved or new map) and translate that info into Simple_navigation_goals.cpp. Does the cpp file send a goal the way rviz does in gmapping, or does it just tell the robot to move forward, then move left, etc.? How can I get the turtlebot to drive out a square? And how can I get the turtlebot to go to a fixed location? Thanks --Atom--
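For what it's worth, simple_navigation_goals sends the same kind of goal that rviz's 2D Nav Goal button does: a position plus a quaternion heading in a fixed frame, which move_base then plans a path to; it is not "move forward, then left" dead reckoning. Driving a square means sending the four corner poses in turn and waiting for each result. A sketch of computing those poses in pure Python (yaw_to_quaternion is the standard planar conversion; the waypoint helper is my own illustration):

```python
import math

def yaw_to_quaternion(yaw):
    """Planar heading -> (x, y, z, w) quaternion, as move_base goals expect."""
    return (0.0, 0.0, math.sin(yaw / 2.0), math.cos(yaw / 2.0))

def square_waypoints(x0, y0, side):
    """Corner poses of a square: (x, y, quaternion facing the next corner)."""
    corners = [(x0, y0, 0.0),
               (x0 + side, y0, math.pi / 2),
               (x0 + side, y0 + side, math.pi),
               (x0, y0 + side, -math.pi / 2)]
    return [(x, y, yaw_to_quaternion(yaw)) for x, y, yaw in corners]

goals = square_waypoints(0.0, 0.0, 1.0)  # feed each pose to a goal in turn
```

A fixed location works the same way: read the pose off the map in rviz (or echo the goal topic while clicking 2D Nav Goal) and hard-code that position and heading.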

2013-07-15 06:02:04 -0600 received badge  Taxonomist
2013-04-18 13:49:33 -0600 received badge  Famous Question (source)
2013-02-15 07:16:02 -0600 received badge  Notable Question (source)
2013-01-16 22:37:48 -0600 received badge  Popular Question (source)
2013-01-03 22:03:53 -0600 received badge  Critic (source)
2012-12-20 10:42:42 -0600 asked a question Is multiple cameras views for mjpeg_server possible?

I am trying to set up multiple camera views for tele-operation under mjpeg_server. I would like to run up to 4 cameras on one machine simultaneously and output them to one webpage. Or possibly use uvc_camera to feed the cameras into ROS and use rosservice to select which camera outputs to mjpeg_server. Is anyone running this type of setup or something similar? Any advice would be appreciated. I have also tried the options under gscam, but it only allows 2 camera views in the same window, and I was not able to get it working either.
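Since mjpeg_server exposes each image topic as its own stream URL, one webpage with four img tags pointing at the server covers the four-camera case without any rosservice switching. A sketch of building those URLs (the /stream?topic= layout is mjpeg_server's default endpoint as far as I know; the host and topic names here are placeholders):

```python
def stream_urls(host, topics, port=8080):
    """One URL per image topic, as served by mjpeg_server's default endpoint."""
    return ["http://%s:%d/stream?topic=%s" % (host, port, t) for t in topics]

urls = stream_urls("robot.local", ["/cam0/image_raw", "/cam1/image_raw",
                                   "/cam2/image_raw", "/cam3/image_raw"])
```

Each URL would go into its own img tag; the practical limit is USB bandwidth on the machine running the four cameras, not the server.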

2012-12-13 09:51:49 -0600 commented answer New to ROS, want to build a robot =)

Since you are new to ROS, I would recommend starting with something like the Turtlebot. It has a fair amount of documentation and servo arm upgrades, and writing code for a custom base can become quite complex for a beginner.

2012-12-12 15:59:25 -0600 answered a question New to ROS, want to build a robot =)

I think even the Atom D525 is going to be slightly underpowered when it comes to mapping and controlling arm navigation, so the Raspberry Pi is definitely not going to be suitable for the tasks that Maxwell is capable of.

2012-11-22 23:38:09 -0600 received badge  Popular Question (source)
2012-11-22 23:38:09 -0600 received badge  Notable Question (source)
2012-11-22 23:38:09 -0600 received badge  Famous Question (source)