
Aslund's profile - activity

2020-06-19 18:29:23 -0500 received badge  Enlightened (source)
2020-06-19 18:29:23 -0500 received badge  Good Answer (source)
2019-08-31 03:57:03 -0500 received badge  Nice Answer (source)
2017-07-11 08:03:46 -0500 received badge  Nice Question (source)
2015-04-07 00:37:22 -0500 received badge  Famous Question (source)
2015-01-18 08:20:15 -0500 received badge  Nice Answer (source)
2014-04-20 06:47:20 -0500 marked best answer Converting Kinect RGB image to OpenCV gives wrong colours

Hey everyone

I have been playing a bit with the Kinect camera using the openni driver. When running openni_node.launch and viewing the RGB image in rviz, everything looks perfect, but when I create a node whose purpose is to read the RGB image, convert it to a cv image and display it with imshow, the colours are all wrong: red is blue, blue is red, green is something yellow and yellow is light blue. I have tried both the RGB8 and BGR8 encodings, but neither of them makes any difference. A sample of my code is shown below, inspired by the example in the cv_bridge tutorial:

#include <ros/ros.h>
#include <cv_bridge/cv_bridge.h>
#include <sensor_msgs/image_encodings.h>
#include <opencv2/highgui/highgui.hpp>

namespace enc = sensor_msgs::image_encodings;
using namespace cv;
using namespace cv_bridge;

void processRGB::processImage( const sensor_msgs::Image::ConstPtr& img ) {
    CvImagePtr cv_ptr;

    try {
        // copy the ROS image into an OpenCV image
        cv_ptr = toCvCopy(img, enc::BGR8); // enc::RGB8 also used
    } catch (cv_bridge::Exception& e) {
        ROS_ERROR("cv_bridge exception: %s", e.what());
        return;
    }

    imshow("Kinect RGB image", cv_ptr->image);
    waitKey(3);
}
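
For completeness, a purely illustrative sketch of forcing a channel swap before displaying, assuming the buffer really is RGB-ordered (it needs <opencv2/imgproc/imgproc.hpp> on top of the headers above; a workaround sketch, not an explanation of the cause):

// Illustration only: return a BGR copy of an image assumed to be RGB-ordered.
cv::Mat toBgr(const cv::Mat& rgb)
{
    cv::Mat bgr;
    cv::cvtColor(rgb, bgr, CV_RGB2BGR);
    return bgr;
}

// e.g. inside the callback above:
//   imshow("Kinect RGB image", toBgr(cv_ptr->image));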

Does anyone know what might cause this problem? I am using the newest version of Diamondback.

Regards

Sebastian Aslund

2014-04-20 06:47:16 -0500 marked best answer roscd pointing to wrong folder

Hey everyone

Earlier today I tried to install the OpenNI driver according to the tutorial on the ROS homepage (link), but the result turned my ROS installation into a total mess. The driver could not be installed because some parts of OpenCV were missing, but I was unable to install those since roscd suddenly pointed to the ros/unstable folder instead of ros/cturtle. I tried to remove ROS and install it again, hoping that would help, but it seems the path is defined in some bash file, and I have no clue where it is or how to change it back so that my ROS works again. Furthermore, during the installation of the new ROS I got a few messages like "sh: getcwd() failed: No such file or directory"; is that also caused by this failure in roscd or by some other problem? I never saw this message when installing ROS from the repository. Last but not least, is it possible to install the OpenNI driver without creating the unstable folder? It seems to create more harm than good.

Thanks for your help

Sebastian Aslund

2014-01-28 17:27:04 -0500 marked best answer How does the local planner determine when the goal is reached?

Hey everyone

I have a perhaps simple question: how does the local planner described in the navigation tutorial determine when the goal is reached? The question arises from my testing of the local planner. I have 5 goals/offices in a simulated environment with odometry noise. move_base tells me whether a goal has been reached successfully, but I also want the pose, so after each goal has been reached I publish the pose estimated by AMCL. I can see it fits the xy goal tolerance very well, but sometimes it is off by 1-3 cm, which is not much, but I still find it very strange: move_base should not report a goal as succeeded if the pose is 1-3 cm outside the defined tolerance. This has made me question how the local planner actually determines that a goal is reached, but I cannot find any detailed documentation on it and I hope someone here can help.
It might sound like nitpicking, but for my master's thesis I cannot ignore such an obvious discrepancy. My setup of the local planner is shown below:

  controller_frequency: 5.0

  TrajectoryPlannerROS:
    max_vel_x: 0.5
    min_vel_x: 0.0
    max_vel_theta: 1.0
    min_vel_theta: 0.0
    max_rotational_vel: 1.0
    min_in_place_rotational_vel: 0.75
    escape_vel: -0.20
    acc_lim_th: 1.5708
    acc_lim_x: 0.5
    acc_lim_y: 0.5
    holonomic_robot: false
    yaw_goal_tolerance: 0.7854
    xy_goal_tolerance: 0.10
    dwa: true
    simple_attractor: false
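
For reference, a minimal sketch of the kind of tolerance check I have in mind; the helper below is hypothetical and made up purely for illustration, not the actual base_local_planner code:

#include <cmath>

// Hypothetical helper, for illustration only: a goal counts as reached when the
// position error is within xy_goal_tolerance and the heading error is within
// yaw_goal_tolerance.
bool goalReached(double robot_x, double robot_y, double robot_yaw,
                 double goal_x, double goal_y, double goal_yaw,
                 double xy_goal_tolerance, double yaw_goal_tolerance)
{
    const double dx = goal_x - robot_x;
    const double dy = goal_y - robot_y;
    const double dist = std::sqrt(dx * dx + dy * dy);

    double yaw_err = std::fabs(goal_yaw - robot_yaw);
    if (yaw_err > M_PI)                 // wrap the angular error into [0, pi]
        yaw_err = 2.0 * M_PI - yaw_err;

    return dist <= xy_goal_tolerance && yaw_err <= yaw_goal_tolerance;
}
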
2014-01-28 17:26:55 -0500 marked best answer Problems with TF using the navigation stack

Hey everyone

To better understand my question I will provide some background information. I am running a mobile robot in Stage (simulation). The "problem" with this approach is that my odometry information is ideal, and I want some error introduced so I can evaluate the navigation stack. I have therefore created a node which broadcasts a frame called "odom_error", representing the erroneous odometry.
Basically the "odom_error" frame only contains the deviation and has "odom", the ideal odometry, as its parent:

broadcaster.sendTransform(
    tf::StampedTransform(
        tf::Transform(
            tf::createQuaternionFromRPY(0.0, 0.0, phi),
            tf::Vector3(x, y, 0.0)
        ),
        ros::Time::now(), "odom", "odom_error"
    )
);

Here x, y and phi are my deviation.

There is thereby a link between the frames "odom_error" and "odom", but somehow the navigation stack is unable to detect this, as shown in the image below. I tell the AMCL node to use odom_error as the source of odometry information, but when doing so the costmaps are unable to detect the connection between /map and /base_link. I get the following error:

    Waiting on transform from /base_link to /map to become available before running costmap, tf error: Could not find a connection between '/map' and '/base_link' because they are not part of the same tree. Tf has two or more unconnected trees

But when I execute "rosrun tf tf_echo /odom /odom_error" I am able to see the connection. Does anyone have any idea how to fix this problem? It is pretty critical as my deadline is quite close. If there is something that does not make sense, just ask and I will try to provide more information.

[image: comparison]
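
For illustration, a minimal sketch of checking whether the /map to /base_link chain can be resolved at all, using the standard tf listener API (frame names taken from the error above):

#include <ros/ros.h>
#include <tf/transform_listener.h>

// Illustration only: try to resolve the /map -> /base_link transform and print
// the tf error if it cannot be found.
int main(int argc, char** argv)
{
    ros::init(argc, argv, "check_tf_chain");
    tf::TransformListener listener;
    tf::StampedTransform transform;
    try {
        listener.waitForTransform("/map", "/base_link", ros::Time(0), ros::Duration(3.0));
        listener.lookupTransform("/map", "/base_link", ros::Time(0), transform);
        ROS_INFO("Found a transform from /map to /base_link");
    } catch (tf::TransformException& ex) {
        ROS_ERROR("%s", ex.what());
    }
    return 0;
}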

2014-01-28 17:26:37 -0500 marked best answer Improve gmapping results

Hey everyone

I am working with the ROS navigation stack using a simulated environment in Stage. Initially I would have created a topic criticizing the precision of the AMCL localization, but after digging deeper I found that the error comes from the map I created with GMapping.
The following image shows the comparison between the ideal map (blue) and the map produced by GMapping (red). Interestingly enough, the data fed into the GMapping process is all ideal; there is no error in either the odometry or the laser scans.
Besides setting the number of particles to 150, I use the standard settings when running GMapping. I have also tried lowering the translational and angular update thresholds, but it did not change a lot.

[image: comparison of the ideal map (blue) and the GMapping map (red)]

I have also tried to visualize, quite primitively, how I travelled through the area. The black dot is the start and the numbers show the order in which the different areas were explored.

[image: exploration order]

Does anyone have any idea how to improve the result of the GMapping process? So far it does not seem that impressive.

Regards
Sebastian Aslund

P.S. My laser scanner is modelled as a SICK LMS200 with a range of 8 m.
P.P.S. It might be worth showing the effect of a higher update frequency. The red figure shows the map with the standard update intervals; in the blue map the linear update is set to 0.2 and the angular update to 0.1.

[image: comparison of standard (red) and more frequent (blue) update intervals]

2014-01-28 17:26:13 -0500 marked best answer Deprecated Kinect driver superior to the new one

Hey everyone

I have been working on a project for a couple of months now which includes cup detection using the Kinect camera. After the release of Fuerte I upgraded my machines accordingly, but apparently I forgot the machine running my Kinect camera.
I have now upgraded that machine too, but I noticed strange deviations in my results. I made a quick comparison and it looks like the deprecated driver gives much better results than the new one introduced in Fuerte.
[image: deprecated driver] [image: new driver]

The first image is taken with the deprecated driver and the second with the new one. The scaling is a bit different, but that is purely due to zoom; the position of the camera is fixed and the cups have not been moved.
When I run my algorithm to extract the cups I get the following results:

[image: deprecated driver result] [image: new driver result]

As can clearly be seen, there seems to be a failure in the new driver when it comes to registering the RGB image with the point cloud; it looks like there is a shift in the image. I have not dug into the problem and I might have missed something totally obvious, but for now I leave my post as it is until someone can enlighten me about this strange phenomenon or I figure out why myself. At least I find the problem quite relevant.

Regards
Sebastian Aslund

2014-01-28 17:25:17 -0500 marked best answer Linking problems between two packages

Hey everyone

I have a very strange problem. I am working on creating an interface to a Katana arm by Neuronics. One has already been made here: http://www.ros.org/wiki/katana, but it is way too bloated and user-specific for my purpose. So what I did was extract the core package for the Katana arm. This is really just a ROS wrapper around the original interface API created by Neuronics: the wrapper compiles the API and sets up export paths in the manifest file:

  <export>
    <cpp cflags="-I${prefix}/KNI_4.3.0/include" lflags="-Wl,-rpath,${prefix}/KNI_4.3.0/lib/linux -L${prefix}/KNI_4.3.0/lib/linux
     -lKNIBase -lKinematics -lKNI_InvKin -lKNI_LM"/>
  </export>

Afterwards I created my own package which depends on the Katana API wrapper package in its manifest file. This gave me seamless access to the API's include files, and everything compiled nicely until today. No matter what I try, linking my interface now returns the following error:

/home/aslund/Dropbox/MasterProject/kni/KNI_4.3.0/include/KNI_LM/lmBase.h:73: undefined reference to `CikBase::~CikBase()'
/home/aslund/Dropbox/MasterProject/kni/KNI_4.3.0/include/KNI_LM/lmBase.h:73: undefined reference to `CikBase::~CikBase()'
/home/aslund/Dropbox/MasterProject/kni/KNI_4.3.0/include/KNI_LM/lmBase.h:73: undefined reference to `CikBase::~CikBase()'

I have tried so many things, but what worked a few days ago now seems to lead nowhere. I have tried building everything from scratch, but with the same result. Another point to make is that I am running ROS Fuerte with the newest updates. Could these updates be causing the error, or is there a completely different reason why linking the wrapper into my interface fails?

Regards

Sebastian Aslund

2014-01-28 17:22:26 -0500 marked best answer Love Arch Linux, but won't behave with ROS

Hey everyone

I recently moved to Arch Linux after getting tired of Ubuntu's rigid structure, and I absolutely love it, but I have spent the last couple of days installing ROS. This has not been a pleasant journey and I am currently stuck on a problem I have been unable to figure out. I get the following three-line error message:

    Bootstrapping ROS build
    Detected ros_comm bootstrapping it too.
    Rospack failed to build

I hope someone has an idea how to solve it.

Regards

Sebastian Aslund

2014-01-28 17:22:20 -0500 marked best answer Calibrating Kinect with surroundings

Hey everyone

I am having a serious problem with my Kinect project, but since many people are working with the Kinect camera, I hope someone has solved this or at least has an idea how to solve it.

My Kinect camera is able to find a suitable grasping point, but the point is expressed in the optical frame of the Kinect camera. In order to use it with a robot arm, I need to know how the base of the robot arm is located relative to the camera.

I have tried using markers on the robot's end-effector combined with knowledge of the forward kinematics, but apparently there are serious errors with the markers (ArToolKitPlus), which means the distance on the z-axis is several centimetres off. I am not entirely sure why, but in any case I am looking for an alternative method to calibrate the Kinect camera with respect to the environment, and I hope someone has some ideas.

Regards

2014-01-28 17:22:16 -0500 marked best answer OpenNI camera parameters gives faulty marker detection

Hey everyone

Following my problem with getting the camera parameters from the OpenNI driver, which is now solved, I have set up ArToolKitPlus to detect markers using the RGB camera with the parameters from "rgb/camera_info". I have previously used ArToolKitPlus in another project where it worked fine, but this time I am seeing problems with the distance along the z-axis. I have used rviz to compare the marker frame with the RGB point cloud; the marker frame has "openni_rgb_optical_frame" as its parent. Screenshots of this can be seen here and here. The square marker might be a bit hard to see, but it looks like the detected marker fits regarding the x and y position, while the depth is totally off. The question is why the depth fails when using ArToolKitPlus. I have also tried a manual calibration of the camera and got more or less the same parameters, so the problem must be something internal to the Kinect camera, maybe something originating from the depth sensing?

I hope someone has an idea how to solve this problem, as marker detection is crucial for two of my projects.

Regards

Sebastian Aslund

2014-01-28 17:21:54 -0500 marked best answer Kinect and PCL

Hey everyone

I am currently working on a project where I have a pile of cloth and a Kinect camera mounted above it, looking down. The aim is to find a grasping point using the Kinect camera so a robot arm can grasp a piece of cloth. My initial idea was to use the depth information to find the deviation between different pixels; an area with high deviation would then correspond to a piece of cloth sticking out. A good example is this image, where the folds give a high deviation and thus a good grasping point. One thing I have noticed during my research is the widespread use of point clouds and the PCL library with the Kinect camera, and I therefore wonder about using point clouds to find the deviation. My problem is that I cannot find a good in-depth tutorial on how to use the PCL library, and I wonder if someone has some good links or an explanation of how to process point clouds with PCL.
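
To make the question more concrete, here is a rough sketch of the kind of computation I mean, written against the basic PCL point types; the function and the use of the z coordinate as "depth" are my own simplifications, not something taken from a tutorial:

#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <algorithm>
#include <cmath>

// Rough sketch: standard deviation of the depth (z) values in a cloud, as a
// crude measure of how much the surface deviates / sticks out.
double depthStdDev(const pcl::PointCloud<pcl::PointXYZ>& cloud)
{
    double sum = 0.0, sum_sq = 0.0;
    size_t n = 0;
    for (size_t i = 0; i < cloud.points.size(); ++i)
    {
        const float z = cloud.points[i].z;
        if (z == z)  // skips NaN points, which Kinect clouds contain
        {
            sum += z;
            sum_sq += static_cast<double>(z) * z;
            ++n;
        }
    }
    if (n == 0)
        return 0.0;
    const double mean = sum / n;
    return std::sqrt(std::max(0.0, sum_sq / n - mean * mean));
}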

2014-01-28 17:21:53 -0500 marked best answer Getting all parameter names using C++

I am developing an application where I need to get all the parameter names from the parameter server (without knowing the name of each parameter in advance). I have only seen this done in Python using "getParamNames(caller_id)"; is something similar possible in C++?
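
For illustration, a minimal sketch using ros::param::getParamNames, which newer roscpp releases provide; whether it is available in the ROS version in question is an assumption:

#include <ros/ros.h>
#include <string>
#include <vector>

// Sketch only: ros::param::getParamNames fills a vector with every key on the
// parameter server. Availability in older roscpp releases is not certain.
int main(int argc, char** argv)
{
    ros::init(argc, argv, "list_params");
    ros::NodeHandle nh;  // connects the node to the master

    std::vector<std::string> names;
    if (ros::param::getParamNames(names))
    {
        for (size_t i = 0; i < names.size(); ++i)
            ROS_INFO("%s", names[i].c_str());
    }
    return 0;
}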

2014-01-28 17:21:44 -0500 marked best answer Getting the calibration parameters of the Kinect camera

Hey everyone

I am working with the Kinect camera and plan to include some marker detection using ArToolKitPlus, but that requires me to provide a calibration file, and my question is how to get this information from the Kinect camera. Inside the openni_camera folder, under a subfolder named info, there are some calibration files, but I am not sure whether the information stored in these files is usable. ArToolKitPlus uses these parameters.
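
For reference, this is roughly how the intrinsics published by the driver can be read from the camera_info topic; the topic name /camera/rgb/camera_info is an assumption and may differ between driver versions:

#include <ros/ros.h>
#include <sensor_msgs/CameraInfo.h>

// Illustration only: print the intrinsics from the driver's camera_info topic.
// K is the 3x3 intrinsic matrix in row-major order: [fx 0 cx; 0 fy cy; 0 0 1].
void infoCallback(const sensor_msgs::CameraInfo::ConstPtr& info)
{
    ROS_INFO("fx=%f fy=%f cx=%f cy=%f",
             info->K[0], info->K[4], info->K[2], info->K[5]);
}

int main(int argc, char** argv)
{
    ros::init(argc, argv, "kinect_camera_info");
    ros::NodeHandle nh;
    // topic name is an assumption; check with "rostopic list"
    ros::Subscriber sub = nh.subscribe("/camera/rgb/camera_info", 1, infoCallback);
    ros::spin();
    return 0;
}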

Regards

Sebastian