2021-10-16 18:55:20 -0500 | received badge | ● Good Question
2021-01-26 10:38:42 -0500 | received badge | ● Nice Question
2015-10-29 06:27:12 -0500 | received badge | ● Famous Question
2015-10-29 06:27:12 -0500 | received badge | ● Notable Question
2015-05-18 07:45:26 -0500 | commented answer | openni2_tracker and rgb data |
2015-05-18 03:45:30 -0500 | commented answer | openni2_tracker and rgb data @Chaos This might be of general interest. Do you have it anywhere in a public repository? |
2015-05-08 10:07:59 -0500 | commented answer | openni2_tracker and rgb data @Chaos Glad it worked :). Out of curiosity, do you save the images in raw format or in some other format such as jpg/png? |
2015-05-08 06:02:33 -0500 | received badge | ● Enthusiast
2015-05-07 04:14:25 -0500 | received badge | ● Necromancer
2015-05-06 09:41:28 -0500 | commented answer | openni2_tracker and rgb data @Chaos Although it is not ideal to post code that has not been cleaned up, I updated my answer with my complete file, which runs fine, for me at least :). Feel free to use it. Make sure you read the bold text, as you need to change a couple of variables. |
2015-05-06 09:41:28 -0500 | received badge | ● Commentator
2015-05-06 07:15:34 -0500 | commented answer | openni2_tracker and rgb data Hi @Chaos Please use @ tags, otherwise it was a fluke that I saw this comment. Does the "original/unmodified" openni2_tracker run fine? roslaunch openni2_tracker tracker.launch Bear in mind that the above is a snippet rather than the complete file. If you need the complete file, I can post it. |
2015-05-06 04:14:46 -0500 | commented answer | object pointer as ROS message @ahendrix @jarvisschultz Thanks. Yes, that makes sense, as I guess there would be nasty security loopholes otherwise. Unfortunately, it is not a memory issue but rather an attempt to get openni2_launch and openni2_tracker to work together; only one of them can open the Xtion, leaving the other unable to get the device descriptor. |
2015-05-06 04:11:55 -0500 | received badge | ● Popular Question
2015-05-05 11:59:35 -0500 | asked a question | object pointer as ROS message Is it possible to pass an object pointer as a ROS message? Below is an MWE that doesn't even compile, with the error message: talker.cpp:20:12: error: invalid conversion from ‘Foo*’ to ‘std_msgs::UInt64_<std::allocator<void> >::_data_type {aka long unsigned int}’ [-fpermissive]
msg.data = &foo;
^
MWE: #include <iostream>
#include "ros/ros.h"
#include "std_msgs/UInt64.h"
class Foo
{
public:
int count;
};
int main(int argc, char **argv)
{
ros::init(argc, argv, "talker");
ros::NodeHandle n;
ros::Publisher chatter_pub = n.advertise<std_msgs::UInt64>("chatter", 1000);
ros::Rate loop_rate(10);
Foo foo;
foo.count = 0;
std_msgs::UInt64 msg;
msg.data = &foo;
while (ros::ok())
{
chatter_pub.publish(msg);
ros::spinOnce();
loop_rate.sleep();
++foo.count;
}
return 0;
}
This might be a silly question given that the pointer is an object-type pointer, but all I need to do is pass the integer memory address to another node (and hopefully create an object pointer of the same type that points to the same address). Possibly a dangerous thing to do, but it will suffice. Any help appreciated. |
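Assigning a pointer directly to a UInt64 field fails because the message expects a plain integer; the numeric value of an address can be sent, but it is only meaningful inside the process that owns it. A minimal sketch of the in-process round trip in Python (CPython-specific; not the author's code, and `Foo` here is a stand-in class):

```python
import ctypes

class Foo(object):
    def __init__(self):
        self.count = 0

foo = Foo()
foo.count = 7

# In CPython, id() returns the object's memory address: this integer is
# the kind of value that could be stuffed into a UInt64 message field.
addr = id(foo)

# Within the SAME process, the address can be turned back into a reference.
same = ctypes.cast(addr, ctypes.py_object).value
print(same is foo)    # the recovered reference is the original object
print(same.count)
```

In C++ the analogous cast would be `msg.data = reinterpret_cast<uint64_t>(&foo);`, which compiles, but a receiving node runs in a different virtual address space and cannot dereference the number, so this approach cannot make two nodes share one device handle.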
2015-05-05 09:20:50 -0500 | commented question | Sharing rgbd device between ros and openni2 applications Same issue here. The question is whether, and how, you can pass this pointer as a ROS message. |
2015-05-05 09:18:06 -0500 | answered a question | openni2_tracker and rgb data Using openni::VideoStream. Find below some relevant example source code (with header and footer so you can see where it goes in the main of tracker.cpp). I am also copying the complete file, but it is a mess; read the bold text, as you need to change two variables. Relevant snippet: // Get Tracker Parameters
if(!pnh.getParam("tf_prefix", tf_prefix)){
ROS_ERROR("tf_prefix not found on Param Server! Check your launch file!");
return -1;
}
if(!pnh.getParam("relative_frame", relative_frame)){
ROS_ERROR("relative_frame not found on Param Server! Check your launch file!");
return -1;
}
// Initialize OpenNI
if( openni::OpenNI::initialize() != openni::STATUS_OK )
{
ROS_ERROR("openni initial error: \n");
//cerr << "OpenNI Initial Error: " << OpenNI::getExtendedError() << endl;
return -1;
}
// Open Device
openni::Device devDevice;
if( devDevice.open( openni::ANY_DEVICE ) != openni::STATUS_OK )
{
ROS_ERROR("Can't Open Device: \n");
return -1;
}
ROS_INFO("device opened\n");
// NITE Stuff
nite::UserTracker userTracker;
nite::Status niteRc;
nite::NiTE::initialize();
niteRc = userTracker.create(&devDevice);
if (niteRc != nite::STATUS_OK){
printf("Couldn't create user tracker\n");
return 3;
}
// Create color stream
openni::VideoStream vsColorStream;
if( vsColorStream.create( devDevice, openni::SENSOR_COLOR ) == openni::STATUS_OK )
{
// set video mode
openni::VideoMode mMode;
//mMode.setResolution( 640, 480 );
mMode.setResolution( 640, 480 );
mMode.setFps( 30 );
mMode.setPixelFormat( openni::PIXEL_FORMAT_RGB888 );
if( vsColorStream.setVideoMode( mMode) != openni::STATUS_OK )
{
ROS_INFO("Can't apply videomode\n");
//cout << "Can't apply VideoMode: " << OpenNI::getExtendedError() << endl;
}
// image registration
if( devDevice.isImageRegistrationModeSupported( openni::IMAGE_REGISTRATION_DEPTH_TO_COLOR ) )
{
devDevice.setImageRegistrationMode( openni::IMAGE_REGISTRATION_DEPTH_TO_COLOR );
}
vsColorStream.setMirroringEnabled(false);
}
else
{
ROS_ERROR("Can't create color stream on device: ");// << OpenNI::getExtendedError() << endl;
//cerr << "Can't create color stream on device: " << OpenNI::getExtendedError() << endl;
return -1;
}
// Loop
printf("\nStart moving around to get detected...\n(PSI pose may be required for skeleton calibration, depending on the configuration)\n");
nite::UserTrackerFrameRef userTrackerFrame;
// create OpenCV Window
cv::namedWindow( "User Image", CV_WINDOW_AUTOSIZE );
// start
vsColorStream.start();
// while (!wasKeyboardHit()){
ros::Rate rate(100.0);
while (nh.ok()){
niteRc = userTracker.readFrame(&userTrackerFrame);
if (niteRc != nite::STATUS_OK){
printf("Get next frame failed\n");
continue;
}
// get depth frame
openni::VideoFrameRef depthFrame;
depthFrame = userTrackerFrame.getDepthFrame();
// get color frame
openni::VideoFrameRef vfColorFrame;
cv::Mat mImageBGR;
if( vsColorStream.readFrame( &vfColorFrame ) == openni::STATUS_OK )
{
// convert data to OpenCV format
const cv::Mat mImageRGB( vfColorFrame.getHeight(), vfColorFrame.getWidth(), CV_8UC3, const_cast<void*>( vfColorFrame.getData() ) );
// convert from RGB to BGR
cv::cvtColor( mImageRGB, mImageBGR, CV_RGB2BGR );
vfColorFrame.release();
}
const nite::Array<nite::UserData>& users = userTrackerFrame.getUsers();
for (int i = 0; i < users.getSize(); ++i) {
The complete file is fairly messy, but to compare with yours: you need to change the following two variables in main, as I am saving the images and the skeleton:
std::string path("/home/yiannis/datasets/recorded/scrap/"); std::string filename_skeleton("/home/yiannis/datasets/recorded/scrap/skeleton.csv"); Full file /*********************************************************************
* Software License Agreement (BSD License)
*
* Copyright (c) 2013, Johns Hopkins University
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
*
* * Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* * Redistributions in binary form must reproduce the above
* copyright notice ... (more) |
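The color-frame handling in the snippet above wraps the raw RGB888 buffer in a `cv::Mat` and calls `cv::cvtColor(..., CV_RGB2BGR)`. Independent of OpenNI, that channel swap amounts to reversing the last axis of the pixel array; a NumPy illustration (toy dimensions, not the answer's code):

```python
import numpy as np

height, width = 4, 6  # stand-ins for vfColorFrame.getHeight()/getWidth()

# A raw RGB888 buffer as delivered by the color stream: 3 bytes per pixel.
raw = np.arange(height * width * 3, dtype=np.uint8)

# Equivalent of wrapping the buffer in a CV_8UC3 cv::Mat ...
rgb = raw.reshape(height, width, 3)

# ... and of cv::cvtColor(mImageRGB, mImageBGR, CV_RGB2BGR):
# swapping the channel order reverses the last axis.
bgr = rgb[..., ::-1]

print(rgb[0, 0])  # first pixel, RGB order
print(bgr[0, 0])  # same pixel, BGR order
```

Saving the BGR array with OpenCV's `imwrite` then yields correctly colored png/jpg files, since OpenCV assumes BGR order on disk I/O.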
2014-08-05 05:34:23 -0500 | received badge | ● Teacher
2014-05-19 02:05:37 -0500 | received badge | ● Nice Question
2014-01-28 17:25:48 -0500 | marked best answer | rospy function scheduler Hello, Is there a rospy scheduler for calling a function after a period of time (or periodically calling it every n secs)? The Python sched module is misbehaving; I guess it conflicts with ROS, based on the following example using the PS joystick with the PR2: #!/usr/bin/env python
import roslib; roslib.load_manifest('d3')
import rospy
import sched, time
from sensor_msgs.msg import Joy
class Joystick(object):
def __init__(self, topic_name='/joy'):
self.topic_name = topic_name
self.buttons_names = {'l1':10, 'l2':8}
self.sub = rospy.Subscriber(topic_name, Joy, self.cb)
self.blocked = False
self.sched = sched.scheduler(time.time, time.sleep)
print '>>> press L1 button'
def cb(self, data):
print '>> in callback:', data.buttons[self.buttons_names['l1']], data.buttons[self.buttons_names['l2']]
if not self.blocked:
self.foo(data)
def foo(self, data):
print '>> in foo:', data.buttons[self.buttons_names['l1']], data.buttons[self.buttons_names['l2']]
if data.buttons[self.buttons_names['l1']]:
self.blocked = True
rospy.loginfo('%d', data.buttons[self.buttons_names['l1']])
rospy.loginfo('L1 pressed')
self.sched.enter(1, 1, self.bar, ())
self.sched.run()
def bar(self):
rospy.loginfo('resuming control')
self.blocked = False
if __name__=='__main__':
rospy.init_node('foo', anonymous=False)
joy = Joystick()
rospy.spin()
Thanks! |
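The likely culprit in the example above is that `self.sched.run()` blocks inside the subscriber callback until the scheduled event fires, so no further Joy messages are handled in the meantime. A plain-Python sketch of the non-blocking alternative using `threading.Timer` (`rospy.Timer` with `oneshot=True` would be the ROS-native equivalent; the class and the 0.1 s delay here are illustrative):

```python
import threading
import time

class Joystick(object):
    def __init__(self):
        self.blocked = False

    def foo(self):
        # Block further handling, then schedule bar() to run 0.1 s later
        # WITHOUT blocking the calling (callback) thread.
        self.blocked = True
        threading.Timer(0.1, self.bar).start()

    def bar(self):
        self.blocked = False

joy = Joystick()
joy.foo()
print(joy.blocked)   # True: foo() returned immediately, bar() not yet fired
time.sleep(0.3)
print(joy.blocked)   # False: the timer fired on its own thread
```

Because the timer runs on a separate thread, the subscriber callback returns at once and keeps receiving messages while the unblock is pending.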
2014-01-28 17:23:25 -0500 | marked best answer | can't find linkage dependency Hi, I am trying to compile my package but I am getting the following error: /usr/bin/ld: cannot find -lgripper_click_ui
collect2: ld returned 1 exit status
In my manifest I have the following two dependencies, which possibly give me a hard time, but they should be pulling in the missing library: <depend package="pr2_gripper_click"/>
<depend package="pr2_pick_and_place_demos"/>
This error occurs when I run make or rosmake. I am using diamondback. Any clues? Many thanks. |
2014-01-28 17:22:06 -0500 | marked best answer | sensor_msgs::Image to sensor_msgs::ImagePtr Hi, it might be simple, so apologies for this post... but I can't find the solution. I would like to get the sensor_msgs::Image::Ptr of a sensor_msgs::Image. Basically, I would like to convert a sensor_msgs::Image to a cv IplImage using: sensor_msgs::CvBridge bridge_;
bridge_.imgMsgToCv
which actually takes a sensor_msgs::Image::ConstPtr,
as found in CvBridge.h (diamondback) IplImage* imgMsgToCv(sensor_msgs::Image::ConstPtr rosimg, std::string desired_encoding = "passthrough")
and hence my failure. Basically, I would like to do the following: sensor_msgs::CvBridge bridge_;
sensor_msgs::Image image_msg = <to a sensor_msgs::Image>;
sensor_msgs::Image::Ptr rosimg;
missing statement??
bridge_.imgMsgToCv(rosimg, "bgr8");
Many thanks. |
2014-01-28 17:22:06 -0500 | marked best answer | publish an array of images Hi, I am trying to publish an array of images. Looking at the ROS wiki, I couldn't find anything relevant... I am attaching my code below... The publishing of the array should be in the "try" clause. Any help or hints would be appreciated. Thanks. #include "boost/filesystem.hpp"
#include "sensor_msgs/Image.h"
#include "image_transport/image_transport.h"
#include "cv_bridge/CvBridge.h"
#include "std_msgs/String.h"
#include <iostream>
#include <string>
#include <vector>
#include <opencv/cv.h>
#include <opencv/highgui.h>
#include <ros/ros.h>
#include <signal.h>
using namespace boost::filesystem;
using namespace std;
void ctrlc(int s)
{
cout << "ctrl-c pressed" << endl;
ros::shutdown();
signal(SIGINT, SIG_DFL);
}
int main(int argc, char *argv[])
{
ros::init(argc, argv, "Static_Image_Publisher");
ros::NodeHandle n;
image_transport::ImageTransport it_(n);
sensor_msgs::CvBridge bridge_;
image_transport::Publisher image_pub_;
image_pub_ = it_.advertise("/camera_sim/image_raw", 1);
int cvDelay = (argc == 2 ? atoi(argv[1]) : 500);
string p(argc == 3 ? argv[2] : ".");
vector<string> filenames;
IplImage *img = NULL;
vector<IplImage*> imgs;
if (is_directory(p))
for (directory_iterator itr(p); itr!=directory_iterator(); ++itr)
if (is_regular_file(itr->status())) {
string str(itr->path().file_string());
filenames.push_back(str);
}
sort(filenames.begin(), filenames.end());
signal(SIGINT, ctrlc);
for(;;) {
imgs.clear();
for (vector<string>::iterator itr = filenames.begin(); itr != filenames.end(); ++itr) {
img = cvLoadImage(itr->c_str());
imgs.push_back(img);
}
try {
cout << ".\n";
// publishes one image at a time, how would I publish the array of images
//image_pub_.publish(bridge_.cvToImgMsg(imgs[0], "bgr8"));
} catch (sensor_msgs::CvBridgeException error) {
ROS_ERROR("error");
}
}
ros::spin();
return 0;
}
|
2013-10-03 04:43:35 -0500 | marked best answer | is there a python equivalent of fromROSMsg/toROSMsg (pcl stack) Hi, is there a python equivalent of fromROSMsg/toROSMsg (pcl stack)? Thanks. |
2013-03-06 02:33:40 -0500 | received badge | ● Taxonomist
2013-02-04 22:23:14 -0500 | marked best answer | narf descriptor Hi, I was looking at the source code of the NARF descriptor, and I modified the tutorial example (narf_feature_extraction) to continuously calculate the NARF descriptors from a point cloud coming from a Kinect camera. I have noticed a couple of things: Sometimes there are more descriptors than keypoints... I see that the number of keypoints is taken from keypoint_indices2 and the descriptors are held in narf_descriptors. Is it correct that more descriptors than keypoints are computed? I also noticed the descriptor values for a point cloud of a mug, for example, are roughly between -0.3 and 0.3, while for other objects or the complete point cloud the values are between -0.5 and 0.5... If someone has experience and knows the expected range of values, that would be great... I had a look at the paper (Steder et al., 2011) but couldn't find anything specific about it... Many Thanks,
Yianni |