Sending Kinect Visual Studio output data to ROS in order to run on a Baxter robot
Hi all,
ROS beginner here - I have recently been trying to implement people detection on a Baxter robot. The robot already works quite well in terms of moving, displaying images, and speaking, thanks to the high number of pure ROS packages it is running.
My goal is to give it the ability to detect people - skeleton tracking is one solution. Microsoft, as the developer of the Kinect, provides a fairly well-made Visual Studio sample for the Xbox One Kinect (2.0) called BodyBasics (available as part of the Kinect 2.0 SDK, which needs to be installed first).
What BodyBasics does is open a black window - while the Kinect is connected to my laptop, any person standing in front of the Kinect shows up in the window as a "skeleton" of colored limbs (or sticks), up to a total of three people. Whenever they move, the window updates with the movement of the skeletons, much like what is shown here: Kinect SDK skeleton tracker
Unfortunately, no ROS package seems to replicate this kind of functionality, given that OpenNI does not appear to have an official driver for the Kinect 2.0.
I want to stream the data from that window in real time - the application runs in Visual Studio on a Windows laptop - into the overall ROS setup that the Baxter robot operates from.
Would I be able to do this with two laptops (one running Windows and one running Ubuntu) with the aid of rosserial, or should I set up some kind of ROS network over which I could send the Kinect data in real time?
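To make the idea concrete, here is a rough sketch of the kind of bridge I am imagining: the Windows side serializes each tracked skeleton's joint positions to JSON and sends them over UDP, and the Ubuntu side decodes each datagram before republishing it as a ROS message. The joint names, coordinates, and port number below are just placeholders I made up, and the actual ROS publishing step (e.g. with rospy) is only indicated in a comment, since I have not tried it yet.

```python
import json
import socket

UDP_PORT = 9000  # arbitrary placeholder port

def encode_skeleton(body_id, joints):
    """Pack one tracked body's joints (name -> [x, y, z] in meters)
    into a JSON payload small enough for a single UDP datagram."""
    return json.dumps({"body": body_id, "joints": joints}).encode("utf-8")

def send_skeleton(sock, packet, host="127.0.0.1"):
    """Windows side: fire one datagram at the Ubuntu machine."""
    sock.sendto(packet, (host, UDP_PORT))

def receive_skeleton(sock):
    """Ubuntu side: block until one datagram arrives and decode it.
    In a real node, the returned dict would then be republished,
    e.g. as a geometry_msgs message via rospy (not shown here)."""
    data, _addr = sock.recvfrom(4096)
    return json.loads(data.decode("utf-8"))

# Quick loopback demonstration on one machine:
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", UDP_PORT))
send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

joints = {"head": [0.0, 0.9, 2.1], "hand_left": [-0.3, 0.4, 2.0]}  # made-up values
send_skeleton(send_sock, encode_skeleton(0, joints))
msg = receive_skeleton(recv_sock)
print(msg["joints"]["head"])  # -> [0.0, 0.9, 2.1]
```

Does something along these lines sound reasonable, or is there a more standard ROS mechanism (rosserial, rosbridge, a multi-machine ROS network) that people would recommend instead?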
Thank you for reading - I am quite new to ROS, so I obviously have not explored all of its capabilities, such as serial or network communication. (All the tutorials I have followed so far have involved virtual robots, such as the TurtleBot in Gazebo.)