
How to translate KnowRob actions into actual robot movements?

asked 2013-10-27 08:20:01 -0600

micpalmia

updated 2013-10-27 08:27:38 -0600

After taking some time to acquaint myself with the KnowRob system, I'm now in the process of writing my own modules using its functionalities. As a simple starting point, I want to write an action recipe telling my robot to move to the middle of the room (x=0.00, y=0.00) and then to another point on the map (x=3.00, y=3.00).

I'm using the move_base module for navigation, and the robot is simulated in an empty room (no semantic map is needed as the space is empty).

CPL is not something I want to look into right now: my time is limited, and I think that system would add a layer of complexity I don't need yet. I know that the KnowRob team has done experiments in the past using action recipes without CPL and the cogito system.

In order to implement a working application, I decided on the one hand to extend the KnowRob ontology with data about action execution, and on the other to write a simple Python module that queries the Prolog system and actually moves the robot.

This is the core of my extension to the ontology:

> <owl:Class rdf:about="&move;GoToPoint">
>     <rdfs:subClassOf rdf:resource="&knowrob;Translation-LocationChange"/>
>     <rdfs:subClassOf>
>         <owl:Class>
>             <owl:intersectionOf rdf:parseType="Collection">
>                 <owl:Restriction> 
>                     <owl:onProperty rdf:resource="&move;providedByMotionPrimitive"/>
>                     <owl:hasValue rdf:resource="&move;move_base" />
>                 </owl:Restriction>
>                 <owl:Restriction> 
>                     <owl:onProperty rdf:resource="&move;destXValue"/>
>                     <owl:cardinality rdf:datatype="&xsd;nonNegativeInteger">1</owl:cardinality>
>                 </owl:Restriction>
>                 <owl:Restriction> 
>                     <owl:onProperty rdf:resource="&move;destYValue"/>
>                     <owl:cardinality rdf:datatype="&xsd;nonNegativeInteger">1</owl:cardinality>
>                 </owl:Restriction>
>             </owl:intersectionOf>
>         </owl:Class>
>     </rdfs:subClassOf>
> </owl:Class>

I added a MotionPrimitive class, of which move_base is a subclass. Each motion primitive provides a providedByROSAction property: a string containing the name of the appropriate ROS actionlib server. My action recipe is thus simply composed as an intersection of various GoToPoint restrictions.

The robot connects to json_prolog and, on being asked to perform the task, asks for

plan_subevents(move:'MovementTest', SEs)

then queries for the appropriate action primitives and, if it is aware of them, queries for the needed parameters and calls the corresponding servers.
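To make the executor side concrete, here is a minimal sketch of that dispatch step, with the ROS and json_prolog parts stripped out so only the mapping logic remains. The namespace, the parameter dictionaries, and the `registry` (mapping `providedByROSAction` strings to client callables, e.g. actionlib wrappers) are all hypothetical stand-ins for what the Prolog queries would return:

```python
# Sketch of the executor's dispatch step. Assumptions: `subevents` mirrors
# the bindings returned by plan_subevents/2, `params` mirrors the per-event
# parameter queries, and `registry` maps providedByROSAction strings to
# callables wrapping actionlib clients.

NS = "http://example.org/move.owl#"  # hypothetical namespace


def strip_ns(iri):
    """Return the local name of an OWL IRI, e.g. '...#GoToPoint' -> 'GoToPoint'."""
    return iri.rsplit("#", 1)[-1]


def dispatch(subevents, params, registry):
    """For each sub-event, look up its action server and invoke the
    registered client with the queried destination parameters."""
    results = []
    for ev in subevents:
        server = params[ev]["providedByROSAction"]
        handler = registry.get(server)
        if handler is None:
            raise KeyError("No client registered for action server %r" % server)
        results.append(handler(params[ev]["destXValue"], params[ev]["destYValue"]))
    return results


if __name__ == "__main__":
    # Fake move_base client that just records the goals it receives.
    calls = []
    registry = {"move_base": lambda x, y: calls.append((x, y)) or "ok"}
    subevents = [NS + "GoToPoint_1", NS + "GoToPoint_2"]
    params = {
        subevents[0]: {"providedByROSAction": "move_base",
                       "destXValue": 0.0, "destYValue": 0.0},
        subevents[1]: {"providedByROSAction": "move_base",
                       "destXValue": 3.0, "destYValue": 3.0},
    }
    print(dispatch(subevents, params, registry))
```

In the real module, the registry entries would wrap `actionlib` clients sending `MoveBaseGoal`s instead of recording tuples.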

I implemented this approach and it works just fine. I know there are a few minor things that should be fixed (e.g. in the ontology, points in space should be represented as properties of a PointInSpace class), but I'm worried about a major issue this implementation brings up: maintaining both the ontology and the robot-side executor might become very difficult as the number of action servers grows. It would obviously also be very error-prone, as developers would have to update both, in two different languages.

Am I proceeding in the right direction with this implementation structure? Am I missing something big? Should I use some other built-in feature I didn't see/notice?


1 Answer


answered 2013-10-29 06:53:39 -0600

moritz

If you don't want to use the existing CPL system (which probably makes sense for the beginning, since learning CPL and KnowRob at the same time can be a lot), then this sounds like a reasonable way to go. You can have a look at the action recipes created by the editor, which have a state-machine-like structure that could e.g. be translated into SMACH. I've done this for a non-ROS project in Java, and it was fairly easy.
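To illustrate the translation target, here is a small plain-Python sequential container in the spirit of a SMACH `StateMachine` (this is not the SMACH API; the class and outcome strings are hypothetical). Each ordered sub-event of a recipe would become one state:

```python
# Minimal sequential "state machine" sketch, standing in for a SMACH-style
# container. Each state's execute() callable returns an outcome string;
# the machine aborts the whole plan on the first non-success.

class SequentialMachine:
    def __init__(self):
        self.states = []

    def add(self, name, execute):
        """Register a state; execute() returns 'succeeded' or 'aborted'."""
        self.states.append((name, execute))

    def run(self):
        for name, execute in self.states:
            if execute() != "succeeded":
                return "aborted"
        return "succeeded"


if __name__ == "__main__":
    sm = SequentialMachine()
    sm.add("GoToCenter", lambda: "succeeded")  # would call move_base
    sm.add("GoToCorner", lambda: "succeeded")
    print(sm.run())
```

In a real ROS setup, each `execute` would wrap a `SimpleActionClient` call, and SMACH's `SimpleActionState` already provides exactly that glue.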

Regarding the number of interfaces that need to be maintained, I would not expect them to be that many. I guess you can get a long way using just move_base, one action for Cartesian arm control and an action for the gripper. You can also define the mapping for super-classes of the ones that you use in the action recipes and inherit these properties.

